This change ensures Wrangler doesn’t try to open http://* when * is used as the dev server’s hostname. Instead, Wrangler will now open http://127.0.0.1.
Also includes a fix for commands with --json where the log file location message would cause stdout to not be valid JSON. That message now goes to stderr.
#4341 d9908743 Thanks @RamIdeas! - Wrangler now writes all logs to a .log file in the .wrangler directory. Set a directory or specific .log filepath to write logs to with WRANGLER_LOG_PATH=../Desktop/my-logs/ or WRANGLER_LOG_PATH=../Desktop/my-logs/my-log-file.log. When specifying a directory or using the default location, a filename with a timestamp is used.
Wrangler now filters workerd stdout/stderr and marks unactionable messages as debug logs. These debug logs are still observable in the debug log file but will no longer show in the terminal by default without the user setting the env var WRANGLER_LOG=debug.
#4347 102e15f9 Thanks @Skye-31! - Feat(unstable_dev): Provide an option for unstable_dev to perform the check that prompts users to update wrangler, defaulting to false. This will prevent unstable_dev from sending a request to npm on startup to determine whether it needs to be updated.
Firing PUTs to the secret api in parallel has never been a great solution - each request independently needs to lock the script, so running in parallel is at best just as bad as running serially.
Luckily, we now have the script settings PATCH API, which can update a script’s settings (including secret bindings) in a single request, so no parallelization is needed. However, this API doesn’t accept a partial list of bindings, so we have to fetch the current bindings and merge the new secrets in before PATCHing. We can omit the value of each existing binding (providing only its name and type), which instructs the config service to inherit the existing value and simplifies the merge. Note that we don’t use the bindings in your current wrangler.toml, as you could be in a draft state, and as a user it makes sense that a bulk secrets update won’t change anything else. Instead, we use the script settings API again to fetch the current state of your bindings.
This simplified implementation means the operation can only fail or succeed, rather than succeeding in updating some secrets but failing for others. In order to not introduce breaking changes for logging output, the language around “${x} secrets were updated” or “${x} secrets failed” is kept, even if it doesn’t make much sense anymore.
This is needed as the existing fetchListResult handler for fetching potentially paginated results doesn’t work for endpoints that don’t implement cursor.
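A rough sketch of the merge-then-PATCH approach described above (the endpoint path and payload shapes are assumptions for illustration, not Wrangler’s actual implementation):
// Rough sketch only; the settings endpoint path and request/response shapes are assumptions.
async function bulkUpdateSecrets(
  accountId: string,
  scriptName: string,
  secrets: Record<string, string>,
  apiToken: string,
): Promise<void> {
  const url = `https://api.cloudflare.com/client/v4/accounts/${accountId}/workers/scripts/${scriptName}/settings`;
  const headers = { Authorization: `Bearer ${apiToken}`, "Content-Type": "application/json" };

  // Fetch the script's current bindings from script settings (not from wrangler.toml).
  const current = (await (await fetch(url, { headers })).json()) as {
    result: { bindings: { name: string; type: string }[] };
  };

  // Keep only name/type for existing bindings so the config service inherits their current values,
  // and drop any binding that is about to be overwritten by the new secrets.
  const inherited = current.result.bindings
    .filter(({ name }) => !(name in secrets))
    .map(({ name, type }) => ({ name, type }));
  const updated = Object.entries(secrets).map(([name, text]) => ({ name, type: "secret_text", text }));

  // A single PATCH replaces the per-secret PUTs, so the operation either fully succeeds or fully fails.
  await fetch(url, { method: "PATCH", headers, body: JSON.stringify({ bindings: [...inherited, ...updated] }) });
}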
Previously, wrangler would error with a message like Uncaught TypeError: Class extends value undefined is not a constructor or null. This improves that messaging to be more understandable to users.
Added a direction parameter to all Layer 3 endpoints. Use it together with the location parameter to filter by origin or target location (see the timeseries groups API docs).
A new usage model called Workers Standard is available for Workers and Pages Functions pricing. This is now the default usage model for accounts that are first upgraded to the Workers Paid plan. Read the blog post for more information.
The usage model set in a script’s wrangler.toml will be ignored after an account has opted-in to Workers Standard pricing. It must be configured through the dashboard (Workers & Pages > Select your Worker > Settings > Usage Model).
Workers and Pages Functions on the Standard usage model can set custom CPU limits for their Workers
Real-time Logs: Logs are now real-time, showing logs for the last hour. If you have a need for persistent logs, please let the team know on Discord. We are building out a persistent logs feature for those who want to store their logs for longer.
Providers: Azure OpenAI is now supported as a provider!
Docs: Added Azure OpenAI example.
Bug Fixes: Errors with costs and tokens should be fixed.
If a compatibility date greater than the installed version of workerd was
configured, a warning would be logged. This warning was only actionable if a new
version of wrangler was available. The intent here was to warn if a user set
a new compatibility date, but forgot to update wrangler meaning changes
enabled by the new date wouldn’t take effect. This change hides the warning if
no update is available.
It also changes the default compatibility date for wrangler dev sessions
without a configured compatibility date to the installed version of workerd.
This previously defaulted to the current date, which may have been unsupported
by the installed runtime.
In Cloudflare for SaaS, custom hostnames of third party domain owners can be used in Cloudflare.
Workers are allowed to intercept these requests based on the routes configuration.
Before this change, the same logic used by wrangler dev was used in wrangler deploy, which caused wrangler to fail with:
✘ [ERROR] Could not find zone for [partner-saas-domain.com]
This change enables Node-like console.log()ing in local mode. Objects with
lots of properties, and instances of internal classes like Request, Headers,
ReadableStream, etc will now be logged with much more detail.
As Wrangler builds your code, it writes intermediate files to a temporary
directory that gets cleaned up on exit. Previously, Wrangler used the OS’s
default temporary directory. On Windows, this is usually on the C: drive.
If your source code was on a different drive, our bundling tool would generate
invalid source maps, breaking breakpoint debugging. This change ensures
intermediate files are always written to the same drive as sources. It also
ensures unused build outputs are cleaned up when running wrangler pages dev.
This change also means you no longer need to set cwd and
resolveSourceMapLocations in .vscode/launch.json when creating an attach
configuration for breakpoint debugging. Your .vscode/launch.json should now
look something like…
{
  "configurations": [
    {
      "name": "Wrangler",
      "type": "node",
      "request": "attach",
      "port": 9229,
      // These can be omitted, but doing so causes silent errors in the runtime
      "attachExistingChildren": false,
      "autoAttachChildProcesses": false
    }
  ]
}
Previously, console.log() calls before the Workers runtime was ready to
receive requests wouldn’t be shown. This meant any logs in the global scope
likely weren’t visible. This change ensures startup logs are shown. In particular,
this should fix Remix’s HMR,
which relies on startup logs to know when the Worker is ready.
Added the crypto_preserve_public_exponent
compatibility flag to correct a wrong type being used in the algorithm field of RSA keys in
the WebCrypto API.
User limits provided via script metadata on upload
Example configuration:
[limits]
cpu_ms = 20000
#2162 a1f212e6 Thanks @WalshyDev! - Add support for service bindings in wrangler pages dev by providing the
new --service|-s flag, which accepts an array of BINDING_NAME=SCRIPT_NAME entries,
where BINDING_NAME is the name of the binding and SCRIPT_NAME is the name
of the Worker (as defined in its wrangler.toml). Such Workers need to be
running locally with wrangler dev.
For example, if a user has a Worker named worker-a, in order to bind to it locally
they’ll need to open two different terminals, navigate to the respective
Worker/Pages application in each, and then run wrangler dev and
wrangler pages dev ./publicDir --service MY_SERVICE=worker-a respectively. This will add the
MY_SERVICE binding to the Pages project’s worker env object.
Note: the name of an environment can additionally be specified after the SCRIPT_NAME,
prefixed by an @ (as in MY_SERVICE=SCRIPT_NAME@PRODUCTION); this behavior is however
experimental and not fully defined.
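For illustration, a Pages Function could then call the bound Worker through the binding (the file path and handler below are hypothetical):
// functions/api/[[path]].ts — hypothetical Pages Function using the MY_SERVICE binding.
interface Env {
  MY_SERVICE: Fetcher;
}

export const onRequest: PagesFunction<Env> = async ({ request, env }) => {
  // Forward the incoming request to the locally running worker-a via the service binding.
  return env.MY_SERVICE.fetch(request);
};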
When tailing a tail worker, messages previously had a null event property. Following https://github.com/cloudflare/workerd/pull/1248, these messages now have a valid event, specifying which scripts produced the events that caused your tail worker to run.
As part of rolling this out, we’re filtering out tail events in the internal tail infrastructure, so we control when these new messages are forwarded to tail sessions and can merge this freely.
One idiosyncrasy to note, however, is that tail workers always report an “OK” status, even if they run out of memory or throw. That is being tracked and worked on separately.
For projects which are slow to upload - either because of client bandwidth or large numbers and sizes of files - it’s possible for the JWT to expire multiple times. Since our network request concurrency is set to 3, each time the JWT expires we can get 3 failed attempts. This can quickly exhaust our upload attempt count and cause the entire process to bail.
This change makes it so that JWT refreshes do not count as failed upload attempts.
Logs: Logs will now be limited to the last 24h. If you have a use case that requires more logging, please reach out to the team on Discord.
Dashboard: Logs now refresh automatically.
Docs: Fixed Workers AI example in docs and dash.
Caching: Embedding requests are now cacheable. Rate limit will not apply for cached requests.
Bug Fixes: Identical requests to different providers are not wrongly served from cache anymore. Streaming now works as expected, including for the Universal endpoint.
Known Issues: There’s currently a bug with costs that we are investigating.
Fixed a bug in the WebCrypto API where the publicExponent field of the algorithm of RSA keys would have the wrong type. Use the crypto_preserve_public_exponent compatibility flag to enable the new behavior.
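A minimal sketch of observing the corrected type from a Worker (assumes the crypto_preserve_public_exponent flag is enabled via compatibility_flags; the handler is illustrative):
export default {
  async fetch(): Promise<Response> {
    const { publicKey } = (await crypto.subtle.generateKey(
      { name: "RSASSA-PKCS1-v1_5", modulusLength: 2048, publicExponent: new Uint8Array([1, 0, 1]), hash: "SHA-256" },
      true,
      ["sign", "verify"],
    )) as CryptoKeyPair;
    const { publicExponent } = publicKey.algorithm as RsaHashedKeyAlgorithm;
    // With crypto_preserve_public_exponent enabled, this is a Uint8Array as the spec requires.
    return new Response(`publicExponent is a Uint8Array: ${publicExponent instanceof Uint8Array}`);
  },
};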
Queue consumers can now scale to 20 concurrent invocations (per queue), up from 10. This allows you to scale out and process higher throughput queues more quickly.
Setting find_additional_modules to true in your configuration file will now instruct Wrangler to look for files in
your base_dir that match your configured rules, and deploy them as unbundled, external modules with your Worker.
base_dir defaults to the directory containing your main entrypoint.
Wrangler can operate in two modes: the default bundling mode and --no-bundle mode. In bundling mode, dynamic imports
(e.g. await import("./large-dep.mjs")) would be bundled into your entrypoint, making lazy loading less effective.
Additionally, variable dynamic imports (e.g. await import(`./lang/${language}.mjs`)) would always fail at runtime,
as Wrangler would have no way of knowing which modules to upload. The --no-bundle mode sought to address these issues
by disabling Wrangler’s bundling entirely, and just deploying code as is. Unfortunately, this also disabled Wrangler’s
code transformations (e.g. TypeScript compilation, --assets, --test-scheduled, etc).
With this change, we now additionally support partial bundling. Files are bundled into a single Worker entry-point file
unless find_additional_modules is true, and the file matches one of the configured rules. See
https://developers.cloudflare.com/workers/wrangler/bundling/ for more details and examples.
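For example, a wrangler.toml configuration along these lines (the paths and globs are illustrative) would bundle the entrypoint while deploying matching files as external modules:
main = "src/index.mjs"
find_additional_modules = true
# base_dir defaults to the directory containing `main` ("src" here)

[[rules]]
type = "ESModule"
globs = ["lang/**/*.mjs"]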
On macOS, wrangler pages dev previously generated source maps with an
incorrect number of ../s in relative paths. This change ensures paths are
always correct, improving support for breakpoint debugging.
During local development, inside your worker, the host of request.url is inferred from the routes in your config.
Previously, route patterns like “*/some/path/name” would infer the host as “some”. We now handle this case and determine we cannot infer a host from such patterns.
D1 is now in public beta, and storage limits have been increased:
Developers with a Workers Paid plan now have a 2 GB per-database limit (up from 500 MB) and can create 25 databases per account (up from 10). These limits will continue to increase automatically during the public beta.
Developers with a Workers Free plan retain the 500 MB per-database limit and can create up to 10 databases per account.
Databases must be using D1’s new storage subsystem to benefit from the increased database limits.
Previously, wrangler pages dev attempted to send messages on a closed IPC
channel when sources changed, resulting in an ERR_IPC_CHANNEL_CLOSED error.
This change ensures the channel stays open until the user exits wrangler pages dev.
Low-Latency HTTP Live Streaming (LL-HLS) is now in open beta. Enable LL-HLS on your live input for automatic low-latency playback using the Stream built-in player where supported.
name = "ai-worker"
main = "src/index.ts"
[ai]
binding = "AI"
Example script:
import { Ai } from "@cloudflare/ai";

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const ai = new Ai(env.AI);
    const story = await ai.run("@cf/meta/llama-2-7b-chat-int8", {
      prompt: "Tell me a story about the future of the Cloudflare dev platform",
    });
    return new Response(JSON.stringify(story));
  },
};
export interface Env {
AI: any;
}
Note this release changes the layout of persisted data in the .wrangler folder. KV namespaces, R2 buckets and D1 databases will automatically be migrated to the new layout. See the corresponding miniflare@3.20230918.0 release notes for more information.
Previously, breakpoint debugging using Wrangler’s DevTools was only supported
in local mode, when using Wrangler’s built-in bundler. This change extends that
to remote development, and --no-bundle.
When using --remote and --no-bundle together, uncaught errors will now be
source-mapped when logged too.
#3928 95b24b1e Thanks @JacobMGEvans! - Colorize Deployed Bundle Size
Most bundlers, and other tooling that give you size outputs, colorize the text to indicate whether the value is within certain ranges.
The current range values are:
red 100% - 90%
yellow 89% - 70%
green <70%
D1 now returns a count of rows_written and rows_read for every query executed, allowing you to assess the cost of a query for both pricing and index optimization purposes.
The meta object returned in D1’s Client API contains a total count of the rows read (rows_read) and rows written (rows_written) by that query. For example, a query that performs a full table scan (for example, SELECT * FROM users) from a table with 5000 rows would return a rows_read value of 5000:
"meta":{
"duration":0.20472300052642825,
"size_after":45137920,
"rows_read":5000,
"rows_written":0
}
Refer to D1 pricing documentation to understand how reads and writes are measured. D1 remains free to use during the alpha period.
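For example, from a Worker with a D1 binding (the binding name and table below are illustrative):
// Assumes a D1 binding named DB and a `users` table.
const { results, meta } = await env.DB.prepare("SELECT * FROM users").all();
console.log(`read ${meta.rows_read} rows, wrote ${meta.rows_written} rows`);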
Stopped collecting data in the old Layer 3 data source.
Updated the Layer 3 timeseries endpoint to use the new Layer 3 data source by default; fetching the old data source now requires sending the parameter metric=bytes_old.
Deprecated the Layer 3 summary endpoint; it will stop receiving data after 2023-08-14.
You can now bind a D1 database to your Workers directly in the Cloudflare dashboard. To bind D1 from the Cloudflare dashboard, select your Worker project -> Settings -> Variables -> and select D1 Database Bindings.
Note: If you have previously deployed a Worker with a D1 database binding with a version of wrangler prior to 3.5.0, you must upgrade to wrangler v3.5.0 first before you can edit your D1 database bindings in the Cloudflare dashboard. New Workers projects do not have this limitation.
Legacy D1 alpha users who had previously prefixed their database binding manually with __D1_BETA__ should remove this as part of this upgrade. Your Worker scripts should call your D1 database via env.BINDING_NAME only. Refer to the latest D1 getting started guide for best practices.
We recommend all D1 alpha users begin using wrangler 3.5.0 (or later) to benefit from improved TypeScript types and future D1 API improvements.
Stream now supports adding a scheduled deletion date to new and existing videos. Live inputs support deletion policies for automatic recording deletion.
Databases using D1’s new storage subsystem can now grow to 500 MB each, up from the previous 100 MB limit. This applies to both existing and newly created databases.
Updated HTTP timeseries endpoint URLs
to timeseries_groups (example)
for consistency. Old timeseries endpoints are still available, but will soon be removed.
Databases created via the Cloudflare dashboard and Wrangler (as of v3.4.0) now use D1’s new storage subsystem by default. The new backend can be 6-20x faster than D1’s original alpha backend.
To understand which storage subsystem your database uses, run wrangler d1 info YOUR_DATABASE and inspect the version field in the output.
Databases with version: beta use the new storage backend and support the Time Travel API. Databases with version: alpha only use D1’s older, legacy backend.
Time Travel is now available. Time Travel allows you to restore a D1 database back to any minute within the last 30 days (Workers Paid plan) or 7 days (Workers Free plan), at no additional cost for storage or restore operations.
Refer to the Time Travel documentation to learn how to travel backwards in time.
Improved performance for ranged reads on very large files. Previously, ranged reads near the end of very large files would be noticeably slower than
ranged reads on smaller files. Performance should now be consistently good, independent of file size.
New documentation has been published on how to use D1’s support for generated columns to define columns that are dynamically generated on write (or read). Generated columns allow you to extract data from JSON objects or use the output of other SQL functions.
Fixed a bug where calling GetBucket on a non-existent bucket would return a 500 instead of a 404.
Improved S3 compatibility for ListObjectsV1: NextMarker is now only set when IsTruncated is true.
The R2 worker bindings now support parsing conditional headers with multiple etags. These etags can now be strong, weak or a wildcard. Previously the bindings only accepted headers containing a single strong etag.
S3 putObject now supports sha256 and sha1 checksums. These were already supported by the R2 worker bindings.
CopyObject in the S3 compatible api now supports Cloudflare specific headers which allow the copy operation to be conditional on the state of the destination object.
To facilitate a transition from the previous Error.cause behaviour, detailed error messages will continue to be populated within Error.cause as well as the top-level Error object until approximately July 14th, 2023. Future versions of both wrangler and the D1 client API will no longer populate Error.cause after this date.
Following an update to the WHATWG URL spec, the delete() and has() methods of the URLSearchParams class now accept an optional second argument to specify the search parameter’s value. This is potentially a breaking change, so it is gated behind the new urlsearchparams_delete_has_value_arg and url_standard compatibility flags.
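A short sketch of the new second argument (assumes the flags above are enabled):
const params = new URLSearchParams("a=1&a=2");
params.delete("a", "2"); // removes only the a=2 pair
console.log(params.has("a", "1")); // true
console.log(params.toString()); // "a=1"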
A new Hibernatable WebSockets API
(beta) has been added to Durable Objects. The Hibernatable
WebSockets API allows a Durable Object that is not currently running an event
handler (for example, processing a WebSocket message or alarm) to be removed from
memory while keeping its WebSockets connected (“hibernation”). A Durable Object
that hibernates will not incur billable Duration (GB-sec) charges.
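A minimal sketch of the beta API from a Durable Object (the class name and echo behavior are illustrative):
export class EchoRoom {
  state: DurableObjectState;

  constructor(state: DurableObjectState) {
    this.state = state;
  }

  async fetch(request: Request): Promise<Response> {
    const { 0: client, 1: server } = new WebSocketPair();
    // acceptWebSocket() lets the runtime hibernate this object while the socket stays connected.
    this.state.acceptWebSocket(server);
    return new Response(null, { status: 101, webSocket: client });
  }

  // Called when a message arrives, waking the object from hibernation if necessary.
  async webSocketMessage(ws: WebSocket, message: string | ArrayBuffer): Promise<void> {
    ws.send(message);
  }
}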
D1 has a new experimental storage backend that dramatically improves query throughput, latency and reliability. The experimental backend will become the default backend in the near future. To create a database using the experimental backend, use wrangler and set the --experimental-backend flag when creating a database.
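A minimal example (the database name is a placeholder):
wrangler d1 create <DATABASE_NAME> --experimental-backend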
You can now provide a location hint when creating a D1 database, which will influence where the leader (writer) is located. By default, D1 will automatically create your database in a location close to where you issued the request to create a database. In most cases this allows D1 to choose the optimal location for your database on your behalf.
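A hypothetical example, assuming the --location flag and the weur (Western Europe) location hint:
wrangler d1 create <DATABASE_NAME> --location=weur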
New documentation has been published that covers D1’s extensive JSON function support. JSON functions allow you to parse, query and modify JSON directly from your SQL queries, reducing the number of round trips to your database, or data queried.
The V2 build system is now available in open beta. Enable the V2 build system by going to your Pages project in the Cloudflare dashboard and selecting Settings > Build & deployments > Build system version.
The new connect() method allows you to connect to any TCP-speaking services directly from your Workers. To learn more about other protocols supported on the Workers platform, visit the new Protocols documentation.
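A minimal sketch (the host, port and raw HTTP request below are illustrative):
import { connect } from "cloudflare:sockets";

export default {
  async fetch(): Promise<Response> {
    // Open a raw TCP connection and write a plain HTTP/1.1 request over it.
    const socket = connect({ hostname: "example.com", port: 80 });
    const writer = socket.writable.getWriter();
    await writer.write(new TextEncoder().encode("GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"));
    // Stream whatever the server sends back as the response body.
    return new Response(socket.readable);
  },
};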
We have added new native database integrations for popular serverless database providers, including Neon, PlanetScale, and Supabase. Native integrations automatically handle the process of creating a connection string and adding it as a Secret to your Worker.
You can now also connect directly to databases over TCP from a Worker, starting with PostgreSQL. Support for PostgreSQL is based on the popular pg driver, and allows you to connect to any PostgreSQL instance over TLS from a Worker directly.
The R2 Migrator (Super Slurper), which automates the process of migrating from existing object storage providers to R2, is now Generally Available.
Cursor, an experimental AI assistant, trained to answer
questions about Cloudflare’s Developer Platform, is now available to preview!
Cursor can answer questions about Workers and the Cloudflare Developer Platform,
and is itself built on Workers. You can read more about Cursor in the announcement
blog.
The new nodeJsCompatModule type can be used with a Worker bundle to emulate a Node.js environment. Common Node.js globals such as process and Buffer will be present, and require('...') can be used to load Node.js built-ins without the node: specifier prefix.
Fixed an issue where websocket connections would be disconnected when updating workers. Now, only websockets connected to Durable Object instances are disconnected by updates to that Durable Object’s code.
Cloudflare Stream now supports player enhancement properties.
With player enhancements, you can modify your video player to incorporate elements of your branding, such as your logo, and customize additional options to present to your viewers.
For more, refer to the documentation to get started.
URL.canParse(...) is a new standard API for testing that an input string can be parsed successfully as a URL without the additional cost of creating and throwing an error.
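A short example of the new API:
const ok = URL.canParse("https://example.com/path"); // true, without try/catch around `new URL()`
const bad = URL.canParse("not a url"); // false
console.log(ok, bad);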
The Workers-specific IdentityTransformStream and FixedLengthStream classes now support specifying a highWaterMark for the writable-side that is used for backpressure signaling using the standard writer.desiredSize/writer.ready mechanisms.
Queue consumers will now automatically scale up based on the number of messages being written to the queue. To control or limit concurrency, you can explicitly define a max_concurrency for your consumer.
Fixed a bug in Wrangler tail and live logs on the dashboard that
prevented the Administrator Read-Only and Workers Tail Read roles from successfully
tailing Workers.
Previously, generating a download for a live recording exceeding four hours resulted in failure.
To fix the issue, video downloads are now only available for live recordings under four hours. Live recordings exceeding four hours can still be played but cannot be downloaded.
Queue consumers will soon automatically scale up concurrently as a queue’s backlog grows in order to keep overall message processing latency down. Concurrency will be enabled on all existing queues by 2023-03-28.
To opt out, or to configure a fixed maximum concurrency, set max_concurrency = 1 in your wrangler.toml file or via the queues dashboard.
To opt in, you do not need to take any action: your consumer will begin to scale out as needed to keep up with your message backlog. It will scale back down as the backlog shrinks, and/or if a consumer starts to generate a higher rate of errors. To learn more about how consumers scale, refer to the consumer concurrency documentation.
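For example, a consumer could opt out by pinning its concurrency in wrangler.toml (the queue name is a placeholder):
[[queues.consumers]]
queue = "my-queue"
max_concurrency = 1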
This allows you to mark a message as delivered as you process it within a batch, and avoids the entire batch from being redelivered if your consumer throws an error during batch processing. This can be particularly useful when you are calling external APIs, writing messages to a database, or otherwise performing non-idempotent actions on individual messages within a batch.
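A sketch of a queue consumer acknowledging messages individually (the helper function is hypothetical):
// Hypothetical stand-in for non-idempotent work such as an external API call or database write.
async function processMessage(body: unknown): Promise<void> {
  console.log("processed", body);
}

export default {
  async queue(batch: MessageBatch<unknown>): Promise<void> {
    for (const msg of batch.messages) {
      await processMessage(msg.body);
      msg.ack(); // marks this message as delivered; it won't be redelivered if a later message throws
    }
  },
};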
Fixed a bug where transferring large request bodies to a Durable Object was unexpectedly slow.
Previously, an error would be thrown when trying to access unimplemented standard Request and Response properties. Now those will be left as undefined.
IPv6 percentage is now calculated as (IPv6 requests / requests for dual-stacked content), whereas before it
was calculated as (IPv6 requests / IPv4+IPv6 requests).
Added new Layer 3 data source and related endpoints.
Updated the Layer 3 timeseries endpoint to support fetching both the current and new data sources. For backwards-compatibility
reasons, fetching the new data source requires sending the parameter metric=bytes; otherwise the current data
source will be returned.
Cloudflare Stream now detects non-video content on upload using the POST API and returns a 400 Bad Request HTTP error with code 10059.
Previously, if you or one of your users attempted to upload a file that is not a video (ex: an image), the request to upload would appear successful, but then fail to be encoded later on.
With this change, Stream responds to the upload request with an error, allowing you to give users immediate feedback if they attempt to upload non-video content.
Queues now allows developers to create up to 100 queues per account, up from the initial beta limit of 10 per account. This limit will continue to increase over time.
You can now deep-link to a Pages deployment in the dashboard with :pages-deployment. An example would be https://dash.cloudflare.com?to=/:account/pages/view/:pages-project/:pages-deployment.
The “per-video” analytics API is being deprecated. If you still use this API, you will need to switch to using the GraphQL Analytics API by February 1, 2023. After this date, the per-video analytics API will no longer be available.
The GraphQL Analytics API provides the same functionality and more, with additional filters and metrics, as well as the ability to fetch data about multiple videos in a single request. Queries are faster, more reliable, and built on a shared analytics system that you can use across many Cloudflare products.
Cloudflare Stream now has no limit on the number of live inputs you can create. Stream is designed to allow your end-users to go live — live inputs can be created quickly on-demand via a single API request for each user of your platform or app.
For more on creating and managing live inputs, get started with the docs.
Multipart upload part sizes are always expected to be the same size, but this enforcement is now done when you complete an upload instead of every time you upload a part.
Fixed a performance issue where concurrent multipart part uploads would get rejected.
When playing live video, Cloudflare Stream now provides significantly more accurate estimates of the bandwidth needs of each quality level to client video players. This ensures that live video plays at the highest quality that viewers have adequate bandwidth to play.
As live video is streamed to Cloudflare, we transcode it to make it available to viewers at multiple quality levels. During transcoding, we learn about the real bandwidth needs of each segment of video at each quality level, and use this to provide an estimate of the bandwidth requirements of each quality level in the HLS (.m3u8) and DASH (.mpd) manifests.
If a live stream contains content with low visual complexity, like a slideshow presentation, the bandwidth estimates provided in the HLS manifest will be lower, ensuring that the most viewers possible view the highest quality level, since it requires relatively little bandwidth. Conversely, if a live stream contains content with high visual complexity, like live sports with motion and camera panning, the bandwidth estimates provided in the HLS manifest will be higher, ensuring that viewers with inadequate bandwidth switch down to a lower quality level, and their playback does not buffer.
This change is particularly helpful if you’re building a platform or application that allows your end users to create their own live streams, where these end users have their own streaming software and hardware that you can’t control. Because this new functionality adapts based on the live video we receive, rather than just the configuration advertised by the broadcaster, even in cases where your end users’ settings are less than ideal, client video players will not receive excessively high estimates of bandwidth requirements, causing playback quality to decrease unnecessarily. Your end users don’t have to be OBS Studio experts in order to get high quality video playback.
No work is required on your end — this change applies to all live inputs, for all customers of Cloudflare Stream. For more, refer to the docs.
You can now deep-link to a Pages project in the dashboard with :pages-project. An example would be https://dash.cloudflare.com?to=/:account/pages/view/:pages-project.
Cloudflare Stream now supports playback of live videos and live recordings using the AV1 codec, which uses 46% less bandwidth than H.264.
CORS preflight responses and adding CORS headers for other responses is now implemented for S3 and public buckets. Currently, the only way to configure CORS is via the S3 API.
Fixup for bindings list truncation to work more correctly when listing keys with custom metadata that contain " characters or when some keys/values contain certain multi-byte UTF-8 values.
The S3 GetObject operation now only returns Content-Range in response to a ranged request.
The R2 put() binding options can now be given an onlyIf field, similar to get(), that performs a conditional upload.
The R2 delete() binding now supports deleting multiple keys at once.
The R2 put() binding now supports user-specified SHA-1, SHA-256, SHA-384, SHA-512 checksums in options.
User-specified object checksums will now be available in the R2 get() and head() bindings response. MD5 is included by default for non-multipart uploaded objects.
You can now enable and disable individual live outputs via the API or Stream dashboard, allowing you to control precisely when you start and stop simulcasting to specific destinations like YouTube and Twitch. For more, read the docs.
The S3 DeleteObjects operation no longer trims the space from around the keys before deleting. Previously, this meant files with leading/trailing spaces could not be deleted; additionally, if an object with the trimmed key existed, it would be deleted instead. The S3 DeleteObject operation was not affected by this.
Fixed presigned URL support for the S3 ListBuckets and ListObjects operations.
URLs in the Stream Dashboard and Stream API now use a subdomain specific to your Cloudflare Account: customer-{CODE}.cloudflarestream.com. This change allows you to:
Use Content Security Policy (CSP) directives specific to your Stream subdomain, to ensure that only videos from your Cloudflare account can be played on your website.
Allowlist only your Stream account subdomain at the network-level to ensure that only videos from a specific Cloudflare account can be accessed on your network.
No action is required from you, unless you use Content Security Policy (CSP) on your website. For more on CSP, read the docs.
Uploads will automatically infer the Content-Type based on file body
if one is not explicitly set in the PutObject request. This functionality will
come to multipart operations in the future.
Added a dummy implementation of GetBucketAcl that mimics the response a basic AWS S3 bucket returns when first created.
Fixed an S3 compatibility issue for error responses with MinIO .NET SDK and any other tooling that expects no xmlns namespace attribute on the top-level Error tag.
List continuation tokens prior to 2022-07-01 are no longer accepted and must be obtained again through a new list operation.
The list() binding will now correctly return a smaller limit if too much data would otherwise be returned (previously would return an Internal Error).
Improvements to 500s: we now convert errors, so things that were previously concurrency problems for some operations should now be TooMuchConcurrency instead of InternalError. We’ve also reduced the rate of 500s through internal improvements.
ListMultipartUpload correctly encodes the returned Key if the encoding-type is specified.
S3 XML documents sent to R2 that have an XML declaration are no longer rejected with 400 Bad Request / MalformedXML.
Minor S3 XML compatibility fix impacting Arq Backup on Windows only (not the Mac version). Response now contains XML declaration tag prefix and the xmlns attribute is present on all top-level tags in the response.
Support the r2_list_honor_include compat flag coming up in an upcoming runtime release (default behavior as of 2022-07-14 compat date). Without that compat flag/date, list will continue to function implicitly as include: ['httpMetadata', 'customMetadata'] regardless of what you specify.
cf-create-bucket-if-missing can be set on a PutObject/CreateMultipartUpload request to implicitly create the bucket if it does not exist.
Fix S3 compatibility with the MinIO client’s spec-non-compliant XML for publishing multipart uploads. Any leading and trailing quotes in CompleteMultipartUpload are now optional and ignored, as this appears to be the actual non-standard behavior AWS implements.
Pages now supports .dev.vars in wrangler pages dev, which allows you to use environment variables during your local development without chaining --envs.
This functionality requires Wrangler v2.0.16 or higher.
Unsupported search parameters to ListObjects/ListObjectsV2 are
now rejected with 501 Not Implemented.
Fixes for listing:
Fixed listing behavior when the number of files within a folder exceeds the limit (you’d end
up seeing a CommonPrefix for that large folder N times, where N = number of children
within the CommonPrefix / limit).
Fixed a corner case where listing could cause objects sharing the base name of a "folder" to be skipped.
Fixed listing over some files that shared a certain common prefix.
DeleteObjects can now handle 1000 objects at a time.
S3 CreateBucket requests can specify x-amz-bucket-object-lock-enabled with a value of false and not have the request rejected with a NotImplemented
error. A value of true will continue to be rejected as R2 does not yet support
object locks.
We now keep track of the files that make up each deployment and intelligently only upload the files that we have not seen. This means that similar subsequent deployments should only need to upload a minority of files and this will hopefully make uploads even faster.
This functionality requires Wrangler v2.0.11 or higher.
Fixed a bug where the S3 API’s PutObject or the .put() binding could fail but still show the bucket upload as successful.
If conditional headers are provided to S3 API UploadObject or CreateMultipartUpload operations, and the object exists, a 412 Precondition Failed status code will be returned if these checks are not met.
Add support for S3 virtual-hosted style paths, such as <BUCKET>.<ACCOUNT_ID>.r2.cloudflarestorage.com instead of path-based routing (<ACCOUNT_ID>.r2.cloudflarestorage.com/<BUCKET>).
Implemented GetBucketLocation for compatibility with external tools; it will always return a LocationConstraint of auto.
During or after uploading a video to Stream, you can now specify a value for a new field, creator. This field can be used to identify the creator of the video content, linking the way you identify your users or creators to videos in your Stream account. For more, read the blog post.
When using the S3 API, an empty string and us-east-1 will now alias to the auto region for compatibility with external tools.
GetBucketEncryption, PutBucketEncryption and DeleteBucketEncryption are now supported (the only supported value currently is AES256).
Unsupported operations are explicitly rejected as unimplemented rather than being implicitly converted into ListObjectsV2/PutBucket/DeleteBucket respectively.
S3 API CompleteMultipartUpload requests are now properly escaped.
Pagination cursors are no longer returned when the number of keys in a bucket is the same as the MaxKeys argument.
The S3 API ListBuckets operation now accepts cf-max-keys, cf-start-after and cf-continuation-token headers, which behave the same as the respective URL parameters.
The S3 API ListBuckets and ListObjects endpoints now allow per_page to be 0.
The S3 API CopyObject source parameter now requires a leading slash.
The S3 API CopyObject operation now returns a NoSuchBucket error when copying to a non-existent bucket instead of an internal error.
Enforce the requirement for auto as the region in SigV4 signing and in the CreateBucket LocationConstraint parameter.
The S3 API CreateBucket operation now returns the proper location response header.
The Stream Dashboard now has an analytics panel that shows the number of minutes of both live and recorded video delivered. This view can be filtered by Creator ID, Video UID, and Country. For more in-depth analytics data, refer to the bulk analytics documentation.
The Stream Player can now be configured to use a custom letterbox color, displayed around the video (‘letterboxing’ or ‘pillarboxing’) when the video’s aspect ratio does not match the player’s aspect ratio. Refer to the documentation on configuring the Stream Player here.
Cloudflare Stream now supports the SRT live streaming protocol. SRT is a modern, actively maintained streaming video protocol that delivers lower latency, and better resilience against unpredictable network conditions. SRT supports newer video codecs and makes it easier to use accessibility features such as captions and multiple audio tracks.
When viewers manually change the resolution of video they want to receive in the Stream Player, this change now happens immediately, rather than once the existing resolution playback buffer has finished playing.
If you choose to use a third-party player with Cloudflare Stream, you can now easily access HLS and DASH manifest URLs from within the Stream Dashboard. For more about using Stream with third-party players, read the docs here.
When a live input is connected, the Stream Dashboard now displays technical details about the connection, which can be used to debug configuration issues.
You can now configure Stream to send webhooks each time a live stream connects and disconnects. For more information, refer to the Webhooks documentation.
You can now start and stop live broadcasts without having to provide a new video UID to the Stream Player (or your own player) each time the stream starts and stops. Read the docs.
Once a live video has ended and been recorded, you can now give viewers the option to download an MP4 video file of the live recording. For more, read the docs here.
The Stream Player now displays preview images when viewers hover their mouse over the seek bar, making it easier to skip to a specific part of a video.
All Cloudflare Stream customers can now give viewers the option to download videos uploaded to Stream as an MP4 video file. For more, read the docs here.
You can now opt-in to the Stream Connect beta, and use Cloudflare Stream to restream live video to any platform that accepts RTMPS input, including Facebook, YouTube and Twitch.
You can now use Cloudflare Stream to restream or simulcast live video to any platform that accepts RTMPS input, including Facebook, YouTube and Twitch.
If you use Cloudflare Stream with a third party player, and send the clientBandwidthHint parameter in requests to fetch video manifests, Cloudflare Stream now selects the ideal resolution to provide to your client player more intelligently. This ensures your viewers receive the ideal resolution for their network connection.
Cloudflare Stream now delivers video using 3-10x less bandwidth, with no reduction in quality. This ensures faster playback for your viewers with less buffering, particularly when viewers have slower network connections.
Videos with multiple audio tracks (ex: 5.1 surround sound) are now mixed down to stereo when uploaded to Stream. The resulting video, with stereo audio, is now playable in the Stream Player.