AWS pre:Invent 2025 — The Season AWS Quietly Turns Up the Volume

A builder’s guide to the announcements shaping the next wave of AWS services, from Nova to Q Developer/Kiro.

There’s no official banner that says Welcome to pre:Invent, but you know it when it starts. For me, it was around September 30th — the day announcements started coming in hot and heavy.
As of 23 November, AWS had quietly dropped 560 updates. That’s not marketing fluff. That’s real features, real services, and a lot of “Wait—when did that get announced?”

UPDATE: At the time of publishing, the number of updates had risen to over 600.

One of the earliest surprises?
CodeCommit rose from the dead.
Not just patched — revived. And suddenly, AWS’s developer story started feeling different. Especially with Kiro going GA and Q Developer updates waiting in the wings.


But first, some personal reflection

It has been a quieter year on the blog, and for that I feel I should explain. Most of my time has been split between co-authoring Cloud Native Anti-Patterns, organising AWS Community Day Australia, and navigating some career shifts that have kept me deeply involved in strategy, delivery and community work.

But while the writing may have paused, the observing definitely did not.

I have been fortunate to stand on some significant stages this year, including the AWS Summit in Sydney and the AWS Global Ambassador Summit. At the same time, I have kept my ear to the ground and my eyes on everything AWS has been releasing, especially the announcements that slipped under the radar yet signal where AWS is heading.

This year alone, I travelled to the United States four times. One of those was a family holiday in Florida, I promise. I also travelled to New Zealand twice for key customer engagements that spanned architecture and AI strategy. Throughout these trips, the same pattern became clear. AWS is not just shipping features. It is repositioning how builders are expected to build.


It is now Thanksgiving weekend in the United States, and I am on my way to Las Vegas for AWS re:Invent 2025. Most people travel with podcasts or playlists. I travel with announcement logs, RSS feeds and a folder of release notes. And so, as I have done in previous years, it is time to write my annual pre:Invent blog.

This season is not just about what was announced. It is about what those announcements tell us about the year ahead.


Early Teasers: Small, but Meaningful

Before the bigger releases landed, AWS started with a series of smaller, quality-of-life improvements. On their own, they were useful. Together, they hinted at the direction AWS was heading.

| Feature Drop | Why It Matters |
| --- | --- |
| ECS IPv6-Only Support | A clean and future-ready approach to container networking. Something I intend to try. |
| Claude Sonnet 4.5 in Bedrock | The first of several Claude 4.5 models released this season. |
| AWS Transfer Family: VPC Endpoint Policies and FIPS Support | Quietly significant for security- and compliance-heavy environments. |

And then the tempo shifted.


Amazon ECS Managed Instances: The First Big Signal

This was not a patch or an upgrade. It marked a new way to run containers using EC2 without manually handling the parts most people dislike. AWS introduced a model that sits between Fargate and fully managed EC2, without sending you back into infrastructure administration.

This is the comparison I’ve been using:

| | Fargate | ECS Managed Instances |
| --- | --- | --- |
| Who manages the infrastructure? | Fully abstracted by AWS | AWS provisioned, but you retain EC2 control |
| Pricing model | Charged per task, based on CPU and memory | Based on EC2 instances; supports Reserved, Spot, and bin-packing |
| Isolation | Firecracker microVM per task | Shared EC2 host, more flexible networking |
| Best for | Fast deployment, short-lived or event-driven workloads | GPU workloads, model serving, custom runtimes, cost-tuned scaling |

For AI workloads, the distinction becomes clear. Fargate works well for fast deployment of smaller agent services or lightweight inference. When you need GPU access, advanced networking, or higher throughput on long-running workloads, ECS Managed Instances allow you to make those choices without taking on full infrastructure management.
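As a concrete illustration of the kind of workload that tips the balance, here is a minimal ECS task-definition fragment requesting a GPU, something Fargate does not offer. The image URI and family name are placeholders:

```json
{
  "family": "model-serving",
  "requiresCompatibilities": ["EC2"],
  "containerDefinitions": [
    {
      "name": "inference",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/inference:latest",
      "memory": 8192,
      "resourceRequirements": [
        { "type": "GPU", "value": "1" }
      ]
    }
  ]
}
```

A task like this needs EC2-backed capacity; Managed Instances is AWS offering to run that capacity for you.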

It is the balance point AWS has been working towards.


The Stream of Announcements That Followed

Once Managed Instances arrived, the rate of announcements accelerated. Over the following weeks, updates landed across containers, serverless, databases, AI, and platform-level tooling.

To make sense of it all, I’ve grouped the ones that stood out most to me:

Containers and ECS


Amazon ECS Express Mode: Faster Starts, But Where Does It Fit?

Amazon ECS Express Mode arrived quietly but confidently on 21 November. It positions itself as a faster, more guided way to deploy containerised applications on ECS without having to stitch together VPCs, load balancers, HTTPS, scaling policies and DNS mapping yourself. In short, you give it a container image, and it gives you a running, load-balanced, HTTPS-ready application with a public or private URL already attached.

It automatically provisions an Application Load Balancer and intelligently consolidates up to 25 Express Mode services behind a single ALB when it makes sense to do so. Each service still maintains isolation using rule-based routing. Everything remains visible in your own account, and the control plane is not hidden behind an abstraction layer. If you want to later evolve that application into a more advanced ECS architecture, you have full access to the underlying resources.

It is fast, it is tidy, and it definitely lowers the barrier to get a container-based service online quickly. In that sense, it feels more like “AWS-native App Runner, but without the abstraction penalty”.

The real question, in my view, is where it sits in the ECS story. Managed Instances is clearly reshaping how we think about running containers with EC2 flexibility and Fargate simplicity. Express Mode is different. It is not a new compute model, but rather a deployment accelerator. Useful, and certainly more aligned with how builders think today, but I’m still deciding if this becomes a mainstream pattern or ends up in the same category as Elastic Beanstalk and App Runner — technically elegant, but not widely adopted.

It is available in all regions, with no extra charge… but whether it becomes essential, the community will decide.

Serverless Compute


AWS Lambda: Tenant Isolation Mode for Multi-Tenant Architectures

AWS has released Tenant Isolation Mode for Lambda, aimed at teams building SaaS and multi-tenant platforms. When enabled, Lambda now guarantees that each tenant’s invocations run in fully isolated execution environments, rather than sharing runtime sandboxes across tenants.

This removes the old trade-off between maintaining a separate Lambda function per tenant (which doesn’t scale well) and sharing a single function with careful manual isolation. Tenant Isolation Mode strikes the middle ground: a single function, securely isolated tenant runtimes, and full access to existing Lambda features including per-tenant warm starts, IAM controls, and serverless scaling.

It must be enabled when creating the function, and each invocation needs a tenant identifier. Expect stronger security isolation at the cost of additional execution environments being maintained, especially for high tenant counts.

This helps bring serverless back into consideration for SaaS-style multi-tenant designs where compliance, separation, and data trust boundaries matter.
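The isolation model is easier to picture in code. The sketch below is not the Lambda API, just a toy illustration of the core idea: one function, with warm execution environments keyed by tenant so state is never shared across tenants:

```python
# Toy model of per-tenant execution environments. This is a conceptual
# sketch of the isolation guarantee, not the actual Lambda API.

class ExecutionEnvironment:
    """Stands in for a warm Lambda sandbox with tenant-scoped state."""
    def __init__(self, tenant_id):
        self.tenant_id = tenant_id
        self.cache = {}  # tenant-scoped warm state lives here

class TenantIsolatedFunction:
    """One function; invocations are routed to per-tenant environments."""
    def __init__(self):
        self._environments = {}

    def invoke(self, tenant_id, event):
        # Each tenant gets (and reuses) its own environment, so warm
        # state from one tenant is never visible to another.
        env = self._environments.setdefault(
            tenant_id, ExecutionEnvironment(tenant_id)
        )
        env.cache["invocations"] = env.cache.get("invocations", 0) + 1
        return {"tenant": env.tenant_id, "warm_invocations": env.cache["invocations"]}

fn = TenantIsolatedFunction()
fn.invoke("acme", {})
print(fn.invoke("acme", {}))    # acme reuses its own warm environment
print(fn.invoke("globex", {}))  # globex never sees acme's state
```

In the real service, the tenant identifier is supplied on each invocation and Lambda maintains the separate environments for you, at the cost of more environments to keep warm at high tenant counts.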


Amazon S3

One of the most mature AWS services continued to evolve throughout 2025. Vectors, Tables… the story continues.


Amazon S3: Attribute-Based Access Control for General Purpose Buckets

Amazon S3 now supports attribute-based access control (ABAC) for general purpose buckets. This lets bucket access be governed using tags rather than static bucket names or long policy statements. It is particularly useful in environments where new buckets are regularly created for projects, accounts, or workload isolation.

With ABAC enabled, access decisions can be made based on tags such as project, environment, or cost-center. For example, any IAM role tagged project:Alpha can automatically be granted access to S3 buckets with the same tag, without updating individual bucket policies.

This helps reduce the number of static policies that need to be written and maintained, and shifts S3 access control from bucket-by-bucket configuration to scalable, metadata-driven governance. It also aligns with existing AWS tagging practices used for cost allocation and resource tracking.

ABAC must be enabled on each bucket individually. Once enabled, tags can be managed using the recommended resource tagging APIs, and existing bucket policies continue to work alongside tag-based policies.
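The tag-matching pattern can be sketched as an IAM policy statement using the standard ABAC condition keys. The `project` tag key is just an example; check the S3 ABAC documentation for the bucket-level specifics:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccessWhenProjectTagsMatch",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/project": "${aws:PrincipalTag/project}"
        }
      }
    }
  ]
}
```

One statement like this replaces a policy per bucket: any role tagged `project:Alpha` can reach any ABAC-enabled bucket carrying the same tag.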

AWS continues to turn unspoken assumptions into enforceable controls. Encryption in transit, tenant isolation, S3 public access, ABAC — each time, it moves us from “We think it’s secure” to “We can prove it.”


Machine Learning and Foundation Models

If there’s one category that refuses to stay quiet during pre:Invent, it’s this one. While containers and serverless saw meaningful updates, Machine Learning and Foundation Models completely dominated the release cycle. Not just with model upgrades, but with signals of how AWS plans to evolve the developer workflow, the agent ecosystem, and what “native AI” on AWS actually looks like.

This is not just about faster models or lower latency. It is about how we will build with them.


Amazon Bedrock AgentCore — Infrastructure for Running AI Agents at Scale

Amazon Bedrock AgentCore is now generally available. It is a managed runtime built specifically for AI agents… not models, but the systems that use models, tools, memory, identity and context to complete multi-step tasks.

AgentCore provides a serverless execution environment for agents, handling state, authentication, tool calls, and session orchestration. It supports long-running workflows, private VPC deployment, tagging, IAM integration, and native observability through CloudWatch and X-Ray. You can build with any model, including custom frameworks and open-source agents, without managing containers, servers, or session lifecycles.

The value here sits between experimentation and production. Rather than stitching together Lambda, Step Functions, custom middleware, state stores and API gateways to make something “agent-like”, AgentCore provides those architectural building blocks as first-class services.

It does not replace models or frameworks. It gives them somewhere to live.

Nova: Last Year’s Stage, This Year’s Context

Nova made its debut at re:Invent 2024. It was introduced by Andy Jassy in his first keynote appearance in three years. That was not just a model launch. It was AWS setting a benchmark for what native, integrated, multimodal reasoning inside the AWS ecosystem could look like.

Nova sits less as a competitor to other foundation models, and more as a preview of how AWS intends to combine cloud-native architecture, compute, and intelligent systems.

Nova Act Extension for IDEs

Announced on 23 September, the Nova Act extension brings agent development directly into familiar development environments including Visual Studio Code, Kiro and Cursor. Instead of writing scripts in isolation, switching between browser tools or copying JSON back and forth, developers can now build, test and refine Nova Act agent workflows directly in their IDE.

It consolidates natural language scripting, precision editing and browser-based task simulation in one interface. This feels closer to real agent development rather than a chat interface pretending to be an IDE. It is built on top of the Nova Act SDK, and keeps all resources visible and owned in your account.

Over time, this could become the pattern for how Nova agents are built, tested and packaged into real AWS applications. It is available now through standard IDE marketplaces.

Customisable Content Moderation Controls

Released on 21 October, Nova Lite and Nova Pro now support adjustable moderation profiles, allowing organisations to configure controls around safety, sensitive content, fairness and security. Some guardrails remain non-negotiable, especially relating to child safety, privacy and harm prevention.

The value here is not about turning controls off, but about tuning them to match different use cases. Content generation for healthcare, legal review, education, product support and financial modelling are not all the same. Nova now reflects that reality.

Available currently in US East (N. Virginia). Configuration is handled through documented policy settings and is tied to approved business use cases.

Nova Multimodal Embeddings

Announced on 28 October, Nova Multimodal Embeddings brings text, images, documents, audio and video into a single embedding space for retrieval and semantic search. Instead of stitching together separate models, organisations can now use a unified embedding model for RAG and search across diverse formats.

It supports up to 8K tokens and media segments up to 30 seconds, with auto-segmentation for longer files. It also offers multiple embedding output sizes to balance accuracy and cost, and supports both synchronous and asynchronous processing.

This allows applications to search over training archives, product media libraries, financial reports with diagrams, and multi-modal support material using a single query format. It is available through Bedrock in US East (N. Virginia).
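The payoff of a single embedding space is that retrieval reduces to one similarity computation, regardless of the modality of the indexed item. A minimal sketch, with made-up vectors standing in for the model’s output:

```python
import math

# Conceptual sketch of retrieval over a unified embedding space.
# The vectors are invented for illustration; in practice each would
# come from the embedding model, whether the source was text, an
# image, a document, or an audio clip.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# One index, mixed modalities -- the point of a unified embedding space.
index = {
    "q3-report.pdf (document)": [0.9, 0.1, 0.0],
    "earnings-call.mp3 (audio)": [0.6, 0.4, 0.2],
    "holiday-photo.jpg (image)": [0.0, 0.1, 0.9],
}

query = [0.85, 0.15, 0.05]  # embedding of e.g. "quarterly financial results"
best = max(index, key=lambda k: cosine_similarity(query, index[k]))
print(best)
```

The same query vector ranks a document, an audio file, and an image against each other directly, which is exactly what separate per-modality models make awkward.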

Web Grounding: Bringing Real-Time Information into Nova Models

Launched on 29 October, Web Grounding allows Nova models (currently supported on Nova Premier) to retrieve publicly available information in real time, including verifiable citations, as part of responses. It brings an embedded RAG-like capability directly into the model via the Bedrock Tool Use API.

This shifts Nova from static knowledge to dynamic knowledge. Instead of relying only on pre-training or private context, Nova can access real-time sources, apply reasoning and return attributed responses.

Available in US East (N. Virginia), Ohio and Oregon via cross-region inference.

These announcements are not isolated features. They suggest that Nova is evolving from a single model family into a buildable, governable, multi-modal system intended to live inside real AWS architectures. Nova is not just something you prompt. It is something you design with.

Claude 4.5: Released in Stages, Not by Accident

AWS did not release Claude 4.5 models all at once. They staggered them through the pre:Invent period to match capability to use case.

| Claude Variant | Release Date | Best For |
| --- | --- | --- |
| Sonnet 4.5 | 29 September 2025 | Balanced workloads and everyday reasoning |
| Haiku 4.5 | 15 October 2025 | High-speed, low-latency agent services |
| Opus 4.5 | 24 November 2025 | Deep reasoning, structured problem solving, enterprise scale |

All models are now available in Bedrock.

This staged rollout allows builders to choose based on workload type, rather than size labels or marketing tiers.

Kiro: AWS Clarifies the Developer Experience

Kiro started quietly as a preview back in June, but has since become a central part of Amazon’s developer experience strategy. It bridges specification, assisted build automation, and AI-powered engineering workflows in a way we have not seen from AWS before.

What changed this season:

  • Preview waitlist has been removed and sign-up is open
  • Kiro is now Generally Available
  • Kiro can be used via CLI as well as IDE
  • Agent workflows now support checkpointing, replay, and behaviour verification
  • Property-based testing based on specifications is now supported
  • Enterprise features introduced such as agent access control, shared context and governance
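For readers who have not met property-based testing before, the idea Kiro is adopting can be sketched in a few lines of plain Python: rather than asserting on hand-picked examples, you assert properties derived from the specification and check them against many randomly generated inputs (real frameworks such as Hypothesis add input shrinking and smarter generators):

```python
import random

# A hand-rolled sketch of property-based testing, for illustration only.

def dedupe(items):
    """The code under test: remove duplicates, preserving first-seen order."""
    seen, result = set(), []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

def check_property(prop, generator, runs=200):
    """Assert that a property holds across many generated inputs."""
    for _ in range(runs):
        case = generator()
        assert prop(case), f"Property failed for input: {case!r}"

gen = lambda: [random.randint(0, 9) for _ in range(random.randint(0, 20))]

# Properties come from the spec, not from specific examples:
check_property(lambda xs: len(set(dedupe(xs))) == len(dedupe(xs)), gen)  # no duplicates remain
check_property(lambda xs: set(dedupe(xs)) == set(xs), gen)               # no elements lost
print("all properties held")
```

Generating tests from specifications like this fits naturally with Kiro’s spec-first workflow, which is presumably why AWS chose it.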

Over 250,000 developers participated in the preview period. That is a significant level of engagement for a platform still finding its voice.


Infrastructure and Networking

VPC Encryption Controls: From Assumption to Evidence

For as long as I can remember, there has been a common belief (or perhaps a comforting assumption) that traffic moving inside a VPC was always encrypted. It felt logical. Surely AWS would not send packets across data centres in clear text. But while it may have been technically true in parts of the stack, there was never a clear way to verify it, let alone enforce it.


Now there is!

VPC Encryption Controls turn that long-held assumption into something you can actually confirm, measure and eventually enforce. It introduces two operating modes. Monitor mode lets you inspect whether traffic between resources in your VPC is truly encrypted in transit. Enforce mode blocks non-encrypted traffic once you have hardened workloads to comply.

It works within a single VPC or across VPCs in the same region, and it includes support for modern resource types such as Nitro-based EC2 instances, Fargate, Load Balancers and Transit Gateway. The real shift is not the encryption itself, but the enforcement and the visibility. You can now treat encryption-in-transit inside the VPC the same way you treat encryption-at-rest on S3, EBS or RDS. It becomes a controllable and auditable property of your architecture, not a silent assumption.
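The two modes amount to a simple decision rule, which the description above implies. This is not an AWS API, just a sketch of the logic to make the monitor-then-enforce rollout concrete:

```python
# Conceptual sketch of the two VPC Encryption Controls modes.

def evaluate_flow(mode, encrypted_in_transit):
    if encrypted_in_transit:
        return "allow"
    if mode == "monitor":
        return "allow-and-flag"  # visible in findings, traffic still flows
    if mode == "enforce":
        return "block"           # non-encrypted traffic is dropped
    raise ValueError(f"unknown mode: {mode}")

# Harden workloads in monitor mode first, then flip to enforce:
print(evaluate_flow("monitor", False))  # allow-and-flag
print(evaluate_flow("enforce", False))  # block
```

The practical sequence is the point: run in monitor mode until the flagged flows reach zero, then switch to enforce with confidence.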

Available in all commercial regions… no additional cost… and more importantly, it replaces rumour with control.


Billing and Invoicing: Small Changes, Useful Outcomes

Two billing-related updates were released on 19 November, both aimed at improving how customers and partners manage invoices, billing control and financial operations across AWS Organizations.


Get Invoice PDF API Now Generally Available

The Get Invoice PDF API allows customers to programmatically download AWS invoices using SDK calls. It accepts an AWS Invoice ID and returns a pre-signed Amazon S3 URL for immediate PDF download. This includes both invoice documents and any supplemental billing files.

For bulk retrieval, the List Invoice Summaries API can be used to gather invoice IDs for a billing period, which can then be processed through the Get Invoice PDF API. This supports automation for financial reconciliation, compliance reporting and invoice archiving.

The API is hosted in the US East (N. Virginia) Region and is available to customers in all commercial regions except China. Documentation is available through the AWS Cost Management API Reference.
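The bulk-retrieval flow can be sketched as a short script. The method names here assume boto3’s usual snake_case mapping of the API names in the announcement (`list_invoice_summaries`, `get_invoice_pdf`) and the response keys are my guesses, so verify both against the AWS Cost Management API Reference; the client is injected so the orchestration can be shown without calling AWS:

```python
# Sketch of the flow described above: List Invoice Summaries feeds
# invoice IDs into Get Invoice PDF, which returns a pre-signed S3 URL.
# Method and key names are assumptions based on the announcement --
# check the SDK documentation before relying on them.

def download_invoice_urls(client, billing_period):
    urls = {}
    summaries = client.list_invoice_summaries(BillingPeriod=billing_period)
    for summary in summaries["InvoiceSummaries"]:
        invoice_id = summary["InvoiceId"]
        result = client.get_invoice_pdf(InvoiceId=invoice_id)
        urls[invoice_id] = result["PresignedUrl"]
    return urls

class FakeInvoicingClient:
    """Stand-in for the real client, for illustration only."""
    def list_invoice_summaries(self, BillingPeriod):
        return {"InvoiceSummaries": [{"InvoiceId": "INV-001"},
                                     {"InvoiceId": "INV-002"}]}
    def get_invoice_pdf(self, InvoiceId):
        return {"PresignedUrl": f"https://s3.example.com/{InvoiceId}.pdf"}

urls = download_invoice_urls(FakeInvoicingClient(), "2025-11")
print(urls["INV-001"])
```

Swap the fake client for the real one and this becomes the basis of a monthly reconciliation or archiving job.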

Channel Partners Can Now Resell AWS Services Using Billing Transfer

Billing Transfer is now available for AWS Channel Partners in the AWS Solution Provider or Distribution programs. It lets partners assume billing responsibility for customer AWS Organizations, while customers retain full access and control of their AWS accounts.

Partners can centrally manage invoicing, payments and cost optimization for multiple customer organizations from a single partner management account. Partner program benefits and incentives are automatically applied to the consolidated bills, while customers still view their own billing details at partner-configured rates.

New APIs in Partner Central also support reporting, channel operations and incentive qualification through partner-owned systems. Billing Transfer is available in all public AWS Regions except AWS GovCloud (US) and the China Regions.

This change aligns AWS billing operations more closely with how many channel providers already deliver managed cloud services.


Late Additions While Writing

Just as I was finalising this post and getting ready to fly out for re:Invent, AWS released a couple of announcements on 26 November that stood out. Both feel like they were shaped by real-world lessons rather than roadmap timing.


Amazon Route 53: Accelerated Recovery for Public DNS Changes

This update introduces a recovery objective of up to one hour to regain DNS change capability if the control plane in US East (N. Virginia) becomes unavailable. In previous interruptions, Route 53 would keep resolving existing records, but you could not update or recreate DNS entries, even when you needed to reroute traffic or activate failover plans.

It is hard not to view this as a direct response to the recent DNS-related outage. The timing is telling.

This matters most to teams where DNS changes are part of deployment workflows, customer onboarding, and disaster recovery. Once enabled, you retain the ability to modify DNS records even if that primary AWS region is impaired. No extra cost.

What is most interesting is not the feature itself, but the acknowledgement that DNS configuration is now part of availability and resilience, not just administration.

Amazon S3 Block Public Access: Now Enforceable Across AWS Organizations

S3 Block Public Access can now be enforced centrally through AWS Organizations. Attach it to the organisation root or to specific OUs, and every current and new account inherits the policy automatically.

This feels long overdue. BPA has been one of the most important guardrails for years, yet it has only been enforceable at the account level until now. Most enterprises have treated it as an organisational standard, and AWS has finally caught up.

It includes CloudTrail visibility for policy attachment and enforcement. No additional cost. Available in all commercial regions.

Amazon CloudWatch: Deletion Protection for Log Groups

Amazon CloudWatch now supports deletion protection for log groups, designed to prevent accidental removal of critical operational and audit logs.

I spend a lot of time trying to delete things from CloudWatch. I generally prefer routing logs into S3 for both retention and cost efficiency. But CloudWatch has definite advantages when it comes to debugging, filtering and generating metrics. So while most of my logs are transient, I can fully appreciate the reality that some logs should be protected, not cleaned up.

This feature allows deletion protection to be enabled on any log group. Once turned on, it must be manually disabled before a log group can be deleted. This is particularly valuable for compliance records, audit trails and production logs that need to exist long after the event.

It is a small feature, but one that aligns CloudWatch more closely with how real operational practices work.

Some EKS updates also landed. No doubt someone is excited about them. That someone is not me.