Recap of AWS re:Invent 2022: An Honest Review
It’s that time of the year again: AWS re:Invent took place last week in Las Vegas, spanning a full week. It was good to be there with builders from around the world! Attendance was around 50k people; some even say 65k. It could be even more if you count the salespeople without a ticket!
In this post, we provide you with an honest review (see our 2021 coverage) to help you assess whether all those announcements should mean anything to you and to make sure you are not missing anything. Some companies have huge yearly conferences, and AWS has the largest one; announcements tend to cluster in that time frame, and there is much to digest. Don’t worry; we’ve got you covered with a critical AWS re:Invent review rather than a generic summary.
Whenever a new release is announced, the first thing you should do is understand the cost model, assumptions, and limitations, if any. There is no such thing as a free lunch or a magic formula to scale to millions or save money, especially in the cloud; despite all the cool-sounding announcements, the devil is always in the details.
This year, we were on the ground to cover the event. It’s well known that re:Invent is focused on first-time attendees and aims to inspire builders, and it was very good at that. Most keynotes start intriguingly, especially Werner Vogels’ Matrix-inspired video promoting why the world is asynchronous.
The good announcements are always saved for the keynotes; you can watch them here.
- Peter DeSantis focused on EC2 and the building blocks of the custom network protocols making EBS work at AWS hyper-scale.
- Adam Selipsky announced data-related improvements and interesting new services such as Omics, Supply Chain, and SimSpace Weaver that make you wonder why AWS is building such specific services.
- Swami Sivasubramanian’s focus was on data, and he announced a bunch of improvements to machine learning and database services.
- Ruba Borno hosted the AWS Partner keynote; there were a few announcements, but it was mostly a panel with multiple guests.
- Werner Vogels’ keynote was great, with a Matrix-inspired video entrance, arguing that the world is async and that applications should be async too because it’s natural. That can be hard to accomplish, because most of us have been writing sync code for years, but it pays off if you can. Werner also announced various EventBridge improvements and new services that you might find interesting.
Keynotes are amazing for getting inspired, both by new announcements and by guest speakers talking about their experiences, but some parts can feel like time-fillers.
We consider re:Invent announcements to start at the beginning of November, when the flood on the What’s New with AWS page picks up pace, so this post also covers announcements made before the event itself. Their impact will vary depending on how you use AWS, and some announcements make you wonder, “Why was this not in the first release at all?”, which is a huge topic of its own about how different AWS teams operate.
We divided the announcements into subsections so you can consume them more easily, because it’s easy to get lost and miss the important ones. For each of them, we also tried to provide as much context as we could so that you know why they exist and whether it makes sense to use them. Enjoy!
Have you ever been lost in the AWS Console searching for a resource? This new UI improvement searches for resources using free text across multiple regions. It can search the names and tags of resources and is pretty quick. However, there are some important things to be aware of. First, the supported services are pretty limited, but I’m sure they will quickly add others; it’s the AWS playbook of releasing good-enough functionality fast. Second, it takes up to 36 hours to do the initial indexing, which is a bit scary. Third, it does not work across multiple accounts, but there is no native AWS way of using multiple accounts at the same time anyway, so this is expected. If you want to learn more, read our earlier coverage. Alternatively, you can consider using Resmo. (That’s the only self-plug on this page.)
This is huge. The ability to schedule millions of one-time or cron jobs is an enabler. Until now, you had limited options to schedule cron jobs the AWS way. Of course, there are many enterprise schedulers like Quartz, but thankfully, nobody likes server maintenance anymore.
The first way was to use EventBridge rules, with major limitations on the number of rules you can configure and their throughput. It’s better than nothing, but if you wanted to configure cron jobs per customer or per workflow, you could easily hit the limits.
The second common way is to use SQS with visibility timeouts to mimic a cron. However, the maximum timeout is 15 minutes, so for a daily cron, you would have to re-enqueue a message 96 times until the execution time comes, and you would also need to handle the edge cases.
This feature, on the other hand, is scalable (1 million soft limit & 1000 TPS, wow!), supports time windows and one-off events, which would probably satisfy all your needs, and you won’t need another solution.
The third, unusual way would be to leverage EKS to configure an unlimited number of CronJobs that send messages to SQS; you would pay a fixed $72 per month as AWS handles master and etcd node scaling transparently. It would probably be considered abuse, but we have waited so long for this feature to land that we even considered such a bizarre solution.
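To make the new option concrete, here is a minimal sketch of what a one-off schedule could look like. The schedule name, ARNs, and payload are illustrative; you would pass this dict to boto3's `client("scheduler").create_schedule(**params)`.

```python
# Sketch: parameters for a one-off EventBridge Scheduler schedule that fires
# once and delivers a message to an SQS queue. All names and ARNs below are
# hypothetical examples.
params = {
    "Name": "send-reminder-42",                       # hypothetical schedule name
    "ScheduleExpression": "at(2023-01-15T09:00:00)",  # one-off; cron() and rate() also work
    "FlexibleTimeWindow": {"Mode": "OFF"},            # fire exactly on time
    "Target": {
        "Arn": "arn:aws:sqs:us-east-1:123456789012:reminders",
        "RoleArn": "arn:aws:iam::123456789012:role/scheduler-to-sqs",
        "Input": '{"customerId": "42"}',
    },
}
```

Compare this to the SQS re-enqueue dance above: one API call per job, and deletion after the one-off fires can be handled for you.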
Lambda cold starts have been a huge problem since the service was announced, and they continue to be, despite all of the new announcements. In the last few years, a major source of cold starts was resolved when provisioning a network interface for Lambda functions running in VPCs stopped taking ages. There have also been significant improvements in the speed of code distribution, mostly required once Lambda started to support Docker images of up to 10 GB.
However, for languages like Java, cold starts are still a problem. Although the infrastructure-related problems are mostly solved, Java as a language is still slow on initial startup. Even on a beefy machine, loading classes can take more than 1-2 seconds, and Lambda's restrictive environment does not help; it is easy to experience 10-second cold starts, which makes using Java for user-facing interactions a pain.
To solve slow cold starts, AWS announced SnapStart for Java functions. In short, it initializes the Lambda function once, saves the state as a snapshot, and restores it instead of doing the initialization again. The feature builds on Firecracker's snapshot capability, and it has been promising for many users so far, driving cold starts down from 6 seconds to 200 milliseconds!
However, you need to be aware of the requirements and limitations. For instance, as memory is restored from a previous execution, all network connections must be presumed dead. Another is that any ephemeral data you downloaded must be assumed lost. The most important one is ensuring uniqueness, which matters most for encryption; Amazon Linux, Java’s SecureRandom, and OpenSSL in Lambda are snap-resilient. You also cannot use it with Graviton Lambdas or with functions that use EFS.
The SnapStart feature is available only for the java11 runtime. Although it makes use of Firecracker’s capability, the other runtimes are not supported yet; the uniqueness issue is important for security and might need to be verified extensively for them. Then again, the runtime that needs SnapStart most is Java, and it can even make Spring Boot Lambda functions fun to use now!
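Enabling it is a one-flag configuration change. As a sketch (the function name is illustrative), you would pass this to boto3's `client("lambda").update_function_configuration(**params)` and then publish a new version, since snapshots are taken per published version:

```python
# Sketch: turning SnapStart on for an existing Java 11 function.
# "orders-api" is a hypothetical function name.
params = {
    "FunctionName": "orders-api",
    "SnapStart": {"ApplyOn": "PublishedVersions"},  # snapshot when a version is published
}
```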
This looks like a very technical release if you read only the title, but the details in the keynote were amazing. As most EC2 instances use non-local network disks (EBS), there are latency and bandwidth issues in accessing the data, especially under congestion. This release improves the existing ENA (Elastic Network Adapter) by replacing TCP with AWS SRD (Scalable Reliable Datagram) to achieve multi-path transmission, spraying packets through the network and resulting in much lower latencies, even at the 99.9th percentile.
As always with a new and innovative feature, there is an important caveat: although using ENA Express is free, it is initially supported only on the c6gn.16xlarge instance, which costs $2.7648/hour.
If you have integrated your AWS accounts with security tools, you might have noticed that almost all of them ingest your logs from various systems such as CloudTrail, WAF, ELB, and S3 into their own systems, duplicating the data. And there is no standard: some read directly from S3, some require setting up SQS notifications or even Firehose destinations.
Amazon Security Lake brings a standard to this mess, normalizing data from various sources with the Open Cybersecurity Schema Framework (OCSF) in an S3 bucket, where other tools can use it. It looks like they can either access the raw data in S3 or query it with Athena and Lake Formation using cross-account access. You can also import data from external services and internal application logs. Although Security Lake does not solve every efficiency problem by itself, bringing a standard to this space is a welcome step.
This new service is a fully integrated development environment, meaning that it manages source code, issues, CI/CD, and even a Kanban board. Suddenly, AWS becomes a competitor to Atlassian and GitHub. You might ask why AWS would build a service so similar to the CodeStar suite, and you would be right to. However, CodeStar is a combination of various services glued together, and in my experience, that never ends well, because you need to know each service's details and go to individual service dashboards to track builds, deployments, and pull requests.
Instead, CodeCatalyst is a completely integrated platform. You define your project using a devfile, code hosting, issues, and pipelines are bundled together. It also allows you to spin up developer environments based on your scaling requirements, and they are compatible with IDEs like VSCode and IntelliJ suite. It’s possible that it makes use of the already existing services underneath. One major problem we faced with CodeBuild is that provisioning a machine takes around 90 seconds on average, which can be annoying, considering other commercial services take only a few seconds. A good thing about Catalyst is that you don’t have to use the bundled issue tracker if you have already invested in Jira and the others.
This new service makes service-to-service communication (HTTP) across different VPCs and accounts easy. It exposes the connections to ECS containers, EC2 instances, and even serverless workloads, without requiring you to set up VPC peering, shared VPCs coupled with load balancers, or sidecars that install hundreds of things in your Kubernetes clusters. Of course, you could have made a better network design to avoid all of these issues in the first place, but complexity adds up and becomes unmanageable with growth and acquisitions, and VPC Lattice is a welcome service in that case.
Cost-wise, it does not look so bad; however, as always, you might want to run the numbers yourself before going in guns blazing and deleting your VPC peerings and Transit Gateways.
Verified Access is AWS’s take on zero-trust. In short, it allows you to attach an endpoint to a VPC, even private subnets, for your employees to access protected services easily. Verified Access integrates with Identity and Security providers like CrowdStrike, JumpCloud, Okta, and JAMF so that you can write your own policies in Cedar policy language.
The configuration of Verified Access is most similar to Client VPN, which wraps an OpenVPN server with AWS ACM and a public endpoint. However, Verified Access only works with HTTP protocol, whether the endpoint is ALB or NLB.
Although the pricing page says there is no fixed price, you need to associate an application, and pricing is based on each associated application per hour, whether you use it or not, in addition to consumed bandwidth. Having even one application associated starts at almost $200/month. Compare that to Client VPN pricing, which costs $72/month at minimum.
People used to disassociate Client VPNs from subnets when not in use to save money; however, it takes a significant amount of time for them to come back online. You can go the same way with Verified Access, too, but even if it started instantly, it would result in a bad user experience, since you would need to automate setting up and tearing down associations. Never forget to consider how much dev time is lost and how much inconvenience is introduced when making these calculations!
Create Point-to-Point Integrations Between Event Producers and Consumers with Amazon EventBridge Pipes
This new feature allows you to create serverless pipes, similar to how Linux pipes work (per Werner’s keynote), by gluing together multiple services. An EventBridge Pipe consists of a source, optional filtering, optional enrichment, and, finally, a target. Today, it supports 15 services and HTTP endpoints, including SQS, DynamoDB, and Kinesis streams. This new feature relieves you from writing tiny glue functions (not to be confused with the Glue service) just to call other services.
The filtering capabilities are powered by open-source Event Ruler, which is extensive enough for most cases. Filtered events do not cost you extra. Optionally, you can write enrichment and advanced filtering functions with Lambda, Step Functions, or external APIs.
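To give a feel for how Event Ruler-style patterns behave, here is a deliberately simplified matcher: pattern leaves are lists of accepted values, and nesting mirrors the event's structure. The real engine supports far more (prefix, anything-but, numeric ranges); this only illustrates the shape.

```python
def matches(pattern, event):
    """Simplified EventBridge-style pattern matching: every key in the
    pattern must exist in the event, and each leaf value must be one of
    the listed candidates. Not the real Event Ruler, just its shape."""
    for key, cond in pattern.items():
        if key not in event:
            return False
        if isinstance(cond, dict):
            # nested pattern: recurse into the matching sub-object
            if not isinstance(event[key], dict) or not matches(cond, event[key]):
                return False
        elif event[key] not in cond:  # cond is a list of accepted literals
            return False
    return True

pattern = {"detail": {"state": ["FAILED", "TIMED_OUT"]}}
matches(pattern, {"detail": {"state": "FAILED"}})     # True
matches(pattern, {"detail": {"state": "SUCCEEDED"}})  # False
```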
Updating a CloudFront distribution always gave me chills, because one misconfiguration could mean downtime for all users. Also, you cannot have two CloudFront distributions with the same domain, so you cannot make a copy and use weighted routing in Route 53.
This new feature gives you a staging distribution, the same concept described above, but with more power than DNS alone. It can shift clients to the staging distribution based not just on weights but also on headers, so you have tighter control over who sees the latest changes. Before, we had to clone an existing distribution with a different domain just to test TLS upgrades. This is a very welcome change!
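As a sketch of the header-based routing, here is roughly what the traffic config of a continuous deployment policy looks like; you would wrap it in a call like boto3's `client("cloudfront").create_continuous_deployment_policy(...)`. The staging DNS name and header value are illustrative.

```python
# Sketch: route only requests carrying a specific header to the staging
# distribution; everyone else keeps hitting the primary one.
policy_config = {
    "StagingDistributionDnsNames": {
        "Quantity": 1,
        "Items": ["d111111abcdef8.cloudfront.net"],  # example staging distribution
    },
    "Enabled": True,
    "TrafficConfig": {
        "Type": "SingleHeader",  # "SingleWeight" shifts a percentage instead
        "SingleHeaderConfig": {
            "Header": "aws-cf-cd-canary",  # custom headers use the aws-cf-cd- prefix
            "Value": "beta-testers",
        },
    },
}
```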
AWS Step Functions launches large-scale parallel workflows for data processing and serverless applications
It was mentioned in Werner Vogels’ keynote that one customer was using Step Functions to emulate MapReduce, so AWS turned it into an improvement to Step Functions. This new feature lets you scan an S3 bucket for thousands of files and invoke your custom logic to process images, log files, etc. Although you could have done this on your own by calling the S3 and Lambda APIs, Step Functions is already integrated with 220 services, manages concurrency and errors for you, and provides visibility.
You’ll see in the later parts of the blog post why we made this distinction of serverless.
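For the curious, here is a sketch of what a distributed map state looks like in Amazon States Language (shown as a Python dict). The bucket, prefix, and Lambda ARN are illustrative.

```python
# Sketch: a Map state in DISTRIBUTED mode that fans out over S3 objects
# and runs a Lambda function per object. Names and ARNs are examples.
map_state = {
    "Type": "Map",
    "ItemReader": {
        "Resource": "arn:aws:states:::s3:listObjectsV2",
        "Parameters": {"Bucket": "my-log-archive", "Prefix": "2022/11/"},
    },
    "ItemProcessor": {
        "ProcessorConfig": {"Mode": "DISTRIBUTED", "ExecutionType": "STANDARD"},
        "StartAt": "ProcessFile",
        "States": {
            "ProcessFile": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process-log",
                "End": True,
            }
        },
    },
    "MaxConcurrency": 1000,  # far beyond the inline map's old limit of 40
    "End": True,
}
```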
Until now, you could already filter SNS events on message attributes for your custom routing needs. This new release, similar to AWS EventBridge Pipes filters, allows you to filter on the message content directly. The most important improvement is that payload-based policies support property nesting, unlike attribute-based policies.
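As a sketch (the field names are illustrative), a payload-based policy with nested properties looks like this; you would set it as the subscription's attributes, with the scope switched from attributes to the message body:

```python
import json

# Sketch: an SNS filter policy that matches on nested properties inside
# the message body. "order", "status", and "total" are hypothetical fields.
filter_policy = {
    "order": {
        "status": ["SHIPPED", "DELIVERED"],   # nested property matching
        "total": [{"numeric": [">=", 100]}],  # numeric conditions also work
    }
}
subscription_attributes = {
    "FilterPolicyScope": "MessageBody",  # the new payload-based mode
    "FilterPolicy": json.dumps(filter_policy),
}
```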
As Werner promoted in the keynote, async is natural, and we should all be writing async, event-driven applications. However, debugging them is a huge mess. If you are doing observability with X-Ray, this new feature lets you trace SQS and Lambda together.
This feature adds more string-matching functionality to filtering, so that you don’t have to write Lambda functions just to filter data before sending an event somewhere else, relieving you of both cost and maintenance.
This feature inspects (ba-dum-tss) the dependencies used in an AWS Lambda function to detect vulnerabilities. You might have detected them in your build & deployment pipelines; this new feature ensures you also catch the newer vulnerabilities discovered after your deployment.
AWS is sometimes criticized for not contributing enough to open source, causing many vendors to adopt business licenses to prevent AWS from packaging their software without giving back. However, AWS also builds great open-source products with huge impact and contributions, including Firecracker, and new, important projects were announced this year at re:Invent.
This is an interesting open-source project by AWS with three major use cases. First, it manages which extensions can be installed by whom, making cluster management easier. Second, it allows installing an extension without needing to access the filesystem, which is the case for all managed Postgres providers. Finally, it allows you to create extensions in trusted languages without forking the Postgres database. Although there are already 85 extensions, this makes writing new ones much easier. I just hope this does not cause Postgres to turn into a monolith. However, considering some exploits leveraging extensions, it’s a welcome addition and will make extending Postgres functionality on Aurora and RDS much easier.
This new open-source tool eases container development. There have been other alternatives since Docker Desktop became commercial for companies with $10M ARR. This tool makes it very easy to spin up VMs on a Mac, ready to run and build containers. It makes use of other open-source projects such as Lima, nerdctl, containerd, and BuildKit. It’s compatible with Dockerfile and docker-compose.yml, so you will feel at home with your existing tools.
A good side effect of this announcement: I discovered Lima, a tool to quickly create virtual machines on a Mac. Although containers are good and all, sometimes all I need is a good old Linux virtual machine. Multipass is an alternative, but it only supports Ubuntu, and I hate Snap packages.
This new feature adds push-down capability to S3 when using Trino, an open-source fork of Presto (with the usual open-source drama involved), making S3 more than just blob storage by exploiting its computation capabilities. As Athena is based on Presto, I wonder whether this feature already exists for Athena as well, but I cannot find a reference.
Starting this year, we have seen a trend of AWS services that are called serverless but cannot scale to zero, which has split the online community in two: those who are cost-sensitive and try to stay within free tiers, not just on AWS but on Vercel, Upstash, Cloudflare, and others; and those who seek almost-infinite scalability. The war has not settled, and whether these services can be considered serverless is undecided. One side argues that you do not configure instance types and counts; hence, it's serverless. The other side says that if it can't scale down to zero, it can't be serverless.
If you are configuring a capacity unit corresponding to CPUs and memory but not actual instance types, and it cannot be zero, is it just servers in disguise? One can argue that DynamoDB is also not serverless if you only think of capacity units, but Dynamo capacity units translate to reads and writes per second, not CPUs. DynamoDB also allows on-demand mode with cost amortized to reads and writes instead of actual capacity and does not have a minimum base price.
Nevertheless, when wrapping existing engines like OpenSearch or DocumentDB (Aurora Postgres under the hood), you might not have the option to scale to zero capacity because of the startup cost and background threads that need to run. Also, if the ability to add capacity when you need it matters more to you than scaling to zero, these services make sense.
This service also manages indexes and its own dashboards, in addition to regular OpenSearch. However, the minimum you can configure is 4 OCUs (OpenSearch Compute Units: 6 GB RAM + 1 vCPU + a GP3 disk), and considering 1 OCU costs $0.24/hour, you are looking at an almost $700 monthly bill, which does not make sense for small workloads. I’ve not seen a benchmark yet, but if it can scale much better than regular OpenSearch, it might be worth a shot if you regularly have spikes that hog your clusters and cannot scale fast enough.
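A quick sanity check of that floor price, assuming a 730-hour month:

```python
# Minimum OpenSearch Serverless compute bill: 4 OCUs around the clock
# at $0.24 per OCU-hour, before any storage charges.
min_ocus = 4
price_per_ocu_hour = 0.24
hours_per_month = 730
monthly_minimum = min_ocus * price_per_ocu_hour * hours_per_month  # ≈ $700.80
```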
To be honest, this title confused me at a glance. Is it DocumentDB, MongoDB, or Elasticsearch? Are we there again? Embarrassingly, after a considerable while, I understood it is similar to the previous debates: the new capability is about configuring capacity in a serverless way instead of via instance types and counts. The good thing is that it does not have Serverless in the name, avoiding confusion, because instead of a capacity unit, we now have total vCPU-based pricing aimed at handling spiky workloads. However, it comes with significant limitations that you need to be aware of.
New Instance Types
With re:Invent, there are always new instance types. The new instance-type announcements on keynotes resemble Apple’s announcements, claiming 2x improvements on various verticals. Of course, as with everything in computing, the outcome will depend on your usage. That being said, there is a new instance type for HPC workloads, called Hpc6id, that is focused on performance and inter-node communications.
There is also a new Inf2 instance type focused on demanding deep learning applications, which improves not only performance characteristics but also allows calling out to other nodes for scalability, supporting ultra-large models. The new Intel-based instance types are not as exciting as the Graviton-based ones AWS makes itself, and mostly address supply-chain availability; still, we should note that R7iz instances are the first Intel-based ones to support DDR5 memory and have the highest per-core performance.
There are new regions in Switzerland, Spain, and Hyderabad, bringing the total number of regions to 30! All of them are available now. Keep in mind that new regions start with a reduced number of services and usually higher costs, so be sure to check the documentation before jumping in.
Organizations & Delegation
It’s always nice to see multi-account use cases getting some love at AWS. Although it’s nowhere near as good as Azure or GCP for easy multi-account or project segmentation, it’s getting much better. Earlier this year, we even got an API call to delete an account! At this re:Invent, there were various improvements around delegating responsibilities to other accounts, so you don’t have to use the root account for certain tasks. There were also improvements to Control Tower.
Amazon VPC Reachability Analyzer now supports network reachability analysis across accounts in an AWS Organization
If you work at a relatively large organization, you might wonder who architected this network monstrosity; it usually happens due to hyper-growth or acquisitions. This new release for VPC Reachability Analyzer lets you debug why your packets are not received by a different account in the organization.
This feature saves you from navigating to the AWS Config page for the rules that are not managed by Control Tower, so you can save a few seconds.
This new feature lets users programmatically implement controls across multi-account environments to enforce the least privilege, restrict network access, and enforce data encryption.
AWS IAM Identity Center now supports session management capabilities for AWS Command Line Interface (AWS CLI) and SDKs.
You can now limit the session duration from 15 minutes up to 7 days for credentials acquired using AWS IAM Identity Center, whether via the console or the CLI. It used to be static at 8 hours, which was not ideal for compliance and security reasons.
This gives member accounts flexibility to manage CloudTrail administrative actions even though they are centrally managed using Organizations, without disrupting the flow for all accounts.
This allows administrators to delegate management of different business units without using the root management account.
There have also been updates that are not as significant as the above and might not appeal to everyone, but each of them will help you delete some code, relieve years of pain, or introduce minor but impactful functionality. Some of them will just leave you wondering why they were not in the initial release already. Let’s hop on:
If you connect your BI tool to Athena, you are probably very rich, because with each page visit or refresh, those bulky OLAP queries run for a long while and cost you many dollars to show a probably slightly-changed result. This release finally lets you reuse the results of certain queries, so dashboards run both fast and affordably.
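As a sketch (the query and workgroup are illustrative), result reuse is a single extra parameter on the query; you would pass this dict to boto3's `client("athena").start_query_execution(**params)`:

```python
# Sketch: run an Athena query but serve a cached result if one younger
# than 60 minutes exists, instead of rescanning the data.
params = {
    "QueryString": "SELECT status, count(*) FROM access_logs GROUP BY status",
    "WorkGroup": "primary",
    "ResultReuseConfiguration": {
        "ResultReuseByAgeConfiguration": {"Enabled": True, "MaxAgeInMinutes": 60}
    },
}
```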
We love Redis for its simplicity. For a long while, it was single-threaded to keep it simple and free of most of the issues parallelism brings, but that also meant it could not utilize all cores, and you paid for idle CPUs. Starting with Redis 5.0.3, ElastiCache could utilize more cores for networking, which is usually the bottleneck, and the results were promising; Redis itself introduced the feature in version 6. With this new Redis 7 release, you can take advantage of custom functions (if you are willing to learn Lua), improved ACLs, and sharded Pub/Sub support for scaling your cluster. So, even though Redis started and tried to stay simple, changes like these are inevitable with demanding scale and operations.
If you missed it, MemoryDB for Redis is a Redis-compatible, in-memory service focused on scalability and fast recovery, enabled by a transaction log. However, it turns out memory is expensive, and this release adds support for data tiering in MemoryDB, meaning it makes use of the disk for rarely-used keys to lower cost, but only if 20% of your dataset is accessed regularly. Don’t try to test it, though!
This is an exciting development for MySQL-based RDS databases. It allows you to fork a running database into a staging one that is kept in sync, perform schema changes and upgrades without impacting production, and promote the staging database to production when you are done, similar to PlanetScale's features. It will make complex deployments easy. I hope support is extended to Postgres soon.
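As a sketch of the flow (the names, ARN, and version are illustrative), creating the green copy is one call to boto3's `client("rds").create_blue_green_deployment(**create_params)`, and a separate `switchover_blue_green_deployment` call later promotes it:

```python
# Sketch: fork the production instance into a synced green copy with a
# newer engine version applied to the green side only.
create_params = {
    "BlueGreenDeploymentName": "orders-db-upgrade",
    "Source": "arn:aws:rds:us-east-1:123456789012:db:orders-db",
    "TargetEngineVersion": "8.0.31",  # tested on green before the switchover
}
```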
Amazon Virtual Private Cloud (VPC) now supports the transfer of Elastic IP addresses between AWS accounts
This feature is especially useful in migration or M&A cases where you want to retain the same IP addresses. Until this release, all you could do was play roulette: release the IP and try to get the same one from the other account, with a very low win rate.
AWS Lambda announces Telemetry API, further enriching monitoring and observability capabilities of Lambda Extensions
I’m glad that AWS Lambda is getting more love; this release adds better support for extracting traces and metrics natively, without resorting to parsing logs with the Logs API as before.
If you are handling asynchronous tasks, possibly from a queue, that take considerable time, and you push a new image, ECS would just kill your computation, since it does not have the outstanding-request information an ALB would have. This release adds support for protecting your tasks from scale-in events or new deployments while they are in the middle of a huge computation you don’t want to restart.
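As a sketch (cluster name and task ARN are illustrative), a worker can mark itself protected via boto3's `client("ecs").update_task_protection(**params)`; note that the ECS API uses lower-camel-case parameter names:

```python
# Sketch: protect a long-running task from scale-in and deployments for
# up to two hours; the protection lapses automatically as a safety net.
params = {
    "cluster": "workers",
    "tasks": ["arn:aws:ecs:us-east-1:123456789012:task/workers/abc123"],
    "protectionEnabled": True,
    "expiresInMinutes": 120,
}
```

The task would typically call this when it picks up a job and clear the protection (set `protectionEnabled` to `False`) when the job finishes.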
AWS has had a time sync service for internal use on EC2 with Chrony (what a cute name) and is now offering its NTP server to the public. So if everything you use is tied to AWS, it might make sense to switch to AWS's NTP server as well.
Simplifying Amazon EC2 instance type flexibility with new attribute-based instance type selection features
Until last year, you had to specify exact instance types for your Auto Scaling groups, meaning you had to be aware of new instance type releases, keep track of how many instance types there are, and try not to have a heart attack. This new release adds more specific attributes to select by, because the characteristics of the CPUs might differ a lot depending on the frequency and the generation.
You should not be using static passwords anywhere for security, and I’m glad IAM authentication exists for Redis now.
It looks like you can now store and restore AMIs of up to 5 TB in S3, if you are among the 0.1% of AWS customers who need it. The limit was 1 TB, which was already huge. But please don’t run EC2 instances with terabytes of data in their images.
Amazon SQS announces attribute-based access control (ABAC) for flexible and scalable access permissions
If a service announces attribute-based access control (ABAC), as soon as possible, you should revisit your permission model to see if you can make use of it. It’ll pay in the end! I’m glad this trend is spreading to many more AWS services each day.
I did not know what JA3 even was. It’s apparently the fingerprint of a TLS Client Hello packet, which you can use to identify malicious actors.
This should already have existed in the initial release: copying the tags along with the copied AMI. At least I’ll get to delete some code.
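For reference, the code to delete shrinks to a single flag. As a sketch (names and IDs are illustrative), you would pass this dict to boto3's `client("ec2").copy_image(**params)`:

```python
# Sketch: copy an AMI across regions and carry its tags over, instead of
# listing the source tags and re-applying them afterwards.
params = {
    "Name": "orders-ami-us-west-2",
    "SourceImageId": "ami-0123456789abcdef0",
    "SourceRegion": "us-east-1",
    "CopyImageTags": True,  # the new flag this announcement adds
}
```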
This has been a long-requested feature. AWS CloudFormation's service and feature coverage is usually disappointing; some features take years to become available in CloudFormation. This new release allows you to manage your accounts, organizational units, and policies with CloudFormation.
Similar to CloudFormation, the coverage of AWS Config-supported services is increasing gradually, but unfortunately, not all services have their resources available in AWS Config. And using Config is always a cost surprise: you can never estimate your usage without enabling it and seeing for yourself. See our previous coverage on AWS Config if you are not already familiar with it.
Amazon EC2 enables easier patching of guest operating system and applications with Replace Root Volume
This feature allows replacing the root volume of a running instance. It might sound bizarre at first, but it lets you patch the underlying image while keeping all configuration, such as the IP address and IAM settings. It still needs to reboot your instance, though; otherwise, it would be a huge surprise for the application in memory, possibly resulting in corruption and unexpected behavior.
WAF can be used to block/allow certain countries, mostly for compliance reasons. Now there are more granular details you can use for that decision, down to specific regions, based on ISO 3166 codes. So, you can block users from Texas if you want for some reason.
This is an in-preview service that allows you to implement central policy management for your custom applications. It makes use of the Cedar policy language and the same model as IAM: PARC (Principal, Action, Resource, and Condition). It will make it easier to implement fine-grained authorization in your applications instead of writing custom code.
There have also been open-source alternatives in this area for a while, including Cerbos, which has a similar model; Permify, based on Zanzibar; and Oso. It’s a growing area, and the real purpose of tools like these is to lift authorization logic out of individual services and centralize it, both for ease of use and for better auditing capabilities.
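To make the PARC model concrete, here is a sketch of the shape of an authorization check; the policy store ID, entity types, and IDs are illustrative. You would pass this dict to boto3's `client("verifiedpermissions").is_authorized(**params)`.

```python
# Sketch: "can user alice view the photo vacation.jpg?" expressed in
# PARC terms against a Verified Permissions policy store.
params = {
    "policyStoreId": "PSEXAMPLEabcdefg111111",
    "principal": {"entityType": "PhotoApp::User", "entityId": "alice"},
    "action": {"actionType": "PhotoApp::Action", "actionId": "viewPhoto"},
    "resource": {"entityType": "PhotoApp::Photo", "entityId": "vacation.jpg"},
}
# The response's decision field comes back as ALLOW or DENY, evaluated
# against the Cedar policies stored centrally.
```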
That’s an interesting new feature. It prevents 16 KiB write operations from being torn when the operating system crashes or power is lost. With it, you can disable the double-write functionality on databases, which exists to ensure data is really written, and get improved performance. It works for local disks and EBS volumes, as well as RDS databases.
This release makes the data in Aurora available to Redshift immediately without needing a complex and wasteful ETL pipeline so that you can do more deep dive analysis for a reduced cost.
This release adds support for KMS keys whose key material is in complete customer control, mostly for the audit and compliance requirements of the most paranoid customers out there, who can no longer use KMS as an excuse to avoid the cloud.
There are many vendors selling services and software in the AWS Marketplace, and this new release allows vendors to prove their security and compliance posture within the Marketplace. It makes use of AWS Config and Audit Manager assessments. With these insights easily available, you can reduce the time to procure services and software.
This was a keynote announcement, but it's a very specialized service. SimSpace Weaver runs large spatial simulations and can scale them across many nodes for quicker turnaround.
New Amazon S3 Multi-Region Access Points failover controls enable active-passive configurations and customer-initiated failovers
This new feature lets you use a single endpoint for multi-region S3 buckets with automatic or manual failover for your multi-region workloads, so you don't have to fiddle with application settings during maintenance or outages.
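A customer-initiated failover boils down to shifting the routing "traffic dial" between regions. The sketch below builds the route-update payload you would submit for a Multi-Region Access Point; the exact field names are an assumption based on the announcement, so check the `s3control` API reference:

```python
# Sketch of an active-passive failover: drain the failed region and
# promote the standby by updating traffic dial percentages.
# Field names are assumed from the announcement; verify against boto3 docs.
def failover_routes(active_bucket: str, passive_bucket: str,
                    active_region: str, passive_region: str) -> list[dict]:
    return [
        {"Bucket": active_bucket, "Region": active_region,
         "TrafficDialPercentage": 0},    # drain the failed region
        {"Bucket": passive_bucket, "Region": passive_region,
         "TrafficDialPercentage": 100},  # promote the standby region
    ]

routes = failover_routes("app-bucket-use1", "app-bucket-euw1",
                         "us-east-1", "eu-west-1")
```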
If you are one of those companies that lost control of your S3 buckets, and dumps of data have been sitting in them unsupervised, this new feature can detect sensitive data such as PII or credentials and gives each of your S3 buckets a score. 11 out of 10 security professionals love scores, because scores give them a priority order for addressing issues. DSPM (data security posture management) is a new and exciting area, and AWS wants to make it easy for you.
This feature uses the local SSD disks on RDS instances to place temporary tables. You might think you don't use temporary tables, but hash joins, CTEs, sorts, and big joins all create them behind the scenes.
This new feature lets you connect multiple services to each other without thinking about load balancers and security groups. It's available for both Fargate and EC2-based ECS clusters, and it can even distinguish whether a service is a client, a server, or both. With plain load balancers, services are just exposed to each other; with Service Connect, connected services become aliases you can reach without worrying about the infrastructure supporting them.
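As a sketch, this is roughly the `serviceConnectConfiguration` fragment you would attach when creating an ECS service; the namespace, port name, and alias below are invented examples, and the field names should be double-checked against the ECS API:

```python
# Sketch of an ECS serviceConnectConfiguration fragment (names invented).
# The portName must match a named port mapping in the task definition;
# clients in the same namespace reach the service via the dnsName alias.
def service_connect_config(namespace: str, port_name: str,
                           dns_name: str, port: int) -> dict:
    return {
        "enabled": True,
        "namespace": namespace,  # Cloud Map namespace shared by the services
        "services": [
            {
                "portName": port_name,
                "clientAliases": [
                    {"port": port, "dnsName": dns_name}  # what clients resolve
                ],
            }
        ],
    }

cfg = service_connect_config("internal", "api", "api.internal", 8080)
```

A client-only service would set just `enabled` and `namespace`, omitting the `services` list.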
If your throughput is spiky and you cannot predict the usage, this new mode allows elastic throughput instead of static configuration.
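Opting in is a single parameter at file system creation (or via an update). A minimal sketch of the `efs.create_file_system` parameters, with the creation token being an arbitrary example value:

```python
# Sketch of create_file_system parameters using the new elastic mode.
# Elastic throughput scales with the workload and is billed per use,
# instead of a statically provisioned or burst-credit configuration.
def elastic_fs_params(token: str) -> dict:
    return {
        "CreationToken": token,               # idempotency token (example value)
        "PerformanceMode": "generalPurpose",
        "ThroughputMode": "elastic",          # the new mode
        "Encrypted": True,
    }

fs_params = elastic_fs_params("demo-fs")
```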
This release significantly improves EFS latency for frequently accessed data and small writes. There have been important improvements to EFS at this year's re:Invent, so you might want to revisit it for your use cases.
Amazon Elastic File System introduces 1-Day Lifecycle Management Policy to help customers reduce costs for cold data sets
If you use EFS to hold short-lived data for your workloads, you can use lifecycle policies to transition cold files to the cheaper Infrequent Access class after a certain time. It now supports a 1-day period, compared to the previous minimum of 7 days.
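A small sketch of the `LifecyclePolicies` payload for `efs.put_lifecycle_configuration`, assuming the documented set of supported transition periods:

```python
# Build the LifecyclePolicies payload for put_lifecycle_configuration.
# The allowed set of day values is an assumption based on the EFS docs.
def lifecycle_policies(days: int) -> list:
    allowed = {1, 7, 14, 30, 60, 90}
    if days not in allowed:
        raise ValueError(f"unsupported transition period: {days}")
    # Enum is singular for 1 day (AFTER_1_DAY), plural otherwise.
    suffix = "S" if days > 1 else ""
    return [{"TransitionToIA": f"AFTER_{days}_DAY{suffix}"}]

policy = lifecycle_policies(1)  # the new 1-day minimum
```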
You can lock up the data in AWS Backup and prevent its deletion, even by lifecycle policies, for legal purposes. Some industries mandate this, and being able to provide evidence with just a configuration setting is a godsend for them.
Amazon EBS launches Rule Lock for Recycle Bin to prevent unintended changes to Region-level retention rules for Snapshots and AMIs
With releases like this, I wonder how many customers had to lose data before this bubbled up in the service teams' backlogs. There was already a Recycle Bin for accidental deletes that you might think would cover most cases; now there is an additional lock that prevents changing the Recycle Bin's retention rules.
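The lock itself is configured with an unlock delay, so even an intentional unlock only takes effect after a waiting period. A sketch of the parameters for the Recycle Bin `lock_rule` call, with the rule identifier being a made-up example and the payload shape an assumption from the announcement:

```python
# Sketch of rbin.lock_rule parameters (shape assumed from the announcement).
# After an unlock request, rule changes only take effect once the delay
# elapses, giving time to catch a malicious or accidental unlock.
def lock_rule_params(rule_id: str, unlock_delay_days: int) -> dict:
    return {
        "Identifier": rule_id,  # example identifier, not a real rule
        "LockConfiguration": {
            "UnlockDelay": {
                "UnlockDelayUnit": "DAYS",
                "UnlockDelayValue": unlock_delay_days,
            }
        },
    }

lock = lock_rule_params("rule-0123456789abcdef0", 7)
```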
This is a UI-based, collaborative application prototyping and development tool in which you can drag & drop Lambda functions, DynamoDB tables, SQS queues, and other services, and compose them visually. It can be a great starting point for most applications. It also generates a CloudFormation template so you can continue building your application locally.
This new feature detects suspicious logins to your Aurora databases. That's it. Not sure what I expected; even if you segment your network perfectly and use IAM authentication, exploits can happen anywhere. It doesn't hurt to enable it if you already use GuardDuty, since the volume of login events should not be high enough to inflict noticeable additional cost.
It's yet another way of sharing data in S3 buckets with other accounts without needing to copy it. I remember hearing about similar functionality from other services, but I'm not exactly sure. Here is one from this re:Invent.
The minimum interval for automatic rotation of secrets used to be 1 day; now it's 4 hours. One wonders what is special about 4 hours, though. Why not 1 hour, or even less? Anyway, it's a good change; rotating secrets frequently helps.
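The schedule is expressed with a rate (or cron) expression in the secret's rotation rules. A minimal sketch of the `RotationRules` payload for `secretsmanager.rotate_secret`, enforcing the new 4-hour floor:

```python
# Build RotationRules for rotate_secret using a rate expression.
# The 4-hour floor reflects the new minimum from the announcement.
def rotation_rules(hours: int) -> dict:
    if hours < 4:
        raise ValueError("minimum rotation interval is 4 hours")
    return {"ScheduleExpression": f"rate({hours} hours)"}

rules = rotation_rules(4)
```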
With the release of ChatGPT this seems less impressive now, but QuickSight Q is a service that lets you ask questions of your data in plain text. It saves you from writing the queries yourself, but actual success will be user-dependent.