
An Honest Review of AWS re:Invent Announcements: 2023 Edition

If you have been following the Resmo blog for a while, you might have noticed that we offer critical evaluations of re:Invent, which you can find in the 2021 and 2022 editions. With tons of new services and improvements announced, you are probably overwhelmed reading them and trying to figure out: does this even matter, is it an exaggeration, or is it a rehash of existing features? No worries, I've got you covered.

This is my 2nd re:Invent in person, and being in Vegas is nice. You definitely need to see the Sphere in person and attend a show. Beware though: the seating is very steep, and I nearly fell from the cheapest row, which is quite high above the ground. Still, it was a very nice show.

This year, I'll be splitting the announcements into a few main categories to make them more comprehensible, given the sheer volume and variety. To summarize: overall, this was an OK-level re:Invent.

As everyone expected, there were many AI-related announcements, most of which amounted to shoving an LLM somewhere whether or not it was really needed. But there were also some fundamental improvements and new services, and a few digs at OpenAI being an unreliable partner due to recent events that unfolded.

Other announcements, while they might seem insignificant and uncool, are really the ones that let me delete code, worry about things less, and focus on my true passion and actual business value: building domain-specific CRUD apps.

Announcing SaaS Quick Launch for AWS Marketplace: Before digging deeper, Resmo is a launch partner for this new Marketplace feature, which lets you onboard SaaS apps through the Marketplace much more easily. We are among the first companies to support it.

AI, LLMs, Chatbots


Of course, our first guest is AI. There were some important AI announcements, including LLMs built into AWS to help you write better queries. The most surprising part is that half of Werner Vogels' keynote focused on traditional ML, with an application in medical brain imaging, not a bit of LLMs, and only after he had talked about cost for an hour.

  • AWS announces Amazon Q (Preview): This is one of the most important announcements of re:Invent, but also the most confusing one. To clarify, Amazon Q is actually multiple unrelated services.

- First, you can ask it for advice on how to build on AWS, such as the 17 ways of running containers on AWS, or ask questions like why your security group does not allow SSH to specific instances. It's also integrated with code editors, CodeWhisperer, and CodeCatalyst, and can even open PRs for you.

- Second, you can connect your business knowledge from around 40 tools like Google Drive, Jira, and Confluence, and it can then answer questions over it. The most surprising and innovative part is that it actually respects users' file permissions, so they can't use AI to get access to forbidden knowledge.

- Third, you can use Amazon Q with Amazon Connect to improve your contact center performance and reduce costs, using AI to summarize conversations, search through documents, etc. In other words, it's corporate speak for reducing headcount: the same amount of work with fewer people.

Compute, Storage and Networking


Compute is the fundamental part, assuming you have a workload running on AWS that is not fully serverless.

  • Announcing the Amazon S3 Express One Zone storage class: This is a new storage class for S3, not to be confused with S3 One Zone-IA, purpose-built for applications that use S3 for shuffling intermediate data around, such as databases and machine learning applications. This new S3 class runs on special hardware and offers 10x performance and a 50% cost reduction for certain workloads. See how ClickHouse uses it for their cloud offering to get inspired, and read the great analysis by WarpStream on the actual cost of this service, as there are a few caveats you must consider.
  • Announcing new Amazon EC2 R8g instances powered by AWS Graviton4 processors (Preview): Graviton4 is much better than Graviton3, as you can probably imagine. The claim is that it can be 40% faster for databases, 30% faster for web apps, and 45% faster for Java apps. I'm not sure why they mentioned Java applications specifically, or how those differ from regular web apps, but it's always best to use the latest generation of instances to make the best of your budget.
  • Amazon EC2 Capacity Blocks for ML: Finding GPUs is hard, and people around the world are willing to commit felonies just to get H100s, thanks to the AI hype. With this, you can reserve a chunk of capacity for 1 to 14 days, in advance, so AWS can plan for it and provide it to you. But if it were me, I'd just do a reverse spot market, where people try to outbid each other like it's the 2021 housing market. (The first sketch after this list shows the reservation flow.)
  • Amazon EFS now supports up to 250,000 IOPS per file system: If you really need networked storage instead of S3 and you are using EFS, it can be very fast now. It probably costs a fortune at that performance level, so beware. I have nothing against EFS or networked storage (NFS), but if it's the first solution that comes to your mind, re-evaluate the traditional NFS approach and ask yourself if you really need it, or whether you could re-architect things in a better way.
  • Mountpoint for Amazon S3 optimizes for repeated data access: This is one of the services AWS does not want you to actually use, but releases anyway, because there are so many bad implementations out there that they needed to step in. First: you don't treat S3 like a file system. Please. Just because it works does not make it OK to use it in a way that's so wrong. You'll end up with a huge cost, suboptimal performance, or both. There is no way it turns out great. Anyway, in short, this new feature caches data.
  • Announcing Amazon EC2 High Memory U7i instances (Preview): This is a new huge-memory instance family for those who do not use garbage collection in their apps, or who run SAP HANA, Oracle, or SQL Server, which perform very well with high memory. It supports up to 896 vCPUs and 32 TiB of DDR5 memory. Look at the damn instance name: u7in-32tb.224xlarge. Considering you pay those databases based on CPUs and sockets, I'm not sure which will cost more, the license or the EC2. Nobody reading this blog will probably ever use these instances anyway.
  • Amazon Elastic Block Store announces default policies to backup EC2 instances and EBS volumes: Considering there has been an EC2 instance termination lock for many years, it's safe to assume people tend to lose their EC2s and EBS volumes in a heartbeat, with no way to recover them. This new feature is a default backup policy, so snapshots happen without anyone having to set them up.
  • EC2 Security group connection tracking adds support for configurable idle timeouts: Security groups use conntrack to keep track of TCP connections and UDP sessions, with a predefined timeout before terminating them, which is 5 days. Wow. That can cause tracking-table exhaustion, and you can now reduce it to as low as 60 seconds (second sketch after this list).
  • Amazon EFS Replication now supports failback: You can now switch replication from the primary back to the secondary, and EFS will take care of replicating incremental changes to the right region for you.
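
To make the Capacity Blocks flow concrete, here is a minimal boto3 sketch: search for an offering, then purchase it. The instance type, counts, and dates are made up, and the response field names are my reading of the EC2 API docs, so verify before relying on this.

```python
# Hedged sketch: reserve GPU capacity with EC2 Capacity Blocks for ML.
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Search offerings: 2 x p5.48xlarge for 24 hours, starting within the next week.
offerings = ec2.describe_capacity_block_offerings(
    InstanceType="p5.48xlarge",
    InstanceCount=2,
    StartDateRange=datetime.now(timezone.utc) + timedelta(days=1),
    EndDateRange=datetime.now(timezone.utc) + timedelta(days=7),
    CapacityDurationHours=24,  # 24h increments, up to 14 days
)["CapacityBlockOfferings"]

# Buy the cheapest offering; you pay the fee upfront.
cheapest = min(offerings, key=lambda o: float(o["UpfrontFee"]))
ec2.purchase_capacity_block(
    CapacityBlockOfferingId=cheapest["CapacityBlockOfferingId"],
    InstancePlatform="Linux/UNIX",
)
```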
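And for the connection-tracking timeouts, a hedged sketch of what configuring them per network interface might look like. The ConnectionTrackingSpecification parameter shape is an assumption on my part based on the launch post; double-check the current EC2 API reference.

```python
# Hedged sketch: tighten security-group conntrack idle timeouts on one ENI.
import boto3

ec2 = boto3.client("ec2")

ec2.modify_network_interface_attribute(
    NetworkInterfaceId="eni-0123456789abcdef0",  # hypothetical ENI
    # Parameter and field names below are assumptions; verify before use.
    ConnectionTrackingSpecification={
        "TcpEstablishedTimeout": 60,  # seconds, down from the 5-day default
        "UdpTimeout": 30,
        "UdpStreamTimeout": 60,
    },
)
```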

Databases


Well, whether it's SQL, NoSQL, graph, or blockchain (oh no), most customers store data in the cloud. It's probably a fundamental piece of your stack on AWS, and also a huge part of your bill, so any slight improvement here is appreciated for cost, performance, and management.

  • Announcing Amazon Aurora Limitless Database: This is one of the most promising database announcements at re:Invent. The way Aurora works is already magic, but it has a major limitation: a single master and writer, meaning scaling out with new readers does not help if you have a write-heavy workload. This new feature allows sharding certain tables across instances so that each instance acts as the master for its own shard. But there is no mention of how you select a shard key, and it's a preview service, so beware. You can also mark some tables as reference tables, replicated to every instance, to reduce data shuffling when your queries join across tables. Imagine this also supported Global Database: you would have solved almost all of the data sovereignty problems that mostly EU and AU companies have, thanks to 3-letter agencies snooping around servers, cables, and switches.
  • AWS announces OR1 for Amazon OpenSearch Service: Well, it's a faster OpenSearch instance. There is no information about which instance family it belongs to, but it utilizes Amazon S3, similar to other providers that decouple storage from compute: local disk as cache, S3 as primary storage. The announcement says it's primarily suited for indexing-heavy workloads.
  • Announcing the general availability of Amazon RDS for Db2: This is a testament that AWS will do anything to get you to move to the cloud. For the Gen Zs reading this: Db2 is a database from IBM, and although it's from the 1980s (its latest release was in 2021, though), it's surprisingly still in use by many, and their IT teams probably have no excuse not to move to the cloud now.
  • Amazon Aurora Global Database for PostgreSQL now supports write forwarding: Global databases allow you to replicate data to another region. When I come to the US, since our application is hosted in the Oregon region, I get tears in my eyes at how fast my app actually is. The Vercel guys push "edge computing" a lot but only integrate with cool kids like PlanetScale or Neon; now you can be among them too if you are an AWS shop. Since an Aurora Global Database has only one writer, even if you had replicas in other regions, you had to know which one was active and route your write queries there, which is pointless busywork. The new feature is that compute near a reader region can send writes to the reader, which proxies them to the right region. Be sure to point each application instance at the DB endpoint in its own region, so you don't make your application 2x slower and burn 2x the money doing this. (The first sketch after this list shows enabling it.)
  • Amazon Athena adds cost-based optimizer to enhance query performance: Cost-based optimization is key to a performant database. Consider joining two tables, 1K rows and 1M rows, where your query filters the 1K rows down to 10. If you join the 1M rows against the 1K first, you have up to 1B row combinations to filter afterwards, which makes the query slow. But if you filter first and join later, you only have 10M rows left to filter. That's join reordering. There are other methods as well, such as early aggregation. Athena now supports a cost-based optimizer by using statistics generated by Glue. But I'm not sure how it works with an ever-growing database (which is likely if you are using Athena). Do I need to run Glue continuously to keep cost estimates accurate, which might itself be even more costly? (pun intended)
  • AWS announces Amazon DocumentDB I/O-Optimized: If you are one of the rare people who use DocumentDB instead of MongoDB Atlas, you can use this new feature. Remember this feature from Aurora Postgres? DocumentDB is based on that, which makes the service even weirder to me; you can't even have multiple writers (with Aurora Limitless, maybe now you can?). The only reason you'd use this service is that you invested in MongoDB at an early stage, wanted out, but couldn't, and you use DocumentDB instead of Atlas so your data can stay in AWS. The reason I'm this negative is that it does not even support all the shiny MongoDB features that probably made you choose Mongo in the first place, beyond arbitrary JSON document support.
  • AWS announces Amazon DynamoDB zero-ETL integration with Amazon Redshift: This might be a threat to job security for many. Then again, no one really likes ETL. You have this gigantic dataset in DynamoDB, and attaching to streams or using point-in-time snapshots to populate other systems is just tedious work. I just wonder what the latency is, but there is probably a CloudWatch metric for that.
  • AWS announces Amazon DynamoDB zero-ETL integration with Amazon OpenSearch Service: Same as above, but where Redshift is mostly used by data teams, OpenSearch (the Elasticsearch fork, remember the drama?) is used with DynamoDB to provide search and aggregation capabilities over your data. You no longer need to replicate the data yourself. But again, check the latency before jumping in.
  • Amazon Neptune Analytics is now generally available: Graph databases are cool and hold a lot of information if queried correctly. Neptune Analytics lets you run analyses to detect fraud, find security issues, or study social networks. It can analyze tens of billions of relationships in seconds, according to the docs. It's separate from the actual Neptune database, and you need to load your data into it.
  • Vector engine for Amazon OpenSearch Serverless now generally available: If you are using the $900/month "serverless" OpenSearch, you can now store your vectors on it for use in your AI applications (see the second sketch after this list).
  • Announcing preview of AMB Access Polygon, serverless access to Polygon blockchain: I’ve been using AWS for almost 10 years and I have no idea what this service does. But it's surprising that AWS still invests in obscure blockchain services. 
  • AWS Clean Rooms Differential Privacy is now available in preview: This feature helps protect individuals' PII in Clean Rooms query results, for those who care about user privacy at all.
  • AWS Lambda now supports IAM access control for multi-VPC enabled Amazon MSK clusters: This release is so random and so specific that I just wanted to include it to show that AWS has your obscure deployment covered: Kafka spanning multiple VPCs, with IAM access control, used from Lambda. I got tired just typing this.
  • Amazon OpenSearch Service now supports Neural Sparse Retrieval: This allows you to find documents by semantic similarity instead of just term-based similarity like TF-IDF or BM25.
  • Amazon ElastiCache Serverless for Redis and Memcached is now available: Another not-really-serverless service, starting at about $90 per month. The great part is that updates are now transparent and no longer require downtime, which is a huge upside. If you are looking for scale-to-zero services, AWS is not the place for you, at least for 3rd-party software like Redis, Postgres, or Elastic (oops, OpenSearch); instead, consider the offerings from our friends at Upstash.
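
For the Aurora write forwarding item above, enabling it on a secondary cluster is a one-call change. A minimal sketch, assuming an existing Global Database; the cluster identifier is hypothetical:

```python
# Enable write forwarding on a secondary-region Aurora cluster so compute in
# that region can send writes to the local endpoint and have them proxied.
import boto3

rds = boto3.client("rds", region_name="eu-west-1")  # the secondary region

rds.modify_db_cluster(
    DBClusterIdentifier="myapp-secondary",
    EnableGlobalWriteForwarding=True,
    ApplyImmediately=True,
)
```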
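And for the vector engine, a sketch of creating a k-NN index on an OpenSearch Serverless vector collection with opensearch-py. The endpoint, index name, embedding dimension, and method settings are assumptions; adapt them to your collection and embedding model.

```python
# Hedged sketch: a knn_vector index on an OpenSearch Serverless collection.
import boto3
from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth

credentials = boto3.Session().get_credentials()
auth = AWSV4SignerAuth(credentials, "us-east-1", "aoss")  # 'aoss' = Serverless

client = OpenSearch(
    hosts=[{"host": "my-collection-id.us-east-1.aoss.amazonaws.com", "port": 443}],
    http_auth=auth,
    use_ssl=True,
    connection_class=RequestsHttpConnection,
)

client.indices.create(
    index="docs",
    body={
        "settings": {"index": {"knn": True}},
        "mappings": {
            "properties": {
                # 1536 matches common embedding models; an assumption here.
                "embedding": {
                    "type": "knn_vector",
                    "dimension": 1536,
                    "method": {"engine": "faiss", "name": "hnsw"},
                },
                "text": {"type": "text"},
            }
        },
    },
)
```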

Security and Compliance


It might be the most boring part of the blog, but keeping up with new services, their security, and compliance with your commitments, such as SOC 2, other certifications, internal policies, and SLAs, keeps getting harder, so improvements here are very welcome.

  • New organization-wide IAM condition keys to restrict AWS service-to-service requests: You grant permissions so AWS services can perform operations on behalf of your account. This release introduces two new condition keys, aws:SourceOrgID and aws:SourceOrgPaths, that let you limit AWS's service-to-service reach on your accounts to your own organization, so you can spend days configuring and validating that it works exactly as intended, and reduce the chance of 3rd parties abusing AWS service access to your accounts (first sketch after this list).
  • AWS IAM Identity Center provides a new account instance for faster evaluation and adoption of AWS managed applications: This lets you try AWS managed SaaS-like services, such as CodeCatalyst, without tying them to the central Identity Center your organization may have deployed, which you might not have access to modify. In other words, it allows independent Identity Center instances to be deployed per account.
  • Introducing CloudFront Security Dashboard, a Unified CDN and Security Experience: I'm not usually in for dashboard announcements, but this one looks actually useful. If you have WAF enabled, you can see how your endpoints are doing and whether they are being targeted by malicious actors.
  • Amazon Elastic Block Store now supports Block Public Access for EBS Snapshots: If you have ever tried to create an EC2 instance from the console, you might have seen thousands of snapshots available to select. AWS does not maintain them all. Most are from partners providing AMIs for their software, but a handful are just other people's images, made public by someone lazy who wanted to share with another account, and now everyone can see them. This new release lets you block public sharing account-wide without messing with SCPs (second sketch after this list).
  • AWS Fault Injection Simulator announces scenarios and scheduled experiments: Three years ago, I had a discussion with colleagues who wanted to do chaos experiments because the Head of SRE had told everyone in the company to do so, to increase the availability of our systems. The problem was that they had no hypothesis to test; they just shut down servers and observed. Of course, in a microservices environment there will be 500s or delays. But how many? Without educated guesses based on the system architecture and the likelihood of outcomes, it's just fooling around, something even a child could do unsupervised in the AWS Console with admin rights. Anyway (as you can see, I'm a bit worked up about this), AWS Fault Injection Simulator is the AWS way of doing chaos experiments, but you might have missed it last year because it supported very few actions, and those were not very useful. Today, there are many useful actions, ranging from EC2 to RDS, S3, and more. Some work via an agent on the underlying system, some modify security group rules, and some make API calls to certain services fail. This new addition provides predefined scenarios that reflect likely failures and lets you schedule them, so you can continuously check that your system is resilient to systematic failures, see how long recovery takes, and measure error rates.
  • AWS Fault Injection Service launches two highly requested scenarios: I have never seen "highly requested" in an AWS news article before, so it must really be highly requested by customers. The new Fault Injection Service (FIS) scenarios let you test "AZ Availability: Power Interruption" and "Cross-Region: Connectivity", which many of your customers ask about and you just assume your application survives. They are pretty advanced in capabilities; be sure to check them out, and test in pre-production first.
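
Here is what the new organization condition keys might look like in practice: a hedged sketch of a bucket policy that only lets CloudTrail write when the request originates from my own organization. The bucket name and org ID are hypothetical; adapt the principal to whichever service you delegate to.

```python
# Restrict an S3 bucket's service-to-service access to my organization using
# the new aws:SourceOrgID condition key.
import json

import boto3

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-audit-bucket/*",
            # Only allow when the originating resource is in my org.
            "Condition": {"StringEquals": {"aws:SourceOrgID": "o-a1b2c3d4e5"}},
        }
    ],
}

boto3.client("s3").put_bucket_policy(
    Bucket="my-audit-bucket", Policy=json.dumps(bucket_policy)
)
```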
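And blocking public EBS snapshots is a single call per region; a minimal sketch:

```python
# Block public sharing of EBS snapshots account-wide in one region.
# 'block-all-sharing' also hides snapshots that are already public;
# 'block-new-sharing' only prevents new sharing.
import boto3

boto3.client("ec2", region_name="us-east-1").enable_snapshot_block_public_access(
    State="block-all-sharing"
)
```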

Observability and DevOps

It's easy to get lost in the dunes of AWS, with so many services existing and interacting with each other in so many different ways. Keeping track of performance and cost keeps getting harder. These developments might make your job easier.

  • AWS Resource Explorer supports 86 new resource types: This service was announced at last year's re:Invent for free-text searching the resources in an AWS account, but it supported only a few types. This release adds 86, making the total even larger (I could not find a reference for the actual number). You might wonder why many services were not supported at launch, since it's just a basic search that should already be generalized across every AWS service; it's not. AWS Config is similar: teams probably need to call an external service during a resource's lifecycle to properly index it.
  • AWS Resource Explorer now supports multi-account resource search: You can now search across multiple accounts from the same console, making it much easier to find that rogue EC2. However, AWS Resource Explorer only searches by name. At Resmo, on the other hand, you get full-text search across 150+ resource types, including the whole configuration of the resource: instance type, tags, and any other field.
  • AWS announces multiple stats query command for Amazon CloudWatch Logs Insights: This lets you write fewer queries and use Insights more like Splunk. Too bad using CloudWatch Logs as your main logging system is almost as expensive as Splunk, with 10% of the capabilities. (A query sketch follows this list.)
  • Amazon CloudWatch Container Insights launches enhanced observability for Amazon EKS: This is a one-click addon to export performance metrics from your EKS clusters, covering both the control plane and workloads. It's great for those who can't wrap their heads around properly installing and configuring Prometheus and friends.
  • AWS CodeBuild now supports AWS Lambda compute: I always wanted to use AWS Lambda for running massively parallel tests, since by its nature it can spin up very fast. AWS finally lets CodeBuild use AWS Lambda as compute. Beware, though: it might not be cost-friendly for some builds. Here is a great analysis by Ian Mckay.
  • Announcing teams for Amazon CodeCatalyst: For a code development tool, you'd think teams would be a day-one construct, but apparently not. I tried this service when it was first announced; I'm sure it has come a long way since, but I can confidently say: please use GitHub, or even Bitbucket. AWS's SaaS-like offerings are usually a letdown. In the end, though, you are just pushing code to some external system and kicking off CI/CD (I hope), so you could consider it if you are feeling adventurous and want every piece of your company's tech stack to live in AWS for some reason. All we are missing is Jira by AWS.
  • AWS CloudFormation introduces Git management of stacks: This feature lets CloudFormation sync pushes from Git so you don't have to do it yourself in your pipeline. That's it.
  • AWS Free Tier usage is now available through the GetFreeTierUsage API: When you go over your Free Tier usage, you sometimes receive an email; there is also a nice console view. This is an API, so you no longer have to parse emails or automate a browser to get this information: just call it periodically to make sure you are not paying AWS a dime. (See the second sketch after this list.)
  • Announcing AWS Console-to-Code (Preview) to generate code for console actions: If you have a Terraform template at your launch, you've launched too late. Despite what Infrastructure-as-Code (IaC) zealots say, ClickOps is an essential and natural part of an early-stage company, and even of a late-stage one when experimenting. But there comes a time when IaC makes more sense, and with already-deployed resources you are often in an ugly place. This new AI-backed service examines your selected AWS console actions and tries to come up with CloudFormation or CDK templates.
  • AWS Config launches generative AI-powered natural language querying (Preview): If you cannot write queries to find your resources using Config, you can now use AI. It's the most obvious use case of AI: "Show me all EC2 instances with public IPs."
  • Amazon CloudWatch announces AI-powered natural language query generation (in preview): Searching through thousands of logs with no idea how to filter, extract, and aggregate data with CloudWatch Logs Insights? Me neither; I always go to the documentation. This new feature can supposedly create the queries for you using AI.
  • Introducing Cost Optimization Hub: Werner's keynote was disturbingly focused on cost; I guess that's a result of the macroeconomic conditions. This is most likely a combination of already-existing recommendations from Compute Optimizer and Savings Plans, but across all accounts in an organization.
  • New Amazon CloudWatch log class for infrequent access logs at a reduced price: Most of your logs are really not that important; I personally prefer proper tracing over unstructured logs. Anyway, this lets you reduce the cost of something you probably rarely use, mostly for forensic cases and such. It's half the cost of the regular class for up to 10TB of ingestion. However, exporting to S3 does not work, so you cannot pay AWS less, use external logging, and get away with it.
  • AWS announces CloudWatch Logs Anomaly Detection and Pattern analysis: This new service finds anomalies in your logs using machine learning and pattern recognition (you know, the original AI). Beware, though: it only looks at the first 1,500 characters of a log line, which will not be useful for your 9,000-line Java stack traces. Also, such AWS services can be very costly, so be sure to test on a small subset of data before turning Skynet loose on your logs.
  • Announcing Data Exports for AWS Billing and Cost Management: You could already export billing data in the CUR format. This is CUR 2.0, with improvements of course. First, it has more detail. Second, it has a schema, and you can schedule custom exports by specifying a SQL statement. I assume by SQL they mean selecting fields and filtering them, not actual GROUP BYs and JOINs.
  • AWS Cost Explorer now provides more historical and granular data: AWS Cost Explorer now offers 14 months of daily data (previously 13 months), with options for 38 months of monthly data and 14 days of detailed daily resource data. The announcement says it's free, but there is a cost calculation on the page, so beware before opening it on a huge account for all services. Detailed resource data lets you dig into individual resources in Cost Explorer without setting up tags (which you should do anyway).
  • Amazon Web Services announces Unified Billing and Cost Management console: This year there is a lot of focus on cost and savings. These are improvements to existing consoles, plus specific views that many will find useful, such as recommendations and charts.
  • Amazon Managed Service for Prometheus launches an agentless collector for Prometheus metrics from Amazon EKS: Have you ever configured Prometheus in a Kubernetes cluster? It's hundreds of lines of magic configuration and several containers. Now managed Prometheus can collect metrics from your EKS cluster, including workload metrics, without you installing an agent, which is huge!
  • Observe your applications with Amazon CloudWatch Application Signals (Preview): Another new dashboard from AWS? Application Signals summarizes your applications' basic metrics, such as latency, errors, and volume. You can also create SLOs and track your business KPIs, so you can present a nice report to the management team.
  • myApplications: One place to view and manage your applications on AWS: This is an abstraction for an application hosted on AWS, giving you a bird's-eye view of its cost, health, security posture, and performance in a single place in CloudWatch.
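
To illustrate the multiple stats commands, here is a hedged sketch of a chained stats query run through boto3. The log group is hypothetical, and the exact query syntax is my best reading of the announcement; verify it in the Insights console first.

```python
# Hedged sketch: chained stats commands in CloudWatch Logs Insights.
# Computes per-5-minute request counts, then summarizes them in one query.
import time

import boto3

logs = boto3.client("logs")

query = """
stats count(*) as requests by bin(5m)
| stats max(requests) as peak, avg(requests) as average
"""

qid = logs.start_query(
    logGroupName="/myapp/prod",  # hypothetical log group
    startTime=int(time.time()) - 3600,
    endTime=int(time.time()),
    queryString=query,
)["queryId"]

# Poll until the query finishes.
while (result := logs.get_query_results(queryId=qid))["status"] in ("Scheduled", "Running"):
    time.sleep(1)
print(result["results"])
```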
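And the Free Tier API, as a minimal polling sketch. The 'freetier' client and GetFreeTierUsage operation are real; the response field names below are my reading of the announcement, so adjust as needed.

```python
# Poll Free Tier usage instead of parsing AWS emails.
import boto3

freetier = boto3.client("freetier", region_name="us-east-1")

for usage in freetier.get_free_tier_usage()["freeTierUsages"]:
    if usage.get("limit"):  # guard against zero/missing limits
        pct = 100 * usage["actualUsageAmount"] / usage["limit"]
        print(f'{usage["service"]}: {pct:.0f}% of free tier used')
```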

Application Development 


You have workloads, hopefully on native AWS services, unless you are a huge company that just lifted and shifted to AWS. These small-to-big improvements can make your workloads more efficient in both cost and performance, or simply make them possible in a cloud-native way.

  • Amazon SNS increases default FIFO topic throughput by 10x to 3,000 messages per second: When the FIFO versions of SQS and SNS were announced, their throughput was very limited, making them unusable for heavy workloads. Now SNS FIFO allows 3,000 messages per second, which is quite fast.
  • AWS Lambda adds support for Amazon Linux 2023: It's almost the end of 2023, and we finally got Lambda support for the 2023 version of the operating system. For the record, it was released on March 15, 2023. Nevertheless, all new runtimes will be built on Amazon Linux 2023.
  • Amazon SQS announces support for JSON protocol: Since SQS is a very old service, from 2006, its API was XML-based. Even with compression, just changing from XML to JSON yields an overall 23% performance benefit in parsing the data on server and client. You can imagine that, at AWS scale, this probably adds up to tens of millions of dollars in savings across all tenants, assuming they update their SDKs, which usually does not happen until a 9.8 CVE surfaces.
  • Amazon MSK adds checks for too many partitions to AWS Trusted Advisor: Another very specific release, but it seems people assume that just because they have Kafka, they can cram thousands of topics and partitions into the smallest cluster; this is to warn them.
  • Amazon CloudFront announces CloudFront KeyValueStore, a globally managed key value datastore: This is an exciting development that lets you use global data in CloudFront Functions, something Cloudflare has had for many years. The most common use cases would be properly redirecting users to the correct region, or fast authentication at the edge, now that data is available to you there.
  • Amazon Kinesis Data Streams launches cross-account access with AWS Lambda: Multi-account setups are great, but previously you had to assume a cross-account role to consume a Kinesis stream from Lambda; now cross-account access is built in.
  • Amazon EventBridge now supports over 20 new Amazon CloudWatch Metrics for event buses: EventBridge is one of the coolest services on AWS: proper event messaging with buses, connections to many AWS services and even partner buses, and a schema registry to validate and evolve messages. If you pass events without a schema registry, you are at the mercy of your producers and consumers, which in reality can be different teams you know nothing about. Anyway, this adds 20+ useful metrics, such as latency, throttles, and payload sizes, so you can dig deeper into issues.
  • AWS Step Functions launches support for HTTPS endpoints and a new TestState API: I have not used Step Functions beyond messing around in the Console, but I'd have assumed it already supported invoking HTTP endpoints. Apparently, until today you had to write a Lambda function just to call an HTTP endpoint.
  • Amazon EventBridge now supports partner integrations with Adobe and Stripe: I'm not entirely sure why one would consume Adobe events, but Stripe events through EventBridge can be very helpful. Because EventBridge centralizes all events and already connects to many AWS services, you now have one fewer deployment just to consume and forward Stripe webhooks. I love announcements like this; compared to shiny features, they actually let me delete code. Lovely.
  • Amazon EKS introduces EKS Pod Identity: I was there, Gandalf, 3,000 years ago, when AWS had no native way to give your Pods IAM roles, and we used great open-source projects like kube2iam and kiam to handle multi-purpose pods in one cluster. Since handing out identity properly is a dangerous business, AWS then announced IAM Roles for Service Accounts, which let pods safely assume a role via annotations. EKS Pod Identity is a new development that simplifies IAM roles for EKS workloads: you associate IAM roles with service accounts, and an agent takes care of supplying credentials to pods (first sketch after this list). However, I certainly do not like that this agent runs inside the same cluster, making it subject to compromise and performance problems. Then again, if you are not using Fargate to run your workloads, you have set yourself on the miserable path of managing a lot of things on your own. If you are using EKS, seriously consider Fargate; there are very valid reasons for using EKS, but Fargate is now a much better way to run containers and handle deployments gracefully. My previous old EKS cluster had ALB, Nginx, and kube-proxy routing, and it was always a hassle.
  • Introducing an Integrated Development Environment (IDE) extension for AWS Application Composer: This is a VS Code extension that lets you build AWS Application Composer apps visually and with generative AI. It looked pretty cool in the demo. The problem with simplified app builders is that I'm never sure how long you can use such a service before it becomes a liability instead of an enabler. Regardless, it's a cool way to build applications in a genuinely cloud-native way (not the cloud-native abomination of hosting your own S3 clone, come on).
  • AWS Lambda adds support for Java 21: Considering Java 21 was released on September 19, this was a fast response from AWS. It used to take forever for AWS to ship version upgrades to managed software like Postgres and to runtimes like Java in Lambda, but now version upgrades arrive much sooner, and I like this direction.
  • Introducing advanced logging controls for AWS Lambda functions: You can now choose JSON logging in AWS Lambda, which helps you get rid of the START and END messages in your parsing pipelines. Another addition is setting a log level for your application without redeploying it; we used to use environment variables for this. You can set application-level and system-level log levels independently. For this to work, you have to use the recommended logging libraries for your language so the change is picked up automatically. Switching env variables was not so bad, but standardizing always helps. (Second sketch after this list.)
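
For the EKS Pod Identity item above, the association itself is refreshingly simple. A minimal sketch, assuming the Pod Identity Agent add-on is installed on the cluster; all names are hypothetical:

```python
# Associate an IAM role with a Kubernetes service account directly,
# no OIDC provider setup or annotation dance required.
import boto3

eks = boto3.client("eks")

eks.create_pod_identity_association(
    clusterName="prod-cluster",
    namespace="payments",
    serviceAccount="payments-api",
    roleArn="arn:aws:iam::123456789012:role/payments-api-role",
)
```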
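And for the Lambda logging controls, a sketch of switching an existing function to JSON logs with independent log levels. The function name is hypothetical.

```python
# Turn on structured JSON logs and set log levels without redeploying code.
import boto3

lam = boto3.client("lambda")

lam.update_function_configuration(
    FunctionName="orders-api",
    LoggingConfig={
        "LogFormat": "JSON",
        "ApplicationLogLevel": "INFO",  # your code's own logs
        "SystemLogLevel": "WARN",       # Lambda platform logs (START/END etc.)
    },
)
```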
