AWS re:Invent 2019 Takeaways

Peter Njihia
Dec 25, 2019

Highlights

If there’s one event that level-sets the Cloud space for the next 12 months, it has to be re:Invent. re:Invent 2019 has come and gone, the event keeps getting bigger every year, and I was excited to be part of it. Lots of cool things were announced, and there was great energy all round. Two key themes stood out to me at this year’s event:

  1. Machine Learning/AI: 17 of the 56 new services announced were AI/ML offerings, from DeepRacer multi-car racing to DeepComposer, CodeGuru (AI code reviews), Fraud Detector, Kendra for enhanced search, and so on.
  2. Improved Accessibility/Extensions: AWS Outposts with support for EC2, RDS and EMR; S3 Access Points (different virtual views of the same bucket); Nitro Enclaves; Local Zones (think of them as non-redundant regions); RDS on-prem, and so on.

By the numbers:

  • Over 2500 sessions
  • Over 57 new service offerings/announcements
  • Over 60k attendees

My main interactions and sources for learning were:

  • Workshops: These are purely hands-on labs covering best practices and how to navigate new services.
  • Chalk Talks: These are more interactive with the audience, and they address common issues companies face at different stages of their Cloud adoption.
  • Keynotes: These are also streamed live to the public, and they’re where the major announcements happen. Werner’s keynote on Thursday takes us closer to the inner workings of AWS; very cool, and I’d recommend all technical teams spare a moment to watch it.
  • Informal engagements with other attendees: This was also a major source of information, especially once you realize that most companies face challenges similar to yours, and you get to hear about the crafty solutions they’ve put in place to resolve them.
  • Recordings/GitHub: For the sessions I couldn’t get into, and there were tons of them, recaps on YouTube plus repositories on the AWS Labs GitHub were super helpful.

Takeaways

Now, here are the key takeaways from this year’s re:Invent. They’ll sound very cliché, but I’ll try to back them up with specifics:

  1. Manage Risks
  2. Make Incremental Improvements
  3. 80/20 Rule: Don’t hold up solutions because of edge cases
  4. Experiment a lot
  5. What AI/ML Means

Manage Risks

Understanding risk unlocks a lot of potential to innovate. Masking risks brings about stagnation and, far worse, recurring issues. AWS knows this, and they are putting out new features to help with managing risk. To illustrate, AWS ran a workshop on testing your CloudFormation templates across different accounts and regions. This is interesting because it’s pretty common to run into trouble when you deploy automation from one account/region to another. If you are not using CloudFormation, I’d highly recommend you start.

The workshop introduced TaskCat, an open source project for testing CloudFormation templates. At the end of it, we were able to test templates in multiple regions from a CI/CD pipeline, failing the pipeline whenever any region failed. This keeps issues close to Dev! One limitation, however, was the inability to apply updates to existing stacks; since a high percentage of all CloudFormation changes are introduced through updates, I raised this concern with the team, as testing updates would catch “forbidden” resource replacements. The tool deploys the stacks and tears them down once testing completes, with an option to keep them around for debugging, so it costs very little. There’s no reason to keep stacks around just for testing.

Check it out on GitHub: https://github.com/aws-quickstart/taskcat.
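To give a feel for the workflow, here’s a minimal sketch of a TaskCat configuration, assuming the v0.9-style `.taskcat.yml` schema; the project name, template path, and parameter are hypothetical:

```yaml
# .taskcat.yml -- hypothetical project testing one template in two regions
project:
  name: sample-workload                 # hypothetical project name
  regions:
    - us-east-1
    - us-west-2
tests:
  default:
    template: templates/workload.template.yaml   # hypothetical template path
    parameters:
      InstanceType: t3.micro                      # hypothetical parameter
```

With a config like this, running `taskcat test run` from the project root would deploy the stack in each listed region, report pass/fail per region (the signal you’d wire into the pipeline), and tear the stacks down afterwards.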

Another instance of this takeaway was a casual chat with an AWS Aurora architect. Moving to Aurora comes with a plethora of benefits in performance, cost savings, etc., but the migration is not trivial. Adding support for Aurora is not the biggest hurdle for large projects; a way to safely cut over with minimal interruption is. So I asked him how one would overcome the migration challenges, and his answer was simple: don’t do it big bang. Make it an incremental conversion over time, and you’ll get better with each iteration, with improved workflows and migration times. The key, however, was analyzing all the risks and putting rollback plans in place: for instance, replicating back to the old DB even after the cutover. Why? Simply managing risk.

The theme here is: when you address risk, you gain increased confidence in your automation and faster turnaround times, leaving you free to deliver incremental business value. There’s no harm in over-testing your automation and product code changes; you’ll save $$ in the long run (short run too).

Make Incremental Improvements

Everyone wants quicker turnaround times, whether it’s in the drive-through getting coffee or having your favorite gadgets shipped to you quickly. The desire is even stronger when you are a builder/creator: you want faster feedback on your improvements, and nothing is more satisfying than that. But let’s face it, it usually doesn’t happen as fast as we want, and “managing the rollout” becomes another task all by itself. I had so many side conversations with folks from other companies saying the exact same thing! It’s a pretty common problem. Well, some people are not just sitting there; they are taking it on and being successful at it.

The team that develops the Elastic Beanstalk service presented this challenge and delivered a Chalk Talk on their solution. Expectedly, they use AWS dev tools, but you quickly realize it’s not about the tools. The key is small, incremental changes. Every commit this team makes is destined for Production. Sounds a bit extreme? No, because these are not changes that have spent weeks or months in the making. Here’s another key point: each change is rolled out slowly, addressing risk (déjà vu?), and it’s spat out like poison the moment it misbehaves: slow rollouts + rapid rollbacks. 99% of all changes to the Elastic Beanstalk service are rolled out in this fashion. What about the 1%? Well, a different approach might be needed for a major shift, and there’s no shame in pausing the pipeline, but don’t let this hold you back and keep you too long in the “hoard-then-deploy” arena. I found a session worth watching that covers both serverless and CI/CD pipelines: CI-CD Serverless.
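To make “slow rollouts + rapid rollbacks” concrete, here’s a minimal sketch using AWS SAM’s deployment preferences, which shift traffic gradually and roll back automatically when an alarm fires. The function name, handler, and alarm are hypothetical, not something from the Chalk Talk:

```yaml
# Hypothetical AWS SAM fragment: canary rollout with automatic rollback
Resources:
  OrdersFunction:                        # hypothetical Lambda function
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler               # hypothetical handler
      Runtime: python3.8
      CodeUri: src/
      AutoPublishAlias: live             # each deploy publishes a new version
      DeploymentPreference:
        Type: Canary10Percent5Minutes    # 10% of traffic first, the rest after 5 minutes
        Alarms:
          - !Ref OrdersErrorsAlarm       # hypothetical CloudWatch alarm; firing triggers rollback
```

The same idea applies outside serverless: ship a small slice of traffic to the new version, watch the signals you trust, and automate the rollback so a bad change is “spat out” without a human in the loop.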

80/20 Rule

This has become quite cliché, as we all know, but let me ride this wave one last time. We fail to deliver 80% of the benefits because of a challenging 20% use case that takes us a month to a year to figure out first. Well, scratch that, and focus on the 80%. It’s humbling to realize AWS started off with the S3 service, and from there they completely redefined how computing is delivered to the public. They knew one service would not address 100% of business needs, but they didn’t sit there gathering requirements; instead, they went heads-down crafting the right service to start with.

Another exhibition of this: not all of their services are in every region, which makes launching new regions/AZs less challenging. This should carry over into everything you do. Granted, we’ll always have external pressure (from customers, compliance, markets, etc.), but as a colleague once told me:

Customers apply pressure when there’s a communication breakdown or a chronic failure to deliver. There’s nothing wrong in telling a customer they won’t get their change within an agreed-upon timeline; they’ll work with you to get what they want. (paraphrased)

Experimentation

Experiment with many services; most you may end up not using, but it’ll be natural to refer back to them when the ideal problem comes along. It can be quite a “FireHose” (pun intended) to drink from, but most of the labs provided will take you two hours or less. Set aside some time daily or weekly to play around. This can translate to days of time saved in the future just because you have situational awareness. Visit https://github.com/aws-samples and start experimenting.

Artificial Intelligence/Machine Learning

As I mentioned before, AI/ML was huge, and you can’t help but think of what it means for businesses and the role it’ll play in the future. Probably the biggest announcement for software development shops was AWS CodeGuru: an intelligent code review managed service trained on Amazon’s own code reviews and popular open source projects. This should enforce patterns and create more confidence and faster turnaround times.

I did a fun thing and attended a DeepRacer workshop, but first let me break down AI/ML as I perceived it: machine learning targets solving a problem, whether it’s binary classification (e.g., buy or not buy), multi-class classification (e.g., which products you are likely to buy), or regression (e.g., what’s the value of a home in 3 years).

There are different approaches to learning: supervised, unsupervised, and reinforcement. DeepRacer walks you through reinforcement learning. AWS provides much of the infrastructure you need to train models; however, you still have a part to play in providing the reward function for reinforcement learning. SageMaker uses this reward function to train your model. Check out the DeepRacer workshop here.
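For a concrete feel, here’s a minimal sketch of a DeepRacer-style reward function in Python. It assumes the standard `params` dictionary the DeepRacer console passes in (keys like `track_width` and `distance_from_center`), and the tier thresholds are purely illustrative:

```python
def reward_function(params):
    """Reward the car for staying close to the center of the track.

    `params` is the input dictionary DeepRacer passes in; the keys used
    here (track_width, distance_from_center) are standard inputs.
    """
    track_width = params['track_width']
    distance_from_center = params['distance_from_center']

    # Tiered reward: highest near the center line, near-zero when far off.
    if distance_from_center <= 0.1 * track_width:
        return 1.0
    elif distance_from_center <= 0.25 * track_width:
        return 0.5
    elif distance_from_center <= 0.5 * track_width:
        return 0.1
    return 1e-3  # likely off track; heavily discourage this
```

SageMaker then uses this scalar reward to update the driving policy over many simulated laps: actions that accumulate more reward get reinforced.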

Worthy Mentions

Conclusion

I highly recommend that teams driving technology innovation send at least one person to re:Invent (or an equivalent conference) every year. It’s well worth it. The exposure can spark innovation and steer teams and products in a better direction within a short amount of time.

If you happen to go, please plan ahead; it can be all too overwhelming, and you may miss the sessions you need the most. Be aware of when reserved seating opens up, and be online within the first minute. Make it easier by marking “I’m Interested” on sessions you find valuable. There are different types of sessions, but I’d prioritize Builder Sessions, Workshops, and Chalk Talks first, since it’s harder to get recordings of those after the fact. It’s much easier to find the recorded 30-minute sessions online later.

Please do check out the re:Invent perspective from one of the leading Cloud training companies: A Cloud Guru.

Thanks for reading and make it a great day. You are awesome!
