Debunking AWS Pricing

To Help You Save and Design Better

Peter Njihia
8 min read · Nov 10, 2019

Ever find yourself looking up something just because you heard about it or it's getting popular? I have, multiple times actually. In today's social media age, it's common for folks to jump into something popular before understanding it, or even defining what they need to get out of it. Some would call it FOMO (Fear of Missing Out), and it's surprising how it carries into other areas too: movies, fashion, politics, technology, diets, restaurants! Well, this post will bring some clarity to pricing for the trending and popular AWS public cloud services.

The bulk of most negotiations is about money. It's no different when you decide to take your workloads to the cloud, except that you'll be negotiating with yourself, because, well, you pay as you go and only for what you've provisioned. Simple. Now, I have to admit, AWS pricing is well documented, but as the number of services offered grows, there are a lot of price points, even within a single service! Per the AWS pricing page, there are over 160 services, and some, like EC2, are further broken down by instance/server size and configuration. But if you look closely, there are themes to the pricing, which is what I'll try to bring out. One may ask: if I pay as I go, making adjustments should be easy, right? Well, yes and no.

If you have a strong, agile cloud team with an established review process, it's easier to re-architect components for a better price point and performance. On the other hand, if you contracted out the implementation and are now running things on your own, or are just consumed with core business tasks, adapting to a better price point can be challenging. So, before you jump into a cloud implementation project, or hire someone for one, equip yourself with knowledge. It's very important to have a grasp of what you are getting into financially (technically too, but that's a broader topic). AWS pricing is not complex, but it is very detailed, and I'll try my best to break it down without unnecessary low-level details.

The first thing to grasp is how AWS approaches pricing: you pay as you go. No longer need a service? Drop it and that's it; billing stops. Another AWS pricing tenet is that the more you use, the less you pay, a form of volume discount. Lastly, the more a service scales and gets adopted, the less everyone pays; in fact, per this blog, AWS has cut prices a whopping 67 times since launch as adoption picks up. Pricing is also region specific, driven primarily by the cost of running data centers in that region. Management and provisioning tools are largely free, but you pay for the resources these tools provision.
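The "more you use, the less you pay" tenet usually shows up as tiered rates: your first chunk of usage is billed at one price, the next chunk at a lower one, and so on. A minimal sketch of how a tiered bill adds up, using hypothetical per-GB tiers (the real tiers and rates vary by service, region, and over time):

```python
def tiered_cost(gb, tiers):
    """Compute a monthly bill given usage in GB and a list of
    (tier_size_gb, price_per_gb) tuples; a None size means
    'everything beyond this point'."""
    cost, remaining = 0.0, gb
    for size, price in tiers:
        used = remaining if size is None else min(remaining, size)
        cost += used * price
        remaining -= used
        if remaining <= 0:
            break
    return cost

# Hypothetical tiers: first 51,200 GB (50 TB) at $0.023/GB,
# next 460,800 GB at $0.022/GB, everything beyond at $0.021/GB.
TIERS = [(51_200, 0.023), (460_800, 0.022), (None, 0.021)]

print(round(tiered_cost(10_000, TIERS), 2))   # all usage in the first tier
print(round(tiered_cost(100_000, TIERS), 2))  # spills into the second tier
```

The effective per-GB rate drops as usage grows, which is exactly the volume-discount behavior described above.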

There's quite some variance in prices per region. US/EU regions tend to have the lowest prices, while South America and some of the newer regions tend to have higher prices. Choose wisely, and if you are bound to a region by legal/compliance requirements, keep reading for more tips on how to save.

How much it costs to run a medium sized EC2 instance (4 vCPU/16G Memory) in sampled regions

Understanding the AWS approach, with respect to your requirements, will guide you in making better decisions about what, where, and how many resources to provision, and for how long. The number one driver is still your core requirements, but beyond that you have options: for instance, consolidated billing across your AWS accounts can save you money through volume pricing and a bigger pool for reserved instances.

Pricing for most services falls under these 4 categories. Some services use a combination of them, so it's not a 1-to-1 mapping. Here we go:

Time/Hourly-Based Pricing

Hourly-Based Billing from us-east-1 region regardless of use

This is the default model for anything compute. This includes EC2 (servers), RDS (databases), Redshift (data warehouse), and load balancers. Pricing is by the second, with a one-minute minimum, though rates are mostly communicated per hour. This is granular and fair, especially with start-and-stop workloads where you are spinning up an instance for, say, 5 minutes every 20 minutes. In that scenario, strict billing by the hour would have you paying for 3 hours, compared to just 15 minutes with per-second billing.
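The start-and-stop scenario above works out like this. The rate is a hypothetical on-demand price; real rates vary by instance type and region:

```python
# Hypothetical on-demand rate for a small instance (assumed number).
HOURLY_RATE = 0.0416  # $/hour

runs_per_hour, minutes_per_run = 3, 5

# Strict hourly billing: every launch is rounded up to a full hour,
# so 3 launches in an hour bill as 3 instance-hours.
hourly_billed = runs_per_hour * 1 * HOURLY_RATE

# Per-second billing (each 5-minute run already exceeds the
# 60-second minimum, so no rounding applies).
seconds_used = runs_per_hour * minutes_per_run * 60
per_second_billed = seconds_used / 3600 * HOURLY_RATE

print(f"hourly: ${hourly_billed:.4f}, per-second: ${per_second_billed:.4f}")
```

Per-second billing charges for a quarter of an hour instead of three full hours, a 12x difference for this workload.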

Tips

  1. Use the cheapest instance family that meets your core requirements. Execute performance tests to help you decide.
  2. When possible, leverage cheap instance offerings like spot (spare capacity offered at steep discounts, but reclaimable on short notice) and reserved instances (discounts for a long-term commitment). You can see up to 90% in savings. Not all workloads fit the bill, however, so choose carefully.
  3. Separate workloads as much as possible and adapt each to its requirements. You could, for example, shut down development environments after office hours, saving a ton, or run some modules serverless. This also helps you make modular and frequent changes.
  4. Autoscaling can help a lot here: provision more instances when needed and dial down during off-peak hours. What's more, a newer feature lets you mix instance types/families in one Auto Scaling group, helping you leverage spot instances and cheaper instance types. This is worthy of a blog post by itself.
  5. When you run seasonal workloads, persist your results in cheap storage types like S3, then tear down compute resources and EBS volumes once you are done. Tear down the analytics tools as well.
  6. AWS Fargate (managed container hosting) might be a better solution than building out and managing your own Docker hosts, which may sit idle for periods of time. All this depends on the workload under analysis.
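To put numbers on tip 2, here is a sketch comparing monthly cost across purchase options. All three rates are hypothetical; actual on-demand, spot, and reserved prices differ by instance type, region, and commitment term:

```python
# Hypothetical hourly rates for one instance type (assumed numbers):
ON_DEMAND = 0.192  # $/hour, no commitment
SPOT      = 0.058  # $/hour, interruptible spare capacity
RESERVED  = 0.121  # $/hour effective, with a 1-year commitment

HOURS_PER_MONTH = 730  # average hours in a month

def monthly(rate, hours=HOURS_PER_MONTH):
    """Monthly cost of running one instance around the clock."""
    return rate * hours

for name, rate in [("on-demand", ON_DEMAND),
                   ("spot", SPOT),
                   ("reserved", RESERVED)]:
    saving = 1 - rate / ON_DEMAND
    print(f"{name:9s} ${monthly(rate):7.2f}/mo ({saving:.0%} saving)")
```

Even with these made-up rates, the shape holds: spot runs at a large discount for interruption-tolerant work, while reserved instances reward steady, predictable workloads.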

Storage-Based Pricing

This is common for services that store data: S3/Glacier (object storage), EBS (instance volumes), EFS (shared storage), even RDS (databases). You pay based on how much you've stored or provisioned. For instance, if you provision a 200 GB volume and use just 50 GB, you are paying for 200 GB. With object storage, on the other hand, you pay based on how much you've actually stored. Expectedly, the length of time you store the data is an element of the pricing. Take note of the variance in pricing between the storage options in the diagram: a decision to keep images in a database (RDS) can cost you noticeably more than storing the same in S3. EFS is the most costly by quite a margin, more than the rest combined, so consider removing unnecessary data from shared volumes.
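The provisioned-versus-stored distinction above is easy to quantify. The per-GB-month rates here are hypothetical placeholders; real prices differ by region, volume type, and storage class:

```python
# Hypothetical per-GB-month rates (assumed numbers):
EBS_RATE = 0.10   # billed on the size you provision
S3_RATE  = 0.023  # billed on the bytes you actually store

provisioned_gb, used_gb = 200, 50

ebs_monthly = provisioned_gb * EBS_RATE  # you pay for all 200 GB
s3_monthly  = used_gb * S3_RATE          # you pay only for the 50 GB stored

print(f"EBS: ${ebs_monthly:.2f}/mo, S3: ${s3_monthly:.2f}/mo")
```

The half-empty EBS volume costs the full provisioned amount every month, while S3 tracks actual usage, which is why right-sizing volumes and pushing bulk data to object storage both pay off.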

Tips

  1. Design carefully where data should go, but let your requirements drive the choice.
  2. Do not heavily over-provision when it comes to EBS/EFS.
  3. For infrequently accessed objects, consider using S3-IA/Glacier.
  4. Clean out old backup files, as these add up quickly.
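Tips 3 and 4 can be automated with an S3 lifecycle configuration, which tiers objects down to cheaper storage classes and eventually expires them. A hedged sketch; the prefix, day thresholds, and rule ID are illustrative, not prescriptive:

```json
{
  "Rules": [
    {
      "ID": "tier-down-and-expire",
      "Status": "Enabled",
      "Filter": { "Prefix": "backups/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```

Applied to a bucket, this moves objects under `backups/` to S3-IA after 30 days, to Glacier after 90, and deletes them after a year, with no manual cleanup.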

Data Transfer Pricing

Transferring 100 GB of data by Regions

This is also a major cost factor, for both data in and data out. It's not specific to a certain service, since data transfer is a common feature of most services. Intra-AZ traffic is mostly not charged, but you'll be charged in these scenarios:

  1. Data transfer out to the internet (highest cost)
  2. Data transfer out to other Regions
  3. Data transfer out to another Availability Zone

Tips

  1. Consolidate most of your services in the cloud: analytics, transactions, workflows, encryption, CDN, etc. If your raw data is in S3, it could be used in multiple projects for online searches, a source for data warehousing, and even migrations. Though you still pay for the initial upload/ingestion, it saves you a lot as compared to pushing data from an external source to the cloud on a constant basis for different workflows.
  2. Avoid unnecessary huge payloads in your applications.
  3. Consider using CloudFront/CDN; it reduces stress on the origin as you get more cache hits, plus it offers a lower transfer rate.
  4. Consider using a delta-based migration scheme, where you push only new changes for sync/migration jobs.
  5. Consolidated billing can get you volume discounts.
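The three chargeable scenarios above differ mainly in the per-GB rate. A sketch of what moving 100 GB might cost in each direction, with hypothetical rates (real transfer-out pricing varies by region and volume tier, and intra-AZ traffic is generally free):

```python
# Hypothetical per-GB transfer rates, from most to least expensive
# (assumed numbers for illustration):
RATES = {
    "to internet":  0.09,  # data transfer out to the internet
    "cross-region": 0.02,  # out to another AWS region
    "cross-AZ":     0.01,  # out to another Availability Zone
    "intra-AZ":     0.00,  # within the same AZ, generally free
}

gb = 100
for dest, rate in RATES.items():
    print(f"{dest:12s} ${gb * rate:6.2f}")
```

The ordering is the point: keeping chatty components in the same AZ, and keeping data inside the cloud rather than round-tripping it to external systems, attacks the most expensive rows first.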

Transactions/Duration/Usage Pricing

Cost for running 512MB Lambda functions for 2 seconds

This is the ideal model, directly related to consumption, and mostly what's referenced when cloud providers say "pay for what you use." It's popular with managed services like load balancers, serverless compute (Lambda), and CloudFront/CDN. The cost is directly related to the number of transactions and/or how long you run against these services. The diagram depicts the cost of running Lambda functions as requests increase.
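For Lambda specifically, the bill combines a per-request fee with a duration charge measured in GB-seconds (memory allocated times seconds run). The rates below match the published us-east-1 pricing around the time of writing, but verify current numbers before relying on them, and note the free tier is ignored here:

```python
# Published rates at the time of writing (verify before use):
PER_REQUEST   = 0.20 / 1_000_000  # $ per request ($0.20 per million)
PER_GB_SECOND = 0.0000166667      # $ per GB-second of execution

def lambda_cost(requests, memory_mb, duration_s):
    """Estimated Lambda bill, ignoring the monthly free tier."""
    gb_seconds = requests * (memory_mb / 1024) * duration_s
    return requests * PER_REQUEST + gb_seconds * PER_GB_SECOND

# 1 million invocations of a 512 MB function running 2 seconds each:
print(f"${lambda_cost(1_000_000, 512, 2):.2f}")
```

Notice that duration dominates: halving the function's runtime or memory roughly halves the bill, while the per-request fee is comparatively small.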

Tips

  1. There are volume discounts here, so the more transactions, the less you pay; consider consolidating billing and migrating other workloads to the same platform.
  2. Avoid frequent runs that yield similar results over a period of time; it might be a good idea to cache some of these results, especially with fairly static data.

Static Pricing

Yeap! There is static pricing in AWS. However, it rarely extends beyond a one-month range. For example, you get charged once monthly for each QuickSight author you provision: $9 or $18 depending on edition (Standard/Enterprise), regardless of whether those users logged in or not. It's the same case for AppStream users (application streaming), except that the charge kicks in only if a user logged in at least once. AppStream also charges by fleet size, whether someone is streaming or not; consider on-demand fleets, which take down the cost significantly (cold storage), at the expense of a longer startup time for a new streaming session. Load balancers, too, have an element of static cost, regardless of traffic.
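Because these charges accrue per provisioned user rather than per use, idle accounts cost real money. A quick sketch using the per-author QuickSight figures mentioned above (the headcount is made up):

```python
# Monthly per-author charges cited above: $9 Standard, $18 Enterprise,
# billed whether or not the user ever logs in.
STANDARD, ENTERPRISE = 9, 18

authors = 25  # hypothetical team size
print(f"Standard:   ${authors * STANDARD}/mo")
print(f"Enterprise: ${authors * ENTERPRISE}/mo")

# Ten of those authors going inactive on Enterprise still costs
# $180/month until someone deprovisions them.
```

This is why pruning inactive users, the first tip below, pays for itself quickly.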

Tips

  1. Be diligent in removing users who are no longer active. Also, grant users the minimum role they need, since the more powerful roles cost more. Actually, this is best practice regardless of cost.
  2. Consolidating ALBs to serve multiple domains may serve you well, but ensure you have a dependable change process.
  3. You need to be intently aware of these services; they tend to slip by unnoticed. They've got me several times, and they still do. Implement periodic reviews.
  4. If you are evaluating services, consider automation: tear down resources when they're not needed or accessed, and run the automation again to re-provision them when it's time to pick back up. If automation is time-consuming, a cold backup is far cheaper than a live EBS volume.


Peter Njihia

I'm a Cloud Architect/SRA/DevSecOps Engineer helping folks build and run in the cloud efficiently.