Cloud Economics: AWS Cost Optimization Checklist
As a midsize business, we appreciate effective cost management. With a decade of AWS cloud hosting experience, we’ve learned a great deal about cloud economics. We’ve seen wasted spend on more than one occasion (some of it our own), which is exactly the experience behind the tips we share here. We’ve created this AWS cost optimization checklist, which we hope will guide you to a smarter budget and lower cloud costs.
AWS is based on a pay-as-you-go pricing model. If you know what you need, you can keep costs down; if you’re flying blind, you’ll probably pay more. AWS’s strategy is to let customers choose the pricing model that fits: On-Demand is the “standard” rate, while pre-paid options and Spot Instances provide lower costs.
Reserved Instances (RIs) require committing, and often paying, up front for a 1- or 3-year term; the longer the term, the deeper the discount. Because you’re locking in a specific deal, you are trading flexibility for savings. As such, you should only buy Reserved Instances for a realistic duration. That is, don’t overprovision by reserving instances long-term if you won’t need them. Fortunately, if you buy an RI you no longer need, you can sell unused Standard RIs on the Reserved Instance Marketplace, or shop there for third-party RIs with shorter remaining terms.
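To make the flexibility-for-savings trade-off concrete, here is a minimal break-even sketch. The hourly rate and upfront price are hypothetical placeholders, not real AWS prices:

```python
# Break-even sketch for a 1-year all-upfront Reserved Instance vs On-Demand.
# The prices below are illustrative placeholders, not published AWS rates.

HOURS_PER_YEAR = 8760

def breakeven_utilization(on_demand_hourly: float, ri_yearly_upfront: float) -> float:
    """Fraction of the year an instance must run before the RI is cheaper."""
    return ri_yearly_upfront / (on_demand_hourly * HOURS_PER_YEAR)

# Example: $0.10/hr On-Demand vs a $500 all-upfront 1-year RI.
util = breakeven_utilization(on_demand_hourly=0.10, ri_yearly_upfront=500.0)
print(f"RI pays off above {util:.0%} utilization")  # RI pays off above 57% utilization
```

If a workload runs well below that utilization, the RI never pays for itself, which is the overprovisioning trap described above.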
For fault-tolerant, stateless, or short-lived workloads, you can also save money using Spot Instances. Spot Instances let you take advantage of unused Amazon EC2 capacity, offering up to a 90% discount compared to On-Demand prices; the trade-off is that AWS can reclaim the capacity on short notice, so they aren’t suited to workloads that can’t tolerate interruption.
Another way to lower costs once you’re up and running in the cloud is to identify low-utilization instances and rightsize them.
Choosing the right instances is easily one of the most important items on the AWS cost optimization checklist!
AWS Savings Plans
Amazon introduced another pricing model, Savings Plans, which offers a cost-effective alternative. Savings Plans lower prices on Amazon EC2 usage regardless of instance family, size, OS, tenancy, or AWS Region. Like Reserved Instances, they offer significant savings over On-Demand in exchange for a commitment to a consistent amount of compute usage (measured in dollars per hour) over a one- or three-year term.
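The dollars-per-hour commitment works roughly as follows: you pay the commitment every hour whether you use it or not, usage is billed at discounted rates until it exhausts the commitment, and anything beyond that spills over to On-Demand pricing. A minimal sketch, with an assumed (not published) discount rate:

```python
# Simplified Savings Plan billing model. The 30% discount is an
# illustrative assumption; real rates vary by instance family and Region.

def hourly_cost(usage_od: float, commitment: float, discount: float) -> float:
    """usage_od: On-Demand-equivalent $/hr of compute consumed.
    commitment: Savings Plan commitment in $/hr (paid even if unused).
    Usage is charged at the discounted rate until it exhausts the
    commitment; the remainder is billed at On-Demand rates."""
    sp_rate_usage = usage_od * (1 - discount)
    if sp_rate_usage <= commitment:
        return commitment                 # commitment is billed regardless
    covered_od = commitment / (1 - discount)   # On-Demand value the plan covers
    return commitment + (usage_od - covered_od)

# $6/hr of On-Demand-equivalent usage against a $4/hr commitment at 30% off:
print(round(hourly_cost(usage_od=6.0, commitment=4.0, discount=0.30), 2))  # 4.29
```

Note the flip side of the discount: with only $2/hr of usage, the cost is still the full $4/hr commitment, which is why a Savings Plan should be sized to your steady-state baseline, not your peak.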
One of the areas where we’ve seen significant wasted expense is storage. Like the detritus on your local hard drive, you may be storing more than you need, so auditing WHAT you’re storing is a good first step. In addition, you can analyze Amazon S3 usage and reduce your costs by leveraging lower-cost storage tiers. S3 offers several classes of object storage, so it’s important to know when and why to use each.
You may want to explore object lifecycle management. To optimize the cost of your data storage, you can automatically transition data between storage classes. For example, you can automatically move data from S3 Standard to S3 Standard-Infrequent Access after a set period, archive data to Glacier after 90 days, or set up a delete policy to expire specific objects after 180 days.
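The lifecycle policy described above can be expressed as a configuration in the shape that boto3’s `put_bucket_lifecycle_configuration` accepts. This is a sketch: the 30-day Standard-IA transition, the `logs/` prefix, and the bucket name are assumptions for illustration.

```python
# Lifecycle rules matching the policy sketched above: tier down over time,
# then expire. Prefix, transition days, and bucket name are placeholders.
lifecycle_config = {
    "Rules": [
        {
            "ID": "tier-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},            # apply only to this key prefix
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},  # assumed 30-day cutoff
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 180},              # delete after 180 days
        }
    ]
}

# With boto3 this would be applied roughly like so (requires credentials):
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="example-bucket", LifecycleConfiguration=lifecycle_config)
print(lifecycle_config["Rules"][0]["ID"])  # tier-then-expire
```

Once the rule is in place, S3 applies the transitions automatically; no further housekeeping scripts are needed.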
Delete any unattached EBS volumes. Terminating an EC2 instance doesn’t necessarily delete its associated EBS volumes (by default, only the root volume is removed), which leaves orphaned volumes that keep accruing charges. Deleting them reduces costs.
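Unattached volumes are easy to spot programmatically: in `describe_volumes` output they report a `State` of `available`, while attached volumes are `in-use`. Here is a small sketch that filters a response-shaped list; the volume IDs are made up:

```python
# Find EBS volumes not attached to any instance. Unattached volumes have
# State == "available" in EC2 describe_volumes output.

def unattached_volumes(volumes: list[dict]) -> list[str]:
    """Return the IDs of volumes that are not attached to any instance."""
    return [v["VolumeId"] for v in volumes if v["State"] == "available"]

sample = [
    {"VolumeId": "vol-111", "State": "in-use"},
    {"VolumeId": "vol-222", "State": "available"},   # candidate for deletion
]
print(unattached_volumes(sample))  # ['vol-222']

# Live equivalent (requires credentials); the API can filter server-side:
# import boto3
# ec2 = boto3.client("ec2")
# resp = ec2.describe_volumes(
#     Filters=[{"Name": "status", "Values": ["available"]}])
```

Review the candidates (or snapshot them first) before deleting, since this is irreversible.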
When mapping a migration, it’s worth investing time in reviewing your software options, and once in the cloud, it’s worth considering further migration within the cloud. We have worked with clients to find less expensive alternatives for software and operating systems: Linux is less expensive than Windows, MySQL is less expensive than SQL Server, and open-source software is generally more affordable than licensed software.
Scheduling & Auto Scaling
It’s helpful to identify low-utilization instances; once identified, you can reduce costs by stopping them. In addition, you can schedule instances to shut down during off-peak hours, or let auto scaling handle it. Scheduled auto scaling lets your infrastructure (including the application layer) seamlessly scale out and in based on load. In Amazon ECS, for example, each cluster can have capacity providers that manage the infrastructure its tasks run on, and your capacity provider strategy determines how tasks are spread across the capacity providers in a cluster. When you run a task or create a service, you can use the cluster’s default capacity provider strategy or specify an override.
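For plain EC2 Auto Scaling groups, off-peak scheduling can be expressed as scheduled actions. The sketch below builds the parameters that `put_scheduled_update_group_action` expects; the group name, action names, and cron times are placeholders for illustration:

```python
# Business-hours schedule for an Auto Scaling group: scale out at 08:00 UTC
# on weekdays, scale in to zero overnight. Names and times are hypothetical.

def scheduled_action(name: str, cron: str, min_size: int, max_size: int,
                     desired: int) -> dict:
    """Build the parameters for one put_scheduled_update_group_action call."""
    return {
        "AutoScalingGroupName": "example-asg",   # placeholder group name
        "ScheduledActionName": name,
        "Recurrence": cron,                      # cron expression, evaluated in UTC
        "MinSize": min_size,
        "MaxSize": max_size,
        "DesiredCapacity": desired,
    }

scale_out = scheduled_action("business-hours-up", "0 8 * * MON-FRI", 2, 6, 2)
scale_in = scheduled_action("overnight-down", "0 20 * * MON-FRI", 0, 0, 0)

# Applied with boto3 (requires credentials):
# import boto3
# autoscaling = boto3.client("autoscaling")
# autoscaling.put_scheduled_update_group_action(**scale_out)
# autoscaling.put_scheduled_update_group_action(**scale_in)
print(scale_in["DesiredCapacity"])  # 0
```

Scaling to zero overnight means you pay for compute only during the hours the workload is actually used.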
Migrating from on-premises or traditional data center infrastructure to the cloud requires planning. Rather than winging it, we highly recommend that you seek expertise in both migration and cloud infrastructure; suffice it to say, the learning curve is steep. We offer our Cloud Migration Checklist to help, and you’ll see that the work is intricate. Cloud migration is, first and foremost, an investment in becoming more agile, reliable, and elastic, and when executed properly, it provides a path to lower costs. We urge you not to underestimate your planning phase! A further advantage of planning is finding places to lower other costs (like the licensing mentioned previously) and optimize workflows.
If you’re using AWS, Amazon provides three valuable tools to help analyze your costs: Cost Explorer, Trusted Advisor, and AWS Budgets.
AWS COST OPTIMIZATION CHECKLIST
Historical Analysis (Compute | Storage)
- Review current utilization.
- Use Cost Explorer to track running hours and usage.
- Review historical utilization trends.
- Review your expense forecast with Trusted Advisor.
- Seek out unused resources with Trusted Advisor.
- Create budgets for your resources.
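Budgets can also be created programmatically. The sketch below builds a monthly cost budget in the shape that the AWS Budgets `create_budget` API expects; the budget name, limit, and account ID are placeholders:

```python
# A monthly cost budget in the structure AWS Budgets' create_budget accepts.
# Name, amount, and account ID below are hypothetical placeholders.
budget = {
    "BudgetName": "monthly-cloud-budget",
    "BudgetLimit": {"Amount": "1000", "Unit": "USD"},  # $1000/month cap
    "TimeUnit": "MONTHLY",
    "BudgetType": "COST",
}

# Applied with boto3 (requires credentials and your real account ID):
# import boto3
# budgets = boto3.client("budgets")
# budgets.create_budget(AccountId="123456789012", Budget=budget)
print(budget["BudgetName"])  # monthly-cloud-budget
```

Pairing a budget like this with notifications gives you an early warning before spend drifts past plan.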
Compute Resource Planning
- Purchase instances that suit your needs.
- Consider Reserved Instances with appropriate duration.
- Consider Spot Instances for interruption-tolerant, non-production workloads.
- Consider Savings Plans.
- Monitor your instances regularly (weekly/monthly).
- Schedule resource utilization.
- Schedule instances to run only during business hours; if you don’t need them running at certain hours, turn them off.
- Schedule your resources based on their activity, and shut them down when idle.
- Choose (more economical) open-source operating systems.
- Delete old snapshots, unallocated disk volumes, unnecessary objects & buckets.
- Store infrequently used data on S3 and move it between tiers according to its activity.
- Create a data lifecycle policy.
- Archive less active resources on Glacier.
- Archive data backups long-term on Glacier Deep Archive.
- Run high-workload applications on Amazon EFS to take advantage of its elastic scaling.
- Employ auto scaling to allocate storage for the workload.
If you have additional questions about cloud migration or cost optimization on AWS, please contact us.