What's missing in your cloud optimization projects

The cool kids are into cloud optimization these days, saving money and making deployments much more efficient. Here are a few often-overlooked approaches you should know about.

The concept of cloud optimization is emerging from concerns that many businesses are not getting the value they expected out of cloud computing. Simply put, companies are discovering that their existing cloud-hosted systems need to undergo “optimization.” This can range from refactoring code to ensure processor and storage efficiency, to finding new and more cost-effective cloud platforms, to, in some cases, returning the applications and data to where they came from. That last option mostly means on-premises repatriation, essentially hitting the reset button.

As I watch many of these projects in flight right now, I’m seeing some concerning patterns: Many enterprises are not considering certain optimization approaches, and they should be. These often-overlooked items can leave millions of dollars of cloud optimization savings on the table. Let’s look at a few.

Cloud optimization requires careful consideration of the resources needed, including how they are utilized and allocated. This should be obvious, but it’s the single thing I see overlooked most often. This type of optimization right-sizes resource consumption for maximum efficiency and effectiveness.

Analysis should focus on performance metrics and usage patterns. Avoid overprovisioning, which leaves resources underutilized and incurs unnecessary additional expense. Right-sizing may even mean moving applications to another platform, such as on-premises data centers, where the cost of computing and storage has been dropping like a rock over the past several years.
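As a concrete starting point, here is a minimal sketch that flags underutilized compute, assuming an AWS environment with boto3 credentials configured. The two-week window and the 20% CPU threshold are illustrative assumptions, not recommendations, and a production version would also handle pagination and look at memory and network metrics.

```python
# Minimal sketch: flag EC2 instances whose average CPU stayed low
# over the past two weeks -- candidates for right-sizing.
# Assumes boto3 is configured; the 20% threshold is illustrative.
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(days=14)

# Walk all running instances and pull their average CPU utilization.
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=start,
            EndTime=end,
            Period=86400,          # one datapoint per day
            Statistics=["Average"],
        )
        datapoints = stats["Datapoints"]
        if not datapoints:
            continue
        avg_cpu = sum(dp["Average"] for dp in datapoints) / len(datapoints)
        if avg_cpu < 20.0:  # illustrative right-sizing threshold
            print(f"{instance_id}: {avg_cpu:.1f}% avg CPU -- consider downsizing")
```

The same pattern works on the other major clouds through their respective monitoring APIs.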

Autoscaling capabilities allow you to increase or decrease the number and type of resources you’re leveraging, such as storage and computing, depending on demand. The mechanism works through rules based on metrics such as CPU utilization, storage utilization, and network traffic, so only the resources required are assigned.

Most enterprises don’t use the autoscaling features of cloud-based platforms and tend to overprovision the resources they need, treating cloud platforms more like traditional computing platforms. Cloud providers offer autoscaling features that should be enabled as part of any cloud optimization project.
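To make this tangible, here is a hedged sketch that attaches a target-tracking scaling policy to an existing AWS Auto Scaling group. The group name "web-tier" and the 50% CPU target are hypothetical; equivalent features exist on the other major clouds.

```python
# Minimal sketch: attach a target-tracking scaling policy to an existing
# Auto Scaling group so capacity follows average CPU utilization.
# Assumes boto3 is configured; the group name and target are illustrative.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier",          # hypothetical group name
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,                  # keep average CPU near 50%
    },
)
```

With a policy like this in place, the group adds instances when average CPU runs above the target and removes them when demand falls, instead of sitting at a fixed, overprovisioned size.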

For long-term and predictable workloads, reserved instances offer significant cost savings compared to on-demand pricing. Spot instances cost even less by leveraging unused capacity, but they are not suitable for critical workloads because they can be reclaimed on short notice. By now, you should understand your patterns of use and whether reserved instances will work for you. Hundreds of thousands of dollars are routinely wasted when these cost-saving opportunities are not considered.
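A quick back-of-the-envelope calculation shows the stakes. The hourly rates and fleet size below are purely illustrative assumptions, not current list prices; substitute your own numbers from your provider’s pricing pages.

```python
# Back-of-the-envelope comparison of on-demand vs. reserved pricing
# for a steady workload. The rates below are illustrative, not list prices.
HOURS_PER_YEAR = 8760

on_demand_rate = 0.192      # $/hour, hypothetical on-demand rate
reserved_rate = 0.120       # $/hour, hypothetical 1-year reserved rate
instance_count = 40         # steady, predictable fleet

on_demand_cost = on_demand_rate * HOURS_PER_YEAR * instance_count
reserved_cost = reserved_rate * HOURS_PER_YEAR * instance_count

print(f"On-demand: ${on_demand_cost:,.0f}/year")
print(f"Reserved:  ${reserved_cost:,.0f}/year")
print(f"Savings:   ${on_demand_cost - reserved_cost:,.0f}/year "
      f"({1 - reserved_rate / on_demand_rate:.0%})")
```

Even at these modest illustrative rates, a 40-instance fleet saves roughly $25,000 a year; across a large portfolio of workloads, the unclaimed savings reach the hundreds of thousands.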

Finally, minimize storage costs in the cloud by choosing storage classes appropriate to access frequency and retrieval-time requirements. Object storage services such as Amazon S3 or Google Cloud Storage can hold infrequently accessed data at lower cost. Setting data life-cycle policies that transition aging data to cheaper tiers or delete it automatically helps meet retention requirements while minimizing cost impact.
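For example, here is a minimal sketch of an S3 lifecycle rule using boto3. The bucket name, prefix, transition days, and storage classes are illustrative assumptions and should be matched to your actual retention requirements.

```python
# Minimal sketch: an S3 lifecycle rule that moves aging objects to
# cheaper storage classes and then deletes them. Bucket, prefix, and
# day counts are illustrative -- match them to your retention policy.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",          # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-then-expire-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [
                    # Rarely read after 30 days: move to Standard-IA.
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    # Archive after 90 days.
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                # Delete once the retention window closes.
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```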

None of these are earth-shattering suggestions; they are fairly simple ones that can be leveraged now and are proven to bring value.
