In our journey to the public cloud, we have the ability to rapidly and programmatically spin up resources. Those resources, however, are certainly not free. With the great agility the public cloud has brought to our application infrastructure, having clear and concise billing details for our workloads should, in theory, be easy. Yet when we dive into our bills, especially across multiple workloads, cloud environments, and teams, we start to lose visibility into our exact workload costs.
In the midst of this complexity arises the need for robust FinOps tools. A FinOps tool serves as a bridge between the agility of public cloud usage and financial operations. Such a tool provides an organized, analytical, and actionable view of cloud expenditure, helping teams make sense of the multitude of charges stemming from their cloud resources.
The major public cloud vendors all have cloud billing APIs, though if you already view your bill in a visual format, a JSON version of the same bill might not provide you any more insight. The first step in understanding your spend is getting access to your bill in the first place; depending on your organization, the billing details might be hidden from you.
The major cloud providers all provide you some sort of billing dashboard.
Your total bill might be calculated through some sort of hierarchy. The Microsoft Azure Billing Management documentation, for example, provides insight into how multiple accounts can be billed and rolled up into a hierarchy.
Let's take a look at a simpler example. Below is my AWS bill after running one of our Virtual Harness Universities. We can estimate that every time we run the class, we incur about $200 in cloud spend.
How I go about calculating cost is pretty rudimentary: take a look at the spend the day before and the day after, and hope that no one else is in my account racking up charges. Cloud bills are a lot like credit card bills; they tell you what happened but not why it happened. Reconstructing the "why" after the fact is challenging, especially if you don't have an excellent grasp of all the moving pieces and billing intricacies, which is one reason cloud bills are hard to understand.
When looking for a change, I interviewed in early 2014 to be a solutions architect at Amazon Web Services [I ended up at Red Hat]. Part of the interview process at the time was to create a sample architecture and stack for a fake company. You were given an AWS voucher sized so you could create, run, and explain what you built without incurring a cost yourself.
So as not to raise any flags with my employer at the time, I created my own AWS account. Not being green to AWS, I was able to create a novel architecture and demonstrate why I made certain choices. I was, however, green to AWS billing; my firm at the time did not expose billing to engineers, and my personal account was my first time interacting with a bill.
When the interview process was over, I thought I had turned off all of my services, terminating every piece of infrastructure. A few months went by, my voucher was exhausted, and small bills started to hit my credit card. I was billed a few dollars a month, yet when I logged into the console, no services were present.
I opened up a ticket with AWS, but it went into the ether, so it was up to me to figure this out or completely terminate my account. The bill was pretty cryptic to me. What I was being billed for, it turned out, was an Elastic IP that had been spun up for an item I created for the interview but never removed. After a few months I was at wit's end, about to dispute the credit card charge with American Express.
As you start working with new and exciting services, you will get billed for those pieces. Sometimes not all of the items get cleaned up when you terminate services. There is no such thing as a free lunch; the graphic below is a reminder of everything you pay for.
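The Elastic IP story above generalizes: resources you allocated but no longer use keep billing until you release them. As a minimal sketch, the AWS CLI can list Elastic IPs with no association (addresses that bill hourly while attached to nothing); the JMESPath filter below is illustrative, and the printed command should be run with your own credentials configured.

```shell
# Build the CLI call that lists unassociated Elastic IPs. An allocated
# address with no AssociationId is attached to nothing but still bills.
QUERY='Addresses[?AssociationId==`null`].PublicIp'

# Print the command rather than running it here, since it needs
# credentials; paste it into a configured shell to see orphaned IPs.
echo "aws ec2 describe-addresses --query '$QUERY' --output text"
```

Releasing anything that command returns (after confirming it really is unused) stops the recurring charge.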
One way to add dimensions to your billing data is to query your cloud provider's billing API directly, since the billing API is what the billing visualizations themselves are based on.
Certainly, improvements have been made in the last several years as more workloads have been placed on the public cloud, driving the need for more robust billing APIs. AWS provides an example of how to formulate a query leveraging AWS Cost Explorer. Because of the security signatures needed, leveraging the AWS Command Line Interface [CLI] or an SDK is the more direct way to query. HTTP calls outside the CLI or SDK would require formulating a signature that includes a hash of the canonical request, which is what the CLI and SDK take care of for you under the covers.
Remember the lack of a free lunch from the previous section? Querying the AWS Cost Explorer API does incur a cost, which at the time of this blog is $0.01 per API call. When building out your query, you will need to make sure you are getting what you need: be granular up front, or keep the query fairly macro if you plan to process the results after the call.
Once installed, you can run "aws configure" and enter your access key, secret key, and, if you like, a default AWS Region. With those items entered, you are off to the races with the AWS CLI.
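Under the hood, "aws configure" simply writes two small INI files in your home directory. As a sketch, here is the equivalent done by hand; the key values below are AWS's documented example placeholders, not real credentials.

```shell
# Recreate what "aws configure" writes: a credentials profile and a
# config profile. Values here are placeholder examples only.
mkdir -p "$HOME/.aws"

cat > "$HOME/.aws/credentials" <<'EOF'
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
EOF

cat > "$HOME/.aws/config" <<'EOF'
[default]
region = us-east-1
EOF
```

Knowing the file layout is handy when scripting CLI access on machines where you cannot run the interactive prompt.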
The AWS Cost Explorer commands are invoked with "aws ce <expression>". A Cost Explorer query can follow the format below and leverage a filters.json for more specific insights.
For me, the challenging part was making sure I had the proper AWS service as a dimension, along with the corresponding infrastructure/services. I wanted to query EKS spend, which includes charges for the underlying EC2 infrastructure, so this query was not representative of the more realistic spend. You can start to see where the challenges add up, especially the need to correlate spend without intricate tagging. This is exactly what we aimed to solve with Cloud Cost Management.
Back to the Harness University example, the most direct way to reduce spend is to reduce the amount of infrastructure you need. That is easier said than done. We would get an alert, and the autoscaling group for the worker node pool would trigger, if we went above our utilization threshold; in the inverse case, when we were underutilized, there was not a chirp, letting the over-provisioning live on.
To see the pressure on our cluster, I was leveraging the Kubernetes Metrics Server and running "kubectl top nodes". I was substantially over-deployed in this case.
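To make "substantially over-deployed" concrete, here is a small sketch of the arithmetic: the numbers are hypothetical, standing in for what "kubectl top nodes" reports as CPU in use versus what the node makes allocatable.

```shell
# Hypothetical numbers: a node using 480 millicores of CPU out of
# 4000 millicores allocatable. Utilization = used / allocatable.
USED_MILLICORES=480
ALLOCATABLE_MILLICORES=4000

awk -v u="$USED_MILLICORES" -v a="$ALLOCATABLE_MILLICORES" \
  'BEGIN { printf "cpu utilization: %.0f%%\n", u / a * 100 }'
# prints "cpu utilization: 12%"
```

At 12% utilization, roughly seven-eighths of the node is paid for but idle, which is exactly the silent over-provisioning the autoscaler never chirped about.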
Trying to correlate resources in use with application spend was difficult because of the challenge of aggregating those metrics together. Drilling into Cloud Cost Management, I am able to see usage and spend information in one spot.
Zooming out in the granularity, Cloud Cost Management allows us to get an enterprise-level dashboard of spend across our many applications/clusters.
The Harness Platform continues to grow in capability, not only as a Continuous Delivery platform but also as a platform that can help reduce your cloud costs.
Harness is here and ready to partner with you to help further your Continuous Delivery and cloud cost efficiency goals. As a system of record for your deployments, Harness can correlate those deployments and running applications and help track the cost of those applications. In my example, instead of drilling into the AWS Cost Explorer, you can simply wire AWS into Harness Cloud Cost Management to start gaining great insights quickly. Feel free to sign up for a Harness account today and give us a shout to take a look at Cloud Cost Management.
Enjoyed reading this blog post or have questions or feedback?
Share your thoughts by creating a new topic in the Harness community forum.