There have been about 1.3 zillion blog posts this week recapping the announcements from AWS re:Invent 2019, and of course we have our own spin on the topic. Looking primarily at cost optimization and cost visibility, a few cool new features were announced. None of them were quite as awesome as last month's Savings Plans announcement, but they are still worthy of note.
AWS Compute Optimizer
With AWS jumping feet-first into machine learning, it is no surprise that they turned it loose on instance rightsizing.
The Compute Optimizer is a standalone service in AWS, falling under the Management & Governance heading (yes, it is buried in the gigantic AWS menu). It offers rightsizing for the M, C, R, T, and X instance families and for Auto Scaling groups of a fixed size (i.e., with the same values for desired/min/max capacity). To use the service you must first opt in from each of your AWS accounts: navigate to AWS Compute Optimizer and click the “Get Started” button.
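If you manage many accounts, the console click-through gets old quickly. The opt-in can also be scripted; a minimal sketch with the AWS CLI (this assumes the CLI is already configured with credentials for the target account):

```shell
# Opt the current account in to Compute Optimizer
aws compute-optimizer update-enrollment-status --status Active

# Confirm the account is enrolled
aws compute-optimizer get-enrollment-status

# After metrics have accumulated, pull the rightsizing recommendations
aws compute-optimizer get-ec2-instance-recommendations
```

Note that recommendations only appear after the service has had some days of CloudWatch metrics to analyze, so the last command returns an empty list immediately after opting in.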
Interestingly, they promise a cost reduction of only “up to 25%”. This is probably a realistic yet humble claim, given that the savings from a single downsize within the same instance family is typically 50%. That said, the only way to get that 50% cost reduction is to install the AWS CloudWatch Agent on your instances and configure it to send memory metrics to CloudWatch. If you are not running the agent, then there are no memory metrics. Like ParkMyCloud rightsizing, in the absence of memory metrics the AWS Compute Optimizer can only make recommendations that change the CPU or network configuration while leaving memory constant. Hence the more modest potential 25% cost reduction.
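For illustration, the memory-metrics piece of a CloudWatch Agent configuration might look like the fragment below. This is a sketch of the relevant portion of the agent's JSON config file, not a complete configuration; collecting `mem_used_percent` is what gives the optimizer visibility into memory utilization:

```json
{
  "metrics": {
    "metrics_collected": {
      "mem": {
        "measurement": ["mem_used_percent"]
      }
    }
  }
}
```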
The best part? It is free! All in all, this feature looks an awful lot like ParkMyCloud rightsizing recommendations, though I believe we add a bit more value by making our recommendations more prominent in our console, not mixed in with 100+ other menu items… The jury is still out on the quality of the recommendations; watch for another blog post soon with a deeper dive.
Amazon EC2 Inf1 Instance Family
Every time you congratulate yourself on how much you have been able to save on your cloud costs, AWS comes up with a new way to help you spend that money you had “left over.” In this case, AWS has created a custom chip, the “Inferentia”, purposely designed to optimize machine learning inference applications.
Inference applications essentially take a machine learning model that has already been trained via a deep-learning framework like TensorFlow, and use that model to make predictions based on new data. Examples of such applications include fraud detection and image or speech recognition.
In the new Inf1 family, the Inferentia is paired with Intel® Xeon® CPUs to make a blazingly fast machine for this special-purpose processing. This higher processing speed lets you do more work in less time than you could with the previous instance type used for inference applications, the EC2 G4 family. The G4 is built around Graphics Processing Unit (GPU) chips, so it is easy to see how a purpose-built machine learning chip can be made a lot faster. AWS claims that the Inf1 family will have a “40% lower cost per inference than Amazon EC2 G4 instances.” That is a huge immediate savings, and the only work required is recompiling your trained model using AWS Neuron, which optimizes it for the Inferentia chip.
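As a rough sketch of what that recompilation step looks like for a TensorFlow model (this assumes the AWS Neuron SDK is installed; the model paths are hypothetical, and the exact API may differ by SDK version):

```python
# Sketch only: requires the AWS Neuron SDK's TensorFlow integration,
# which is installed on AWS Deep Learning AMIs or via the Neuron pip repo.
import tensorflow.neuron as tfn

# Compile an existing TensorFlow SavedModel for the Inferentia chip.
# "my_saved_model/" and "my_saved_model_neuron/" are placeholder paths.
tfn.saved_model.compile("my_saved_model/", "my_saved_model_neuron/")
```

The compiled model in the output directory is then loaded and served on an Inf1 instance just like an ordinary SavedModel.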
Next Generation Graviton2 Instances
The final cool cost-savings item is another new instance type that fits into the more commonly used M, C, and R instance families. These new instance types are built around another custom AWS chip (watch out, Intel and AMD…), the Graviton2. The Graviton chips are built around the ARM processor design, more commonly found in smartphones and the like. Graviton was first released last year in the A1 instance family and, honestly, we have not seen too many of them pass through the ParkMyCloud system. Since the Graviton2 is built to support M, C, and R, I think we are much more likely to see widespread use.
Looking at how they perform relative to the current M5 family, AWS described the following performance improvements:
- HTTPS load balancing with Nginx: +24%
- Memcached: +43% performance, at lower latency
- X.264 video encoding: +26%
- EDA simulation with Cadence Xcellium: +54%
Overall, the new instances offer “40% better price performance over comparable current generation instances.”
The new instance types will be the M6g and M6gd (“g”=Graviton, “d”=NVMe local storage), the C6g and C6gd, and the R6g and R6gd. The new family is still in Preview mode, so pricing is not yet posted, but AWS is claiming a net “20% lower cost and up to 40% higher performance over Amazon EC2 M5 instances, based on internal testing of workloads.” We will definitely be trying these new instance types when they are released in 2020!
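One practical note for anyone planning to try them: Graviton instances are ARM-based, so they need an arm64 AMI rather than the usual x86_64 one. A hedged sketch of finding one with the AWS CLI (the owner and name filters here are illustrative, matching Amazon Linux 2 images):

```shell
# Find the most recent Amazon Linux 2 arm64 AMI in the current region.
# The architecture filter is the key part for Graviton instances.
aws ec2 describe-images \
  --owners amazon \
  --filters "Name=architecture,Values=arm64" \
            "Name=name,Values=amzn2-ami-hvm-*" \
  --query "Images | sort_by(@, &CreationDate)[-1].ImageId"
```

The resulting AMI ID can then be passed to `aws ec2 run-instances` with an `--instance-type` of, say, `m6g.large` once the family is generally available.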
All in all, there were no real HUGE announcements that would impact your costs, but baby steps are OK too!
Originally published at www.parkmycloud.com on December 13, 2019.