Amazon Web Services: M5 vs M5a vs M6g

AWS M5 / M5a / M6g Benchmark

In other words: Intel vs AMD vs ARM. AWS recently released its Graviton series across its main instance types: R6g for extended memory, C6g for compute-optimized and M6g for general purpose workloads. The offering has historically been based on Intel; in recent years AMD joined the catalog, and now, with Graviton 2, AWS builds instances on its own chips.

Amazon Web Services presents its Graviton processors as a new option for customers to increase performance at a lower cost. But what is the difference between these solutions, which look identical on paper? Let's run a CPU benchmark to find out.

Specification

For our benchmark, we took an 8 CPU / 32 GB VM from each series:

Product       Price ($/h)   CPU                                            Frequency
m5.2xlarge    0.38          Intel Xeon Platinum 8175M or Platinum 8259CL   2.5-3.2 GHz
m5a.2xlarge   0.34          AMD EPYC 7571                                  2.4-3.0 GHz
m6g.2xlarge   0.31          aarch64 (Graviton 2)                           N/A

These data were collected by our Cloud Transparency Platform; prices are for the us-east (N. Virginia) region.

Performance testing

Prime number search with sysbench CPU

The sysbench CPU test can be categorized as integer arithmetic: it searches prime numbers up to a configurable limit.
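To illustrate the nature of this workload, here is a minimal Python sketch of a trial-division prime search. sysbench itself is written in C and spreads this kind of loop across the requested number of threads, so the snippet is only an illustration, not the benchmark code:

import math
import time

def count_primes(limit=10000):
    # Trial-division primality check, similar in spirit to sysbench's CPU test
    count = 0
    for n in range(2, limit + 1):
        for d in range(2, int(math.sqrt(n)) + 1):
            if n % d == 0:
                break
        else:
            count += 1
    return count

start = time.perf_counter()
count_primes()
print(f"one pass: {time.perf_counter() - start:.3f}s")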

We observe an increase of about 100% in single-thread, and close to 400% between M5 and M6g with 8 threads.

Encryption with AES-256 CBC

Where AMD's performance depends on block size, Intel and Graviton are homogeneous across sizes. The ARM chip encrypts at 1.2 GB/s, where the M5 and M5a cap at roughly 400 MB/s and 900 MB/s respectively; in other words, the M6g is about 3x faster than the M5 and about 1.3x faster than the M5a.
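For reference, AES-256-CBC throughput can be approximated with a short script like the one below. It is a minimal sketch relying on the third-party cryptography package, with an arbitrary 64 MB buffer; our published numbers come from our own tooling, not from this exact snippet:

import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes  # pip install cryptography

key, iv = os.urandom(32), os.urandom(16)      # 256-bit key + CBC IV
data = os.urandom(64 * 1024 * 1024)           # 64 MB of plaintext

encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
start = time.perf_counter()
encryptor.update(data)
encryptor.finalize()
elapsed = time.perf_counter() - start
print(f"~{len(data) / elapsed / 1e6:.0f} MB/s")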

Price

Product       Hourly   Monthly (est.)   Yearly (est.)   Discount
m5.2xlarge    0.38     280              3,360            -
m5a.2xlarge   0.34     251              3,012            -11%
m6g.2xlarge   0.31     224              2,688            -22%

Monthly estimates are based on 730 hours, yearly on 8,760 hours, without any long-term subscription discount.

Pricing leaves no doubt: each new generation offers a lower cost, and M6g is the cheapest.

Conclusion

Depending on your workload, Graviton offers up to +400% performance compared to its Intel counterpart. Combined with lower pricing, M6g is definitely the best EC2 choice for any CPU-bound workload compatible with the ARM architecture.


Check out data in our Public Cloud Reference


Understand Object storage by its performance

How to qualify Object Storage performance

Nowadays, anyone who wants to store cool or cold data smartly will be guided toward an Object Storage solution. This cloud model has replaced many usages such as old FTP servers, backup storage or static website hosting. The keywords here are "low price", "scalability" and "unlimited". But as with compute, not all Object Storage offerings are equal, first in terms of price, then in performance.

What qualifies Object Storage performance?

Latency

Depending on your architecture, latency can be a key factor for workloads involving small blobs. A common example is static website hosting: the average file size won't exceed 1 MB, so you expect files to be received by clients almost instantly.

Keep in mind that an Object Storage bucket is (generally) served from a single location, so for inter-continental connections it's recommended to pair it with a CDN. The table below shows worldwide averages of time-to-first-byte (TTFB) toward storage services:

Africa Asia China Europe N. America Pacific S. America
Asia 1.393 1.264 1.065 0.812 0.899 1.233 1.272
Europe 0.874 0.820 0.957 0.214 0.490 0.996 0.768
N. America 1.343 0.934 1.164 0.635 0.325 0.870 0.652
Pacific 2.534 1.094 1.117 1.763 1.161 0.760 1.570

TTFB in seconds
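As a rough illustration, TTFB toward an endpoint can be approximated with a few lines of Python. The URL below is a placeholder, and this naive version also includes DNS resolution and TLS setup in the measurement; our own numbers come from the Cloud Transparency Platform:

import time
import urllib.request

# Placeholder: point this at a small object in your bucket or behind your CDN
URL = "https://example-bucket.example-provider.com/1byte.bin"

start = time.monotonic()
with urllib.request.urlopen(URL) as resp:
    resp.read(1)  # return as soon as the first byte arrives
print(f"TTFB ~ {time.monotonic() - start:.3f}s")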

Bandwidth

If you work with large objects, bandwidth is a more relevant metric. This is especially visible in Big Data architectures: thanks to their low storage costs, Object Storage services are well suited to storing huge datasets, but between remote and local storage, network bandwidth is the main bottleneck.

As with latency, the factor is twofold: both client and server networks count, and at this game clouds aren't equal. Server-side bandwidth can be throttled at different layers:

  • Per connection: a maximum bandwidth is set for each incoming request
  • Per bucket: each bucket is limited
  • Per service: the limit is global for the tenant or for each deployed Object Storage service

Bucket scalability

While Object Storage often appears as a simple filesystem available over HTTP, under the hood many technical constraints appear for the cloud provider. Buckets are presented as more or less unlimited flat blob containers, but several factors can make your performance vary:

  • The total number of objects in your bucket
  • The total size of objects in your bucket
  • The name of your objects, especially the prefix

Burst handling

Something never presented on landing pages is the capacity to handle a high load of connections. Here again the market isn't homogeneous: some vendors sustain traffic spikes worthy of a DDoS, others show decreasing performance or simply return HTTP 429 Too Many Requests.

The solution may be simply to balance the load across services/buckets, or to use a CDN, which is more appropriate for intensive HTTP workloads.

Conclusion

There's no rule of thumb to tell whether an Object Storage service performs well from its specification alone. Even if providers use standard software such as Ceph, the hardware and configuration create a unique solution with its own constraints and advantages. That's why performance testing is always required to understand a product's profile.

New C5a benchmark: Performance/Price

AWS recently released the new C5a series, equipped with a custom AMD EPYC 7R32. It is a less expensive alternative to C5, similar to what AWS did with M5, R5 and T3. But cost isn't a meaningful metric if you don't take performance into account, so let's dive into a performance/price benchmark comparing C5 and C5a.

A lower pricing

Name      CPU   RAM (GB)   C5 ($/h)   C5a ($/h)
large 2 4 0.085 0.077
xlarge 4 8 0.170 0.154
2xlarge 8 16 0.340 0.308
4xlarge 16 32 0.680 0.616
9xlarge 36 72 1.530 1.232
12xlarge 48 96 2.040 1.848
18xlarge 72 144 3.060 2.464
24xlarge 96 192 4.080 3.696
metal 96 192 4.080

Pricing is for US East (Ohio).

Slightly better performance

Before opening the hood, there are two things to keep in mind about C5. First, CPU performance is highly variable: behind the product name, several CPU models are sold, and we actually collected the following:

  • Intel(R) Xeon(R) Platinum 8124M
  • Intel(R) Xeon(R) Platinum 8275CL

Like the new AMD EPYC 7R32, both are custom models only available at AWS. Second, the same CPU model can run at different frequencies. Cloud providers generally pin their CPUs at base or turbo frequency; for the Platinum 8124M, we detected values from 3 GHz up to 3.45 GHz.

Geekbench 5

Kind c5.large c5a.large
Single score 934 909
Single Integer 902 815
Single Float 949 969
Single Crypto 1267 1782
Multi score 1115 1168
Multi Integer 1049 1067
Multi Float 1200 1256
Multi Crypto 1470 1952

From a Geekbench perspective, C5a excels especially in cryptography, which is not to be underestimated: nowadays encryption is used everywhere, from volumes to HTTP connections and backends. The other domains are also more efficient, but not by a huge gap.

sysbench RAM

c5.large c5a.large
Read 8201 9139
Write 6134 7091

RAM bandwidth is a good indicator of noisy neighbors, and as C5a has just been released, its values have a higher chance of looking good. We'll therefore also check regularly whether C5 and C5a can still claim the same throughput.

Performance/Price

Looking at the results below and knowing C5a prices are about 10% lower, it's no surprise that C5a has the better profile in terms of performance per dollar spent.

Type Hourly Monthly Multi score Perf/price ratio
c5.large 0.085 62.05 1115 17.97
c5a.large 0.077 56.21 1168 18.82

Monthly price is calculated from 730 hours.
The perf/price ratio equals "Multi score / Monthly price"; for example, for c5.large: 1115 / 62.05 ≈ 17.97.

Conclusion

With this new custom CPU model, AWS lowers its pricing again while slightly increasing performance. With the previous C5 we observed a lot of performance variation, so it wouldn't be a surprise if future tests pulled the average performance up or down.

As a full series cannot be described by its smallest instance type, we also tested bigger flavors. Feel free to consult their performance on our Public Cloud Reference.

AWS and the volume equation

Despite being one of the most used block storage solutions worldwide, Amazon's General Purpose SSD is far from being a general, versatile solution. Unlike other providers, which sell volumes based on device type and an hourly price per gigabyte, AWS chose to create products tailored to usages.

EBS : The Block Storage solutions

Behind the name Elastic Block Store, 5 storage classes are available:

  • Magnetic: the historical Block Storage solution from AWS. As its name indicates, this product is backed by spinning disks, making it inherently slow: about 200 IOPS and 100 MB/s. At the end of the 2000s, though, it wasn't a low-cost tier, it was the general purpose one.
  • Throughput Optimized: dedicated to large-chunk processing, this product aims for optimal throughput. Still HDD-based, but efficient for Big Data or log processing.
  • Cold HDD: in the same family as Throughput Optimized, but with lower price and performance. Useful for less frequently accessed data, such as cached data storage.
  • General Purpose SSD: this is the common volume type used by consumers, and it shouldn't be taken as a standard SSD Block Storage. Firstly, GP-SSD is capped at 16K IOPS, which is pretty low for an intensive workload. Secondly, its maximum performance is constrained by a credit system that doesn't let you benefit from peak performance permanently. Both points make GP-SSD more appropriate for non-intensive workloads without a sustained high load.
  • Provisioned IOPS SSD: an answer to the variable performance of General Purpose. This product lets the user define and pay for a maximum IOPS figure of up to 64K IOPS. It makes storage-bound workloads possible, but at the high price of $0.065 per provisioned IOPS.

Local storage

Block Storage isn't the only solution provided by Amazon Web Services: since the I3 series, local NVMe SSDs are available for high-IOPS workloads. Let's compare solutions that look similar on paper: i3.large vs r5.large + 500 GB GP-SSD.

Flavor     CPU   RAM (GB)   Storage                       Monthly price
i3.large   2     16         475 GB local NVMe SSD         $135
r5.large   2     16         500 GB General Purpose SSD    $168.4

As you can see in the table and chart below, for an equivalent solution in terms of basic specifications, it's much more worthwhile to opt for the i3. The NVMe devices are attached locally to I3 VMs, without going through block storage, creating a real gap in terms of IOPS and latency:

Features matter a lot

Comparing Block versus Local storage is misleading without taking features into account. Despite its generally lower performance, Block Storage is a key component of the cloud's flexibility and reliability. Where a local device focuses on latency, Block is attractive for all its features, such as snapshots/backups, replication, availability and more.
Here is a small comparative table outlining general pros and cons:

              Block               Local
Latency       Low to high         Very low
IOPS          Low to high         High to very high
Replication   Yes                 No
SLA           Yes                 No
Price         Low to very high    Included with instance
Size          Up to 16 TB+        Fixed at instance startup
Persistence   Unlimited           Instance lifespan
Hot-plug      Yes                 No

We clearly see two use cases: non-guaranteed high performance on one side, and flexibility on the other.

Top 10s Cloud Compute debriefing

We recently released our Top 10 for Cloud Compute in North America and Europe. With the help of our automated platform, we tested nearly 20 cloud providers and selected the most interesting ones per region. These studies outline the performance/price value of cloud compute and attached block storage. We focus on the maximum performance delivered by general purpose infrastructures, their associated costs, and where the best efficiency per dollar spent lies.

Context

For each provider, we tested 4 sets of VMs:

Category   CPU   RAM (GB)   Extra storage (GB)
Small 2 4 100
Medium 4 8 150
Large 8 16 200
XLarge 16 32 500

From all the performance and pricing data we collected, the vendor selection was done agnostically, only by the numbers, with the following key metrics:

  • Hourly price
  • CPU Multi-thread performance
  • Volume IOPS
  • Volume bandwidth

Inherent biases

1. Hourly prices

De facto, most hyperscalers are penalized by these documents' approach. Although they may offer computing power at the cutting edge of technology, the design of our study doesn't take long-term billing options such as 1-year or 3-year commitments into account. These options are only proposed by big players such as Alibaba, AWS or Azure, and you can expect up to 60% discount if you subscribe to them.

2. Volume throttling

Next, hyperscalers generally throttle volume performance: where small and medium-sized vendors let you reach 3 GB/s and/or 1M IOPS with block storage, the big players stop around 3,000 IOPS. This may seem low, but it is guaranteed, whereas the possible 1M IOPS are neither stable nor predictable.

3. Compute focused

Finally, the documents focus on compute: virtual machines and volumes. But cloud providers, especially the big players, have much more to offer. Serverless, Object Storage, DBaaS: with the variety of existing services, the whole value of a cloud vendor cannot be judged on Cloud Compute alone.

Our insights at a glance

For those who don’t want to read the reports, here’s a small list of the leading providers:

Provider Price Compute Storage Region
Hetzner Very aggressive Average Average Europe
Kamatera Low Average High Worldwide
UpCloud Average High Guaranteed very high Worldwide
Oracle Cloud Average Average High Worldwide

What's next?

These documents will be renewed and their methodology improved. We want to bring in more infrastructure characteristics such as network and RAM. In the pursuit of objectivity, we think we must diversify our reports to address real-life concerns, for example by segmenting on:

  • Small and medium size providers
  • Hyperscalers
  • Country based
  • Provider origin based
  • Object storage, CDNs, DBaaS, Kubernetes, etc

We also want to digitize this kind of report. Instead of just a PDF, we wish to let consumers explore the data in a web application. This will also let users review more than 10 vendors without decreasing reading quality.

In the meantime, do not hesitate to take a look at our document center.

 

Do you warm up your volumes?

Nowadays, most cloud vendors provide different solutions to store your data and exploit it from other services. In the virtual machine realm, it is commonly accepted that block storage brings flexible consumption while local devices ensure low latency. At Cloud Mercato we continuously test and report storage metrics, and beyond the performance announced by providers, we often face bias effects all related to a phenomenon called "volume warm-up".

So, we're not talking about sport?

No, sorry, it isn't even linked to temperature; your volumes are supposed to sit in cool rooms somewhere with many other peers. The subject here is your HDD/SSD performance when you first get it. Brand new volumes may suffer from several kinds of phenomena, mainly bound to block allocation. Our team sees this regularly, and in fact the expression "warm-up" describes the solution, not the issue. Here's what we observe:

  • When you read the volume, you get very high performance. This isn't really disturbing, since a real user won't read an empty disk; the problem is for testers like us, who risk collecting results that are sometimes too good to be true.
  • When you write, on the contrary, performance is low and a penalty of 50 to 95% is visible. Here an end user is directly affected: imagine a fresh database node working at 30% of its capacity; just populating your database will take a while.

Why does it occur?

As you can guess, providers don't advertise under-performing drives. Some vendors clearly state in their documentation whether their volumes suffer from this issue; others let you find out by yourself. In the latter case, we advise you to make sure your usage won't be degraded. As we are in virtualized environments, it's difficult to give a general description of what's going on, but the idea behind this handicap revolves around block allocation.

Let's explain these problems from the point of view of a volume controller, whether a block storage system or a device controller:

  • Read scenario: the OS asks for X blocks in an area of the volume where nothing has ever been written. I haven't even created a mapping entry yet and I know this area is empty, so I can quickly answer "zero" to whatever is asked. Hence the high read performance.
  • Write scenario: the OS wants to store X blocks; first I need to allocate space in storage and update my mapping. These operations happen automatically the first time you use your volume; they represent the overhead, and they are why you should warm up your devices before using them.

How to resolve this issue

The fix consists of triggering the block allocation before real usage: basically, you write over the entire device and then read it back. The intention is to allocate every block with writes and make sure they are available with reads.

Despite the variety of hypervisors and distributed storage systems, this method works for most penalized storage. On Unix platforms, only two lines are required:

# Replace /dev/vdX by your device path
dd if=/dev/zero of=/dev/vdX bs=1M  # Write
dd if=/dev/vdX of=/dev/null bs=1M  # Read

One problem remains: the time these operations take. Firstly there is the slowness caused by our base problem, and then the elapsed time is proportional to the volume's size. Do you see it coming? Imagine filling a 3 TB disk at 1.5 MB/s; the setup could be extremely time consuming. Another solution would be to parallelize the jobs, but dd is not made for that. That's where we use FIO:

# Replace /dev/vdX by your device path
fio --filename=/dev/vdX --rw=write --bs=1m --iodepth=32 --ioengine=libaio --numjobs=32 --direct=1 --name=fio  # Write
fio --filename=/dev/vdX --rw=read --bs=1m --iodepth=32 --ioengine=libaio --numjobs=32 --direct=1 --name=fio  # Read

Even with simultaneous operations, warm-up can still be a long task. But we can put things in perspective: only fresh, unallocated blocks are penalized, so this operation only has to be launched once at server startup. There is no need to launch it several times or periodically. On the other hand, it is something to take into account in infrastructure setup time. Take, for example, a modern application with some SQL cluster supporting replication, configured with auto-scaling to spin up VMs and add replicas. If my storage suffers from lazy allocation, I have two options:

  • I take the time to warm up: it could take an hour, and auto-scaling becomes pointless
  • My RDBMS warms up the volume itself by writing replicated data to its storage: the process will be very slow and performance will be bad for every newly allocated block

So there isn't any quick solution. As written above, we advise knowing precisely where you store your data. Volumes showing this disadvantage are simply not suited to auto-scaling or other scenarios with time constraints.

Let’s visualize it

Here’s a graph representing writing on a fresh SSD through block storage.

I attached my device at 1:20pm and started writing continuously until reaching the maximum performance. My test scenario writes randomly on the SSD, so I'm not sure to warm every block, and that's the point: a user writing through a filesystem doesn't really choose which blocks get filled. So what can we see?

  • Performance starts really low: 10 IOPS
  • The more I write on the disk, the more its throughput increases
  • After 10 minutes, the maximum is reached and stays stable between 450 and 500 IOPS

The final stability at 500 IOPS is a low number, revealing throttling set by the cloud provider. If this limit were, say, 5M IOPS, we would probably have a clearer view of the phenomenon. Similarly, the bigger the volume, the longer it takes to become hot and ready.

Conclusion

If we put these data in the context of a real infrastructure, the impact could be huge or non-existent; it all depends on the kind of system you drive. A classical 3/3 setup will only require a one-off warm-up at start-up, but a cloud-native architecture claiming flexibility will suffer either from a slow start or from extra setup time due to warm-up.

 

Observe worldwide network latencies

Have you ever wondered which provider will give the best latency to your users? Not a theoretical value, but an accurate metric representing a real end-to-end connection. At Cloud Mercato, our platform allows us to manage cloud components all around the globe. VPS, virtual machines, buckets or CDNs: we can easily set up worldwide client-server configurations and run network workloads. But this approach could be described as datacenter-to-datacenter: my client is an instance at provider X and it hits another machine at provider Y. Since providers are always supposed to have low-latency connectivity, this scenario is inappropriate for testing a real end-user connection.

From our point of view, this performance test has to be done under the same conditions as an end user: from a 3G/4G/5G device, over WiFi, through ADSL or optical fiber. Instead of creating yet another Unix command, we decided to write Observer, a web application letting you test more than 100 locations directly from your browser.

What does it do?

Observer displays performance from live tests run by your browser. We set up endpoints across a bunch of Object Storage services and CDNs and let you compare performance across the different solutions and providers.

Concretely, the application requests from our CTP a list of available endpoints serving a 1-byte file. For each item, an AJAX request is launched and its Time To First Byte (TTFB) is measured. This value is reported in milliseconds in the left-hand table and as a heat color on the map.

Some quick observations

  • If your target is regional, CDNs may not bring you an advantage in terms of latency
  • Even without a CDN, Google benefits a lot from its private worldwide network

What is the future of this application?

It's currently still a beta/PoC, but it clearly fulfills our ambition of testing TTFB from anywhere. From this seed we already imagine a lot of uses:

  • Smart integration directly on provider websites for live testing
  • Better data visualization with charts
  • Global data visualization to understand each geographical area's latency by provider and/or device
  • Bandwidth test with upload and download
  • Integrate our pricing data
  • Yes, change the skin …

If you are a provider and would like to see your product integrated in this application, do not hesitate to contact us. In any case, we invite you to try it and give us feedback; we love to see other insights.

dd is not a benchmarking tool

There is a widely held idea on the Internet that a written snippet will be universally valid for testing and will produce comparable results from any machine. Put like that, the assertion is broadly false, but a piece of code that is valid in one context can travel a long way on the web and easily fool a lot of people. Benchmarks with dd are a good example. Which Unix nerd hasn't tested a brand new device with dd? The command outputs a precise value in MB/s, so what more could you want?

The problem starts with the benchmark's design

If I quickly open dd's user manual or, more simply, its help text, I can read:

Copy a file, converting and formatting according to the operands.

If my goal is to benchmark a device, it already appears that this tool is not the most appropriate. Firstly, I don't aim to copy anything, just to read or write. Next, I don't want to work with files but with a block device. Finally, I don't need the advertised data conversion features. These three points really matter, because they show how ill-suited the tool is.

Don't get me wrong, I'm not denigrating dd. It has personally saved me tons of hours with ISOs and disk migrations. But using it as a standard benchmark tool is more of a hack than a reliable idea.

The first issue: The files

A major benchmark misconception lies in what I want to test versus how I'll do it. Here our goal is HDD/SSD performance, and going through a filesystem can introduce a big bias into the analysis. Here is the kind of command findable on the Internet:

dd bs=1M count=1024 if=/dev/zero of=/root/test

For those not familiar with dd, the above command creates a 1 GB file containing only zeros in the root user's home: /root/test. The authors generally claim the goal is to measure the performance of the device where the file is stored; that goal is poorly achieved. Storage performance is mainly affected by a stack of caches and buffers running from user level down to the blocks in the SSD. The filesystem is the main entry point for users, but as it is software, it can hide the reality of your hardware, for better or worse.

By default, dd writing to a filesystem uses asynchronous, buffered writes, meaning that if the written file is small enough to fit in RAM, the OS won't write it to the drive immediately and will wait for the most appropriate time to do so. In this configuration, the command's output absolutely does not represent the storage's performance, and since only volatile memory is involved, dd displays very good numbers.

At Cloud Mercato, as we want to reflect infrastructure performance, we bypass the filesystem as much as possible and test the device directly by its absolute path. From our benchmarks you therefore know your hardware's possibilities and can build on them with the filesystem of your choice. There are only a few cases where files are involved, such as testing a root volume in write mode: you must not write to your root device directly, or you'll erase its OS.

Second issue: A tool without data generation

dd is designed around the concept of copying, which its long name, "Data Duplicator", conveys quite well. Fortunately, in Unix everything is a file, and kernels provide pseudo-files that generate data:

  • /dev/zero
  • /dev/random
  • /dev/urandom

Under the hood, these pseudo-files are real software and suffer for it. /dev/zero is CPU-bound, but because it only produces zeros, it cannot represent a real workload. /dev/random is quite slow due to its strong randomness, and /dev/urandom is too expensive in terms of CPU cycles.

Basically, you may never reach the storage's maximum performance if you are limited by the CPU. Moreover, dd isn't multi-threaded, so only one thread at a time can stress the device, decreasing the chances of getting the best out of it.

Third: A lack of features

As said, dd is not a benchmarking tool. If you look at the open-source catalog of storage testing tools and their common features, dd, not being intended for this purpose, is out of the competition:

  • Single thread only
  • No optimized data generation
  • No access mode: Sequential or random
  • No deep control such as I/O depth
  • Only average bandwidth, no IOPS, latency or percentiles
  • No mixed patterns: read/write
  • No time control

This shortened list is eloquent: Data Duplicator doesn't provide the features required to qualify as a performance testing tool.

So, what is the solution?

Here are real benchmark tools that you can use:

FIO is truly our daily tool (if not hourly); it gives us possibilities unimaginable with dd, such as I/O depth or random access. vdbench is also very handy: with a concept similar to FIO's, you can create complex scenarios, for example involving multiple files in read/write access.

In conclusion, a benchmark is not just a series of commands run in a shell. The tests executed and the expected output really depend on context: What do you want to test? Which components should be involved? Why will this value represent something? Any snippet taken from the Internet may have value in one environment and be misleading in another. It's up to the tester to understand these factors and choose the tool appropriate to her/his purpose.

Out of the wood #1: Kamatera

For almost 3 years now, our analysis platform has been running on computers all around the globe, automagically collecting facts about the cloud market such as locations, instance sizes and, more importantly, price and performance. We currently cover close to 60 providers, counting IaaS, PaaS and CDN vendors, and this is a huge stack of knowledge we want to share. Of course, our P2P is already there for people who want a price and performance comparison tool, but this application can't convey all of our knowledge. So, before creating yet another super visualization tool to expose our data, we thought that laying words on electronic paper would be a good and quick solution. Here's the first article of a series presenting small and medium-sized cloud providers that aren't on everyone's lips but are worth it.

The first platform studied in this series, called "Out of the wood", is Kamatera, a medium-sized vendor with an international offering.

Who are they

Firstly, Kamatera is characterized by a worldwide presence, with datacenters in North America, Europe, the Middle East and China. Not just single locations per continent, but pretty well scattered, covering for instance the Eastern, Western and Central USA.

Following our methodology, we classify Kamatera as a medium-sized provider; they are mainly a cloud compute vendor providing IaaS. On top of that, they pay major attention to customer service: you are free to use their infrastructure with high-level support, or to benefit from managed services guaranteed by their teams.

In terms of cloud services, they present all the required features for a decent compute provider:

  • Virtual machines scaling up to 72 CPU and 384GB of RAM
  • VPC management
  • Block storage powered by SSD
  • Load balancer
  • Firewall
  • Multi-user management
  • API

Beyond IaaS, they also propose a sizeable catalog of SaaS services based on their VMs. Called "services and apps", these allow users to opt in for a preconfigured MongoDB, Rancher or WordPress without extra cost.

How is their platform

Let's dive into their cloud server design. Kamatera chose flexible shaping of virtual machines, meaning that you set the number of CPUs and the amount of RAM for each server you launch. 8 CPU / 8 GB or 15 CPU / 200 GB, everything is possible, permitting an accurate composition of your infrastructure.

Above that, 4 kinds of VM exist:

  • General Purpose (B): dedicated CPU thread
  • Dedicated (D): dedicated CPU core (2 threads)
  • Burstable (T): dedicated CPU thread, with extra costs after 10% of utilization
  • Availability (A): non-dedicated CPU thread with no guaranteed resources

Again, by providing these vCPU types, Kamatera lets consumers adjust pricing and performance to their workload. No need to pick from a memory-optimized series for your Redis cluster; just design servers fitting your requirements.

Performance insights

We launched our machinery on their infrastructure, collecting hardware specifications and metrics such as Geekbench scores or CPU steal. From our analysis, we are in a VMware ecosystem with Intel processors. Here's a sample of the chips we discovered across their datacenters:

  • Intel Xeon CPU E5-2620 v2
  • Intel Xeon CPU E5-2660 v3
  • Intel Xeon CPU E5-2697A v4
  • Intel Xeon CPU Gold 6150
  • Intel Xeon CPU Platinum 8270

From the tests run by our automated platform, Kamatera obtains a good set of performance results. You'll find below graphs comparing their 2 CPU / 4 GB VMs against different families at Microsoft Azure. We picked all the different vCPU types available at Kamatera:

Compared to this well-known big player, Kamatera performs really well. This is just a sample, and our extensive testing reveals that CPU performance increases almost linearly with the number of vCPUs. Moreover, the charts above illustrate the 4 kinds of vCPU pretty well: Dedicated performs best, then General Purpose, then Burstable, then Availability.

Beyond their honorable performance, another great characteristic of Kamatera is their aggressive pricing. Although they don't have long-term billing options like 1 or 3 years, their general purpose hourly rates are still lower than most of the competition. Here's a comparative table with the flavors used above:

Flavor Hourly price Monthly price
2ACPU 4GB 0.022 16.06
2BCPU 4GB 0.053 38.69
2DCPU 4GB 0.088 64.24
2TCPU 4GB 0.022 16.06
Standard_A2_v2 0.076 55.48
Standard_B2s 0.042 30.66
Standard_F2 0.099 72.27
Standard_F2s_v2 0.085 62.05

They also propose monthly billing at the same rate as hourly, but 1 TB of outgoing traffic is included with this subscription. For VMs billed hourly, Kamatera charges a worldwide price of $0.01/GB, which is still as little as a tenth of the cost announced by big players.

Concluding portrait

Kamatera is a good representative of this market segment: a very valuable provider offering worldwide infrastructure at a decent price. They don't have the plethora of specialized services found at hyperscalers, but their pricing and capabilities can match a large share of budgets and workloads.

To get more insights and create your own comparative charts or tables, I invite you to visit our Price/Performance Portal.

Benchmark floating point computations with Python

Python and its nature

I am fundamentally a Python developer, a fact which let me skip a bunch of C knowledge. I chose to stop focusing exclusively on system performance in order to gain writing and learning velocity. This is the first of many blog posts that will focus on a variety of programming aspects, and I'll start things off by diving into Python. Let's go back 10 years, when I launched my first Python interpreter and typed:

>>> sum([2, 2])
4

It immediately made me think: "So I don't need to compile it?" or "The syntax is soooo clear." As a sysadmin, it didn't take me long to create my first scripts, then applications. Web page scraping, mailing, ncurses menus. Basically the knight became a blacksmith and could now forge his own swords. To share my armory, I quickly decided to learn the well-known web framework Django, and seeing the learning curve and the quick results I was getting, I apparently made good choices.

Diving into the cobra's performance

Whether you use the snake language or not, you have inevitably heard about its main problem: not being a compiled language, Python cannot reach the performance of the fastest languages such as C/C++. This assumption is only partially true, and as a performance benchmarker I always thought that despite the slower performance, I could create anything I needed with Python, so the trade-off was worth it. At least until I tried to write a benchmark tool in Python with a very small amount of overhead. This did not turn out exactly as I had expected.

The purpose of this kind of program is in total contradiction with the average web developer's behavior (cf. "I cache everything"). The idea is generally to produce a precise action somewhere on a system with minimal overhead, and Python by nature didn't seem suited to that. This, again, is not entirely true: there are plenty of ways to write and use C code in Python, such as Cython, Pyrex or C wrappers. The standard library itself includes more and more C code, and even third-party alternatives exist to speed up parts of your code.

In fact, most common usages already have a C implementation. Network, file I/O, regexp or TLS: the language has accelerated a lot of areas. What remains are CPU-bound tasks, and there, as in most interpreted languages, the GIL produces some overhead. Keep in mind that this only impacts multi-threaded applications: in a few words, single-threaded code is handsomely optimized, but multi-threaded code suffers from the language design.
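A quick way to see this on a CPU-bound task: the sketch below times the same busy loop run twice sequentially and then on two threads. Because of the GIL, the threaded version takes roughly as long as the sequential one; exact numbers will vary with the machine:

import threading
import time

def busy(n=5_000_000):
    # Pure-Python, CPU-bound loop
    s = 0
    for i in range(n):
        s += i
    return s

start = time.perf_counter()
busy(); busy()
sequential = time.perf_counter() - start

start = time.perf_counter()
threads = [threading.Thread(target=busy) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - start

print(f"sequential: {sequential:.2f}s  two threads: {threaded:.2f}s")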

Python for scientists

Through its various qualities, Python has become one of the favorite languages for scientific computing. We can mention NumPy, the foundation of the scientific stack, or Anaconda, the scientific Python distribution with its package manager conda. The ecosystem is really large, covering many purposes:

  • Pure mathematics
  • Data representation
  • Machine/deep learning
  • Interactive notebook

It is simple to use and lets people produce results quickly, but scientists generally have another main requirement: high computing capacity. Tons of numbers with tons of applied functions, sometimes across tons of dimensions. Scaling generally answers this issue, but let's leave multi-tasking aside and focus on single-thread performance. Of course parallelism is part of the real-world landscape, but it requires techniques such as sharding or parallel computing.

To collect performance data, we created a simple tool named FPB: Floating Point Benchmark. It aims to launch different kinds of operations across several Python approaches, for instance computing an average with vanilla CPython or with third-party libraries. This project is free, so feel free to contribute. It will also be the subject of another article.
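The idea behind FPB boils down to timing the same mathematical operation across several implementations. The snippet below is not FPB itself, just a minimal sketch of the principle, with an average computed in pure Python and with NumPy:

import timeit
import numpy as np

data = [float(i) for i in range(1_000_000)]
arr = np.array(data, dtype=np.float32)

t_python = timeit.timeit(lambda: sum(data) / len(data), number=10)
t_numpy = timeit.timeit(lambda: arr.mean(), number=10)
print(f"pure Python: {t_python:.4f}s  NumPy: {t_numpy:.4f}s  (10 runs each)")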

Below you'll find some charts and tables representing the timing of math functions. We observe performance from vanilla Python to NumPy, passing through alternative built-in ways such as SQLite. Our test environment is the following:

  • T-Systems’ p2.2xlarge.8 powered by KVM
  • Intel Xeon CPU E5-2690 v4
  • 8 vCPU @ 2.6 GHz
  • 64GB of RAM
  • 1x Tesla V100 PCIe
  • FPB with float32

The tables below give the detailed results.

Numpy Pandas Python SQLite
100 0.015 1.004 0.002 0.030
500 0.013 1.015 0.003 0.056
1000 0.017 1.017 0.006 0.088
5000 0.017 1.019 0.026 0.392
10000 0.023 1.027 0.051 0.623
50000 0.041 1.035 0.245 3.152
100000 0.082 1.086 0.498 5.763
500000 0.260 1.289 2.662 29.889
1000000 0.501 1.533 5.622 60.059
Numpy Pandas Python SQLite
100 0.006 0.238 0.002 0.034
500 0.005 0.239 0.008 0.066
1000 0.006 0.244 0.016 0.091
5000 0.007 0.266 0.074 0.424
10000 0.010 0.284 0.147 0.641
50000 0.025 0.430 0.729 3.336
100000 0.035 0.567 1.446 6.044
500000 0.180 3.243 7.240 31.573
1000000 0.313 4.485 14.695 64.826
Numpy Pandas Python SQLite
100 0.003 0.184 0.014
500 0.010 0.187 0.062
1000 0.018 0.203 0.121
5000 0.087 0.275 0.577
10000 0.158 0.349 1.168
50000 0.999 1.093 6.023
100000 0.813 1.027 13.022
500000 10.135 10.343 66.921
1000000 16.541 16.639 134.221
Numpy Pandas Python SQLite
100 0.010 0.263 0.001 0.029
500 0.009 0.270 0.003 0.057
1000 0.011 0.270 0.006 0.087
5000 0.015 0.293 0.026 0.388
10000 0.027 0.307 0.050 0.598
50000 0.050 0.463 0.246 3.063
100000 0.405 0.620 0.502 5.774
500000 0.403 3.354 2.595 30.432
1000000 1.452 4.825 5.582 59.209
Numpy Pandas Python SQLite
100 0.080 0.341 0.072 0.038
500 0.315 0.593 0.323 0.086
1000 0.610 0.673 0.547 0.149
5000 3.038 2.556 3.008 0.670
10000 5.899 3.875 6.069 1.139
50000 29.400 23.922 28.636 5.818
100000 58.728 50.507 61.103 11.123
500000 294.646 268.885 374.352 57.669
1000000 585.783 509.264 590.154 116.219

Observations:

  • First phenomenon: all methods generally give stable results until they unhook, meaning they aren't designed to manage more
  • Second phenomenon: if a series doesn't unhook, it may stop earlier, with memory errors when the system isn't able to hold the whole dataset
  • Python isn't always slow; for instance, sum is well implemented and offers good performance
  • The outsider SQLite can offer good results for multi-dimensional operations, but cannot do, or is slow at, math operations
  • Pandas being based on NumPy, their performance is equal

From CPU to GPU

In case you didn't know, GPUs are optimized for floating point computation, and nowadays it's not unusual to see gaming-focused PCs used as high performance computers. During the last decade, development for this kind of device has been eased a lot, mainly by CUDA: Compute Unified Device Architecture. This technology allows the GPU to be used for general purpose computing (GPGPU) with the C programming language. And so the Snake comes again: on top of CUDA, multiple libraries have drawn a complete Python ecosystem, from statistics to deep learning, where most topics have been implemented with GPU support.

In the landscape of Python and GPU, you'll find several building blocks, such as the ones below (a minimal CuPy sketch follows the list):

  • CuPy: born in 2015 inside Chainer, a neural network framework, this project is an implementation of NumPy on top of C/CUDA libraries
  • CuDF: a young project, started in 2017, implementing Pandas using CUDA technologies
  • CUDAMat: a math library using the GPU and compatible with NumPy; its initial release was in 2013
  • GNumpy: a NumPy GPU implementation from the University of Toronto. Although this project isn't maintained anymore, it seems to fit our goal
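To give an idea of how little code is needed to move such a computation to the GPU, here is a minimal CuPy sketch; it assumes a CUDA-capable GPU with CuPy installed, and the API intentionally mirrors NumPy:

import numpy as np
import cupy as cp  # requires a CUDA-capable GPU

x_cpu = np.random.rand(1_000_000).astype(np.float32)
x_gpu = cp.asarray(x_cpu)   # host-to-device copy
mean_gpu = x_gpu.mean()     # computed on the GPU with a NumPy-like API
print(float(mean_gpu))      # device-to-host copy of the scalar result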

Here are more charts showing the GPU solutions' performance. We keep NumPy to get an idea of CPU vs GPU.

CUDAMat CuPy Numpy PyCUDA
100 0.181 0.111 0.015 0.459
500 0.182 0.136 0.013 0.464
1000 0.187 0.111 0.017 0.457
5000 0.210 0.134 0.017 0.547
10000 0.269 0.109 0.023 0.553
50000 0.246 0.140 0.041 0.534
100000 0.318 0.115 0.082 0.548
500000 0.359 0.290 0.260 0.561
1000000 0.372 0.632 0.501 0.900
5000000 0.529 4.020 0.924
10000000 0.527 8.087 0.940
50000000 0.951 40.224 1.273
100000000 1.477 78.201 1.735
500000000 5.455 391.084 6.220
1000000000 10.486 782.191 9.125
CUDAMat CuPy Numpy PyCUDA
100 0.147 0.092 0.006 0.269
500 0.163 0.119 0.005 0.278
1000 0.145 0.093 0.006 0.264
5000 0.174 0.117 0.007 0.365
10000 0.167 0.085 0.010 0.362
50000 0.255 0.120 0.025 0.373
100000 0.358 0.121 0.035 0.371
500000 1.141 0.361 0.180 0.371
1000000 2.128 0.738 0.313 0.665
5000000 15.193 4.418 0.657
10000000 29.211 8.847 0.679
50000000 142.295 44.706 0.985
100000000 283.188 89.415 1.426
500000000 1411.501 447.529 4.645
1000000000 2821.986 895.139 9.640
CUDAMat CuPy Numpy PyCUDA
100 0.082 0.003
500 0.104 0.010
1000 0.077 0.018
5000 0.103 0.087
10000 0.078 0.158
50000 0.166 0.999
100000 0.162 0.813
500000 0.805 10.135
1000000 1.010 16.541
5000000 4.021
10000000 15.713
50000000 81.924
100000000 170.693
500000000 777.003
1000000000 1623.245
CUDAMat CuPy Numpy PyCUDA
100 0.182 0.096 0.010 0.268
500 0.198 0.119 0.009 0.262
1000 0.172 0.095 0.011 0.269
5000 0.187 0.129 0.015 0.359
10000 0.191 0.089 0.027 0.376
50000 0.226 0.117 0.050 0.370
100000 0.225 0.108 0.405 0.369
500000 0.317 0.286 0.403 0.373
1000000 0.334 0.618 1.452 0.647
5000000 0.445 3.979 0.658
10000000 0.511 8.007 0.675
50000000 0.960 38.784 1.009
100000000 1.496 77.704 1.443
500000000 5.345 389.102 5.063
1000000000 10.376 777.999 9.623
CUDAMat CuPy Numpy PyCUDA
100 0.183 0.103 0.080
500 0.195 0.134 0.315
1000 0.226 0.101 0.610
5000 0.309 0.129 3.038
10000 0.311 0.249 5.899
50000 0.442 0.270 29.400
100000 0.443 2.395 58.728
500000 0.777 3.215 294.646
1000000 1.130 19.542 585.783
5000000 3.175 31.483
10000000 5.774 63.104

Observations:

  • GPU unhooking happens far beyond the point where the CPU hits memory-filling errors
  • Small datasets (<100K) don't really require a GPU
  • Most of the frameworks are able to handle a billion data points in reasonable time
  • Some frameworks are still stable beyond that and could handle more if they had more GPU RAM
  • The CPU is immediately out of the race for multi-dimensional arrays

Observations:

  • Despite having less RAM than the system, the GPU frameworks can handle more data than the CPU
  • For simple one-dimensional operations, NumPy is faster than the others
  • GPU implementations aren't equal; depending on the operation, each has its pluses and minuses

Distributed computing

Yes, I wrote that multi-processing wasn't the goal of this article, but several solutions deserve a few lines, and I'll talk about:

  • Dask: enables scaling for the main scientific computing frameworks
  • PySpark: Python API to use Spark, a Big Data analysis engine

With these approaches, data are generally chunked and computations are shared across threads, processes or nodes. This has several implications (see the Dask sketch after this list):

  • An overhead is produced by interprocess communication; it becomes more significant over TCP/IP
  • It's up to the developer to find the best way to parallelize the computation for their application. Operation scheduling, chunk size: nothing is as easy as with NumPy, and everything requires adaptation to the platform
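As a minimal illustration of the chunking idea, here is what a simple mean looks like with Dask. This is a sketch on a single host; the chunk size is an arbitrary choice and, as noted above, should be tuned to the platform. The tables below then compare CuPy, Dask, Dask+CuPy, NumPy and Spark:

import numpy as np
import dask.array as da

x = np.random.rand(10_000_000).astype(np.float32)
# Split the array into 1M-element chunks; the per-chunk means are only
# computed, across worker threads, when .compute() is called.
dx = da.from_array(x, chunks=1_000_000)
print(dx.mean().compute())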
CuPy   Dask   Dask+CuPy   Numpy   Spark
100 0.111 6.991 0.015 214.231
500 0.136 6.783 0.013 214.696
1000 0.111 6.888 0.017 212.697
5000 0.134 7.303 0.017 209.330
10000 0.109 6.611 0.023 218.304
50000 0.140 6.832 0.041 217.465
100000 0.115 6.819 0.082 273.697
500000 0.290 7.717 0.260 332.148
1000000 0.632 7.209 0.501
5000000 4.020 6.950
10000000 8.087 7.594
50000000 40.224 8.729
100000000 78.201 10.981
500000000 391.084 26.666
1000000000 782.191 45.295
CuPy   Dask   Dask+CuPy   Numpy   Spark
100 0.092 4.010 6.030 0.006 214.424
500 0.119 3.969 6.586 0.005 202.505
1000 0.093 5.334 6.053 0.006 205.601
5000 0.117 3.800 6.080 0.007 211.439
10000 0.085 5.008 5.944 0.010 218.309
50000 0.120 3.300 6.116 0.025 213.221
100000 0.121 11.348 6.505 0.035 232.708
500000 0.361 10.884 6.620 0.180
1000000 0.738 17.597 6.370 0.313
5000000 4.418 6.226
10000000 8.847 6.459
50000000 44.706 7.992
100000000 89.415 9.817
500000000 447.529 24.648
1000000000 895.139 42.768
CuPy   Dask   Dask+CuPy   Numpy   Spark
100 0.082 3.588 5.967 0.003 168.582
500 0.104 3.590 5.382 0.010 170.114
1000 0.077 4.558 5.301 0.018 174.239
5000 0.103 3.575 5.321 0.087 202.543
10000 0.078 4.504 5.381 0.158 198.266
50000 0.166 3.944 5.349 0.999 440.174
100000 0.162 9.210 5.394 0.813 590.974
500000 0.805 16.254 5.575 10.135
1000000 1.010 24.246 5.469 16.541
5000000 4.021 5.538
10000000 15.713 5.792
50000000 81.924 7.541
100000000 170.693 9.235
500000000 777.003 24.362
1000000000 1623.245 42.894
CuPy   Dask   Dask+CuPy   Numpy   Spark
100 0.096 4.260 6.931 0.010 216.131
500 0.119 4.109 6.589 0.009 201.460
1000 0.095 5.564 7.343 0.011 212.060
5000 0.129 3.873 6.847 0.015 221.375
10000 0.089 5.177 6.931 0.027 218.741
50000 0.117 3.360 6.800 0.050 216.568
100000 0.108 11.447 6.763 0.405 265.191
500000 0.286 11.154 7.296 0.403 338.511
1000000 0.618 18.069 7.232 1.452
5000000 3.979 6.986
10000000 8.007 7.221
50000000 38.784 8.990
100000000 77.704 10.670
500000000 389.102 26.002
1000000000 777.999 45.081
CuPy   Dask   Dask+CuPy   Numpy   Spark
100 0.103 515.880 91.227 0.080 629.306
500 0.134 584.842 95.750 0.315 644.168
1000 0.101 473.285 95.494 0.610 650.786
5000 0.129 637.091 94.269 3.038 742.905
10000 0.249 532.024 82.024 5.899 782.469
50000 0.270 733.753 93.942 29.400 1442.649
100000 2.395 709.022 95.728 58.728 2469.502
500000 3.215 962.462 204.780 294.646
1000000 19.542 948.553 207.883 585.783
5000000 31.483 488.644
10000000 63.104 853.314

Observations

  • As I played with a single host, we cannot appreciate the real benefits of these frameworks
  • We observe a minimum overhead of 6ms from CuPy to Dask+CuPy
  • PySpark has an overhead of 200ms, making it unsuitable for our tests
  • Moreover, PySpark doesn't seem to handle memory as well as vanilla Python does

Of course, PySpark is here just for experimentation; in my mind, the small implementation I used isn't representative of real usage. Spark clearly belongs to the big data field, where even handling 1 trillion items would be a common task. Furthermore, a single-host Spark is... hum... a joke.

Conclusion

Here my GPU has 4 times less RAM than my system, but we can see that it handles 1,000 times more data, from 1M to 1G items. Adding the multi-dimensional advantage, we can conclude without doubt on the superiority of the GPU. But given its price, another question arises:

When should I choose a GPU instead of a CPU?

From our results, NumPy doesn't perform badly compared to the GPU solutions; the real problem here is memory allocation. There is simply not enough RAM to run the test until unhooking. 2D and complex operations such as sine are slower but acceptable. At this point we can say that 1D arithmetic on fewer than 1M data points seems to be the workload best suited to the CPU.

These basic operations can't perfectly reflect end usage the way a real machine learning framework could; FPB's goal is to understand the performance of these elementary tasks. A future machine learning benchmark will address that.