Benchmarking floating point computations with Python

Python and its nature

I am fundamentally a Python developer, a fact which made me skip a good chunk of C knowledge. I made the choice to stop focusing exclusively on system performance in order to gain writing and learning velocity. This will be the first of many blog posts covering a variety of programming aspects, and I'll start by diving into Python. Let's go back ten years, to when I launched my first Python interpreter and typed:
>>> sum([2, 2])
4
It immediately made me think: "So I don't need to compile it?" and "The syntax is soooo clear." As a sysadmin, it didn't take me long to create my first scripts, then applications: web page scraping, mailing, ncurses menus. Basically, the knight became a blacksmith and could now forge his own swords. To share my armory, I quickly decided to learn the well-known web framework Django, and given the learning curve and the quick results I was getting, I apparently made good choices.

Diving into the cobra's performance

Whether you use the snake language or not, you have inevitably heard about its main problem: since it is not a compiled language, Python cannot reach the performance of the fastest languages such as C/C++. This assumption is only partially true, and as a performance benchmarker I always thought that, despite the slower performance, I could create anything I needed with Python, so the trade-off was worth it. At least until I tried to write a benchmark tool in Python with a very small amount of overhead. This did not turn out exactly as I had expected. The purpose of this kind of program is in total contradiction with the average web developer's behavior (cf. "I cache everything"). The idea is generally to perform a precise action somewhere on a system with minimal overhead, and Python by nature didn't seem adapted to that. This second statement is not entirely true either: there are plenty of ways to write and use C code from Python, such as Cython, Pyrex or plain C wrappers. The standard library itself includes more and more C code, and third-party libraries also come to speed up parts of the code. In fact, most common use cases already have a C implementation: networking, file I/O, regexp or TLS, the language has boosted a lot of topics. What remains are CPU-bound tasks, and there, as in most interpreted languages, the GIL produces some overhead. But keep in mind that this only impacts multi-threaded applications: in a few words, single-threaded code is handsomely optimized, but multi-threaded code suffers from the language design.
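To illustrate that last point, here is a minimal sketch (not part of FPB) running a CPU-bound task twice serially and then in two threads; because of the GIL, the threaded version is usually no faster, and sometimes slower:

import threading
import time

def count_down(n=10_000_000):
    # Pure-Python, CPU-bound busy loop.
    while n:
        n -= 1

start = time.perf_counter()
count_down()
count_down()
print(f"serial:   {time.perf_counter() - start:.2f}s")

start = time.perf_counter()
threads = [threading.Thread(target=count_down) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"threaded: {time.perf_counter() - start:.2f}s")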

Python for scientists

Thanks to its various qualities, Python has become one of the favorite languages for scientific computing. We can mention NumPy, the foundation of the scientific stack, or Anaconda, the scientific Python distribution, with its package manager conda. The ecosystem is really large, touching many purposes:
  • Pure mathematics
  • Data representation
  • Machine/deep learning
  • Interactive notebooks
Python is simple to use and lets people produce results quickly, but scientists generally have another major requirement: high computing capacity. Tons of numbers with tons of applied functions, sometimes with tons of dimensions. Scaling generally answers this need, but let's set aside the multi-tasking subject and focus on single-thread performance. Of course parallelism is part of the real-world landscape, but it requires techniques such as sharding or parallel computing. To collect performance data, we created a simple tool named FPB: Floating Point Benchmark. It launches different kinds of operations across several Python approaches, for instance computing an average with CPython builtins or with third-party libraries. This project is free, so feel free to contribute; it will also be the subject of another article. Below you'll find tables with the timings, in milliseconds, of math functions, going from vanilla Python to NumPy, passing by alternative standard-library routes such as SQLite. A minimal sketch of this kind of measurement follows the environment list. Our test environment is the following:
  • T-Systems' p2.2xlarge.8 powered by KVM
  • Intel Xeon CPU E5-2690 v4
  • 8 vCPU @ 2.6 GHz
  • 64GB of RAM
  • 1x Tesla V100 PCIe
  • FPB with float32
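As an illustration of the kind of measurement FPB performs, here is a minimal, hypothetical sketch (not FPB's actual code) timing the same average with vanilla Python, NumPy, Pandas and an in-memory SQLite database:

import sqlite3
import timeit

import numpy as np
import pandas as pd

data = [float(i) for i in range(100_000)]
np_data = np.array(data, dtype=np.float32)
pd_data = pd.Series(np_data)

# The same values stored in a single-column, in-memory SQLite table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE points (value REAL)")
conn.executemany("INSERT INTO points VALUES (?)", [(v,) for v in data])

def bench(label, func, number=10):
    elapsed = timeit.timeit(func, number=number) / number
    print(f"{label:<8}{elapsed * 1000:8.3f} ms")

bench("python", lambda: sum(data) / len(data))
bench("numpy", lambda: np_data.mean())
bench("pandas", lambda: pd_data.mean())
bench("sqlite", lambda: conn.execute("SELECT AVG(value) FROM points").fetchone())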
Items Numpy Pandas Python SQLite
100 0.015 1.004 0.002 0.030
500 0.013 1.015 0.003 0.056
1000 0.017 1.017 0.006 0.088
5000 0.017 1.019 0.026 0.392
10000 0.023 1.027 0.051 0.623
50000 0.041 1.035 0.245 3.152
100000 0.082 1.086 0.498 5.763
500000 0.260 1.289 2.662 29.889
1000000 0.501 1.533 5.622 60.059
Items Numpy Pandas Python SQLite
100 0.006 0.238 0.002 0.034
500 0.005 0.239 0.008 0.066
1000 0.006 0.244 0.016 0.091
5000 0.007 0.266 0.074 0.424
10000 0.010 0.284 0.147 0.641
50000 0.025 0.430 0.729 3.336
100000 0.035 0.567 1.446 6.044
500000 0.180 3.243 7.240 31.573
1000000 0.313 4.485 14.695 64.826
Items Numpy Pandas Python SQLite
100 0.003 0.184 0.014
500 0.010 0.187 0.062
1000 0.018 0.203 0.121
5000 0.087 0.275 0.577
10000 0.158 0.349 1.168
50000 0.999 1.093 6.023
100000 0.813 1.027 13.022
500000 10.135 10.343 66.921
1000000 16.541 16.639 134.221
Items Numpy Pandas Python SQLite
100 0.010 0.263 0.001 0.029
500 0.009 0.270 0.003 0.057
1000 0.011 0.270 0.006 0.087
5000 0.015 0.293 0.026 0.388
10000 0.027 0.307 0.050 0.598
50000 0.050 0.463 0.246 3.063
100000 0.405 0.620 0.502 5.774
500000 0.403 3.354 2.595 30.432
1000000 1.452 4.825 5.582 59.209
Items Numpy Pandas Python SQLite
100 0.080 0.341 0.072 0.038
500 0.315 0.593 0.323 0.086
1000 0.610 0.673 0.547 0.149
5000 3.038 2.556 3.008 0.670
10000 5.899 3.875 6.069 1.139
50000 29.400 23.922 28.636 5.818
100000 58.728 50.507 61.103 11.123
500000 294.646 268.885 374.352 57.669
1000000 585.783 509.264 590.154 116.219
Assumptions:
  • First phenomenon: all methods generally give stable results until they unhook, meaning they aren't designed to manage more
  • Second phenomenon: if a series doesn't unhook, it may stop earlier, with memory errors when the system isn't able to hold the whole dataset
  • Python isn't always slow; for instance, the builtin sum is well implemented and offers good performance (see the quick check after this list)
  • The outsider SQLite can offer good results for multi-dimensional operations, but is slow at, or simply can't do, some math operations
  • Pandas being based on NumPy, their performance is comparable
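A quick, hypothetical check of the point about sum (the data size and repetition count are arbitrary):

import timeit

data = list(range(1_000_000))

def manual_sum(values):
    total = 0
    for value in values:
        total += value
    return total

# The builtin sum iterates in C, so it is usually several times faster
# than the equivalent pure-Python loop.
print("builtin sum:", timeit.timeit(lambda: sum(data), number=10))
print("python loop:", timeit.timeit(lambda: manual_sum(data), number=10))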

From CPU to GPU

In case you didn't know, GPUs are optimized for floating point computation, and nowadays it's not unusual to see gaming-focused PCs used as high performance computers. During the last decade, development for this kind of device has been eased a lot, mainly by CUDA: Compute Unified Device Architecture. This technology allows the GPU to be used for general-purpose computing (GPGPU) with the C programming language, and so the snake comes again: on top of CUDA, multiple libraries have drawn a complete Python ecosystem, from statistics to deep learning, where most topics have been implemented with GPU support. In the landscape of Python and GPU, you'll find several bricks such as the following (a short CuPy sketch comes after the list):
  • CuPy: Started in 2015 as part of Chainer, a neural network framework; this project is an implementation of NumPy using C/CUDA libraries
  • CuDF: Young project from 2017 implementing the Pandas API using CUDA technologies
  • CUDAMat: Math library using the GPU and compatible with NumPy. Its initial release was in 2013
  • GNumpy: NumPy GPU implementation from the University of Toronto. Although this project isn't maintained anymore, it seems to fit our goal
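To give an idea of how little the code changes between CPU and GPU, here is a minimal, hypothetical CuPy sketch; the array size and the mean operation are just examples:

import numpy as np
import cupy as cp

# Same reduction on CPU (NumPy) and GPU (CuPy): the two APIs mirror each other.
cpu_array = np.random.rand(1_000_000).astype(np.float32)
gpu_array = cp.asarray(cpu_array)   # copy the data to GPU memory

cpu_mean = cpu_array.mean()
gpu_mean = gpu_array.mean()         # computed on the GPU

# cp.asnumpy() brings the result back to host memory.
print(float(cpu_mean), float(cp.asnumpy(gpu_mean)))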
Here are more tables showing the GPU solutions' performance. We keep NumPy as a reference to compare CPU vs GPU.
Items CUDAMat CuPy Numpy PyCUDA
100 0.181 0.111 0.015 0.459
500 0.182 0.136 0.013 0.464
1000 0.187 0.111 0.017 0.457
5000 0.210 0.134 0.017 0.547
10000 0.269 0.109 0.023 0.553
50000 0.246 0.140 0.041 0.534
100000 0.318 0.115 0.082 0.548
500000 0.359 0.290 0.260 0.561
1000000 0.372 0.632 0.501 0.900
5000000 0.529 4.020 0.924
10000000 0.527 8.087 0.940
50000000 0.951 40.224 1.273
100000000 1.477 78.201 1.735
500000000 5.455 391.084 6.220
1000000000 10.486 782.191 9.125
Items CUDAMat CuPy Numpy PyCUDA
100 0.147 0.092 0.006 0.269
500 0.163 0.119 0.005 0.278
1000 0.145 0.093 0.006 0.264
5000 0.174 0.117 0.007 0.365
10000 0.167 0.085 0.010 0.362
50000 0.255 0.120 0.025 0.373
100000 0.358 0.121 0.035 0.371
500000 1.141 0.361 0.180 0.371
1000000 2.128 0.738 0.313 0.665
5000000 15.193 4.418 0.657
10000000 29.211 8.847 0.679
50000000 142.295 44.706 0.985
100000000 283.188 89.415 1.426
500000000 1411.501 447.529 4.645
1000000000 2821.986 895.139 9.640
Items CUDAMat CuPy Numpy PyCUDA
100 0.082 0.003
500 0.104 0.010
1000 0.077 0.018
5000 0.103 0.087
10000 0.078 0.158
50000 0.166 0.999
100000 0.162 0.813
500000 0.805 10.135
1000000 1.010 16.541
5000000 4.021
10000000 15.713
50000000 81.924
100000000 170.693
500000000 777.003
1000000000 1623.245
Items CUDAMat CuPy Numpy PyCUDA
100 0.182 0.096 0.010 0.268
500 0.198 0.119 0.009 0.262
1000 0.172 0.095 0.011 0.269
5000 0.187 0.129 0.015 0.359
10000 0.191 0.089 0.027 0.376
50000 0.226 0.117 0.050 0.370
100000 0.225 0.108 0.405 0.369
500000 0.317 0.286 0.403 0.373
1000000 0.334 0.618 1.452 0.647
5000000 0.445 3.979 0.658
10000000 0.511 8.007 0.675
50000000 0.960 38.784 1.009
100000000 1.496 77.704 1.443
500000000 5.345 389.102 5.063
1000000000 10.376 777.999 9.623
Items CUDAMat CuPy Numpy PyCUDA
100 0.183 0.103 0.080
500 0.195 0.134 0.315
1000 0.226 0.101 0.610
5000 0.309 0.129 3.038
10000 0.311 0.249 5.899
50000 0.442 0.270 29.400
100000 0.443 2.395 58.728
500000 0.777 3.215 294.646
1000000 1.130 19.542 585.783
5000000 3.175 31.483
10000000 5.774 63.104
Assumptions:
  • The GPU unhooking point is really far beyond the CPU memory-filling errors
  • Small datasets (<100K items) don't really require a GPU
  • Most of the frameworks are able to handle 1 billion data points in a reasonable time
  • Some frameworks are still stable at that point and could handle more if they had more GPU RAM
  • The CPU is simply out of the race for multi-dimensional arrays
Assumptions:
  • Despite having less RAM than the system, the GPU frameworks can handle more data than the CPU
  • For simple one-dimensional operations, NumPy is faster than the others
  • GPU implementations aren't equal; depending on the operation, each has its pluses and minuses

Distributed computing

Yes, I wrote that multi-processing wasn't the goal of this article, but there are several solutions which deserve some lines, and I'll talk about:
  • Dask: Enables scaling for the main scientific computing frameworks
  • PySpark: Python API to use Spark, a Big Data analysis engine
With these approaches, data is generally chunked and computations are shared across threads, processes or nodes. This has several implications (a small Dask sketch follows the list):
  • An overhead is produced by inter-process communication; it becomes more significant over TCP/IP
  • It's up to the developer to figure out the best way to parallelize computing for their application. Operation scheduling, chunk size: nothing is as easy as with NumPy, and everything requires adaptation to the platform
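As an illustration, here is a minimal, hypothetical Dask sketch where the chunk size is a choice left to the developer (the sizes below are arbitrary):

import dask.array as da

# 100M random floats split into 1M-element chunks; each chunk is a NumPy
# array and the scheduler can process chunks in parallel.
x = da.random.random(100_000_000, chunks=1_000_000)

# Building the expression is lazy; compute() actually triggers the work.
print(x.mean().compute())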
Items CuPy Dask Dask+CuPy Numpy Spark
100 0.111 6.991 0.015 214.231
500 0.136 6.783 0.013 214.696
1000 0.111 6.888 0.017 212.697
5000 0.134 7.303 0.017 209.330
10000 0.109 6.611 0.023 218.304
50000 0.140 6.832 0.041 217.465
100000 0.115 6.819 0.082 273.697
500000 0.290 7.717 0.260 332.148
1000000 0.632 7.209 0.501
5000000 4.020 6.950
10000000 8.087 7.594
50000000 40.224 8.729
100000000 78.201 10.981
500000000 391.084 26.666
1000000000 782.191 45.295
Items CuPy Dask Dask+CuPy Numpy Spark
100 0.092 4.010 6.030 0.006 214.424
500 0.119 3.969 6.586 0.005 202.505
1000 0.093 5.334 6.053 0.006 205.601
5000 0.117 3.800 6.080 0.007 211.439
10000 0.085 5.008 5.944 0.010 218.309
50000 0.120 3.300 6.116 0.025 213.221
100000 0.121 11.348 6.505 0.035 232.708
500000 0.361 10.884 6.620 0.180
1000000 0.738 17.597 6.370 0.313
5000000 4.418 6.226
10000000 8.847 6.459
50000000 44.706 7.992
100000000 89.415 9.817
500000000 447.529 24.648
1000000000 895.139 42.768
Items CuPy Dask Dask+CuPy Numpy Spark
100 0.082 3.588 5.967 0.003 168.582
500 0.104 3.590 5.382 0.010 170.114
1000 0.077 4.558 5.301 0.018 174.239
5000 0.103 3.575 5.321 0.087 202.543
10000 0.078 4.504 5.381 0.158 198.266
50000 0.166 3.944 5.349 0.999 440.174
100000 0.162 9.210 5.394 0.813 590.974
500000 0.805 16.254 5.575 10.135
1000000 1.010 24.246 5.469 16.541
5000000 4.021 5.538
10000000 15.713 5.792
50000000 81.924 7.541
100000000 170.693 9.235
500000000 777.003 24.362
1000000000 1623.245 42.894
Items CuPy Dask Dask+CuPy Numpy Spark
100 0.096 4.260 6.931 0.010 216.131
500 0.119 4.109 6.589 0.009 201.460
1000 0.095 5.564 7.343 0.011 212.060
5000 0.129 3.873 6.847 0.015 221.375
10000 0.089 5.177 6.931 0.027 218.741
50000 0.117 3.360 6.800 0.050 216.568
100000 0.108 11.447 6.763 0.405 265.191
500000 0.286 11.154 7.296 0.403 338.511
1000000 0.618 18.069 7.232 1.452
5000000 3.979 6.986
10000000 8.007 7.221
50000000 38.784 8.990
100000000 77.704 10.670
500000000 389.102 26.002
1000000000 777.999 45.081
Items CuPy Dask Dask+CuPy Numpy Spark
100 0.103 515.880 91.227 0.080 629.306
500 0.134 584.842 95.750 0.315 644.168
1000 0.101 473.285 95.494 0.610 650.786
5000 0.129 637.091 94.269 3.038 742.905
10000 0.249 532.024 82.024 5.899 782.469
50000 0.270 733.753 93.942 29.400 1442.649
100000 2.395 709.022 95.728 58.728 2469.502
500000 3.215 962.462 204.780 294.646
1000000 19.542 948.553 207.883 585.783
5000000 31.483 488.644
10000000 63.104 853.314
Assumptions:
  • As I played with a single host, we cannot appreciate the real benefits of these frameworks
  • We observe a minimum overhead of 6ms from CuPy to Dask+CuPy
  • PySpark has an overhead of around 200ms, making it unsuitable for our tests
  • Moreover, PySpark doesn't seem to handle memory as well as vanilla Python does
Of course, PySpark is here just for the experiment; in my mind, the small implementation I used isn't representative of real usage. Spark clearly belongs to the big data field, where handling even a trillion items is a common task. Furthermore, a single-host Spark is... hum... a joke.

Conclusion

Here my GPU has 4 times less RAM than my system, yet we can see that it can handle 1,000 times more data, from 1M to 1G items. Add the multi-dimensional advantage, and we can conclude without doubt that the GPU is superior. But given its price, another question arises:

When should I choose GPU instead of CPU?

From our results, NumPy doesn't perform badly compared to the GPU solutions; the real problem here is memory allocation. There is just not enough RAM to run the tests until unhooking. 2D and complex operations such as sine are slower but acceptable. At this point we can say that 1D arithmetic with fewer than 1M data points seems to be the workload best suited to the CPU. These basic operations can't perfectly reflect end usage the way a real machine learning framework could; FPB's goal is to understand the performance of these elementary tasks. A future machine learning benchmark will fill that gap.