
How to harness the Powers of Cloud TPUs?

Introduction

About a couple of months ago, I was introduced to Google Colab (an enhanced version of JuPyteR notebooks) and I haven't looked back since. It has everything and more that you need as a researcher. The initial idea was to load up Python notebooks and run them on it. Soon we realised we could run those notebooks not just on the CPUs available on GCP but also on GPUs and TPUs. I also read up a bit about it from other sources; see Google Reveals Technical Specs and Business Rationale for TPU Processor (slightly dated, but definitely helpful).

Here we go now…

What is a TPU?

Just as a GPU is a graphics-accelerator ASIC (built to create graphics images quickly for output to a display device) that has long since been repurposed for massive number crunching, a TPU is an AI-accelerator ASIC developed by Google specifically for neural-network machine learning. One of the differences is that TPUs are geared towards high-volume, low-precision computation, while GPUs target high-volume, high-precision computation (please check the Wikipedia links for the interesting differences).

Then what?

TL;DR — how the notebooks and slides came about

To understand how these devices work, what better approach could we adopt than to benchmark them and compare the results? That is why a number of notebooks came out of these experiments. Coincidentally, we were also preparing for Google Cloud Next 2018 at the ExCeL Centre, London, and I spent an evening and a weekend preparing the slides for the talk Yaz and I were asked to give – Harnessing the Powers of Cloud TPU.

What happened first, before the other thing happened?

TL;DR — how we work at the GDG Cloud meetup events

It was a mere coincidence: we were meeting regularly at the London chapter of the GDG Cloud meetups, where Yaz would find interesting things to look at during the hack sessions (Pomodoro sessions, as he called them), and one suggested session was that we play with TPUs and benchmark them. Actually, I remember suggesting we do this during our sessions as an idea, but then we all got distracted by other equally interesting ideas (all saved somewhere on GitLab). I then frantically started playing with two notebooks, related to the GPU and TPU respectively, provided as examples by Colab; I got one to work on the TPU and then adapted it to work on the GPU (I can't remember anymore which way around it was). They were both doing slightly different things while measuring the performance of the GPU and TPU, so I decided to make them do the same thing, measure the time taken to do it on each device, and also display details about the devices themselves (you will see this towards the top or bottom of each notebook).

CPU v/s GPU v/s TPU –  Simple benchmarking example via Google Colab

CPU v/s GPU – Simple benchmarking

The CPU v/s GPU – Simple benchmarking notebook finishes processing with the below output:

TFLOP is a bit of shorthand for “teraflop”, which is a way of 
measuring the power of a computer based more on mathematical 
capability than GHz. A teraflop refers to the capability of a 
processor to calculate one trillion floating-point operations
per second.

CPU TFlops: 0.53 
GPU speedup over CPU: 29x                    TFlops: 15.70

I was curious about the internals of the CPU and GPU, so I ran some Linux commands (via Bash) that the notebooks thankfully allow, and got these bits of info to share:

[Image: GPU simple benchmark conclusion]

You can find all those commands and the above output in the notebook as well.
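
For reference, Colab lets you run shell commands from a notebook cell by prefixing them with !. The cells were roughly along these lines (a sketch from memory; the exact commands and their output are in the notebook):

!cat /proc/cpuinfo | grep "model name" | head -1    # CPU model
!cat /proc/meminfo | grep MemTotal                  # total RAM available to the VM
!nvidia-smi                                         # GPU model, memory and driver details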

CPU v/s TPU – Simple benchmarking

The TPU – Simple benchmarking notebook finishes processing with the below output:

TFLOP is a bit of shorthand for “teraflop” which is a way of 
measuring the power of a computer based more on mathematical 
capability than GHz. A teraflop refers to the capability of a 
processor to calculate one trillion floating-point operations
per second.

CPU TFlops: 0.47
TPU speedup over CPU (cold-start): 75x        TFlops: 35.47 
TPU speedup over CPU (after warm-up): 338x    TFlops: 158.91

Unfortunately, I haven’t had a chance to play around with the TPU profiler yet to learn more about the internals of this fantastic device.

While there is room for errors and inaccuracies in the above figures, you might be curious about the task used for all the runs – it's the below piece of code that has been exercising the CPU, GPU and TPU circuitry in the Simple benchmarking notebooks:

import tensorflow as tf

N = 4096       # square matrix dimension (example value; the notebook defines its own)
COUNT = 100    # number of repeated matmuls per timed run (example value)

def cpu_flops():
  # two random NxN matrices, multiplied COUNT times and reduced to a scalar
  x = tf.random_uniform([N, N])
  y = tf.random_uniform([N, N])
  def _matmul(x, y):
    return tf.tensordot(x, y, axes=[[1], [0]]), y

  return tf.reduce_sum(
    tf.contrib.tpu.repeat(COUNT, _matmul, [x, y])
  )
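
The TFLOPS figures are then derived from the time taken to run that op. Here is a rough sketch of the arithmetic (my own illustration, assuming the standard 2·N³ floating-point operation count for an N×N matrix multiplication repeated COUNT times; the notebook's own timing code may differ in detail):

import time

def measure_tflops(session, op, n, count):
  # time a single session.run() of the repeated-matmul op defined above
  start = time.time()
  session.run(op)
  elapsed = time.time() - start
  total_flops = 2.0 * (n ** 3) * count   # ~2*N^3 flops per NxN matmul, COUNT times
  return total_flops / elapsed / 1e12    # convert flops/sec to teraflops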

CPU v/s GPU v/s TPU – Time-series prediction via Google Colab

Running the TPU version of the Timeseries notebook gave us some issues initially, which were reported in a StackOverflow post, and a couple of good folks from the Google Cloud TPU team stepped in to help. We did manage to get the GPU version of the Time-Series Prediction notebook to work, and it clearly showed a much better response than the CPU version of the Time-Series Prediction notebook – the CPU version just choked half-way through, and Colab asked me if I wanted to stop the process because we needed more resources (more memory)!

Time series: GPU version

Here are snapshots of the notebook (the Train the Recurrent Neural Network section); the full notebook can be found on Google Colab, and it is free to download, share and extract the Python code from.

Epoch 1/10
 9/10 [==========================>...] - ETA: 4s - loss: 0.0047WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,lr
10/10 [==============================] - 42s 4s/step - loss: 0.0048
Epoch 2/10
 9/10 [==========================>...] - ETA: 4s - loss: 0.0041WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,lr
10/10 [==============================] - 42s 4s/step - loss: 0.0041
Epoch 3/10
 9/10 [==========================>...] - ETA: 4s - loss: 0.0047WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,lr
10/10 [==============================] - 42s 4s/step - loss: 0.0046
Epoch 4/10
 9/10 [==========================>...] - ETA: 4s - loss: 0.0039WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,lr
10/10 [==============================] - 43s 4s/step - loss: 0.0039
Epoch 5/10
 9/10 [==========================>...] - ETA: 4s - loss: 0.0048WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,lr
10/10 [==============================] - 44s 4s/step - loss: 0.0048
Epoch 6/10
 9/10 [==========================>...] - ETA: 4s - loss: 0.0036WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,lr
10/10 [==============================] - 44s 4s/step - loss: 0.0037
Epoch 7/10
 9/10 [==========================>...] - ETA: 4s - loss: 0.0042WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,lr
10/10 [==============================] - 44s 4s/step - loss: 0.0041
Epoch 8/10
 9/10 [==========================>...] - ETA: 4s - loss: 0.0037WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,lr
10/10 [==============================] - 44s 4s/step - loss: 0.0038
Epoch 9/10
 9/10 [==========================>...] - ETA: 4s - loss: 0.0040WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,lr
10/10 [==============================] - 44s 4s/step - loss: 0.0039
Epoch 10/10
 9/10 [==========================>...] - ETA: 4s - loss: 0.0036WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,lr
10/10 [==============================] - 44s 4s/step - loss: 0.0035
CPU times: user 10min 8s, sys: 1min 22s, total: 11min 30s
Wall time: 7min 12s

The above run takes roughly 7 minutes of wall time (or a total time of ~8 minutes) on the Google Colab GPU:

CPU times: user 10min 8s, sys: 1min 22s, total: 11min 30s

Wall time: 7min 12s

I'm still not sure how best to interpret these time-related stats (wall time is the elapsed real time, while user and sys add up the CPU time spent across all cores in user and kernel mode respectively), but I will take ~7 minutes as our execution time up to this point.

We finish with the following stats:

[Image: screenshot of the final training stats (27 Nov 2018)]

So now you can see why I earlier chose ~7 minutes as the execution time. It takes about 7 minutes to process this notebook – giving new predictions of temperature, pressure and wind speed and comparing them with the actual values (true values gathered from past observations).

Time series: TPU version

Here are snapshots of the notebook (the Train the Recurrent Neural Network section); the full notebook can be found on Google Colab, and it is free to download, share and extract the Python code from.

Found TPU at: grpc://10.118.17.162:8470
INFO:tensorflow:Querying Tensorflow master (b'grpc://10.118.17.162:8470') for TPU system metadata.
INFO:tensorflow:Found TPU system:
INFO:tensorflow:*** Num TPU Cores: 8
INFO:tensorflow:*** Num TPU Workers: 1
INFO:tensorflow:*** Num TPU Cores Per Worker: 8
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:CPU:0, CPU, -1, 11845881175500857789)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 17179869184, 5923571607183194652)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:XLA_GPU:0, XLA_GPU, 17179869184, 11085218230396215841)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:0, TPU, 17179869184, 12636361223481337501)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:1, TPU, 17179869184, 14151025931657390984)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:2, TPU, 17179869184, 16816909163217742616)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:3, TPU, 17179869184, 4327750408753767066)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:4, TPU, 17179869184, 504271688162314774)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:5, TPU, 17179869184, 14356678784461051119)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:6, TPU, 17179869184, 6767339384180187426)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:7, TPU, 17179869184, 1879489006510593388)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU_SYSTEM:0, TPU_SYSTEM, 17179869184, 17850015066511710434)
WARNING:tensorflow:tpu_model (from tensorflow.contrib.tpu.python.tpu.keras_support) is experimental and may change or be removed at any time, and without warning.
Epoch 1/10
INFO:tensorflow:New input shapes; (re-)compiling: mode=train (# of cores 8), [TensorSpec(shape=(32,), dtype=tf.int32, name='core_id0'), TensorSpec(shape=(32, 1344, 20), dtype=tf.float32, name='input_10'), TensorSpec(shape=(32, 1344, 3), dtype=tf.float32, name='Dense-2_target_30')]
INFO:tensorflow:Overriding default placeholder.
INFO:tensorflow:Remapping placeholder for input
INFO:tensorflow:Started compiling
INFO:tensorflow:Finished compiling. Time elapsed: 3.394456386566162 secs
INFO:tensorflow:Setting weights on TPU model.
 9/10 [==========================>...] - ETA: 1s - loss: 0.0112WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
10/10 [==============================] - 14s 1s/step - loss: 0.0115
Epoch 2/10
 9/10 [==========================>...] - ETA: 0s - loss: 0.0183WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
10/10 [==============================] - 5s 501ms/step - loss: 0.0187
Epoch 3/10
 9/10 [==========================>...] - ETA: 0s - loss: 0.0260WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
10/10 [==============================] - 5s 497ms/step - loss: 0.0264
Epoch 4/10
 9/10 [==========================>...] - ETA: 0s - loss: 0.0324WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
10/10 [==============================] - 5s 496ms/step - loss: 0.0327
Epoch 5/10
 9/10 [==========================>...] - ETA: 0s - loss: 0.0374WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
10/10 [==============================] - 5s 470ms/step - loss: 0.0376
Epoch 6/10
 9/10 [==========================>...] - ETA: 0s - loss: 0.0378WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
10/10 [==============================] - 5s 471ms/step - loss: 0.0372
Epoch 7/10
 9/10 [==========================>...] - ETA: 0s - loss: 0.0220WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
10/10 [==============================] - 5s 489ms/step - loss: 0.0211
Epoch 8/10
 9/10 [==========================>...] - ETA: 0s - loss: 0.0084WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
10/10 [==============================] - 5s 493ms/step - loss: 0.0081
Epoch 9/10
 9/10 [==========================>...] - ETA: 0s - loss: 0.0044WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
10/10 [==============================] - 5s 495ms/step - loss: 0.0044
Epoch 10/10
 9/10 [==========================>...] - ETA: 0s - loss: 0.0040WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
10/10 [==============================] - 5s 491ms/step - loss: 0.0040
CPU times: user 10.8 s, sys: 2.7 s, total: 13.5 s
Wall time: 1min 6s
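
For context, the tpu_model warning near the top of the log comes from converting the Keras model so that it runs on the TPU. With the TensorFlow 1.x contrib API available in Colab at the time, that conversion looked roughly like the sketch below (my own illustration; model stands for the Keras RNN built earlier in the notebook, and the exact code is in the notebook itself):

import os
import tensorflow as tf

# Colab exposes the TPU endpoint via this environment variable
tpu_address = 'grpc://' + os.environ['COLAB_TPU_ADDR']

resolver = tf.contrib.cluster_resolver.TPUClusterResolver(tpu=tpu_address)
strategy = tf.contrib.tpu.TPUDistributionStrategy(resolver)

# convert the Keras model; subsequent fit()/predict() calls run on the 8 TPU cores
tpu_model = tf.contrib.tpu.keras_to_tpu_model(model, strategy=strategy)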

There are still a few things to work on in the notebook, one of them being getting rid of the warnings that appear during training on the TPU. As per our previous analysis, running on TPUs should be way faster than on GPUs – and indeed TPUs are faster than GPUs even when running the Timeseries TPU version (I aligned the two notebooks, i.e. the GPU and TPU versions of the Timeseries notebook, and re-ran the two experiments). We haven't yet been able to execute the code cells all the way to the end of the notebook, due to errors in the input shape which need fixing before re-running the notebook. All of these look like good learning opportunities for me and everyone else. Our new results are more promising, as you can see from the above: the whole notebook took just ~2 minutes to finish running, which is a many-fold speed-up over the GPU version (see the screenshot below).

[Image: screenshot of the TPU run stats (27 Nov 2018)]

Observations

Having now seen the final outcome of the TPU version of the Timeseries notebook, and with the help of the initial Simple Benchmarking examples, we can make the below observations:

TPUs are ~85x to ~312x faster than CPUs, and GPUs are ~30x faster than CPUs

which also means that

TPUs are ~3x to ~10x faster than GPUs, which in turn are ~30x faster than CPUs

Some graphs plotting the speeds in teraflops and times between CPU v/s GPU v/s TPU:

[Chart: CPU v/s GPU v/s TPU – teraflops]

[Chart: CPU v/s GPU v/s TPU – speed-up factors]

Note: during some runs on GCP, the above numbers came out higher than noted here, so please be aware of that as well. Maybe the notebooks took advantage of some improvements in the GCP infrastructure.

The different devices ran this simple task (the block of code mentioned in the previous section) at different speeds; for a more complex task the numbers would definitely be different, although we believe that their relative performance shouldn't deviate too much.

Conclusion

I also want to thank Yaz for being super encouraging at all times during this whole process, including for the presentation at Google Cloud Next 2018. Not forgetting Claudio, who contributed quite a bit to the TPU version of the Timeseries notebook while we were debugging it.

In the end it was all great, and one day we will solve all the problems of humanity, but for now we have pretty much finished working on the TPU version of the Timeseries notebook. I welcome everyone to take a stab at it and see if you can improve it further or experiment with it. Please hit back with your feedback and/or contributions in any case.

(Third generation TPU at the Google Data centre: TPU 3.0)

Have a read of what others are saying about the different PUs on the various cloud providers (e.g. Jeff Hale's article), and you can see Google Cloud Platform leading in many such areas.

Be ready for some more notebook fests coming up in the form of more blog posts in the near to distant future. Please share your comments, feedback or any contributions with @theNeomatrix369; you can find out more about me via the About me page.

Resources

Citations

Credits to all the images embedded (including the feature image) on this post go to the respective authors/ creators/owners of the images.

 

Digital Catapult | Machine Intelligence Garage: the best-kept secret yet in the open

Introduction

I was at a meetup where Simon Knowles, CTO of Graphcore, was giving a talk on the latest developments at Graphcore, and that is also where I met Peter Bloomfield from Digital Catapult (@digicatapult). Peter was spreading the word about Machine Intelligence Garage, an amazing opportunity created by a collaboration between the government and industry leaders like Google, Nvidia, AWS, etc. to help startups and small businesses access compute resources which they would otherwise never have been able to get hold of.

[Image: Digital Catapult – Get Involved]

Our conversation

Some time later, we decided to have a chat discussing the usual questions: what is Machine Intelligence Garage, what is its history, why, how, who, when, at what stage is the programme, and how do people get involved? As you would imagine, I found our conversation interesting and informative, and hence decided to write about it and share it with the rest of us.

Mani: Hey Peter, great meeting you and learning about the initiative from Digital Catapult. Can you please share with me the history of this initiative?

Peter: Hi Mani, thanks for dropping in! Our programme, Machine Intelligence Garage was born from a piece of research we conducted last summer. In this report, we explore barriers facing AI startups, and we wanted to test the hypotheses that: Access to the right data, technical talent and adequate computational resources were the main things holding startups back. We collaborate a bit on the first two barriers with government and academic institutes, like the Turing, but the Machine Intelligence Garage programme was designed to provide startups with access to cloud computing vouchers, novel chipsets and HPC facilities. It is through our brilliant partners that we can offer these resources.

Mani: Are there other sister and daughter initiatives or programmes, related to Digital Catapult, that everyone would benefit from knowing about?

Peter: We have a whole range of digital programmes, across three core tech layers (future networks, AI/ML and Immersive tech as well as some cybersecurity and blockchain initiatives). The full details of our opportunities can be found through the ‘Get Involved’ section of our website.

Mani: Who have been and are going to benefit from this setup that you have in place?

Peter: Our Machine Intelligence Garage programme is designed to benefit early-stage startups who are data ready and need compute power to scale faster. Our collaborative programmes provide opportunities for larger corporations to get involved with startups to address industry-specific challenges.

Mani: How much does this access cost, and for how long is it available?

Peter: Everything we provide on the Machine Intelligence Garage programme is free to use. The programme was set up using public funding from InnovateUK and CAP.AI. We are able to deliver the compute resources through our work with a wonderful set of partners.

Mani: Can you tell us the process from start to finish?

Peter: When I meet a new company, I have a chat with them about the things they are trying to do, the infrastructure they currently use and the sorts of ML approaches they are using to solve the problem. If the company needs our support, is developing a product with commercial viability and has both strong technical skills and domain expertise, I encourage them to apply. The application form asks questions about the product, the training data and the compute power requirements. If we like the idea, we invite the company to an interview and, if successful, onboard them with the most suitable resource. The process generally takes 3 weeks from the close of the application call to onboarding with a resource.

[Image: people gathered around a table]

Mani: What other benefits do startups get from being involved? Are you able to introduce them to partners who can help them run trials and give feedback on the products or services they are building?

Peter: We have a large network that we encourage all our companies to take advantage of. Digital Catapult is an innovation centre and meeting the right people at the right time is key to success for many startups. We run a range of workshops, from business growth and pitch training to deep dives to learn more about technical resources and we make sure our startups benefit from all of these. If a startup wants to put some of the new knowledge into practice we are always very keen to facilitate it!

Mani: This is a lot of information; are there any resources on your website that can help? Anything to sign up to, to keep in touch?

Peter: We have a general technology Digital Catapult newsletter (sign-up form at the bottom of the page) and an AI-specific Machine Intelligence Garage newsletter (sign-up form at the bottom of the page). If a startup wants to chat about the programme, they can send me an e-mail: peter.bloomfield@digicatapult.org.uk. We announce all our calls and opportunities through our Twitter account too, @DigiCatapult.

Mani: Can you please touch on the specifics of what the startups will get access to and how it can benefit them?

Peter: We have three main resources available:

  • Cloud Computing vouchers, either through AWS or Google Cloud Platform

To find out the exact amounts and specifics of access, please do get in touch.

Mani: If someone needed to find out about the benchmarks between different compute resources available to the participants, who or where would they look for the information? Do you have a team that does these measurements on a daily basis?

Peter: We do our own benchmarking of the facilities available and our data engineer on the programme can advise on this, as well as point you in the right direction for more literature!

[Image: Immersive Lab entrance]

Mani: I have been to the Digital Catapult HQ at 101 Euston Road, and was blown away by all the tech activities happening there. Can you please share details about it with our readers?

Peter: Digital Catapult was set up four years ago and is the UK’s leading innovation centre for advanced digital technologies.  We have seen a number of changes over the years but our core values of opening up markets and making businesses more competitive and productive remain. We are incredibly lucky to have some amazing facilities to help companies develop new products and services and get their products to market faster, including a nationwide network of Immersive Labs [see launch photos, photos in 2018], an LPWAN network and the new 5G Brighton Testbed.

Mani: Can you name a few startups that are currently going through your programs and the ones who have already been through it?

Peter: We currently have 25 start-ups on our programme. The teams are a range of sizes, some just 2 people, others 20+. The thing they all have in common is that they are developing some really exciting commercial products/solutions with deep learning and have an immediate need for the computational resources we offer. The full list of start-ups can be found on our cohort page on the Machine Intelligence Garage website.

Mani: I really appreciate the time you have taken to answer my questions and this has definitely helped the readers know more about what you do and how they can benefit from this great government-driven initiative.

Peter: Thank you very much for coming in. It’s great to be able to reach a wider audience and grow our community! See you soon!


Closing note

I was shown around a number of facilities at their centre (two floors), i.e. the Immersive Lab (yes, plenty of VR headsets to play with), the server area where all the HPC hardware is kept (at low room temperature), a spacious conference room where meetups are held, a small library full of interesting books, and also a hot-desking area shared by internal staff, partners and friends of Digital Catapult / Machine Intelligence Garage. Looking at the two websites, I found the news and views, events and workshops, and Digital Catapult | MI Garage blog sites interesting for keeping track of activities in this space.

[Image: Immersive Lab – person wearing a VR headset]

I'm sure that after reading this conversation you must be wondering how you could take advantage of these facilities, which are out there for you and for those in your network who could benefit from them.

Readers should go to the links mentioned above to learn about this program and how they can go about taking advantage of it, or recommend it to their friends in the community who would be more suitable for it.

Please do let me know if this is helpful by dropping a line in the comments below; I would also welcome feedback (see how you can reach me above). Above all, please check out the links mentioned above and also reach out to the folks behind this great initiative.

 

Building Wholly Graal with Truffle!

[Feature image: Building Graal and Truffle]

Citation: credits for the feature image go to David Luders; it is reused under a CC license, and the original image can be found on this Flickr page.

Introduction

It has been some time since the two posts [1][2] on Graal/GraalVM/Truffle, and a general request has been: when are you going to write something about "how to build" this awesome thing called Graal? Technically, we will be building a replacement for HotSpot's C2 compiler (look for C2 in the glossary list), called Graal. This binary is different from the GraalVM suite you download from OTN via http://graalvm.org/downloads.

I wasn't just going to stop at the first couple of posts on this technology. In fact, one of the best ways to learn and get an in-depth idea about any piece of tech is to know how to build it.

Getting Started

Building JVMCI for JDK8, Graal and Truffle is fairly simple, and the instructions are available in the graal repo. We will be running them on both local (Linux, MacOS) and container (Docker) environments. To capture the process as code, the steps have been written in bash; see https://github.com/neomatrix369/awesome-graal/tree/master/build/x86_64/linux_macos.

During the process of writing the scripts and testing them on various environments, there were some issues, but these were soon resolved with the help of members of the Graal team — thanks, Doug.

Running scripts

Documentation on how to run the scripts is provided in the README.md of awesome-graal. For each of the build environments it is merely a single command:

Linux & MacOS

$ ./local-build.sh

$ RUN_TESTS=false           ./local-build.sh

$ OUTPUT_DIR=/another/path/ ./local-build.sh

Docker

$ ./docker-build.sh

$ DEBUG=true                ./docker-build.sh

$ RUN_TESTS=false           ./docker-build.sh

$ OUTPUT_DIR=/another/path/ ./docker-build.sh

Both the local and docker scripts pass the environment variables, i.e. RUN_TESTS and OUTPUT_DIR, on to the underlying commands. Debugging the docker container is also possible by setting the DEBUG environment variable.
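
The variables can also be combined in a single invocation, for example (hypothetical output path, purely to illustrate):

$ RUN_TESTS=false OUTPUT_DIR=/tmp/graal-build DEBUG=true ./docker-build.sh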

For a better understanding of how they work, it is best to refer to the local and docker scripts in the repo.
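
In essence, the scripts automate roughly the following steps (my own summary, not the authoritative sequence; always defer to the scripts themselves):

$ git clone https://github.com/graalvm/mx.git && export PATH=$PWD/mx:$PATH
$ git clone https://github.com/graalvm/graal-jvmci-8.git
$ cd graal-jvmci-8 && mx build && cd ..     # JDK8 with JVMCI support (JAVA_HOME must point to a boot JDK8)
$ git clone https://github.com/oracle/graal.git
$ cd graal/compiler && mx --java-home /path/to/jvmci-enabled-jdk8 build    # builds the Graal compiler and Truffle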

Build logs

I have provided build logs for the respective environments in the build/x86_64/linux_macos  folder in https://github.com/neomatrix369/awesome-graal/

Once the build is completed successfully, the following messages are shown:

[snipped]

>> Creating /path/to/awesome-graal/build/x86_64/linux/jdk8-with-graal from /path/to/awesome-graal/build/x86_64/linux/graal-jvmci-8/jdk1.8.0_144/linux-amd64/product
Copying /path/to/awesome-graal/build/x86_64/linux/graal/compiler/mxbuild/dists/graal.jar to /path/to/awesome-graal/build/x86_64/linux/jdk8-with-graal/jre/lib/jvmci
Copying /path/to/awesome-graal/build/x86_64/linux/graal/compiler/mxbuild/dists/graal-management.jar to /path/to/awesome-graal/build/x86_64/linux/jdk8-with-graal/jre/lib/jvmci
Copying /path/to/awesome-graal/build/x86_64/linux/graal/sdk/mxbuild/dists/graal-sdk.jar to /path/to/awesome-graal/build/x86_64/linux/jdk8-with-graal/jre/lib/boot
Copying /path/to/awesome-graal/build/x86_64/linux/graal/truffle/mxbuild/dists/truffle-api.jar to /path/to/awesome-graal/build/x86_64/linux/jdk8-with-graal/jre/lib/truffle

>>> All good, now pick your JDK from /path/to/awesome-graal/build/x86_64/linux/jdk8-with-graal :-)

Creating Archive and SHA of the newly JDK8 with Graal & Truffle at /home/graal/jdk8-with-graal
Creating Archive jdk8-with-graal.tar.gz
Creating a sha5 hash from jdk8-with-graal.tar.gz
jdk8-with-graal.tar.gz and jdk8-with-graal.tar.gz.sha256sum.txt have been successfully created in the /home/graal/output folder.

Artifacts

All the Graal and Truffle artifacts are created in the graal/compiler/mxbuild/dists/ folder and copied to the newly built jdk8-with-graal folder, both of these will be present in the folder where the build.sh script resides:

jdk8-with-graal/jre/lib/jvmci/graal.jar
jdk8-with-graal/jre/lib/jvmci/graal-management.jar
jdk8-with-graal/jre/lib/boot/graal-sdk.jar
jdk8-with-graal/jre/lib/truffle/truffle-api.jar

In short, we started off with a vanilla JDK8 (JAVA_HOME) and, via the build script, created an enhanced JDK8 with Graal and Truffle embedded in it. At the end of a successful build process, the script will create a .tar.gz archive file in the jdk8-with-graal-local folder; alongside this file you will also find the SHA-256 hash of the archive.
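
A quick way to sanity-check the resulting JDK is to ask it to use Graal as the JIT compiler and print its version (these are the standard JVMCI flags for a JDK8-based JVMCI build):

$ ./jdk8-with-graal/bin/java -XX:+UnlockExperimentalVMOptions -XX:+EnableJVMCI -XX:+UseJVMCICompiler -version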

In the case of a Docker build, the same folder is called jdk8-with-graal-docker and, in addition to the above-mentioned files, it will also contain the build logs.

Running unit tests

Running unit tests is a simple command:

mx --java-home /path/to/jdk8 unittest

This step should follow the moment we have a successfully built artifact in the jdk8-with-graal-local folder. The below messages indicate a successful run of the unit tests:

>>>> Running unit tests...
Warning: 1049 classes in /home/graal/mx/mxbuild/dists/mx-micro-benchmarks.jar skipped as their class file version is not supported by FindClassesByAnnotatedMethods
Warning: 401 classes in /home/graal/mx/mxbuild/dists/mx-jacoco-report.jar skipped as their class file version is not supported by FindClassesByAnnotatedMethods
WARNING: Unsupported class files listed in /home/graal/graal-jvmci-8/mxbuild/unittest/mx-micro-benchmarks.jar.jdk1.8.excludedclasses
WARNING: Unsupported class files listed in /home/graal/graal-jvmci-8/mxbuild/unittest/mx-jacoco-report.jar.jdk1.8.excludedclasses
MxJUnitCore
JUnit version 4.12
............................................................................................
Time: 5.334

OK (92 tests)

JDK differences

So what have we got that's different from the JDK we started with? If we compare the boot JDK with the final JDK, here are the differences:

A combination of diff between $JAVA_HOME and jdk8-with-graal, and meld, gives the following:

[Image: JDK versus Graal JDK diff – screenshot 2]

[Image: JDK versus Graal JDK diff – screenshot 1]

diff -y --suppress-common-lines $JAVA_HOME jdk8-with-graal | less
meld $JAVA_HOME ./jdk8-with-graal

Note: $JAVA_HOME points to your JDK8 boot JDK.

Build execution time

The build execution time was captured on both Linux and MacOS, and there was only a small difference between running the tests and not running them.

Running the build with or without tests on a quad-core Linux machine, with hyper-threading:

 real 4m4.390s
 user 15m40.900s
 sys 1m20.386s
 ^^^^^^^^^^^^^^^
 user + sys = 17m1.286s (17 minutes 1.286 second)

Similarly, running the build with or without tests on a dual-core MacOS machine, with 4GB RAM and an SSD drive, differs little:

 real 9m58.106s
 user 18m54.698s 
 sys 2m31.142s
 ^^^^^^^^^^^^^^^ 
 user + sys = 21m25.84s (21 minutes 25.84 seconds)

Disclaimer: these measurements can certainly vary across the different environments and configurations. If you have a more accurate way to benchmark such running processes, please do share back.

Summary

In this post, we saw how we can build Graal and Truffle for JDK8 on both local and container environments.

The next thing we will do is build them on a build farm provided by AdoptOpenJDK. There we will be able to run builds across multiple platforms and operating systems, including inside Docker containers. This binary is different from the GraalVM suite you download from OTN via http://graalvm.org/downloads; hopefully we will be able to cover GraalVM in a future post.

Thanks to Julien Ponge for making his build script available for re-use, and to the Graal team for their support during the writing of this post.

Feel free to share your feedback at @theNeomatrix369. Pull requests with improvements and best-practices are welcome at https://github.com/neomatrix369/awesome-graal.

Learning to use Wholly GraalVM!

[Feature image: "I'm still learning" by michelangelo]

Citation: credits for the feature image go to Anne Davis; it is reused under a CC license, and the original image can be found on this Flickr page.

Introduction

In the post Truffle served in a Holy Graal: Graal and Truffle for polyglot language interpretation on the JVM, we got a brief introduction and a bit of a deep dive into Graal, Truffle and some of the concepts around them. But no technology is fun without diving deep into its practicality, otherwise it's like Theoretical Physics or Pure Maths — abstract for some, boring for others (sorry, the last part was just me ranting).

In this post we will be taking a look at the GraalVM: installing it, comparing SDK differences, and looking at some of the examples that illustrate how different languages can be compiled and run on the GraalVM, how they can be run in the same context, and finally how they can be run natively (which is more performant).

GraalVM is similar to any Java SDK (JDK) that we download from a vendor, except that it has JVMCI (Java-level JVM Compiler Interface) support, with Graal as the default JIT compiler. It can execute not just Java code but also languages like JS, Ruby, Python and R. It also enables building ahead-of-time (AOT) compiled executables (native images) or shared libraries for Java programs and other supported languages. We won't be going through every language, only a selected few of them.

Just to let you know, all of the commands and actions have been performed on an Ubuntu 16.04 operating system environment (they should work on MacOSX with minor adaptations; on Windows a few more changes would be required – happy to receive feedback with the differences, and I will update the post with them).

Practical hands-on

We can get our hands on the GraalVM in more than one way, either build it on our own or download a pre-built version from a vendor website:

  • build on our own: some cloning and other magic (we can see later on)
  • download a ready-made JVM: OTN download site
  • hook up a custom JIT to an existing JDK with JVMCI support (we can see later on)

As we are using a Linux environment, it would be best to download the linux (preview) version of GraalVM based on JDK8 (a > 500MB file; you need to accept the license and be signed in on OTN, or you will be taken to https://login.oracle.com/mysso/signon.jsp) and install it.

Follow the installation information on the download page. After unpacking the archive by executing the below command, you will find a folder by the name graalvm-0.30 (at the time of writing this post):

$ tar -xvzf graalvm-0.30-linux-amd64-jdk8.tar.gz

Eagle eyeing: compare SDKs

We will quickly check the contents of the SDK to gain familiarity, so let’s check the contents of the GraalVM SDK folder:

$ cd graalvm-0.30
$ ls

[Image: GraalVM 0.30 SDK folder contents]

which looks familiar and has similarities when compared with a traditional Java SDK folder (i.e. JDK 1.8.0_44):

$ cd /usr/lib/jdk1.8.0_44
$ ls

[Image: JDK 1.8.0_44 folder contents]

Except that we have quite a few additional artifacts to learn about, i.e. the launchers for the languages supported on the VM, like FastR, JS (GraalJS), NodeJS (GraalNodeJS), Python, Ruby and Sulong (C/C++, Fortran).

Comparing the bin  folder between the GraalVM SDK and say JDK 1.8.0_44 SDK, we can see that we have a handful of additional files in there:

[Image: GraalVM 0.30 bin folder comparison]

(use tools like meld or just diff to compare directories)

Similarly, we can see that the jre folder has interesting differences, although it is semantically similar to that of the traditional Java SDKs. A few items that look interesting in the list are Rscript, lli and polyglot.

Now, we haven't literally compared the two SDKs to mark every element that is different or missing in one or the other, but the above gives us an idea of what is offered with the pre-built SDK. As for how to use the features it provides – well, this SDK has examples baked into it, in the examples folder.

If the examples folder is NOT distributed in the future versions, please use the respective code snippets provided for each of the sections referred to (for each language). For this post, you won’t need the examples folder to be present.

$ tree -L 1 examples

[Image: examples folder contents]

(use the tree command – sudo apt-get install tree – to see the above; it is also available on MacOSX & Windows)

Each of the sub-folders contains examples for the respective languages supported by the GraalVM, including embed and native-image, which we will also be looking at.

Exciting part: hands-on using the examples

Let's cut to the chase, but before we can execute any code and see what the examples do, we should move graalvm-0.30 to where the other Java SDKs reside, let's say under /usr/lib/jvm/, and set an environment variable called GRAAL_HOME to point to it:

$ sudo mv -f graalvm-0.30 /usr/lib/jvm
$ export GRAAL_HOME=/usr/lib/jvm/graalvm-0.30
$ echo "export GRAAL_HOME=/usr/lib/jvm/graalvm-0.30" >> ~/.bashrc
$ cd examples

R language

Let's pick R and run some R script files:

$ cd R
$ $GRAAL_HOME/bin/Rscript --help    # to get to see the usage text

Beware that we are running Rscript and not R; both can run R scripts, but the latter is an R REPL.

Running hello_world.R using Rscript:

$ $GRAAL_HOME/bin/Rscript hello_world.R
[1] "Hello world!"

JavaScript

Next we try out some JavaScript:

$ cd ../js/
$ $GRAAL_HOME/bin/js --help         # to get to see the usage text

Running hello_world.js with js:

$ $GRAAL_HOME/bin/js hello_world.js
Hello world!

Embed

Now let's try something different: what if you wish to run code written in multiple languages, all residing in the same source file, on the JVM — something never done before, which is what is meant by embed.

$ cd ../embed

We can do that using the org.graalvm.polyglot.Context class. Here's a snippet of code from HelloPolyglotWorld.java:

import org.graalvm.polyglot.*;

public class HelloPolyglotWorld {

  public static void main(String[] args) throws Exception {
    System.out.println("Hello polyglot world Java!");
    Context context = Context.create();
    context.eval("js", "print('Hello polyglot world JavaScript!');");
    context.eval("ruby", "puts 'Hello polyglot world Ruby!'");
    context.eval("R", "print('Hello polyglot world R!');");
    context.eval("python", "print('Hello polyglot world Python!');");
  }
}

Compile it with the below to get a .class file created:

$ $GRAAL_HOME/bin/javac HelloPolyglotWorld.java

And run it with the below command to see how that works:

$ $GRAAL_HOME/bin/java HelloPolyglotWorld
Hello polyglot world Java!
Hello polyglot world JavaScript!
Hello polyglot world Ruby!
[1] "Hello polyglot world R!"
Hello polyglot world Python!

You might have noticed a bit of sluggishness in the execution when switching between languages and printing the "Hello polyglot world…" messages; hopefully we will learn why this happens, and maybe even be able to fix it.

Native image

The native image feature of the GraalVM SDK helps improve the startup time of Java applications and gives them a smaller footprint. Effectively it converts byte-code that runs on the JVM (on any platform) into native code for a specific OS/platform — which is where the performance comes from. It uses aggressive ahead-of-time (AOT) optimisations to achieve good performance.

Let’s see how that works.

$ cd ../native-image

Let's take a snippet of Java code from HelloWorld.java in this folder:

public class HelloWorld {
  public static void main(String[] args) {
    System.out.println("Hello, World!");
  }
}

Compile it into byte-code:

$ $GRAAL_HOME/bin/javac HelloWorld.java

Compile the byte-code (HelloWorld.class) into native code:

$ $GRAAL_HOME/bin/native-image HelloWorld
 classlist: 740.68 ms
 (cap): 1,042.00 ms
 setup: 1,748.77 ms
 (typeflow): 3,350.82 ms
 (objects): 1,258.85 ms
 (features): 0.99 ms
 analysis: 4,702.01 ms
 universe: 288.79 ms
 (parse): 741.91 ms
 (inline): 634.63 ms
 (compile): 6,155.80 ms
 compile: 7,847.51 ms
 image: 1,113.19 ms
 write: 241.73 ms
 [total]: 16,746.19 ms

Taking a look at the folder we can see the Hello World source and the compiled artifacts:

3.8M -rwxrwxr-x 1 xxxxx xxxxx 3.8M Dec 12 15:48 helloworld
 12K -rw-rw-r-- 1 xxxxx xxxxx     427 Dec 12  15:47 HelloWorld.class
 12K -rw-rw-r-- 1 xxxxx xxxxx     127 Dec 12  13:59 HelloWorld.java

The first file, helloworld, is the native binary for the platform we compiled it on, produced by the native-image command; it can be executed directly, without needing a JVM:

$ ./helloworld
Hello, World!

Even though we gain performance, we might be losing out on other features that we get when running in byte-code form on the JVM — the choice of which route to take is a matter of the use-case and what is important for us.

It’s a wrap up!

That calls for a wrap-up: quite a lot to read and try out on the command-line, but well worth the time spent exploring the interesting GraalVM.

To sum up, we went about downloading the GraalVM from Oracle Labs' website, unpacked it, had a look at the various folders, compared it with our traditional-looking Java SDKs, and noticed and noted the differences.

We further looked at the examples provided for the various Graal-supported languages, and picked a handful of features which gave us a taste of what the GraalVM can offer. While we can run our traditional Java applications on it, we now also have the opportunity to write applications expressed in multiple supported languages in the same source file or the same project. This also gives us the ability to interoperate seamlessly between the parts of an application written in different languages. We even have the ability to re-compile our existing applications for native environments (native-image) for performance and a smaller footprint.

For more details on examples, please refer to http://www.graalvm.org/docs/examples/.

Feel free to share your thoughts with me on @theNeomatrix369.