How to build Graal-enabled JDK8 on CircleCI?

Citation: the feature image on this post can be found on Flickr and was created by Luca Galli. The image in one of the sections below can also be found on Flickr and was created by fklv (Obsolete hipster).


The Graal compiler is a replacement for HotSpot’s server JIT compiler, widely known as the C2 compiler. It is written in Java with the goal of better performance (among other goals) compared to the C2 compiler. New changes starting with Java 9 mean that we can now plug our own hand-written JIT compiler into the JVM, thanks to JVMCI. The researchers and engineers at Oracle Labs have created a variant of JDK8 with JVMCI enabled which can be used to build the Graal compiler. The Graal compiler is open source and is available on GitHub (along with the HotSpot JVMCI sources needed to build it). This gives us the ability to fork/clone it and build our own version of the Graal compiler.

In this post, we are going to build the Graal compiler with JDK8 on CircleCI. The resulting artifacts are going to be:

– JDK8 embedded with the Graal compiler, and
– a zip archive containing Graal & Truffle modules/components.

Note: we are not covering how to build the whole of the GraalVM suite in this post; that could be the topic of another post. That said, these scripts can be used to do that, and there exists a branch which contains the rest of the steps.

Why use a CI tool to build the Graal compiler?


Continuous integration (CI) and continuous deployment (CD) tools have many benefits. One of the greatest is the ability to check the health of the code-base. Seeing why your builds are failing provides you with an opportunity to make a fix faster. For this project, it is important that we are able to verify and validate the scripts required to build the Graal compiler for Linux and macOS, both locally and in a Docker container.

A CI/CD tool lets us add automated tests to ensure that we get the desired outcome from our scripts when every PR is merged. In addition to ensuring that our new code does not introduce a breaking change, another great feature of CI/CD tools is that we can automate the creation of binaries and the automatic deployment of those binaries, making them available for open source distribution.

Let’s get started

During the process of researching CircleCI as a CI/CD solution to build the Graal compiler, I learned that we could run builds via two different approaches, namely:

– A CircleCI build with a standard Docker container (longer build time, longer config script)
– A CircleCI build with a pre-built, optimised Docker container (shorter build time, shorter config script)

We will now go through the two approaches mentioned above and see the pros and cons of both of them.

Approach 1: using a standard Docker container

For this approach, CircleCI requires a docker image that is available on Docker Hub or another public/private registry it has access to. We will have to install the necessary dependencies in this environment for a successful build. We expect the build to run longer the first time and, depending on the levels of caching, to speed up on subsequent runs.

To understand how this is done, we will go through the CircleCI configuration file section-by-section (stored as .circleci/config.yml); see config.yml in .circleci for the full listing, and commit df28ee7 for the source changes.

Explaining sections of the config file

The below lines in the configuration file will ensure that our installed applications are cached (referring to the two specific directories) so that we don’t have to reinstall the dependencies each time a build occurs:

    dependencies:
      cache_directories:
        - "vendor/apt"
        - "vendor/apt/archives"

We will be referring to the docker image by its full name (as available on http://hub.docker.com under the account name used – adoptopenjdk). In this case, it is a standard docker image containing JDK8 made available by the good folks behind the Adopt OpenJDK build farm. In theory, we can use any image as long as it supports the build process. It will act as the base layer on which we will install the necessary dependencies:

        docker:
          - image: adoptopenjdk/openjdk8:jdk8u152-b16

Next, in the pre-Install OS dependencies step, we restore the cache if it already exists. This may look a bit odd, but using unique key labels as below is what the docs recommend:

          - restore_cache:
              keys:
                - os-deps-{{ arch }}-{{ .Branch }}-{{ .Environment.CIRCLE_SHA1 }}
                - os-deps-{{ arch }}-{{ .Branch }}

Then, in the Install OS dependencies step, we run the respective shell script to install the dependencies needed. We have set this step to time out if the operation takes longer than 2 minutes to complete (see the docs for timeout):

          - run:
              name: Install Os dependencies
              command: ./build/x86_64/linux_macos/osDependencies.sh
              timeout: 2m
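For reference, osDependencies.sh essentially installs the build tool-chain via apt (hence the vendor/apt cache directories above). A hypothetical, condensed version of such a script might look like the below; the exact package list lives in the actual script:

    apt-get update
    # hypothetical subset of the packages a JDK/Graal build typically needs
    apt-get install -y --no-install-recommends \
        build-essential python2.7 git curl unzip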

Then, in the post-Install OS dependencies step, we save the results of the previous step – the layer from the above run step (the key name is formatted to ensure uniqueness, and the specific paths to save are included):

          - save_cache:
              key: os-deps-{{ arch }}-{{ .Branch }}-{{ .Environment.CIRCLE_SHA1 }}
              paths:
                - vendor/apt
                - vendor/apt/archives

Then, in the pre-Build and install make via script step, we restore the cache, if one already exists:

          - restore_cache:
              keys:
                - make-382-{{ arch }}-{{ .Branch }}-{{ .Environment.CIRCLE_SHA1 }}
                - make-382-{{ arch }}-{{ .Branch }}

Then, in the Build and install make via script step, we run the shell script to install a specific version of make; the step is set to time out if it takes longer than 1 minute to finish:

          - run:
              name: Build and install make via script
              command: ./build/x86_64/linux_macos/installMake.sh
              timeout: 1m

Then, in the post Build and install make via script step, we save the results of the above action to the cache:

          - save_cache:
              key: make-382-{{ arch }}-{{ .Branch }}-{{ .Environment.CIRCLE_SHA1 }}
              paths:
                - /make-3.82/
                - /usr/bin/make
                - /usr/local/bin/make
                - /usr/share/man/man1/make.1.gz
                - /lib/

Then, we define environment variables to update JAVA_HOME and PATH at runtime. The environment variables are sourced so that they are remembered across subsequent steps until the end of the build process (please keep this in mind):

          - run:
              name: Define Environment Variables and update JAVA_HOME and PATH at Runtime
              command: |
                echo '....'     <== a number of echo-es displaying env variable values
                source ${BASH_ENV}
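For illustration, those abridged echo lines append export statements to the ${BASH_ENV} file, roughly along these lines (the variable values below are placeholders, not the actual ones used in the project):

    echo "export BASEDIR=$(pwd)" >> ${BASH_ENV}
    echo "export JDK_GRAAL_FOLDER_NAME=jdk8-with-graal" >> ${BASH_ENV}
    echo "export BUILD_ARTIFACTS_DIR=jdk8-with-graal-local" >> ${BASH_ENV}
    # bash re-reads ${BASH_ENV} at the start of every subsequent step,
    # which is what makes these values stick for the rest of the build
    source ${BASH_ENV}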

Then, in the step to Display Hardware, Software, Runtime environment and dependency versions, as best practice we display environment-specific information and record it into the logs for posterity (also useful during debugging when things go wrong):

          - run:
              name: Display HW, SW, Runtime env. info and versions of dependencies
              command: ./build/x86_64/linux_macos/lib/displayDependencyVersion.sh

Then, we run the step to set up mx – this is important for building the Graal compiler: mx is a specialised build system created to facilitate compiling and building Graal/GraalVM and its components:

          - run:
              name: Setup MX
              command: ./build/x86_64/linux_macos/lib/setupMX.sh ${BASEDIR}
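For context, mx is a Python-based command-line tool hosted on GitHub; a setup step along the following lines would be enough to make it available to later steps (a rough sketch, not necessarily what setupMX.sh does):

    cd ${BASEDIR}
    git clone --depth=1 https://github.com/graalvm/mx.git
    export MX=${BASEDIR}/mx/mx    # later steps invoke the tool via ${MX}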

Then, we run the important Build JDK JVMCI step (we build the JDK with JVMCI enabled here), which times out if the process runs for longer than 15 minutes without any output, or longer than 20 minutes in total:

          - run:
              name: Build JDK JVMCI
              command: ./build/x86_64/linux_macos/lib/build_JDK_JVMCI.sh ${BASEDIR} ${MX}
              timeout: 20m
              no_output_timeout: 15m
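Under the hood, building the JVMCI-enabled JDK8 boils down to running mx inside the graal-jvmci-8 repository. Here is a sketch of the underlying commands, assuming build_JDK_JVMCI.sh wraps something like this:

    cd ${BASEDIR}
    git clone --depth=1 https://github.com/graalvm/graal-jvmci-8.git
    cd graal-jvmci-8
    ${MX} --java-home ${JAVA_HOME} build    # produces a JDK8 build with JVMCI enabled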

Then, we run the step Run JDK JVMCI Tests, which runs tests as part of the sanity check after building the JDK JVMCI:

          - run:
              name: Run JDK JVMCI Tests
              command: ./build/x86_64/linux_macos/lib/run_JDK_JVMCI_Tests.sh ${BASEDIR} ${MX}

Then, we run the step Setting up environment and Build Graal Compiler, to set up the build environment with the necessary environment variables which will be used by the steps to follow:

          - run:
              name: Setting up environment and Build Graal Compiler
              command: |
                echo ">>>> Currently JAVA_HOME=${JAVA_HOME}"
                JDK8_JVMCI_HOME="$(cd ${BASEDIR}/graal-jvmci-8/ && ${MX} --java-home ${JAVA_HOME} jdkhome)"
                echo "export JVMCI_VERSION_CHECK='ignore'" >> ${BASH_ENV}
                echo "export JAVA_HOME=${JDK8_JVMCI_HOME}" >> ${BASH_ENV}
                source ${BASH_ENV}

Then, we run the step Build the Graal Compiler and embed it into the JDK (JDK8 with JVMCI enabled), which times out if the process runs for longer than 7 minutes without any output, or longer than 10 minutes in total:

          - run:
              name: Build the Graal Compiler and embed it into the JDK (JDK8 with JVMCI enabled)
              command: |
                echo ">>>> Using JDK8_JVMCI_HOME as JAVA_HOME (${JAVA_HOME})"
                ./build/x86_64/linux_macos/lib/buildGraalCompiler.sh ${BASEDIR} ${MX} ${BUILD_ARTIFACTS_DIR}
              timeout: 10m
              no_output_timeout: 7m

Then, we run the simple sanity checks to verify the validity of the artifacts created once a build has been completed, just before archiving the artifacts:

          - run:
              name: Sanity check artifacts
              command: |
                ./build/x86_64/linux_macos/lib/sanityCheckArtifacts.sh ${BASEDIR} ${JDK_GRAAL_FOLDER_NAME}
              timeout: 3m
              no_output_timeout: 2m
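A sanity check at this stage can be as simple as invoking the freshly built java binary and looking for the JVMCI/Graal marker in its version string. A minimal sketch, assuming the built JDK sits in the jdk8-with-graal folder (the actual sanityCheckArtifacts.sh may do more):

    ${BASEDIR}/jdk8-with-graal/bin/java -version 2>&1 | grep -i "jvmci" \
        || { echo "JVMCI/Graal not found in java -version output"; exit 1; }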

Then, we run the Archiving artifacts step (meaning we compress and copy the final artifacts into a separate folder), which times out if the process runs for longer than 2 minutes without any output, or longer than 3 minutes in total:

          - run:
              name: Archiving artifacts
              command: |
                ./build/x86_64/linux_macos/lib/archivingArtifacts.sh ${BASEDIR} ${MX} ${JDK_GRAAL_FOLDER_NAME} ${BUILD_ARTIFACTS_DIR}
              timeout: 3m
              no_output_timeout: 2m

For posterity and debugging purposes, we capture the generated logs from the various folders and archive them:

          - run:
              name: Collecting and archiving logs (debug and error logs)
              command: |
                ./build/x86_64/linux_macos/lib/archivingLogs.sh ${BASEDIR}
              timeout: 3m
              no_output_timeout: 2m
              when: always
          - store_artifacts:
              name: Uploading logs
              path: logs/

Finally, we store the generated artifacts at a specified location – the below lines will make the location available on the CircleCI interface (we can download the artifacts from here):

          - store_artifacts:
              name: Uploading artifacts in jdk8-with-graal-local
              path: jdk8-with-graal-local/

Approach 2: using a pre-built optimised Docker container

For Approach 2, we will be using a pre-built docker container that has been created and built locally with all the necessary dependencies, with the docker image saved and then pushed to a remote registry, e.g. Docker Hub. We will then reference this docker image in the CircleCI environment via the configuration file. This saves us the time and effort of running all the commands to install the necessary dependencies and create the required environment (see the detailed steps in the Approach 1 section).

We expect the build to run for a shorter time compared to the previous build, and this speedup is a result of the pre-built docker image (see the Steps to build the pre-built docker image section to learn how this is done). An additional speed benefit comes from the fact that CircleCI caches the docker image layers, which in turn results in a quicker startup of the build environment.

We will go through the CircleCI configuration file section-by-section (stored as .circleci/config.yml) for this approach; see config.yml in .circleci for the full listing, and commit e5916f1 for the source changes.

Explaining sections of the config file

Here again, we will refer to the docker image by its full name. It is a pre-built docker image, neomatrix369/graalvm-suite-jdk8, made available by neomatrix369. It was built and uploaded to Docker Hub in advance, before the CircleCI build was started, and it contains the necessary dependencies for the Graal compiler to be built:

        docker:
          - image: neomatrix369/graal-jdk8:${IMAGE_VERSION:-python-2.7}
        steps:
          - checkout

All the sections below do the exact same tasks (and for the same purpose) as in Approach 1, see Explaining sections of the config file section.

Except, we have removed the below sections as they are no longer required for Approach 2:

          - restore_cache:
              keys:
                - os-deps-{{ arch }}-{{ .Branch }}-{{ .Environment.CIRCLE_SHA1 }}
                - os-deps-{{ arch }}-{{ .Branch }}
          - run:
              name: Install Os dependencies
              command: ./build/x86_64/linux_macos/osDependencies.sh
              timeout: 2m
          - save_cache:
              key: os-deps-{{ arch }}-{{ .Branch }}-{{ .Environment.CIRCLE_SHA1 }}
              paths:
                - vendor/apt
                - vendor/apt/archives
          - restore_cache:
              keys:
                - make-382-{{ arch }}-{{ .Branch }}-{{ .Environment.CIRCLE_SHA1 }}
                - make-382-{{ arch }}-{{ .Branch }}
          - run:
              name: Build and install make via script
              command: ./build/x86_64/linux_macos/installMake.sh
              timeout: 1m
          - save_cache:
              key: make-382-{{ arch }}-{{ .Branch }}-{{ .Environment.CIRCLE_SHA1 }}
              paths:
                - /make-3.82/
                - /usr/bin/make
                - /usr/local/bin/make
                - /usr/share/man/man1/make.1.gz

In the following section, I will go through the steps that show how to build the pre-built docker image. It involves running the bash scripts ./build/x86_64/linux_macos/osDependencies.sh and ./build/x86_64/linux_macos/installMake.sh to install the necessary dependencies as part of building a docker image, and finally pushing the image to Docker Hub (it can be pushed to any other remote registry of your choice).

Steps to build the pre-built docker image

– Run build-docker-image.sh (see bash script source), which depends on the presence of the Dockerfile (see docker script source). The Dockerfile does all the necessary work of installing the dependencies inside the container, i.e. it runs the bash scripts ./build/x86_64/linux_macos/osDependencies.sh and ./build/x86_64/linux_macos/installMake.sh:

    $ ./build-docker-image.sh

– Once the image has been built successfully, run push-graal-docker-image-to-hub.sh after setting the USER_NAME and IMAGE_NAME (see source code) otherwise it will use the default values as set in the bash script:

    $ USER_NAME="[your docker hub username]" IMAGE_NAME="[any image name]" \
        ./push-graal-docker-image-to-hub.sh
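Under the hood, these two scripts are thin wrappers around the standard docker build/push workflow; a minimal sketch of the equivalent commands (the real scripts add defaults and error handling):

    $ docker build -t ${USER_NAME}/${IMAGE_NAME} .   # uses the Dockerfile in the current folder
    $ docker login                                   # authenticate against Docker Hub
    $ docker push ${USER_NAME}/${IMAGE_NAME}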

CircleCI config file statistics: Approach 1 versus Approach 2

| Areas of interest | Approach 1 | Approach 2 |
|---|---|---|
| Config file (full source list) | build-on-circleci | build-using-prebuilt-docker-image |
| Commit point (sha) | df28ee7 | e5916f1 |
| Lines of code (loc) | 110 lines | 85 lines |
| Source lines (sloc) | 110 sloc | 85 sloc |
| Steps (steps: section) | 19 | 15 |
| Performance (see Performance section) | Some speedup due to caching, but slower than Approach 2 | Speed-up due to the pre-built docker image and due to caching at different steps; faster than Approach 1 |

Note: ensure DLC (Docker Layer Caching) is enabled (it's a paid feature).

What not to do?

Approach 1 issues

I came across things that wouldn’t work initially, but were later fixed with changes to the configuration file or the scripts:

  • please make sure the .circleci/config.yml file is always in the root directory of the project
  • when using the store_artifacts directive in the .circleci/config.yml file, set the path to a fixed folder name, i.e. jdk8-with-graal-local/ – in our case, setting the path to ${BASEDIR}/project/jdk8-with-graal didn't produce the resulting artifact once the build finished, hence the fixed path name suggestion
  • environment variables: when working with environment variables, keep in mind that each command runs in its own shell, so values set inside one shell execution environment aren't visible outside it; follow the method used in the context of this post and set the environment variables so that all the commands can see the required values, to avoid misbehaviours or unexpected results at the end of each step
  • caching: use the caching functionality after reading about it; for more details on CircleCI caching refer to the caching docs, and see how it has been implemented in the context of this post. This will help avoid confusion and also help make better use of the functionality provided by CircleCI.

Approach 2 issues

  • Caching: check the docs when trying to use the Docker Layer Caching (DLC) option, as it is a paid feature. Once this is known, doubts about why CircleCI keeps downloading all the layers during each build are cleared up (for Docker Layer Caching details, refer to the docs). It also explains why, on the free tier, my build is still not as fast as I would like it to be.

General note:

  • Light-weight instances: to avoid the pitfall of thinking we can run heavy-duty builds, check the documentation on the technical specifications of the instances. If we run the standard Linux commands to probe the technical specifications of the instance, we may be misled into thinking they are high-specification machines. See the step that lists the hardware and software details of the instance (see the Display HW, SW, Runtime env. info and versions of dependencies section). The instances are actually virtual machines or container-like environments with resources like 2 CPUs and 4096 MB of RAM. This means we can't run long-running or heavy-duty builds like building the GraalVM suite. Maybe there is another way to handle these kinds of builds, or maybe such builds need to be decomposed into smaller parts.
  • Global environment variables: as each run step in config.yml runs in its own shell context, environment variables set in one context are not visible from another. To overcome this, we adopted two methods (see the sketch after this list):
  • pass values as parameters to the bash/shell scripts being called, so that the scripts can access the values of the environment variables
  • use the source command in a run step to make environment variables accessible globally
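To make the two methods concrete, here is a small sketch modelled on the snippets shown earlier in this post:

    # Method 1: pass values as positional parameters to the called script,
    # as the config does with ${BASEDIR} and ${MX}; the script reads them as $1, $2
    BASEDIR=$1
    MX=$2

    # Method 2: append exports to ${BASH_ENV} and source it, so later steps see the values
    echo "export JAVA_HOME=${JDK8_JVMCI_HOME}" >> ${BASH_ENV}
    source ${BASH_ENV}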

End result and summary

We see the below screen (the last step, i.e. Uploading artifacts, lists where the artifacts have been copied) after a build has finished successfully:

The artifacts are now placed in the right folder for download. We are mainly concerned about the jdk8-with-graal.tar.gz artifact.

Performance

Before writing this post, I ran multiple passes of both the approaches and jotted down the time taken to finish the builds, which can be seen below:

Approach 1: standard CircleCI build (caching enabled)
– 13 mins 28 secs
– 13 mins 59 secs
– 14 mins 52 secs
– 10 mins 38 secs
– 10 mins 26 secs
– 10 mins 23 secs
Approach 2: using pre-built docker image (caching enabled, DLC feature unavailable)
– 13 mins 15 secs
– 15 mins 16 secs
– 15 mins 29 secs
– 15 mins 58 secs
– 10 mins 20 secs
– 9 mins 49 secs

Note: Approach 2 should show better performance when using a paid tier, as Docker Layer Caching is available as part of that plan.

Sanity check

In order to be sure that by using both the above approaches we have actually built a valid JDK embedded with the Graal compiler, we perform the following steps with the created artifact:

– Firstly, download the jdk8-with-graal.tar.gz artifact from under the Artifacts tab on the CircleCI dashboard (needs sign-in):

– Then, extract the .tar.gz file and do the following:

    tar xvf jdk8-with-graal.tar.gz

– Thereafter, run the below command to check the JDK binary is valid:

    cd jdk8-with-graal
    ./bin/java -version

– And finally check if we get the below output:

    openjdk version "1.8.0-internal"
    OpenJDK Runtime Environment (build 1.8.0-internal-jenkins_2017_07_27_20_16-b00)
    OpenJDK 64-Bit Graal:compiler_ab426fd70e30026d6988d512d5afcd3cc29cd565:compiler_ab426fd70e30026d6988d512d5afcd3cc29cd565 (build 25.71-b01-internal-jvmci-0.46, mixed mode)

– Similarly, to confirm if the JRE is valid and has the Graal compiler built in, we do this:

    ./bin/jre/java -version

– And check if we get a similar output as above:

    openjdk version "1.8.0-internal"
    OpenJDK Runtime Environment (build 1.8.0-internal-jenkins_2017_07_27_20_16-b00)
    OpenJDK 64-Bit Graal:compiler_ab426fd70e30026d6988d512d5afcd3cc29cd565:compiler_ab426fd70e30026d6988d512d5afcd3cc29cd565 (build 25.71-b01-internal-jvmci-0.46, mixed mode)

With this, we have successfully built JDK8 with the Graal compiler embedded in it and also bundled the Graal and Truffle components in an archive file, both of which are available for download via the CircleCI interface.

Note: you will notice that we also perform sanity checks on the built binaries just before packing them into compressed archives, as part of the build steps (see the bottom section of the CircleCI configuration files).

Nice badges!

We all like to show off and also like to know the current status of our build jobs. A green-coloured build status icon is a nice indication of success, which looks like the below on a markdown README page:

We can very easily embed both of these status badges displaying the build status of our project (branch-specific, i.e. master or another branch you have created) built on CircleCI (see the docs on how to do that).

Conclusions

We explored two approaches to build the Graal compiler using the CircleCI environment. They were good experiments to compare performance between the two approaches and to see how easily we can do them. We also saw a number of things to avoid or not to do, and saw how useful some of the CircleCI features are. The documentation and forums serve you well when trying to make a build work or when you get stuck with something.

Once we know the CircleCI environment, it’s pretty easy to use and always gives us the exact same response (consistent behaviour) every time we run it. Its ephemeral nature means we are guaranteed a clean environment before each run and a clean up after it finishes. We can also set up checks on build time for every step of the build, and abort a build if the time taken to finish a step surpasses the threshold time-period.

The ability to use pre-built docker images coupled with Docker Layer Caching on CircleCI can be a major performance boost (saves us build time needed to reinstall any necessary dependencies at every build). Additional performance speedups are available on CircleCI, with caching of the build steps – this again saves build time by not having to re-run the same steps if they haven’t changed.

There are a lot of useful features available on CircleCI, with plenty of documentation, and everyone on the community forum is helpful; questions are answered pretty much instantly.

Next, let’s build the same and more on another build environment/build farm – hint, hint, are you thinking the same as me? The Adopt OpenJDK build farm? We can give it a try!

Thanks and credits to Ron Powell from CircleCI for proof-reading and giving constructive feedback. 

Please do let me know if this is helpful by dropping a line in the comments below or by tweeting at @theNeomatrix369. I would also welcome feedback; see how you can reach me above. Above all, please check out the links mentioned in this post.

Useful resources

– Links to useful CircleCI docs
About Getting started | Videos
About Docker
Docker Layer Caching
About Caching
About Debugging via SSH
CircleCI cheatsheet
CircleCI Community (Discussions)
Latest community topics
– CircleCI configuration and supporting files
Approach 1: https://github.com/neomatrix369/awesome-graal/tree/build-on-circleci (config file and other supporting files i.e. scripts, directory layout, etc…)
Approach 2: https://github.com/neomatrix369/awesome-graal/tree/build-on-circleci-using-pre-built-docker-container (config file and other supporting files i.e. scripts, directory layout, etc…)
Scripts to build Graal on Linux, macOS and inside the Docker container
Truffle served in a Holy Graal: Graal and Truffle for polyglot language interpretation on the JVM
Learning to use Wholly GraalVM!
Building Wholly Graal with Truffle!


Apache Zeppelin: stairway to notes* haven! — JVM Advent 2018

*notes is for notebooks in Zeppelin lingo Introduction Continuing from the previous post, Two years in the life of AI, ML, DL and Java, where I had expressed my motivation. I mentioned about our discussions, one of the discussions was, that you can write in languages like Python, R, Julia in JuPyteR notebooks.…

via Apache Zeppelin: stairway to notes* haven! — JVM Advent 2018

Two years in the life of AI, ML, DL and Java — JVM Advent 2018

Two years in the life of AI, ML, DL and Java Citation All the images in the post are owned by the respective owners / creators / authors. Introduction AI, ML and DL are acronyms for Artificial Intelligence, Machine Learning and Deep Learning. Now back to what I was going to write about. If you…

via Two years in the life of AI, ML, DL and Java — JVM Advent 2018

Docker and the JVM — JVM Advent 2018

2014 – We must adopt #microservices to solve all problems with monoliths2016 – We must adopt #docker to solve all problems with microservices2018 – We must adopt #kubernetes to solve all problems with docker pic.twitter.com/CrnvX9Lgpq — Syed Aqueel Haider (@sahrizv) July 14, 2018 Even though Docker was a 2016 thing, it is still relevant today.…

via Docker and the JVM — JVM Advent 2018

How to harness the Powers of Cloud TPUs?

Introduction

About a couple of months ago, I got introduced to Google Colab (an enhanced version of JuPyteR notebooks) and haven't looked back since. It's got everything and more that you need as a researcher. The initial idea was to load up Python notebooks and run them on it. Soon we realised we could run those notebooks not just on the CPUs available on GCP but also on GPUs and TPUs. I also read up a bit about it from other sources; see Google Reveals Technical Specs and Business Rationale for TPU Processor (slightly dated, but definitely helpful).

Here we go now…

What is a TPU?

Just as a GPU is a graphics accelerator ASIC (built to create images quickly for output to a display device) that was long ago discovered to be useful for massive number crunching, a TPU is an AI accelerator ASIC developed by Google specifically for neural network machine learning. One of the differences is that TPUs are geared towards high-volume, low-precision computation, while GPUs target high-volume, high-precision computation (please check the Wikipedia links for the interesting differences).

Then what?

TL;DR — how the notebooks and slides came about

To understand how these devices work, what better approach could we adopt than to benchmark them and compare the results? Which is why a number of notebooks came out of these experiments. We were also, coincidentally, preparing for Google Cloud Next 2018 at the ExCeL Centre, London, and I spent an evening and a weekend preparing the slides for the talk Yaz and I were asked to give – Harnessing the Powers of Cloud TPU.

What happened first, before the other thing happened?

TL;DR — how we work at the GDG Cloud meetup events

It was a mere coincidence. While we were meeting regularly at the GDG Cloud meetups in the London chapter, Yaz, who would find interesting things to look at during the hack sessions (Pomodoro sessions, as he called them), suggested one session in which we play with TPUs and benchmark them. Actually, I remember suggesting we do this during our sessions as an idea, but then we all got distracted with other equally interesting ideas (all saved somewhere on GitLab). I then frantically started playing with two notebooks related to the GPU and TPU respectively, provided as examples by Colab, which I first got to work on the TPU and then adapted to work on the GPU (I can't remember anymore which way round it was). They were both doing slightly different things while measuring the performance of the GPU and TPU, so I decided to make them do the same thing, measure the time taken on each device, and also display details about the devices themselves (you will see this towards the top or bottom of each notebook).

CPU v/s GPU v/s TPU –  Simple benchmarking example via Google Colab

CPU v/s GPU – Simple benchmarking

The CPU v/s GPU – Simple benchmarking notebook finishes processing with the below output:

TFLOP is a bit of shorthand for “teraflop”, which is a way of 
measuring the power of a computer based more on mathematical 
capability than GHz. A teraflop refers to the capability of a 
processor to calculate one trillion floating-point operations
per second.

CPU TFlops: 0.53 
GPU speedup over CPU: 29x                    TFlops: 15.70

I was curious about the internals of the CPU and GPU so I ran some Linux commands (via Bash) that the notebooks allow (thankfully) and got these bits of info to share:

[Image: output of the CPU/GPU info commands from the simple benchmarking notebook]

You can find all those commands and the above output in the notebook as well.

CPU v/s TPU – Simple benchmarking

The TPU – Simple benchmarking notebook finishes processing with the below output:

TFLOP is a bit of shorthand for “teraflop” which is a way of 
measuring the power of a computer based more on mathematical 
capability than GHz. A teraflop refers to the capability of a 
processor to calculate one trillion floating-point operations
per second.

CPU TFlops: 0.47
TPU speedup over CPU (cold-start): 75x        TFlops: 35.47 
TPU speedup over CPU (after warm-up): 338x    TFlops: 158.91

Unfortunately, I haven’t had a chance to play around with the TPU profiler yet to learn more about the internals of this fantastic device.

While there is room for errors and inaccuracies in the above figures, you might be curious about the task used for all the runs – it's the below piece of code that has been exercising the CPU, GPU and TPU circuitry in the Simple benchmarking notebooks:

# N (matrix size) and COUNT (number of repetitions) are defined earlier in the notebook
def cpu_flops():
  x = tf.random_uniform([N, N])   # two random N x N matrices
  y = tf.random_uniform([N, N])
  def _matmul(x, y):
    # one N x N matrix multiplication (tensordot over the inner dimension)
    return tf.tensordot(x, y, axes=[[1], [0]]), y

  # repeat the multiplication COUNT times and reduce the result to a scalar
  return tf.reduce_sum(
    tf.contrib.tpu.repeat(COUNT, _matmul, [x, y])
  )
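As a rough sanity check on those numbers (my own back-of-the-envelope reasoning, not something taken from the notebooks): multiplying two N×N matrices costs roughly 2·N³ floating-point operations, so with COUNT repetitions the achieved throughput can be estimated as:

    TFLOPS ≈ (2 * N^3 * COUNT) / (elapsed time in seconds * 10^12)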

CPU v/s GPU v/s TPU – Time-series prediction via Google Colab

Running the TPU version of the Timeseries notebook gave us some issues initially, which were reported in a StackOverflow post, and a couple of good folks from the Google Cloud TPU team stepped in to help. But we managed to get the GPU version of the Time-Series Prediction notebook to work, which clearly showed a much better response than the CPU version of the same notebook – the CPU version just choked half-way through and Colab asked me if I wanted to stop the process because we needed more resources (more memory)!

Time series: GPU version

Here are snapshots of the notebook (Train the Recurrent Neural Network section); the full notebook can be found on Google Colab, and it's free to download, share and extract the Python code from.

Epoch 1/10
 9/10 [==========================>...] - ETA: 4s - loss: 0.0047WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,lr
10/10 [==============================] - 42s 4s/step - loss: 0.0048
Epoch 2/10
 9/10 [==========================>...] - ETA: 4s - loss: 0.0041WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,lr
10/10 [==============================] - 42s 4s/step - loss: 0.0041
Epoch 3/10
 9/10 [==========================>...] - ETA: 4s - loss: 0.0047WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,lr
10/10 [==============================] - 42s 4s/step - loss: 0.0046
Epoch 4/10
 9/10 [==========================>...] - ETA: 4s - loss: 0.0039WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,lr
10/10 [==============================] - 43s 4s/step - loss: 0.0039
Epoch 5/10
 9/10 [==========================>...] - ETA: 4s - loss: 0.0048WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,lr
10/10 [==============================] - 44s 4s/step - loss: 0.0048
Epoch 6/10
 9/10 [==========================>...] - ETA: 4s - loss: 0.0036WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,lr
10/10 [==============================] - 44s 4s/step - loss: 0.0037
Epoch 7/10
 9/10 [==========================>...] - ETA: 4s - loss: 0.0042WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,lr
10/10 [==============================] - 44s 4s/step - loss: 0.0041
Epoch 8/10
 9/10 [==========================>...] - ETA: 4s - loss: 0.0037WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,lr
10/10 [==============================] - 44s 4s/step - loss: 0.0038
Epoch 9/10
 9/10 [==========================>...] - ETA: 4s - loss: 0.0040WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,lr
10/10 [==============================] - 44s 4s/step - loss: 0.0039
Epoch 10/10
 9/10 [==========================>...] - ETA: 4s - loss: 0.0036WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,lr
10/10 [==============================] - 44s 4s/step - loss: 0.0035
CPU times: user 10min 8s, sys: 1min 22s, total: 11min 30s
Wall time: 7min 12s

The above run takes about ~7 mins (or total time of ~8 mins) on the Google Colab GPU:

CPU times: user 10min 8s, sys: 1min 22s, total: 11min 30s

Wall time: 7min 12s

I'm still unsure how to interpret these time-related stats (wall time is the elapsed clock time, while the CPU times add up the processor time used), but I will take ~7 mins as our execution time up to this point.

We finish with the following stats:

[Screenshot: final stats of the GPU Time-series run]

So now you can see why I earlier chose ~7 mins as the execution time. It takes about 7 minutes to process this notebook – giving new predictions of temperature, pressure and wind speed and comparing them with the actual values (true values gathered from past observations).

Time series: TPU version

Here are snapshots of the notebook (Train the Recurrent Neural Network section); the full notebook can be found on Google Colab, and it's free to download, share and extract the Python code from.

Found TPU at: grpc://10.118.17.162:8470
INFO:tensorflow:Querying Tensorflow master (b'grpc://10.118.17.162:8470') for TPU system metadata.
INFO:tensorflow:Found TPU system:
INFO:tensorflow:*** Num TPU Cores: 8
INFO:tensorflow:*** Num TPU Workers: 1
INFO:tensorflow:*** Num TPU Cores Per Worker: 8
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:CPU:0, CPU, -1, 11845881175500857789)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 17179869184, 5923571607183194652)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:XLA_GPU:0, XLA_GPU, 17179869184, 11085218230396215841)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:0, TPU, 17179869184, 12636361223481337501)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:1, TPU, 17179869184, 14151025931657390984)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:2, TPU, 17179869184, 16816909163217742616)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:3, TPU, 17179869184, 4327750408753767066)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:4, TPU, 17179869184, 504271688162314774)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:5, TPU, 17179869184, 14356678784461051119)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:6, TPU, 17179869184, 6767339384180187426)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:7, TPU, 17179869184, 1879489006510593388)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU_SYSTEM:0, TPU_SYSTEM, 17179869184, 17850015066511710434)
WARNING:tensorflow:tpu_model (from tensorflow.contrib.tpu.python.tpu.keras_support) is experimental and may change or be removed at any time, and without warning.
Epoch 1/10
INFO:tensorflow:New input shapes; (re-)compiling: mode=train (# of cores 8), [TensorSpec(shape=(32,), dtype=tf.int32, name='core_id0'), TensorSpec(shape=(32, 1344, 20), dtype=tf.float32, name='input_10'), TensorSpec(shape=(32, 1344, 3), dtype=tf.float32, name='Dense-2_target_30')]
INFO:tensorflow:Overriding default placeholder.
INFO:tensorflow:Remapping placeholder for input
INFO:tensorflow:Started compiling
INFO:tensorflow:Finished compiling. Time elapsed: 3.394456386566162 secs
INFO:tensorflow:Setting weights on TPU model.
 9/10 [==========================>...] - ETA: 1s - loss: 0.0112WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
10/10 [==============================] - 14s 1s/step - loss: 0.0115
Epoch 2/10
 9/10 [==========================>...] - ETA: 0s - loss: 0.0183WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
10/10 [==============================] - 5s 501ms/step - loss: 0.0187
Epoch 3/10
 9/10 [==========================>...] - ETA: 0s - loss: 0.0260WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
10/10 [==============================] - 5s 497ms/step - loss: 0.0264
Epoch 4/10
 9/10 [==========================>...] - ETA: 0s - loss: 0.0324WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
10/10 [==============================] - 5s 496ms/step - loss: 0.0327
Epoch 5/10
 9/10 [==========================>...] - ETA: 0s - loss: 0.0374WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
10/10 [==============================] - 5s 470ms/step - loss: 0.0376
Epoch 6/10
 9/10 [==========================>...] - ETA: 0s - loss: 0.0378WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
10/10 [==============================] - 5s 471ms/step - loss: 0.0372
Epoch 7/10
 9/10 [==========================>...] - ETA: 0s - loss: 0.0220WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
10/10 [==============================] - 5s 489ms/step - loss: 0.0211
Epoch 8/10
 9/10 [==========================>...] - ETA: 0s - loss: 0.0084WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
10/10 [==============================] - 5s 493ms/step - loss: 0.0081
Epoch 9/10
 9/10 [==========================>...] - ETA: 0s - loss: 0.0044WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
10/10 [==============================] - 5s 495ms/step - loss: 0.0044
Epoch 10/10
 9/10 [==========================>...] - ETA: 0s - loss: 0.0040WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss
WARNING:tensorflow:Can save best model only with val_loss available, skipping.
10/10 [==============================] - 5s 491ms/step - loss: 0.0040
CPU times: user 10.8 s, sys: 2.7 s, total: 13.5 s
Wall time: 1min 6s

There are a few things still to work on in the notebook, one of them being getting rid of the warnings appearing during training on the TPU. As per our previous analysis, running on TPUs should be way faster than on GPUs, and, to correct my earlier doubt, TPUs are indeed faster than GPUs even when running the Timeseries TPU version (I aligned the two notebooks, i.e. the GPU and TPU versions of the Timeseries notebook, and re-ran the two experiments). We haven't yet been able to execute the code cells all the way to the end of the notebook, due to errors in the input shape which need fixing and re-running. All of these look like good learning opportunities for me and everyone else. Our new results are more promising, as you can see from above, and the whole notebook took just ~2 minutes to finish running – many times faster than the GPU version (see below).

[Screenshot: final stats of the TPU Time-series run]

Observations

We have now seen the final outcome of the TPU version of the Timeseries notebook, and the initial Simple benchmarking examples helped us make the below observations:

TPUs are ~85x to ~312x faster than CPUs, and GPUs are ~30x faster than CPUs

which also means that

TPUs are ~3x to ~10x faster than GPUs, which in turn are ~30x faster than CPUs

Some graphs plotting the speeds in teraflops and times between CPU v/s GPU v/s TPU:

[Chart: CPU v/s GPU v/s TPU – teraflops]

[Chart: CPU v/s GPU v/s TPU – speed-up times]

Note: during some runs on GCP, the above numbers came out higher than noted here, so please be aware of that as well. Maybe the notebooks took advantage of some improvements in the GCP infrastructure.

The different devices ran this simple task (block of code mentioned in the previous section) at different speeds, for a more complex task, the numbers would definitely be different, although we believe that their relative performances shouldn’t deviate too much.

Conclusion

Also, I want to thank Yaz for being super-encouraging at all times during this whole process, including for the presentation at Google Cloud Next 2018. Not forgetting Claudio, who contributed quite a bit to the TPU version of the Timeseries notebook whilst we were debugging it.

In the end, it was all great, and while we won't solve all the problems of humanity just yet, for now we have pretty much finished working on the TPU version of the Timeseries notebook. I welcome everyone to take a jab at it and see if you want to improve it further or experiment with it. Please hit back with your feedback and/or contributions in any case.

(Third generation TPU at the Google Data centre: TPU 3.0)

Have a read of what others (e.g. Jeff Hale's article) are saying about the different PUs on various cloud providers, and you can see Google Cloud leading in many such areas.

Be ready for some more notebook fests coming up in the form of more blog posts in the near to distant future. Please share your comments, feedback or any contributions to @theNeomatrix369, you can find more about me via the About me page.

Resources

Citations

Credits to all the images embedded (including the feature image) on this post go to the respective authors/ creators/owners of the images.

 

Digital Catapult | Machine Intelligence Garage: the best-kept secret yet in the open

Introduction

I was at a meetup when Simon Knowles, CTO of Graphcore, was giving his talk on the latest developments at Graphcore, and that is also where I met Peter Bloomfield from Digital Catapult (@digicatapult). Peter was spreading the word about Machine Intelligence Garage, which is an amazing opportunity created by a collaboration between the government and industry leaders like Google, Nvidia, AWS, etcetera, to help startups and small businesses access compute resources which they would otherwise never have been able to get hold of.

Digital Catapult - Get Involved

Our conversation

Sometime later, we decided to have a chat discussing the usual questions: what is Machine Intelligence Garage, what is the history, why, how, who, when, at what stage is the programme, and how do people get involved? As you would imagine, I found our conversation interesting and informative, and hence decided to write about it and share it with the rest of us.

Mani: Hey, Peter great meeting you and learning about the initiative from Digital Catapult, can you please share with me the history about this initiative?

Peter: Hi Mani, thanks for dropping in! Our programme, Machine Intelligence Garage was born from a piece of research we conducted last summer. In this report, we explore barriers facing AI startups, and we wanted to test the hypotheses that: Access to the right data, technical talent and adequate computational resources were the main things holding startups back. We collaborate a bit on the first two barriers with government and academic institutes, like the Turing, but the Machine Intelligence Garage programme was designed to provide startups with access to cloud computing vouchers, novel chipsets and HPC facilities. It is through our brilliant partners that we can offer these resources.

Mani: Are there other sister and daughter initiatives or programs, related to Digital Catapult, that everyone would benefit knowing about?

Peter: We have a whole range of digital programmes, across three core tech layers (future networks, AI/ML and Immersive tech as well as some cybersecurity and blockchain initiatives). The full details of our opportunities can be found through the ‘Get Involved’ section of our website.

Mani: Who have been and are going to benefit from this setup that you have in place?

Peter: Our Machine Intelligence Garage programme is designed to benefit early-stage startups who are data ready and need compute power to scale faster. Our collaborative programmes provide opportunities for larger corporations to get involved with startups to address industry-specific challenges.

Mani: How much does this access cost and for how long are they available for access?

Peter: Everything we provide on the Machine Intelligence Garage programme is free to use. The programme was set up using public funding from InnovateUK and CAP.AI. We are able to deliver the compute resources through our work with a wonderful set of partners.

Mani: Can you tell us the process from start to finish?

Peter: When I meet a new company, I have a chat with them about the things they are trying to do, the infrastructure they currently use and the sorts of ML approaches they are using to solve the problem. If the company needs our support, is developing a product with commercial viability and has both strong technical skills and domain expertise, I encourage them to apply. The application form asks questions about the product, training data and compute power requirements. If we like the idea, we invite the company to an interview and, if successful, onboard them with the most suitable resource. The process generally takes 3 weeks from the close of the application call to onboarding with a resource.

gathered-around-table-classic

Mani: What other benefits do startups get from being involved? Are you able to introduce them to partners who can help them run trials and give feedback on the products or services they are building?

Peter: We have a large network that we encourage all our companies to take advantage of. Digital Catapult is an innovation centre and meeting the right people at the right time is key to success for many startups. We run a range of workshops, from business growth and pitch training to deep dives to learn more about technical resources and we make sure our startups benefit from all of these. If a startup wants to put some of the new knowledge into practice we are always very keen to facilitate it!

Mani: This is a lot of information, are there any resources on your website that can help. Anything to sign up to, to keep in touch?

Peter: We have a general technology Digital Catapult newsletter (sign-up form at the bottom of the page) and an AI specific Machine Intelligence Garage newsletter (sign-up form at the bottom of the page). If a startup wants to chat about the programme, they can send me an e-mail: peter.bloomfield@digicatapult.org.uk. We announce all our calls and opportunity through our twitter account too @DigiCatapult.

Mani: Can you please touch on the specifics of what the startups will get access to and how it can benefit them?

Peter: We have three main resources available:

  • Cloud Computing vouchers, either through AWS or Google Cloud Platform

To find out the exact amounts and specifics of access, please do get in touch.

Mani: If someone needed to find out about the benchmarks between different compute resources available to the participants, who or where would they look for the information? Do you have a team that does these measurements on a daily basis?

Peter: We do our own benchmarking of the facilities available and our data engineer on the programme can advise on this, as well as point you in the right direction for more literature!

immersive-lab-entrance

Mani: I have been to the Digital Catapult HQ at 101 Euston Road, and was blown away by all the tech activities happening there. Can you please share details about it with our readers?

Peter: Digital Catapult was set up four years ago and is the UK’s leading innovation centre for advanced digital technologies.  We have seen a number of changes over the years but our core values of opening up markets and making businesses more competitive and productive remain. We are incredibly lucky to have some amazing facilities to help companies develop new products and services and get their products to market faster, including a nationwide network of Immersive Labs [see launch photos, photos in 2018], an LPWAN network and the new 5G Brighton Testbed.

Mani: Can you name a few startups that are currently going through your programs and the ones who have already been through it?

Peter: We currently have 25 start-ups on our programme. The teams are a range of sizes, some just 2 people, others 20+. The thing they all have in common is that they are developing some really exciting commercial products/solutions with deep learning and have an immediate need for the computational resources we offer. The full list of start-ups can be found on our cohort page on the Machine Intelligence Garage website.

Mani: I really appreciate the time you have taken to answer my questions and this has definitely helped the readers know more about what you do and how they can benefit from this great government-driven initiative.

Peter: Thank you very much for coming in. It’s great to be able to reach a wider audience and grow our community! See you soon!


Closing note

I was shown around a number of facilities at their centre (two floors) i.e. the Immersive lab (yes plenty of VR headsets to play with), the server area where all the HPC hardware is kept (at low room temperature), a spacious conference room where meetups are held, a small library full of interesting books and also a hot-desking area shared by both internal staff, partners and friends of Digital Catapult/Machine Intelligence Garage. Looking at the two websites I found the news and views, events and workshops and Digital Catapult | MI Garage blog sites interesting to keep track of activities in this space.

immersive-lab-man-headgear

I'm sure, after reading about the conversation, you must be wondering how you could take advantage of these facilities, which are out there for you and for those in your network who could benefit from them.

Readers should go to the links mentioned above to learn about this program and how they can go about taking advantage of it, or recommend it to their friends in the community who would be more suitable for it.

Please do let me know if this is helpful by dropping a line in the comments below. I would also welcome feedback; see how you can reach me above. Above all, please check out the links mentioned above and also reach out to the folks behind this great initiative.

 

Building Wholly Graal with Truffle!

feature-image-building-graal-and-truffle

Citation: credit for the feature image goes to David Luders; it is reused under a CC license, and the original image can be found on this Flickr page.

Introduction

It has been some time since the two posts [1][2] on Graal/GraalVM/Truffle, and a general request has been: when are you going to write something about "how to build" this awesome thing called Graal? Technically, we will be building a replacement for HotSpot's C2 compiler (look for C2 in the glossary list), called Graal. This binary is different from the GraalVM suite you download from OTN via http://graalvm.org/downloads.

I wasn't just going to stop at the first couple of posts on this technology. In fact, one of the best ways to learn and get an in-depth idea about any tech work is to know how to build it.

Getting Started

Building JVMCI for JDK8, Graal and Truffle is fairly simple, and the instructions are available on the graal repo. We will be running them on both the local (Linux, MacOS) and container (Docker) environments. To capture the process as-code, they have been written in bash, see https://github.com/neomatrix369/awesome-graal/tree/master/build/x86_64/linux_macos.

During the process of writing the scripts and testing them on various environments, there were some issues, but these were soon resolved with the help of members of the Graal team – thanks Doug.

Running scripts

Documentation on how to run the scripts is provided in the README.md on awesome-graal. For each of the build environments, it is merely a single command:

Linux & MacOS

$ ./local-build.sh

$ RUN_TESTS=false           ./local-build.sh

$ OUTPUT_DIR=/another/path/ ./local-build.sh

Docker

$ ./docker-build.sh

$ DEBUG=true                ./docker-build.sh

$ RUN_TESTS=false           ./docker-build.sh

$ OUTPUT_DIR=/another/path/ ./docker-build.sh

Both the local and docker scripts pass in the environment variables i.e. RUN_TESTS and OUTPUT_DIR to the underlying commands. Debugging the docker container is also possible by setting the DEBUG environment variable.

For a better understanding of how they work, best to refer to the local and docker scripts in the repo.
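For illustration, passing those variables into the container can be done with docker's -e flags. Here is a minimal sketch, not necessarily how docker-build.sh is implemented (the image name below is a placeholder):

$ docker run --rm \
    -e RUN_TESTS -e OUTPUT_DIR -e DEBUG \
    -v "$(pwd)/output:/home/graal/output" \
    your-graal-build-image ./local-build.sh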

Build logs

I have provided build logs for the respective environments in the build/x86_64/linux_macos folder of https://github.com/neomatrix369/awesome-graal/.

Once the build is completed successfully, the following messages are shown:

[snipped]

>> Creating /path/to/awesome-graal/build/x86_64/linux/jdk8-with-graal from /path/to/awesome-graal/build/x86_64/linux/graal-jvmci-8/jdk1.8.0_144/linux-amd64/product
Copying /path/to/awesome-graal/build/x86_64/linux/graal/compiler/mxbuild/dists/graal.jar to /path/to/awesome-graal/build/x86_64/linux/jdk8-with-graal/jre/lib/jvmci
Copying /path/to/awesome-graal/build/x86_64/linux/graal/compiler/mxbuild/dists/graal-management.jar to /path/to/awesome-graal/build/x86_64/linux/jdk8-with-graal/jre/lib/jvmci
Copying /path/to/awesome-graal/build/x86_64/linux/graal/sdk/mxbuild/dists/graal-sdk.jar to /path/to/awesome-graal/build/x86_64/linux/jdk8-with-graal/jre/lib/boot
Copying /path/to/awesome-graal/build/x86_64/linux/graal/truffle/mxbuild/dists/truffle-api.jar to /path/to/awesome-graal/build/x86_64/linux/jdk8-with-graal/jre/lib/truffle

>>> All good, now pick your JDK from /path/to/awesome-graal/build/x86_64/linux/jdk8-with-graal :-)

Creating Archive and SHA of the newly JDK8 with Graal & Truffle at /home/graal/jdk8-with-graal
Creating Archive jdk8-with-graal.tar.gz
Creating a sha5 hash from jdk8-with-graal.tar.gz
jdk8-with-graal.tar.gz and jdk8-with-graal.tar.gz.sha256sum.txt have been successfully created in the /home/graal/output folder.

Artifacts

All the Graal and Truffle artifacts are created in the graal/compiler/mxbuild/dists/ folder and copied to the newly built jdk8-with-graal folder, both of these will be present in the folder where the build.sh script resides:

jdk8-with-graal/jre/lib/jvmci/graal.jar
jdk8-with-graal/jre/lib/jvmci/graal-management.jar
jdk8-with-graal/jre/lib/boot/graal-sdk.jar
jdk8-with-graal/jre/lib/truffle/truffle-api.jar

In short, we started off with a vanilla JDK8 (JAVA_HOME) and, via the build script, created an enhanced JDK8 with Graal and Truffle embedded in it. At the end of a successful build process, the script will create a .tar.gz archive file in the jdk8-with-graal-local folder; alongside this file you will also find the SHA-256 checksum of the archive.

In the case of a Docker build, the same folder is called jdk8-with-graal-docker and, in addition to the above-mentioned files, it will also contain the build logs.
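The archiving step itself boils down to creating the tarball and its checksum; a minimal sketch of the equivalent commands (the actual script in the repo handles paths and platform differences):

$ tar -czf jdk8-with-graal.tar.gz jdk8-with-graal
$ sha256sum jdk8-with-graal.tar.gz > jdk8-with-graal.tar.gz.sha256sum.txt   # shasum -a 256 on MacOS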

Running unit tests

Running unit tests is a simple command:

mx --java-home /path/to/jdk8 unittest

This step should follow the moment we have a successfully built artifact in the jdk8-with-graal-local folder. The below messages indicate a successful run of the unit tests:

>>>> Running unit tests...
Warning: 1049 classes in /home/graal/mx/mxbuild/dists/mx-micro-benchmarks.jar skipped as their class file version is not supported by FindClassesByAnnotatedMethods
Warning: 401 classes in /home/graal/mx/mxbuild/dists/mx-jacoco-report.jar skipped as their class file version is not supported by FindClassesByAnnotatedMethods
WARNING: Unsupported class files listed in /home/graal/graal-jvmci-8/mxbuild/unittest/mx-micro-benchmarks.jar.jdk1.8.excludedclasses
WARNING: Unsupported class files listed in /home/graal/graal-jvmci-8/mxbuild/unittest/mx-jacoco-report.jar.jdk1.8.excludedclasses
MxJUnitCore
JUnit version 4.12
............................................................................................
Time: 5.334

OK (92 tests)

JDK differences

So what have we got that's different from the JDK we started with? If we compare the boot JDK with the final JDK, here are the differences:

A combination of diff between $JAVA_HOME and jdk8-with-graal, plus meld, gives the following:

JDKversusGraalJDKDiff-02

JDKversusGraalJDKDiff-01

diff -y --suppress-common-lines $JAVA_HOME jdk8-with-graal | less
meld $JAVA_HOME ./jdk8-with-graal

Note: $JAVA_HOME points to your JDK8 boot JDK.

Build execution time

The build execution time was captured on both Linux and MacOS and there was a small difference between running tests and not running tests:

Running the build with or without tests on a quad-core, with hyper-threading:

 real 4m4.390s
 user 15m40.900s
 sys 1m20.386s
 ^^^^^^^^^^^^^^^
 user + sys = 17m1.286s (17 minutes 1.286 second)

Similarly, running the build with and without tests on a dual-core MacOS with 4GB RAM and an SSD drive differs little:

 real 9m58.106s
 user 18m54.698s 
 sys 2m31.142s
 ^^^^^^^^^^^^^^^ 
 user + sys = 21m25.84s (21 minutes 25.84 seconds)

Disclaimer: these measurements can certainly vary across the different environments and configurations. If you have a more accurate way to benchmark such running processes, please do share back.

Summary

In this post, we saw how we can build Graal and Truffle for JDK8 on both local and container environments.

The next thing we will do is build them on a build farm provided by Adopt OpenJDK. We will be able to run them across multiple platforms and operating systems, including building inside docker containers. This binary is different from the GraalVM suite you download from OTN via http://graalvm.org/downloads, hopefully we will be able to cover GraalVM in a future post.

Thanks to Julien Ponge for making his build script available for re-use, and to the Graal team for their support during the writing of this post.

Feel free to share your feedback at @theNeomatrix369. Pull requests with improvements and best-practices are welcome at https://github.com/neomatrix369/awesome-graal.