Building Wholly Graal with Truffle!

[Feature image: building Graal and Truffle]

Citation: credit for the feature image goes to David Luders; it is reused under a CC license, and the original image can be found on this Flickr page.

Introduction

It has been some time since the two posts [1][2] on Graal/GraalVM/Truffle, and a common request has been to write something about “how to build” this awesome thing called Graal. Technically, we will be building a replacement for HotSpot’s C2 compiler (look for C2 in the glossary list), called Graal. This binary is different from the GraalVM suite you download from OTN via http://graalvm.org/downloads.

I wasn’t just going to stop at the first couple of posts on this technology. In fact, one of the best ways to learn and get an in-depth idea about any piece of tech is to know how to build it.

Getting Started

Building JVMCI for JDK8, Graal and Truffle is fairly simple, and the instructions are available on the graal repo. We will be running them in both local (Linux, MacOS) and container (Docker) environments. To capture the process as-code, the steps have been written in bash; see https://github.com/neomatrix369/awesome-graal/tree/master/build/x86_64/linux_macos.
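
For example, to fetch the scripts and kick off a local build, something along these lines should work (the folder path below is assumed from the repository layout linked above):

$ git clone https://github.com/neomatrix369/awesome-graal
$ cd awesome-graal/build/x86_64/linux_macos
$ ./local-build.sh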

During the process of writing the scripts and testing them on various environments, there were some issues, but these were soon resolved with the help of members of the Graal team — thanks Doug.

Running scripts

Documentation on how to run the scripts is provided in the README.md on awesome-graal. For each build environment it is merely a single command:

Linux & MacOS

$ ./local-build.sh

$ RUN_TESTS=false           ./local-build.sh

$ OUTPUT_DIR=/another/path/ ./local-build.sh

Docker

$ ./docker-build.sh

$ DEBUG=true                ./docker-build.sh

$ RUN_TESTS=false           ./docker-build.sh

$ OUTPUT_DIR=/another/path/ ./docker-build.sh

Both the local and docker scripts pass the environment variables RUN_TESTS and OUTPUT_DIR on to the underlying commands. Debugging the docker container is also possible by setting the DEBUG environment variable.
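
The variables can also be combined in a single invocation, for example:

$ DEBUG=true RUN_TESTS=false OUTPUT_DIR=/another/path/ ./docker-build.sh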

For a better understanding of how they work, it is best to refer to the local and docker scripts in the repo.

Build logs

I have provided build logs for the respective environments in the build/x86_64/linux_macos folder of https://github.com/neomatrix369/awesome-graal/.

Once the build is completed successfully, the following messages are shown:

[snipped]

>> Creating /path/to/awesome-graal/build/x86_64/linux/jdk8-with-graal from /path/to/awesome-graal/build/x86_64/linux/graal-jvmci-8/jdk1.8.0_144/linux-amd64/product
Copying /path/to/awesome-graal/build/x86_64/linux/graal/compiler/mxbuild/dists/graal.jar to /path/to/awesome-graal/build/x86_64/linux/jdk8-with-graal/jre/lib/jvmci
Copying /path/to/awesome-graal/build/x86_64/linux/graal/compiler/mxbuild/dists/graal-management.jar to /path/to/awesome-graal/build/x86_64/linux/jdk8-with-graal/jre/lib/jvmci
Copying /path/to/awesome-graal/build/x86_64/linux/graal/sdk/mxbuild/dists/graal-sdk.jar to /path/to/awesome-graal/build/x86_64/linux/jdk8-with-graal/jre/lib/boot
Copying /path/to/awesome-graal/build/x86_64/linux/graal/truffle/mxbuild/dists/truffle-api.jar to /path/to/awesome-graal/build/x86_64/linux/jdk8-with-graal/jre/lib/truffle

>>> All good, now pick your JDK from /path/to/awesome-graal/build/x86_64/linux/jdk8-with-graal :-)

Creating Archive and SHA of the newly JDK8 with Graal & Truffle at /home/graal/jdk8-with-graal
Creating Archive jdk8-with-graal.tar.gz
Creating a sha5 hash from jdk8-with-graal.tar.gz
jdk8-with-graal.tar.gz and jdk8-with-graal.tar.gz.sha256sum.txt have been successfully created in the /home/graal/output folder.

Artifacts

All the Graal and Truffle artifacts are created in the graal/compiler/mxbuild/dists/ folder and copied to the newly built jdk8-with-graal folder; both of these will be present in the folder where the build.sh script resides:

jdk8-with-graal/jre/lib/jvmci/graal.jar
jdk8-with-graal/jre/lib/jvmci/graal-management.jar
jdk8-with-graal/jre/lib/boot/graal-sdk.jar
jdk8-with-graal/jre/lib/truffle/truffle-api.jar

In short, we started off with a vanilla JDK8 (JAVA_HOME) and, via the build script, created an enhanced JDK8 with Graal and Truffle embedded in it. At the end of a successful build process, the script will create a .tar.gz archive file in the jdk8-with-graal-local folder; alongside this file you will also find the SHA-256 hash of the archive.
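
To sanity-check the result, you could verify the hash and then ask the newly built JDK to use the Graal (JVMCI) compiler; this is only a sketch, assuming the checksum file is in standard sha256sum format, and using the usual JVMCI flags of a JDK8+JVMCI build:

$ cd jdk8-with-graal-local
$ sha256sum -c jdk8-with-graal.tar.gz.sha256sum.txt
$ tar -xzf jdk8-with-graal.tar.gz
$ ./jdk8-with-graal/bin/java -XX:+UnlockExperimentalVMOptions -XX:+EnableJVMCI -XX:+UseJVMCICompiler -version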

In the case of a Docker build, the same folder is called jdk8-with-graal-docker and, in addition to the above-mentioned files, it will also contain the build logs.

Running unit tests

Running unit tests is a simple command:

mx --java-home /path/to/jdk8 unittest

This step should follow as soon as we have a successfully built artifact in the jdk8-with-graal-local folder. The below messages indicate a successful run of the unit tests:

>>>> Running unit tests...
Warning: 1049 classes in /home/graal/mx/mxbuild/dists/mx-micro-benchmarks.jar skipped as their class file version is not supported by FindClassesByAnnotatedMethods
Warning: 401 classes in /home/graal/mx/mxbuild/dists/mx-jacoco-report.jar skipped as their class file version is not supported by FindClassesByAnnotatedMethods
WARNING: Unsupported class files listed in /home/graal/graal-jvmci-8/mxbuild/unittest/mx-micro-benchmarks.jar.jdk1.8.excludedclasses
WARNING: Unsupported class files listed in /home/graal/graal-jvmci-8/mxbuild/unittest/mx-jacoco-report.jar.jdk1.8.excludedclasses
MxJUnitCore
JUnit version 4.12
............................................................................................
Time: 5.334

OK (92 tests)

JDK differences

So what have we got that’s different from the JDK we started with? If we compare the boot JDK with the final JDK, here are the differences:

A combination of diff between $JAVA_HOME and jdk8-with-graal, viewed alongside meld, gives the comparison shown in the two screenshots below:

[Screenshot: JDKversusGraalJDKDiff-02]

[Screenshot: JDKversusGraalJDKDiff-01]

The commands used were:

diff -y --suppress-common-lines $JAVA_HOME jdk8-with-graal | less
meld $JAVA_HOME ./jdk8-with-graal

Note: $JAVA_HOME points to your JDK8 boot JDK.

Build execution time

The build execution time was captured on both Linux and MacOS and there was a small difference between running tests and not running tests:

Running the build with or without tests on a quad-core machine with hyper-threading:

 real 4m4.390s
 user 15m40.900s
 sys 1m20.386s
 ^^^^^^^^^^^^^^^
 user + sys = 17m1.286s (17 minutes 1.286 seconds)

Similarly, running the build with or without tests on a dual-core MacOS machine with 4GB RAM and an SSD drive differs little:

 real 9m58.106s
 user 18m54.698s 
 sys 2m31.142s
 ^^^^^^^^^^^^^^^ 
 user + sys = 21m25.84s (21 minutes 25.84 seconds)

Disclaimer: these measurements can certainly vary across different environments and configurations. If you have a more accurate way to benchmark such running processes, please do share back.
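
For slightly more detail on Linux, GNU time reports peak memory usage alongside the wall-clock, user and sys figures (note this is /usr/bin/time, not the shell built-in; MacOS ships a BSD time with different flags):

$ /usr/bin/time -v ./local-build.sh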

Summary

In this post, we saw how we can build Graal and Truffle for JDK8 on both local and container environments.

The next thing we will do is build them on a build farm provided by Adopt OpenJDK. We will be able to run the builds across multiple platforms and operating systems, including inside docker containers. This binary is different from the GraalVM suite you download from OTN via http://graalvm.org/downloads; hopefully we will be able to cover GraalVM in a future post.

Thanks to Julien Ponge for making his build script available for re-use, and to the Graal team for their support during the writing of this post.

Feel free to share your feedback at @theNeomatrix369. Pull requests with improvements and best-practices are welcome at https://github.com/neomatrix369/awesome-graal.


Containers all the way through…

In this post I will attempt to cover the fundamentals of Bare Metal Systems, Virtual Systems and Container Systems. The purpose is to learn what these systems are and how they differ, focusing on how they execute programs in their respective environments.

Bare metal systems

Let’s think of our Bare Metal Systems as the desktops and laptops we use on a daily basis (or even servers in server rooms and data centres). They consist of the following components:

  • the hardware (outer physical layer)
  • the OS platform (running inside the hardware)
  • the programs running on the OS (as processes)

Programs are stored on the hard drive in the form of executable files (a format understandable by the OS) and loaded into memory as one or more processes. Programs interact with the kernel, which forms a core part of the OS architecture, and with the hardware. The OS coordinates communication between the hardware, i.e. CPU, I/O devices, memory, etc., and the programs.
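
As a tiny illustration of the program-versus-process distinction, you can start a program from its executable file and ask the OS about the process it created for it (Linux shell, purely illustrative):

$ sleep 60 &                     # the 'sleep' executable is loaded into memory as a new process
$ ps -o pid,ppid,comm -p $!      # the kernel reports the pid, parent pid and command of that process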

 

[Diagram: Bare Metal Systems]

A more detailed explanation of what programs or executables are, how programs execute, and where an Operating System comes into play can be found on this Stackoverflow page [2].

Virtual systems

On the other hand, Virtual Systems, with the help of virtualisation controllers like VirtualBox, VMware or a hypervisor [1], run an operating system on top of a bare metal system. These systems emulate bare-metal hardware as software abstraction(s) inside which we run the real OS platform. Such systems, also referred to as Virtual Machines (VMs), are made up of the following layers:

  • a software abstraction of the hardware (Virtual Machine)
  • the OS platform running inside the software abstraction (guest OS)
  • one or more programs running in the guest OS (processes)

It’s like running a computer (abstracted as software) inside another computer. The rest of the fundamentals from the Bare Metal System apply to this abstraction layer as well. When a process is created inside the Virtual System, the host OS which runs the Virtual System might also spawn one or more processes.

[Diagram: Virtual Systems]

Container systems

Now looking at Container Systems we can say the following:

  • they run on top of OS platforms running inside Bare Metal Systems or Virtual Systems
  • containers allow isolating processes from one another while sharing the same kernel (such isolation from other processes and resources is possible in some OSes, like Linux, thanks to OS kernel features such as cgroups [3] and namespaces [4] — a quick demonstration follows the layers list below)

A container creates an OS-like environment inside which one or more programs can be executed. Each of these executions could result in one or more processes on the host OS. Container Systems are composed of these layers:

  • hardware (accessible via kernel features)
  • the OS platform (shared kernel)
  • one or more programs running inside the container (as processes)
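
A quick way to see this process isolation in action, assuming Docker is installed (the image name is just an example):

$ docker run --rm alpine ps      # inside the container only the container's own processes are visible
$ ps -e | wc -l                  # on the host, the full process list is much longer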

[Diagram: Container Systems]

Summary

Looking at these enclosures or rounded rectangles within each other, we can already see how it is containers all the way through.

[Diagram: Bare Metal Systems, Virtual Systems and Container Systems drawn as nested layers]

There are a number of distinctions between Bare Metal Systems, Virtual Systems and Container Systems. While Virtual Systems encapsulate the Operating System inside a thick hardware virtualisation layer, Container Systems do something similar but with a much thinner virtualisation layer.

There are a number of pros and cons between these systems when we look at them individually, i.e. portability, performance, resource consumption, time to recreate such systems, maintenance, et al.

Word of thanks and stay in touch

Thank you for your time; feel free to send your queries and comments to @theNeomatrix369. Big thanks to my colleague and our DevOps craftsman, Robert Firek from Codurance, for proof-reading my post and steering me in the right direction.

Resources

Why not build #OpenJDK 9 using #Docker ? – Part 2 of 2

…continuing from Why not build #OpenJDK 9 using #Docker ? – Part 1 of 2.

I ran into a number of issues, and you can see from my commits how I pulled myself out of them. To run this Dockerfile from the command-line I used this instruction:

$ docker build -t neomatrix369/openjdk9 .

You can also do it using the below if you have not set up your permissions:

$ sudo docker build -t neomatrix369/openjdk9 .

and get the below (summarised) output:

Sending build context to Docker daemon 3.072 kB
Sending build context to Docker daemon 
Step 0 : FROM phusion/baseimage:latest
 ---> 5a14c1498ff4
Step 1 : MAINTAINER Mani Sarkar (from @adoptopenjdk)
 ---> Using cache
 ---> 95e30b7f52b9
Step 2 : RUN apt-get update &&   apt-get install -y     libxt-dev zip pkg-config libX11-dev libxext-dev     libxrender-dev libxtst-dev libasound2-dev libcups2-dev libfreetype6-dev &&   rm -rf /var/lib/apt/lists/*
 ---> Using cache
 ---> 1ea3bbb15c2d
Step 3 : RUN apt-get update
 ---> Using cache
 ---> 6c3938f4d23d
Step 4 : RUN apt-get install -y mercurial ca-certificates-java build-essential
 ---> Using cache
 ---> e3f99b5a3bd3
Step 5 : RUN cd /tmp &&   hg clone http://hg.openjdk.java.net/jdk9/jdk9 openjdk9 &&   cd openjdk9 &&   sh ./get_source.sh
 ---> Using cache
 ---> 26cfaf16b9fa
Step 6 : RUN apt-get install -y wget &&   wget --no-check-certificate --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u45-b14/jdk-8u45-linux-x64.tar.gz
 ---> Using cache
 ---> 696889250fed
Step 7 : RUN tar zxvf jdk-8u45-linux-x64.tar.gz -C /opt
 ---> Using cache
 ---> c25cc9201c1b
Step 8 : RUN cd /tmp/openjdk9 &&   bash ./configure --with-cacerts-file=/etc/ssl/certs/java/cacerts --with-boot-jdk=/opt/jdk1.8.0_45
 ---> Using cache
 ---> 4e425de379e6
Step 9 : RUN cd /tmp/openjdk9 &&   make clean images
 ---> Using cache
 ---> 2d9e17c870be
Step 10 : RUN cd /tmp/openjdk9 &&   cp -a build/linux-x86_64-normal-server-release/images/jdk     /opt/openjdk9
 ---> Using cache
 ---> 9250fac9b500
Step 11 : RUN cd /tmp/openjdk9 &&   find /opt/openjdk9 -type f -exec chmod a+r {} + &&   find /opt/openjdk9 -type d -exec chmod a+rx {} +
 ---> Using cache
 ---> d0c597d045d4
Step 12 : ENV PATH /opt/openjdk9/bin:$PATH
 ---> Using cache
 ---> 3965c3e47855
Step 13 : ENV JAVA_HOME /opt/openjdk9
 ---> Using cache
 ---> 5877e8efd939
Successfully built 5877e8efd939

The above action creates an image which is stored in your local repository (use docker images to list the images in the repo). If you want to load the image into a container, and access the files it has built or anything else, do the below:

$ sudo docker run -it --name openjdk9 neomatrix369/openjdk9 /bin/bash

This will take you to a bash prompt inside the container, where you can run any of your Linux commands and access the file system.
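
If you just want to pull the built JDK out of the container onto your host, docker cp can do that; a sketch, where openjdk9 is the container name given via --name above and the target path is arbitrary:

$ sudo docker cp openjdk9:/opt/openjdk9 ./openjdk9-from-container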

Explaining docker run

$ sudo docker run -it --name openjdk9 neomatrix369/openjdk9 java -version

will show you this

openjdk version "1.9.0-internal"
OpenJDK Runtime Environment (build 1.9.0-internal-_2015_06_04_06_46-b00)
OpenJDK 64-Bit Server VM (build 1.9.0-internal-_2015_06_04_06_46-b00, mixed mode)

Here’s a breakdown of the docker run command:

  • docker run – the command to create and start a new Docker container.
  • -it – run in interactive mode with a terminal attached, so you can see the output after running the container.
  • neomatrix369/openjdk9 – a reference to the image, by the tag name we created above.
  • java -version – runs the java command asking for its version, inside the container, assisted by the two environment variables PATH and JAVA_HOME which were set in the Dockerfile above.

Footnotes

You might have noticed I grouped very specific instructions with each step, especially the RUN commands. That’s because each time I got one of these wrong, it would re-execute the step, including steps that ran fine and didn’t need re-executing. Not only is this unnecessary, it’s not using our resources efficiently, which is what Docker brings us. With granular steps, any addition, edit or deletion to a step will only result in that step being executed, and not the other steps that are fine.

So one of the best practices is to keep the steps granular enough, and to pre-load files and data beforehand and hand them to docker. It has amazing caching and archiving mechanisms built in.
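
For instance, keeping slow, stable steps in their own RUN instructions means a later change only invalidates the layers that follow it; an illustrative sketch (not the actual Dockerfile):

# cached once, rarely changes
RUN apt-get update && apt-get install -y mercurial build-essential
# cloning the sources is slow, so keep it in its own layer
RUN cd /tmp && hg clone http://hg.openjdk.java.net/jdk9/jdk9 openjdk9
# this step changes often; edits here do not re-run the two cached layers above
RUN cd /tmp/openjdk9 && bash ./configure --with-boot-jdk=/opt/jdk1.8.0_45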

Save our work

As we know, if we do not save the container into an image, our changes are lost.

If I didn’t use the docker build command I used earlier, I could have, after the build process was completed and the image created, used the below command:

$ sudo docker commit [container id] neomatrix369/openjdk9
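
To find the container id to pass to docker commit, list your recent containers first:

$ sudo docker ps -a              # shows container ids, source images and names of running and stopped containers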

Sharing your docker image on Docker hub

Once you are happy with your changes and want to share them with the community at large, do the below:

$ sudo docker push neomatrix369/openjdk9
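
If the push is rejected with an authentication error, you most likely need to log in to Docker Hub first:

$ sudo docker login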

and you will see something like the below, depending on which of your layers have already been found in the repo and which ones are new (this is an example snapshot of the process):

The push refers to a repository [neomatrix369/openjdk9] (len: 1)
5877e8efd939: Image already exists 
3965c3e47855: Image already exists 
d0c597d045d4: Image already exists 
9250fac9b500: Image already exists 
2d9e17c870be: Buffering to Disk
.
.
.

There is plenty of room for development and improvement of this Docker script. So happy hacking, and I would love to hear your feedback or contributions.

BIG Thanks

Big thanks to the two people below, who proof-read my post and added value to it, whilst enjoying the #Software #Craftsmanship developer community (organised and supported by @LSCC):
Oliver Nautsch – @ollispieps (JUG Switzerland)
Amir Bazazi (@Codurance) – @amirbazazi

Special thanks to Roberto Cortez (@radcortez) for your Docker posts, these inspired and helped me write my first Docker post.

Resources

[1] Docker
[2] Get into Docker – A Guide for Total Newbies
[3] Docker for Total Newbies Part 2: Distribute Your Applications with Docker Images
[4] Docker posts on Voxxed
[5] OpenJDK
[6] Building OpenJDK
[7] Building OpenJDK on Linux, MacOs and Windows
[8] Virtual Machines (OpenJDK)
[9] Build your own OpenJDK
[10] Vagrant script (OpenJDK)
[11] YOUR DOCKER IMAGE MIGHT BE BROKEN without you knowing it
[12] Dockerfile on github
[13] Adopt OpenJDK: Getting Started Kit
[14] London Java Community

Why not build #OpenJDK 9 using #Docker ? – Part 1 of 2

Introduction

I think I have joined the Docker [1] party a bit late, but that means by now everyone knows what Docker is and all the other basic fundamentals, which I can very well skip. If you are still interested, please check these posts: Get into Docker – A Guide for Total Newbies [2] and Docker for Total Newbies Part 2: Distribute Your Applications with Docker Images [3]. And if you still want to know more about this widely spoken-about topic, check out these Docker posts on Voxxed [4].

Why ?

Since everyone has been doing some sort of provisioning or spinning up of dev, pre-prod or test environments using Docker [1], I decided to do the same, but with my favourite project, i.e. OpenJDK [5].

So far you can natively build OpenJDK [6] across Linux, MacOS and Windows [7], or do the same via virtual machines or vagrant instances; see more on these via the resources Virtual Machines [8], Build your own OpenJDK [9] and this vagrant script [10]. All of this is part of the Adopt OpenJDK initiative led by the London Java Community [14] and supported by JUGs all over the world.

Requirements

Most of this post is for those using Linux distributions (this one was created on Ubuntu 14.04). Linux, MacOS and Windows users please refer to Docker’s Linux, MacOS and Windows instructions respectively.

Hints: MacOS and Windows users will need to install Boot2Docker and remember to run the below commands (and check your Docker host environment variables):

$ boot2docker init
$ boot2docker up 
$ boot2docker shellinit 

For MacOS, if the above throw FATA[…] error messages, please try the below:

$ sudo boot2docker init
$ sudo boot2docker up 
$ sudo boot2docker shellinit 
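
Note that shellinit prints the DOCKER_HOST and related export lines rather than setting them; as far as I recall, the usual workflow is to evaluate its output and then check that the client can reach the daemon:

$ eval "$(boot2docker shellinit)"    # sets DOCKER_HOST and friends in the current shell
$ docker version                     # confirms the client can talk to the Docker daemon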

For the rest of the details please refer to the links provided above. Once you have the above in place for the Windows or MacOS platform, merely executing the Dockerfile using the docker build and docker run commands will build the image and run it as a container respectively.

*** Please refer to the above links and ensure Docker works for you on the above platforms – try out tutorials or steps proving that Docker runs as expected before proceeding further. ***

Building OpenJDK 9 using Docker

Now I will show you how to do the same things as mentioned above using Docker.

So I read the first two resources I shared above (and wrote the last ones). So let’s get started; I’m going to walk you through what the Dockerfile looks like, taking you through each section of the Dockerfile code.

*** Please note the steps below are not meant to be executed on your command prompt; they form an integral part of the Dockerfile, which you can download from the link at the end of this post. ***

You may have noticed that, unlike everyone else, I have chosen a different base OS image, i.e. phusion/baseimage. Why? Read YOUR DOCKER IMAGE MIGHT BE BROKEN without you knowing it [11] to learn more about it.

FROM phusion/baseimage:latest

Each of the RUN steps below, when executed, becomes a Docker layer in isolation and gets assigned a SHA like this: 95e30b7f52b9.
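
Once the image has been built, you can see those per-step layers and their ids with docker history (using the image tag from the docker build command in the companion Part 2 post):

$ sudo docker history neomatrix369/openjdk9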

RUN \
  apt-get update && \
  apt-get install -y \
    libxt-dev zip pkg-config libX11-dev libxext-dev \
    libxrender-dev libxtst-dev libasound2-dev libcups2-dev libfreetype6-dev && \
  rm -rf /var/lib/apt/lists/*

The package index is updated and a number of dependencies are installed, i.e. Mercurial (hg) and build-essential.

RUN \
  apt-get update && \
  apt-get install -y mercurial ca-certificates-java build-essential

Clone the OpenJDK 9 sources and download the latest sources from mercurial. You will notice that each of these steps is prefixed by cd /tmp &&; this is because each instruction runs in its own layer, as if it does not remember where it was when the previous instruction ran. Nothing to worry about, all your changes are still intact in the container. (An alternative using WORKDIR is sketched after the block below.)

RUN \
  cd /tmp && \
  hg clone http://hg.openjdk.java.net/jdk9/jdk9 openjdk9 && \
  cd openjdk9 && \
  sh ./get_source.sh
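
As an aside, Docker’s WORKDIR instruction is the usual way to avoid repeating cd /tmp in every RUN step; a hedged sketch of what that could look like here:

WORKDIR /tmp
# subsequent RUN steps now start in /tmp without an explicit cd
RUN hg clone http://hg.openjdk.java.net/jdk9/jdk9 openjdk9 && \
    cd openjdk9 && \
    sh ./get_source.sh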

Install only what you need when you need it; see below, where I downloaded wget and then the jdk binary. I also learnt how to use wget by passing the necessary params and headers to make the server give us the binary we request. Finally, un-tar the file using the famous tar command.

RUN \
  apt-get install -y wget && \
  wget --no-check-certificate --header "Cookie: oraclelicense=accept-securebackup-cookie" \ 
http://download.oracle.com/otn-pub/java/jdk/8u45-b14/jdk-8u45-linux-x64.tar.gz

RUN \
  tar zxvf jdk-8u45-linux-x64.tar.gz -C /opt

Run configure with the famous --with-boot-jdk=/opt/jdk1.8.0_45 to set the bootstrap jdk to point to jdk1.8.0_45.

RUN \
  cd /tmp/openjdk9 && \
  bash ./configure --with-cacerts-file=/etc/ssl/certs/java/cacerts --with-boot-jdk=/opt/jdk1.8.0_45

Now run the most important command:

RUN \  
  cd /tmp/openjdk9 && \
  make clean images

Once the build is successful, the artefacts i.e. jdk and jre images are created in the build folder.

RUN \  
  cd /tmp/openjdk9 && \
  cp -a build/linux-x86_64-normal-server-release/images/jdk \
    /opt/openjdk9

Below are some chmod ceremonies across the files and directories in the openjdk9 folder.

RUN \  
  cd /tmp/openjdk9 && \
  find /opt/openjdk9 -type f -exec chmod a+r {} + && \
  find /opt/openjdk9 -type d -exec chmod a+rx {} +

Two environment variables, PATH and JAVA_HOME, are set with the respective values assigned to them.

ENV PATH /opt/openjdk9/bin:$PATH
ENV JAVA_HOME /opt/openjdk9

You can find the entire Dockerfile on github [12].

…more of this in the next post, Why not build #OpenJDK 9 using #Docker ? – Part 2 of 2, where we will use the docker build and docker run commands and some more docker stuff.