This post combines the practical parts of two earlier posts: Apache Zeppelin: stairway to notes* haven! (late December 2018) and Running your JuPyTeR notebooks on Oracle Cloud Infrastructure (early September 2019). This time, though, we will make Apache Zeppelin run on Oracle Cloud Infrastructure.
Running your JuPyTeR notebooks on the cloud
On the back of my previous share on how to build and run a Docker container with Jupyter, I'll take this further and show how we can run that container on a cloud platform.
We'll do this on Oracle Cloud Infrastructure (OCI). In theory, we should be able to do everything in this post on any VM or Bare Metal instance. If you are new to Oracle Cloud, I suggest getting familiar with the docs and the Getting Started sections of the docs. You will also find several informative links at the bottom of this post, in the Resources section.
Originally published on Medium.com: Running your JuPyTeR notebooks on the cloud
Building Wholly Graal with Truffle!
Citation: credits for the feature image go to David Luders; it is reused under a CC license, and the original image can be found on this Flickr page.
It has been some time since the two posts on Graal/GraalVM/Truffle, and a general request has been: when are you going to write about how to build this awesome thing called Graal? Technically, we will be building a replacement for HotSpot's C2 compiler (look for C2 in the glossary list), called Graal. This binary is different from the GraalVM suite you download from OTN via http://graalvm.org/downloads.
I wasn't just going to stop at the first couple of posts on this technology. In fact, one of the best ways to learn about and get an in-depth understanding of any tech is to know how to build it.
Building JVMCI for JDK8, Graal and Truffle is fairly simple, and the instructions are available on the graal repo. We will be running them on both the local (Linux, MacOS) and container (Docker) environments. To capture the process as-code, they have been written in bash, see https://github.com/neomatrix369/awesome-graal/tree/master/build/x86_64/linux_macos.
During the process of writing the scripts and testing them on various environments, there were some issues, but these were soon resolved with the help of members of the Graal team (thanks, Doug).
Documentation on how to run the scripts is provided in the README.md on awesome-graal. In each build environment the build is a single command:
Linux & MacOS
$ ./local-build.sh
$ RUN_TESTS=false ./local-build.sh
$ OUTPUT_DIR=/another/path/ ./local-build.sh
Docker

$ ./docker-build.sh
$ DEBUG=true ./docker-build.sh
$ RUN_TESTS=false ./docker-build.sh
$ OUTPUT_DIR=/another/path/ ./docker-build.sh
Both the local and Docker scripts pass the environment variables (RUN_TESTS and OUTPUT_DIR) through to the underlying commands. Debugging the Docker container is also possible, by setting the DEBUG environment variable.
For a better understanding of how they work, it is best to refer to the local and docker scripts in the repo.
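As an illustration of that pass-through, here is a minimal, hypothetical sketch of how a bash script can read such variables with defaults; the real local-build.sh and docker-build.sh in the repo are more involved:

```shell
#!/bin/bash
# Hypothetical sketch of the env-var pass-through done by the build scripts;
# the actual scripts in awesome-graal do considerably more than this.

# Fall back to defaults when the caller does not set the variables.
RUN_TESTS="${RUN_TESTS:-true}"
OUTPUT_DIR="${OUTPUT_DIR:-${PWD}/output}"

echo "Tests enabled  : ${RUN_TESTS}"
echo "Artifacts go to: ${OUTPUT_DIR}"
```

Invoking such a script as RUN_TESTS=false ./local-build.sh overrides just that one setting, which matches the calling convention shown above.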
I have provided build logs for the respective environments in the build/x86_64/linux_macos folder of https://github.com/neomatrix369/awesome-graal/.
Once the build is completed successfully, the following messages are shown:
[snipped]
>> Creating /path/to/awesome-graal/build/x86_64/linux/jdk8-with-graal from /path/to/awesome-graal/build/x86_64/linux/graal-jvmci-8/jdk1.8.0_144/linux-amd64/product
Copying /path/to/awesome-graal/build/x86_64/linux/graal/compiler/mxbuild/dists/graal.jar to /path/to/awesome-graal/build/x86_64/linux/jdk8-with-graal/jre/lib/jvmci
Copying /path/to/awesome-graal/build/x86_64/linux/graal/compiler/mxbuild/dists/graal-management.jar to /path/to/awesome-graal/build/x86_64/linux/jdk8-with-graal/jre/lib/jvmci
Copying /path/to/awesome-graal/build/x86_64/linux/graal/sdk/mxbuild/dists/graal-sdk.jar to /path/to/awesome-graal/build/x86_64/linux/jdk8-with-graal/jre/lib/boot
Copying /path/to/awesome-graal/build/x86_64/linux/graal/truffle/mxbuild/dists/truffle-api.jar to /path/to/awesome-graal/build/x86_64/linux/jdk8-with-graal/jre/lib/truffle
>>> All good, now pick your JDK from /path/to/awesome-graal/build/x86_64/linux/jdk8-with-graal :-)
Creating Archive and SHA of the newly JDK8 with Graal & Truffle at /home/graal/jdk8-with-graal
Creating Archive jdk8-with-graal.tar.gz
Creating a sha5 hash from jdk8-with-graal.tar.gz
jdk8-with-graal.tar.gz and jdk8-with-graal.tar.gz.sha256sum.txt have been successfully created in the /home/graal/output folder.
All the Graal and Truffle artifacts are created in the graal/compiler/mxbuild/dists/ folder and copied to the newly built jdk8-with-graal folder; both of these will be present in the folder where the build.sh script resides:
jdk8-with-graal/jre/lib/jvmci/graal.jar
jdk8-with-graal/jre/lib/jvmci/graal-management.jar
jdk8-with-graal/jre/lib/boot/graal-sdk.jar
jdk8-with-graal/jre/lib/truffle/truffle-api.jar
In short, we started off with a vanilla JDK8 (JAVA_HOME) and, via the build script, created an enhanced JDK8 with Graal and Truffle embedded in it. At the end of a successful build, the script creates a .tar.gz archive in the jdk8-with-graal-local folder; alongside this file you will also find the SHA-256 checksum of the archive.
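That final archive-and-checksum step can be sketched as follows; this is a simplified, hypothetical stand-in (using a dummy folder) for what the build script does with the real build output:

```shell
# Hypothetical sketch of the archive + checksum step; the real script
# archives the actual jdk8-with-graal build output instead of a dummy folder.
workdir=$(mktemp -d)
cd "${workdir}"
mkdir -p jdk8-with-graal
echo "demo JDK contents" > jdk8-with-graal/release

# Create the tarball and a SHA-256 checksum alongside it.
tar -czf jdk8-with-graal.tar.gz jdk8-with-graal
sha256sum jdk8-with-graal.tar.gz > jdk8-with-graal.tar.gz.sha256sum.txt

# Verify that the checksum matches the archive.
sha256sum -c jdk8-with-graal.tar.gz.sha256sum.txt
```

The .sha256sum.txt file lets anyone who downloads the archive verify its integrity with a single sha256sum -c command.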
In the case of a Docker build, the same folder is called jdk8-with-graal-docker and, in addition to the above-mentioned files, it also contains the build logs.
Running unit tests
Running unit tests is a simple command:
mx --java-home /path/to/jdk8 unittest
This step should follow as soon as we have a successfully built artifact in the jdk8-with-graal-local folder. The below messages indicate a successful run of the unit tests:
>>>> Running unit tests...
Warning: 1049 classes in /home/graal/mx/mxbuild/dists/mx-micro-benchmarks.jar skipped as their class file version is not supported by FindClassesByAnnotatedMethods
Warning: 401 classes in /home/graal/mx/mxbuild/dists/mx-jacoco-report.jar skipped as their class file version is not supported by FindClassesByAnnotatedMethods
WARNING: Unsupported class files listed in /home/graal/graal-jvmci-8/mxbuild/unittest/mx-micro-benchmarks.jar.jdk1.8.excludedclasses
WARNING: Unsupported class files listed in /home/graal/graal-jvmci-8/mxbuild/unittest/mx-jacoco-report.jar.jdk1.8.excludedclasses
MxJUnitCore
JUnit version 4.12
............................................................................................
Time: 5.334
OK (92 tests)
So what have we got that's different from the JDK we started with? If we compare the boot JDK with the final JDK, here are the differences:
A combination of diff between $JAVA_HOME and jdk8-with-graal, and meld, will show these differences:
diff -y --suppress-common-lines $JAVA_HOME jdk8-with-graal | less
meld $JAVA_HOME ./jdk8-with-graal
Note: $JAVA_HOME points to your JDK8 boot JDK.
Build execution time
The build execution time was captured on both Linux and MacOS and there was a small difference between running tests and not running tests:
Running the build with or without tests on a quad-core, with hyper-threading:
real 4m4.390s
user 15m40.900s
sys 1m20.386s

user + sys = 17m1.286s (17 minutes 1.286 seconds)
Similarly, running the build with and without tests on a dual-core MacOS, with 4GB RAM and an SSD drive, differs little:
real 9m58.106s
user 18m54.698s
sys 2m31.142s

user + sys = 21m25.84s (21 minutes 25.84 seconds)
Disclaimer: these measurements can certainly vary across the different environments and configurations. If you have a more accurate way to benchmark such running processes, please do share back.
In this post, we saw how we can build Graal and Truffle for JDK8 on both local and container environments.
The next thing we will do is build them on a build farm provided by AdoptOpenJDK. There we will be able to run builds across multiple platforms and operating systems, including inside Docker containers. Again, the binary produced here is different from the GraalVM suite you download from OTN via http://graalvm.org/downloads; hopefully we will be able to cover GraalVM in a future post.
Thanks to Julien Ponge for making his build script available for re-use, and to the Graal team for their support during the writing of this post.
Feel free to share your feedback at @theNeomatrix369. Pull requests with improvements and best-practices are welcome at https://github.com/neomatrix369/awesome-graal.
Containers all the way through…
In this post I will attempt to cover the fundamentals of Bare Metal Systems, Virtual Systems and Container Systems. The purpose is to learn what these systems are and how they differ, focusing on how they execute programs in their respective environments.
Bare metal systems
Let's think of our Bare Metal Systems as the desktops and laptops we use on a daily basis (or even servers in server rooms and data centers); they have the following components:
- the hardware (outer physical layer)
- the OS platform (running inside the hardware)
- the programs running on the OS (as processes)
Programs are stored on the hard drive as executable files (a format the OS understands) and loaded into memory as one or more processes. Programs interact with the kernel, which forms a core part of the OS architecture, and with the hardware. The OS coordinates communication between the hardware (CPU, I/O devices, memory, etc.) and the programs.
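On Linux we can see this process model directly: every running program gets a process ID, and the kernel exposes per-process information under /proc. This is a Linux-specific illustration, not from the original post:

```shell
# Every running program is a process with an ID; on Linux the kernel
# exposes each process's details under /proc/<pid>.
echo "current shell PID: $$"

# The kernel-maintained view of this very process:
head -1 /proc/$$/status   # first line names the executable
```

Replacing $$ with any PID from ps lets you inspect the kernel's view of any other process you own.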
A more detailed explanation of what programs or executables are, how programs execute, and where an Operating System comes into play can be found on this Stackoverflow page.
Virtual systems

Virtual Systems, on the other hand, run an operating system on top of a bare metal system with the help of virtualisation controllers like VirtualBox, VMWare, or a hypervisor. These systems emulate bare-metal hardware as software abstraction(s) inside which we run the real OS platform. Such a system, also referred to as a Virtual Machine (VM), can be made up of the following layers:
- a software abstraction of the hardware (Virtual Machine)
- the OS platform running inside the software abstraction (guest OS)
- one or more programs running in the guest OS (processes)
It's like running a computer (abstracted as software) inside another computer. The rest of the fundamentals from the Bare Metal System apply to this abstraction layer as well. When a process is created inside the Virtual System, the host OS running the Virtual System may also spawn one or more processes.
Container systems

Now looking at Container Systems, we can say the following:
- they run on top of OS platforms running inside Bare Metal Systems or Virtual Systems
- they allow isolating processes while sharing the kernel between them (such isolation from other processes and resources is possible in some OSes, such as Linux, thanks to kernel features like cgroups and namespaces)
A container creates an OS-like environment inside which one or more programs can be executed. Each of these executions results in one or more processes on the host OS. Container Systems are composed of these layers:
- hardware (accessible via kernel features)
- the OS platform (shared kernel)
- one or more programs running inside the container (as processes)
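The kernel features mentioned above can be inspected directly on a Linux host; the following is a Linux-specific illustration, not from the original post:

```shell
# Namespaces give a process its own view of PIDs, mounts, network, etc.;
# each process's namespace memberships are visible under /proc.
ls /proc/self/ns

# cgroups limit and account for resource usage (CPU, memory, ...);
# this shows which cgroups the current process belongs to.
head -3 /proc/self/cgroup
```

Container runtimes such as Docker build on exactly these primitives: a containerised process simply gets its own set of namespaces and its own cgroup limits.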
Looking at these layers nested within each other, we can already see how it is containers all the way through.
There are a number of distinctions between Bare Metal Systems, Virtual Systems and Container Systems. While Virtual Systems encapsulate the Operating System inside a thick hardware virtualisation layer, Container Systems do something similar but with a much thinner virtualisation layer.
There are a number of pros and cons between these systems when we look at them individually, i.e. portability, performance, resource consumption, time to recreate such systems, maintenance, et al.
Word of thanks and stay in touch
Thank you for your time; feel free to send your queries and comments to @theNeomatrix369. Big thanks to my colleague and DevOps craftsman Robert Firek from Codurance for proof-reading my post and steering me in the right direction.