Preventing Java Maxing Out CPU in Docker Containers

With much of the deployment world moving to Docker and Kubernetes in some way, shape or form, I’ve been spending a lot more time digging into the small details that generally make things run better.

Working on some of the Synthetica Data Engine jobs, which rely heavily on Java execution, I’ve been reminded again and again that getting container memory management right will save you a lot of time debugging why jobs are acting odd, not completing, or just dying.

Docker Memory Allocation

Firstly there’s Docker itself. The Docker Desktop application makes resources a little easier to manage: within the preferences you can change the memory available to containers, define the swap size and set the number of CPU cores you want Docker to use across the container engine.
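The same limits can also be set per container at run time rather than globally in Docker Desktop. A minimal sketch, where the image name and the limit values are illustrative rather than anything from my setup:

```shell
# Cap the container at 2 GB of RAM, 1 GB of swap on top of that
# (--memory-swap is memory PLUS swap), and 2 CPU cores.
# "myapp:latest" is a placeholder image name.
docker run \
  --memory=2g \
  --memory-swap=3g \
  --cpus=2 \
  myapp:latest
```

Per-container limits like these are what the cgroup values discussed below actually reflect inside the container.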

If you want to learn about stress testing Docker containers then the post by Thorsten Hans called “Docker Memory Limits Explained” goes into detail about using the progrium/stress image to stress test the memory and swap spaces on your Docker deployment.
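As a quick illustration of that kind of test (the numbers here are mine, not from Thorsten’s post), you can pit the stress tool against a hard memory limit and watch the kernel enforce it:

```shell
# Give the container 256 MB with no extra swap, then ask stress to
# allocate 512 MB in a single worker. The allocation exceeds the limit,
# so the OOM killer terminates the worker -- proving the cap is real.
docker run --rm \
  --memory=256m \
  --memory-swap=256m \
  progrium/stress \
  --vm 1 --vm-bytes 512M --timeout 10s
```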

Now onto Java….

Running Java Applications In Docker

Let’s get one thing straight: Java is far from a dead language. The JVM is alive and well, being used all over the place, especially in banking and insurance. And even if Java itself isn’t your first choice of language, don’t forget that Kotlin, Clojure and a host of other battle-tested languages sit nicely on the JVM. I love Go, I tolerate Python, I still like everything about the JVM.

Running JVM applications on Docker can present some interesting challenges.

Within any of the Synthetica Java/Clojure jobs there is a shell script to run the JVM application. Now I’d wager that 99% of the time there are no issues: the container CPU/memory is just fine to handle the processing and load of the application.

Keep in mind, though, that the container doesn’t just have the Java application to handle; there is also the operating system within the container itself. While updating the Synthetica jobs from Java 8 to Java 17 I also spent some time updating the run scripts to handle memory allocation better.


# Fraction of container memory to hand to the heap; 0.8 is an assumed
# default -- override JVM_PEER_HEAP_RATIO in the environment if needed.
: ${JVM_PEER_HEAP_RATIO:=0.8}

# Memory limit from the cgroup (v1 path) and from /proc/meminfo, in bytes.
CGROUPS_MEM=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)
MEMINFO_MEM=$(($(awk '/MemTotal/ {print $2}' /proc/meminfo)*1024))

# Use the smaller of the two -- an unlimited cgroup reports a huge number.
MEM=$((CGROUPS_MEM < MEMINFO_MEM ? CGROUPS_MEM : MEMINFO_MEM))

# Heap size in MiB: container memory * heap ratio.
XMX=$(awk '{printf("%d",$1*$2/1024^2)}' <<< " ${MEM} ${JVM_PEER_HEAP_RATIO} ")

: ${PEER_JAVA_OPTS:="-Xmx${XMX}m -XX:+UseG1GC -server"}

/usr/bin/java $PEER_JAVA_OPTS -jar /path/to/jarfile.jar

By finding the memory of the container it’s possible to tune the JVM heap sizing and pass it to the Java executable options. The application then uses the memory it actually has available without starving the operating system inside the container. If the JVM heap starts to spiral and the kernel resorts to swapping, I’ve seen the container CPU spike over 300% and the job never complete.
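To sanity-check the arithmetic outside a container, the heap calculation reduces to a tiny function (the function name is mine, not from the script above): bytes of container memory times the heap ratio, converted to MiB.

```shell
# Compute an -Xmx value in MiB from a memory limit in bytes and a heap ratio.
heap_mib() {
  awk '{printf("%d", $1 * $2 / 1024^2)}' <<< "$1 $2"
}

# A 2 GiB container with a 0.8 heap ratio leaves ~400 MiB for the OS.
heap_mib $((2 * 1024 * 1024 * 1024)) 0.8   # prints 1638
```

Worth noting: modern JVMs (Java 10+, backported to Java 8u191) can do this calculation themselves via `-XX:MaxRAMPercentage`, reading the cgroup limit directly, which removes the shell arithmetic entirely.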

This method has saved intensive processing jobs within containers over and over again for me.
