Building a Docker Image with Chrono

This guide describes how to build a Docker image with Chrono installed, including selected modules and dependencies. The image is created using a custom Dockerfile that aggregates multiple snippet files—each appending necessary CMake options and pre-build environment commands. Docker Compose orchestrates the build and run process.

Prerequisites

Ensure you have:

  • Docker installed on your system. If not, download and install Docker from the official website.
  • Docker Compose installed on your system. If not, download and install Docker Compose from the official website. Compose is optional but recommended for orchestrating the build and run process; it is used throughout this guide.
  • The Chrono repository cloned locally; the commands below verify these prerequisites.
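
You can check the Docker and Compose installations and clone the repository from a terminal:

docker --version
docker compose version
git clone --recursive https://github.com/projectchrono/chrono.git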

Background

The provided docker-compose.yml defines two services: dev and vnc. The dev service builds the primary image, which contains the Chrono library with the selected modules and dependencies. The vnc service is optional; it aids visualization on systems where X11 is not available and/or when remotely accessing a machine without a display.

name: chrono
services:
  dev:
    image: "${COMPOSE_PROJECT_NAME}/${COMPOSE_PROJECT_NAME}:dev"
    hostname: '${COMPOSE_PROJECT_NAME}'
    container_name: '${COMPOSE_PROJECT_NAME}-dev'
    build:
      context: "./"
      network: "host"
      dockerfile: "./dev.dockerfile"
      args:
        PROJECT: "${COMPOSE_PROJECT_NAME}"
        IMAGE_BASE: "ubuntu"
        IMAGE_TAG: "22.04"
        USER_GROUPS: "dialout video"
        PIP_REQUIREMENTS: "black"
        APT_DEPENDENCIES: "vim cmake-curses-gui"
        USER_SHELL_ADD_ONS: "alias python=python3"
        CUDA_VERSION: "12-2"
        ROS_DISTRO: "humble"
        OPTIX_SCRIPT: "data/NVIDIA-OptiX-SDK-7.7.0-linux64-x86_64.sh"
    volumes:
      - '../../:/home/${COMPOSE_PROJECT_NAME}/${COMPOSE_PROJECT_NAME}-dev'
      - '/tmp/.X11-unix:/tmp/.X11-unix'
    environment:
      DISPLAY: '${DISPLAY:-vnc:0.0}'
      NVIDIA_VISIBLE_DEVICES: all
      NVIDIA_DRIVER_CAPABILITIES: all
    working_dir: '/home/${COMPOSE_PROJECT_NAME}/${COMPOSE_PROJECT_NAME}-dev'
    tty: true
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              capabilities: [gpu]
  vnc:
    image: "camera/${COMPOSE_PROJECT_NAME}:vnc"
    hostname: "${COMPOSE_PROJECT_NAME}-vnc"
    container_name: "${COMPOSE_PROJECT_NAME}-vnc"
    build:
      context: "./"
      dockerfile: "./vnc.dockerfile"
      network: "host"
      args:
        VNC_PASSWORD: "${COMPOSE_PROJECT_NAME}"
    ports:
      - "127.0.0.1:8080-8099:8080"
      - "127.0.0.1:5900-5999:5900"
networks:
  default:
    name: "${COMPOSE_PROJECT_NAME}"

You may also provide additional dependencies or requirements at build time through the APT_DEPENDENCIES and PIP_REQUIREMENTS build arguments in docker-compose.yml, and you can add further build args as needed for your snippets. The base image must be Debian-based (and some modules may require Ubuntu-based images).
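
For example, to pull in extra apt and pip packages at build time, extend the corresponding entries in the args block (the added packages here are illustrative):

args:
  APT_DEPENDENCIES: "vim cmake-curses-gui htop"
  PIP_REQUIREMENTS: "black matplotlib"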

The default docker-compose.yml file will attach an NVIDIA GPU to the container if available. If you don't have an NVIDIA GPU, you can comment out the deploy section in the docker-compose.yml file.

To simplify the Dockerfiles, we leverage an open-source project called dockerfile-x (no installation is required). This project provides the INCLUDE directive, which allows a single Dockerfile to pull in multiple files. In this way, we call INCLUDE only on the snippets for the modules we need, and the final Dockerfile is generated automatically.
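
Under the hood, dockerfile-x is a BuildKit frontend enabled by a syntax directive on the first line of the Dockerfile. A minimal sketch, assuming the frontend reference devthefuture/dockerfile-x from the project's README (the snippet path is illustrative):

# syntax = devthefuture/dockerfile-x
FROM ubuntu:22.04
# Resolved at build time by the dockerfile-x frontend
INCLUDE ./snippets/common.dockerfile

The dev.dockerfile used here follows this pattern; its core is shown below.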

# SPDX-License-Identifier: MIT
# This snippet installs Chrono in ${PACKAGE_DIR}/chrono
# It includes other snippets, where specific modules can be added or removed based on need
ARG CHRONO_BRANCH="feature/modern_cmake"
ARG CHRONO_REPO="https://github.com/projectchrono/chrono.git"
ARG CHRONO_DIR="${USERHOME}/chrono"
ARG CHRONO_INSTALL_DIR="${USERHOME}/packages/chrono"
ARG PACKAGE_DIR="${USERHOME}/packages"
RUN mkdir -p ${PACKAGE_DIR}
# This variable will be used by snippets to add cmake options
ENV CMAKE_OPTIONS=""
# This variable is used before building (but in the same RUN command)
# This is useful for setting environment variables that are used in the build process
ENV PRE_BUILD_COMMANDS=""
# Install Chrono dependencies that are required for all modules (or some but are fairly small)
RUN sudo apt update && \
    sudo apt install --no-install-recommends -y \
        libirrlicht-dev \
        libeigen3-dev \
        git \
        cmake \
        build-essential \
        ninja-build \
        swig \
        libxxf86vm-dev \
        freeglut3-dev \
        python3-numpy \
        libglu1-mesa-dev \
        libglew-dev \
        libglfw3-dev \
        libblas-dev \
        liblapack-dev \
        wget \
        xorg-dev && \
    sudo apt clean && sudo apt autoremove -y && sudo rm -rf /var/lib/apt/lists/*
# Clone Chrono before running the snippets
RUN git clone --recursive -b ${CHRONO_BRANCH} ${CHRONO_REPO} ${CHRONO_DIR}
# Include the snippets which install shared dependencies
# These can be commented out or removed if they are no longer needed
INCLUDE ./cuda.dockerfile
INCLUDE ./ros.dockerfile
# Then include the snippets for the modules you want to install
INCLUDE ./ch_ros.dockerfile
INCLUDE ./ch_vsg.dockerfile
INCLUDE ./ch_irrlicht.dockerfile
INCLUDE ./ch_vehicle.dockerfile
INCLUDE ./ch_sensor.dockerfile
INCLUDE ./ch_parser.dockerfile
INCLUDE ./ch_python.dockerfile
# Install Chrono
RUN ${PRE_BUILD_COMMANDS} && \
    # Evaluate the cmake options to expand any $(...) commands or variables
    eval "_CMAKE_OPTIONS=\"${CMAKE_OPTIONS}\"" && \
    mkdir ${CHRONO_DIR}/build && \
    cd ${CHRONO_DIR}/build && \
    cmake ../ -G Ninja \
        -DCMAKE_BUILD_TYPE=Release \
        -DBUILD_DEMOS=OFF \
        -DBUILD_BENCHMARKING=OFF \
        -DBUILD_TESTING=OFF \
        -DCMAKE_LIBRARY_PATH=$(find /usr/local/cuda/ -type d -name stubs) \
        -DEigen3_DIR=/usr/lib/cmake/eigen3 \
        -DCMAKE_INSTALL_PREFIX=${CHRONO_INSTALL_DIR} \
        -DNUMPY_INCLUDE_DIR=$(python3 -c 'import numpy; print(numpy.get_include())') \
        ${_CMAKE_OPTIONS} \
        && \
    ninja && ninja install
# Update shell config
RUN echo "export LD_LIBRARY_PATH=\$LD_LIBRARY_PATH:${CHRONO_INSTALL_DIR}/lib" >> ${USERSHELLPROFILE}

You can then comment out (or create new) snippets to include just the modules you need. Each snippet should append its relevant CMake options to the CMAKE_OPTIONS variable, which is passed to the Chrono build command. For instance, enabling the Chrono::Vehicle module requires setting the CH_ENABLE_MODULE_VEHICLE option to ON, as shown below:

ENV CMAKE_OPTIONS="${CMAKE_OPTIONS} -DCH_ENABLE_MODULE_VEHICLE=ON"
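
PRE_BUILD_COMMANDS works the same way. A minimal sketch, assuming a snippet needs an environment variable visible while cmake and ninja run (the variable and path here are illustrative, not taken from an actual snippet); the command executes in the same RUN invocation as the build itself:

ENV PRE_BUILD_COMMANDS="export CUDA_HOME=/usr/local/cuda"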

Building the Docker Image

From anywhere in the Chrono repository, run the following command:

docker compose -f contrib/docker/docker-compose.yml build
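
If you only want the primary image (for example, you don't need the browser-based viewer), you can build a single service by naming it explicitly:

docker compose -f contrib/docker/docker-compose.yml build dev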

This will build both the dev and vnc services. The dev service will build and install Chrono within the image. The default Chrono source and build directory is /home/chrono/chrono and the default install directory is /home/chrono/packages/chrono; these can be changed with the CHRONO_DIR and CHRONO_INSTALL_DIR variables, respectively. By default, demos and testing are disabled to speed up the build process.
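
Because CHRONO_DIR and CHRONO_INSTALL_DIR are build arguments, they can be overridden at build time without editing any files; for example (the install path here is illustrative):

docker compose -f contrib/docker/docker-compose.yml build --build-arg CHRONO_INSTALL_DIR=/opt/chrono dev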

By default, the following modules are enabled:

  • PyChrono
  • Chrono::Vehicle
  • Chrono::Irrlicht
  • Chrono::Parser
  • Chrono::VSG
  • Chrono::Sensor
  • Chrono::ROS

This may take some time, depending on your system and the modules included. To speed up the build, comment out the modules you don't need.
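
For example, to skip Chrono::Sensor, comment out its INCLUDE line in dev.dockerfile (and cuda.dockerfile too, if no other module needs CUDA):

INCLUDE ./ch_vehicle.dockerfile
# INCLUDE ./ch_sensor.dockerfile
INCLUDE ./ch_parser.dockerfile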

Running the Docker Container

The dev service is intended to be used interactively, by attaching to the container in a shell. To do this, run the following command:

docker compose -f contrib/docker/docker-compose.yml run dev

By default, the initial directory is /home/chrono/chrono-dev, a volume mapped to the Chrono repository on the host. This means you can build template projects against the Chrono build from inside the container. You can add additional volumes as needed, as shown below.
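
For example, to make a project on the host available inside the container, append an entry to the volumes list of the dev service in docker-compose.yml (the host path here is hypothetical):

volumes:
  - '../../:/home/${COMPOSE_PROJECT_NAME}/${COMPOSE_PROJECT_NAME}-dev'
  - '/tmp/.X11-unix:/tmp/.X11-unix'
  - '~/my-chrono-project:/home/${COMPOSE_PROJECT_NAME}/my-chrono-project'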

Visualizing GUI Applications

By default, the /tmp/.X11-unix directory is mounted into the container, which allows GUI applications to be displayed on the host machine. If this isn't available to you for some reason, you can use the vnc service to visualize the container. This uses NoVNC to display the container's desktop in a browser: the web server is exposed on a host port beginning at 8080, so you can navigate to http://localhost:8080 to view the desktop. If you are on a remote machine, ensure you forward the port to your local machine.

The docker-compose.yml file actually maps the host port range 8080-8099, so if 8080 is in use, the next available port is taken. If things aren't working, run docker ps to see which host port was assigned.
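
If the container runs on a remote machine, forward the NoVNC port to your local machine over SSH before opening the browser (substitute your own user, host, and the port reported by docker ps):

ssh -L 8080:localhost:8080 user@remote-host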

Additional Notes

Building the Docker Image without a NVIDIA GPU

As noted above, the default docker-compose.yml file will attach an NVIDIA GPU to the container if available. If you don't have an NVIDIA GPU, you can comment out the deploy section in the docker-compose.yml file.

Installing Chrono::Sensor

To install Chrono::Sensor, you need CUDA support, an NVIDIA graphics card, and a local copy of the OptiX build script (which requires an OptiX license from NVIDIA). Ensure the cuda.dockerfile is included before ch_sensor.dockerfile, then download the OptiX 7.7 installation script and place it in contrib/docker/data.
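
The OPTIX_SCRIPT build argument in docker-compose.yml expects the script at data/NVIDIA-OptiX-SDK-7.7.0-linux64-x86_64.sh relative to the build context, so after downloading you should see:

ls contrib/docker/data/
# NVIDIA-OptiX-SDK-7.7.0-linux64-x86_64.sh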