...
Moreover, Vulkan drivers are not available on Ibex GPU nodes, so we rely on OpenGL to run our server headless, i.e. with no rendering support.
Building the server environment
For the server side we use the Docker image published by CarlaSim on DockerHub. Since the Docker container platform is not available on Ibex, we instead use the Singularity container platform to run a CarlaSim container built from the same Docker image.
...
Once completed, the above commands produce a Singularity Image File, carla_0.9.5.sif, which the Singularity container runtime understands.
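The elided build step is typically a single `singularity pull` against the DockerHub image. This is a sketch, not the exact commands used here; the image tag and output filename are assumptions based on the version in this guide:

```shell
# Pull the CarlaSim image from DockerHub and convert it to a SIF file.
# The 0.9.5 tag matches the version used in this guide; adjust as needed.
module load singularity
singularity pull carla_0.9.5.sif docker://carlasim/carla:0.9.5
```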
Running the server
The following jobscript should start a Carla server:
```bash
#!/bin/bash
#SBATCH --job-name=carla_server
#SBATCH --gpus=1
#SBATCH --gpus-per-node=1
#SBATCH --constraint=gtx1080ti
#SBATCH --mem=50G
#SBATCH --cpus-per-task=4
#SBATCH --time=01:00:00

module load singularity
export DISPLAY=

# Set image path to where your image is located
export IMAGE=/ibex/scratch/shaima0d/tickets/37094/carla_0.9.5.sif

# Setting the port is optional. Set it if another user's process is using
# the default 2000 and 2001 ports.
export HOST=$(/bin/hostname)
export PORT=11011

# Record host:port in a file the client jobscript can parse
echo "${HOST}:${PORT}" > server_info.txt
echo "Starting Carla server on ${HOST}:${PORT}"

SINGULARITYENV_SDL_VIDEODRIVER=offscreen SINGULARITYENV_CUDA_VISIBLE_DEVICES=0 \
  singularity exec --nv ${IMAGE} /home/carla/CarlaUE4.sh -opengl -carla-rpc-port=${PORT}
```
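Saving the jobscript above as, say, `carla_server.slurm` (a hypothetical filename), the server can be submitted and checked like this:

```shell
sbatch carla_server.slurm   # submit the server job to SLURM
squeue -u $USER             # wait until the job state shows R (running)
cat server_info.txt         # then read the host:port pair the server recorded
```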
Building the client environment
You can create a conda environment:
```bash
conda env create -f environment.yml
```
where the environment.yml
looks as follows:
```yaml
name: carla_py3
channels:
  - conda-forge
  - intel
  - defaults
dependencies:
  - _libgcc_mutex=0.1=conda_forge
  - _openmp_mutex=4.5=1_gnu
  - ca-certificates=2021.5.30=ha878542_0
  - certifi=2021.5.30=py36h5fab9bb_0
  - ld_impl_linux-64=2.36.1=hea4e1c9_2
  - libblas=3.9.0=11_linux64_openblas
  - libcblas=3.9.0=11_linux64_openblas
  - libffi=3.3=h58526e2_2
  - libgcc-ng=11.1.0=hc902ee8_8
  - libgfortran-ng=11.1.0=h69a702a_8
  - libgfortran5=11.1.0=h6c583b3_8
  - libgomp=11.1.0=hc902ee8_8
  - libjpeg-turbo=2.1.0=h7f98852_0
  - liblapack=3.9.0=11_linux64_openblas
  - libopenblas=0.3.17=pthreads_h8fe5266_1
  - libstdcxx-ng=11.1.0=h56837e0_8
  - ncurses=6.2=h58526e2_4
  - numpy=1.19.5=py36hfc0c790_2
  - openssl=1.1.1k=h7f98852_0
  - pip=21.2.4=pyhd8ed1ab_0
  - python=3.6.13=hffdb5ce_0_cpython
  - python_abi=3.6=2_cp36m
  - readline=8.1=h46c0cb4_0
  - setuptools=49.6.0=py36h5fab9bb_3
  - sqlite=3.36.0=h9cd32fc_0
  - tk=8.6.10=h21135ba_1
  - wheel=0.37.0=pyhd8ed1ab_0
  - xz=5.2.5=h516909a_1
  - zlib=1.2.11=h516909a_1010
  - pip:
      - carla==0.9.5
      - gputil==1.4.0
      - psutil==5.8.0
      - py-cpuinfo==8.0.0
      - pygame==2.0.1
      - python-tr==0.1.2
```
Running the client
To connect to the server, you can either start an interactive job or submit a Python script via a SLURM jobscript. The following jobscript runs a performance benchmark packaged in CarlaSim's GitHub repository:
```bash
#!/bin/bash
#SBATCH --gpus=1
#SBATCH --time=01:00:00
#SBATCH --mem=100G

source ~/miniconda3/bin/activate carla_py3

# Read the host and port recorded by the server jobscript
HOST=$(cut -d ":" -f 1 server_info.txt)
PORT=$(cut -d ":" -f 2 server_info.txt)

echo "python ${PWD}/carla/PythonAPI/util/performance_benchmark.py --host=${HOST} --port=${PORT}"
srun -u python ${PWD}/carla/PythonAPI/util/performance_benchmark.py --host=${HOST} --port=${PORT}
```
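Instead of the packaged benchmark, a client script can talk to the server directly through the Python API. The sketch below is an assumption-laden example: it parses the host:port pair from a server_info.txt file and uses the `carla` package's `Client` API; the connection helper is only defined here, since it needs the carla pip package installed and a server running:

```python
def parse_server_info(line):
    """Split a 'host:port' line into (host, port)."""
    host, port = line.strip().split(":")
    return host, int(port)

def connect_to_server(info_path="server_info.txt", timeout_s=10.0):
    """Connect to a running CARLA server; requires the carla pip package."""
    import carla  # from the carla==0.9.5 pip dependency in environment.yml
    with open(info_path) as f:
        host, port = parse_server_info(f.read())
    client = carla.Client(host, port)
    client.set_timeout(timeout_s)  # fail fast if the server is unreachable
    return client.get_world()
```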
Note: At times the server may fail to start and crash with a segmentation fault. In that situation, try allocating the whole node and resubmitting the server job. We are investigating why CUDA_VISIBLE_DEVICES is not being honored.
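One way to allocate the whole node is SLURM's `--exclusive` flag. A sketch of the changed server jobscript header (only the added directive differs from the jobscript above):

```shell
#!/bin/bash
#SBATCH --job-name=carla_server
#SBATCH --gpus=1
#SBATCH --constraint=gtx1080ti
#SBATCH --exclusive     # reserve the whole node so no other job shares it
#SBATCH --time=01:00:00
# ... remainder of the server jobscript unchanged
```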