...

Code Block
#!/bin/bash

#SBATCH --job-name=carla_server
#SBATCH --gpus=1
#SBATCH --gpus-per-node=1
#SBATCH --constraint=gtx1080ti
#SBATCH --mem=50G
#SBATCH --cpus-per-task=4
#SBATCH --time=01:00:00


module load singularity
export DISPLAY=
# set image path to where your image is located
export IMAGE=/ibex/scratch/shaima0d/tickets/37094/carla_0.9.5.sif

# Setting the port is optional. Set it if another user's process is already using the default ports 2000 and 2001.
export HOST=$(/bin/hostname)
export PORT=11011

# Record the connection details so the client job below can parse them.
echo "Starting Carla server on ${HOST}:${PORT}"
echo "${HOST}:${PORT}" > server_info.txt

SINGULARITYENV_SDL_VIDEODRIVER=offscreen \
SINGULARITYENV_CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES \
singularity exec --nv $IMAGE /home/carla/CarlaUE4.sh -opengl -carla-rpc-port=$PORT
Note
  • At times, you may notice that the server does not start and crashes with a segmentation fault. In such a situation, try allocating the whole node and resubmit the server job (see the first sketch after this list). We are investigating why CUDA_VISIBLE_DEVICES is not honored.

  • If the requested port is busy, change the port; it is an arbitrary number. Pick a port number greater than 10000 to stay away from the commonly used ones (the second sketch after this list shows a quick way to check).
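
One way to try the whole-node workaround from the first bullet, assuming your partition allows exclusive allocations, is to add an exclusive request to the server job script (you may also want to raise --mem and --cpus-per-task to the node's full capacity; values are site-specific):

Code Block
# Request the entire node for the server job (site policies permitting).
#SBATCH --exclusive

To check whether a port is already in use on a node before picking one, you can list the listening sockets there. This assumes the ss utility is available on the node:

Code Block
# Run on the node in question; no output means the port is free.
ss -ltn | grep ":11011"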

Build Client environment

You can create a conda environment for the Python client:

...
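
A minimal sketch of such an environment, assuming miniconda3 is installed in your home directory and the CARLA 0.9.5 release (with its PythonAPI directory) is available under ${PWD}/carla (both are assumptions; adjust names and paths to your setup):

Code Block
# Sketch only: the environment name carla_py3 matches the client job script below.
# Match the Python version to the egg shipped with your CARLA release.
conda create -n carla_py3 python=3.5 -y
source ~/miniconda3/bin/activate carla_py3

# Typical dependencies for the CARLA utility scripts; adjust to what your client imports.
pip install numpy pygame

# Make the CARLA Python API importable. The egg location and filename vary
# between releases, so locate it rather than hard-coding the name.
export PYTHONPATH=$(find ${PWD}/carla/PythonAPI -name "carla-*py3*.egg" | head -n 1):$PYTHONPATH

A client job script can then read the server's connection details and run the CARLA performance benchmark against it: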

Code Block
#!/bin/bash
#SBATCH --gpus=1
#SBATCH --time=01:00:00
#SBATCH --mem=100G


source ~/miniconda3/bin/activate carla_py3

# Read the host and port recorded by the server job
HOST=$(cut -d ":" -f 1 server_info.txt)
PORT=$(cut -d ":" -f 2 server_info.txt)

echo "python ${PWD}/carla/PythonAPI/util/performance_benchmark.py --host=${HOST} --port=${PORT}"
srun -u python ${PWD}/carla/PythonAPI/util/performance_benchmark.py --host=${HOST} --port=${PORT}
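
To run the benchmark end to end, submit the server job first, wait for it to write server_info.txt, and then submit the client job. The filenames below are placeholders for whatever you saved the two job scripts as:

Code Block
# Submit the server job (first script above) and wait for server_info.txt to appear.
sbatch carla_server.slurm
cat server_info.txt

# Then submit the client job (second script above).
sbatch carla_client.slurm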