For detailed information
- about the system, how to evaluate experiments, and how to extend the artifact, we refer to this documentation.
- about the managing system that is provided with this artifact and how to integrate your own, we refer to this documentation.
- Docker (required)
- CUDA (optional; needed only if you want to run the segmentation models in the managed subsystem on the GPU)
- Docker NVIDIA runtime (optional; needed for GPU support)
If you want to run this artifact on the GPU, make sure that you have installed the NVIDIA Container Toolkit and configured Docker to use the NVIDIA runtime in `/etc/docker/daemon.json`:
```json
{
  "runtimes": {
    "nvidia": {
      "args": [],
      "path": "nvidia-container-runtime"
    }
  },
  "default-runtime": "nvidia"
}
```
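To double-check this configuration before running anything, a small helper like the following could be used to validate a daemon.json file (`check_daemon_json` is a hypothetical name, not part of the artifact; the path to daemon.json is passed as an argument):

```shell
# check_daemon_json FILE -- verify that a Docker daemon.json declares the
# NVIDIA runtime and sets it as the default (hypothetical helper).
check_daemon_json() {
  python3 - "$1" <<'PY'
import json, sys

cfg = json.load(open(sys.argv[1]))
ok = (cfg.get("default-runtime") == "nvidia"
      and "nvidia" in cfg.get("runtimes", {}))
print("nvidia runtime configured" if ok else "nvidia runtime NOT configured")
PY
}
```

Example: `check_daemon_json /etc/docker/daemon.json` should print `nvidia runtime configured` if the file matches the snippet above.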
Note: This has been thoroughly tested under WSL2 and native Ubuntu 22.04.
Other platforms should work as well: as everything is containerized in Docker, you only need Docker running and a CLI that supports docker run/build commands.
We use pretrained models and a ROS bag that replays the data used by the pipeline. All of the data is provided on a server. The following instructions set up the folder structure with the data in the right place:
- Create a `.data` folder:

```shell
cd ros_ws && mkdir .data && cd .data
```

- Download the segmentation models and the ROS bags with test data from the server with the scp command (password: seams):

```shell
scp seams-reviewer@65.108.55.103:/home/seams-reviewer/data.zip .
unzip data.zip
```

In case unzip is not installed on your system:

```shell
sudo apt-get install unzip
```

In the end, the structure of your ros_ws should look like this:
```
ros_ws
├── .data
│   ├── checkpoints
│   │   ├── depth.pth
│   │   ├── fusion.pth
│   │   └── rgb.pth
│   └── SynDrone_t01_h50.bag
```
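As a quick sanity check, the layout above could be verified with a small helper like the following (`verify_data_layout` is a hypothetical name, not part of the artifact; it takes the path to ros_ws as an argument):

```shell
# verify_data_layout ROS_WS_DIR -- check that every expected file from the
# downloaded data.zip is in place (hypothetical helper).
verify_data_layout() {
  local root="$1" missing=0
  for f in .data/checkpoints/depth.pth \
           .data/checkpoints/fusion.pth \
           .data/checkpoints/rgb.pth \
           .data/SynDrone_t01_h50.bag; do
    if [ ! -f "$root/$f" ]; then
      echo "missing: $f"
      missing=1
    fi
  done
  [ "$missing" -eq 0 ] && echo "data layout OK"
  return "$missing"
}
```

Example: `verify_data_layout ros_ws` from the repository root should print `data layout OK` once the download and unzip steps have completed.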
This artifact can be run with or without an exemplary managing system.
The logs will be stored in a folder log_dump next to the ros_ws directory.
Run the experiment from the root of the repository with one of the two following options (GPU or CPU). We recommend using the GPU, since SUNSET runs a neural network that performs more smoothly on the GPU.

```shell
# GPU
bash ./ros_ws/evaluation/test_runner_docker.sh gpu
# CPU
bash ./ros_ws/evaluation/test_runner_docker.sh cpu
```

After the script has finished, you should see a new log file in the log_dump folder.
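To quickly locate the newest log after a run, a helper like this can be used (`latest_log` is a hypothetical name, not part of the artifact; it assumes the log files land directly in the log_dump folder):

```shell
# latest_log DIR -- print the most recently modified file in DIR
# (hypothetical helper for inspecting log_dump after a run).
latest_log() {
  ls -t "$1"/* 2>/dev/null | head -n 1
}
```

Example: `latest_log log_dump` prints the path of the log file written by the most recent run.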
If you do: congratulations, you did everything right and were able to run a baseline managing system on SUNSET!
If you now want to evaluate the baseline system on all possible scenarios, you can run one of the following commands:

```shell
# GPU
bash ./ros_ws/evaluation/example_run_docker.sh gpu
# CPU
bash ./ros_ws/evaluation/example_run_docker.sh cpu
```

To evaluate the log files, you can use the eval script.
Create the necessary Docker network:

```shell
docker network create sunset-network
```

There is a devcontainer file, i.e., you can simply open VSCode in the root of this repository and reopen it in the devcontainer. This takes care of the rest.
Watch out: if you ran experiments with Docker before and then start an experiment in the dev container, be sure to remove the build and install directories beforehand.
Otherwise, colcon build will fail, since the paths differ between the Docker container and the dev container.
The sudo password for the dockuser inside the dev container is also dockuser.
```shell
sudo rm -r build/ install/ log/
```

Note: by default we use the GPU to run our experiments. If you want to run experiments inside the dev container, you can use the following script:

```shell
bash ./evaluation/test_runner_local.sh gpu
```

The logs will be stored in a folder log_dump next to the ros_ws directory.
If you'd like to run them on your CPU, replace the cuda tag of the image with cpu in the devcontainer.json. Note that this will significantly slow down the pipeline and therefore affect the reaction time and system downtime in your results.
To run the CPU experiments inside the dev container, use:

```shell
bash ./evaluation/test_runner_local.sh cpu
```

Note: this will significantly increase the duration of your runs. The segmentation node will run at a lower frequency, which affects the reaction time and system downtime of your experiment.