# The Team: Serpentrain

This is the finalized repository of Serpentrain, the SerpentineAI team that competed in the 2020 Flatland challenge. The technical report covering this and other competitions entered by SerpentineAI teams can be found online.
## Run the code

The competition uses conda as its package manager. After installing conda (or miniconda), run the following commands in the repo root (~/../flatland) to download and install all requirements:

```shell
conda env create
conda activate flatland-rl
```
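The `conda env create` command reads the repo's environment.yml. A minimal sketch of what such a file looks like (the package list and Python pin here are illustrative, not the repo's actual contents):

```yaml
# Hypothetical conda environment file -- the real one lives at ./environment.yml.
name: flatland-rl        # matches the `conda activate flatland-rl` step above
dependencies:
  - python=3.7
  - pip
  - pip:
      - flatland-rl      # the competition's environment package on PyPI
```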
## DQN Agent

### Training

Training our agent uses 3 GPUs, since that is what we had access to during the competition. To train the agent, run:

```shell
PYTHONPATH=. python serpentrain/reinforcement_learning/distributed/main_distributed.py
```

Warning: this may freeze your system, as training is resource-heavy.

To see the available training options, run:

```shell
PYTHONPATH=. python serpentrain/reinforcement_learning/distributed/main_distributed.py -h
```
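To give an idea of what the training script does, here is a conceptual sketch of the one-step DQN update at its core. This is not the repo's actual implementation (serpentrain trains distributed across GPUs); the class and function names are illustrative:

```python
# Minimal sketch of DQN ingredients: a replay buffer and TD-target computation.
# Illustrative only -- not the serpentrain codebase's API.
import random
from collections import deque


class ReplayBuffer:
    """Fixed-size FIFO buffer of (state, action, reward, next_state, done) tuples."""

    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)  # old transitions are evicted

    def push(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)


def td_targets(batch, q_next, gamma=0.99):
    """One-step TD targets r + gamma * max_a' Q(s', a'), with bootstrapping
    cut off at terminal states (done=True)."""
    targets = []
    for (_, _, reward, _, done), q_values in zip(batch, q_next):
        targets.append(reward if done else reward + gamma * max(q_values))
    return targets
```

The network trained against these targets is what the checkpoint files under `checkpoints/` store.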
### Running

Adjust the settings in run.py:

```python
RENDER = True  # Whether to render the game
USE_GPU = True  # If you have a GPU
DQN_MODEL = True
CHECKPOINT_PATH = "path/to/checkpoint.pt"  # e.g. './checkpoints/submission/snapshot-20201104-2201-epoch-1.pt'
```

Then run:

```shell
bash local_run.sh
```
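The flags above select which agent gets built. A small sketch of that dispatch (the function below is hypothetical, not the repo's actual run.py logic):

```python
# Hypothetical sketch of how run.py's flags could pick an agent.
def select_agent(dqn_model, checkpoint_path):
    """Return a label for the agent that the flags would construct."""
    if dqn_model:
        # A DQN agent needs trained weights to act.
        if not checkpoint_path:
            raise ValueError("DQN_MODEL=True requires a CHECKPOINT_PATH")
        return f"dqn:{checkpoint_path}"
    # The rule-based agent needs no checkpoint or GPU.
    return "rule_based"
```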
## Rule Based Agent

Adjust the settings in run.py:

```python
RENDER = True  # Whether to render the game
USE_GPU = False  # Not necessary
DQN_MODEL = False
CHECKPOINT_PATH = ""  # Not necessary
```

Then run:

```shell
bash local_run.sh
```
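In contrast to the DQN agent, a rule-based agent picks actions from local observations with fixed rules, so it needs no checkpoint. A toy sketch of the idea (hypothetical rules, not the team's actual agent; actions encoded as strings):

```python
# Toy rule-based policy in the spirit of this repo's agent -- illustrative only.
def rule_based_action(next_cell_occupied, at_switch, shortest_dir="forward"):
    """Pick an action for one train from simple local observations."""
    if next_cell_occupied:
        return "stop"        # avoid collisions by waiting
    if at_switch:
        return shortest_dir  # at a switch, follow the shortest path to the target
    return "forward"         # on plain track, just keep moving
```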
## Acknowledgements

### SerpentineAI

SerpentineAI is a student team from the Eindhoven University of Technology. During the competition, training ran on computational resources provided to SerpentineAI by VBTI.