AI agent to solve train scheduling problems for the 'Flatland' competition, 2020

The Team: Serpentrain

This is the finalized repository of Serpentrain, the SerpentineAI team that competed in the 2020 Flatland challenge. The technical report covering this and other competitions entered by SerpentineAI teams can be found online.

Run the code

The competition uses Conda as its package manager. After installing Conda (or Miniconda), run the following commands in the repository root (~/../flatland) to download and install all requirements:

conda env create
conda activate flatland-rl
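For reference, a minimal sketch of what an environment.yml along these lines could look like. The environment name flatland-rl matches the activation command above; the listed dependencies and versions are illustrative assumptions, not the repository's actual pinned specification.

```yaml
# Illustrative sketch only -- see the repository's environment.yml for the real spec.
name: flatland-rl          # matches `conda activate flatland-rl` above
channels:
  - conda-forge
dependencies:
  - python=3.7             # assumed version; check the actual file
  - pip
  - pip:
      - flatland-rl        # the competition environment package
      - torch              # assumed, for the DQN agent
```

Running `conda env create` in the repository root picks up environment.yml automatically and creates the named environment.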

DQN Agent

Training

Training our agent requires 3 GPUs, as that is what we had access to during the competition. To train the agent, run:

PYTHONPATH=. python serpentrain/reinforcement_learning/distributed/main_distributed.py

Warning: this may freeze your system, as training is resource-heavy.

To see the available training options, run:

PYTHONPATH=. python serpentrain/reinforcement_learning/distributed/main_distributed.py -h

Running

Adjust the following settings in run.py:

RENDER = True  # Whether to render the game
USE_GPU = True  # If you have a GPU
DQN_MODEL = True  # Use the trained DQN agent
CHECKPOINT_PATH = "path/to/checkpoint.pt"  # e.g. './checkpoints/submission/snapshot-20201104-2201-epoch-1.pt'
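As a rough illustration of how these flags could drive agent selection, the sketch below branches on DQN_MODEL. The class and function names here are hypothetical placeholders, not the repository's actual code.

```python
# Hypothetical sketch of how run.py's flags might select an agent.
# DQNAgent and RuleBasedAgent are placeholder stand-ins, not the repo's classes.

class DQNAgent:
    def __init__(self, checkpoint_path, use_gpu):
        # In the real code, the checkpoint would be loaded here (e.g. via torch.load).
        self.checkpoint_path = checkpoint_path
        self.device = "cuda" if use_gpu else "cpu"

class RuleBasedAgent:
    """Placeholder for the rule-based agent; needs no checkpoint or GPU."""

def select_agent(dqn_model, checkpoint_path, use_gpu):
    """Return the agent implied by the run.py flags."""
    if dqn_model:
        return DQNAgent(checkpoint_path, use_gpu)
    return RuleBasedAgent()

# Mirrors the DQN configuration above.
agent = select_agent(True, "path/to/checkpoint.pt", False)
```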

Then run:

bash local_run.sh

Rule Based Agent

Adjust the following settings in run.py:

RENDER = True  # Whether to render the game
USE_GPU = False  # No GPU needed for the rule-based agent
DQN_MODEL = False  # Use the rule-based agent
CHECKPOINT_PATH = ""  # No checkpoint needed

Then run:

bash local_run.sh

Acknowledgements

SerpentineAI

SerpentineAI is a student team from Eindhoven University of Technology. Computational resources provided to SerpentineAI by VBTI were used during training for the competition.