From 5e73387494150051bffc50d4a29c9c4c3b6c6d90 Mon Sep 17 00:00:00 2001
From: cxx
Date: Fri, 16 Jun 2017 15:38:42 +0800
Subject: [PATCH] Fix README since BreakOut pretrained model doesn't match the
 correct tensor shape. Therefore, Pong is used instead.

---
 README.md | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 679d0da..e636a94 100644
--- a/README.md
+++ b/README.md
@@ -61,6 +61,7 @@ python -m baselines.deepq.experiments.atari.download_model
 Once you pick a model, you can download it and visualize the learned policy. Be sure to pass `--dueling` flag to visualization script when using dueling models.
 
 ```bash
-python -m baselines.deepq.experiments.atari.download_model --blob model-atari-prior-duel-breakout-1 --model-dir /tmp/models
-python -m baselines.deepq.experiments.atari.enjoy --model-dir /tmp/models/model-atari-prior-duel-breakout-1 --env Breakout --dueling
+python -m baselines.deepq.experiments.atari.download_model --blob model-atari-duel-pong-1 --model-dir /tmp/models
+python -m baselines.deepq.experiments.atari.enjoy --model-dir /tmp/models/model-atari-duel-pong-1 --env Pong --dueling
+
 ```