Merge pull request #104 from stevenschmatz/patch-1
Fix relative links in README.md
@@ -15,7 +15,7 @@ python -m baselines.deepq.experiments.enjoy_cartpole
 ```
 
 
-Be sure to check out the source code of [both](baselines/deepq/experiments/train_cartpole.py) [files](baselines/deepq/experiments/enjoy_cartpole.py)!
+Be sure to check out the source code of [both](experiments/train_cartpole.py) [files](experiments/enjoy_cartpole.py)!
 
 ## If you wish to apply DQN to solve a problem.
 
@@ -49,4 +49,4 @@ Once you pick a model, you can download it and visualize the learned policy. Be
 python -m baselines.deepq.experiments.atari.download_model --blob model-atari-duel-pong-1 --model-dir /tmp/models
 python -m baselines.deepq.experiments.atari.enjoy --model-dir /tmp/models/model-atari-duel-pong-1 --env Pong --dueling
 
 ```