fix ppo command in readme

Louie Helm
2017-09-05 06:06:19 -07:00
committed by GitHub
parent 3d3ea6cb16
commit 589387403b


@@ -2,6 +2,6 @@
 - Original paper: https://arxiv.org/abs/1707.06347
 - Baselines blog post: https://blog.openai.com/openai-baselines-ppo/
-- `mpirun -np 8 python -m baselines.ppo.run_atari` runs the algorithm for 40M frames = 10M timesteps on an Atari game. See help (`-h`) for more options.
-- `python -m baselines.ppo.run_mujoco` runs the algorithm for 1M frames on a Mujoco environment.
+- `mpirun -np 8 python -m baselines.ppo1.run_atari` runs the algorithm for 40M frames = 10M timesteps on an Atari game. See help (`-h`) for more options.
+- `python -m baselines.ppo1.run_mujoco` runs the algorithm for 1M frames on a Mujoco environment.