add a note about DQN algorithms not performing well

This commit is contained in:
Peter Zhokhov
2018-09-27 12:51:43 -07:00
parent 4402b8eba6
commit 34ae3194b4

@@ -94,6 +94,10 @@ DQN with Atari is at this point a classic benchmark. To run the baselines i
```
python -m baselines.run --alg=deepq --env=PongNoFrameskip-v4 --num_timesteps=1e6
```
*NOTE:*
The DQN-based algorithms currently do not achieve high scores on the Atari games
(see GitHub issue [431](https://github.com/openai/baselines/issues/431)).
We are investigating this and recommend using PPO2 instead.
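As a point of comparison, PPO2 uses the same runner with a different `--alg` flag; the command below simply mirrors the DQN invocation above and keeps the runner's default hyperparameters (not a tuned configuration):
```
python -m baselines.run --alg=ppo2 --env=PongNoFrameskip-v4 --num_timesteps=1e6
```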
## Saving, loading and visualizing models
The algorithms' serialization API is not yet unified; however, there is a simple method to save and restore trained models.
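A minimal sketch of that method, assuming the `--save_path`, `--load_path`, and `--play` options of `baselines.run` (check `python -m baselines.run --help` for the exact flags); the model path here is purely illustrative:
```
# train PPO2 on Pong and save the final model (path is illustrative)
python -m baselines.run --alg=ppo2 --env=PongNoFrameskip-v4 --num_timesteps=1e6 --save_path=~/models/pong_ppo2
# reload the saved model and watch it play, with no further training
python -m baselines.run --alg=ppo2 --env=PongNoFrameskip-v4 --num_timesteps=0 --load_path=~/models/pong_ppo2 --play
```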