From 34ae3194b4a15c57a8e5f2ae4d70191703a68f5a Mon Sep 17 00:00:00 2001
From: Peter Zhokhov
Date: Thu, 27 Sep 2018 12:51:43 -0700
Subject: [PATCH] add a note about DQN algorithms not performing well

---
 README.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/README.md b/README.md
index a9b7bf6..06b67f6 100644
--- a/README.md
+++ b/README.md
@@ -94,6 +94,10 @@ DQN with Atari is at this point a classics of benchmarks. To run the baselines i
 ```
 python -m baselines.run --alg=deepq --env=PongNoFrameskip-v4 --num_timesteps=1e6
 ```
+*NOTE:*
+The DQN-based algorithms currently do not achieve high scores on the Atari games
+(see GitHub issue [431](https://github.com/openai/baselines/issues/431)).
+We are investigating the cause and recommend using PPO2 instead.
 
 ## Saving, loading and visualizing models
 The algorithms serialization API is not properly unified yet; however, there is a simple method to save / restore trained models.