From 394339deb52e03ae0e1d27f21b941849278c749e Mon Sep 17 00:00:00 2001
From: pzhokhov
Date: Wed, 3 Oct 2018 20:53:58 -0700
Subject: [PATCH] Update README.md

---
 README.md | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/README.md b/README.md
index 06b67f6..a9b7bf6 100644
--- a/README.md
+++ b/README.md
@@ -94,10 +94,6 @@ DQN with Atari is at this point a classics of benchmarks. To run the baselines i
 ```
 python -m baselines.run --alg=deepq --env=PongNoFrameskip-v4 --num_timesteps=1e6
 ```
-*NOTE:*
-The DQN-based algorithms currently do not get high scores on the Atari games
-(see GitHub issue [431](https://github.com/openai/baselines/issues/431))
-We are currently investigating this and recommend users to instead use PPO2.
 
 ## Saving, loading and visualizing models
 The algorithms serialization API is not properly unified yet; however, there is a simple method to save / restore trained models.