# Hindsight Experience Replay

For details on Hindsight Experience Replay (HER), please read the [paper](https://arxiv.org/abs/1707.01495).

## How to use Hindsight Experience Replay

### Getting started

Training an agent is very simple:

```bash
python -m baselines.run --alg=her --env=FetchReach-v1 --num_timesteps=5000
```

This will train a DDPG+HER agent on the FetchReach environment. You should see the success rate go up quickly to 1.0, which means that the agent achieves the desired goal in 100% of the cases (note how HER can solve it in under 5k steps - try doing that with plain PPO by replacing `her` with `ppo2`). The training script logs other diagnostics as well. The policy at the end of training can be saved using the `--save_path` flag, for instance:

```bash
python -m baselines.run --alg=her --env=FetchReach-v1 --num_timesteps=5000 --save_path=~/policies/her/fetchreach5k
```

To inspect what the agent has learned, use the `--play` flag:

```bash
python -m baselines.run --alg=her --env=FetchReach-v1 --num_timesteps=5000 --play
```

(Note that `--play` can be combined with `--load_path`, which lets you load previously trained policies; for more options see the main README.md.)
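
For example, a hypothetical invocation that reloads the policy saved earlier (the path reuses the `--save_path` example above; `--num_timesteps=0` is assumed here so that no further training happens before visualization):

```bash
# Hypothetical example: reload a previously saved policy and visualize it.
python -m baselines.run --alg=her --env=FetchReach-v1 --num_timesteps=0 \
    --load_path=~/policies/her/fetchreach5k --play
```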

### Reproducing results

In Plappert et al. (2018), 38 trajectories were generated in parallel (19 MPI processes, each generating 2 trajectories, computing gradients from them, and aggregating). To reproduce that behaviour, use:

```bash
mpirun -np 19 python -m baselines.run --num_env=2 --alg=her ...
```

This requires a machine with a sufficient number of physical CPU cores. In our experiments, we used Azure's D15v2 instances, which have 20 physical cores. We scheduled the experiment on only 19 of those to leave some headroom on the system.
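
As a concrete illustration only (the environment and step count below are hypothetical placeholders, borrowed from the demonstration experiment later in this document), a full command could look like:

```bash
# Hypothetical example: environment and step count are illustrative placeholders.
mpirun -np 19 python -m baselines.run --num_env=2 --alg=her \
    --env=FetchPickAndPlace-v1 --num_timesteps=2.5e6
```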

## Hindsight Experience Replay with Demonstrations

Pre-recorded demonstrations can be used to overcome the exploration problem in HER-based reinforcement learning. For details, please read the [paper](https://arxiv.org/abs/1709.10089).

### Getting started

The first step is to generate the demonstration dataset. This can be done in two ways: either by using a VR system to manipulate the arm with physical VR trackers, or, more simply, by writing a script that carries out the task. Some tasks are complex enough that writing a hardcoded script for them is difficult (e.g. Fetch Push), but our focus here is on an algorithm that helps the agent learn from demonstrations, not on the demonstration-generation paradigm itself. The data collection part is therefore left to the reader's choice.

We provide a script for the Fetch Pick and Place task. To generate demonstrations for it, execute:

```bash
python experiment/data_generation/fetch_data_generation.py
```

This outputs `data_fetch_random_100.npz`, which is our demonstration data file.
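
If you want to sanity-check the generated archive, a minimal sketch along these lines can help; it makes no assumptions about the key names inside the file and simply lists whatever the script stored:

```python
# Minimal sketch: list the contents of the generated demonstration file.
# No particular key names are assumed; allow_pickle is needed if the file
# stores object arrays (e.g. lists of per-step info dicts).
import numpy as np

data = np.load('data_fetch_random_100.npz', allow_pickle=True)
for key in data.files:
    arr = np.asarray(data[key])
    print(f'{key}: shape={arr.shape}, dtype={arr.dtype}')
```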

To launch training with demonstrations (more technically, with a behaviour cloning loss as an auxiliary loss), run the following:

```bash
python -m baselines.run --alg=her --env=FetchPickAndPlace-v1 --num_timesteps=2.5e6 --demo_file=/Path/to/demo_file.npz
```

This will train a DDPG+HER agent on the FetchPickAndPlace environment using the previously generated demonstration data. To inspect what the agent has learned, use the `--play` flag as described above.

### Configuration

The provided configuration trains an agent with HER without demonstrations. For the HER algorithm to learn from demonstrations, a few parameters need to be changed. To do that, set:

* `bc_loss: 1` - whether or not to use the behaviour cloning loss as an auxiliary loss
* `q_filter: 1` - whether or not a Q-value filter should be used on the actor outputs
* `num_demo: 100` - number of expert demo episodes
* `demo_batch_size: 128` - number of samples to be used from the demonstrations buffer, per MPI thread
* `prm_loss_weight: 0.001` - weight corresponding to the primary loss
* `aux_loss_weight: 0.0078` - weight corresponding to the auxiliary loss, also called the cloning loss

Apart from these changes, the reported results also use the following configuration changes:

* `n_cycles: 20` - per epoch
* `batch_size: 1024` - per MPI thread (the total batch size scales with the number of threads)
* `random_eps: 0.1` - percentage of the time a random action is taken
* `noise_eps: 0.1` - standard deviation of the Gaussian noise added to not-completely-random actions

These parameters can be changed either in `experiment/config.py` or passed on the command line as `--param=value`.
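
For example, a hypothetical command line combining the demonstration-specific settings and the reported configuration changes above (all values are the ones listed; the demo file path is a placeholder):

```bash
# Hypothetical example: parameter values taken from the lists above,
# and the demo file path is a placeholder.
python -m baselines.run --alg=her --env=FetchPickAndPlace-v1 --num_timesteps=2.5e6 \
    --demo_file=/Path/to/demo_file.npz \
    --bc_loss=1 --q_filter=1 --num_demo=100 --demo_batch_size=128 \
    --prm_loss_weight=0.001 --aux_loss_weight=0.0078 \
    --n_cycles=20 --batch_size=1024 --random_eps=0.1 --noise_eps=0.1
```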

### Results

Training with demonstrations helps overcome the exploration problem and achieves faster and better convergence. The following graphs contrast training with and without demonstration data; we report the mean Q-values vs. epoch and the success rate vs. epoch:

Training results for the Fetch Pick and Place task, contrasting training with and without demonstration data.