import numpy as np
import tensorflow as tf
from mpi4py import MPI


class MpiAdamOptimizer(tf.train.AdamOptimizer):
    """Adam optimizer that averages gradients across MPI processes."""

    def __init__(self, comm, **kwargs):
        self.comm = comm
        tf.train.AdamOptimizer.__init__(self, **kwargs)

    def compute_gradients(self, loss, var_list, **kwargs):
        grads_and_vars = tf.train.AdamOptimizer.compute_gradients(self, loss, var_list, **kwargs)
        # Drop variables that received no gradient.
        grads_and_vars = [(g, v) for g, v in grads_and_vars if g is not None]
        # Flatten all gradients into a single 1-D tensor so the whole update
        # can be averaged with one Allreduce call instead of one per variable.
        flat_grad = tf.concat([tf.reshape(g, (-1,)) for g, v in grads_and_vars], axis=0)
        shapes = [v.shape.as_list() for g, v in grads_and_vars]
        sizes = [int(np.prod(s)) for s in shapes]

        num_tasks = self.comm.Get_size()
        buf = np.zeros(sum(sizes), np.float32)

        def _collect_grads(flat_grad):
            # Sum the flat gradient across all processes, then divide by the
            # number of processes to obtain the mean gradient.
            self.comm.Allreduce(flat_grad, buf, op=MPI.SUM)
            np.divide(buf, float(num_tasks), out=buf)
            return buf

        avg_flat_grad = tf.py_func(_collect_grads, [flat_grad], tf.float32)
        avg_flat_grad.set_shape(flat_grad.shape)
        # Split the averaged flat gradient back into per-variable pieces and
        # restore each piece to its variable's original shape.
        avg_grads = tf.split(avg_flat_grad, sizes, axis=0)
        avg_grads_and_vars = [(tf.reshape(g, v.shape), v)
                              for g, (_, v) in zip(avg_grads, grads_and_vars)]
        return avg_grads_and_vars
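

# Usage sketch (not part of the original module): a minimal, hedged example of
# how this optimizer might be driven, assuming the TF1-style graph/session
# workflow above and a launch under MPI, e.g. `mpirun -np 4 python this_file.py`.
# The toy variable and loss below are hypothetical, chosen only to show that
# each process builds the same graph and the gradients are MPI-averaged.
if __name__ == "__main__":
    x = tf.Variable(1.0)
    loss = tf.square(x - 3.0)  # toy per-process objective
    opt = MpiAdamOptimizer(MPI.COMM_WORLD, learning_rate=1e-3)
    train_op = opt.minimize(loss)  # gradients averaged across processes
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for _ in range(100):
            sess.run(train_op)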