* re-setting up travis
* re-setting up travis
* resolved merge conflicts, added missing dependency for codegen
* removed parallel tests (workers are failing for some reason)
* try test baselines only
* added language options - some weirdness in rcall image that requires them?
* added verbosity to tests
* try tests in baselines only
* ci/runtests.sh tests codegen (some failure on baselines specifically on travis, trying to narrow down the problem)
* removed render from codegen test - maybe that's the problem?
* trying even simpler command within the image to figure out the problem
* print out system info in ci/runtests.sh
* print system info outside of docker as well
* trying single test file in codegen
* install graphviz in the docker image
* git subrepo pull baselines
  subrepo:
    subdir:   "baselines"
    merged:   "8c2aea2"
  upstream:
    origin:   "git@github.com:openai/baselines.git"
    branch:   "master"
    commit:   "8c2aea2"
  git-subrepo:
    version:  "0.4.0"
    origin:   "git@github.com:ingydotnet/git-subrepo.git"
    commit:   "74339e8"
* added graphviz to the dockerfile (need both graphviz-dev and graphviz)
* only tests in codegen/algo/test_algo_builder.py
* run baselines tests only. still no clue why collection of codegen tests fails
* update baselines setup to install filelock for tests
* run slow tests
* skip slow tests in baselines
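A minimal sketch of the slow-test split, assuming the suite tags expensive tests with a custom `slow` marker (registered in pytest.ini) and deselects them in CI with `pytest -m "not slow"`:

```python
import time

import pytest


@pytest.mark.slow  # deselect in CI with: pytest -m "not slow"
def test_full_training_run():
    # stand-in for an expensive end-to-end training test
    time.sleep(0.1)
```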
* single test file in baselines
* try reinstalling tensorflow
* running slow tests
* try full baselines and codegen test suite
* in the test Dockerfile, reinstall tensorflow
* using fake display for codegen render tests
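The commit wires the fake display through the docker image itself; an equivalent in-process sketch, assuming the third-party pyvirtualdisplay package (a wrapper around Xvfb), would look like:

```python
from pyvirtualdisplay import Display

# start a headless X server so render() calls have a display to target
display = Display(visible=0, size=(1400, 900))
display.start()
try:
    pass  # run the render-dependent tests here
finally:
    display.stop()
```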
* fixed display-related failures by adding a custom entrypoint to the docker image
* set LC_ALL and LANG env variables in docker image
* try sequential tests
* include psutil in requirements; increase relative tolerance in test_low_level_algo_distr
* trying to fix codegen failures on travis
* git subrepo commit (merge) baselines
  subrepo:
    subdir:   "baselines"
    merged:   "9ce84da"
  upstream:
    origin:   "git@github.com:openai/baselines.git"
    branch:   "master"
    commit:   "b222dd0"
  git-subrepo:
    version:  "0.4.0"
    origin:   "git@github.com:ingydotnet/git-subrepo.git"
    commit:   "74339e8"
* fixed syntax in install.py
* changing the order of package installation
* removed supervised-reptile from installation list
* cron uses the full games repo in rcall
* fixed flake8 complaints
* rewrite all extras logic in baselines, install.py always uses [all]
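A sketch of the extras pattern this refers to; the extras names here are illustrative, not the actual ones in setup.py:

```python
from setuptools import setup

extras = {
    'test': ['pytest', 'filelock'],
    'mpi': ['mpi4py'],
}
# 'all' is the union of every other extra, so install.py can always
# run the equivalent of `pip install -e .[all]`
extras['all'] = sorted({dep for deps in extras.values() for dep in deps})

setup(
    name='baselines',
    packages=['baselines'],
    extras_require=extras,
)
```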
* exported rl-algs
* more stuff from rl-algs
* run slow tests
* re-exported rl_algs
* re-exported rl_algs - fixed problems with serialization test and test_cartpole
* replaced atari_arg_parser with common_arg_parser
* run.py can run algos from both baselines and rl_algs
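One way run.py can resolve an algorithm from either package is an import fallback; a sketch, with `get_alg_module` as an illustrative name:

```python
import importlib


def get_alg_module(alg, submodule=None):
    # prefer the internal rl_algs implementation, fall back to baselines
    name = '.'.join(filter(None, [alg, submodule]))
    try:
        return importlib.import_module('rl_algs.' + name)
    except ImportError:
        return importlib.import_module('baselines.' + name)
```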
* added approximate humanoid reward with ppo2 into the README for reference
* dummy commit to RUN BENCHMARKS
* dummy commit to RUN BENCHMARKS
* dummy commit to RUN BENCHMARKS
* dummy commit to RUN BENCHMARKS
* very dummy commit to RUN BENCHMARKS
* serialize variables as a dict, not as a list
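Dict-based serialization keys each value by variable name instead of list position, so checkpoints survive variables being created in a different order. A condensed sketch, assuming TF1-style sessions and joblib:

```python
import joblib


def save_variables(save_path, variables, sess):
    values = sess.run(variables)
    # keyed by name, not position
    joblib.dump({v.name: val for v, val in zip(variables, values)}, save_path)


def load_variables(load_path, variables, sess):
    loaded = joblib.load(load_path)
    sess.run([v.assign(loaded[v.name]) for v in variables])
```

Saving `tf.global_variables()` rather than only the trainables is what keeps the VecNormalize statistics in the checkpoint (see the "save all variables" commit below).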
* running_mean_std uses tensorflow variables
* fixed import in vec_normalize
* dummy commit to RUN BENCHMARKS
* dummy commit to RUN BENCHMARKS
* fixed flake8 complaints
* save all variables to make sure we save the vec_normalize normalization
* benchmarks on ppo2 only RUN BENCHMARKS
* make_atari_env compatible with mpi
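MPI compatibility here mostly means decorrelating the per-worker seeds; the usual scheme offsets the base seed by rank. A sketch, assuming mpi4py:

```python
try:
    from mpi4py import MPI
except ImportError:
    MPI = None


def worker_seed(base_seed):
    # distinct, reproducible seed per MPI worker
    rank = MPI.COMM_WORLD.Get_rank() if MPI is not None else 0
    return None if base_seed is None else base_seed + 10000 * rank
```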
* run ppo_mpi benchmarks only RUN BENCHMARKS
* hardcode names of retro environments
* add defaults
* changed default ppo2 lr schedule to linear RUN BENCHMARKS
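ppo2 takes hyperparameters like the learning rate as callables of the remaining-progress fraction `f` (1 at the start of training, 0 at the end), so a linear schedule is just a multiplication; sketch:

```python
def linear_schedule(initial_value):
    # f anneals from 1.0 to 0.0 over training, so the value decays
    # linearly from initial_value to zero
    return lambda f: f * initial_value


lr = linear_schedule(2.5e-4)  # e.g. the ppo2 Atari default
```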
* non-tf normalization benchmark RUN BENCHMARKS
* use ncpu=1 for mujoco sessions - gives a slight performance speedup
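The speedup comes from pinning TensorFlow's thread pools; a sketch of the session setup, assuming the TF1 API:

```python
import multiprocessing

import tensorflow as tf


def make_session(ncpu=None):
    # ncpu=1 avoids thread contention for the small MuJoCo networks;
    # larger runs can still use all cores
    ncpu = ncpu or multiprocessing.cpu_count()
    config = tf.ConfigProto(
        allow_soft_placement=True,
        intra_op_parallelism_threads=ncpu,
        inter_op_parallelism_threads=ncpu,
    )
    return tf.Session(config=config)
```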
* reverted running_mean_std to use property decorators for mean, var, count
* reverted VecNormalize to use RunningMeanStd (no tf)
* reverted VecNormalize to use RunningMeanStd (no tf)
* profiling wip
* use VecNormalize with regular RunningMeanStd
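For reference, the regular (non-TF) RunningMeanStd that VecNormalize ended up with keeps plain numpy state and merges batch statistics with the parallel-variance update (Chan et al.); condensed sketch:

```python
import numpy as np


class RunningMeanStd:
    def __init__(self, epsilon=1e-4, shape=()):
        self.mean = np.zeros(shape, np.float64)
        self.var = np.ones(shape, np.float64)
        self.count = epsilon

    def update(self, x):
        batch_mean = x.mean(axis=0)
        batch_var = x.var(axis=0)
        batch_count = x.shape[0]
        delta = batch_mean - self.mean
        tot = self.count + batch_count
        # parallel mean/variance merge (Chan et al.)
        self.mean = self.mean + delta * batch_count / tot
        m2 = (self.var * self.count + batch_var * batch_count
              + delta ** 2 * self.count * batch_count / tot)
        self.var = m2 / tot
        self.count = tot
```

VecNormalize then updates these stats on each observation batch and returns roughly `np.clip((obs - rms.mean) / np.sqrt(rms.var + eps), -clipob, clipob)`.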
* added acer runner (missing import)
* fixed flake8 complaints
* added a note in README about TfRunningMeanStd and serialization of VecNormalize
* dummy commit to RUN BENCHMARKS
* merged benchmarks branch