Gymnasium/tests/wrappers/test_clip_action.py
Ariel Kwiatkowski c364506710 Seeding update (#2422)
* Ditch most of the seeding.py and replace np_random with the numpy default_rng. Let's see if tests pass

* Updated a bunch of RNG calls from the RandomState API to Generator API

* black; didn't expect that, did ya?

* Undo a typo

* blaaack

* More typo fixes

* Fixed setting/getting state in multidiscrete spaces

* Fix typo, fix a test to work with the new sampling

* Correctly (?) pass the randomly generated seed if np_random is called with None as seed

* Convert the Discrete sample to a python int (as opposed to np.int64)

* Remove some redundant imports

* First version of the compatibility layer for old-style RNG. Mainly to trigger tests.

* Removed redundant f-strings

* Style fixes, removing unused imports

* Try to make tests pass by removing atari from the dockerfile

* Try to make tests pass by removing atari from the setup

* Try to make tests pass by removing atari from the setup

* Try to make tests pass by removing atari from the setup

* First attempt at deprecating `env.seed` and supporting `env.reset(seed=seed)` instead. Tests should hopefully pass but throw up a million warnings.

* black; didn't expect that, didya?

* Rename the reset parameter in VecEnvs back to `seed`

* Updated tests to use the new seeding method

* Removed a bunch of old `seed` calls.

Fixed a bug in AsyncVectorEnv

* Stop Discrete envs from doing part of the setup (and using the randomness) in init (as opposed to reset)

* Add explicit seed to wrappers reset

* Remove an accidental return

* Re-add some legacy functions with a warning.

* Use deprecation instead of regular warnings for the newly deprecated methods/functions
2021-12-08 16:14:15 -05:00
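
The commit above swaps gym's `RandomState`-based `np_random` for NumPy's `Generator` API (`np.random.default_rng`) and moves seeding from the now-deprecated `env.seed()` into `env.reset(seed=...)`. The following is a minimal sketch of the call-level changes, using only the public NumPy API; gym's internal seeding helpers are omitted, and the commented env lines mirror the pattern exercised by the test below.

import numpy as np

# Legacy pattern: gym's np_random used to be a np.random.RandomState.
legacy_rng = np.random.RandomState(0)
x = legacy_rng.randint(0, 10)        # RandomState API
y = legacy_rng.random_sample()

# New pattern: np_random is a np.random.Generator created by default_rng.
rng = np.random.default_rng(0)
x = int(rng.integers(0, 10))         # Generator API replaces randint
y = rng.random()                     # replaces random_sample

# Env-side seeding now goes through reset() instead of a separate seed() call:
# env = gym.make("MountainCarContinuous-v0")
# env.reset(seed=0)                  # replaces env.seed(0); env.reset()

The `int(...)` cast mirrors the Discrete-sampling change noted in the commit message: `Generator.integers` returns `np.int64`, which is converted back to a plain Python `int`.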

import numpy as np

import gym
from gym.wrappers import ClipAction


def test_clip_action():
    # mountaincar: action-based rewards
    make_env = lambda: gym.make("MountainCarContinuous-v0")
    env = make_env()
    wrapped_env = ClipAction(make_env())

    seed = 0
    env.reset(seed=seed)
    wrapped_env.reset(seed=seed)

    # Step a manually clipped base env and the ClipAction-wrapped env with the
    # same (partly out-of-range) actions; both must agree on obs, reward, done.
    actions = [[0.4], [1.2], [-0.3], [0.0], [-2.5]]
    for action in actions:
        obs1, r1, d1, _ = env.step(
            np.clip(action, env.action_space.low, env.action_space.high)
        )
        obs2, r2, d2, _ = wrapped_env.step(action)
        assert np.allclose(r1, r2)
        assert np.allclose(obs1, obs2)
        assert d1 == d2