Mirror of https://github.com/Farama-Foundation/Gymnasium.git (synced 2025-08-19 13:32:03 +00:00)
Seeding update (#2422)
* Ditch most of the seeding.py and replace np_random with the numpy default_rng. Let's see if tests pass
* Updated a bunch of RNG calls from the RandomState API to Generator API
* black; didn't expect that, did ya?
* Undo a typo
* blaaack
* More typo fixes
* Fixed setting/getting state in multidiscrete spaces
* Fix typo, fix a test to work with the new sampling
* Correctly (?) pass the randomly generated seed if np_random is called with None as seed
* Convert the Discrete sample to a python int (as opposed to np.int64)
* Remove some redundant imports
* First version of the compatibility layer for old-style RNG. Mainly to trigger tests.
* Removed redundant f-strings
* Style fixes, removing unused imports
* Try to make tests pass by removing atari from the dockerfile
* Try to make tests pass by removing atari from the setup
* Try to make tests pass by removing atari from the setup
* Try to make tests pass by removing atari from the setup
* First attempt at deprecating `env.seed` and supporting `env.reset(seed=seed)` instead. Tests should hopefully pass but throw up a million warnings.
* black; didn't expect that, didya?
* Rename the reset parameter in VecEnvs back to `seed`
* Updated tests to use the new seeding method
* Removed a bunch of old `seed` calls. Fixed a bug in AsyncVectorEnv
* Stop Discrete envs from doing part of the setup (and using the randomness) in init (as opposed to reset)
* Add explicit seed to wrappers reset
* Remove an accidental return
* Re-add some legacy functions with a warning.
* Use deprecation instead of regular warnings for the newly deprecated methods/functions
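For context, the seeding pattern the tests below are migrating to can be sketched as follows (a minimal illustration written against the gym API of this era, not code from the diff; "CartPole-v1" is just an arbitrary example environment id):

    import gym

    env = gym.make("CartPole-v1")

    # Old, now-deprecated pattern: seed the env, then reset it.
    # env.seed(0)
    # env.reset()

    # New pattern introduced by this commit: pass the seed to reset().
    env.reset(seed=0)

    action = env.action_space.sample()
    _, reward, done, info = env.step(action)  # 4-tuple step API of this era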
committed by GitHub
parent b84b69c872
commit c364506710
@@ -15,10 +15,8 @@ def test_transform_reward(env_id):
     wrapped_env = TransformReward(gym.make(env_id), lambda r: scale * r)
     action = env.action_space.sample()
 
-    env.seed(0)
-    env.reset()
-    wrapped_env.seed(0)
-    wrapped_env.reset()
+    env.reset(seed=0)
+    wrapped_env.reset(seed=0)
 
     _, reward, _, _ = env.step(action)
     _, wrapped_reward, _, _ = wrapped_env.step(action)
@@ -33,10 +31,8 @@ def test_transform_reward(env_id):
     wrapped_env = TransformReward(gym.make(env_id), lambda r: np.clip(r, min_r, max_r))
     action = env.action_space.sample()
 
-    env.seed(0)
-    env.reset()
-    wrapped_env.seed(0)
-    wrapped_env.reset()
+    env.reset(seed=0)
+    wrapped_env.reset(seed=0)
 
     _, reward, _, _ = env.step(action)
     _, wrapped_reward, _, _ = wrapped_env.step(action)
@@ -49,10 +45,8 @@ def test_transform_reward(env_id):
     env = gym.make(env_id)
     wrapped_env = TransformReward(gym.make(env_id), lambda r: np.sign(r))
 
-    env.seed(0)
-    env.reset()
-    wrapped_env.seed(0)
-    wrapped_env.reset()
+    env.reset(seed=0)
+    wrapped_env.reset(seed=0)
 
     for _ in range(1000):
         action = env.action_space.sample()
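The commit message also mentions moving internal RNG usage from numpy's legacy RandomState API to the Generator API returned by np.random.default_rng. Roughly, the kind of substitution involved looks like this (an illustrative sketch of the two numpy APIs, not code taken from the commit):

    import numpy as np

    # Legacy RandomState-style calls (old np_random objects):
    rng_old = np.random.RandomState(0)
    value_old = rng_old.randint(0, 10)

    # Generator-style calls (np_random created via default_rng):
    rng_new = np.random.default_rng(0)
    value_new = int(rng_new.integers(0, 10))  # cast to a python int, as the commit does for Discrete samples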