"""classic Acrobot task"""
from typing import Optional
import numpy as np
import pygame
from pygame import gfxdraw
from numpy import sin, cos, pi
from gym import core, spaces
from gym.utils import seeding
__copyright__ = "Copyright 2013, RLPy http://acl.mit.edu/RLPy"
__credits__ = [
"Alborz Geramifard",
"Robert H. Klein",
"Christoph Dann",
"William Dabney",
"Jonathan P. How",
]
__license__ = "BSD 3-Clause"
__author__ = "Christoph Dann <cdann@cdann.de>"
# SOURCE:
# https://github.com/rlpy/rlpy/blob/master/rlpy/Domains/Acrobot.py
class AcrobotEnv(core.Env):
"""
### Description
The Acrobot system includes two joints and two links, where the joint between the two links is actuated. Initially, the
links are hanging downwards, and the goal is to swing the end of the lower link up to a given height by applying
torque to the actuated (middle) joint.
**Gif**: two blue pendulum links connected by two green joints. The joint in between the two pendulum links is acted
upon by the agent via changes in torque. The goal is to swing the end of the outer-link to reach the target height
(black horizontal line above system).
### Action Space
The action is discrete: apply -1, 0 or +1 torque to the joint between the two pendulum links.
| Num | Action |
|-----|------------------------|
| 0 | apply -1 torque to the joint |
| 1 | apply 0 torque to the joint |
| 2 | apply 1 torque to the joint |
### Observation Space
The observation space gives information about the two rotational joint angles `theta1` and `theta2`, as well as their
angular velocities:
- `theta1` is the angle of the inner link joint, where an angle of 0 indicates the first link is pointing directly
downwards.
- `theta2` is *relative to the angle of the first link.* An angle of 0 corresponds to the two links being aligned,
i.e. the second link pointing in the same direction as the first.
The angular velocities of `theta1` and `theta2` are bounded at ±4π and ±9π, respectively.
The observation is an `ndarray` with shape `(6,)` where the elements correspond to the following:
| Num | Observation | Min | Max |
|-----|-----------------------|----------------------|--------------------|
| 0 | Cosine of `theta1` | -1 | 1 |
| 1 | Sine of `theta1` | -1 | 1 |
| 2 | Cosine of `theta2` | -1 | 1 |
| 3 | Sine of `theta2` | -1 | 1 |
| 4 | Angular velocity of `theta1` | ~ -12.567 (-4 * pi) | ~ 12.567 (4 * pi) |
| 5 | Angular velocity of `theta2` | ~ -28.274 (-9 * pi) | ~ 28.274 (9 * pi) |
or `[cos(theta1) sin(theta1) cos(theta2) sin(theta2) thetaDot1 thetaDot2]`. As an example, a state of
`[1, 0, 1, 0, ..., ...]` indicates that both links are pointing downwards.
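The joint angles can be recovered from an observation with `arctan2`. A minimal sketch (the `obs` value below is only
an illustration, roughly corresponding to both links hanging straight down; the names are not part of the environment API):
```python
import numpy as np

# obs = [cos(theta1), sin(theta1), cos(theta2), sin(theta2), dtheta1, dtheta2]
obs = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 0.0])
theta1 = np.arctan2(obs[1], obs[0])  # angle of the inner joint, in [-pi, pi]
theta2 = np.arctan2(obs[3], obs[2])  # angle of the second link, relative to the first
```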
### Rewards
Every step that does not reach the goal (the termination criterion) incurs a reward of -1. Reaching the target height
and terminating incurs a reward of 0. The reward threshold is -100.
### Starting State
At start, each parameter in the underlying state (`theta1`, `theta2`, and the two angular velocities) is initialized
uniformly at random between -0.1 and 0.1. This means both links are pointing roughly downwards.
### Episode Termination
The episode terminates if one of the following occurs:
1. The target height is achieved. As constructed, this occurs when
`-cos(theta1) - cos(theta2 + theta1) > 1.0` (see the sketch after this list)
2. The episode length is greater than 500 (200 for v0)
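As a rough sketch of the height check in item 1 (reusing angles recovered as in the observation example above; the
names and values here are only illustrative):
```python
from numpy import cos

theta1, theta2 = 0.0, 0.0  # example values: both links hanging straight down
# Height of the tip of the outer link above the fixed pivot, measured in link lengths.
tip_height = -cos(theta1) - cos(theta1 + theta2)
terminated = bool(tip_height > 1.0)  # False here: the tip is well below the target line
```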
### Arguments
No additional arguments are supported when constructing the environment. As an example:
```python
import gym
env_name = 'Acrobot-v1'
env = gym.make(env_name)
```
By default, the dynamics of the acrobot follow those described in Richard Sutton's book
[Reinforcement Learning: An Introduction](http://incompleteideas.net/book/11/node4.html). However, a `book_or_nips`
setting can be modified on the environment to change the pendulum dynamics to those described
in [the original NeurIPS paper](https://papers.nips.cc/paper/1995/hash/8f1d43620bc6bb580df6e80b0dc05c48-Abstract.html).
See the following note and
the [implementation](https://github.com/openai/gym/blob/master/gym/envs/classic_control/acrobot.py) for details:
> The dynamics equations were missing some terms in the NIPS paper which
are present in the book. R. Sutton confirmed in personal correspondence
that the experimental results shown in the paper and the book were
generated with the equations shown in the book.
However, there is the option to run the domain with the paper equations
by setting `book_or_nips = 'nips'`
Continuing from the prior example:
```python
# To change the dynamics as described above
env.env.book_or_nips = 'nips'
```
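A minimal random-action rollout, sketching the interaction loop under the step API used by this version of gym
(`obs, reward, done, info`; the variable names are only for illustration):
```python
import gym

env = gym.make('Acrobot-v1')
obs = env.reset(seed=0)  # seeding is passed through reset in this version
done = False
total_reward = 0.0
while not done:
    action = env.action_space.sample()           # one of {0, 1, 2}
    obs, reward, done, info = env.step(action)   # reward is -1 per step until the goal height is reached
    total_reward += reward
env.close()
```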
### Version History
- v1: Maximum number of steps increased from 200 to 500. The observation space for v0 provided direct readings of
`theta1` and `theta2` in radians, with a range of `[-pi, pi]`. The v1 observation space as described here provides the
sine and cosine of each angle instead.
- v0: Initial version release (1.0.0) (removed from gym for v1)
### References
- Sutton, R. S. (1996). Generalization in Reinforcement Learning: Successful Examples Using Sparse Coarse Coding. In D. Touretzky, M. C. Mozer, & M. Hasselmo (Eds.), Advances in Neural Information Processing Systems (Vol. 8). MIT Press. https://proceedings.neurips.cc/paper/1995/file/8f1d43620bc6bb580df6e80b0dc05c48-Paper.pdf
- Sutton, R. S., & Barto, A. G. (2018). Reinforcement Learning: An Introduction. The MIT Press.
"""
metadata = {"render.modes": ["human", "rgb_array"], "video.frames_per_second": 15}
dt = 0.2
LINK_LENGTH_1 = 1.0 # [m]
LINK_LENGTH_2 = 1.0 # [m]
LINK_MASS_1 = 1.0 #: [kg] mass of link 1
LINK_MASS_2 = 1.0 #: [kg] mass of link 2
LINK_COM_POS_1 = 0.5 #: [m] position of the center of mass of link 1
LINK_COM_POS_2 = 0.5 #: [m] position of the center of mass of link 2
LINK_MOI = 1.0 #: moments of inertia for both links
MAX_VEL_1 = 4 * pi
MAX_VEL_2 = 9 * pi
AVAIL_TORQUE = [-1.0, 0.0, +1]
torque_noise_max = 0.0
SCREEN_DIM = 500
#: use dynamics equations from the nips paper or the book
book_or_nips = "book"
action_arrow = None
domain_fig = None
actions_num = 3
def __init__(self):
self.screen = None
self.isopen = True
high = np.array(
[1.0, 1.0, 1.0, 1.0, self.MAX_VEL_1, self.MAX_VEL_2], dtype=np.float32
)
low = -high
self.observation_space = spaces.Box(low=low, high=high, dtype=np.float32)
self.action_space = spaces.Discrete(3)
self.state = None
def reset(
self,
*,
seed: Optional[int] = None,
return_info: bool = False,
options: Optional[dict] = None
):
super().reset(seed=seed)
self.state = self.np_random.uniform(low=-0.1, high=0.1, size=(4,)).astype(
np.float32
)
if not return_info:
return self._get_ob()
else:
return self._get_ob(), {}
def step(self, a):
s = self.state
assert s is not None, "Call reset before using AcrobotEnv object."
torque = self.AVAIL_TORQUE[a]
# Add noise to the force action
if self.torque_noise_max > 0:
torque += self.np_random.uniform(
-self.torque_noise_max, self.torque_noise_max
)
# Now, augment the state with our force action so it can be passed to
# _dsdt
s_augmented = np.append(s, torque)
ns = rk4(self._dsdt, s_augmented, [0, self.dt])
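# Wrap the joint angles back into [-pi, pi] and clip the angular velocities
# to their maximum magnitudes before storing the new state.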
ns[0] = wrap(ns[0], -pi, pi)
ns[1] = wrap(ns[1], -pi, pi)
ns[2] = bound(ns[2], -self.MAX_VEL_1, self.MAX_VEL_1)
ns[3] = bound(ns[3], -self.MAX_VEL_2, self.MAX_VEL_2)
self.state = ns
terminal = self._terminal()
reward = -1.0 if not terminal else 0.0
return (self._get_ob(), reward, terminal, {})
def _get_ob(self):
s = self.state
assert s is not None, "Call reset before using AcrobotEnv object."
return np.array(
[cos(s[0]), sin(s[0]), cos(s[1]), sin(s[1]), s[2], s[3]], dtype=np.float32
)
def _terminal(self):
s = self.state
assert s is not None, "Call reset before using AcrobotEnv object."
return bool(-cos(s[0]) - cos(s[1] + s[0]) > 1.0)
def _dsdt(self, s_augmented):
m1 = self.LINK_MASS_1
m2 = self.LINK_MASS_2
l1 = self.LINK_LENGTH_1
lc1 = self.LINK_COM_POS_1
lc2 = self.LINK_COM_POS_2
I1 = self.LINK_MOI
I2 = self.LINK_MOI
g = 9.8
a = s_augmented[-1]
s = s_augmented[:-1]
theta1 = s[0]
theta2 = s[1]
dtheta1 = s[2]
dtheta2 = s[3]
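# d1, d2: entries of the manipulator inertia matrix;
# phi2: gravity term acting on the second link;
# phi1: Coriolis, centrifugal and gravity terms acting on the first link
# (following the acrobot equations in Sutton & Barto and the RLPy source).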
d1 = (
m1 * lc1 ** 2
+ m2 * (l1 ** 2 + lc2 ** 2 + 2 * l1 * lc2 * cos(theta2))
+ I1
+ I2
)
d2 = m2 * (lc2 ** 2 + l1 * lc2 * cos(theta2)) + I2
phi2 = m2 * lc2 * g * cos(theta1 + theta2 - pi / 2.0)
phi1 = (
-m2 * l1 * lc2 * dtheta2 ** 2 * sin(theta2)
- 2 * m2 * l1 * lc2 * dtheta2 * dtheta1 * sin(theta2)
+ (m1 * lc1 + m2 * l1) * g * cos(theta1 - pi / 2)
+ phi2
)
if self.book_or_nips == "nips":
# the following line is consistent with the description in the
# paper
ddtheta2 = (a + d2 / d1 * phi1 - phi2) / (m2 * lc2 ** 2 + I2 - d2 ** 2 / d1)
else:
# the following line is consistent with the java implementation and the
# book
ddtheta2 = (
a + d2 / d1 * phi1 - m2 * l1 * lc2 * dtheta1 ** 2 * sin(theta2) - phi2
) / (m2 * lc2 ** 2 + I2 - d2 ** 2 / d1)
ddtheta1 = -(d2 * ddtheta2 + phi1) / d1
return (dtheta1, dtheta2, ddtheta1, ddtheta2, 0.0)
def render(self, mode="human"):
if self.screen is None:
pygame.init()
self.screen = pygame.display.set_mode((self.SCREEN_DIM, self.SCREEN_DIM))
self.surf = pygame.Surface((self.SCREEN_DIM, self.SCREEN_DIM))
self.surf.fill((255, 255, 255))
s = self.state
bound = self.LINK_LENGTH_1 + self.LINK_LENGTH_2 + 0.2 # 2.2 for default
scale = self.SCREEN_DIM / (bound * 2)
offset = self.SCREEN_DIM / 2
if s is None:
return None
p1 = [
-self.LINK_LENGTH_1 * cos(s[0]) * scale,
self.LINK_LENGTH_1 * sin(s[0]) * scale,
]
p2 = [
p1[0] - self.LINK_LENGTH_2 * cos(s[0] + s[1]) * scale,
p1[1] + self.LINK_LENGTH_2 * sin(s[0] + s[1]) * scale,
]
xys = np.array([[0, 0], p1, p2])[:, ::-1]
thetas = [s[0] - pi / 2, s[0] + s[1] - pi / 2]
link_lengths = [self.LINK_LENGTH_1 * scale, self.LINK_LENGTH_2 * scale]
pygame.draw.line(
self.surf,
start_pos=(-2.2 * scale + offset, 1 * scale + offset),
end_pos=(2.2 * scale + offset, 1 * scale + offset),
color=(0, 0, 0),
)
for ((x, y), th, llen) in zip(xys, thetas, link_lengths):
x = x + offset
y = y + offset
l, r, t, b = 0, llen, 0.1 * scale, -0.1 * scale
coords = [(l, b), (l, t), (r, t), (r, b)]
transformed_coords = []
for coord in coords:
coord = pygame.math.Vector2(coord).rotate_rad(th)
coord = (coord[0] + x, coord[1] + y)
transformed_coords.append(coord)
gfxdraw.aapolygon(self.surf, transformed_coords, (0, 204, 204))
gfxdraw.filled_polygon(self.surf, transformed_coords, (0, 204, 204))
gfxdraw.aacircle(self.surf, int(x), int(y), int(0.1 * scale), (204, 204, 0))
gfxdraw.filled_circle(
self.surf, int(x), int(y), int(0.1 * scale), (204, 204, 0)
)
self.surf = pygame.transform.flip(self.surf, False, True)
self.screen.blit(self.surf, (0, 0))
if mode == "human":
pygame.display.flip()
if mode == "rgb_array":
return np.transpose(
np.array(pygame.surfarray.pixels3d(self.screen)), axes=(1, 0, 2)
)
else:
return self.isopen
def close(self):
if self.screen is not None:
pygame.quit()
self.isopen = False
def wrap(x, m, M):
"""Wraps ``x`` so m <= x <= M; but unlike ``bound()`` which
truncates, ``wrap()`` wraps x around the coordinate system defined by m,M.\n
For example, m = -180, M = 180 (degrees), x = 360 --> returns 0.
Args:
x: a scalar
m: minimum possible value in range
M: maximum possible value in range
Returns:
x: a scalar, wrapped
"""
diff = M - m
while x > M:
x = x - diff
while x < m:
x = x + diff
return x
def bound(x, m, M=None):
"""Either have m as scalar, so bound(x,m,M) which returns m <= x <= M *OR*
2016-04-27 08:00:58 -07:00
have m as length 2 vector, bound(x,m, <IGNORED>) returns m[0] <= x <= m[1].
Args:
x: scalar
Returns:
x: scalar, bound between min (m) and Max (M)
"""
if M is None:
M = m[1]
m = m[0]
# bound x between min (m) and Max (M)
return min(max(x, m), M)
def rk4(derivs, y0, t):
"""
Integrate 1-D or N-D system of ODEs using 4-th order Runge-Kutta.
This is a toy implementation which may be useful if you find
yourself stranded on a system w/o scipy. Otherwise use
:func:`scipy.integrate`.
Args:
derivs: the derivative of the system, with signature ``dy = derivs(yi)``
y0: initial state vector
t: sample times
Example 1 ::
    ### 2D system
    def derivs(x):
        d1 = x[0] + 2 * x[1]
        d2 = -3 * x[0] + 4 * x[1]
        return (d1, d2)

    dt = 0.0005
    t = np.arange(0.0, 2.0, dt)
    y0 = (1, 2)
    yout = rk4(derivs, y0, t)
If you have access to scipy, you should probably be using the
scipy.integrate tools rather than this function.
This would then require re-adding the time variable to the signature of derivs.
Returns:
yout: Runge-Kutta approximation of the ODE
"""
try:
Ny = len(y0)
except TypeError:
yout = np.zeros((len(t),), np.float_)
else:
yout = np.zeros((len(t), Ny), np.float_)
yout[0] = y0
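# Standard fourth-order Runge-Kutta: at each step, combine four slope
# estimates (k1..k4) to advance the state by dt.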
for i in np.arange(len(t) - 1):
this = t[i]
dt = t[i + 1] - this
dt2 = dt / 2.0
y0 = yout[i]
k1 = np.asarray(derivs(y0))
k2 = np.asarray(derivs(y0 + dt2 * k1))
k3 = np.asarray(derivs(y0 + dt2 * k2))
k4 = np.asarray(derivs(y0 + dt * k3))
yout[i + 1] = y0 + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
# We only care about the final timestep, and we cleave off the action value, which will be zero.
return yout[-1][:4]