mirror of
https://github.com/Farama-Foundation/Gymnasium.git
synced 2025-07-31 05:44:31 +00:00
Update third party environments (#395)
@@ -2,20 +2,84 @@
```{eval-rst}
:tocdepth: 2
```
# Third-party Environments
# Third-Party Environments
There are a number of reinforcement learning environments built by authors outside of Gymnasium. The Farama Foundation maintains a number of projects for gridworlds, procedurally generated worlds, video games, and robotics; these can be found at [projects](https://farama.org/projects).
The Farama Foundation maintains a number of other [projects](https://farama.org/projects), most of which use Gymnasium. Topics include:
multi-agent RL ([PettingZoo](https://pettingzoo.farama.org/)),
offline-RL ([Minari](https://minari.farama.org/)),
gridworlds ([Minigrid](https://minigrid.farama.org/)),
robotics ([Gymnasium-Robotics](https://robotics.farama.org/)),
multi-objective RL ([MO-Gymnasium](https://mo-gymnasium.farama.org/)),
many-agent RL ([MAgent2](https://magent2.farama.org/)),
3D navigation ([Miniworld](https://miniworld.farama.org/)), and many more.
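All of the environments listed on this page expose Gymnasium's `reset`/`step` interaction loop. As a quick orientation, here is a minimal sketch of that loop using a toy stand-in class (not any real package) that mimics the five-tuple `step` return:

```python
import random

# Toy stand-in environment following the Gymnasium-style API:
#   reset() -> (observation, info)
#   step(action) -> (obs, reward, terminated, truncated, info)
# The third-party environments on this page follow the same protocol.
class ToyEnv:
    def __init__(self, horizon=10):
        self.horizon = horizon
        self.t = 0

    def reset(self, seed=None):
        if seed is not None:
            random.seed(seed)
        self.t = 0
        return 0.0, {}  # (observation, info)

    def step(self, action):
        self.t += 1
        obs = float(self.t)
        reward = 1.0 if action == 1 else 0.0
        terminated = False                   # task success/failure (never here)
        truncated = self.t >= self.horizon   # time limit reached
        return obs, reward, terminated, truncated, {}

def run_episode(env, policy):
    obs, info = env.reset(seed=0)
    total = 0.0
    while True:
        obs, reward, terminated, truncated, info = env.step(policy(obs))
        total += reward
        if terminated or truncated:
            return total

total = run_episode(ToyEnv(), policy=lambda obs: 1)
```

With a real environment you would replace `ToyEnv()` with the package's `gymnasium.make(...)` call; the loop itself stays the same.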
## Video Game environments
*This page contains environments which are not maintained by Farama Foundation and, as such, cannot be guaranteed to function as intended.*
*If you'd like to contribute an environment, please reach out on [Discord](https://discord.gg/nHg2JRN489).*
### [highway-env: Autonomous driving and tactical decision-making tasks](https://github.com/eleurent/highway-env)
An environment for behavioral planning in autonomous driving, with an emphasis on high-level perception and decision rather than low-level sensing and control.
### [sumo-rl: Reinforcement Learning using SUMO traffic simulator](https://github.com/LucasAlegre/sumo-rl)
Gymnasium wrapper for various environments in the SUMO traffic simulator. Supports both single-agent and multi-agent settings (using [PettingZoo](https://pettingzoo.farama.org/)).
### [panda-gym: Robotics environments using the PyBullet physics engine](https://github.com/qgallouedec/panda-gym/)
PyBullet based simulations of a robotic arm moving objects.
### [tmrl: TrackMania 2020 through RL](https://github.com/trackmania-rl/tmrl/)
tmrl is a distributed framework for training Deep Reinforcement Learning AIs in real-time applications. It is demonstrated on the TrackMania 2020 video game.
### [Safety-Gymnasium: Ensuring safety in real-world RL scenarios](https://github.com/PKU-MARL/safety-gymnasium)
Highly scalable and customizable Safe Reinforcement Learning library.
### [stable-retro: Classic retro games, a maintained version of OpenAI Retro](https://github.com/MatPoliquin/stable-retro)
Supported fork of gym-retro with additional games, states, scenarios, etc. Open to PRs of additional games, features, and platforms, since gym-retro is no longer maintained.
Supported fork of gym-retro: turn classic video games into Gymnasium environments.
### [flappy-bird-gymnasium: A Flappy Bird environment for Gymnasium](https://github.com/markub3327/flappy-bird-gymnasium)
A simple environment for single-agent reinforcement learning algorithms on a clone of [Flappy Bird](https://en.wikipedia.org/wiki/Flappy_Bird), the hugely popular arcade-style mobile game. Both state and pixel observation environments are available.
### [matrix-mdp: Easily create discrete MDPs](https://github.com/Paul-543NA/matrix-mdp-gym)
An environment to easily implement discrete MDPs as gym environments. Turn a set of matrices (`P_0(s)`, `P(s'| s, a)` and `R(s', s, a)`) into a gym environment that represents the discrete MDP ruled by these dynamics.
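As an illustration of the dynamics such an environment encodes, the snippet below samples transitions directly from made-up `P_0`, `P`, and `R` arrays with plain NumPy. It mirrors what the wrapper does internally rather than matrix-mdp's actual API; the array names and index order here are illustrative:

```python
import numpy as np

# Sample a discrete MDP directly from its defining matrices
# (illustrative stand-ins for P_0(s), P(s'|s,a), and R(s',s,a) above).
rng = np.random.default_rng(0)

n_states, n_actions = 3, 2
P0 = np.array([1.0, 0.0, 0.0])                                    # P_0(s): initial-state distribution
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']: transition kernel
R = rng.standard_normal((n_states, n_states, n_actions))          # R[s', s, a]: reward function

s = rng.choice(n_states, p=P0)             # reset: draw the initial state
a = int(rng.integers(n_actions))           # any policy's action
s_next = rng.choice(n_states, p=P[s, a])   # step: draw the next state
reward = R[s_next, s, a]                   # reward for the (s', s, a) transition
```

Each `dirichlet` draw guarantees that every `P[s, a]` row is a valid probability distribution over next states.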
# Third-Party Environments using Gym
There are a large number of third-party environments using various versions of [Gym](https://github.com/openai/gym).
Many of these can be adapted to work with Gymnasium (see [Compatibility with Gym](https://gymnasium.farama.org/content/gym_compatibility/)), but they are not guaranteed to be fully functional.
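The most common incompatibility is the `step` signature: classic Gym returns a four-tuple `(obs, reward, done, info)`, while Gymnasium splits `done` into `terminated` and `truncated`. A rough sketch of that part of the conversion (the real wrappers described in the linked guide handle seeding, render modes, and more):

```python
# Old Gym step() returned (obs, reward, done, info); Gymnasium splits `done`
# into `terminated` (the MDP reached a terminal state) and `truncated`
# (an external limit such as a time limit cut the episode short).
# Toy shim for illustration only; use the official compatibility tools in practice.
def convert_step(old_step_result, time_limit_reached=False):
    obs, reward, done, info = old_step_result
    terminated = done and not time_limit_reached
    truncated = done and time_limit_reached
    return obs, reward, terminated, truncated, info
```

The distinction matters for value estimation: bootstrapping is valid after truncation but not after termination.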
## Video Game environments
### [gym-derk: GPU accelerated MOBA environment](https://gym.derkgame.com/)
This is a 3v3 MOBA environment where you train creatures to fight each other. It runs entirely on the GPU, so you can easily have hundreds of instances running in parallel. There are around 15 items for the creatures, 60 "senses", 5 actions, and roughly 23 tweakable rewards. It's also possible to benchmark an agent against other agents online. It's free for personal training use and otherwise costs money; see licensing details on the website.
@@ -46,9 +110,6 @@ A simple environment using [PyBullet](https://github.com/bulletphysics/bullet3)
Mars Explorer is a Gym compatible environment designed and developed as an initial endeavor to bridge the gap between powerful Deep Reinforcement Learning methodologies and the problem of exploration/coverage of an unknown terrain.
### [panda-gym: Robotics environments using the PyBullet physics engine](https://github.com/qgallouedec/panda-gym/)
PyBullet based simulations of a robotic arm moving objects.
### [robo-gym: Real-world and simulation robotics](https://github.com/jr-robotics/robo-gym)
@@ -80,10 +141,6 @@ Reinforcement Learning Environments for Omniverse Isaac Gym
## Autonomous Driving environments
### [sumo-rl](https://github.com/LucasAlegre/sumo-rl)
Gym wrapper for various environments in the SUMO traffic simulator.
### [gym-duckietown](https://github.com/duckietown/gym-duckietown)
A lane-following simulator built for the [Duckietown](http://duckietown.org/) project (small-scale self-driving car course).
@@ -92,18 +149,10 @@ A lane-following simulator built for the [Duckietown](http://duckietown.org/) pr
An environment for simulating a wide variety of electric drives taking into account different types of electric motors and converters. Control schemes can be continuous, yielding a voltage duty cycle, or discrete, determining converter switching states directly.
### [highway-env](https://github.com/eleurent/highway-env)
An environment for behavioral planning in autonomous driving, with an emphasis on high-level perception and decision rather than low-level sensing and control. The difficulty of the task lies in understanding the social interactions with other drivers, whose behaviors are uncertain. Several scenes are proposed, such as highway, merge, intersection and roundabout.
### [CommonRoad-RL](https://commonroad.in.tum.de/tools/commonroad-rl)
A Gym for solving motion planning problems for various traffic scenarios compatible with [CommonRoad benchmarks](https://commonroad.in.tum.de/scenarios), which provides configurable rewards, action spaces, and observation spaces.
### [tmrl: TrackMania 2020 through RL](https://github.com/trackmania-rl/tmrl/)
tmrl is a distributed framework for training Deep Reinforcement Learning AIs in real-time applications. It is demonstrated on the TrackMania 2020 video game.
### [racing_dreamer](https://github.com/CPS-TUWien/racing_dreamer/)
Latent Imagination Facilitates Zero-Shot Transfer in Autonomous Racing
@@ -126,14 +175,6 @@ Reinforcement learning environments for compiler optimization tasks, such as LLV
Configurable reinforcement learning environments for testing generalization, e.g. CartPole with variable pole lengths or Brax robots with different ground frictions.
### [matrix-mdp: Easily create discrete MDPs](https://github.com/Paul-543NA/matrix-mdp-gym)
An environment to easily implement discrete MDPs as gym environments. Turn a set of matrices (`P_0(s)`, `P(s'| s, a)` and `R(s', s, a)`) into a gym environment that represents the discrete MDP ruled by these dynamics.
### [mo-gym: Multi-objective Reinforcement Learning environments](https://github.com/LucasAlegre/mo-gym)
Multi-objective RL (MORL) gym environments, where the reward is a NumPy array of different (possibly conflicting) objectives.
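A MORL agent receives a reward vector each step and must trade its entries off against each other; a common baseline is linear scalarization with a user-chosen weight vector. A small illustration with made-up objective values and weights:

```python
import numpy as np

# In a multi-objective environment, step() yields a reward *vector*,
# one entry per (possibly conflicting) objective.
vector_reward = np.array([1.0, -0.5])  # e.g. [progress, energy cost] (made-up values)

# Linear scalarization: collapse the vector with a preference weighting.
weights = np.array([0.7, 0.3])
scalar_reward = float(weights @ vector_reward)  # 0.7*1.0 + 0.3*(-0.5) = 0.55
```

Different weight choices recover different points on the Pareto front, which is why the environments keep the objectives separate rather than fixing one scalarization.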
### [gym-cellular-automata: Cellular Automata environments](https://github.com/elbecerrasoto/gym-cellular-automata)
Environments where the agent interacts with _Cellular Automata_ by changing its cell states.