add contributors md and move dev docs out

Jeff Wu
2019-03-06 12:15:51 -08:00
parent 953530fc24
commit 79a246a58e
3 changed files with 106 additions and 82 deletions

17
CONTRIBUTORS.md Normal file

@ -0,0 +1,17 @@
# Contributors (alphabetically)
* **[madisonmay](https://github.com/madisonmay)**
Added Dockerfiles
* **[Margaret Mitchell et al.](https://arxiv.org/abs/1810.03993)**
Our [usage](./readme#usage) writeup was loosely inspired by the paper
[Model Cards for Model Reporting](https://arxiv.org/abs/1810.03993)
and related conversations with some of the authors.
* **[webproduktion01](https://github.com/webproduktion01)**
Ported download script to python.
**[Full code contributors list](https://github.com/openai/gpt-2/contributors).**

85
DEVELOPERS.md Normal file

@ -0,0 +1,85 @@
# Installation
Git clone this repository, and `cd` into the directory for the remaining commands:
```
git clone https://github.com/openai/gpt-2.git && cd gpt-2
```
Then, follow instructions for either native or Docker installation.
## Native Installation
All steps can optionally be done in a virtual environment using tools such as `virtualenv` or `conda`.
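For example, a minimal `venv`-based setup might look like this (the environment name is arbitrary):

```shell
# Create and activate an isolated environment for the project
python3 -m venv gpt-2-env
source gpt-2-env/bin/activate
# Later, leave the environment with: deactivate
```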
Install TensorFlow 1.12 (with GPU support, if you have a GPU and want everything to run faster):
```
pip3 install tensorflow==1.12.0
```
or
```
pip3 install tensorflow-gpu==1.12.0
```
Install the other required Python packages:
```
pip3 install -r requirements.txt
```
Download the model data:
```
python3 download_model.py 117M
```
## Docker Installation
Build the Dockerfile and tag the created image as `gpt-2`:
```
docker build --tag gpt-2 -f Dockerfile.gpu . # or Dockerfile.cpu
```
Start an interactive bash session from the `gpt-2` docker image.
You can opt to use the `--runtime=nvidia` flag if you have access to an NVIDIA GPU
and a valid install of [nvidia-docker 2.0](https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0)).
```
docker run --runtime=nvidia -it gpt-2 bash
```
# Running
| WARNING: Samples are unfiltered and may contain offensive content. |
| --- |
Some of the examples below may include Unicode text characters. Set the environment variable:
```
export PYTHONIOENCODING=UTF-8
```
to override the standard stream settings in UTF-8 mode.
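As a quick sanity check (a standalone snippet, not part of the repository), you can confirm that the variable actually changes a child interpreter's stream encoding:

```python
import os
import subprocess
import sys

# Run a child interpreter with PYTHONIOENCODING=UTF-8 set and report
# which encoding its stdout ends up using.
env = dict(os.environ, PYTHONIOENCODING="UTF-8")
encoding = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.stdout.encoding)"],
    env=env, capture_output=True, text=True, check=True,
).stdout.strip()
print(encoding)
```

The printed encoding should be UTF-8 regardless of the parent process's locale.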
## Unconditional sample generation
To generate unconditional samples from the small model:
```
python3 src/generate_unconditional_samples.py | tee /tmp/samples
```
There are various flags for controlling the samples:
```
python3 src/generate_unconditional_samples.py --top_k 40 --temperature 0.7 | tee /tmp/samples
```
To check flag descriptions, use:
```
python3 src/generate_unconditional_samples.py -- --help
```
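What `--top_k` and `--temperature` do can be illustrated with a short sketch. This is only an illustration of the sampling idea, not the repository's actual implementation; the function name and signature are invented:

```python
import math
import random

def sample_token(logits, top_k=40, temperature=0.7, rng=None):
    """Pick a token index from raw logits using temperature + top-k.

    temperature < 1 sharpens the distribution (more conservative text);
    top_k discards all but the k most likely tokens before sampling.
    """
    rng = rng or random.Random()
    scaled = [l / temperature for l in logits]
    # Indices of the top_k largest scaled logits
    top = sorted(range(len(scaled)), key=lambda i: scaled[i], reverse=True)[:top_k]
    # Unnormalized softmax weights over the surviving logits
    # (shift by the max for numerical stability)
    m = max(scaled[i] for i in top)
    weights = [math.exp(scaled[i] - m) for i in top]
    # Draw one surviving index proportionally to its weight
    return rng.choices(top, weights=weights, k=1)[0]
```

With `top_k=1` the sampler is greedy (it always returns the most likely token); higher temperatures flatten the distribution and make samples more varied.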
## Conditional sample generation
To give the model custom prompts, you can use:
```
python3 src/interactive_conditional_samples.py --top_k 40
```
To check flag descriptions, use:
```
python3 src/interactive_conditional_samples.py -- --help
```

README.md

@ -22,91 +22,13 @@ Please [let us know](mailto:languagequestions@openai.com) if you're doing inte
- Potential malicious use cases and defenses against them (e.g. the detectability of synthetic text)
- The extent of problematic content (e.g. bias) being baked into the models and effective mitigations
## Development
See [DEVELOPERS.md](./DEVELOPERS.md)
## Contributors
See [CONTRIBUTORS.md](./CONTRIBUTORS.md)
## GPT-2 samples