Compare commits


6 Commits
colin ... main

Author SHA1 Message Date
Shyamal H Anadkat
502429c7c8
Merge pull request #46 from scottleibrand/patch-1
Only get_embedding once
2023-01-08 16:07:43 -08:00
Shyamal H Anadkat
c6fc1f2b2a
Merge pull request #54 from oaosman84/patch-1
Remove obsolete reference to old embedding engines
2023-01-08 16:06:08 -08:00
gloryjain
155b125482
Merge pull request #58 from openai/glojain-patch-1
Update notebook to point to form
2023-01-06 12:56:08 -08:00
gloryjain
40e3a10417
Update notebook to point to form
Route rate limit increase requests to form instead of our support email
2023-01-06 12:05:16 -08:00
Osman A. Osman
b2d9cd13d4
Remove obsolete reference to old embedding engines
2023-01-03 19:00:13 -08:00
Scott Leibrand
ae45a48de3
Only get_embedding once
Commit fd181ec78f updated what was previously two different get_embedding() calls with different models to be two identical calls. Rather than making the same API call twice in a row, we can now just set the second variable equal to the first one.
2022-12-23 12:30:26 -08:00
5 changed files with 3 additions and 1219 deletions

View File

@@ -84,13 +84,6 @@
     "print(\"Total number of functions extracted:\", len(all_funcs))"
    ]
   },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "For code search models we use code-search-{model}-code to obtain embeddings for code snippets, and code-search-{model}-text to embed natural language queries."
-   ]
-  },
   {
    "cell_type": "code",
    "execution_count": 2,

View File

@@ -95,11 +95,9 @@
     "\n",
     "### Requesting a rate limit increase\n",
     "\n",
-    "If you'd like your organization's rate limit increased, please feel free to reach out to <support@openai.com> with the following information:\n",
+    "If you'd like your organization's rate limit increased, please fill out the following form:\n",
     "\n",
-    "- The model(s) you need increased limits on\n",
-    "- The estimated rate of requests\n",
-    "- The reason for the increase"
+    "- [OpenAI Rate Limit Increase Request form](https://forms.gle/56ZrwXXoxAN1yt6i9)\n"
    ]
   },
   {

View File

@@ -162,7 +162,7 @@
     "\n",
     "# This will take just between 5 and 10 minutes\n",
     "df['ada_similarity'] = df.combined.apply(lambda x: get_embedding(x, engine='text-embedding-ada-002'))\n",
-    "df['ada_search'] = df.combined.apply(lambda x: get_embedding(x, engine='text-embedding-ada-002'))\n",
+    "df['ada_search'] = df['ada_similarity']\n",
     "df.to_csv('data/fine_food_reviews_with_embeddings_1k.csv')"
    ]
   }

File diff suppressed because it is too large

View File

@ -1,20 +0,0 @@
version: '3.4'
services:
weaviate:
image: semitechnologies/weaviate:1.14.0
restart: on-failure:0
ports:
- "8080:8080"
environment:
QUERY_DEFAULTS_LIMIT: 20
AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'true'
PERSISTENCE_DATA_PATH: "./data"
DEFAULT_VECTORIZER_MODULE: text2vec-transformers
ENABLE_MODULES: text2vec-transformers
TRANSFORMERS_INFERENCE_API: http://t2v-transformers:8080
CLUSTER_HOSTNAME: 'node1'
t2v-transformers:
image: semitechnologies/transformers-inference:sentence-transformers-msmarco-distilroberta-base-v2
environment:
ENABLE_CUDA: 0 # set to 1 to enable
# NVIDIA_VISIBLE_DEVICES: all # enable if running with CUDA
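The deleted docker-compose.yml defined a local Weaviate node with anonymous access on port 8080 plus a transformer inference container for vectorization. For context only, a hypothetical connectivity check against such a stack might look like this, assuming the v3-style weaviate-client Python package is installed and the compose stack is running locally:

```python
import weaviate

# Anonymous access is enabled in the compose file above, so no auth config is needed.
client = weaviate.Client("http://localhost:8080")

# Returns True once the node reports itself as ready.
print(client.is_ready())
```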