This gist assumes that:
- you are already using OpenAI services via the OpenAI API
- you want to migrate to an Azure-based workflow
- somebody else has already set up the Azure endpoint for you
A typical call via the OpenAI client looks like this:

```python
from openai import OpenAI

client = OpenAI(
    # Defaults to os.environ.get("OPENAI_API_KEY")
)

chat_completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello world"}],
)
```

You now just need to change the instantiation of the client to the following:
```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=<AZURE_ENDPOINT>,
    api_key=<AZURE_KEY>,
    api_version=<API_VERSION>,
)
```

The `AzureOpenAI` client class can then be used in the same way as the regular `OpenAI` client class.
**Tip**

Keep these keys in environment variables or config files rather than hard-coding them in your scripts. This avoids accidentally leaking the credentials (which would allow others to abuse the resource).
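If you go the environment-variable route, reading the credentials is a one-liner per value. A minimal sketch (the variable names here are hypothetical; use whatever you export in your shell profile):

```python
import os

# Hypothetical variable names -- match whatever you export, e.g. in ~/.bashrc.
api_key = os.environ.get("AZURE_OPENAI_KEY", "")
endpoint = os.environ.get("AZURE_OPENAI_ENDPOINT", "")

if not api_key:
    print("AZURE_OPENAI_KEY is not set; export it before running.")
```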
My recommended workflow:

- Create a folder `.cfg` in your home directory (`mkdir ~/.cfg`).
- Create a file `openai.cfg` in `~/.cfg` (`touch ~/.cfg/openai.cfg`).
- Copy the credentials I send to you into `openai.cfg` in the following format:

```ini
[AZURE]
key=...
endpoint=...
```

where `key` will be a string of letters and numbers and `endpoint` will be a URL.
You can now use the `configparser` module in Python to read these values into your script (see attached script below).
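A minimal sketch of such a script, assuming the `~/.cfg/openai.cfg` layout above (the helper name `load_azure_credentials` is mine, not part of any library):

```python
import configparser
from pathlib import Path


def load_azure_credentials(path: str = "~/.cfg/openai.cfg") -> tuple[str, str]:
    """Read the Azure key and endpoint from an INI-style config file."""
    config = configparser.ConfigParser()
    config.read(Path(path).expanduser())
    section = config["AZURE"]
    return section["key"], section["endpoint"]


# Usage (assumes the config file from the steps above exists):
# azure_key, azure_endpoint = load_azure_credentials()
# client = AzureOpenAI(azure_endpoint=azure_endpoint, api_key=azure_key, api_version="...")
```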
Note: each Azure endpoint serves only a limited set of OpenAI's services. Depending on which service you want, you may need to choose an endpoint in a different region. For example, `gpt-4-vision-preview` is not available on the `eastus` endpoint. Please refer to the table here: https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models