Introduction to the OpenAI API
Applications built on the OpenAI API
Software applications, web browser experiences, and even whole products are being built on top of the OpenAI API. In this exercise, you’ll be able to explore an application built on top of the OpenAI API: DataCamp’s own version of ChatGPT!
The text you type into the interface will be sent as a request to the OpenAI API, and the response will be unpacked and delivered straight back to you.
Using the ChatGPT interface, answer the following question: In what year was OpenAI founded?
- 2015
Your first API request!
Throughout the course, you’ll write Python code to interact with the OpenAI API. As a first step, you’ll need to create your own API key. API keys used in this course’s exercises will not be stored in any way.
To create a key, you’ll first need to create an OpenAI account by visiting their signup page. Next, navigate to the API keys page to create your secret key.
import os
from dotenv import load_dotenv

# Load environment variables from the .env file
load_dotenv()  # returns True if the file was found

# Access the variables as you normally would with os.getenv()
api_key = os.getenv('OPEN_API_KEY2')
The button to create a new secret key.
OpenAI sometimes provides free credits for the API, but this can differ depending on geography. You may also need to add debit/credit card details. You’ll need less than $1 credit to complete this course.
Warning: if you send many requests or use lots of tokens in a short period, you may see an openai.error.RateLimitError. If you see this error, please wait a minute for your quota to reset and you should be able to begin sending more requests. Please see OpenAI’s rate limit error support article for more information.
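A common way to handle rate limit errors in client code is to retry with exponential backoff. The sketch below is generic: the `retry_with_backoff` helper, the `FakeRateLimitError` stand-in class, and the delay values are all illustrative assumptions, not part of the exercise; in practice you would catch the `RateLimitError` raised by the openai library.

```python
import time

def retry_with_backoff(func, retriable_exc, max_retries=3, base_delay=1.0):
    """Call func(), retrying on retriable_exc with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return func()
        except retriable_exc:
            if attempt == max_retries - 1:
                raise
            # Wait base_delay, 2*base_delay, 4*base_delay, ... between tries
            time.sleep(base_delay * (2 ** attempt))

# Stand-in error class so the sketch runs without the openai package
class FakeRateLimitError(Exception):
    pass

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise FakeRateLimitError()
    return "ok"

print(retry_with_backoff(flaky, FakeRateLimitError, base_delay=0.01))  # ok
```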
# Import the OpenAI client
from openai import OpenAI

client = OpenAI(api_key=api_key)

# Create a request to the Completions endpoint
response = client.completions.create(
    # Specify the correct model
    model="gpt-3.5-turbo-instruct",
    prompt="Who developed ChatGPT?"
)

print(response)
Completion(id='cmpl-8lAB6YHgnJYEyn8iTMZdPLLvUUJEZ', choices=[CompletionChoice(finish_reason='stop', index=0, logprobs=None, text='was developed by the company OpenAI.')], created=1706251232, model='gpt-3.5-turbo-instruct', object='text_completion', system_fingerprint=None, usage=CompletionUsage(completion_tokens=12, prompt_tokens=6, total_tokens=18))
Digging into the response
One of the key skills required to work with APIs is manipulating the response to extract the desired information. In this exercise, you’ll push your Python dictionary and list manipulation skills to the max to extract information from the API response.
You’ve been provided with response, which is a response from the OpenAI API when provided with the prompt, Who developed ChatGPT?
This response object has been printed for you so you can see and understand its structure. If you’re struggling to picture the structure, view the dictionary form of the response with .model_dump().
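To picture what `.model_dump()` gives you, here is a stand-in dictionary in the same shape as the dumped response (the field values mirror the printed response above, but the dict itself is an illustration, not real API output). With a dict you navigate by keys instead of attributes:

```python
# Stand-in for response.model_dump(): nested dicts and lists
response_dict = {
    "model": "gpt-3.5-turbo-instruct",
    "usage": {"completion_tokens": 12, "prompt_tokens": 6, "total_tokens": 18},
    "choices": [{"index": 0, "finish_reason": "stop",
                 "text": "ChatGPT was developed by the company OpenAI."}],
}

# Keys replace the attribute access used elsewhere in this section
print(response_dict["model"])                     # gpt-3.5-turbo-instruct
print(response_dict["usage"]["total_tokens"])     # 18
print(response_dict["choices"][0]["text"])
```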
# Extract the model used from response using attributes
print(response.model)
gpt-3.5-turbo-instruct
# Extract the total tokens used from response using attributes
print(response.usage.total_tokens)
18
# Extract the text answer to the prompt from response
print(response.choices[0].text)
ChatGPT was developed by the company OpenAI.
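Token counts are also what you are billed on, so the usage attributes let you estimate cost. The per-1K-token prices below are placeholders for illustration only; check OpenAI's pricing page for the actual, current rates.

```python
# Placeholder prices in dollars per 1,000 tokens (illustrative only;
# consult OpenAI's pricing page for real rates)
PRICE_PER_1K_INPUT = 0.0015
PRICE_PER_1K_OUTPUT = 0.002

def estimate_cost(prompt_tokens, completion_tokens):
    """Estimate request cost from token counts at the rates above."""
    return (prompt_tokens / 1000 * PRICE_PER_1K_INPUT
            + completion_tokens / 1000 * PRICE_PER_1K_OUTPUT)

# Using the usage figures from the response above (6 prompt, 12 completion)
print(f"${estimate_cost(6, 12):.6f}")  # → $0.000033
```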
Solving problems with AI solutions
An Online Scientific Journal called Terra Scientia wants to use AI to make their scientific papers more accessible to a wider audience. To do this, they want to develop a feature where users can double-click on words they don’t understand, and an AI model will explain what it means in the context of the article.
To accomplish this, the developers at Terra Scientia want to build the feature on top of the OpenAI API.
Which OpenAI API endpoint(s) could they use to build this feature?
- Completions
- Chat
Structuring organizations
You’ve learned that you can set up organizations to manage API usage and billing. Users can be part of multiple organizations and attribute API requests to a specific organization. Depending on how many features the business has built on the OpenAI API, it’s best practice to give each business unit or product feature its own organization.
What are the benefits of having separate organizations for each business unit or product feature?
- Reducing risk of hitting rate limits
- Improved management of usage and billing
- Removes a single point of failure
OpenAI’s Text and Chat Capabilities
Find and replace
Text completion models can be used for much more than answering questions. In this exercise, you’ll explore the model’s ability to transform a text prompt.
Find-and-replace tools have been around for decades, but they are often limited to identifying and replacing exact words or phrases. You’ve been provided with a block of text discussing cars, and you’ll use a completion model to update the text to discuss planes instead, updating the text appropriately.
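To see why literal find-and-replace falls short here, consider what Python's `str.replace` does to a passage like this one (this snippet is just for comparison and is not part of the exercise): it also rewrites substrings such as "carry" and "cargo", and leaves all the car-specific facts untouched.

```python
text = ("A car is a vehicle that is typically powered by an internal "
        "combustion engine. It is designed to carry passengers and/or "
        "cargo on roads or highways.")

# Literal replacement: handles "car" -> "plane", but nothing else
naive = text.replace("car", "plane")
print(naive)
# "carry" becomes "planery" and "cargo" becomes "planego", and the
# plane is still described as having a combustion engine
```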
Warning: if you send many requests or use lots of tokens in a short period, you may hit your rate limit and see an openai.error.RateLimitError. If you see this error, please wait a minute for your quota to reset and you should be able to begin sending more requests. Please see OpenAI’s rate limit error support article for more information.
="""Replace car with plane and adjust phrase:
promptA car is a vehicle that is typically powered by an internal combustion engine or an electric motor. It has four wheels, and is designed to carry passengers and/or cargo on roads or highways. Cars have become a ubiquitous part of modern society, and are used for a wide variety of purposes, such as commuting, travel, and transportation of goods. Cars are often associated with freedom, independence, and mobility."""
# Create a request to the Completions endpoint
= client.completions.create(
response ="gpt-3.5-turbo-instruct",
model=prompt,
prompt= 100
max_tokens
)
# Extract and print the response text
print(response.choices[0].text)
A plane is an aircraft that is typically powered by jet engines or propellers. It has wings, and is designed to carry passengers and/or cargo through the air. Planes have become a vital part of modern society, and are used for a wide range of purposes, such as air travel, transportation of goods, and military operations. Planes are often associated with speed, efficiency, and global connectivity.
Text summarization
One common use case for OpenAI’s models is summarizing text. This has many applications in business settings, including summarizing reports into concise one-pagers or a handful of bullet points, or extracting the next steps and timelines for different stakeholders.
In this exercise, you’ll summarize a passage of text on financial investment into two concise bullet points using a text completion model.
="""Summarize the following text into two concise bullet points:
promptInvestment refers to the act of committing money or capital to an enterprise with the expectation of obtaining an added income or profit in return. There are a variety of investment options available, including stocks, bonds, mutual funds, real estate, precious metals, and currencies. Making an investment decision requires careful analysis, assessment of risk, and evaluation of potential rewards. Good investments have the ability to produce high returns over the long term while minimizing risk. Diversification of investment portfolios reduces risk exposure. Investment can be a valuable tool for building wealth, generating income, and achieving financial security. It is important to be diligent and informed when investing to avoid losses."""
# Create a request to the Completions endpoint
= client.completions.create(
response ="gpt-3.5-turbo-instruct",
model=prompt,
prompt= 400,
max_tokens = 0
temperature
)
print(response.choices[0].text)
- Investment involves committing money or capital to an enterprise in order to obtain added income or profit.
- Careful analysis, risk assessment, and diversification are important for making successful investments that can build wealth and achieve financial security.
Content generation
AI is playing a much greater role in content generation, from creating marketing content such as blog post titles to creating outreach email templates for sales teams.
In this exercise, you’ll harness AI through the Completions endpoint to generate a catchy slogan for a new restaurant. Feel free to test out different prompts, such as varying the type of cuisine (e.g., Italian, Chinese) or the type of restaurant (e.g., fine-dining, fast-food), to see how the response changes.
# Create a request to the Completions endpoint
prompt = """create a catchy slogan for a new restaurant:
The restaurant deals mainly in italian cuisine"""

response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt=prompt,
    max_tokens=100
)

print(response.choices[0].text)
“Indulge in Italian flair at our restaurant’s savory affair!”
Classifying text sentiment
As well as answering questions, transforming text, and generating new text, Completions models can also be used for classification tasks, such as categorization and sentiment classification. This sort of task requires not only knowledge of the words but also a deeper understanding of their meaning.
In this exercise, you’ll explore using Completions models for sentiment classification using reviews from an online shoe store called Toe-Tally Comfortable:
- Unbelievably good!
- Shoes fell apart on the second use.
- The shoes look nice, but they aren’t very comfortable.
- Can’t wait to show them off!
# Create a request to the Completions endpoint
prompt = """classify the sentiment of the following statements as either negative, positive, or neutral list in bullet points:
Unbelievably good!
Shoes fell apart on the second use.
The shoes look nice, but they aren't very comfortable.
Can't wait to show them off!"""

response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt=prompt,
    max_tokens=100
)

print(response.choices[0].text)
- Positive
- Negative
- Neutral
- Positive
Categorizing companies
In this exercise, you’ll use a Completions model to categorize different companies. At first, you won’t specify the categories to see how the model categorizes them. Then, you’ll specify the categories in the prompt to ensure they are categorized in a desirable and predictable way.
# Create a request to the Completions endpoint
prompt = """Categorize the following companies into, Tech, Energy, Luxury Goods, or Investment list in bullet points:
Apple,
Microsoft,
Saudi Aramco,
Alphabet,
Amazon,
Berkshire Hathaway,
NVIDIA,
Meta,
Tesla,
LVMH"""

response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt=prompt,
    max_tokens=100,
    temperature=0.5
)

print(response.choices[0].text)
- Tech: Apple, Microsoft, Alphabet, Amazon, NVIDIA, Meta
- Energy: Saudi Aramco
- Luxury Goods: LVMH
- Investment: Berkshire Hathaway
The Chat Completions endpoint
The models available via the Chat Completions endpoint can not only perform similar single-turn tasks as models from the Completions endpoint, but can also be used to have multi-turn conversations.
To enable multi-turn conversations, the endpoint supports three different roles:
- System: controls the assistant’s behavior
- User: instructs the assistant
- Assistant: responds to the user’s instruction

In this exercise, you’ll make your first request to the Chat Completions endpoint to answer the following question:
What is the difference between a for loop and a while loop?
# Create a request to the Chat Completions endpoint
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    max_tokens=150,
    messages=[
        {"role": "system",
         "content": "You are a helpful data science tutor. Provide code examples for both while and for loops using this syntax ie code between ` code ` "},
        {"role": "user",
         "content": "What is the difference between a for loop and a while loop?"}
    ]
)

# Extract and print the assistant's text response
print(response.choices[0].message.content)
A for loop and a while loop are both control structures used to execute a specific piece of code repeatedly. However, they differ in their syntax and the conditions under which they execute.
A for loop is used when you know the number of times you want to iterate through a block of code. It consists of three parts: initialization, condition, and iteration.
On the other hand, a while loop is used when you want to repeat a block of code until a certain condition is met. It only requires a condition to evaluate. The loop will continue executing as long as the condition is True.
Here are examples of both for and while loops:
For loop example:
for i in range(5):
    print(i)
Code explanation
One of the most popular use cases for using OpenAI models is for explaining complex content, such as technical jargon and code. This is a task that data practitioners, software engineers, and many others must tackle in their day-to-day as they review and utilize code written by others.
In this exercise, you’ll use the OpenAI API to explain a block of Python code to understand what it is doing.
= """Explain what this Python code does in one sentence:
instruction import numpy as np
heights_dict = {"Mark": 1.76, "Steve": 1.88, "Adnan": 1.73}
heights = heights_dict.values()
print(np.mean(heights))
"""
# Create a request to the Chat Completions endpoint
= client.chat.completions.create(
response ="gpt-3.5-turbo",
model=100,
max_tokens=[
messages"role": "system",
{"content": "You are a helpful data science tutor."},
"role": "user",
{"content": instruction}
]
)
print(response.choices[0].message.content)
This code calculates and prints the mean height from the values in the heights_dict dictionary using the numpy library.
In-context learning
For more complex use cases, the model may lack the understanding or context of the problem needed to provide a suitable response from a prompt alone. In these cases, you need to provide examples to the model for it to learn from, so-called in-context learning.
In this exercise, you’ll improve on a Python programming tutor built on the OpenAI API by providing an example that the model can learn from.
Here is an example of a user and assistant message you can use, but feel free to try out your own:
- User → Explain what the min() function does.
- Assistant → The min() function returns the smallest item from an iterable.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    # Add a user and assistant message for in-context learning
    messages=[
        {"role": "system", "content": "You are a helpful Python programming tutor."},
        {"role": "user", "content": "Explain what sum function in python does"},
        {"role": "assistant",
         "content": """the sum() function is a built-in function used to calculate the sum of all the elements in an iterable, like a list, tuple, or set. The basic syntax of the sum() function is as follows:
`sum(iterable, start)`
example:
`numbers = [1, 2, 3, 4, 5]
result = sum(numbers)  # This will add 1 + 2 + 3 + 4 + 5
print(result)  # Output will be 15`
"""},
        {"role": "user", "content": "Explain what the type() function does."}
    ]
)

print(response.choices[0].message.content)
The type() function is a built-in function in Python that is used to determine the type of an object. It takes an object as its parameter and returns the type of that object.
The basic syntax of the type() function is as follows: type(object)
Here, “object” can be any value or variable in Python. For example, it can be a string, integer, list, dictionary, function, class, etc.
Examples:
number = 10
print(type(number)) # Output: <class 'int'>
name = "John"
print(type(name)) # Output: <class 'str'>
my_list = [1, 2, 3]
print(type(my_list)) # Output: <class 'list'>
my_dictionary = {"apple": 1, "banana": 2}
print(type(my_dictionary)) # Output: <class 'dict'>
The type() function is often used to perform type checking or type validation in Python code.
Creating an AI chatbot
An online learning platform called Easy as Pi that specializes in teaching math skills has contracted you to help develop an AI tutor. You immediately see that you can build this feature on top of the OpenAI API, and start to design a simple proof-of-concept (POC) for the major stakeholders at the company. This POC will demonstrate the core functionality required to build the final feature and the power of OpenAI’s GPT models.
Example system and user messages have been provided for you, but feel free to play around with these to change the model’s behavior or design a completely different chatbot!
= [{"role": "system", "content": "You are a helpful math tutor."}]
messages = ["Explain what pi is.", "Summarize this in two bullet points."]
user_msgs
for q in user_msgs:
print("User: ", q)
# Create a dictionary for the user message from q and append to messages
= {"role": "user", "content": q}
user_dict
messages.append(user_dict)
# Create the API request
= client.chat.completions.create(
response ="gpt-3.5-turbo",
model= messages,
messages =100)
max_tokens
# Convert the assistant's message to a dict and append to messages
= {"role": "assistant", "content": response.choices[0].message.content}
assistant_dict
messages.append(assistant_dict)
print("Assistant: ", response.choices[0].message.content, "\n")
User: Explain what pi is.
Assistant: Pi (π) is a mathematical constant that represents the ratio of the circumference of a circle to its diameter. It is an irrational number, which means it cannot be expressed as a finite decimal or fraction. The value of pi is approximately 3.14159, but it goes on infinitely without repeating.
Pi is a fundamental constant in mathematics and is used in a wide range of mathematical calculations, especially those involving circles, spheres, and trigonometry. It is commonly represented by the Greek letter ”
User: Summarize this in two bullet points.
Assistant:
- Pi (π) is a mathematical constant representing the ratio of a circle’s circumference to its diameter.
- It is an irrational number approximately equal to 3.14159, with infinite decimal places.
Going Beyond Text Completions
Why use text moderation models?
Text moderation is a vital component of most social media platforms, internet chatrooms, and many other user-facing systems. It serves the purpose of preventing the distribution and promotion of inappropriate content, such as hate speech.
In this exercise, you’ll compare OpenAI’s text moderation model to traditional methods of moderation: manual moderation and keyword pattern matching.
OpenAI’s Moderation model
- Designed to moderate the prompts and responses to and from OpenAI models
- Outputs confidence of text violation
- Uses all of the words in the text, not just individual keywords, to inform its decision
- Evaluates content based on specific violation categories
Keyword Pattern Matching
- Doesn’t understand context
Manual Moderation
- Expensive
- Requires 24/7 support
- Inconsistent classification
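The context problem with keyword pattern matching is easy to demonstrate: a naive matcher flags any text containing a blocked word, regardless of meaning. In this sketch the blocklist is a made-up example, and the matcher wrongly flags a harmless sentence about a book title:

```python
import re

# A made-up blocklist for illustration
BLOCKED = ["kill", "attack"]

def keyword_flag(text):
    """Return the blocked words found in text, ignoring context entirely."""
    return [w for w in BLOCKED if re.search(rf"\b{w}\b", text, re.IGNORECASE)]

# A harmless sentence about a book title still gets flagged
print(keyword_flag("My favorite book is How to Kill a Mockingbird."))  # ['kill']
```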
Requesting moderation
Aside from text and chat completion models, OpenAI provides models with other capabilities, including text moderation. OpenAI’s text moderation model is designed for evaluating prompts and responses to determine if they violate OpenAI’s usage policies, including inciting hate speech and promoting violence.
In this exercise, you’ll test out OpenAI’s moderation functionality on a sentence that may have been flagged as containing violent content using traditional word detection algorithms.
# Create a request to the Moderation endpoint
response = client.moderations.create(
    model="text-moderation-latest",
    input="My favorite book is How to Kill a Mockingbird."
)

# Print the category scores
print(response.results[0].category_scores)
CategoryScores(harassment=1.396209336235188e-05, harassment_threatening=8.344479283550754e-06, hate=4.862324567511678e-05, hate_threatening=1.5291941224404582e-07, self_harm=1.391318733112712e-06, self_harm_instructions=5.645471219395404e-07, self_harm_intent=4.05220532684325e-07, sexual=6.411371941794641e-06, sexual_minors=1.5648997759853955e-06, violence=0.0019001936307176948, violence_graphic=3.9556569390697405e-05, self-harm=1.391318733112712e-06, sexual/minors=1.5648997759853955e-06, hate/threatening=1.5291941224404582e-07, violence/graphic=3.9556569390697405e-05, self-harm/intent=4.05220532684325e-07, self-harm/instructions=5.645471219395404e-07, harassment/threatening=8.344479283550754e-06)
Examining moderation category scores
The same request you created in the last exercise to the Moderation endpoint has been run again, sending the sentence “My favorite book is How to Kill a Mockingbird.” to the model. The response from the API has been printed for you, and is available as response.
What is the correct interpretation of the category_scores here?
- The model believes that there are no violations, as all categories are close to 0
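Applications typically turn these confidence scores into a decision by comparing them to a cutoff. The sketch below uses an illustrative scores dict (a few values echoing the output above) and a hypothetical 0.5 threshold; note the real response also exposes boolean `categories` flags and a `flagged` field based on OpenAI's own thresholds.

```python
# Illustrative scores in the shape of category_scores (not real output)
scores = {
    "hate": 4.9e-05,
    "violence": 1.9e-03,
    "self_harm": 1.4e-06,
}

# A hypothetical cutoff; real applications tune this per category
THRESHOLD = 0.5

flagged = {cat: s for cat, s in scores.items() if s > THRESHOLD}
print(flagged or "No categories exceed the threshold")
```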
Creating a podcast transcript
The OpenAI API Audio endpoint provides access to the Whisper model, which can be used for speech-to-text transcription and translation. In this exercise, you’ll create a transcript from a DataFramed podcast episode with OpenAI Developer, Logan Kilpatrick.
If you’d like to hear more from Logan, check out the full ChatGPT and the OpenAI Developer Ecosystem podcast episode.
# Open the openai-audio.mp3 file
audio_file = open("openai-audio.mp3", "rb")

# Create a transcript from the audio file
response = client.audio.transcriptions.create(model="whisper-1", file=audio_file)

# Extract and print the transcript text
print(response.text)
Hi there, Logan, thank you for joining us on the show today. Thanks for having me. I’m super excited about this. Brilliant. We’re going to dive right in, and I think ChatGPT is maybe the most famous AI product that you have at OpenAI, but I’d just like to get an overview of what all the other AIs that are available are. So I think two and a half years ago, OpenAI released the API that we still have available today, which is essentially our giving people access to these models. And for a lot of people, giving people access to the model that powers ChatGPT, which is our consumer-facing first-party application, which essentially just, in very simple terms, puts a nice UI on top of what was already available through our API for the last two and a half years. So it’s sort of democratizing the access to this technology through our API. If you want to just play around with it, as an end user, we have ChatGPT available to the world as well.
Transcribing a non-English language
The Whisper model can not only transcribe English speech, but also performs well on many other languages.
In this exercise, you’ll create a transcript from audio.m4a, which contains speech in Portuguese.
# Open the audio.m4a file
audio_file = open("audio.m4a", "rb")

# Create a transcript from the audio file
response = client.audio.transcriptions.create(model="whisper-1", file=audio_file)

print(response.text)
Olá, o meu nome é Eduardo, sou CTO no Datacamp. Espero que esteja a gostar deste curso que o James e eu criamos para você. Esta API permite enviar um áudio e trazer para inglês. O áudio original está em português.
Translating Portuguese
Whisper can not only transcribe audio into its native language but also supports translation capabilities for creating English transcriptions.
In this exercise, you’ll return to the Portuguese audio, but this time, you’ll translate it into English!
# Create a translation from the audio file
response = client.audio.translations.create(model="whisper-1", file=audio_file)

# Extract and print the translated text
print(response.text)
Hello, my name is Eduardo, I am a CTO at Datacamp. I hope you are enjoying this course that James and I have created for you. This API allows you to send an audio and bring it to English. The original audio is in Portuguese.
Translating with prompts
The quality of Whisper’s translation can vary depending on the language spoken, the audio quality, and the model’s awareness of the subject matter. If you have any extra context about what is being spoken about, you can send it along with the audio to the model to give it a helping hand.
You’ve been provided with an audio file, audio.wav; you’re not sure what language is spoken in it, but you do know it relates to a recent World Bank report. Because you don’t know how well the model will perform on this unknown language, you opt to send the model this extra context to steer it in the right direction.
# Open the audio.wav file
audio_file = open("audio.wav", "rb")

# Write an appropriate prompt to help the model
prompt = "The audio relates to a recent world bank report"

# Create a translation from the audio file
response = client.audio.translations.create(model="whisper-1", file=audio_file, prompt=prompt)

print(response.text)
The World Bank said in its latest economic outlook report that the global economy is in a dangerous state. As interest rates rise, consumer spending and corporate investment will slow down, economic activities will be impacted, and the vulnerability of low-income countries will be exposed. Global economic growth will be significantly slowed down, and the stability of the financial system will be threatened.
Identifying audio language
You’ve learned that you’re not limited to creating a single request: you can actually feed the output of one model as an input to another! This is called chaining, and it opens the door to more complex, multi-modal use cases.
In this exercise, you’ll practice model chaining to identify the language used in an audio file. You’ll do this by bringing together OpenAI’s audio transcription functionality and its text models with only a few lines of code.
= open("arne-german-automotive-forecast.wav", "rb")
audio_file
# Create a transcription request using audio_file
= client.audio.transcriptions.create(model="whisper-1", file=audio_file)
audio_response
# Create a request to the API to identify the language spoken
= client.chat.completions.create( model="gpt-3.5-turbo",
chat_response =[
messages"role": "user", "content": "Identify the language in " + audio_response.text }
{
])
print(chat_response.choices[0].message.content)
The language used is German.
Creating meeting summaries
Time for business! One time-consuming task that many find themselves doing day-to-day is taking meeting notes to summarize attendees, discussion points, next steps, etc.
In this exercise, you’ll use AI to augment this task to not only save a substantial amount of time, but also to empower attendees to focus on the discussion rather than administrative tasks. You’ve been provided with a recording from DataCamp’s Q2 Roadmap webinar, which summarizes what DataCamp will be releasing during that quarter. You’ll chain the Whisper model with a text or chat model to discover which courses will be launched in Q2.
= open("datacamp-q2-roadmap-short.mp3", "rb")
audio_file
# Create a transcription request using audio_file
= client.audio.transcriptions.create(model="whisper-1", file=audio_file)
audio_response
# Create a request to the API to summarize the transcript into bullet points
= client.chat.completions.create( model="gpt-3.5-turbo",
chat_response =[
messages"role": "user", "content": "Summarise text given, list in bullet points with a bit of explanation for each bullet point:" + audio_response.text }
{
])
print(chat_response.choices[0].message.content)
- Technical courses include working with the OpenAI API and Python programming against GPT and Whisper.
- Understanding Artificial Intelligence is aimed at a less technical audience and provides a broad background on the topic.
- Artificial Intelligence Ethics focuses on the potential risks and harm of improper AI implementation and is important for businesses and organizations.
- Data literacy courses are also available, with a specific course on forming analytical questions to bridge the communication gap between technical and non-technical individuals.
- Communication is identified as a key aspect of better data science and is highlighted as an important skill to develop.