How to Create Your Own GPT with OpenAI’s Assistants API

SRC - Security, Risk, Compliance
7 min read · Mar 21, 2024


By Dr. Jaber Kakar

Are you ready to take your AI skills to the next level? Our latest blog post is your roadmap to building your own GPT (Generative Pre-trained Transformer) using OpenAI’s Assistants API. Whether you’re a seasoned developer or just starting, this guide will walk you through the process step by step.

Prerequisites

Before diving into GPT creation, let’s ensure you have everything you need to get started:

OpenAI’s API Key Page
  • Account Setup: Begin by creating an OpenAI account or logging into your existing one. Head to the API key page (see screenshot above) and select “Create new secret key”. Optionally, you can name the key for easy identification. Consider whether you’d prefer (a) a single API key for all your projects or (b) individual keys for each project. Remember to keep your API key secure and avoid sharing it with others.
  • Language/Tool Selection: Choose the language or tool (cURL, Python, Node.js) you’re most comfortable with to interact with the OpenAI API. In this guide, we’ll be using Python because of its ease of use and extensive libraries. On this note, OpenAI offers a tailored Python library, simplifying API access for Python 3.7.1 or newer. For those fluent in Node.js/TypeScript, a similar library exists for your convenience.
  • Python Setup: If Python isn’t installed on your system, download and install the latest Python and follow the official Python installation guide. Ensure you have Python 3.7.1 or newer installed.
  • Installing the OpenAI Python library: Beforehand, consider using a Python virtual environment to keep each project’s packages isolated and avoid conflicts with libraries installed for other projects. Then install the OpenAI Python library with Python’s package installer (pip) by running the following command in your terminal:
pip install --upgrade openai
  • Set up your API key: Depending on your preference, you can use (a) a single API key for all projects or (b) create project-specific keys. For a single key, update your .bash_profile script (Linux & macOS) or configure an environment variable in Windows. For project-specific keys, create a .env file in your project directory and set the API key as OPENAI_API_KEY (a sketch of loading such a file follows the code block below). Make sure your .env file is listed in your .gitignore file to avoid sharing your API key via git version control. Read more details on this setup here.
  • With these prerequisites in place, you’re ready to make API requests using the Python OpenAI library. Include the following lines to invoke the API’s functionality:
from openai import OpenAI
client = OpenAI()
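
If you went with a project-specific .env file, you can load it before constructing the client. Below is a minimal sketch assuming the python-dotenv package (installed via pip install python-dotenv); note that OpenAI() also reads OPENAI_API_KEY from the environment on its own, so passing the key explicitly is optional.

import os

from dotenv import load_dotenv  # assumes python-dotenv is installed
from openai import OpenAI

# Load the variables from the project's .env file into the environment.
load_dotenv()

# OpenAI() picks up OPENAI_API_KEY from the environment by default;
# passing it explicitly is shown here only for clarity.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])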

Overview of the Assistants API

OpenAI’s Assistants API opens up possibilities for building AI assistants that cater to specific tasks and contexts. Here’s what you need to know:

  • Assistant Capabilities: AI assistants are driven by user-provided instructions and leverage different LLM models, such as gpt-4 (here is a comprehensive list of models). Equipped with tools like code interpreters and retrieval functions, assistants can seamlessly integrate additional knowledge through supplementary files.
  • Assistants Playground: Explore the capabilities of the Assistants API with OpenAI’s Assistants playground (see below). Get hands-on experience and experiment with different functionalities to understand the full potential of AI assistants before starting to implement your own solution.
OpenAI’s Assistant Playground

Now, in order to implement your own solution, it is important to have a solid understanding of the general workflow of OpenAI assistants.

(a) Upload knowledge files: Start by uploading knowledge files using the client.files.create() function. Each file serves a specific purpose (with purpose options being assistants or fine-tune) and can be shared across various endpoints, enhancing the assistant’s capabilities. For more information on this function, check out the link.

from openai import OpenAI
client = OpenAI()

# client.files.create() returns a File object.
# Required parameters:
#   file - the file to upload (opened in binary mode)
#   purpose - the purpose of the uploaded file;
#             valid options are "fine-tune" and "assistants"

my_file = client.files.create(
    file=open("knowledge.docx", "rb"),
    purpose="assistants"
)

(b) Creation of the assistant: Create your assistant with client.beta.assistants.create(), specifying model as the required parameter and, optionally, name, description, instructions, tools, file_ids, and metadata. A sample code that creates the assistant Recipe Wizard using my_file as a knowledge file could look as follows:

# client.beta.assistants.create() returns an assistant object
my_assistant = client.beta.assistants.create(
    instructions="You help users create recipes for meals that are healthy, inexpensive, and quick to prepare. You should respond to the user’s wishes, cooking preferences and tastes and provide suitable dishes with the corresponding recipes. Users may have different prior knowledge about the respective dishes. Some users may not have cooked very often before.",
    name="Recipe Wizard",
    tools=[{"type": "code_interpreter"}, {"type": "retrieval"}],
    file_ids=[my_file.id],
    model="gpt-4"
)

(c) Creation of a thread: A thread refers to a single conversation between a user and your assistant(s). A thread object is created and returned by calling:

# client.beta.threads.create() returns a thread object
my_thread = client.beta.threads.create()

Optionally, two input parameters, (i) messages and (ii) metadata, can be passed to this function for further customization. For further details, check out the threads API reference.
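
As an illustration, here is a hedged sketch of a thread created with both optional parameters; the message content and the metadata key/value pair are made up for this example:

# Create a thread pre-populated with an initial user message and
# custom metadata (both parameters per the threads API reference;
# the values below are purely illustrative).
my_thread = client.beta.threads.create(
    messages=[
        {
            "role": "user",
            "content": "I’d like some dinner inspiration for tonight."
        }
    ],
    metadata={"user_id": "demo-user-42"}
)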

(d) Adding message(s) to a thread: The content a user or application sends is stored as message objects on the underlying thread. Note that messages can consist of both text and files. A thread can hold an arbitrary number of messages; however, once their combined size exceeds the model’s context window, older messages are truncated. Below is an example that adds a message to my_thread:

# client.beta.threads.messages.create() returns a message object
my_message = client.beta.threads.messages.create(
    thread_id=my_thread.id,
    role="user",
    content="Please suggest a quick and healthy lunch."
)
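
Since messages can carry files as well, the version of the Assistants API used throughout this guide accepts an optional file_ids parameter on messages. A sketch (the message content is illustrative):

# Attach a previously uploaded file (purpose="assistants") to a message
# so the assistant can draw on it within this thread.
my_message_with_file = client.beta.threads.messages.create(
    thread_id=my_thread.id,
    role="user",
    content="Please take the attached list of ingredients into account.",
    file_ids=[my_file.id]
)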

(e) Run the thread: Once the relevant context/messages are established for a thread, we can run my_thread with the assistant of our choice, e.g., my_assistant.

# Run my_thread with my_assistant
my_run = client.beta.threads.runs.create(
    thread_id=my_thread.id,
    assistant_id=my_assistant.id
)

# client.beta.threads.runs.create() allows you to overwrite configurations
# such as model and tools specified in my_assistant
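
For instance, you could run the same thread with a different model than the one configured on the assistant. A minimal sketch (the override model is just an example):

# Per-run override: use a different model for this run only;
# the assistant object itself remains unchanged.
my_run_override = client.beta.threads.runs.create(
    thread_id=my_thread.id,
    assistant_id=my_assistant.id,
    model="gpt-3.5-turbo"
)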

Runs are asynchronous, which means that we need to poll their status until a terminal status (expired, completed, failed, cancelled) is reached. The different statuses of the run object are shown below.

Statuses of the run object (Source: OpenAI’s API Documentation)
import time  # needed for sleep

# poll my_run object for a terminal status
while my_run.status in ['queued', 'in_progress', 'cancelling']:
    time.sleep(1)  # wait for 1 second
    my_run = client.beta.threads.runs.retrieve(
        thread_id=my_thread.id,
        run_id=my_run.id
    )

# terminal status: completed
if my_run.status == 'completed':
    # list messages for my_thread
    new_messages = client.beta.threads.messages.list(thread_id=my_thread.id)
    # determine response from new_messages
    response = new_messages.data[0].content[0].text.value

    print(f"Assistant response: {response}")
else:
    print(my_run.status)

Build your GPT with the Assistants API

Replit Backend

Existing code that allows you to deploy your GPT to a website via a Flask API and Voiceflow can be found in the following Replit repository. The code provided consists of two main Python files:

(i) functions.py: This file defines the Python function create_assistant(client), which returns the OpenAI assistant_id of an assistant object. It distinguishes between the cases where assistant.json already exists and where it does not. This distinction ensures that an assistant object is only created when needed and avoids unnecessary (OpenAI) API calls.

When assistant.json does not exist, the function first uploads a single knowledge base file knowledge.docx (similar to (a) of the previous section) according to:

file = client.files.create(
    file=open("knowledge.docx", "rb"),
    purpose='assistants'
)

If you want to use more than one knowledge base file or have different file names, make sure to adjust the above code accordingly. Once the knowledge base file(s) are uploaded, the assistant object is created (similar to (b) of the previous section). Make sure to adjust the parameters of client.beta.assistants.create(), that is, instructions, model, tools, and file_ids, according to your specific use case and needs.

In case assistant.json exists, the function retrieves the assistant_id from assistant.json.
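
For orientation, here is a minimal sketch of what create_assistant(client) could look like; the exact instructions, tools, and JSON layout follow the repository’s conventions and may differ in detail:

import json
import os

ASSISTANT_FILE = "assistant.json"  # caches the assistant_id between runs

def create_assistant(client):
    """Return an assistant_id, creating the assistant only if needed."""
    if os.path.exists(ASSISTANT_FILE):
        # Reuse the previously created assistant to avoid extra API calls.
        with open(ASSISTANT_FILE, "r") as f:
            assistant_id = json.load(f)["assistant_id"]
    else:
        # Upload the knowledge base file and create a fresh assistant.
        file = client.files.create(
            file=open("knowledge.docx", "rb"),
            purpose="assistants"
        )
        assistant = client.beta.assistants.create(
            instructions="...",  # your assistant instructions go here
            model="gpt-4",
            tools=[{"type": "retrieval"}],
            file_ids=[file.id]
        )
        assistant_id = assistant.id
        # Persist the id so subsequent starts skip creation.
        with open(ASSISTANT_FILE, "w") as f:
            json.dump({"assistant_id": assistant_id}, f)
    return assistant_id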

(ii) main.py: This file initializes the client and calls the function create_assistant(client) from functions.py. Next, it defines two routes/endpoints of the Flask app (a sketch of both routes follows the list below):

  • /start: This route starts a new conversation thread. When accessed via a GET request, it creates a new thread using OpenAI's API (cf. (c) of the previous section) and returns the thread ID in JSON format.
  • /chat: This route handles incoming POST requests containing JSON data with a thread_id and a message. It adds the user's message to the specified thread (cf. (d) of the previous section), runs the assistant to generate a response (cf. (e) of the previous section), and returns the response in JSON format.
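
Here is a hedged sketch of how the two routes might look in main.py; the actual repository code may differ in naming and error handling:

import os
import time

from flask import Flask, jsonify, request
from openai import OpenAI

from functions import create_assistant

app = Flask(__name__)
client = OpenAI(api_key=os.environ['OPENAI_API_KEY'])
assistant_id = create_assistant(client)

@app.route('/start', methods=['GET'])
def start():
    # Create a fresh conversation thread and return its id.
    thread = client.beta.threads.create()
    return jsonify({"thread_id": thread.id})

@app.route('/chat', methods=['POST'])
def chat():
    data = request.get_json()
    thread_id = data['thread_id']
    # Add the user's message to the existing thread.
    client.beta.threads.messages.create(
        thread_id=thread_id, role="user", content=data['message']
    )
    # Run the assistant and poll until a terminal status is reached.
    run = client.beta.threads.runs.create(
        thread_id=thread_id, assistant_id=assistant_id
    )
    while run.status in ['queued', 'in_progress', 'cancelling']:
        time.sleep(1)
        run = client.beta.threads.runs.retrieve(
            thread_id=thread_id, run_id=run.id
        )
    # Return the assistant's latest message as the response.
    messages = client.beta.threads.messages.list(thread_id=thread_id)
    response = messages.data[0].content[0].text.value
    return jsonify({"response": response})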

When you fork this particular Replit repository into your Replit userspace, make sure to add your own OpenAI API key. Your API key is used in the initialization process of the client:

OPENAI_API_KEY = os.environ['OPENAI_API_KEY']
# Some code in between
client = OpenAI(
    api_key=OPENAI_API_KEY
)

Voiceflow Frontend

Voiceflow is used to interact with the assistant that was created with the backend code in Replit. Specifically, Voiceflow allows us to launch our chatbot and interact with the endpoints /start and /chat. The respective Voiceflow template that you can use is available here.

The Voiceflow template creates a conversation thread using the /start route provided by the Replit backend. Next, an introduction step lets the chatbot introduce itself. After the introduction, the actual chat conversation starts by calling the /chat endpoint. Note that you can use this Voiceflow template as the basis for your own project.

Once you have fully set up your Voiceflow template, publish the widget and embed it by pasting the provided installation code into your website’s HTML, specifically before the closing </body> tag on all pages where you want the chat widget to appear. With that done, you should have your chatbot on your website.

Conclusion

With the knowledge and tools provided in this guide, you’re equipped to create GPTs tailored to your specific needs and objectives. Whether you’re building AI assistants for personal projects, business applications, or educational purposes, the possibilities are limitless.

Embark on your journey to AI mastery with OpenAI’s Assistants API, and unlock the full potential of GPT technology.

Thanks for reading! If you want to learn more about Security, Risk and Compliance please visit our website or contact us on our social media.

Written by SRC - Security, Risk, Compliance

Consulting to establish security, risk, and compliance as an enabler for your business.
