On November 6, 2023, OpenAI held its first developer conference in San Francisco. The event featured several product updates, including price reductions of 2-3x, faster models, and multi-modality.

Below, we break down a few key announcements, analyzing the immediate and longer-term implications for those exploring how to implement this technology within their organizations.

1. 128K context window

This refers to the amount of information the GPT-4 model can process and respond to at one time, covering both the size of the prompt or question sent to the model and the size of its output. Larger context windows allow for more comprehensive understanding and coherency over longer stretches of text and data, which significantly improves performance across a variety of complex tasks.

What it means for your business: These larger context windows allow for more creative and powerful use cases. Previous context windows, typically 8,000 to 32,000 tokens (a single word generally equates to roughly 1.3 tokens), limited prompts to excerpts or snippets. Now you can feed an entire product manual or an application's complete source code into the model. Additionally, greater efficiency with large prompts means faster replies, lower costs, and reduced environmental impact.
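To make the scale concrete, here is a minimal Python sketch that checks whether a long document fits in the 128K window before sending it, using OpenAI's tiktoken tokenizer. The file name and helper function are purely illustrative, and the limit reflects the GPT-4 Turbo preview announced at the event.

```python
# Illustrative only: estimate whether an entire document fits in GPT-4 Turbo's
# 128K-token context window before sending it as a single prompt.
import tiktoken

CONTEXT_WINDOW = 128_000  # tokens accepted by GPT-4 Turbo at launch

def fits_in_context(document: str, model: str = "gpt-4") -> bool:
    """Return True if the document's token count fits within the context window."""
    encoding = tiktoken.encoding_for_model(model)
    token_count = len(encoding.encode(document))
    print(f"Document is ~{token_count:,} tokens")
    return token_count < CONTEXT_WINDOW

# "product_manual.txt" is a hypothetical file standing in for your own content.
with open("product_manual.txt") as f:
    manual = f.read()

if fits_in_context(manual):
    print("The whole manual can be sent in a single prompt.")
```

In practice you would also reserve part of the window for system instructions and for the model's response.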

2. Multi-modality

GPT-4 Turbo now comes with vision support. This allows users to feed images to the model, which can answer questions and provide information about an image's content. OpenAI also announced advancements in text-to-speech (TTS) and speech-to-text (STT) capabilities.

What it means for your business: Giving “vision” to an LLM (often now termed a foundation model) enables new classes of use cases and reduces development complexity; certain computer vision tasks are now essentially baked in. Improving the model’s ability to communicate with users through speech unlocks new ways for humans to interact with computers.
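As a rough illustration, the sketch below sends an image to the vision-enabled GPT-4 Turbo preview through the OpenAI Python SDK (v1.x). The image URL and question are placeholders, and the model identifier is the preview name used at launch.

```python
# Illustrative only: ask GPT-4 Turbo with vision to describe an image.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # vision-enabled preview model announced at DevDay
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe any visible defects in this product photo."},
                # Placeholder URL; a base64-encoded data URL also works.
                {"type": "image_url", "image_url": {"url": "https://example.com/product.jpg"}},
            ],
        }
    ],
    max_tokens=300,
)
print(response.choices[0].message.content)
```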

3. Better model control

OpenAI is adding the ability for users to receive answers in more structured data formats, including the popular JSON format. This makes it easier to integrate the otherwise free-form, text-based output of LLMs with other applications. OpenAI also now provides the ability to set a “seed” parameter so the model generates reproducible results for a given prompt.

What it means for your business: Structured outputs such as JSON reduce complexity, accelerate development, and afford more robust compatibility with existing systems. Reproducible (deterministic) results are often important in highly regulated industries like banking. Together, these enhancements ease model development and testing.
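For illustration, the sketch below exercises both features on the Chat Completions API via the OpenAI Python SDK (v1.x); the prompt, the fields extracted, and the seed value are arbitrary examples.

```python
# Illustrative only: request structured JSON output and a reproducible result.
import json
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        # JSON mode requires telling the model to respond in JSON.
        {"role": "system", "content": "Extract the invoice number and total as JSON."},
        {"role": "user", "content": "Invoice INV-1042, total due $1,250.00."},
    ],
    response_format={"type": "json_object"},  # guarantees syntactically valid JSON
    seed=42,  # best-effort reproducibility for identical requests
)

data = json.loads(response.choices[0].message.content)
print(data)

# The response also carries a system_fingerprint you can log to detect
# backend changes that could affect reproducibility.
print(response.system_fingerprint)
```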

4. Copyright Shield

Copyright infringement and data misuse within LLMs are a cause of concern among the legal and compliance communities. With its new Copyright Shield, OpenAI commits to stepping in and defending its customers: if a customer does face litigation over copyright infringement related to model outputs, OpenAI agrees to cover the legal costs.

What it means for your business: This mitigates some, but not all, of the risks of integrating with OpenAI’s LLMs. Regardless, it is a big step toward improving adoption and addressing concerns raised by larger corporations.

5. GPTs, agents, and custom actions

OpenAI is introducing GPTs, which enable users to create custom versions of ChatGPT. Users can set the model’s initial instructions, upload documents for context, and define which tools the resulting agent can use when responding.

What it means for your business: GPTs will likely lead to an explosion of purpose-built “agents” that specialize in specific tasks—like interacting with your calendar or interpreting financial statements—that can be chained together with other agents or computer code. You will soon be able to select your own GPTs and define how they collaborate, allowing you to get more done as the technology takes on more mundane tasks.
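GPTs themselves are configured through the ChatGPT interface, but the Assistants API announced alongside them exposes the same building blocks (instructions, uploaded context documents, and tools) programmatically. The sketch below uses the OpenAI Python SDK as it stood at the DevDay release; the assistant’s name, instructions, and the uploaded file are hypothetical.

```python
# Illustrative only: create an assistant with instructions, a context document,
# and built-in tools via the Assistants API (as released at DevDay).
from openai import OpenAI

client = OpenAI()

# Upload a document the assistant can draw on for retrieval.
uploaded = client.files.create(
    file=open("financial_glossary.pdf", "rb"),  # hypothetical document
    purpose="assistants",
)

assistant = client.beta.assistants.create(
    name="Financial Statement Reader",
    instructions="Explain financial statements in plain language for business users.",
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}, {"type": "code_interpreter"}],
    file_ids=[uploaded.id],
)
print(assistant.id)
```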

6. GPT Store

OpenAI also announced it will soon launch the GPT Store, akin to Apple’s App Store, through which developers can sell their GPTs and share in the revenue.

What it means for your business: The impending explosion of products and services built using GPTs will disrupt the market and lead to new opportunities throughout the value chain. GPTs will start integrating into an increasing number of workflows, shifting the burden away from step-by-step programming. Instead, you’ll be able to simply purchase “capabilities” and string them together.

OpenAI’s announcements and product enhancements bring added functionality and practicality to foundation models, offering a new level of cognitive capability that lowers costs, speeds IT adoption, and allows for greater customization. As GenAI’s key business use cases accelerate, it’s more crucial than ever to stay abreast of the latest updates to harness the power of LLMs and optimize your results.