OpenAI introduces fine-tuning for GPT-3.5 Turbo, with GPT-4 to follow

Ryan Daws is a senior editor at TechForge Media with over a decade of experience in crafting compelling narratives and making complex topics accessible. His articles and interviews with industry leaders have earned him recognition as a key influencer by organisations like Onalytica. Under his leadership, publications have been praised by analyst firms such as Forrester for their excellence and performance. Connect with him on X (@gadget_ry) or Mastodon (@gadgetry@techhub.social)


OpenAI has announced the ability to fine-tune its powerful GPT-3.5 Turbo model, with support for fine-tuning GPT-4 expected to arrive later this year.

Fine-tuning allows developers to tailor the models to their specific use cases and deploy these custom models at scale. The move aims to bridge the gap between AI capabilities and real-world applications, heralding a new era of highly-specialised AI interactions.

Early tests have yielded impressive results: a fine-tuned version of GPT-3.5 Turbo has matched, and in some cases surpassed, the capabilities of base GPT-4 on certain narrow tasks.

All data sent in and out of the fine-tuning API remains the property of the customer, ensuring that sensitive information remains secure and is not used to train other models.

The deployment of fine-tuning has garnered significant interest from developers and businesses. Since the introduction of GPT-3.5 Turbo, the demand for customising models to create unique user experiences has been on the rise.

Fine-tuning opens up a realm of possibilities across various use cases, including:

  • Improved steerability: Developers can now fine-tune models to follow instructions more accurately. For instance, a business wanting consistent responses in a particular language can ensure that the model always responds in that language (see the training-data sketch after this list).
  • Reliable output formatting: Consistent formatting of AI-generated responses is crucial, especially for applications like code completion or composing API calls. Fine-tuning improves the model’s ability to generate properly formatted responses, enhancing the user experience.
  • Custom tone: Fine-tuning allows businesses to refine the tone of the model’s output to align with their brand’s voice. This ensures a consistent and on-brand communication style.
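Fine-tuning for chat models is driven by example conversations supplied as a JSONL file, with each line holding one complete conversation in the system/user/assistant message format used by the fine-tuning API. As a rough sketch of the language-consistency case above (the German support-assistant scenario is invented for illustration):

  {"messages": [{"role": "system", "content": "You are a support assistant that always replies in German."}, {"role": "user", "content": "How do I reset my password?"}, {"role": "assistant", "content": "Sie können Ihr Passwort unter Einstellungen > Konto zurücksetzen."}]}
  {"messages": [{"role": "system", "content": "You are a support assistant that always replies in German."}, {"role": "user", "content": "Thanks for your help!"}, {"role": "assistant", "content": "Gern geschehen! Melden Sie sich jederzeit wieder."}]}

In practice, the quality and consistency of these examples tend to matter more than sheer volume.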

One significant advantage of fine-tuned GPT-3.5 Turbo is its extended token handling capacity. With the ability to handle 4k tokens – twice the capacity of previous fine-tuned models – developers can streamline their prompt sizes, leading to faster API calls and cost savings.

To achieve optimal results, fine-tuning can be combined with techniques such as prompt engineering, information retrieval, and function calling. OpenAI also plans to introduce support for fine-tuning with function calling and gpt-3.5-turbo-16k in the coming months.

The fine-tuning process involves several steps, including data preparation, file upload, creating a fine-tuning job, and using the fine-tuned model in production. OpenAI is working on a user interface to simplify the management of fine-tuning tasks.
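In the meantime, the workflow runs through the API. The following is a minimal sketch of those steps using the openai Python library as it existed at the time of the announcement; it assumes an OPENAI_API_KEY environment variable is set, and the file ID and fine-tuned model name shown are placeholders rather than real values:

  import openai

  # Step 1: upload the prepared JSONL training file
  openai.File.create(
      file=open("training_data.jsonl", "rb"),
      purpose="fine-tune"
  )

  # Step 2: start a fine-tuning job with the file ID returned above
  openai.FineTuningJob.create(
      training_file="file-abc123",  # placeholder file ID
      model="gpt-3.5-turbo"
  )

  # Step 3: once the job completes, call the fine-tuned model by name
  completion = openai.ChatCompletion.create(
      model="ft:gpt-3.5-turbo:my-org:custom_suffix:id",  # placeholder model name
      messages=[{"role": "user", "content": "Hello!"}]
  )
  print(completion.choices[0].message.content)

The fine-tuned model's name only becomes available once the training job succeeds, at which point it can be used anywhere the base model would be.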

The pricing structure for fine-tuning comprises two components: the initial training cost and usage costs.

  • Training: $0.008 / 1K tokens
  • Usage input: $0.012 / 1K tokens
  • Usage output: $0.016 / 1K tokens
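To put those figures in context: a gpt-3.5-turbo fine-tuning job with a training file of 100,000 tokens trained for three epochs would process 300,000 training tokens, for an expected one-off training cost of 300 × $0.008 = $2.40, with the higher per-token usage rates then applying to every subsequent call to the custom model.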

The introduction of updated GPT-3 models – babbage-002 and davinci-002 – has also been announced, providing replacements for existing models and enabling fine-tuning for further customisation.
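Unlike the chat-style format used for GPT-3.5 Turbo, these base models are fine-tuned on prompt-completion pairs. A hypothetical training file for a simple sentiment classifier might look like the following (the task and labels are invented for illustration):

  {"prompt": "Review: 'Great product, fast delivery.'\nSentiment:", "completion": " positive"}
  {"prompt": "Review: 'Arrived broken and late.'\nSentiment:", "completion": " negative"}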

These latest announcements underscore OpenAI’s dedication to creating AI solutions that can be tailored to meet the unique needs of businesses and developers.

(Image Credit: Claudia from Pixabay)

See also: ChatGPT’s political bias highlighted in study

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

