Fine-tuning allows the model to adjust the structure and tone of its responses or to adhere to complex domain-specific instructions. Developers can achieve impressive results for their applications with just a few dozen examples in their training dataset.

Free Training Tokens Offered through September 23 for Developers Fine-Tuning GPT-4o

OpenAI has announced fine-tuning for GPT-4o, a feature developers have long requested that lets them customize the model for their specific use cases. Developers can fine-tune the model on custom datasets, improving performance and accuracy on tasks such as coding, creative writing, and following domain-specific instructions. To encourage adoption, OpenAI is offering every organization 1 million free training tokens per day through September 23.
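For context on what a custom dataset looks like in practice, OpenAI's fine-tuning API consumes training examples in a JSON Lines chat format, one conversation per line. The sketch below shows a minimal way to produce such a file; the support-bot task, example content, and file name are illustrative assumptions, not details from the announcement.

```python
import json

# Illustrative training examples in the chat-message format the
# fine-tuning API expects: one conversation per line of a JSONL file.
# The task and file name here are hypothetical, not from the announcement.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a support bot for AcmeDB. Answer tersely."},
            {"role": "user", "content": "How do I list all tables?"},
            {"role": "assistant", "content": "Run \\dt in the psql shell."},
        ]
    },
    # ...a few dozen more examples in the same shape
]

with open("training_data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```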

Fine-tuning is now available to all developers on paid usage tiers. Through the fine-tuning dashboard, they can select the gpt-4o-2024-08-06 model for training, according to openai.com. Fine-tuning for GPT-4o mini is also available, with 2 million free training tokens per day through September 23. Training GPT-4o costs $25 per million tokens, with inference on the fine-tuned model billed separately by input and output tokens.
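Besides the dashboard, jobs can also be started programmatically. The sketch below uses the openai Python SDK (v1.x); the training file name carries over from the hypothetical example above, and the rest follows the SDK's documented file-upload and fine-tuning calls.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the JSONL training file prepared earlier.
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Kick off a fine-tuning job against the GPT-4o snapshot named above.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",
)
print(job.id, job.status)  # poll this job, or watch the dashboard, until it completes
```

At the quoted $25 per million training tokens, a dataset of roughly 100,000 tokens trained for three epochs (about 300,000 billed tokens, assuming tokens are billed once per epoch) would cost around $7.50, and would fit comfortably within the 1 million free daily tokens on offer.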

Several trusted partners have already tested fine-tuning with GPT-4o, achieving impressive results. For instance, Cosine’s AI assistant, Genie, which autonomously handles software engineering tasks, achieved state-of-the-art (SOTA) scores on the SWE-bench benchmark after being fine-tuned on real-world examples. Similarly, Distyl secured first place on the BIRD-SQL benchmark for text-to-SQL tasks with its fine-tuned GPT-4o, demonstrating high execution accuracy and proficiency in tasks like query reformulation and SQL generation.

OpenAI says developers retain full control of their fine-tuned models and complete ownership of their business data, which is never shared or used to train other models, addressing key privacy and security concerns. To prevent misuse, the company has also implemented layered safety mitigations, including continuous automated safety evaluations and monitoring of usage for compliance with OpenAI's policies. As more developers leverage fine-tuning to enhance their AI-driven solutions, OpenAI says it is eager to see the innovative projects and breakthroughs that emerge from this new capability.

read more at openai.com