OpenAI’s API users get full access to the new o1 model

Updates for real-time interaction and fine-tuning

Developers who use OpenAI’s real-time voice APIs will also get full access to the WebRTC support announced today. That comes in addition to the existing WebSocket support, and the company says it can simplify the creation of OpenAI audio interfaces for third-party applications from roughly 250 lines of code to about a dozen.
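For a sense of what that dozen-line claim looks like in practice, here is a minimal browser-side sketch of a WebRTC session. It assumes a short-lived API key has already been minted by your own backend, and the endpoint URL and model name are illustrative guesses rather than details confirmed in this article:

```ts
// Minimal sketch: connect a browser to OpenAI's Realtime API over WebRTC.
// EPHEMERAL_KEY is assumed to come from your own server; the endpoint URL
// and model name below are illustrative, not confirmed by this article.
const EPHEMERAL_KEY = "<ephemeral-key-from-your-server>";

async function connectRealtime(): Promise<RTCPeerConnection> {
  const pc = new RTCPeerConnection();

  // Play the model's audio replies as they stream in.
  const audioEl = new Audio();
  audioEl.autoplay = true;
  pc.ontrack = (event) => { audioEl.srcObject = event.streams[0]; };

  // Send microphone audio to the model.
  const mic = await navigator.mediaDevices.getUserMedia({ audio: true });
  mic.getTracks().forEach((track) => pc.addTrack(track, mic));

  // A data channel for JSON events (transcripts, tool calls, and so on).
  const events = pc.createDataChannel("oai-events");
  events.onmessage = (msg) => console.log("event:", msg.data);

  // Standard WebRTC offer/answer, with the answer fetched over HTTPS.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  const resp = await fetch(
    "https://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${EPHEMERAL_KEY}`,
        "Content-Type": "application/sdp",
      },
      body: offer.sdp,
    },
  );
  await pc.setRemoteDescription({ type: "answer", sdp: await resp.text() });
  return pc;
}
```

The browser handles the audio capture, encoding, and playback that developers would otherwise have to manage by hand over a WebSocket, which is where most of the saved lines come from.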

OpenAI says it will release simple WebRTC code that can be used on a plug-and-play basis in all sorts of simple devices, from toy reindeer to smart glasses and cameras, that want to make use of context-aware AI assistants. To help encourage those kinds of uses, OpenAI said it was reducing the cost of 4o audio tokens for API developers by 60 percent and the cost of 4o mini tokens by a full 90 percent.

Developers interested in fine-tuning their own AI models can also make use of a new method called “direct preference optimization.” In the current system of supervised fine-tuning, model makers have to provide examples of the exact input/output pairs they want in order to refine the new model. With direct preference optimization, model makers can instead simply provide two separate responses and indicate that one is preferred over the other.
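The difference is easiest to see in the training data itself. Here is a hedged sketch of the two record shapes as they might appear in a JSONL training file; the field names (`preferred_output`, `non_preferred_output`) follow OpenAI’s published fine-tuning formats rather than anything stated in this article:

```ts
// Sketch: one supervised fine-tuning record vs. one preference record.
// Field names follow OpenAI's published fine-tuning formats; treat them
// as illustrative rather than guaranteed by this announcement.

// Supervised fine-tuning: an exact input/output pair.
const sftRecord = {
  messages: [
    { role: "user", content: "Summarize this release note in one line." },
    { role: "assistant", content: "WebRTC support and cheaper audio tokens." },
  ],
};

// Direct preference optimization: two responses, one marked as preferred.
const dpoRecord = {
  input: {
    messages: [
      { role: "user", content: "Summarize this release note in one line." },
    ],
  },
  preferred_output: [
    { role: "assistant", content: "WebRTC support and cheaper audio tokens." },
  ],
  non_preferred_output: [
    { role: "assistant", content: "Today we are thrilled to announce..." },
  ],
};

// Each line of the training file is one JSON object.
console.log(JSON.stringify(sftRecord));
console.log(JSON.stringify(dpoRecord));
```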

OpenAI says its fine-tuning process will then optimize against the difference between the preferred and non-preferred answers provided, automatically detecting differences in things like verbosity, formatting and style guidelines, or the helpfulness/creativity level of responses and factoring them into the new model.
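Kicking off a preference run would then look something like the sketch below, which assumes the fine-tuning jobs endpoint accepts a `method` block selecting DPO; that parameter shape and the `beta` hyperparameter are assumptions drawn from OpenAI’s fine-tuning documentation, not details specified in this article:

```ts
// Sketch: start a DPO fine-tuning job. The `method` block and its `dpo`
// hyperparameters are assumptions based on OpenAI's fine-tuning docs,
// not details confirmed by this article.
async function createDpoJob(apiKey: string, trainingFileId: string) {
  const resp = await fetch("https://api.openai.com/v1/fine_tuning/jobs", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-2024-08-06",    // illustrative base model
      training_file: trainingFileId, // JSONL of preference records
      method: {
        type: "dpo",
        dpo: { hyperparameters: { beta: 0.1 } }, // assumed preference-strength knob
      },
    }),
  });
  return resp.json();
}
```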

Programmers who write in Go or Java will also be able to connect to the OpenAI API using new SDKs for those languages, the company said.
