OpenAI continues to advance its AI technology with the release of new models. In 2024, numerous chatbots have flooded the market, with companies like Google, Anthropic, and now Microsoft developing multiple models tailored to different purposes rather than focusing solely on refining a single flagship model.
Following this trend, OpenAI introduces GPT-4o, an upgrade from the previous year’s GPT-4 model, which initially launched in March 2023 as part of the paid ChatGPT Plus subscription.
In GPT-4o, the “o” signifies “omni,” reflecting OpenAI’s aim for more natural human-computer interaction. This enhanced model can analyze a combination of text, image, and audio inputs, and respond in any of these media. OpenAI boasts that GPT-4o can respond to audio inputs in as little as 232 milliseconds, a significant improvement over previous iterations.
While GPT-4o offers improved capabilities in vision and audio analysis, it is initially being released with support for text and image inputs and text outputs. Support for audio inputs will be added in future updates.
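For developers, the initially supported modalities map onto OpenAI’s existing Chat Completions API. The following is a minimal sketch of how a text-plus-image request to GPT-4o might look with the official Python SDK; the image URL is a placeholder, and availability of the model in a given account is assumed.

```python
from openai import OpenAI

# Assumes the OPENAI_API_KEY environment variable is set.
client = OpenAI()

# Send a combined text + image prompt to GPT-4o and receive a text reply.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {
                    "type": "image_url",
                    # Placeholder URL for illustration only.
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

Audio input is not part of this initial rollout, so the sketch above covers only the text and image paths described here.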
GPT-4o will be accessible in the free tier, with ChatGPT Plus members enjoying a fivefold increase in message limits.