ChatGPT's Major Update: Seeing, Hearing, and Speaking Abilities Unveiled by OpenAI
OpenAI is rolling out a significant update for ChatGPT, allowing the viral chatbot to engage in voice conversations with users and interact using images. This brings it closer to widely used AI assistants like Apple's Siri. OpenAI mentioned in a blog post on Monday that the voice feature "opens up possibilities for numerous creative and accessibility-oriented applications."
Similar AI assistants such as Apple's Siri, Google Assistant and Amazon.com's Alexa are integrated with the devices they run on and are often used to set alarms and reminders and to retrieve information from the internet.
Since its debut last year, ChatGPT has been adopted by companies for a wide range of tasks, from summarizing documents to writing computer code, setting off a race among Big Tech companies to launch their own offerings based on generative AI.
ChatGPT's new voice feature can also narrate bedtime stories, settle debates at the dinner table, and read users' text input aloud.
The technology behind it is being used by Spotify to let the platform's podcasters translate their content into different languages, OpenAI said.
With image support, users can take pictures of things around them and ask the chatbot to "troubleshoot why your grill won't start, explore the contents of your fridge to plan a meal, or analyze a complex graph for work-related data".
Alphabet's Google Lens is currently a popular choice for getting information about images.
The new ChatGPT features will be released for subscribers of its Plus and Enterprise plans over the next two weeks.