OpenAI unifies voice interface
You can finally read the transcript.
OpenAI recently announced on X that it is unifying voice mode with the main chat interface in the ChatGPT mobile app. Previously, a voice chat with ChatGPT showed no live transcript of what was said. A transcript was generated after the session ended, but if you missed a message mid-conversation, you might have had to start all the way over to recover that information.
Live transcription also helps when ChatGPT mishears you and gives an incorrect or irrelevant answer. With the old post-session transcript, you could figure out what ChatGPT misunderstood by spotting what was transcribed incorrectly or not at all, but you then had to carry that correction into a new chat or voice session, which added friction to the whole process.

What is even cooler is that voice conversations can now display visual content like images and maps, which was previously reserved for regular text chats. Now that voice is built directly into chat, you get the same rich experience and the same visual guidance as someone typing in a normal ChatGPT conversation.
The new unified voice experience adds a meaningful layer of assistance to ChatGPT. We will have to wait and see what OpenAI ships next.