Don't Worry About the Vase

Google and OpenAI both premiered the early versions of their fully multimodal, eventually fully integrated AI agents. Soon your phone experience will get more and more tightly integrated with AI. You will talk to your phone, or your computer, and it will talk back, and it will do all the things. It will hear your tone of voice and understand your facial expressions. It will remember the contents of your inbox and all of your quirky preferences.

It will plausibly be a version of Her, from the hit movie ‘Are we sure about building this Her thing, seems questionable?’

OpenAI won this round of hype going away, because it premiered, and for some modalities released, the new GPT-4o. GPT-4o is tearing up the Arena, and in many ways is clearly giving the people what they want. If nothing else, it is half the price of GPT-4-Turbo, and it is lightning fast, including fast web searches. Together those have me (at least for now) switching back to ChatGPT as my default, after giving Gemini Advanced (or Pro 1.5) and Claude Opus their time in the sun, although Gemini still has the long context use case locked up.

Due to the need to triage for now and ensure everything gets its proper attention, this post does drop a number of important developments.