I Witnessed the Future of AI, and It’s a Broken Toy

This story was supposed to have a different beginning. You were supposed to hear about how, earlier this week, I attended a splashy launch party for a new AI gadget—the Rabbit R1—in New York City, and then, standing on a windy curb outside the venue, pressed a button on the device to summon an Uber home. Instead, after maybe an hour of getting it set up and fidgeting with it, the connection failed.

The R1 is a bright-orange chunk of a device, with a camera, a mic, and a small screen. Press and hold its single button, ask it a question or give it a command using your voice, and the cute bouncing rabbit on screen will perk up its ears, then talk back to you. It’s theoretically like communicating with ChatGPT through a walkie-talkie. You could ask it to identify a given flower through its camera or play a song based on half-remembered lyrics; you could ask it for an Uber, but it might get hung up on the last step and leave you stranded in Queens.

When I finally got back to my hotel room, I turned on the R1’s camera and held up a cold slice of pizza. “What am I looking at?” I asked. “You are looking at a slice of pizza,” the voice told me. (Correct!) “It looks appetizing and freshly baked.” (Well, no.) I decided to try something else. “What are top 10 …” I stumbled, letting go of the button. I tried again: “What are the top 10 best use cases for AI for a normal person?” The device, perhaps confused by our previous interaction, started listing out pizza toppings beginning with the No. 2. “2. Sausage. 3. Mushrooms. 4. Extra Cheese.”
