The Humane Ai Pin and Rabbit handheld have captured a good bit of press interest for their individual approaches to integrating generative AI with hardware. Humane, in particular, is presenting its wearable as a look at life beyond the smartphone. That naturally prompts the question: What, precisely, is wrong with the smartphone? While it’s true that the form factor has plateaued, these devices are still out in the world, in billions of hands.

Earlier this week, I met with Jerry Yue amid the din of Deutsche Telekom’s Mobile World Congress booth. After a product demo and a sit-down conversation, I admit that I’m impressed with the Brain.ai (alternately known as Brain Technologies) founder and CEO’s vision for the future of smartphones. I won’t go so far as saying I’m fully convinced until I’ve had an opportunity to spend more time with the product, but it absolutely paints a compelling picture of how generative AI might be foundational to the next generation of devices.

The whole “future of smartphones” bit may be hyperbolic, but at the very least, I suspect some of the biggest names in the biz are currently studying the way first-party generative AI effectively forms the backbone of the product’s operating system. But while phone companies may see the future here, consumers may find the interface harder to wrap their heads around. The implementation turns the current smartphone operating system paradigm on its head, and it takes a demo to fully grasp how it’s different and why it’s useful. I admit I wasn’t completely sold by the pitch alone, but watching it in action brought its usefulness into sharp focus.

The OS isn’t wholly disconnected from Google’s open operating system, though only in the sense that it’s built atop the Android kernel. As we’ve seen from the Trump-era development of Huawei’s HarmonyOS, it’s entirely possible to create something distinct from Android using that as a base. Here, generative AI is more than just integrated into the system: it’s the foundation of the way you interact with the device, how it responds and the interface it constructs.

The notion of an “AI phone” isn’t an altogether new one. In fact, it’s a phrase you’re going to hear a lot in the coming years. I guarantee you’ll be sick of it by December. Elements of AI/ML have been integrated into devices in some form for several years now. Among other things, the technology is foundational to computational photography — that is, the on-chip processing of the data collected by the camera sensor.

Earlier this month, however, Samsung became one of the first large companies to really lean into the notion of an “AI phone.” The distinction here is the arrival of generative AI — the technology behind programs like Google Gemini and ChatGPT. Once again, much of the integration happens on the imaging side, but it’s beginning to filter into other aspects, as well.


Given how big an investment Google has made in Gemini, it stands to reason that this trend will only ramp up in the coming years. Apple, too, will be entering the category later this year. I wouldn’t classify generative AI as a complete game-changer on these devices just yet, but it’s clear that the companies that don’t embrace it now are going to get left behind in the coming years.

Brain.ai’s use of the technology goes much deeper than other current implementations. From a hardware perspective, however, it’s a standard smartphone. In fact, the Deutsche Telekom deal that found Yue exhibiting in the magenta-laden booth means the operating system will initially see the light of day via the device known as the T-Mobile REVVL here in the States (known as the “T Phone” in international markets like the EU). The precise model, release date and nature of the deal will be revealed “soon,” according to Yue.

The truth, however, is that the Brain interface is designed to be hardware-agnostic, adapting to the form factor it’s run on. That’s not to say that hardware isn’t important, of course. At its heart, the T-Mobile REVVL Plus, for example, is a budget phone, priced at around $200. It’s not a flagship by any stretch, but it gives you decent bang for your buck, including a Snapdragon 625 processor and a dual rear camera with 13- and 15-megapixel sensors. Although 2GB isn’t much RAM, Yue insists that Brain.ai’s operating system can do more with less. And again, we don’t yet know what specs the device will ship with at launch.

The interface begins with a static screen. From there, you kick things off with either a voice or text prompt. In one example, Yue asks the system to “recommend a gift for my grandma, who can’t get out of bed.” From there, Brain goes to work pulling up not a written response to the query, but an interface specific to it — in this case, aggregated e-commerce results. The resulting page is barebones from a design perspective — black text on a white background. Sentences alternate with boxes showcasing results (in this case, blankets and Kindles).
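In the abstract, the pattern Yue is describing is “prompt in, purpose-built page out.” Here’s a minimal sketch of that idea in Python, purely illustrative and with hypothetical names; it is not Brain.ai’s actual code, just the general shape of generating a page from a query rather than launching an app:

```python
# Illustrative sketch only: a prompt comes in and a page layout is generated
# on the fly instead of opening a prebuilt app. Names are hypothetical, not
# Brain.ai's real API.
from dataclasses import dataclass, field


@dataclass
class ResultBox:
    title: str      # e.g. "Weighted blanket"
    retailer: str   # e.g. "Amazon"
    price: float


@dataclass
class GeneratedPage:
    query: str                # the editable prompt shown at the top of the page
    sentences: list[str]      # plain text interleaved with result boxes
    results: list[ResultBox] = field(default_factory=list)


def build_page(prompt: str) -> GeneratedPage:
    """Stand-in for the model call: turn a prompt into a minimal shopping page."""
    # A real system would ask a generative model to choose the layout and fetch
    # live listings; here we hardcode the gift-for-grandma example from the demo.
    return GeneratedPage(
        query=prompt,
        sentences=["A few gift ideas for someone who spends most of the day in bed:"],
        results=[
            ResultBox("Weighted blanket", "Amazon", 49.99),
            ResultBox("Kindle Paperwhite", "Amazon", 139.99),
        ],
    )


page = build_page("recommend a gift for my grandma, who can't get out of bed")
print(page.query)
for box in page.results:
    print(f"- {box.title} ({box.retailer}, ${box.price:.2f})")
```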

The query sits at the top. This, like much of the interface, is interactive. In this case, you can tap in to modify the search. Tapping on an image, meanwhile, will add it to a shopping cart for the third-party e-commerce site, and you can check out from there. I should note that all of the results in the demo were pulled directly from Amazon. Yue says the system will pull from some 7,000 retail sites at launch, and you can prioritize results by things like retailer and business size (if you’d prefer to support smaller businesses).
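That prioritization piece is easy to picture in the abstract, too. A toy re-ranking step, again purely illustrative with made-up fields rather than anything Brain has shown, might simply bump smaller retailers up the list when the user asks for that:

```python
# Toy re-ranker: boost results from smaller retailers when the user prefers
# to support small businesses. Purely illustrative; the fields are made up.
def rerank(results: list[dict], prefer_small_business: bool = False) -> list[dict]:
    def score(item: dict) -> float:
        s = -item["relevance"]                      # higher relevance sorts first
        if prefer_small_business and item["business_size"] == "small":
            s -= 1.0                                # bump for small retailers
        return s
    return sorted(results, key=score)


catalog = [
    {"title": "Fleece blanket", "retailer": "Amazon", "business_size": "large", "relevance": 0.9},
    {"title": "Handmade quilt", "retailer": "Local quilt shop", "business_size": "small", "relevance": 0.7},
]
for item in rerank(catalog, prefer_small_business=True):
    print(item["title"], "-", item["retailer"])
```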


Shopping is the first example Yue shows me, and many of the fundamentals apply across the board. Certainly there’s consistency in design across features. That’s due in large part to the fact that the device is actually devoid of third-party apps. This represents a massive shift from the smartphone landscape of the past 15-plus years.

“From a privacy and security perspective, we want to give a new level of control that people don’t have right now,” Yue says. “The computer’s understanding of you, now it’s aggregated into different apps. These AI models are black boxes — recommendation machines that exploit our attention. We believe in explainable AI. We will be explaining to you, each step of the way, why we are making a recommendation. You have more people owning the AI and not big tech black boxes.”

Adaptability is another big selling point. The model’s recommendations improve and become more personalized the more queries are run and refined. Of course, third parties were the primary reason app stores revolutionized the industry. Suddenly you’ve gone from a single company creating all of your phone’s experiences to a system that harnesses the smarts and creativity of countless developers. Brain’s experience will be a combination of what its 100-person team can produce and what the AI model can dream up. As the model improves, so, too, will its functionality. Brain.ai is relying on its own model for the primary interface, but will pull from third parties like OpenAI and Google when it determines they’re better equipped to answer a specific query.
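That kind of routing, defaulting to the in-house model and handing a query off to a third party when it’s better suited, is a familiar pattern. A heavily simplified sketch follows, with hypothetical names and a keyword check standing in for whatever classifier Brain actually uses:

```python
# Simplified routing sketch: default to the in-house interface model and hand
# off to a third-party model for queries it's less suited to. The keyword
# check below is a stand-in, not Brain.ai's actual logic.
def route_query(query: str) -> str:
    needs_broad_knowledge = any(
        kw in query.lower() for kw in ("explain", "history of", "write an essay")
    )
    if needs_broad_knowledge:
        return "third_party_llm"        # e.g. OpenAI or Google, per Yue
    return "brain_interface_model"      # in-house model that builds the page


for q in ("recommend a gift for my grandma", "explain how 5G beamforming works"):
    print(q, "->", route_query(q))
```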


There are limitations to what one can discover in a demo like this, so, as with many other elements, I’m going to have to wait until I have a shipping product in my hand to really evaluate the experience. I’m especially interested in how it handles certain applications, like imaging. It’s worth noting that the REVVL line doesn’t sport great cameras, so unless there’s a big upgrade, this won’t be the device for those who prioritize photos/videos.

The camera will also play an important role in search. One example we discussed is taking a photo of a menu in a foreign country. Not only will the system translate it (à la Google Lens), it will also offer food recommendations based on your tastes. Yue also briefly demonstrated the system’s image generation with a simple request befitting our surroundings: make magenta sneakers. It did so quickly, with the only real bottleneck being convention center connection speeds (ironic, given the setting).
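Conceptually, the menu example chains a few familiar stages: OCR, translation, then a recommendation pass against a taste profile. Here’s a schematic version with every stage stubbed out; it’s an assumption about the shape of the pipeline, not how Brain.ai actually implements it:

```python
# Schematic menu-photo pipeline: OCR -> translate -> recommend against a
# taste profile. Every stage here is a stub; per Yue, the real processing
# happens off-device.
def ocr(photo: bytes) -> list[str]:
    return ["木须肉", "麻婆豆腐"]          # stand-in for real OCR output


def translate(dishes: list[str]) -> list[str]:
    glossary = {"木须肉": "Moo shu pork", "麻婆豆腐": "Mapo tofu"}
    return [glossary.get(d, d) for d in dishes]


def recommend(dishes: list[str], taste_profile: set[str]) -> list[str]:
    # Trivial matcher: surface dishes whose tags overlap the user's profile.
    tags = {"Mapo tofu": {"spicy", "tofu"}, "Moo shu pork": {"savory", "pork"}}
    return [d for d in dishes if tags.get(d, set()) & taste_profile]


menu = translate(ocr(b"raw-photo-bytes"))
print(recommend(menu, taste_profile={"spicy"}))   # -> ['Mapo tofu']
```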

Connectivity is vitally important here. The AI processing is done off-device. I discussed the potential for adding some on-device processing, but Yue couldn’t confirm what it might look like at launch. Nor did I get an entirely clear answer on the offline experience. I suspect a big part of the reason Deutsche Telekom is so interested in the product is that it couldn’t exist in the same way without 5G. It recalls Mozilla’s ill-fated Firefox OS and the earliest days of Chrome OS, or any number of other examples of products that lose significant functionality when offline.


Yue founded Brain in 2015, and remained its sole employee until hiring a CTO the following year (Yue remains the sole founder). Born in China, he first connected to technology through a love of robotics and participation in the RoboCup robotic soccer tournament. At 18, he founded the Chinese social app Friendoc. Two years later, he co-founded Benlai.com, which is now one of the country’s largest food delivery apps. Yue has since returned to the Bay Area to run Brain.ai full time. To date, the company has raised $80 million.

After nearly a decade, the Brain interface is almost ready to launch — and it arrives at the perfect moment. The zeitgeist is very much focused on the kind of generative AI that powers the experience, from standalone devices like the Rabbit and the Humane Ai Pin to tech giants like Samsung pitching their own “AI phones.”
