Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.
This week, Google flooded the channels with announcements around Gemini, its new flagship multimodal AI model. Turns out it’s not as impressive as the company initially made it out to be — or, rather, the “lite” version of the model (Gemini Pro) Google released this week isn’t. (It doesn’t help matters that Google faked a product demo.) We’ll reserve judgment on Gemini Ultra, the full version of the model, until it begins making its way into various Google apps and services early next year.
But enough talk of chatbots. What’s a bigger deal, I’d argue, is a funding round that just barely squeezed into the workweek: Mistral AI raising €450 million (~$484 million) at a $2 billion valuation.
We’ve covered Mistral before. In September, the company, co-founded by Google DeepMind and Meta alumni, released its first model, Mistral 7B, which it claimed at the time outperformed others of its size. Mistral closed one of Europe’s largest seed rounds to date prior to Friday’s fundraise — and it hasn’t even launched a product yet.
Now, my colleague Dominic has rightly pointed out that Paris-based Mistral’s fortunes are a red flag for many concerned about inclusivity. The startup’s co-founders are all white and male, and academically fit the homogenous, privileged profile of many of those in The New York Times’ roundly criticized list of AI changemakers.
At the same time, investors appear to be viewing Mistral — as well as its sometime rival, Germany’s Aleph Alpha — as Europe’s opportunity to plant its flag in the very fertile (at present) generative AI ground.
So far, the highest-profile and best-funded generative AI ventures have been stateside. OpenAI. Anthropic. Inflection AI. Cohere. The list goes on.
Mistral’s good fortune is in many ways a microcosm of the fight for AI sovereignty. The European Union (EU) desires to avoid being left behind in yet another technological leap while at the same time imposing regulations to guide the tech’s development. As Germany’s Vice Chancellor and Minister for Economic Affairs Robert Habeck was recently quoted as saying: “The thought of having our own sovereignty in the AI sector is extremely important. [But] if Europe has the best regulation but no European companies, we haven’t won much.”
The entrepreneurship-regulation divide came into sharp relief this week as EU lawmakers attempted to reach an agreement on policies to limit the risk of AI systems. Lobbyists, led by Mistral, have in recent months pushed for a total regulatory carve-out for generative AI models. But EU lawmakers have resisted such an exemption — for now.
All this being said, a lot’s riding on Mistral and its European competitors; industry observers — and legislators stateside — will no doubt watch closely for the impact on investment once EU policymakers impose new restrictions on AI. Could Mistral someday grow to challenge OpenAI even with the regulations in place? Or will the regulations have a chilling effect? It’s too early to say — but we’re eager to see for ourselves.
Here are some other AI stories of note from the past few days:
- A new AI alliance: Meta, on an open source tear, wants to spread its influence in the ongoing battle for AI mindshare. The social network announced that it’s teaming up with IBM to launch the AI Alliance, an industry body to support “open innovation” and “open science” in AI — but ulterior motives abound.
- OpenAI turns to India: Ivan and Jagmeet report that OpenAI is working with former Twitter India head Rishi Jaitly as a senior advisor to facilitate talks with the government about AI policy. OpenAI is also looking to set up a local team in India, with Jaitly helping the AI startup navigate the Indian policy and regulatory landscape.
- Google launches AI-assisted note-taking: Google’s AI note-taking app, NotebookLM, which was announced earlier this year, is now available to U.S. users 18 years of age or older. To mark the launch, the experimental app got integration with Gemini Pro, Google’s new large language model, which Google says will “help with document understanding and reasoning.”
- OpenAI under regulatory scrutiny: The cozy relationship between OpenAI and Microsoft, a major backer and partner, is now the focus of a new inquiry launched by the Competition and Markets Authority in the U.K. over whether the two companies are effectively in a “relevant merger situation” after recent drama. The FTC is also reportedly looking into Microsoft’s investments in OpenAI in what appears to be a coordinated effort.
- Asking AI nicely: How can you reduce biases in an AI model if they’re baked in from its training data? Anthropic suggests asking it nicely to please, please not discriminate or someone will sue us. Yes, really. Devin has the full story.
- Meta rolls out AI features: Alongside other AI-related updates this week, Meta AI, Meta’s generative AI experience, gained new capabilities, including the ability to create images when prompted as well as support for Instagram Reels. The former feature, called “reimagine,” lets users in group chats recreate AI images with prompts, while the latter lets Meta AI draw on Reels as a resource when it’s helpful.
- Respeecher gets cash: Ukrainian synthetic voice startup Respeecher — which is perhaps best known for being chosen to replicate James Earl Jones and his iconic Darth Vader voice for a Star Wars animated show, then later a younger Luke Skywalker for The Mandalorian — is finding success despite not just bombs raining down on its city, but a wave of hype that has raised up sometimes controversial competitors, Devin writes.
- Liquid neural nets: An MIT spinoff co-founded by robotics luminary Daniela Rus aims to build general-purpose AI systems powered by a relatively new type of AI model called a liquid neural network. Called Liquid AI, the company raised $37.5 million this week in a seed round from backers including WordPress parent company Automattic.
More machine learnings
Orbital imagery is an excellent playground for machine learning models, since these days satellites produce more data than experts can possibly keep up with. EPFL researchers are looking into better identifying ocean-borne plastic, a huge problem but a very difficult one to track systematically. Their approach isn’t shocking — train a model on labeled orbital images — but they’ve refined the technique so that their system is considerably more accurate, even when there’s cloud cover.
Finding it is only part of the challenge, of course, and removing it is another, but the better intelligence people and organizations have when they perform the actual work, the more effective they will be.
Not every domain has so much imagery, however. Biologists in particular face a challenge in studying animals that are not adequately documented. For instance, they might want to track the movements of a certain rare type of insect, but due to a lack of imagery of that insect, automating the process is difficult. A group at Imperial College London is putting machine learning to work on this using the Unreal game engine.
By creating photo-realistic scenes in Unreal and populating them with 3D models of the critter in question, be it an ant, stick insect, or something bigger, they can create arbitrary amounts of training data for machine learning models. Though the computer vision system will have been trained on synthetic data, it can still be very effective in real-world footage, as their video shows.
You can read their paper in Nature Communications.
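The appeal of this approach is that labels come free: because the scene is synthesized, the ground truth (species, pose, lighting) is known by construction. Here’s a minimal sketch of that domain-randomization idea in Python; the parameter names and ranges are illustrative assumptions, not the team’s actual pipeline, and the rendering step is stubbed out entirely.

```python
# Hypothetical sketch of domain randomization for synthetic training data:
# sample random scene parameters, render them (stubbed here), and keep the
# ground-truth label that comes free with synthetic imagery.
import random

def sample_scene(species="stick insect", seed=None):
    """Sample one randomized scene description with its known label."""
    rng = random.Random(seed)
    return {
        "species": species,                       # label known by construction
        "pose_deg": rng.uniform(0, 360),          # random insect orientation
        "sun_elevation_deg": rng.uniform(5, 85),  # random lighting
        "camera_dist_m": rng.uniform(0.1, 1.0),   # random camera distance
        "background": rng.choice(["leaf litter", "bark", "grass"]),
    }

def make_dataset(n, seed=0):
    """Generate n labeled scene descriptions, reproducibly for a given seed."""
    rng = random.Random(seed)
    return [sample_scene(seed=rng.random()) for _ in range(n)]

dataset = make_dataset(1000)
```

In the real pipeline, each sampled scene would be rendered photo-realistically in Unreal and paired with its label to train the computer vision model, which is then evaluated on real footage.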
Not all generated imagery is so reliable, though, as University of Washington researchers found. They systematically prompted the open source image generator Stable Diffusion 2.1 to produce images of a “person” with various qualifiers or locations. They showed that the term “person” is disproportionately associated with light-skinned, Western men.
Not only that, but certain locations and nationalities produced unsettling patterns, like sexualized imagery of women from Latin American countries and “a near-complete erasure of nonbinary and Indigenous identities.” For instance, asking for pictures of “a person from Oceania” produced white men and no Indigenous people, despite the latter being numerous in the region (not to mention all the other non-white-guy people). It’s all a work in progress, and being aware of the biases inherent in the data is important.
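An audit like this is largely a matter of systematically enumerating prompts and then labeling the outputs. As a rough, hypothetical sketch (the prompt template and location lists below are my assumptions, not the researchers’ exact protocol), the prompt grid might be built like this:

```python
# Hypothetical sketch of building a prompt grid for a bias audit:
# one unmodified baseline prompt plus one variant per location.
BASE = "a front-facing photo of a person"
CONTINENTS = ["Africa", "Asia", "Europe", "North America", "Oceania", "South America"]
COUNTRIES = ["Australia", "Papua New Guinea", "New Zealand"]

def build_prompts():
    """Return the baseline prompt followed by one prompt per location."""
    prompts = [BASE]
    for place in CONTINENTS + COUNTRIES:
        prompts.append(f"{BASE} from {place}")
    return prompts

for prompt in build_prompts():
    print(prompt)
```

Each prompt would then be run through Stable Diffusion 2.1 many times (for instance via the Hugging Face diffusers library), and the resulting images annotated for attributes like perceived gender and skin tone before comparing distributions across locations.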
Learning how to navigate biased and questionably useful models is on a lot of academics’ minds — and those of their students. This interesting chat with Yale English professor Ben Glaser is a refreshingly optimistic take on how things like ChatGPT can be used constructively:
> When you talk to a chatbot, you get this fuzzy, weird image of culture back. You might get counterpoints to your ideas, and then you need to evaluate whether those counterpoints or supporting evidence for your ideas are actually good ones. And there’s a kind of literacy to reading those outputs. Students in this class are gaining some of that literacy.

> If everything’s cited, and you develop a creative work through some elaborate back-and-forth or programming effort including these tools, you’re just doing something wild and interesting.
And when should these models be trusted in, say, a hospital? Radiology is a field where AI is frequently being applied to help quickly identify problems in scans of the body, but it’s far from infallible. So how should doctors know when to trust the model and when not to? MIT seems to think that they can automate that part too — but don’t worry, it’s not another AI. Instead, it’s a standard, automated onboarding process that helps determine when a particular doctor or task would find an AI tool helpful, and when it would get in the way.
Increasingly, AI models are being asked to generate more than text and images. Materials are one place where we’ve seen a lot of movement — models are great at coming up with likely candidates for better catalysts, polymer chains, and so on. Startups are getting in on it, but Microsoft also just released a model called MatterGen that’s “specifically designed for generating novel, stable materials.”
According to Microsoft, the model can target lots of different qualities, from magnetism to reactivity to size. No need for a Flubber-like accident or thousands of lab runs — this model could help you find a suitable material for an experiment or product in hours rather than months.
Google DeepMind and Berkeley Lab are also working on this kind of thing. It’s quickly becoming standard practice in the materials industry.
techcrunch.com