
Research in the field of machine learning and AI, now a key technology in practically every industry and company, is far too voluminous for anyone to read it all. This column aims to collect some of the most relevant recent discoveries and papers — particularly in, but not limited to, artificial intelligence — and explain why they matter.

This week, AI found applications in several unexpected niches thanks to its ability to sort through large amounts of data, or alternatively to make sensible predictions based on limited evidence.

We’ve seen machine learning models taking on big data sets in biotech and finance, but researchers at ETH Zurich and LMU Munich are applying similar techniques to the data generated by international development aid projects such as disaster relief and housing. The team trained its model on millions of projects (amounting to $2.8 trillion in funding) from the last 20 years, an enormous dataset that is too complex to be manually analyzed in detail.

“You can think of the process as an attempt to read an entire library and sort similar books into topic-specific shelves. Our algorithm takes into account 200 different dimensions to determine how similar these 3.2 million projects are to each other – an impossible workload for a human being,” said study author Malte Toetzke.
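
The study’s actual pipeline isn’t reproducible from this description, but the general approach (placing each project in a high-dimensional feature space, then grouping projects by similarity) can be sketched in a few lines. In the hedged example below, the random embeddings, the 200-dimensional representation, and the cluster count are placeholder assumptions, not details from the ETH Zurich/LMU Munich work.

```python
# Minimal sketch of similarity-based grouping in a high-dimensional feature space.
# NOT the study's actual pipeline: the embeddings, dimensionality, and cluster
# count below are placeholder assumptions for illustration only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)

# Stand-in for project embeddings: n projects, 200 feature dimensions each.
n_projects, n_dims = 10_000, 200
embeddings = rng.normal(size=(n_projects, n_dims))

# Group projects into topic-like clusters (the "shelves" in the library analogy).
kmeans = KMeans(n_clusters=50, n_init=10, random_state=0).fit(embeddings)

# Pairwise similarity between two projects across all 200 dimensions.
sim = cosine_similarity(embeddings[:1], embeddings[1:2])[0, 0]
print(f"Project 0 sits in cluster {kmeans.labels_[0]}; similarity to project 1: {sim:.3f}")
```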

At a very high level, the trends suggest that spending on inclusion and diversity has increased, while climate spending has, surprisingly, decreased over the last few years. You can examine the dataset and trends they analyzed here.

Another area few people think about is the large number of machine parts and components that are produced by various industries at an enormous clip. Some can be reused, some recycled, others must be disposed of responsibly — but there are too many for human specialists to go through. German R&D outfit Fraunhofer has developed a machine learning model for identifying parts so they can be put to use instead of heading to the scrap yard.

A part sits on a table during a demonstration of the identification AI.

Image Credits: Fraunhofer

The system relies on more than ordinary camera views, since parts may look similar but be very different, or be identical mechanically but differ visually due to rust or wear. So each part is also weighed and scanned by 3D cameras, and metadata like origin is also included. The model then suggests what it thinks the part is so the human inspecting it doesn’t have to start from scratch. It’s hoped that tens of thousands of parts will soon be saved, and the processing of millions accelerated, by using this AI-assisted identification method.
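
Fraunhofer hasn’t published its model here, but the general shape of such a multi-signal classifier can be sketched as follows. Everything in this example (the feature layout, the random forest, the synthetic data, and the 200 part classes) is an assumption made for illustration, not a description of the actual system.

```python
# Hedged sketch of multi-signal part identification: combine camera features with
# weight, 3D-scan geometry, and metadata, then suggest the most likely part IDs so
# the human inspector only has to confirm. All data and model choices here are
# illustrative assumptions, not Fraunhofer's implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

n_parts, n_image_feats, n_scan_feats = 5_000, 64, 16
X = np.hstack([
    rng.normal(size=(n_parts, n_image_feats)),    # camera-derived visual features
    rng.normal(size=(n_parts, n_scan_feats)),     # 3D-scan geometry features
    rng.uniform(0.01, 50.0, size=(n_parts, 1)),   # measured weight in kg
    rng.integers(0, 10, size=(n_parts, 1)),       # encoded origin metadata
])
y = rng.integers(0, 200, size=n_parts)            # 200 hypothetical part classes

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# For a newly scanned part, surface the top-3 candidate IDs for the inspector.
probs = clf.predict_proba(X[:1])[0]
top3 = np.argsort(probs)[::-1][:3]
print("Suggested part classes:", [int(clf.classes_[i]) for i in top3])
```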

Physicists have found an interesting way to bring ML’s qualities to bear on a centuries-old problem. Essentially, researchers are always looking for ways to show that the equations that govern fluid dynamics (some of which, like Euler’s, date to the 18th century) are incomplete — that they break at certain extreme values. Using traditional computational techniques this is difficult to do, though not impossible. But researchers at CIT and Hang Seng University in Hong Kong propose a new deep learning method to isolate likely instances of fluid dynamics singularities, while other groups are applying the technique to the field in different ways. This Quanta article explains the development quite well.

Another centuries-old concept getting an ML layer is kirigami, the art of paper-cutting that many will be familiar with in the context of creating paper snowflakes. The technique goes back centuries in Japan and China in particular, and can produce remarkably complex and flexible structures. Researchers at Argonne National Laboratory took inspiration from the concept to theorize a 2D material that can retain its electronic properties at microscopic scale while also flexing easily.

The team had been doing tens of thousands of experiments with 1-6 cuts manually, and used that data to train the model. They then used a Department of Energy supercomputer to perform simulations down to the molecular level. In seconds it produced a 10-cut variation with 40 percent stretchability, far beyond what the team had expected or even tried on their own.

Simulation of molecules forming a stretchable 2D material.

Image Credits: Argonne National Labs

“It has figured out things we never told it to figure out. It learned something the way a human learns and used its knowledge to do something different,” said project lead Pankaj Rajak. The success has spurred them to increase the complexity and scope of the simulation.

In another interesting extrapolation, a specially trained computer vision model reconstructs color data from infrared inputs. Normally a camera capturing IR wouldn’t know anything about an object’s color in the visible spectrum. But this experiment found correlations between certain IR bands and visible ones, and created a model to convert images of human faces captured in IR into ones that approximate the visible spectrum.

It’s still just a proof of concept, but such spectrum flexibility could be a useful tool in science and photography.
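
The researchers used a deep model trained on face images; as a deliberately simplified stand-in for the idea (learning a mapping from several infrared band intensities to visible RGB values), here is a per-pixel regression on synthetic data. The number of IR bands and the linear relationship are assumptions for illustration only.

```python
# Toy sketch: learn a per-pixel mapping from infrared band intensities to RGB.
# The real work used a deep model on face images; this linear regression on
# synthetic data only illustrates the shape of the problem.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)

n_pixels, n_ir_bands = 100_000, 3       # three IR wavelengths per pixel (assumed)
ir = rng.uniform(0.0, 1.0, size=(n_pixels, n_ir_bands))

# Synthetic "ground truth": pretend visible RGB correlates linearly with the IR bands.
true_mix = rng.uniform(-1.0, 1.0, size=(n_ir_bands, 3))
rgb = np.clip(ir @ true_mix + 0.05 * rng.normal(size=(n_pixels, 3)), 0.0, 1.0)

model = LinearRegression().fit(ir, rgb)
predicted_rgb = np.clip(model.predict(ir[:5]), 0.0, 1.0)
print(predicted_rgb)  # approximate visible-spectrum colors for the first 5 pixels
```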

Meanwhile, a new study coauthored by Google AI lead Jeff Dean pushes back against the notion that AI is an environmentally costly endeavor, owing to its high compute requirements. While some research has found that training a large model like OpenAI’s GPT-3 can generate carbon dioxide emissions equivalent to those of a small neighborhood, the Google-affiliated study contends that “following best practices” can reduce machine learning carbon emissions by up to 1,000x.

The practices in question concern the types of models used, the machines used to train them, “mechanization” (e.g., computing in the cloud versus on local computers), and “map” (picking data center locations with the cleanest energy). According to the coauthors, selecting “efficient” models alone can reduce computation by factors of 5 to 10, while using processors optimized for machine learning training, such as GPUs, can improve performance per watt by factors of 2 to 5.
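
Only the first two factors above come from the paper as reported here; the rest of this back-of-the-envelope arithmetic assumes the savings from each practice multiply independently, with illustrative values for the cloud and clean-energy factors, to show how reductions on the order of 1,000x could compound.

```python
# Back-of-the-envelope compounding of the "4M" practices. Only the first two
# factors are cited above (5-10x and 2-5x); the midpoints and the last two
# factors are assumptions for illustration, not figures from the study.
model_efficiency = 7.5   # "efficient" model choice: cited range 5-10x, midpoint assumed
ml_hardware = 3.5        # ML-optimized processors: cited range 2-5x, midpoint assumed
cloud_datacenter = 2.0   # assumed gain from cloud "mechanization"
clean_energy = 5.0       # assumed gain from picking low-carbon "map" locations

combined = model_efficiency * ml_hardware * cloud_datacenter * clean_energy
print(f"Combined reduction: ~{combined:g}x")  # prints ~262.5x with these assumed factors
```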

Any thread of research suggesting that AI’s environmental impact can be lessened is indeed cause for celebration. But it must be pointed out that Google isn’t a neutral party. Many of the company’s products, from Google Maps to Google Search, rely on models that required large amounts of energy to develop and run.

Mike Cook, a member of the Knives and Paintbrushes open research group, points out that — even if the study’s estimates are accurate — there simply isn’t a good reason for a company not to scale up in an energy-inefficient way if doing so benefits them. While academic groups might pay attention to metrics like carbon impact, companies aren’t incentivized in the same way — at least not currently.

“The whole reason we’re having this conversation to begin with is that companies like Google and OpenAI had effectively infinite funding, and chose to leverage it to build models like GPT-3 and BERT at any cost, because they knew it gave them an advantage,” Cook told TechCrunch via email. “Overall, I think the paper says some nice stuff and it’s great if we’re thinking about efficiency, but the issue isn’t a technical one in my opinion — we know for a fact that these companies will go big when they need to, they won’t restrain themselves, so saying this is now solved forever just feels like an empty line.”

The last topic for this week isn’t exactly about machine learning, but rather about what might be a way forward in simulating the brain more directly. Bioinformatics researchers at EPFL created a mathematical model for generating vast numbers of unique but accurate simulated neurons that could eventually be used to build digital twins of neuroanatomy.

“The findings are already enabling Blue Brain to build biologically detailed reconstructions and simulations of the mouse brain, by computationally reconstructing brain regions for simulations which replicate the anatomical properties of neuronal morphologies and include region specific anatomy,” said researcher Lida Kanari.

Don’t expect sim-brains to make for better AIs — this is very much in pursuit of advances in neuroscience — but the insights from simulated neuronal networks may lead to fundamental improvements in our understanding of the processes AI seeks to imitate digitally.


