AI generates hypotheses that human scientists haven’t thought of


Electric vehicles have the potential to dramatically reduce carbon emissions, but automakers are running out of materials to make batteries. One crucial component, nickel, is expected to cause supply shortages starting at the end of this year. Scientists recently discovered four new materials that could potentially help, and what may be even more intriguing is how they found these materials: the researchers used artificial intelligence to select useful chemicals from a list of more than 300 options. And they are not the only humans looking to AI for scientific inspiration.

The creation of hypotheses has long been a purely human domain. Now, though, scientists are beginning to ask machine learning to produce original insights. They design neural networks (a type of machine learning setup with a structure loosely inspired by the human brain) that suggest new hypotheses based on patterns the networks find in data, instead of relying on human assumptions. Many fields may soon turn to the muse of machine learning to speed up the scientific process and reduce human biases.

In the case of new battery materials, scientists pursuing such tasks have typically relied on database search tools, modeling, and their own chemical intuition to pick out useful compounds. Instead, a team at the University of Liverpool in England used machine learning to streamline the creative process. The researchers developed a neural network that ranked chemical combinations by how likely they were to yield a useful new material, then used those rankings to guide their lab experiments. They identified four promising candidates for battery materials without having to test everything on their list, saving themselves months of trial and error.

“It’s a great tool,” says Andrij Vasylenko, a research associate at the University of Liverpool and co-author of the battery materials study, published in Nature Communications last month. The AI process helps identify chemical combinations that are worth examining, he adds, so that “we can cover a lot more chemical space faster.”
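Here is a minimal sketch of the kind of ranking workflow the article describes, not the Liverpool team's actual code or data: a small neural network scores candidate chemical combinations by predicted usefulness so that only the top-ranked ones go to the lab. The feature vectors, labels, and candidate list below are hypothetical placeholders.

```python
# Illustrative sketch (not the published method): rank candidate chemical
# combinations by a neural network's predicted probability of yielding a
# useful material, then prioritize lab experiments accordingly.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical training set: each row is a descriptor vector for a known
# composition (e.g. element fractions), label 1 = turned out to be useful.
X_train = rng.random((200, 8))
y_train = (X_train[:, 0] + X_train[:, 3] > 1.0).astype(int)  # toy labels

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# 300+ unexplored candidate compositions, echoing the article's scenario.
candidates = rng.random((300, 8))
scores = model.predict_proba(candidates)[:, 1]  # predicted P(useful) per candidate

# Rank the candidates and send only the most promising few to the lab.
top = np.argsort(scores)[::-1][:4]
for rank, idx in enumerate(top, start=1):
    print(f"rank {rank}: candidate #{idx}, predicted usefulness {scores[idx]:.2f}")
```

The point of such a ranking is not to replace experiments but to decide which experiments to run first, which is where the months of saved trial and error come from.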

The discovery of new materials is not the only area where machine learning could contribute to science. Researchers are also applying neural networks to larger technical and theoretical questions. Renato Renner, a physicist at the Institute for Theoretical Physics in Zurich, hopes one day to use machine learning to develop a unified theory of how the universe works. But before AI can uncover the true nature of reality, researchers must tackle the notoriously difficult question of how neural networks make their decisions.

Getting inside the mind of machine learning

Over the past decade, machine learning has become an extremely popular tool for categorizing big data and making predictions. But explaining the logical basis of its decisions can be very difficult. Neural networks are built from interconnected nodes, modeled on the neurons of the brain, with a structure that changes as information flows through them. While this adaptive model is capable of solving complex problems, it is also often impossible for humans to decode the logic involved.
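A toy sketch of that structure, with made-up weights standing in for trained values, shows why the logic is hard to read off: the network's "decision" is nothing more than repeated matrix multiplications and nonlinearities, and no single weight has a human-readable meaning.

```python
# Minimal toy feedforward network: information flows node to node through
# layers of learned weights (random stand-ins here), ending in one output.
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(0.0, x)

W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)   # first layer of weights
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)    # second layer of weights

def predict(features):
    hidden = relu(features @ W1 + b1)  # intermediate activations
    return hidden @ W2 + b2            # final output, e.g. a risk score

print(predict(rng.random(8)))
```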

This lack of transparency has been dubbed “the black box problem” because no one can see inside a network to explain its “thought” process. Not only does this opacity undermine confidence in the results, it also limits how much neural networks can contribute to humans’ scientific understanding of the world.

Some scientists are trying to make the black box transparent by developing “interpretability techniques,” which attempt to offer a step-by-step explanation of how a network arrives at its answers. It may not be possible to get a high level of detail out of complex machine learning models, but researchers can often identify broader trends in the way a network processes data, sometimes leading to startling findings, such as who is most likely to develop cancer.

Several years ago, Anant Madabhushi, a professor of biomedical engineering at Case Western Reserve University, used interpretability techniques to understand why some patients are more likely than others to have a recurrence of breast or prostate cancer. He fed patient scans to a neural network, and the network identified those at higher risk of the cancer coming back. Then Madabhushi analyzed the network to find the feature that mattered most in determining a patient’s likelihood of developing cancer again. The results suggested that how tightly packed the glands’ interior structures are is the factor that most accurately predicts whether a cancer will come back.

“It wasn’t a guess. We didn’t know it,” says Madabhushi. “We used a methodology to discover an attribute of the disease that turned out to be important.” It was only after the AI drew its conclusion that his team found the result was also consistent with the current scientific literature on the pathology. The neural network cannot yet explain why gland-structure density contributes to cancer, but it still helped Madabhushi and his colleagues better understand how tumor growth progresses, leading to new directions for future research.
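One common interpretability technique that answers this kind of "which feature mattered most?" question is permutation importance. The sketch below is not Madabhushi's actual pipeline, and the feature names and data are hypothetical placeholders; it only shows the general idea of probing a trained model for its most influential input.

```python
# Hedged sketch of permutation importance: shuffle one input feature at a
# time and measure how much the model's accuracy drops. The feature whose
# shuffling hurts most is the one the model relies on most.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
feature_names = ["gland_packing_density", "nucleus_size", "cell_count", "texture_entropy"]

# Hypothetical dataset in which one feature (index 0) actually drives recurrence.
X = rng.random((400, 4))
y = (X[:, 0] > 0.6).astype(int)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000, random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name}: importance {score:.3f}")
```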

When AI hits a wall

While peeking inside the black box can help humans construct new scientific hypotheses, “we still have a long way to go,” says Soumik Sarkar, an associate professor of mechanical engineering at Iowa State University. Interpretability techniques can hint at correlations that show up during the machine learning process, but they cannot prove causation or offer explanations. They still rely on subject-matter experts to make sense of the network.

Machine learning also often uses data collected through human processes, which can lead it to reproduce human biases. One neural network, called Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), has even been accused of being racist. The network was used to predict the likelihood that convicted offenders would reoffend. A ProPublica investigation reportedly found that the system falsely flagged black prisoners as likely to break the law after release almost twice as often as white prisoners in a county in Florida. Equivant, formerly Northpointe, the criminal justice software company that created COMPAS, has contested ProPublica’s analysis and says its risk assessment program was mischaracterized.

Despite these problems, Renner, the Zurich-based physicist, remains hopeful that machine learning can help people pursue knowledge from a less biased vantage point. Neural networks could inspire people to think about old questions in new ways, he says. While the networks cannot yet form hypotheses entirely on their own, they can give hints and point scientists toward a different view of a problem.

Renner is going so far as to try to design a neural network that can examine the true nature of the cosmos. Physicists have been unable to reconcile two theories of the universe, quantum theory and Einstein’s general theory of relativity, for nearly a century. But Renner hopes machine learning will give him the new perspective he needs to bridge gaps in scientific understanding of how matter works at the very small and the very large scales.

“We can only make great strides in physics if we look at things in unconventional ways,” he says. For now, he is building the network using historical theories, giving it a sense of how humans think the universe is structured. In the coming years, he plans to ask it to come up with its own answer to this ultimate question.

