When the human brain learns something new, it adapts. But when artificial intelligence learns something new, it tends to forget information it already learned. As companies use more and more data to improve how AI recognises images, learns languages and carries out other complex tasks, a paper published in Science this week shows a way that computer chips could dynamically rewire themselves to take in new data the way the brain does, helping AI to keep learning over time.
The brains of living beings can continuously learn throughout their lifespan. We have now created an artificial platform for machines to learn throughout their lifespan.
– Shriram Ramanathan, Professor of Materials Engineering, Purdue University
Unlike the brain, which constantly forms new connections between neurons to enable learning, the circuits on a computer chip don’t change. A circuit that a machine has been using for years isn’t any different from the circuit that was originally built for the machine in a factory.
This is a problem for making AI more portable, such as for autonomous vehicles or robots in space that would have to make decisions on their own in isolated environments. If AI could be embedded directly into hardware rather than just running on software as AI typically does, these machines would be able to operate more efficiently.
In this study, the researchers built a new piece of hardware that can be reprogrammed on demand through electrical pulses. Ramanathan believes that this adaptability would allow the device to take on all of the functions that are necessary to build a brain-inspired computer.
The hardware is a small, rectangular device made of a material that is very sensitive to hydrogen. Applying electrical pulses at different voltages allows the device to shuffle a concentration of hydrogen ions in a matter of nanoseconds, creating states that the researchers found could be mapped out to corresponding functions in the brain.
Through simulations of the experimental data, the Purdue team’s collaborators at Santa Clara University and Portland State University showed that the internal physics of this device creates a dynamic structure for an artificial neural network that is able to recognize electrocardiogram patterns and digits more efficiently than static networks. This neural network uses “reservoir computing,” a framework inspired by how different parts of the brain communicate and transfer information.
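To make the idea concrete, here is a minimal software sketch of reservoir computing (an echo state network). This is illustrative only and is not the team’s model: in the paper the reservoir is realized by the device’s internal physics, whereas here it is a fixed random recurrent matrix, and only a linear readout is trained — on a toy next-step prediction task rather than ECG or digit data.

```python
# Minimal echo state network (reservoir computing) sketch.
# Assumption: a fixed random reservoir stands in for the device's
# internal dynamics; only the linear readout layer is trained.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))      # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))        # fixed recurrent weights
# Scale so the spectral radius is below 1 (the "echo state" property).
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(u):
    """Drive the reservoir with input sequence u; collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next sample of a sine wave.
t = np.arange(0, 20, 0.1)
u, y = np.sin(t[:-1]), np.sin(t[1:])

X = run_reservoir(u)
# Train only the readout, via closed-form ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

pred = X @ W_out
print("mean absolute error:", np.mean(np.abs(pred - y)))
```

The design point this illustrates is why reservoir computing suits adaptive hardware: the expensive, nonlinear part (the reservoir) never needs training, so a physical system with rich dynamics can play that role, leaving only a cheap linear readout to learn.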
Since the team was able to build the device using standard semiconductor-compatible fabrication techniques and operate the device at room temperature, the researchers believe that this technique can be readily adopted by the semiconductor industry. The researchers are working to demonstrate these concepts on large-scale test chips that would be used to build a brain-inspired computer.
As reported by OpenGov Asia, a new report finds that Artificial Intelligence (AI) has reached a critical turning point in its evolution. Substantial advances in language processing, computer vision and pattern recognition mean that AI is touching people’s lives daily—from helping people choose a movie to aiding in medical diagnoses.
With that success, however, comes a renewed urgency to understand and mitigate the risks and downsides of AI-driven systems, such as algorithmic discrimination or the use of AI for deliberate deception. Computer scientists must work with experts in the social sciences and law to ensure that the pitfalls of AI are minimised.
In terms of AI advances, the panel noted substantial progress across subfields of AI, including speech and language processing, computer vision and other areas. Much of this progress has been driven by advances in machine learning techniques, particularly deep learning systems, which have leapt in recent years from the academic setting to everyday applications.