If the datasets used to train machine-learning models contain biased data, the models are likely to exhibit that same bias when they make decisions in practice. A group of researchers at MIT, in collaboration with researchers at Harvard University and a tech company, sought to understand when and how a machine-learning model can overcome this kind of dataset bias.
The researchers used an approach from neuroscience to study how training data affects whether an artificial neural network can learn to recognise objects it has not seen before. A neural network is a machine-learning model that mimics the human brain: it contains layers of interconnected nodes, or neurons, that process data.
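To make the "layers of interconnected nodes" idea concrete, here is a minimal illustrative sketch of a feed-forward network with one hidden layer. The weights and layer sizes below are made up for illustration; in a real model they would be learned from training data rather than fixed by hand.

```python
def relu(x):
    """A common activation function: a neuron 'fires' only for positive input."""
    return max(0.0, x)

def layer(inputs, weights, biases):
    # Each neuron sums its weighted inputs, adds a bias, and applies ReLU.
    return [relu(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x):
    # Two inputs feed a hidden layer of two neurons, which feeds one output.
    hidden = layer(x, weights=[[0.5, -0.2], [0.3, 0.8]], biases=[0.1, 0.0])
    output = layer(hidden, weights=[[1.0, -1.0]], biases=[0.0])
    return output

print(forward([1.0, 2.0]))
```

Each list of weights connects one layer to the next, which is the "interconnected" structure the article describes; stacking many such layers gives a deep network.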
A neural network can overcome dataset bias, which is encouraging. But the main takeaway here is that we need to take into account data diversity. We need to stop thinking that if you just collect a ton of raw data, that is going to get you somewhere. We need to be very careful about how we design datasets in the first place.
– Xavier Boix, Research Scientist, Department of Brain and Cognitive Sciences
The new results show that diversity in training data has a major influence on whether a neural network can overcome bias, but also that dataset diversity can degrade the network’s performance. They further show that the way a neural network is trained, and the specific types of neurons that emerge during training, can play a major role in whether it overcomes a biased dataset.
The researchers also studied methods for training the neural network. In machine learning, it is common to train a network to perform multiple tasks at the same time. The idea is that if a relationship exists between the tasks, the network will learn to perform each one better if it learns them together. But the researchers found the opposite to be true—a model trained separately for each task was able to overcome bias far better than a model trained for both tasks together.
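The contrast between the two training regimes can be sketched with a toy gradient-descent example. This is not the researchers' code or their actual objectives: the two quadratic "losses" below are dummies standing in for real task losses, chosen so the effect is easy to see.

```python
def grad_step(w, grad, lr=0.1):
    """One step of gradient descent on a scalar parameter."""
    return w - lr * grad

def task_a_grad(w):
    return 2 * (w - 3.0)   # dummy loss (w - 3)^2, minimised at w = 3

def task_b_grad(w):
    return 2 * (w + 1.0)   # dummy loss (w + 1)^2, minimised at w = -1

# Separate training: each model specialises fully on its own objective.
wa = wb = 0.0
for _ in range(100):
    wa = grad_step(wa, task_a_grad(wa))
    wb = grad_step(wb, task_b_grad(wb))

# Joint training: one shared parameter must serve both objectives at once,
# so it settles on a compromise between the two optima.
w = 0.0
for _ in range(100):
    w = grad_step(w, task_a_grad(w) + task_b_grad(w))

print(wa, wb, w)  # wa converges to 3, wb to -1, the shared w to 1
```

The separately trained parameters reach their own optima, while the jointly trained one lands between them. This is only an analogy for the "diluted", unspecialised neurons the researchers describe, but it shows mechanically how sharing capacity across tasks can pull a model away from either task's best solution.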
When the network is trained to perform the tasks separately, specialised neurons become more prominent. But if a network is trained to do both tasks simultaneously, some neurons become diluted and do not specialise in one task. These unspecialised neurons are more likely to get confused. The next question is: how did these neurons get there?
That is one area the researchers hope to explore with future work. They want to see if they can force a neural network to develop neurons with this specialisation. They also want to apply their approach to more complex tasks, such as objects with complicated textures or varied illuminations.
As reported by OpenGov Asia, MIT researchers have demonstrated diminutive drones that can zip around with bug-like agility and resilience, and could eventually perform useful tasks. The soft actuators that propel these microrobots are very durable, but they require much higher voltages than similarly sized rigid actuators. As a result, the featherweight robots cannot carry the power electronics that would allow them to fly on their own.
Now, the researchers have pioneered a fabrication technique that enables them to build soft actuators that operate with 75% lower voltage than current versions while carrying 80% more payload. These soft actuators act like artificial muscles, rapidly flapping the robot’s wings. The new technique produces artificial muscles with fewer defects, which dramatically extends the lifespan of the components and improves the robot’s performance and payload capacity.