A significant advancement in machine learning research has been achieved by researchers from CSIRO’s Data61, the data and digital specialist arm of Australia’s national science agency.
According to a recent press release, the team has developed a world-first set of techniques to effectively ‘vaccinate’ algorithms against adversarial attacks.
Algorithms ‘learn’ from the data they are trained on to create a machine learning model that can perform a given task effectively without needing specific instructions, whether that task is making predictions or accurately classifying images and emails.
These techniques are already in wide use. They identify spam emails, diagnose diseases from X-rays, predict crop yields, and will soon drive cars.
Vulnerable to adversarial attacks
Even though the technology holds enormous potential to positively transform the world, artificial intelligence and machine learning are vulnerable to adversarial attacks.
These attacks are techniques employed to fool machine learning models through the input of malicious data, causing them to malfunction.
The machine learning group leader at CSIRO’s Data61 explained that attackers can deceive machine learning models into misclassifying an image by adding a layer of noise (an adversarial perturbation) over the image.
Adversarial attacks have proven capable of tricking a machine learning model into incorrectly labelling a traffic stop sign as a speed sign, which could have disastrous effects in the real world.
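To make the "layer of noise" idea concrete, here is a minimal sketch of a standard fast-gradient-style attack (in the spirit of FGSM) against a toy linear classifier. Everything here is made up for illustration — the weights, the input, and the attack are not CSIRO's specific method, just the textbook mechanism the article describes:

```python
import numpy as np

# A toy linear "image" classifier: three pixel values in, one score out.
# These weights are invented for illustration only.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    """Class 1 if the linear score is positive, else class 0."""
    return int(w @ x + b > 0)

def fgsm_attack(x, true_label, eps):
    """Add an eps-sized layer of noise that pushes the score away from
    the correct class. For a linear model the loss gradient with respect
    to the input is just w, so this perturbation is the exact worst case."""
    direction = 1 if true_label == 1 else -1
    return x - direction * eps * np.sign(w)

x = np.array([1.0, 0.0, 0.0])               # correctly classified as class 1
x_adv = fgsm_attack(x, true_label=1, eps=0.5)

print(predict(x))       # 1
print(predict(x_adv))   # 0: a small, targeted perturbation flips the label
```

Note that no single pixel changes by more than 0.5, yet the prediction flips — this is exactly why a stop sign can be misread as a speed sign without the tampering being obvious to a human.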
Protecting AI and Machine Learning algorithms
The Agency’s new techniques prevent adversarial attacks using a process similar to vaccination.
They achieve this by applying a weak version of an adversary, such as small modifications or distortions, to a collection of images.
Doing so creates a more ‘difficult’ training data set. When the algorithm is trained on data exposed to a small dose of distortion, the resulting model is more robust and resistant to adversarial attacks.
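A minimal sketch of this "small dose of distortion" idea is below — a weak random perturbation applied to a batch of training images. This is a simplified stand-in for the weak adversary described above, assuming pixel values in [0, 1]; the actual CSIRO techniques are those in their ICML 2019 paper:

```python
import numpy as np

def vaccinate(images, eps=0.05, seed=0):
    """Return a 'harder' copy of a training set: each image receives a
    weak random distortion of at most eps per pixel, clipped back to the
    valid pixel range. Illustrative only, not CSIRO's exact procedure."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-eps, eps, size=images.shape)
    return np.clip(images + noise, 0.0, 1.0)

# Ten fake 8x8 grayscale "images" with pixel values in [0, 1].
clean = np.random.default_rng(1).random((10, 8, 8))
hard = vaccinate(clean, eps=0.05)

# Training then runs on the distorted set (or on clean and distorted combined),
# so the model learns to ignore perturbations of this size.
```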
In a research paper accepted at the 2019 International Conference on Machine Learning (ICML), the researchers also demonstrate that the ‘vaccination’ techniques are built from the worst possible adversarial examples, and can therefore withstand very strong attacks.
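The "worst possible adversarial examples" idea can be sketched as a training loop that, at every step, replaces each training point with its worst-case perturbation before updating the model. The sketch below uses a linear classifier on synthetic data, where the worst L-infinity perturbation has a closed form; the data, model, and parameters are all invented for illustration and are not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(42)

# Two well-separated synthetic classes (all data here is made up).
n = 200
X = np.vstack([rng.normal(-1.0, 0.15, (n, 2)),
               rng.normal(+1.0, 0.15, (n, 2))])
y = np.array([0] * n + [1] * n)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def worst_case(X, y, w, eps):
    # For a linear model, the worst perturbation of size eps per feature
    # is exact: shift every point by eps against its own class.
    return X - eps * np.sign(w) * (2 * y - 1)[:, None]

def train_robust(X, y, eps=0.3, lr=0.5, epochs=300):
    """Logistic regression trained on worst-case perturbed inputs."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        Xa = worst_case(X, y, w, eps)   # the 'vaccine' examples
        p = sigmoid(Xa @ w + b)
        w -= lr * Xa.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

def accuracy(X, y, w, b):
    return float(((X @ w + b > 0).astype(int) == y).mean())

w, b = train_robust(X, y)
acc_clean = accuracy(X, y, w, b)                          # clean inputs
acc_attacked = accuracy(worst_case(X, y, w, 0.3), y, w, b)  # attacked inputs
```

Because the model only ever sees the hardest perturbation the attacker could apply, it remains accurate even when every test point is attacked at the same strength — the sense in which a vaccine built from the worst cases withstands strong attacks.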
This research, according to the CEO of CSIRO’s Data61, is a significant contribution to the growing field of adversarial machine learning.
Artificial intelligence and machine learning can help solve some of the world’s greatest social, economic and environmental challenges.
However, that can only happen if adversarial machine learning technologies receive focused research as well.
Other AI initiatives
The new techniques against adversarial attacks will spark a new line of machine learning research and ensure the positive use of transformative AI technologies.
Additionally, CSIRO recently invested AU$19 million into an Artificial Intelligence and Machine Learning Future Science Platform.
This will target AI-driven solutions for areas including food security and quality, health and wellbeing, sustainable energy and resources, resilient and valuable environments, and Australian and regional security.
Moreover, Data61 led the development of an AI ethics framework for Australia, released by the Australian Government for public consultation in April 2019.