Researchers develop 'vaccine' against attacks on machine learning

phys.org | 5/7/2019 | Staff

Researchers from CSIRO's Data61, the data and digital specialist arm of Australia's national science agency, have developed a world-first set of techniques to effectively 'vaccinate' algorithms against adversarial attacks, a significant advancement in machine learning research.

Algorithms 'learn' from the data they are trained on to create a machine learning model that can perform a given task effectively, such as making predictions or classifying images and emails, without needing specific instructions. These techniques are already widely used, for example to identify spam emails, diagnose diseases from X-rays and predict crop yields, and will soon drive our cars.
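The idea of learning a task from examples rather than hand-coded rules can be illustrated with a minimal sketch. This is not code from CSIRO's research; the training data, word-count scoring and labels below are all hypothetical, standing in for the spam-filtering example mentioned above.

```python
# Minimal sketch (not from the article): a classifier "learns" keyword
# weights from labelled examples instead of relying on hand-coded rules.
# All data and the scoring scheme here are hypothetical.
from collections import Counter

train = [
    ("win money now", "spam"),
    ("free prize win", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch tomorrow", "ham"),
]

# "Training": count how often each word appears under each label.
counts = {"spam": Counter(), "ham": Counter()}
for text, label in train:
    counts[label].update(text.split())

def classify(text):
    # Score each class by how often its training data contained
    # the message's words, and pick the higher-scoring class.
    scores = {label: sum(c[w] for w in text.split())
              for label, c in counts.items()}
    return max(scores, key=scores.get)

print(classify("win a free prize"))
print(classify("agenda for lunch"))
```

A real spam filter would use far more data and a probabilistic model, but the principle is the same: the behaviour comes from the training examples, not from explicit instructions.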


While the technology holds enormous potential to positively transform our world, artificial intelligence and machine learning are vulnerable to adversarial attacks, in which malicious input data is used to fool machine learning models into malfunctioning.

Dr. Richard Nock, machine learning group leader at CSIRO's Data61, said that by adding a layer of noise (i.e. an adversarial distortion) over an image, attackers can deceive machine learning models into misclassifying the image.
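The "layer of noise" idea can be sketched with a toy version of a well-known attack, the fast gradient sign method: nudge each input value slightly in the direction that most reduces the model's confidence. The tiny logistic model, its weights and the perturbation size below are all hypothetical, standing in for a trained image classifier; this is not Data61's code.

```python
import math

# Hypothetical toy "classifier": a logistic model with fixed weights,
# standing in for a trained image model. Not from the article.
W = [2.0, -3.0]
B = 0.5

def predict(x):
    # Probability the model assigns to the correct class.
    z = W[0] * x[0] + W[1] * x[1] + B
    return 1 / (1 + math.exp(-z))

def fgsm_perturb(x, eps):
    # Fast-gradient-sign-style attack: move each input feature a
    # small step (eps) against the sign of its weight, the direction
    # that most lowers the model's score for the correct class.
    return [xi - eps * math.copysign(1.0, wi) for xi, wi in zip(x, W)]

clean = [1.0, 0.2]
adv = fgsm_perturb(clean, eps=0.6)

print(predict(clean))  # confident, correct classification
print(predict(adv))    # same input plus small noise, now misclassified
```

The perturbation is small relative to the input, which is why such attacks can be invisible to humans while flipping the model's decision.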


"Adversarial attacks have proven capable of tricking a machine learning model into incorrectly labelling a traffic stop sign as a speed sign, which could have disastrous effects in the real world.

"Our...
(Excerpt) Read more at: phys.org