

AI Learns to Cheat on Test Given by Creators

 
 
 
 


An AI system tasked with reconstructing aerial images from street maps has ‘learned’ how to ‘cheat’ at its task, according to a 2017 research paper that recently caught public attention.

The system, called CycleGAN, was doing so well that its designers grew suspicious, and they eventually found that it was hiding data in its output that it would then use to reconstruct an image.

But that doesn’t necessarily mean CycleGAN has grown smarter; in fact, the designers said the system ‘cheated’ precisely because it wasn’t smart enough for the task at hand, according to TechCrunch:

The machine, not smart enough to do the actual difficult job of converting these sophisticated image types to each other, found a way to cheat that humans are bad at detecting. This could be avoided with more stringent evaluation of the agent’s results, and no doubt the researchers went on to do that.

As always, computers do exactly what they are asked, so you have to be very specific in what you ask them. In this case the computer’s solution was an interesting one that shed light on a possible weakness of this type of neural network — that the computer, if not explicitly prevented from doing so, will essentially find a way to transmit details to itself in the interest of solving a given problem quickly and easily.

In other words, the system was never explicitly told not to use the data it had hidden, which is just another variant of the age-old problem of computers doing only what humans tell them to do – or fail to forbid them from doing.
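The ‘hidden data’ in this case was an imperceptible, high-frequency signal the network embedded in its generated maps, conceptually similar to classic least-significant-bit steganography. As a rough illustration only (CycleGAN learned this implicitly during training; this hand-written sketch is not its actual mechanism), here is how one image can be smuggled inside the low-order bits of another:

```python
import numpy as np

def hide(cover: np.ndarray, secret: np.ndarray, bits: int = 2) -> np.ndarray:
    """Embed the top `bits` of `secret` into the low `bits` of `cover`.

    Both arrays are uint8 images of the same shape. Each pixel of `cover`
    changes by at most 2**bits - 1, so the edit is nearly invisible.
    """
    mask = (1 << bits) - 1
    cover_hi = cover & (0xFF ^ mask)   # clear the low-order bits of the cover
    secret_hi = secret >> (8 - bits)   # keep only the top bits of the secret
    return cover_hi | secret_hi

def reveal(stego: np.ndarray, bits: int = 2) -> np.ndarray:
    """Recover a coarse version of the secret from the low-order bits."""
    mask = (1 << bits) - 1
    return (stego & mask) << (8 - bits)

# Example: a flat gray 'cover' image and a brighter 'secret' image.
cover = np.full((2, 2), 200, dtype=np.uint8)   # 11001000
secret = np.full((2, 2), 176, dtype=np.uint8)  # 10110000
stego = hide(cover, secret)                    # 11001010 -> 202, looks like cover
recovered = reveal(stego)                      # 10000000 -> 128, coarse secret
```

A human inspecting `stego` sees what looks like the cover image, while the decoder can still pull a usable approximation of the secret back out — which is exactly why this kind of shortcut is hard for evaluators to spot.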


That poses a problem as computers become more advanced and the world grows more dependent on them: unlike computers, humans rely on unspoken assumptions, and if programmers assume a computer will honor those assumptions when it won’t, such as avoiding a particular data set, the results could be catastrophic.

In a way, this story says more about the privacy implications of Big Data than about the advancement of AI, because a computer system has to be specifically told NOT to use data sets with privacy implications.

Because, simply put, a computer can’t exactly interpret the Fourth Amendment when running a command line.









Copyright © 2009 The European Union Times – Breaking News, Latest News. All rights reserved.