Last month, a team at Google got a tremendous amount of well-deserved attention when they tweaked their image-recognition system to draw objects. They had the system reveal, and then exaggerate, the features it uses to tell one object from another. If a layer of the network was identifying something as a face, say, the researchers’ algorithm fed that judgment back to the machine as a face, increasing the importance of whatever features it had used to make the call. The resulting Dalí-esque “Google Dream” images have turned up all over the web, especially since the researchers made their code available. You can even run your own photo through the process here.
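If you want a feel for what that feedback loop looks like in code, here is a minimal sketch, assuming PyTorch and a pretrained GoogLeNet from torchvision (the researchers’ released code used Caffe and added refinements like octaves and jitter that I’m skipping). The layer choice, step size, iteration count and the filename photo.jpg are all illustrative placeholders, and I’m omitting the usual ImageNet normalization for brevity; the point is just the core move of running an image forward, then nudging the pixels to amplify whatever an intermediate layer already responds to.

```python
# Sketch of the feedback/amplification idea, assuming PyTorch and a pretrained
# GoogLeNet from torchvision. Whatever an intermediate layer responds to in the
# input image gets exaggerated by gradient ascent on the pixels themselves.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.googlenet(weights="DEFAULT").eval()

# Grab the activations of one intermediate layer with a forward hook.
# (inception4c is an arbitrary mid-level choice; other layers give other styles.)
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inputs, output: activations.update(layer=output)
)

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
img = preprocess(Image.open("photo.jpg")).unsqueeze(0)  # hypothetical input file
img.requires_grad_(True)

for _ in range(50):
    model(img)
    # "More of whatever you already see there": maximize the layer's response.
    activations["layer"].norm().backward()
    with torch.no_grad():
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)  # ascend the gradient
        img.clamp_(0, 1)   # keep pixel values in a displayable range
        img.grad.zero_()

T.ToPILImage()(img.squeeze(0).detach()).save("dreamed.jpg")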
But something else about the original post caught my eye. At the time I read it, I was at work on this piece, which was published yesterday in Nautilus. In it, I discuss recent efforts to figure out what machine-learning algorithms are doing when they make weird, inhuman mistakes, like identifying a patch of TV static as a cheetah. Neural nets devise their own rules for deciding whether to connect a particular input to a label like “cheetah.” And what that research has made clear is that those rules (which we can’t readily inspect) are not the same ones humans use to decide what is what.
In their post discussing “Inceptionism,” the Google Dream work, the researchers—Alexander Mordvintsev, Christopher Olah and Mike Tyka—also mentioned that they could use their technique to shed light on just this issue. In some of their experiments, they reversed the usual neural-net image-identifying process. Instead of asking their system to find (to take one example) a dumbbell, they asked it to take random static and “gradually tweak the image” until it produced a dumbbell. The result, they explained, revealed the features the algorithm considered essential to label something a dumbbell. Here is one of those images:
Like all the other dumbbell images the machine produced, it includes some human flesh. Which means the algorithm seems to think that the dumbbell and the arm that lifts it are somehow one thing.
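Here is an equally rough sketch of that reversed process, under the same assumptions as the sketch above: start from random static and push the pixels toward whatever raises the network’s score for a single target class. The class index (543, which is “dumbbell” if I have the standard ImageNet mapping right), the optimizer and the step counts are my own illustrative choices.

```python
# Sketch of the reversal, again assuming PyTorch and torchvision's GoogLeNet:
# start from random static and push the pixels toward whatever raises the
# network's score for one target class.
import torch
import torchvision.models as models

model = models.googlenet(weights="DEFAULT").eval()
DUMBBELL = 543  # ImageNet class index for "dumbbell" (my reading of the standard mapping)

img = torch.rand(1, 3, 224, 224, requires_grad=True)  # the "random static"
optimizer = torch.optim.Adam([img], lr=0.05)

for _ in range(200):
    optimizer.zero_grad()
    score = model(img)[0, DUMBBELL]  # how dumbbell-ish the network finds the image
    (-score).backward()              # ascend the score by descending its negative
    optimizer.step()
    with torch.no_grad():
        img.clamp_(0, 1)             # keep pixel values in a displayable range
```

Run bare like this, the optimization tends to produce noisy, psychedelic textures rather than a recognizable dumbbell; the researchers describe adding priors that push the image toward natural-looking statistics, which is what makes their reconstructions legible — and what makes the stray bits of arm so telling.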
I wanted to get this into the Nautilus piece, but it ended up cut for reasons not worth going into here. So I’ll make the point in this blog post: The Inceptionism work at Google has turned up the same kind of artificial-intelligence weirdness that I wrote about. You may think the essential features of a dumbbell include weight, shape and maybe color. Google’s neural net thinks one essential feature of a dumbbell is a bit of human flesh. That’s not a reason to panic, but it is mighty intriguing.