You may not have heard of the Google Brain team. It is a research team based in Mountain View, California, and, as the name suggests, it is all about A.I. development. Specifically, A.I. functionality achieved using neural networks.
Recently, Google Brain took its three resident neural networks, named Alice, Bob and Eve, and gave them a little problem to work on. Alice was instructed to encrypt a message and send it to Bob. Bob was instructed to decrypt the message from Alice. And Eve was instructed to try to snoop on the message. Alice and Bob were each given the same shared secret key to use as the encryption key for the message.
That’s it. They were given no information or data on cryptography; they had to work from scratch, building their own cryptographic algorithms. Over several runs of the test, Alice developed an encryption technique that Bob matched, successfully decoding the message. Eve managed to partially snoop on the message on several occasions, which only prompted Alice and Bob to improve their algorithm!
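The setup above can be sketched as a pair of competing loss functions. The following is a simplified illustration, not the actual neural architecture: the "networks" here are stand-in functions (a trivial XOR cipher for Alice and Bob, a constant guess for Eve), and the bit-error measure is an assumption made purely to make the objectives concrete.

```python
# Toy sketch of the adversarial objective. Alice and Bob share a key;
# Eve does not. The real system trains neural networks against these
# losses; here the networks are replaced by trivial illustrative stubs.

def bit_error(a, b):
    """Fraction of bit positions where two messages disagree."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def alice(plaintext, key):           # Alice: encrypt using the shared key
    return [p ^ k for p, k in zip(plaintext, key)]

def bob(ciphertext, key):            # Bob: decrypt using the same key
    return [c ^ k for c, k in zip(ciphertext, key)]

def eve(ciphertext):                 # Eve: guess without the key
    return [0] * len(ciphertext)     # stand-in for Eve's network

plaintext = [1, 0, 1, 1, 0, 0, 1, 0]
key       = [0, 1, 1, 0, 1, 0, 0, 1]

ciphertext = alice(plaintext, key)

# Bob's loss: his reconstruction error (he wants this to be 0).
bob_loss = bit_error(bob(ciphertext, key), plaintext)

# Eve's loss: her reconstruction error. Eve wants 0; Alice and Bob
# want her stuck at 0.5, i.e. no better than random guessing.
eve_loss = bit_error(eve(ciphertext), plaintext)

# Alice and Bob jointly minimise: reconstruct well AND keep Eve at chance.
alice_bob_loss = bob_loss + (0.5 - eve_loss) ** 2

print(bob_loss)        # 0.0 — Bob recovers the message exactly
print(eve_loss)        # 0.5 — Eve is at chance level
print(alice_bob_loss)  # 0.0
```

Training repeats this tug-of-war: Eve's network updates to lower her loss, which raises Alice and Bob's loss and pushes them to encrypt better.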
Sci-Fi sexiness aside, the big takeaway here is that these neural networks invented cryptographic algorithms that their operators didn’t understand. That is truly groundbreaking.
The Human Weakness in Cryptography
So far, the Google Brain team has worked only on symmetric encryption of data. The current human-designed state of the art in this field is still AES. Provided the key is kept secret and side-channel attacks are mitigated, AES is regarded as practically impossible to break with today’s technology, and when used with a 256-bit key, AES-256 is regarded as secure even against tomorrow’s quantum computers (Grover’s algorithm effectively halves the key length, still leaving 128 bits of security).
However, the weakness remains key management. We mere humans need to manage and secure our encryption keys. That means storing them somewhere physically, which brings physical security into play, and protecting the stored keys with passwords. In the entire mathematical scheme of things, it’s the human factor that remains the weakest point.
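The key-management point can be made concrete with a toy symmetric cipher. This is a one-time-pad-style XOR, not AES, and purely illustrative: anyone who obtains the key decrypts trivially, so the scheme is only ever as strong as the key’s secrecy.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR with an equal-length key.
    The same function both encrypts and decrypts."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at dawn"
key = secrets.token_bytes(len(message))   # must be stored and protected

ciphertext = xor_cipher(message, key)
assert xor_cipher(ciphertext, key) == message  # key holder decrypts easily

# An attacker who steals the key needs no cryptanalysis at all:
stolen = xor_cipher(ciphertext, key)
print(stolen)  # b'meet at dawn' — the math was never the weak point
```

Exactly the same asymmetry applies to AES: the cipher itself is considered unbreakable in practice, so attackers target the stored keys and the humans guarding them instead.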
Where to from here?
Is it possible that a computer can design a better cryptosystem than humans?
At first glance, this seems like something out of a Sci-Fi movie, but in other areas of machine learning, machines have become as competent as, if not better than, humans. In 2014, Facebook’s DeepFace project achieved facial recognition on par with that of humans, and automated facial-recognition machines are now commonplace at immigration desks in airports around the world, replacing human agents. Self-driving cars are claimed to be safer than human drivers in some conditions, with Elon Musk foreseeing a future where manual driving becomes illegal.
In the view of the author, AI cryptography still has a long way to go. In particular, current “human” cryptography allows cryptographers to provide mathematical proofs of security. The challenge with AI is that no one fully understands how a trained network works, so is provable security even possible?