Cybersecurity and AI – Why do we trust people?

Matti Suominen

October 22, 2018 at 12:50

We are increasingly seeing various forms of AI used everywhere. Sometimes it feels like every day another industry comes up with a clever new way to apply it. What happens next, nobody knows for sure.

The future of AI creates new kinds of security challenges that we haven’t fully explored yet.

One of these challenges is trying to understand just what the AI is really doing. Traditionally, when we create an application – say, a webshop – we write the code that handles all the use cases: taking payments, displaying a list of products, storing the shipping information in the database and so forth. AI-based approaches are a little different. Instead of writing explicit code for everything, we feed algorithms data and have them figure out what the best approach to a given problem might be. Today, it doesn’t look nearly as fancy as Hollywood would like you to believe. In the future, though, who knows.
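To make that difference concrete, here is a minimal sketch contrasting the two approaches on a made-up “should we flag this order?” decision. The feature names, thresholds and training examples are invented purely for illustration, and the sketch assumes scikit-learn is available – it is not a real fraud check.

```python
# Toy comparison: explicitly written rules vs. a model learned from data.
from sklearn.tree import DecisionTreeClassifier

# Traditional approach: a human writes the decision logic explicitly.
def flag_order_explicit(order_total: float, failed_payments: int) -> bool:
    # Every rule here was chosen and written down by a person.
    return order_total > 1000 or failed_payments >= 3

# AI/ML approach: we hand the algorithm examples and let it derive the rules.
# Columns: [order_total, failed_payments]; labels: 1 = flagged, 0 = fine.
training_data = [[50, 0], [1200, 0], [300, 4], [80, 1], [2500, 2]]
labels = [0, 1, 1, 0, 1]

model = DecisionTreeClassifier().fit(training_data, labels)

# Both produce an answer, but only the first comes with a human-written
# explanation of why that answer was given.
print(flag_order_explicit(1200, 0))   # True
print(model.predict([[1200, 0]]))     # e.g. [1]
```

In the first function, the reasoning is right there in the source; in the second, all we can directly inspect is the fitted model object, not a human line of thought.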

With the webshop case, we know exactly how the shop works. We can pick up the code, read through it and audit it down to the last line until we are happy. Certainly, with complicated enough applications, this might not be a feasible approach due to how much time it takes. Still, in theory, at least we could do it and learn exactly how the application was made and designed if we had enough time and skilled people. With the AI approach, however, things get more interesting. Even today, it’s really hard to take the model that was created as a result of analyzing all the data and explain what exactly it does. It seems to work, and it’s giving us results that make sense, but we can’t always tell what exactly is going on.

Code made by humans has a trait that is rather obvious when you think about it. Because we created it, there has to be some sort of train of thought that makes sense to us. The programmer didn’t just throw things into a blender and get the final product that way. No, they figured out how a webshop should work, what sort of features it should have, how those features play together and what kinds of problems we specifically want to avoid. It almost forms a story that one could tell to someone and have it understood with no need to know anything about programming. In fact, programming today is often based on what we call "user stories". These are simple human-readable statements (e.g. "The web shop should accept payments with Visa cards") that come directly from people who often have no idea what the implementation will look like. This ensures that there is a method to the madness of creating the code.
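As a rough illustration of how such a user story guides implementation, the sketch below turns the Visa example into a small automated check. The WebShop and Card classes and their methods are hypothetical placeholders, not part of any real shop or framework.

```python
# Toy sketch: the user story "The web shop should accept payments with Visa
# cards" expressed as code and a matching test.
class Card:
    def __init__(self, network: str, number: str):
        self.network = network
        self.number = number

class WebShop:
    SUPPORTED_NETWORKS = {"Visa", "Mastercard"}

    def accept_payment(self, card: Card, amount: float) -> bool:
        # The human-written rule that realizes the user story.
        return card.network in self.SUPPORTED_NETWORKS and amount > 0

def test_webshop_accepts_visa_payments():
    shop = WebShop()
    visa = Card(network="Visa", number="4111 1111 1111 1111")
    assert shop.accept_payment(visa, amount=25.0)

if __name__ == "__main__":
    test_webshop_accepts_visa_payments()
    print("User story satisfied: Visa payments are accepted.")
```

The point is that every line traces back to a statement a non-programmer could read and agree with – exactly the kind of traceability a trained model does not give us.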

In the future, as things get more and more complicated, it’s hard to say how long we will still understand how AI models work and whether we can verify or audit them. In theory, once a human can no longer follow the logic, or it simply gets too complicated, we may have few options beyond trusting that the algorithm is doing its thing because the mechanism that created it is solid. So far, AI hasn’t been great at explaining its own train of thought. It certainly doesn’t give us user stories that would make any sense to humans.

We already have plenty of examples where AI goes wrong. Sometimes it’s really obvious, sometimes not. Well-known examples include situations where an AI starts to discriminate or becomes really rude to people. The scariest situations are those where nobody seems to be able to get the AI to behave and eventually they just pull the plug. While some of these scenarios can be quite amusing, there is a more sinister side to all of it. What if the AI in question is responsible for, or involved in, keeping something safe or secure? How do we know that the security of the system isn’t compromised just because the AI is having the digital equivalent of a bad day?

 

A prime example of algorithmic bias:
http://fortune.com/2018/10/10/amazon-ai-recruitment-bias-women-sexist/

All of this sounds quite concerning at first. One might think that it’s really difficult to trust such a system when we can’t fully understand how it works. Surely we should not let it make decisions that can affect our lives, right?

Let’s approach the topic from a different angle for a moment. Think back to, say, 100 or more years ago. How did we do security back then?

We’d have guards with guns, swords and pointy sticks.

 

Security was handled largely by people, with a bit of help from mechanical devices like locks and doors. Even today, we still struggle to understand how exactly people function. We can make generalizations, figure out motivations, categorize people into vague groups and so forth. Even then, we can’t be fully sure how the person guarding the door is going to act today. 100 years ago, it was even more of a mystery; our understanding of topics like biology and psychology has improved a great deal since then.

When you think of it this way, the last couple of decades have been historically quite unusual in that we actually understand how things work – after all, we built them with explicit instructions. At any point in history before that, we had at best a vague idea of how each individual worked, what motivated them, and whether they would stand guard the whole night or go for a drink once they got bored. Somehow, we are able to trust people – despite all the difficulties in trusting each other – but AI and similar concepts tend to scare us. Yet, with AI, we at least know for sure what the mechanics behind it are. With us humans, all we have is creation myths from different cultures around the world giving us food for thought as to how we may have gotten here and why.

Maybe we are at a historical crossroads where the world is reverting to how it used to be. We don’t understand or fully control all the things around us, yet we understand enough to trust them to generally do the right thing. At that point, we have to figure out what concepts like security mean in this new situation and how much assurance we can have that things will work out in the end. Who knows, maybe in the future the primary sciences we associate with AI will be psychology and philosophy, not computer science and mathematics.

In the meantime, it’ll likely be some time before your Head of Security has a degree in psychology.

Stay safe out there as we learn more about what it means to put our trust in these models and how the world around us will look going forward.
