AI is increasingly being used everywhere. Sometimes it feels like every day a new industry comes up with clever new applications. What happens next, nobody knows for sure.
The future of AI creates new kinds of security challenges that we haven’t fully explored yet.
All of this sounds quite concerning at first. One might think it’s really difficult to trust a system whose inner workings we can’t fully understand. Surely we should not let it make decisions that can affect our lives, right?
Let’s approach the topic from a different angle for a moment. Think back to, say, 100 or more years ago. How did we do security back then?
We’d have guards with guns, swords and pointy sticks.
Security was handled largely by people, with a bit of help from mechanical devices like locks and doors. Even today, we still struggle to understand exactly how people function. We can make generalizations, figure out motivations, categorize people into vague groups, and so forth. Even then, we can’t be fully sure how the person guarding the door is going to act today. A hundred years ago, it was even more of a mystery, since our understanding of fields like biology and psychology has improved considerably since then.
When you think of it this way, the last couple of decades have been historically unusual in that we actually understand how things work – after all, we built them with explicit instructions. At any point in history before that, we had at best a vague idea of how each individual worked, what motivated them, and whether they would stand guard the whole night or go for a drink once they got bored. Somehow, we are able to trust people – despite all the difficulties in trusting each other – but AI and similar concepts tend to scare us. Yet with AI, we at least know for sure what the mechanics behind it are. With us humans, all we have is creation myths from cultures around the world, giving us food for thought as to how we may have gotten here and why.
Maybe we are at a historical crossroads where the world is reverting to how it used to be. We don’t understand or fully control all the things around us, yet we understand enough to trust them to generally do the right thing. At that point, we have to figure out what concepts like security mean in this new situation, and how much assurance we can have that things will work out in the end. Who knows, maybe in the future the primary sciences we associate with AI will be psychology and philosophy, not computer science and mathematics.
In the meantime, it’ll likely be some time before your Head of Security has a degree in psychology.