Personal Safety Lessons from A.I. (Artificial Intelligence)

I’m a fan of A.I. (Artificial Intelligence). I don’t believe that “robots” will take over the world - feel free to email me, or my robot captor/guardian, in a few years if you’ve managed to escape their regime. However, I do think it is worth taking a moment to look at what A.I. does well from a security/safety perspective, along with what it’s not so good at, and may never be good at. I don’t believe it will ever surpass human “intuition”; that “system” which has been keeping us safe - from each other - for thousands of years. Human “heuristics” and visual intelligence have proved fairly successful at dealing with the massive computational processing power of a variety of machines. In 1996, IBM’s chess computer Deep Blue lost to world chess champion Garry Kasparov by four games to two. Whereas Deep Blue processed an almost endless number of possible continuations before each move, Kasparov revealed afterwards that he rarely thought more than a move or two ahead at any point in a game; he innately understood, from experience, what different board positions looked like at various stages of a game, and what they might mean for future moves and decisions. That initial loss by the machine seemed to demonstrate how a human using a few basic heuristics could outplay a computer running multiple long-range scenarios before every move. However, in a rematch a year later, in 1997, Kasparov found himself the loser to the “machine”. This was seen as a major advancement in A.I., though in celebrating the sheer processing power of IBM’s computer it would be easy to lose sight of the genius of the human mind. That breakthrough was eclipsed in 2016, when AlphaGo (created by Google’s DeepMind) did what was to many unthinkable, by beating one of the world’s best players at the game of Go.

The game of Go is widely regarded as one of the most complex board games in existence. The huge number of possible combinations means that it is virtually impossible for a computer to calculate and productively assess all future moves. However, in 2016, a computer beat world champion Lee Sedol by four games to one; a machine had learnt to do the “impossible”. Whilst this was impressive, in 2023 Kellin Pelrine, an amateur player ranked one level below the top, beat a top-ranked Go-playing A.I. (a system of the same type as the one that beat Sedol) by exploiting a flaw in the way it played. The flaw highlights a simple but significant advantage that humans have over machines: the ability to generalize, rather than blindly follow rules. Ironically, the flaw was originally identified by another A.I., but Pelrine exploited it in his own games, winning 14 of 15 without any technological assistance. The computer lacked the ability to recognize “distractions” for what they were and was wholly reliant on past experience to make its moves. If it hadn’t previously encountered something, it wasn’t adept at recognizing the danger it was in; this included moments when it was only a few moves from being beaten, where a human player would have immediately recognized that they needed to change strategy. Ultimately, the machine couldn’t fully understand the context of a situation, something which humans are extremely good at, and something we need to train better in order to deal with potentially violent situations.

One of the areas where A.I. is extremely good is in recognizing patterns of human behavior. Models have been developed that are remarkably successful at identifying individual shoplifters’ offending patterns, to the point where predictions of future offenses can be highly accurate. This is largely because, whilst many offenders believe that they are acting “randomly” and unpredictably, they are not e.g., most serial offenders commit their initial crimes close to their home, move further away to commit their next few, and then return to an area closer to home as they become more confident – and complacent – in committing their offenses. Geographic profiling models, given enough data, are now extremely adept at forecasting these patterns. In a quest to make their offenses seem random, most serial offenders follow a similar manner of offending. A.I. is great at taking all of that data and working out the most likely patterns, in a way that humans are unable to. However, A.I. is terrible (at the moment) at understanding context, and this can be seen in its inability to recognize human emotions.
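For readers curious about what this kind of model actually does, below is a minimal sketch of a distance-decay scoring approach in the spirit of geographic profiling. The offense coordinates, buffer radius, and decay exponent are all invented for illustration; real systems used by analysts are far more sophisticated and are calibrated on actual case data.

```python
# A minimal, illustrative sketch of distance-decay geographic profiling.
# The crime coordinates and parameters below are invented for illustration,
# not real data or the exact formula any agency uses.
import math

crime_sites = [(2.0, 3.0), (2.5, 3.5), (4.0, 1.0)]  # known offense locations (x, y)
BUFFER = 0.5   # assumed "buffer zone" radius around the offender's anchor point
DECAY = 1.5    # assumed distance-decay exponent

def score(cell, sites, buffer=BUFFER, decay=DECAY):
    """Higher score = cell more likely to contain the offender's anchor point."""
    total = 0.0
    for (cx, cy) in sites:
        d = math.hypot(cell[0] - cx, cell[1] - cy)
        if d > buffer:
            total += 1.0 / (d ** decay)                    # likelihood falls off with distance
        else:
            total += 1.0 / ((2 * buffer - d) ** decay)     # offenders tend to avoid their own doorstep
    return total

# Score every cell on a small grid and report the most likely anchor area.
grid = [(x / 2, y / 2) for x in range(0, 13) for y in range(0, 13)]
best = max(grid, key=lambda cell: score(cell, crime_sites))
print("Most likely anchor cell:", best)
```

The idea is simply that cells too far from the known offenses score low, while cells right on top of them also score low (the “buffer zone”), leaving the most likely anchor area somewhere in between - exactly the kind of pattern that feels random to the offender but is anything but.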

A lot of people think that they can identify whether someone is happy, sad, or angry from their facial expressions alone; however, that isn’t the case, and A.I. that works from this premise gets it horribly wrong e.g., I have seen people pull their face into a tight grin/smile when they are terrified and surprised, but if you were to judge their emotion just from the “smile” and the way they looked, you’d deduce that they were “happy”. Most of our recognition and judgment of someone’s emotional state comes from the context in which we see their facial expression; if we see a “smile” at someone’s birthday party, we will likely identify them as being “happy”, whereas if we see them smiling – and possibly even laughing (I’ve seen that) – at someone who is aggressively shouting/screaming at them, we are more likely to identify them as being scared. Humans excel at recognizing context and making general assumptions, rather than blindly following specific rules. However, many people want to replace these natural skills by reducing personal safety to a computer program that says if X happens do Y, and when someone is doing A respond by doing B; this is a failure to recognize where our “intelligence” lies.
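To make the point concrete, here is a toy example (invented labels, not any real emotion-recognition system) of the “if X happens do Y” style of rule described above, and how the same expression reads completely differently once context is taken into account.

```python
# A toy illustration of rule-based expression reading vs. context-aware reading.
# The labels and rules are invented for illustration only.

def emotion_from_expression(expression):
    """Naive rule: judge emotion from the expression alone."""
    rules = {"smile": "happy", "frown": "sad", "scowl": "angry"}
    return rules.get(expression, "unknown")

def emotion_in_context(expression, context):
    """Closer to what humans do: the context dominates the reading."""
    if expression == "smile" and context == "being screamed at":
        return "scared"          # the tight grin of someone who is terrified
    if expression == "smile" and context == "birthday party":
        return "happy"
    return emotion_from_expression(expression)

print(emotion_from_expression("smile"))                   # -> happy (rule-based guess)
print(emotion_in_context("smile", "being screamed at"))   # -> scared (context corrects it)
```

The first function is the kind of rigid rule a machine - or a checklist-driven approach to personal safety - relies on; the second is closer to what humans do naturally and effortlessly.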

Most people have never directly experienced violence e.g., in 2022 the FBI reported a rate of 380.7 violent crimes per 100,000 people in the US; that works out to fewer than 0.4% of the population being the victim of a reported violent crime in that year. Obviously, violent offending isn’t spread evenly across the country, and different cities, towns, and locales experience different rates of violent offending. This means that most people’s “experience” and knowledge of violence isn’t firsthand, and often comes from news media reports, along with fictionalized and often sensationalized portrayals of violence in various TV shows and movies. In fact, news media reporting often “borrows” from such fictional depictions of crime and violence when constructing its stories e.g., the Raoul Moat case in the U.K. was reported on the 24x7 news cycle as if it were a faithful recreation of the Rambo movies: a misunderstood individual who’d reached their breaking point and was caught in a cat-and-mouse manhunt with the authorities. Most of us have neither firsthand, nor reliable/accurate secondhand or thirdhand, experiences of violence to inform our decision making, which is what an A.I. approach to dealing with violence would be based on. However, we read context very well and are able to generalize rather than work to rigid patterns, and these are the skills we should be relying on to keep us safe.

Gershon Ben Keren

Gershon Ben Keren is a criminologist, security consultant and Krav Maga Instructor (5th Degree Black Belt) who completed his instructor training in Israel. He has written three books on Krav Maga and was a 2010 inductee into the Museum of Israeli Martial Arts.
