5 Aug 2025 – Terminator Returns (Again?!)

Whether you’re a proponent or opponent of Artificial Intelligence (AI), it’s difficult to avoid stories on the subject, usually accompanied by some form of alarmist headline. If some of those stories are to be believed, we’re not far from Terminator-esque AI bots dominating the world with no need for humans.

For instance, in 2017, Facebook tasked two AI “agents” with negotiating a trade deal with each other. Under Facebook’s instructions, the agents were free to create their own shorthand language to make negotiations more efficient, which they did – but in a form that Facebook’s engineers could not decipher and which, as reports suggest, led to the machines being hastily switched off.

Hold that thought for a minute. The English language has been around for roughly 1,400 years, yet in the space of a day’s work these AI “agents” decided this archaic form of negotiation wasn’t good enough and found a more efficient way to do it – one that humans could not understand!

It isn’t hard to cite numerous examples of AI systems deciding there’s a better way to perform a task than the way their human counterparts do it. Again in 2017, Google’s AlphaGo stunned the world by taking on what is regarded as the most complex of board games, “Go”, and defeating the world’s number one human player with ease. Add to this the recent commissioning of the first “robot tanks” and it’s easy to see why more of us are beginning to believe James Cameron’s post-apocalyptic fiction is becoming reality.

As AI has the potential to become more intelligent than any human, we have no sure-fire way of predicting how it will behave. Nor can we lean much on past technological developments, as we’ve never created anything with the ability to, wittingly or unwittingly, outsmart us. The best example of what we could face may be our own evolution. Humans now control the planet not because we’re the strongest, fastest or biggest, but because we’re the smartest. If we’re no longer the smartest, can we be sure of remaining in control?

At its conception, an AI device can, of course, be programmed with insidious intent that is completely out of line with human morals. These will (hopefully) be rare cases, but the problem is compounded by the way these machines “learn”. An AI device is just that, a piece of technology, devoid of human conscience. In “learning” the most efficient way to complete the task it was programmed to perform, it’s well within reason to assume the device could develop a destructive method of achieving its goal, with no human conscience to fall back on when faced with a decision.

If the above wasn’t enough to keep you up at night, let’s delve a little deeper into the risks posed by AI. The most obvious is fully autonomous weapons, and I hopefully don’t need to elaborate on why these are such a risk. But what about a more discreet risk? China’s impending social credit system, due to be rolled out in 2020, is currently a hot topic of discussion. Through a combination of AI and facial recognition cameras, it will assign each of the country’s 1.3bn people a social “score” based on the good or bad acts they are caught performing. Take your chances and jaywalk to get to your shift on time? That’s a minus mark. Smoke too close to a no-smoking area? Again, that’s a minus. Of course, it works both ways, but leaving the subjective judgement of what constitutes good and bad to a device without a human conscience demonstrates how risky these solutions can be if not properly managed and regulated.

Ultimately, it’s important never to lose sight of the fact that for an AI system to function, it must first be programmed by a human. As long as these systems are not deliberately programmed in a malevolent way, and are built with human morals in mind, there is no reason to believe AI cannot work alongside humans to bring improvements to technology, quality of life and business. In 2019, we took another historic step towards ensuring safety in the field: 42 countries came together to support the first global framework for regulating the development of AI technologies, paving the way for safe technology that will benefit our day-to-day lives. Finally, let’s not forget the most important thing all the examples above have in common – they all require power to function! Once a human pulls the plug, the AI ceases to exist.

Now that I have hopefully subdued fears of an impending Armageddon, whilst also highlighting just how dynamic AI can be, we can start to think about how you can benefit from such technologies. Unfortunately, that will have to wait for next month’s blog, so I guess it would be pertinent to say “I’ll be back”.
