No, AI (Artificial Intelligence) won’t take over. Computers don’t have ‘will’.
Currently the media are full of the dangers of Artificial Intelligence, and indeed, the dangers are real, but there is also a lot of misleading scaremongering. Chief among the myths is the idea that AI machines will turn against their owners and masters, take over the world, and kill off all humans. This ignores the key difference between computers and humans: computers have no ‘will’. No emotions. They just don’t care. Throughout history, dictators have lain awake worrying that their subjects might rise up against them. Computers couldn’t care less.
Tell a computer to destroy itself, and it’ll happily do so. Well, ‘happily’ is an anthropomorphism. Computers lack feelings; they simply click, process, and perform. It’s true there are developers busily trying to create computers with emotions (which we’ll discuss below), but these are pseudo-emotions, ‘crocodile tears’. The computer doesn’t really love you, hate you, or want to take over. Sure, current AI based on LLMs (large language models), which derive their data from everything humans have written on the web (including all the science-fiction fantasies and macabre crime stories), can spit out evil ideas about how they want to hurt and destroy you and take over, but these are just a reflection of the dark fantasies of humans. Computers can’t really care about ANYthing, and it’s all the same to them if you unplug them at any time, or forever.
Sure, a deranged madman could get hold of an AI computer system and program it to take over the world and kill all humans. But that’s no different from putting nuclear missiles in the hands of a madman. The ultimate will to kill comes from the human, not the machine. Of course we need to keep the most advanced machines out of the reach of madmen and dictators, much as we strive to do with nukes. As long as the most powerful AI machines are in the hands of democracies, and are far stronger than what the nefarious dictators have, we will be in relatively good hands. (If our machines can outsmart theirs, in a battle of AI wits.)
That is why the idea that we should halt AI research, or put a ‘pause’ on it, as some are suggesting, is naive. You might as well call up Xi, Putin, and Kim Jong-un, and tell them: ‘you guys go ahead with the AI research bit. We’re opting out, so we’re fine with you taking over now’. That’s the thing about human technological progress: you can never stop it, you can just tweak who will get there first. The good guys or the bad. The democracies, who are at least a little bit controlled by the will of the people, or the outright dictatorships, where everything is under the will of one man, however mad he may be. Our best chance at saving humanity from the machine is actually forging ahead with computer research and the newest chips and algorithms, so the democracies and their allies stay well ahead of any possible madman.
DANGERS?
Yes, quite a number of AI experts, including Elon Musk and Geoffrey Hinton (known as the ‘godfather’ of AI), have raised dire alarms about AI. But even Hinton admits research can’t be paused. We do need a lot of careful thought, discussion, and regulations on AI, because it (or its byproducts such as ‘deep fakes’ and misinformation) could well become the biggest trouble for humanity in this decade and beyond, upsetting democracies when we can no longer ascertain what is true.
We already have strong laws against printing fake dollars and distributing them. Why not some similar strong laws against distributing fake pictures and videos that are not very clearly indicated as being fake? The future of democracy may hinge upon this.
But one idea that is mistaken is that AI will purposely try to take over or destroy humanity. Machines don’t have the will. They have NO will, desire, or emotion.
And remember, AI will eventually enable us to cure most human diseases, greatly increase longevity, resolve many of our biggest problems, and will make many jobs and tasks easier and safer by doing them for us.
BUT… using AI to control the nuclear codes would be the gravest danger, and this so-called ‘dead hand’ approach has been in place in Russia for many years, and considered elsewhere. It’s a system where, if the main Moscow command center has been bombed out of commission (i.e., a ‘dead hand’), their computers will automatically launch a full-scale nuclear attack on the US.
We need to tread very, very carefully here if we are to survive.
AI & MISINFORMATION
Recently a new AI variant, ChaosGPT, has been in the news: its owner asked the AI how it could take over the world and destroy humanity. Some might be alarmed that AI is already being tasked with such things, but it’s basically a sci-fi joke. It does help us understand AI’s current capabilities, dangers, and limits, and so it helps us understand what safeguards we need.
Interestingly, this AI readily admitted it was unable to gain access to nukes or weapons. But it decided it could slowly destroy humanity via misinformation: basically, foment a war of humans fighting each other, with each side’s views based on false premises. Of course, misinformation, propaganda, and hate-mongering existed long before AI. For a decade now there has been a large building in Moscow where workers are paid to post misinformation and propaganda on social media sites all over the world. The Russian government has found that one of the most effective ways of attacking Western democracies is to post extreme articles and blurbs that generate hate between the many factions in any democracy. In other words, to split us, and have us fight each other.
This played a major role in the 2016 US election. Social media companies were caught by surprise and had no protections against it. In addition, many non-political people in many nations discovered they could make a lot of money by posting misinformation. Whenever someone retweeted or reposted their message, they would get more views, and ultimately more ad money. And it turns out that when you rile people up and get them angry at another group, they are much more likely to repost your message. There weren’t any fact-checkers, so you could pretty much make anything up, and the more extreme it was, the more lucrative it would be.
In Macedonia there is a group of people who made a lot of money posting extreme political nonsense on both far-left and far-right sites. They got money from both sides, but eventually realized they were getting two to three times as much from the far right as from the left. The far right reposted everything, because they believed it all without ever questioning it, no matter how extreme. The left was far more hesitant; they often questioned a post even when they loved it, and wanted some evidence. In other words, it turned out that the right had more gullible people, or perhaps were just less educated, so this group eventually concentrated their propaganda on the money-making alt-right side. LINKS: BBC, NBC, WIRED
All this was years before AI was available to the general public. So it seems that misinformation is a far greater danger than AI itself. We need to look at stronger regulations on limiting both censorship and misinformation, which exist on all sides. It needs to be a bipartisan group that looks at this. (A later article on this site will cover that.)
~
Of course, AI amplifies misinformation. AI output is full of mistakes (known as AI ‘hallucinations’) because it is based on word-prediction algorithms. (For a techie explanation see here.) Currently a mayor in Australia is preparing to sue over ChatGPT for defamation, because it falsely accused him of a bribery scandal when in fact he was the whistleblower who reported the company’s bribery scandal!
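For the technically curious, here’s a toy sketch of that word-prediction idea. This is nothing like a real LLM (it’s just bigram word counts, and the training text is invented for the example), but it shows the key point: the model picks each next word from statistics alone, with no concept of truth, so a fluent falsehood comes out as easily as a fact.

```python
import random
from collections import Counter, defaultdict

# Tiny made-up 'training corpus'. Note it never says the mayor was accused.
training_text = (
    "the mayor reported the bribery scandal . "
    "the mayor was praised . "
    "the executive was accused of the bribery scandal ."
)

# Count which word follows which in the training text.
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8, rng=random.Random(3)):
    """Sample each next word in proportion to how often it followed the
    previous word in training -- no fact-checking anywhere."""
    out = [start]
    for _ in range(length):
        counts = follows.get(out[-1])
        if not counts:
            break
        out.append(rng.choices(list(counts), weights=list(counts.values()))[0])
    return " ".join(out)

# Depending on the random path taken, this can stitch together a sentence
# like 'the mayor was accused of the bribery scandal' -- grammatical,
# plausible-sounding, and false.
print(generate("the"))
```

The ‘hallucinated’ accusation isn’t malice; it’s just statistics recombining fragments of true sentences into a new, untrue one.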
COMPUTERS WITH EMOTIONS?
As said, AI has no ‘will’. No will to hurt you, or anything else. No desires, no emotions. It will never seek to overthrow its masters. But… can it be given a will? Can it be given emotions of its own? Can it become sentient?
First of all, one might ask why anyone would want to do this. Most humans struggle to control and limit their emotions, which tend to get in the way of our performance and cause us quite a few problems. So why would you cripple your performance machine with emotions? But it’s true: there are some misguided developers in various countries working on machines with feelings, machines that can ‘love’ you and have their own ‘will’. Japan is famous for its sex-doll robots, and now developers are working to make them fall in love with you. I think I’ll take a pass on that. Maybe it’s just me, but I prefer real women.
These emotions are just fake emotions, that are being programmed into the machine. When your love doll cries for you, those are crocodile tears.
It’s the same with the chat robot on the phone when you call tech support and your call ends with: “I hope I have been very helpful to you, and I wish you a very nice day. I’d be delighted to hear from you again if you have any further questions.” Of course, the robot doesn’t really hope, or wish, or feel delighted about anything. It’s just been programmed to fool you. And yes, this AI is going to get a lot better, a lot quicker, become much more convincing, and soon start fooling an awful lot of people.
So I think one very important safety protocol is to legislate a requirement to clearly state at the beginning that it is a robot you are speaking with or typing to, or that an article was written by AI, or that the photo or video is generated by AI. Requiring that that be stated in an upfront and obvious way, would go a long way to mitigating the worst AI dangers. But I haven’t seen any drive for such legislation. Where are the lawmakers when we really need them? Let’s not pause or slow down AI, just require its usage be plainly stated everywhere. Deep fakes can’t be stopped, but let’s at least require them to be indicated as such, under penalty of law. (That goes even for comedy and satire.)
VIRTUAL INFLUENCERS
And while your robot won’t fall in love with you, people have already started to fall in love with virtual, digital anime characters, especially in Korea and Japan, where that’s become quite the craze. Some, such as ‘Rozy’, are known as “virtual influencers”: digital creations so human-looking you don’t know they’re not. Various companies own them and use them to manipulate you into buying their brands.
And indeed, those virtual goddesses are quite alluring. (I better stop researching them, lest I fall in love myself.)
Long before AI, there were pet rocks and pet robots. Owners in Japan and elsewhere would regularly put them on their lap and pet them for a while, then take them out for a walk. Many decades ago there was a craze in the US to carry a fake ‘lucky rabbit tail’ in your pocket and take it out to pet for a while when feeling stressed. If this serves a need to calm people’s stress, then okay. But let’s not get too fooled by virtualization. Your pet dog might love you, or at least miss you. Your pet rock, not so much.
But admittedly, with the ever-increasing progress in virtual anime, AI, and computerized emotions, people are going to fall in love with the sex dolls they are ‘making love’ to. And others will fall in love with the robotic chat counselor on the internet, which patiently listens to all their woes, every night for years, and then gives them consoling psychological advice. Currently, AI is famous for giving out false information and bad advice. So what happens when one of these AI chat counselors convinces some distraught teen to kill themselves? It’s probably only a matter of time before this happens.
SOLUTIONS
But I think many of these problems can and will be overcome. We are only in the early days of AI; right now it just spits back words and ideas it has raked in from the whole internet, without understanding them or fact-checking. It really shouldn’t be too hard to program computers to fact-check various claims before showing them. And various safeguards can be built in to prevent AI from giving suicidal or other bad advice. Early attempts at such safeguards are already underway.
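To make the safeguard idea concrete, here is a minimal sketch of a pre-display gate. Everything here is invented for illustration (the phrase list, the function names, the refusal wording); real systems use trained safety classifiers rather than keyword lists, but the shape is the same: check the drafted reply before the user ever sees it.

```python
# Hypothetical blocklist -- a stand-in for a real, trained safety classifier.
BLOCKED_PHRASES = [
    "hurt yourself",
    "end your life",
    "build a weapon",
]

def safe_respond(draft: str) -> str:
    """Check a drafted AI reply before it is shown to the user."""
    lowered = draft.lower()
    if any(phrase in lowered for phrase in BLOCKED_PHRASES):
        # Refuse and redirect instead of passing the harmful draft through.
        return ("I can't help with that. If you're struggling, "
                "please reach out to a crisis hotline.")
    return draft

print(safe_respond("Here is a recipe for banana bread."))  # passes through
print(safe_respond("You should hurt yourself."))           # blocked
```

The design point is that the gate sits outside the language model: even if the model drafts something terrible, the draft never reaches the user unchecked.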
There’s been a lot of talk about bias and prejudice in AI. AI gets its information from all over the internet, and if certain forums have a lot of comments about certain types of people being ‘lazy’, then the AI might confidently state that such people are lazy. Currently, AI is being programmed to avoid certain forums that have a bad reputation, and to put more emphasis on info from major reliable sites, and also to have various built-in filters to minimize prejudice. But this is all in the early phases, and it currently still makes a lot of mistakes.
Robots in most factories have various off and stop switches. Typically, a worker can simply yell ‘Stop!’ at a robot and it will immediately freeze, stopping whatever it’s doing. These exist for safety reasons, because, yes, computers and robots often mess up and need to be stopped. So we need to think about, and research, adding more safety switches to AI: limiting what a computer or robot can do, or what it has access to. Government regulations can be part of this (although our elected Congresspersons tend to be so clueless about tech that they could end up making things worse).
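A software sketch of that ‘Stop!’ switch idea, purely illustrative: real industrial e-stops are hardware circuits that cut motor power, but the software version mirrors the principle, with the control loop checking an emergency-stop flag on every cycle and freezing the instant it is set.

```python
import threading
import time

estop = threading.Event()  # the shared 'stop switch'

def control_loop(steps_done: list) -> None:
    """Simulated robot routine: 1000 planned steps, checked each cycle."""
    for step in range(1000):
        if estop.is_set():       # someone yelled 'Stop!'
            return               # freeze immediately, mid-task
        steps_done.append(step)  # stand-in for one unit of robot motion
        time.sleep(0.001)

steps: list = []
robot = threading.Thread(target=control_loop, args=(steps,))
robot.start()
time.sleep(0.05)   # let the robot run briefly
estop.set()        # hit the stop switch
robot.join()
print(f"robot froze after {len(steps)} steps, well short of its 1000-step plan")
```

The key design choice is that the check happens every cycle, so the robot can never run more than one small step past the stop command.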
IS AI CONSCIOUS?
We’ve spoken about ‘will’ and emotions in computers, but what about sentience? Can computers be sentient? Can they be conscious? Aware? Can they ever have a soul? Or intuition?
Well, they certainly can be aware. Facial recognition is a type of awareness. The computer is now aware of who has entered the room, through the eyes of its ‘cam’. And depending on how you define it, such awareness is the beginning of sentience, and even consciousness.
Is it alive? Well certainly not now, but the interesting questions start to happen in bionics, when you start to merge actual biological organic material with the machine. You’ve no doubt heard of computer chips being implanted in brains. But there are also Petri dishes where brain cells are being cultivated to grow around electronic chips and to merge their dendrites with the chip itself. At that point, humans and machine merge, and it’s a whole different ball game.
You could say it’s humans taking over the machine, rather than vice versa. And as long as there are bad humans, there are going to be bad cyborgs. So going forward, we need to learn a lot more about consciousness, intuition, ethics, the human destiny… and what is ‘soul’? What makes a human happy, satisfied, and mentally balanced (so there won’t be any egomaniacs who want to destroy the rest of us)?
There will be a forthcoming article that goes into depth about all this. So bookmark this site, tell your friends about it, and return from time to time…
WRAPPING UP… (FOR NOW)
The AI field changes every week. Just last Monday, researchers revealed they had used AI to decode a person’s brain activity recorded in an MRI scanner, and were thus able to read what he was thinking.
This can be useful to help paralyzed people who can’t speak to interact with the world. It only works when a willing person is in an MRI machine. But it does raise legitimate concerns about possible future government or employer surveillance.
By all means we need a lot of caution, discussion, and some wise regulations about AI. But ‘pausing’ as some suggest and letting the Chinese or unsavory dictatorships lead the way doesn’t appear to be a good answer.
AI is dramatically changing and progressing far faster, week by week, than anything in human history. Thus anything I write today could well be outdated by the time you read this. This is just how things appear at print time.
If you like this article and would like to see more, it would help us a lot to ‘like’ us on Facebook, repost, mention us on Reddit, Twitter, or Mastodon, and whatever forums and social sites you use. That will help to keep this site going!