From all the news reports, it might seem a foregone conclusion.
No, AI is not taking over. Will it someday? Honestly, there’s no way to know.
But until then, it’s important to understand what AI, or Artificial Intelligence, is and is not, and how it can be used for good as well as evil.
AI world domination?
AI is definitely increasing in capabilities and uses, and while it won’t go all Terminator on us anytime soon, it’s definitely something important to be aware of and possibly to use in constructive ways. AI doesn’t really think as we would define thinking, but it does at times present a very good simulation. More important than the technology, though, is how we choose to use or abuse it.
What is AI?
In a previous article, I discussed how AI might be artificial but it’s not intelligent. What we are calling AI is just software written by humans plus lots and lots of data. It’s more correctly referred to as machine learning, but that’s not nearly as sexy.
The impressive advancement in recent years is twofold:
- The amount of data on which AI algorithms are based. The sheer quantity of data being used might be impossible for mere mortals like you and me to grasp.
- The ability of the software to work with all that data incredibly quickly. That they can do it in near real time is amazing.
How AI works
The algorithm implemented in the software is often deceptively simple. In What Is ChatGPT Doing … and Why Does It Work?, Stephen Wolfram wrote a lengthy but readable summary of the technology. It all kinda boils down to a very simple concept: “What’s the most likely next word?” repeated over and over.
For example, I might ask ChatGPT, “Does AI think?” It analyzes gazillions upon gazillions1 of examples of text (hence the term large language model — it’s a large collection of language examples) and decides that for those three words, the next most likely word is… AI.
It then iterates. For “Does AI think? AI”, the next most likely word is does.
For “Does AI think? AI does”, the next most likely word is not.
And so on. The results are impressive, and becoming more so.
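The loop described above can be sketched in a few lines of Python. Everything here is invented for illustration — the tiny lookup table stands in for a real model, which derives its “most likely next word” from billions of examples rather than a hand-written dictionary — but the repeat-and-append structure is the same idea.

```python
# Toy illustration of "what's the most likely next word?", repeated.
# The lookup table below is invented for this example; a real large
# language model computes these predictions from vast amounts of text.

TOY_MODEL = {
    "Does AI think?": "AI",
    "Does AI think? AI": "does",
    "Does AI think? AI does": "not",
    "Does AI think? AI does not": "think.",
}

def next_word(text):
    """Return the most likely next word for the text so far (or None)."""
    return TOY_MODEL.get(text)

def generate(prompt, max_words=10):
    """Repeatedly append the most likely next word to the prompt."""
    text = prompt
    for _ in range(max_words):
        word = next_word(text)
        if word is None:  # the toy model has nothing more to say
            break
        text = text + " " + word
    return text

print(generate("Does AI think?"))
# "Does AI think? AI does not think."
```

The loop stops when the model has no prediction; real systems instead stop on a special end-of-text token or a length limit.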
What I find interesting is the amount of randomness intentionally built in. This means you might get different answers for the same prompt from one time to the next. My understanding is that this makes it feel less mechanical and more human — humans are certainly random, after all.
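That built-in randomness can be sketched too. Instead of always taking the single most likely word, the software samples from a weighted list of candidates, so the same prompt can produce different continuations from run to run. The candidate words and weights below are invented for illustration.

```python
import random

# Sketch of intentional randomness: sample the next word from a
# weighted distribution instead of always picking the top choice.
# These candidates and probabilities are made up for illustration.

CANDIDATES = {
    "Does AI think?": [("AI", 0.6), ("No,", 0.3), ("That", 0.1)],
}

def sample_next_word(text):
    """Pick the next word at random, weighted by likelihood."""
    options = CANDIDATES.get(text, [])
    words = [word for word, _ in options]
    weights = [prob for _, prob in options]
    return random.choices(words, weights=weights, k=1)[0]

# Two runs on the same prompt may give different words:
print(sample_next_word("Does AI think?"))
print(sample_next_word("Does AI think?"))
```

Real systems expose this as a “temperature” setting: higher values flatten the weights and make the output more varied, lower values make it more predictable.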
Does AI think?
I went to the source and asked ChatGPT. Here’s the response I got.
AI does not “think” in the same way that humans do. AI, including language models like me, are essentially complex algorithms designed to process and manipulate data based on patterns and statistical relationships. AI can solve problems, answer questions, and generate output based on the input it receives, but it does not possess consciousness, self-awareness, or emotions like a human does.
In other words, AI can simulate certain aspects of human intelligence, such as problem-solving, pattern recognition, and language understanding, but it does not actually “think” or experience the world as humans do. AI’s capabilities are a result of its training data and the algorithms used to process that data, rather than any form of genuine thought or understanding.
I actually disagree, slightly, with one characterization: “essentially complex algorithms”. I don’t want to minimize the complexity of the implementation, because there’s some amazing work being done to make this stuff useful. But as I described above, the basic algorithm is conceptually simple.
So, it’s a simulation. But will it take over?
Will AI take over the world?
I once again asked the source, ChatGPT.
As an AI, I can’t predict the future, but I can provide you with some perspectives on the matter.
Whether AI will “take over the world” depends on how you interpret the phrase. If you mean AI becoming self-aware and overthrowing human rule, this is an unlikely scenario. Most AI researchers and ethicists are working to ensure that AI systems are designed to be safe, controllable, and beneficial to humanity. The idea of AI taking over in the way often depicted in popular culture is speculative and not grounded in current scientific understanding.
However, if you mean AI becoming increasingly integrated into various aspects of human life and significantly impacting our society, then this is already happening. AI is transforming industries like healthcare, finance, transportation, and entertainment, among others. It is automating tasks, enhancing decision-making, and providing new capabilities that were once thought to be the exclusive domain of humans.
The future of AI will largely depend on how humans develop, regulate, and use these technologies. It’s essential for society to carefully consider the ethical implications and potential consequences of AI’s growing presence in our lives. By doing so, we can harness its potential for good while mitigating potential risks and ensuring that AI remains aligned with human values and interests.
So, no…2 but yes.
It’s not about the tech
My take is that it’s no longer about the technology. That will, of course, continue to improve, but we’ve reached a point where more important issues have been raised.
What matters more now is how we “puny humans” will use the technology; as ChatGPT said, it depends on “how humans develop, regulate, and use these technologies.”
There are plenty of stories making the headlines of AI-based solutions going off the rails. “Self-driving” cars plow into things they shouldn’t. AI chatbots become more aggressively racist the more they interact with (apparently the wrong kind of) people. AI résumé filters unintentionally reinforce improper stereotypes.
Teenagers and college students are excited at the possibility of ChatGPT-like technology doing their homework for them. (Presumably precipitating further decline in their own writing skills.) I pity the teachers and administrators who now have to ask, “Did an AI write this?” as they try to evaluate their students’ work.
And at what point does it even matter if an AI wrote something, if it got the “job” done?
But there are good stories as well. For example, you and I simply aren’t capable of pulling information from the gazillions and gazillions of resources out there — something an AI might do in seconds to present solutions we’d never have discovered on our own. Similarly, consider the applications in medicine, where AIs trained in detailed pattern recognition are now diagnosing conditions more accurately and much sooner than their human counterparts.
To implement all of this properly still requires human oversight, since we’ve also seen that at least the current iteration of AI can be very confidently… wrong.
Watch this space. The developments in AI are pretty exciting; I’m particularly enamored with the medical diagnostic possibilities.
But as always, remain skeptical. Things are not as dire as headlines might lead you to believe, and things are not as wonderful as headlines might lead you to believe.
But things are changing. That much is sure. It has always been thus.
While I used AI for parts of this article, I’m not artificial, at least not yet. Let me prove it to you every week: Subscribe to Confident Computing! Less frustration and more confidence, solutions, answers, and tips in your inbox every week.
Footnotes & References
1: Technical term, that. Means “a hell of a lot.”
2: Which is what the more paranoid might say is exactly what we would expect an AI with a hidden agenda to tell us.
An interesting perspective: Enjoy the Singularity: How to Be Optimistic About the Future
Note: All ChatGPT answers used the GPT-4 model, recently introduced, and were copied verbatim.