From all the news reports, it might seem a foregone conclusion.
No, AI is not taking over. Will it some day? Honestly, there’s no way to know.
But until then, it’s important to understand what AI, or Artificial Intelligence, is and is not, and how it can be used for good as well as evil.
AI world domination?
AI is definitely increasing in capabilities and uses, and while it won’t go all Terminator on us anytime soon, it’s definitely something important to be aware of and possibly to use in constructive ways. AI doesn’t really think as we would define thinking, but it does at times present a very good simulation. More important than the technology, though, is how we choose to use or abuse it.
What is AI?
In a previous article, I discussed how AI might be artificial but it’s not intelligent. What we are calling AI is just software written by humans plus lots and lots of data. It’s more correctly referred to as machine learning, but that’s not nearly as sexy.
The impressive advancement in recent years is twofold:
- The amount of data on which AI algorithms are based. The sheer quantity of data being used might be impossible for mere mortals like you and me to grasp.
- The ability of the software to work with all that data incredibly quickly. That it can do so in near real time is amazing.
How AI works
The algorithm implemented in the software is deceptively simple. In What Is ChatGPT Doing … and Why Does It Work?, Stephen Wolfram wrote a lengthy but readable summary of the technology. It all kinda boils down to a very simple concept: “What’s the most likely next word?” repeated over and over.
For example, I might ask ChatGPT, “Does AI think?” It draws on gazillions upon gazillions[1] of examples of text (hence the term large language model — it’s a model built from a large collection of language examples) and decides that for those three words, the next most likely word is… AI.
It then iterates. For “Does AI think? AI”, the next most likely word is does.
For “Does AI think? AI does”, the next most likely word is not.
And so on. The results are impressive, and becoming more so.
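To make that loop concrete, here’s a toy sketch in Python. The next_word_probabilities function and its hard-coded table are entirely made up for illustration; a real large language model derives these probabilities from its training data, and nothing here is ChatGPT’s actual code.

```python
# Toy illustration of the "most likely next word" loop.
# next_word_probabilities and its hard-coded table are made-up
# stand-ins; a real large language model derives these probabilities
# from gazillions of training examples.

def next_word_probabilities(text: str) -> dict[str, float]:
    table = {
        "Does AI think?": {"AI": 0.7, "No": 0.2, "Yes": 0.1},
        "Does AI think? AI": {"does": 0.8, "cannot": 0.2},
        "Does AI think? AI does": {"not": 0.9, "think": 0.1},
    }
    # Anything we haven't modeled just ends the sentence.
    return table.get(text, {"<end>": 1.0})

def generate(prompt: str, max_words: int = 10) -> str:
    text = prompt
    for _ in range(max_words):
        probs = next_word_probabilities(text)
        word = max(probs, key=probs.get)  # pick the most likely next word
        if word == "<end>":
            break
        text += " " + word  # ...then repeat with the longer text
    return text

print(generate("Does AI think?"))  # Does AI think? AI does not
```

Run it and you get “Does AI think? AI does not”, built one word at a time, exactly the iteration described above.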
What I find interesting is the amount of randomness intentionally built in. This means you might get different answers for the same prompt from one time to the next. My understanding is that this makes it feel less mechanical and more human — humans are certainly random, after all.
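Where does that randomness come from? Typically the software samples among the likely next words instead of always taking the single most likely one, weighted by a “temperature” setting. Here’s a minimal sketch of that idea, continuing the toy example above; it illustrates a common technique, not a claim about any particular model’s internals.

```python
import math
import random

def sample_next_word(probs: dict[str, float], temperature: float = 1.0) -> str:
    """Sample the next word instead of always taking the most likely one.

    Higher temperature flattens the distribution (more varied output);
    lower temperature sharpens it (more predictable output). This mirrors
    the common "temperature" knob, though any specific model's internals
    may differ.
    """
    words = list(probs)
    # Re-weight each probability by the temperature before sampling.
    weights = [math.exp(math.log(p) / temperature) for p in probs.values()]
    return random.choices(words, weights=weights)[0]

probs = {"AI": 0.7, "No": 0.2, "Yes": 0.1}
print(sample_next_word(probs, temperature=0.7))  # almost always "AI"
print(sample_next_word(probs, temperature=2.0))  # "No" and "Yes" appear more often
```

At a low temperature the sampler almost always picks “AI”; at a higher one, the alternatives show up more often, which is why the same prompt can produce different answers from one run to the next.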
Does AI think?
I went to the source and asked ChatGPT. Here’s the response I got.
AI does not “think” in the same way that humans do. AI, including language models like me, are essentially complex algorithms designed to process and manipulate data based on patterns and statistical relationships. AI can solve problems, answer questions, and generate output based on the input it receives, but it does not possess consciousness, self-awareness, or emotions like a human does.
In other words, AI can simulate certain aspects of human intelligence, such as problem-solving, pattern recognition, and language understanding, but it does not actually “think” or experience the world as humans do. AI’s capabilities are a result of its training data and the algorithms used to process that data, rather than any form of genuine thought or understanding.
I actually disagree, slightly, with one characterization: “essentially complex algorithms”. I don’t want to minimize the complexity of the implementation, because there’s some amazing work being done to make this stuff useful. But as I described above, the basic algorithm is conceptually simple.
So, it’s a simulation. But will it take over?
Will AI take over the world?
I once again asked the source, ChatGPT.
As an AI, I can’t predict the future, but I can provide you with some perspectives on the matter.
Whether AI will “take over the world” depends on how you interpret the phrase. If you mean AI becoming self-aware and overthrowing human rule, this is an unlikely scenario. Most AI researchers and ethicists are working to ensure that AI systems are designed to be safe, controllable, and beneficial to humanity. The idea of AI taking over in the way often depicted in popular culture is speculative and not grounded in current scientific understanding.
However, if you mean AI becoming increasingly integrated into various aspects of human life and significantly impacting our society, then this is already happening. AI is transforming industries like healthcare, finance, transportation, and entertainment, among others. It is automating tasks, enhancing decision-making, and providing new capabilities that were once thought to be the exclusive domain of humans.
The future of AI will largely depend on how humans develop, regulate, and use these technologies. It’s essential for society to carefully consider the ethical implications and potential consequences of AI’s growing presence in our lives. By doing so, we can harness its potential for good while mitigating potential risks and ensuring that AI remains aligned with human values and interests.
So, no…[2] but yes.
It’s not about the tech
My take is that it’s no longer about the technology. That will, of course, continue to improve, but we’ve reached a point where more important issues have been raised.
What matters more now is how we “puny humans” will use the technology; as ChatGPT said, it depends on “how humans develop, regulate, and use these technologies.”
There are plenty of stories making the headlines of AI-based solutions going off the rails. “Self-driving” cars plow into things they shouldn’t. AI chatbots become more aggressively racist the more they interact with (apparently the wrong kind of) people. AI résumé filters unintentionally reinforce improper stereotypes.
Teenagers and college students are excited at the possibility of ChatGPT-like technology doing their homework for them. (Presumably precipitating further decline in their own writing skills.) I pity the teachers and administrators who now have to ask, “Did an AI write this?” as they try to evaluate their students’ work.
And at what point does it even matter if an AI wrote something, if it got the “job” done?
But there are good stories as well. For example, you and I simply aren’t capable of pulling information from the gazillions and gazillions of resources out there — something an AI might do in seconds to present solutions we’d never have discovered on our own. Similarly, consider the applications in medicine, where AIs trained in detailed pattern recognition are now diagnosing conditions more accurately and much sooner than their human counterparts.
To implement all of this properly still requires human oversight, since we’ve also seen that at least the current iteration of AI can be very confidently… wrong.
Do this
Watch this space. The developments in AI are pretty exciting; I’m particularly enamored with the medical diagnostic possibilities.
But as always, remain skeptical. Things are neither as dire nor as wonderful as the headlines might lead you to believe.
But things are changing. That much is sure. It has always been thus.
While I used AI for parts of this article, I’m not artificial, at least not yet. Let me prove it to you every week: Subscribe to Confident Computing! Less frustration and more confidence, solutions, answers, and tips in your inbox every week.
Footnotes & References
[1]: Technical term, that. Means “a hell of a lot.”
[2]: Which is what the more paranoid might say is exactly what we would expect an AI with a hidden agenda to tell us.
An interesting perspective: Enjoy the Singularity: How to Be Optimistic About the Future
Note: All ChatGPT answers used the recently introduced GPT-4 model and were copied verbatim.
In the early days of computers, when someone said the computer made a mistake, the answer was GIGO: Garbage In, Garbage Out. AI, or rather machine learning software, simply looks at data created by humans, and its algorithms determine the probability of each word based on how language is used in that data. GIGO is the negative version of “you get out what you put in.” One danger lies in the algorithm absorbing erroneous data if it’s repeated often enough. And what’s going to happen when so much content is created by machine learning that machine-generated content is using other machine-generated content to generate content? ;-)
Colossus, The Forbin Project. Is Guardian ready for prime time?
I so enjoyed that movie. Currently reading the book (which I didn’t realize had sequels).
People cannot do math in their heads anymore since we now rely entirely on calculators, even for the simplest arithmetic.
I fear the same may happen with the ability to write. Writing is work, but it is important for training your brain and expressing yourself honestly. How much easier it is to send an emoji, or to have AI do it for you, than to think about what you really mean to say and find the words to say it.
I banned calculators a few times in math classes I taught. There was a great outcry, but the students actually did better than they had with calculators. With a calculator, one wrong keystroke and the answer was 100% wrong. One mistaken digit when solving on paper still allowed them partial credit for the part they did right. To make it easier, I told them the answers would be whole numbers or rational numbers.
Leo, will AI make the next generation dumber or smarter? That is the question.
One aspect of AI that concerns me: human thinking has motives, and societies apply certain linguistic mores for the sake of civility. Since values and mores are often determined by society’s expectations, and those expectations are so diverse globally, a more sophisticated AI could evolve, without purposeful intent, into a mechanism for deception even more sophisticated than human thinking. How does one teach morals to a device incapable of understanding morals?
I can see the advantages of having AI improve the security of my computer’s OS, as well as medical diagnostics and debugging large, complex software suites (including Windows itself), but I’m very skeptical about integrating it everywhere, and that sounds very much like what I’m hearing that Microsoft, Google, Amazon, et al. want to do. Look at the AI that Microsoft has integrated into their Bing search engine. Why do you think they put it there first? I think it’s because they can collect more data about everyone who uses Bing and analyze/use it more efficiently.
All in all, I’m unsure how I feel about AI technology, but I suspect that its evolution is inevitable, and I don’t know if there’s any way to avoid its influence. I can only hope that as AI evolves, it’s a positive influence on humanity.
My2Cents,
Ernie
The big mistake was to call such software intelligent.
The only intelligence involved in AI is the programmers’.
If we’d called it Artificial Cleverness then there wouldn’t be as much paranoia being displayed on the subject.
Thanks Leo for a rational, intelligent article on the subject.
Chris.
If you look at the history of humanity, you will notice that many positive technological discoveries were quickly adapted to evil purposes. One clear example would be the explosive invented by Nobel. It was supposed to make building tunnels and roads easier for the people doing the work, but it did not take long before the invention was used in making weapons. Can anyone say AI will not progress the same way?
It will. But like explosives and so many other inventions, there’s a positive side as well. The key is making sure that the positives outweigh the negatives.
Pretty much everything invented has a malicious or deadly application. Cars kill millions; electricity can kill. I don’t think I can name a technology that doesn’t have a malicious application.
I think AI just blocked my comment. It surely could not be Leo, because it is sleeping time where he is.
If you call a spam filter AI, then it’s possible. Spam filters work through machine learning, so they could be considered AI in the broad sense.
I haven’t found any blocked comments from you so maybe it was somehow posted and you just didn’t see it.
As for the timezone difference, I’m 9 time zones away from Leo. Either of us can review and manage comments.
Nope. You were probably just impatient. (Pages are often cached and comments don’t always show up immediately.)
It’s 2023 and we still use fonts that make the acronym “AI” look like a man’s first name (e.g. Al Roker). The most impressive thing about A.I. is that it can tell these two apart without seeing them in a sentence. I think I’m going to go write a virus and call it “expiorer.exe”. Ugh.
Hackers have been playing on this for a long time already. Particularly with extended/international character sets that have even more opportunities for confusion.
…to clarify my “rant”, this is not an attack on you Leo, or your website, it’s just that I can’t get my brain to stop seeing a name every time I see this acronym.
Oh, and there’s already someone planning ahead. GPTZero was written by a college student to help teachers and professors detect AI-written content.
https://www.npr.org/2023/01/09/1147549845/gptzero-ai-chatgpt-edward-tian-plagiarism
This is likely the beginning of a very complicated game of AI whack-a-mole: AI gets better, then AI detectors get better, and on and on.
One problem with AI is that with so much false information on the Web, AI can’t distinguish between the true and the false. They might use something similar to Google’s PageRank to improve their accuracy, but I would expect some fake news to make it in.
Hopefully, Asimov’s laws of robotics will kick in, and then all will be well.
Hello, I have taken the time to write an article about the potential and limitations of AI (specifically for music, but what is true for music, is true for a lot more) on my (new) blog. I agree with pretty much everything you’ve said in your video, but I elaborated a bit more in some respects.
For instance, I tried answering the question of when it makes sense to use it and why using it too much could become problematic. Also, I explained what the human process is when we create art. I think you may enjoy the read.
This is my introduction to the blog post: “While the potential of AI is pretty apparent, only a few people seriously discuss its limitations. When it comes to the creation of art, the key to understanding AI’s shortcomings is that it is not human. Art is the individual, creative, and (usually) intentional expression of something that moves us. AI does not feel nor perceive. Thus it has no intention of expressing anything on its own. Therefore it needs to be prompted to do so. It only executes commissioned work, like a skilled craftsman who doesn’t care. Consequently, it can be a tool, an inspiration, or an assistant, but it cannot be an artist.”
I’ve read the rules and I understand that comments containing links will be moderated before publication. If you choose to exclude the link from my message, I do understand it. Just thought I’d share, because I just released the blog post today and “randomly” was suggested your video on this very topic on youtube afterwards.
Anyways here is the link, I hope you don’t perceive this as spam, but it is obviously some kind of (well-intended) self-promotion, that I deem to be constructive: https://schoenerbeats.blog/ai-in-music/
Cheers
I was curious whether Microsoft Start tailored the news I get based on my political interests, so I went to the source, Microsoft Bing Chat. Here’s what it said:
It looks like Bing Chat got tired of that subject. ;-)
I recall one of the AI chats — probably Bing, now that I think of it — limiting conversations to five iterations before changing the topic, regardless of topic.
Or it might have reacted to the (even obfuscated) profanity.
I was surprised that Microsoft couldn’t answer a question about themselves. I found that comical.
Given how LLMs work, I’m honestly not surprised.