Transcript
If AI is plagiarism, then we are all plagiarists. Hi, everyone. Leo Notenboom here for askleo.com. And yes, you'll notice that I'm sitting in a slightly different environment. I'm doing a test. This is something new, and I'll explain it in a little more detail towards the end of the video. So there's a lot of kerfuffle around AI these days. I basically want to talk about some of the issues and some of what I believe are misunderstandings about what AI is and isn't. I want to start by comparing AI to ourselves. For example, let's say you read a book. You go to your library, you check it out. It's big, it's thick, it's got lots of information. You chew right through it; you love it. Then somebody comes along and asks you a question about what you've read. You, of course, provide an answer. That answer will be based on your understanding of the book. You won't have memorized the book, but you'll have understood to some degree what the book is all about, and you'll provide an answer in your own words. In a sense, what you're doing is regurgitating the information you've consumed, in a different form, in response to a question.
It doesn't have to be a question. Of course, you could go off and write your own paper or even write your own book, but it's your own book, based on the knowledge you've gained from reading that other book. AI does the same thing. It does almost exactly the same thing. The difference, of course, is that it does it at scale. AI consumes a lot of information. It does not memorize the information. By that, I mean it doesn't have copies of all of these books it's read in its memory, so to speak. It simply has a representation of the words, how words are structured, what concepts are there, how things lay out. But it's absolutely not a photocopy of the book, just like it isn't for you after you read a book. You have concepts, you have ideas, you have words, you have terminology that you then put together and use in your own way to generate content, if you will, in response to whatever prompt you're given, be it a question or, like I said, an incentive to write an article, a blog post, or even your own book. Honestly, that's how humanity works.
We learn, we consume information. We then synthesize that information and use it in different ways to create more information, to accomplish tasks, to do whatever it is we want to do with that information. Like I said, I don't see AI as being really any different. AIs consume information, they process information, and then they produce information in response to the prompts and questions we give them. That's not plagiarism. That's learning. That's how humans have learned since the dawn of time. The difference, of course, is scale. AI is doing it at a much higher rate, much more rapidly, consuming significantly more information than any single human could. In the normal case, with just humans involved in the process, the creators of content have various ways of getting compensated. Perhaps you buy a book, perhaps you subscribe to a magazine, perhaps you subscribe to something else, perhaps it's at the behest of a company you work for or a patronage situation, whatever. There have always been incentives and compensation for creating content that others can then consume. Given the massive scale of content consumption by AI, it makes us uncomfortable; it seems like there should be some additional or different way of compensating the creators of content, because it's being used in this way.
Even though it's being used the same way it's been used since the dawn of time, the fact that it's AI and the fact that it's happening at such a large scale makes people uncomfortable. I don't really disagree. Like I said, I do think this is really nothing new, but I do think there is an opportunity here for us to perhaps more fairly understand how content creators of all kinds should be compensated. I say "of all kinds" because we talk about writing, of course, since that's one of the first and most common applications of AI. But in reality, this applies to art, this applies to music, this applies to video; it applies to all of the different ways AI might be getting used. It's all similar to what humans have done in the past. The difference is scale. That seems like an opportunity for at least revisiting how content creators should be compensated. Now, one of the things I find interesting is that a lot of content creators, mostly websites, have the option to opt out. Supposedly, the AI scrapers that read the internet will recognize that they have been instructed not to read the content on a particular website.
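For the curious, that opt-out typically lives in a site's robots.txt file. Here's a minimal sketch, assuming a site wants to turn away a couple of the publicly documented AI crawlers (GPTBot is OpenAI's crawler and CCBot is Common Crawl's; which names any given site lists will vary):

# Ask OpenAI's crawler to skip the entire site
User-agent: GPTBot
Disallow: /

# Ask Common Crawl's crawler to skip the entire site
User-agent: CCBot
Disallow: /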
I get that not all the scrapers pay attention to that, but that's the intent. It seems odd to me, and I say that because it's equivalent to me writing a book and then saying, "Anyone can read it, but you? You are not allowed to read it," for whatever arbitrary reasons I might have. And yes, again, I keep coming back to this fact (opinion, whatever you want to call it) that AI consuming information is very roughly equivalent to you and me consuming information, and that the lines we might draw seem rather arbitrary, based on concerns that I don't think are necessarily accurate. Now, I want to be clear: I have concerns about AI. Plagiarism isn't one of them, because I don't see it as being any different than what humans have been doing since humans have existed. My concerns are more along the lines of how we use AI: our overreliance on it, our over-trusting it, even in these early stages, and our attempts to use AI to deceive. These are all things that concern me greatly. But like I said, they really don't have anything to do with the concept of plagiarism.
Here's one of the approaches, one of the mental models, that I use for whether a use of AI is appropriate or not. Remember, like so many things, AI is a tool. It reminds me of something from many years ago: someone was complaining about how they spent too much time in their day doing email. Well, they weren't really doing email. Email is just a tool to accomplish some other task. It's the other task that needed to be examined and potentially reprioritized or whatever. The same thing is true for AI. AI is just a tool. It can be used for good or evil. It can be used in inappropriate ways. We've seen this already: AI slop showing up in Amazon, in bookstores, and in other places, where people are trying to sell books that have been created completely by AI, and the books themselves are just garbage. And yet they're still lucrative enough, because people buy based on the title, for example, not necessarily on the actual content. They don't find out that they got duped with this garbage until much too late. The classic case, especially when ChatGPT first came out, is that we were very concerned that students would be using AI to do the work for them.
Remember that writing an essay or creating a document of some sort is intended as a way to confirm or prove that you have actually learned something, that you know something you can then turn around and put into words. Using AI to do that really only proves that the AI has learned something; it doesn't do anything for you. And most importantly, it doesn't encourage you to actually learn what you were supposed to have learned in the first place. And of course, of late, we're also seeing things like non-consensual images of various sorts. I won't run through the whole range, but you can imagine the classic case: in the past, we were able to put someone's head on somebody else's body. Now with AI, we're able to do that, A, much more easily, and B, make it a video, so you can see people doing things they never actually did and never would have agreed to be shown doing. Those are the kinds of things that absolutely do concern me about AI. It's not the tool, though. It's how the AI is being used. AI can be used for good.
I use it myself. I use it for assistance in getting ideas, summarizing documents, and creating images. And honestly, that's one of the places where there's so much room for what I would call valid disagreement. For example, I usually use DALL-E to create an article image, an image to represent one of my 7 Takeaways newsletters. I think it gives it a little more personality, something more visually striking. It's something that shows well on social media when those issues are shared there. Not everybody agrees. Not everybody likes the fact that I'm using AI for that. They don't like the fact that it looks like AI. They don't think it adds any value. I understand. I disagree, but I understand. And that, I think, honestly applies to pretty much all art and content and whatever. Some people like it, some people don't. Some people aren't even on the fence: they absolutely hate it. But again, it's the same thing as has always been the case, whether it be AI-generated or not. Ultimately, I think we have a long way to go here. I think one of the things we really need to remember is that as humans, we tend to fear first what we don't understand.
Yeah, who understands AI? There's a lot of opportunity for fear. But don't let the fear get in the way of actually understanding and seeing some of those opportunities and potentially making use of the technology, or at least not making prejudicial judgments about somebody else's appropriate use of the technology. Remember, every new technology has been through this. I'm sure we could go back to the wheel, but the things that come to mind for me are the printing press, television, radio, automobiles. All of these had individuals who were seriously concerned that they each represented the fall of humanity, that they were going to lead to horrible events, horrible things happening. We already have ways to destroy humanity; we don't need AI to do that. That AI might (it could) isn't what I'm concerned about. What I'm concerned about is how people will end up using this new tool in ways they certainly could have used other tools in the past; AI is just letting them do it more efficiently. Anyway, let me know what you think. I'm interested in your perspectives on AI, AI use, where you think it's appropriate, where it's inappropriate, and where you think it's headed.
As always, leave your comments down below. And while I can't respond to every comment (I just get too many), I definitely appreciate your ideas, and I will definitely respond to at least a few as I have the opportunity throughout the day.
The Test
Now, let's talk about this test. What I've been looking for is a less formal way to talk with you. I realize this isn't necessarily a discussion, in the sense that I don't have somebody here; I'm talking at a camera. To me, you are a camera. But obviously, the discussion happens in the comments and so forth. I wanted something less formal than the traditional Ask Leo videos. Right now, I'm mostly talking off the top of my head. I do have notes; you may have seen me looking at them from time to time here on my iPad, because I don't want to be completely unprepared. And to be honest, if I just start talking about something, it's when I'm done that I remember all the things I wanted to include. There's that much at least. The intent here is that there be only light editing. I've found that I need to stop for a moment and gather my thoughts occasionally or review my notes, and that stuff will get edited out.
But things like, I don't know, maybe you heard the dog barking in the background or something, will stay. That's what this is: an informal me, just sitting down in front of the camera and talking at you, for a variety of reasons. I mean, ultimately, it's a different vibe, right? It's just me being a little bit less formal. I may end up publishing this in podcast format. Now, I realize this is a video, and I will be publishing it, of course, on YouTube. Two things about that. One, as it turns out, YouTube is now the single biggest place people go to for podcasts. So I believe all I really need to do is mark this as a podcast and it'll show up as being available as such. We'll see how that goes. Like I said, it's an experiment. The other is that I have long held that it is extremely dangerous to publish on somebody else's platform. Don't get me wrong, YouTube is wonderful, and I'm very grateful for the opportunities I've been given here. But what I'm also going to do, and I've already been doing this with all of my normal Ask Leo videos, is keep a copy of it on a different platform.
When you look at this video on YouTube, great. There will probably be a page on askleo.com where this video also exists, potentially with the transcript; I haven't decided that yet. But also, if you are signed in to an Ask Leo account, you will get the video from a different source, actually from one of my own domains, directly embedded. Bottom line is that there won't be YouTube interference and there won't be YouTube ads. Whereas if you are not logged in, it's just an embed of the YouTube video. Anyway, I'm playing with that as well. I've been doing the whole YouTube/not-YouTube thing for quite some time with my regular videos, and I'm expecting to do the same with this. Once again, I'm curious how this strikes you. I'm tempted to give it a name. I don't know what that name should be, and I don't know if it's important, because I'm still publishing it through the normal Ask Leo YouTube channel. But if you've got thoughts on that, heck, if you've got thoughts on the format, the name, any of this stuff, once again, leave them down below in the comments. I absolutely read them all, even though I may not have the opportunity to respond to every single one.
So thank you for sticking it out this long, for participating in this little experiment, and we’ll see how it goes. My intent, by the way, is to hopefully do one of these a week, thereabouts, usually over the weekend. It’s currently Saturday, and I’ve got an opportunity to do this. So anyway, thanks for watching, and we’ll see you again soon. Take care. Bye-bye.
Do this
Subscribe to Confident Computing! Less frustration and more confidence, solutions, answers, and tips in your inbox every week.
I'll see you there!
When I saw the title of this podcast, I thought about how I did papers in college:
I’d go to the library, go through the catalogue, order a pile of books at the desk, sit down at a table, and take notes from the books on 3″x5″ cards. Then I’d go home and go through the notes, arrange them in a coherent sequence and type the paper paraphrased from the notes. I’d compile a bibliography of my sources and turn in my paper.
That’s pretty much exactly how AI works.
Minus the bibliography, in many cases.
. . . and that (the bibliography) is the difference between quoting a source and plagiarizing content.
Ernie
Is it, though? If I write an article based on all the reading I’ve done (and it’s a lot), am I plagiarizing just because I don’t (or even can’t) cite my sources?
In the case of a book or books you’ve read, cite the books as your source of information. Another way to give credit where it’s due (which is what citations are all about) would be to write/say something like “After reading [list of book names here] …”, then continue with your thought/statement.
Much of what I’ve learned has been from reading online/books. I often amalgamate knowledge I’ve obtained through a combination of reading from many sources and experimentation. If I refer to something I’ve read, I cite my source. If I’m discussing something I’ve done (built/assembled a computer/reconfigured a device to function in a manner not originally intended/added functionality to an OS it did not originally support/etc.), I’m discussing my experiences because it’s something I’ve done, so I’m the source, and no citation is required, unless I was following directions/steps from a known source, in which case, I cite that source.
An example is enabling secure boot support in my Garuda GNU/Linux system, using information I found in the CachyOS forums. When I wrote about what I did on It’s FOSS, I was careful to cite my sources, namely the article I followed from the CachyOS forum and the documentation I found on the sbctl GitHub page. sbctl is the package I used to add support for secure boot, combined with having kernel updates automatically signed.
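For anyone who wants the general shape of that, the sbctl workflow looks roughly like this. This is a sketch rather than my exact steps, and the kernel path is just an example; yours will differ by distro and kernel:

# Check the current Secure Boot state and whether the firmware is in setup mode
sbctl status

# Create signing keys, then enroll them (--microsoft also keeps Microsoft's
# certificates, which some firmware and option ROMs still expect)
sbctl create-keys
sbctl enroll-keys --microsoft

# Sign the kernel; -s saves the file in sbctl's database so it's re-signed
# automatically when updates arrive
sbctl sign -s /boot/vmlinuz-linux-zen

# Confirm everything sbctl tracks is signed
sbctl verify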
This is what I always try to do. I hope it clarifies what I was saying,
Ernie
I just answered an Ask Leo! question using Google Gemini. I copied and pasted a portion of the answer and gave the credit to Gemini. If I hadn’t credited Google, that would have been plagiarism.
AI could jeopardize sites like Ask Leo! if people ask AI chatbots everything. (I probably shouldn’t be giving people ideas.) Of course, I’ve run into some AI hallucinations, because there’s a lot of wrong information on the web, but it’s pretty good at tech questions.
I have no comment on AI… it is what it is and still relatively early in its development. We’ll see how it morphs over time.
As for this new video format, a “casual atmosphere” is more conducive to chatting.
I think the Ask Leo ‘instruction’ and ‘answer’ videos are fine in the more formal “office” type setting. But for “According to Leo” conversations, the Family Room type environment is more relaxed and, I believe, allows you to just be yourself (rather than a teacher).
I like this format (yeah, some work on lighting is needed), but this was a very good “pilot” show. I look forward to future installments.
Thank you for helping me understand AI a bit more. The unknown is usually frightening, as AI is to me. But I am learning!
Leo,
I agree with your assessment of AI, and my concerns about it have nothing to do with plagiarism, either. Plagiarism can be avoided by citing sources, perhaps in a bibliography at the end of the AI response, or at the point where the quotation ends (parenthesized). That’s how I was taught to do it in college.
My concerns involve AI’s misuse, either for nefarious/fraudulent intent, or by companies that want to collect more detailed information on visitors to, or users of, their product(s)/website(s) – without visitors’/users’ permission. That’s one reason I avoid allowing Copilot to be installed on my computer. It may be entirely benign today, but what about the future? I’m not so sure I trust Microsoft quite that much.
With regard to setting up your own web blog (podcast), please do so, and add an RSS subscription feature so I can add it to my VLC list of podcasts. As for a name, if you intend to keep your content limited to tech issues, call it Confident Computing, Casually. If not so limited, call it something like Notenboom, Leo’s, or The Casual Corner (just a few thoughts off the top of my head).
Ernie
My RSS feeds for this aren’t working (on my list to diagnose), but https://www.youtube.com/playlist?list=PLj2GP54CyoNo7zD13exeJ2Lx7GhQAMFqs is YouTube’s idea of the podcast. Sadly, there’s no RSS feed from that that I’m aware of.
At my age (66), and not having written a document, essay, or anything formal for someone else in years, I have no vested interest in AI. My knowledge of the subject is very little, if any. I do agree that an informal, relaxed format is good for a discussion, and a formal one for a lesson. I can see AI being useful or dangerous, as with any technology. I debated with myself about writing this post, as I really don’t know enough to justify posting anything. But I do engage with others, as humans are social beings (for the most part). Just my two cents’ worth.
Ken, if I understand correctly, the point of this experiment is for Leo to learn whether his readers would be interested in it. That means that any of us who viewed the video have something to contribute, whether it’s regarding the video content or the informal format, not to mention whether we would prefer that Leo turn this experiment into something more permanent. 🙂
My2Cents,
Ernie
Thanks for your thoughts on AI. I found them informative.
I also like the format.