Hmm. By this argument, getting ChatGPT to write your college-course essays for you is also "nothing new", given the abundant opportunities for plagiarism that existed before.
But I think ease of use matters, in part because trivial (in the sense of "easy" / "taking little time") deceptive acts are easier to justify for otherwise-ethical actors. It's also easier to get into murky waters if step 1 isn't "get my fraud on" but rather "oh what's this new feature?" or "hmm, what would ChatGPT say about Descartes?"
Deception is a blurry line, but these are all technological steps that make it easier to approach that line -- and then, perhaps in a thoughtless moment, cross it.
>given the abundant opportunities for plagiarism that existed before.
We cannot blame objects for things people do. You can use a knife to cut your dinner, or you can use it to kill someone. Neither the knife nor the knife maker is to blame for your wrongdoing.
You can choose to lie or not. Don't blame ChatGPT if it is misused by someone. Don't blame Adobe if Photoshop is misused by someone.
Interestingly, many people now do blame them, much like the discourse around guns. It seems that ease of use and scalability really do change the nature of the discussion.
I think it's fair to consider what the tool is designed to do as well. Generative AI is capable of distorting reality, but I would argue that distortion is not the primary purpose of the tool. Likewise, knives can be used to kill, but for the most part they are tools made for cutting, not killing. On the other hand, I fail to see a purpose for guns apart from killing, and for some guns, such as handguns, apart from killing people.
>I fail to see a purpose for guns apart from killing,
You can use guns for target shooting, you can use guns as a deterrent when your life is in danger, and you can use guns for hunting.
Of these three things you listed, two of them are killing. A gun wouldn't be useful as a deterrent if the threat of its application wasn't present—you can't brandish a feather and get the same effect because a feather offers no threat and does not have a reputation of lethal harm. And hunting is literally killing, although it may have a legit purpose, such as sustenance.
> Never saw anyone hunt deers using handguns or assault rifles
I assume you have never been hunting.
Thompson Center makes the Contender handgun specifically for hunting.
Every hunter should carry a handgun for self defense if the critters do not cooperate and fall over quickly. Hog hunts can get dicey even if you are experienced and prepared.
AR-type rifles are favorites for hunting, but not usually in .223.
And of course the reason for an armed and trained population has nothing to do with hunting. It's about stopping predators who prey on good people.
That's a licence class of its own, just as a passenger vehicle driver's licence doesn't qualify you to drive an articulated vehicle, a heavy vehicle, or a vehicle for more than 9 passengers.
Thanks for the video, didn't know about using semis for pig control.
Ok, yes, that makes sense, I think, with its own license class, for those who work with / need to do that. (They seem like otherwise dangerous weapons, if anyone could get their hands on them.)
You probably haven't seen many hunters, then. Many people use AR-10s for deer hunting around where I live, which would probably count as an "assault rifle" to you.
The question should be: What is the purpose of killing?
There are good people who want to be left alone and there are predators.
Predators can do a lot of damage over time.
I want the good people to stop the predators. In an ideal world, the predators would die the first time they tried to prey on the innocent. That is the purpose of the killing: to stop predators who prey on good people.
While I cannot argue that this statement is wrong, it does make me wish some things had never been invented in the first place, because of how much harm they can cause and because there will always be a huge chunk of the population who will surely abuse them for bad ends. Nuclear weapons, AI, etc. Sometimes I even wish the internet didn't exist, so that we could continue living simpler lives.
It’s an analogy that misses the point entirely. To make ChatGPT, OpenAI had to use a data set that included the results of many people's work. Many people write as labor; it’s their most valuable asset. They were never asked whether they’d like the output of their labor to be used to make a writing machine, a machine that transfers the value of their asset to OpenAI. Also, an individual can make moral choices about how to apply their labor; a machine cannot. So it is theft of economic and moral power.
The often proposed case against ChatGPT is that everything it produces is illegitimate, no matter how you use it. It would not be the knife in this analogy, it would be the murder.
> Don't blame ChatGPT if it is misused by someone.
I was very careful not to. I’m merely pointing out that availability matters.
People’s actions sit in all sorts of levels of gray. Of course it’s up to the person, but that doesn’t mean access/opportunity/circumstance don’t matter.
> I think ease of use matters, in part because trivial (in the sense of "easy" / "taking little time") deceptive acts are easier to justify for otherwise-ethical actors.
On the other hand, the tools to do this have been easily accessible for years. Sure, you had to buy Photoshop and spend 15 minutes on YouTube, but this is not a major hurdle. Having the tools universally available just increases the awareness and makes fraud less likely, as people are more inclined to not blindly trust them.
Until now, you would have needed enough disposable income to pay somebody to write comparable essays for you. Now that the option has been sufficiently democratized, we're forced to take more seriously the possibility that someone's work isn't hard-earned and evaluate our opinions of them (and their grades) differently.