The AI of Trolling


A friend linked me an interesting article today about a lawyer who used ChatGPT to supplement a submission for a case he was defending, citing cases that the AI recommended. The only problem was, the cases don't actually exist - the AI made them up. This raises lots of questions about what these tools are creating and how useful they are, but there is another interesting side to it too, as according to the article, the lawyer had himself questioned the legitimacy of one of the cases.

According to Schwartz, he was "unaware of the possibility that its content could be false." The lawyer even provided screenshots to the judge of his interactions with ChatGPT, asking the AI chatbot if one of the cases was real. ChatGPT responded that it was. It even confirmed that the cases could be found in "reputable legal databases." Again, none of them could be found, because the cases were all created by the chatbot.

Lols. Seems ChatGPT does have a sense of humor after all - it is a troll.

Which got me thinking.

If the AIs are scraping our content and then building outputs based on what has been accepted and supported by others, aren't they going to very quickly start mimicking the loudest voices on the internet - a fraction of the world population, but the ones who scream the most? These voices tend to be the most polarized and the ones who propel the "point system" of the attention economy, where it isn't about usefulness or accuracy, it is about beating the opposition.

We have already seen how AI usage can lead to biased decisions, like the racially biased risk-scoring algorithms used in US courts, and now that it is trawling everything, it is likely to do the same. It is also probably going to give answers that, at least on the surface, seem plausible, but won't hold up under a decent sniff test, as the lawyer above found out. This means it is going to feed us tailored information, using what it has available, like our own social media usage, and source its answers based on what we are likely to accept as correct.

Similar to a tailored Facebook feed, the AI chatbots and their kin are going to identify who we are (it isn't hard, since most require some kind of verified signup), trawl all the information they have on us, and then answer our prompts based on what we want to hear, setting up personalized information silos under the guise of robust information sources. It is like the worst of the news, made personally for us, to tell us whatever story we want to believe at the time. And, since there is currently no way to see into the sourcing, there is very little chance that the average person is going to go through the steps to verify the information - after all, they are using an AI to supplement their own content.

It is like a reverse web of trust.

A web of trust uses multiple information sources to apply a confidence level to data across a network - to say, for example, how likely eyewitness accounts are to be true, or whether someone really does have all the experience they claim on their CV.
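As a rough sketch of the idea (a toy model of my own, not any particular protocol - the source names and trust weights are made up), you could weight each source by its track record and ask what share of that trust actually confirms a claim:

```python
# Toy web-of-trust score: each independent source carries a trust
# weight, and a claim's confidence is the trust-weighted share of
# sources that attest to it. All names and numbers are illustrative.

def claim_confidence(attestations: dict[str, bool], trust: dict[str, float]) -> float:
    """Return the trust-weighted fraction of sources confirming the claim."""
    total = sum(trust[source] for source in attestations)
    if total == 0:
        return 0.0
    confirming = sum(trust[source] for source, confirms in attestations.items() if confirms)
    return confirming / total

# Three witnesses with different track records weigh in on one claim.
trust = {"witness_a": 0.9, "witness_b": 0.6, "witness_c": 0.3}
attestations = {"witness_a": True, "witness_b": True, "witness_c": False}

print(f"confidence: {claim_confidence(attestations, trust):.2f}")  # 0.83
```

The point is that confidence comes from cross-checking independent sources against each other, not from any single voice asserting something loudly.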

AI, however, doesn't need to show its working at this point, meaning that it can spit out content and it is up to the user to decide whether it is trustworthy or not. Yet most users still assume that what they are getting is correct at least at a general level, since it is meant to be built on reputable sources. This is obviously not true, since it is pushing out a lot of incorrect content as well - but who's checking?

Only people with specific expertise are likely to be able to point out a particular error within the field in which they are expert, but give that same person content from another field and it might pass muster. It is the same as when people read a news article about something they have intimate knowledge of and pick out the flaws easily, then turn the page and swallow whatever is said on a complex topic they know little about, as if it were accurate. It is an error of logic.

And as humans, we make them all of the time, which is why a lot of us turn to machines, under the illusion that they can do a better job than us, all of the time. They can't, but they can likely do a better average job across thousands of fields than any of us can as individuals, as ChatGPT can create content on hundreds of topics in the time it takes us to get started on the first paragraph of one - before we have even begun researching.

There is no way for us to keep up with the AIs from a content production standpoint, so in order to stay relevant, we have to battle on another front instead. Creativity is one of those fronts, personality another. However, it seems that for now at least, they are learning to troll better than us, because people are applying what they get out of them at a professional level, even when it is inaccurate.

Just imagine the scenario of the lawyer above asking someone on the street for some cases that support his argument, and in thirty seconds the person reels off some names. What are the chances that the lawyer is going to believe them and cite those cases in his submission?

Zero.

But when it is coming from an AI, it apparently flies under the radar of good sense and gets preapproval status. And remember, I will assume that this lawyer is at least smart enough to have passed the bar at some point and has faked it long enough to have been practicing law for three decades in some capacity. Shouldn't he be exactly the kind of person who is skeptical by nature?

What hope do the rest of us Average Joes have?

It is like how everyone believes they aren't influenced by advertising on the internet, even though the sheer scale of the advertising revenue model undermines that position - if ads didn't change behavior, no one would pay for them. We are all biased and therefore none of us are objective. Because of this, we are easily manipulated and nudged to act in ways we might not have otherwise - like a hypnotist's trick making a person cluck like a chicken in front of an audience.

It is not about intelligence, it is about human nature.

Taraz [ Gen1: Hive ]

Posted Using LeoFinance Alpha