Should we verify EVERYTHING or trust until something is proven false?

In this new world of deepfakes, what should be the default: skepticism unless something is cryptographically verified, or trust unless something is proven fake? And who should bear the burden and cost of verification—platforms, device makers, government, journalists, or everyday individuals?

EJB Subscriber

I agree with the Cold War concept of "trust but verify," JGunn. It works well on big issues and decisions. The problem today is that we encounter an ENDLESS STREAM of possibly fake images and stories. Personally, I don't do the social media thing, or doom-scroll much, so my exposure is limited, but even the relatively small amount of internet slop I experience is WAY too much to fully verify. So I am left with my own common sense.

If something seems extreme, I wait for multiple sources to agree. Sometimes I'll even research if the issue will significantly change my behavior or world view. But really...how do we trust ANY source of information? Most of them are influenced by a pretty small number of entities that may or may not be interested in actual truth at any given time. So I am ultimately left with common sense again.

What would be best would be a general public that knew the difference between "I bet that's true" and "FACT." A hard reach now that "Alternative Facts" are an accepted thing for much of the population.

Critical thinking is the only real defense against fakery, and our system has to establish that as bedrock for education. Whatever that may cost, I'd be happy to help pay.

JGunn Subscriber

Depends on what's at stake. Personally, having time on my hands (I'm a retired economist and statistician), I double-check the social media posts of my friends and family and let them know whether I see verification from a reliable source (e.g., a news organization with a long and consistent track record of fact-checking and publication of any errors in its reporting). Mostly that's just an exercise to remind people they should verify before they re-share A.I. slop, so it isn't strictly necessary.

But court cases, legal contracts, anything with financial or health consequences - everyone needs to "trust but verify" with the emphasis on "verify." Unfortunately we all have a tendency for confirmation bias and it takes concerted effort to override it until it becomes a habit.

Raghu Subscriber

I think the days of trusting something because it is on the web or in print are long over. With AI-generated fake videos looking so real, we need AI to tell us whether they are fake. I think it is a good policy to check the origin of any post, audio, or video on the web with trusted tools.
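Checking the origin of content with trusted tools ultimately rests on cryptographic verification, which the discussion's framing question also raises. Real provenance standards such as C2PA embed signed manifests inside the media file itself; as a minimal sketch of the underlying idea only, here is a checksum comparison in Python, assuming (hypothetically) that a publisher distributes a SHA-256 digest alongside its content. The function names are illustrative, not part of any real tool:

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of raw content bytes."""
    return hashlib.sha256(data).hexdigest()

def matches_published_checksum(data: bytes, published_hex: str) -> bool:
    """True if the content's digest equals the checksum the source published."""
    return sha256_digest(data) == published_hex.lower()

# The publisher computes and posts a checksum for the original file.
original = b"unaltered video bytes"
checksum = sha256_digest(original)

# Any alteration, however small, changes the digest completely.
tampered = b"unaltered video bytes!"

print(matches_published_checksum(original, checksum))  # True
print(matches_published_checksum(tampered, checksum))  # False
```

A bare checksum only proves the file matches what the publisher posted; it says nothing about whether the publisher is honest, which is why real provenance schemes add digital signatures tied to verified identities.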

Maridee Stanley Subscriber

Verify everything. Society can't survive without agreement on reality. Our disagreements over reality are already tearing society apart, and this trend will get worse as deep fakes improve and become easier to create. Everyday individuals should not be required to pay for this, but platforms and device makers should. If the government is using a potential deepfake artifact as evidence, it should bear the cost of establishing its validity.

Hermeneut Subscriber

Unlike so many previous new tech introductions, which had little or zero forethought about downsides, I think it's imperative we get ahead of this inevitable degradation for once. If the capacity to pass off deepfakes gets solidly entrenched, we will never get control of it or be rid of it.

Just for context, I was part of and present for such new techs starting with the Macintosh in 1983-4. Much benefit, of course, and also much degradation, especially when algorithms became profit tools in social media. Cf. the movie The Corporation.

Stan Subscriber

Already we [should] judge the validity of statements-of-fact with reference to the source as we understand it -- familiar news sources develop and maintain a reputation (based on their past history) for reliability and bias; unknown characters on social media don't have such a historical reputation, so we must skeptically weigh the new evidence based on our prior understanding of the world (and everyone should learn to do this!). In the judicial realm, we would do better to further develop (and rely on) logical scientific principles; but that seems to depend on citizens -- in this case judges and juries -- appreciating such a perspective.

Carol Subscriber

I agree with verifying everything. It is alarming to me how many people watch these deepfakes and believe them.

Dick200

With the avalanche of digital information available from minute to minute, it's essential to differentiate between what's authentic and bogus.

Personally, my "BS detector" is never turned off anymore.

Until something of an inflammatory nature is reliably verified, take it with a "grain of salt".

Treasrhunter

Add to this the feed issue: the giant algorithm-driven social media and ad sites that ensure we receive ads and content based on our history, so that deepfakes and false news become a cocoon of deceit woven by a web of liars to sway opinion. I know this isn't new to any of you, but the reality is that humans' capacity to think for themselves seems to be diminishing. And the absorption of this data by everyone, young and old, just adds to my theory that a new mega level of mind control has taken over a segment of our society. When people blindly follow their feed, in spite of the truth, our societies and cultures around the world are in serious trouble. The final blow is the instantaneous and constant barrage of this information. Rather than taking a moment to find the truth, we simply move on to the next viral thought pushed by this machine.

Forensics won't matter much in a world driven by dollars and deceit. That is already painfully obvious from simply looking at sites like Snopes that try to provide fact checks. They can't begin to keep up. The complexity of this issue is a real Pandora's box of such magnitude that we likely will never rein it in at this point.

NESS Subscriber

Verify everything !!!!!

It's a scary world when we can't trust our eyes.

poetry isn't truth Subscriber

we have to find a practical balance between verifying everything and waiting until someone else proves fakery. abraham lincoln said: "you can fool some of the people all of the time and all of the people some of the time, but you cannot fool all of the people all of the time." so we need to learn how to ascertain if something is a fake. we have to have laws on the books, because faked or altered videos are a form of libel or slander, depending on how they are used. we have to punish people who do things like this. we have to instill in our children that libel and slander are WRONG. tweaked videos are NOT FUNNY. we need to start thinking about where the wonder of AI will lead us. do we have the ability to pull AI's plug? petroleum engineering is a marvel, but we can see some of the problems that have arisen from it. we can see where unregulated stem cell research might lead if we are not careful: the same place as anthrax, which naturally occurs in soil wherever bovines graze, and which has been strengthened into a monstrous bioweapon. we need to think about what we are doing and where this might lead us. we need to be able to look at stuff and decide whether it might be true or false: an activity which might be referred to as "critical thinking".

so our default should be a form of skepticism, just not extreme skepticism.

John Menninga Subscriber

All our perceptions are shaped by our assumptions and biases. Our brains don’t just filter out irrelevant information — they also ignore what doesn’t fit our expectations.

Recognizing this is, I think, the first step.

From there, I think, in an age of AI and deepfakes, a healthy skepticism toward any published photo or video is warranted.

And always consider the source.

(Note that Trump’s campaign and ICE have already published AI-doctored images.)

And I do think, particularly in the news media, that both in-house and independent agencies dedicated to spotting and calling out AI-generated images are warranted.

Rebecca Subscriber

It would be optimal if everything could be verified. But that's impossible. I would like all content I receive to be verified. I don't have time to verify it myself, and I'm online to communicate with colleagues or friends and to learn from researchers and professional journalists. In other words, I'm online to learn, not to be played with.

Rogério A Profeta Subscriber

I have no doubt that verifying everything is impossible. Therefore, we should avoid sharing everything we receive unless we take care to verify the source.

jcat Subscriber

Without the availability of methods of truth verification and falsity detection, we cannot trust the internet, and our society falls prey to immoral agents of chaos, destruction and greed. I believe that successful research on verification and falsity versus reality detection is a necessary condition for the survival and advancement of civilization.

Me Subscriber

At this point I think that EVERYTHING needs to be assessed for truth and validity, because the web lets anything in, and even those who know most truths can still be led astray by falsehoods. We need to flood the internet with truths so it can't lead anyone astray.

Steff Roberts

Deepfakes are corrupting our sense of reality. There indeed need to be platforms that verify things posted as facts, because too many people are believing things that are negatively impacting our relationships with others, our government, and our sense of what is right and wrong.

Bob Coppock Subscriber

The costs should be borne by the creator of the deepfake. But creators of the technology should post a large bond that makes them liable for the use of their technology. On the other hand, evidence that isn't evidence can't be used as evidence.
