Students these days are terrible at sorting true facts from misinformation online and on social media, many studies show. But it’s not because students aren’t good at critical thinking, argues Mike Caulfield, a research scientist at the University of Washington’s Center for an Informed Public.
Instead, they just need a little bit of guidance on how to approach the flood of text, images and websites they encounter on a daily basis. And that guidance will only be more important as ChatGPT and other AI tools enter the mix.
Caulfield, along with Stanford University emeritus education professor Sam Wineburg, set out to create that guidance for students — and anyone struggling to cope with today’s information landscape. The result is the book “Verified: How to Think Straight, Get Duped Less, and Make Better Decisions About What to Believe Online.”
One problem that students — and, really, any of us — face, Caulfield argues, is that people often approach information they encounter online with the same strategies for telling fact from fiction that worked well in an earlier time, when most published material had undergone some level of vetting and verification.
“There wasn’t suddenly a massive decline in critical thinking,” Caulfield says. “People were just applying pre-internet approaches to information on the internet that weren’t really appropriate to it.”
EdSurge connected with Caulfield to talk about his strategies for managing today’s flood of information — and how new AI tools will impact efforts by educators to teach information literacy.
Listen to the episode on Apple Podcasts, Overcast, Spotify, Stitcher or wherever you listen to podcasts, or use the player on this page. Or read a partial transcript below, lightly edited for clarity.
EdSurge: In the book you argue that one of the most important things to do when sorting through information online is what you call “critical ignoring.” What is critical ignoring and why is that something you’re highlighting?
Mike Caulfield: One of the primary things you’re doing when you’re reading on the internet is deciding whether something is worth your attention or not. In fact, it’s probably the skill you apply the most, because the internet is relatively unfiltered. I mean, it’s filtered by algorithms and so forth, but compared to something like a traditional paper or a book, [it] is relatively unfiltered. You’re constantly making decisions about what to read and what not to, leafing through these sorts of things, and only a small fraction of things online are probably worth your attention.
In traditional models we’ve often taught students that the way you solve any problem is by giving it deep critical attention. And of course, this is disastrous on the internet. If, for example, a student sees something that’s Holocaust denialism, your advice can’t be, ‘Well, take an hour, engage deeply with this person’s arguments, follow the chains of thought, see what they’re citing.’ I mean, that’s horrible, horrible advice.
Instead, look up the person who wrote what you’re reading, and you can often immediately see, ‘Oh, well this person denies the Holocaust. This person is probably not worth my time.’
That’s really hard for academics to wrap their heads around — that the answer to every question is not just apply deep attention, but that attention is your limited resource.
Information is abundant. I have behind me right now on my bookshelf three or four years of reading if I spent nothing but my time reading, right? So information’s not the scarcity. Your attention is the scarcity, figuring out what to apply your attention to.
If there’s one thing we want to teach students, it’s how to better choose what to invest their attention and time in.
You have a lot of great metaphors in the book, and you argue that a problem is that people aren’t using the right kind of mental model to properly evaluate information online. How should people approach information online or in social media?
It’s a little more like the world of verbal rumor … where information is coming to you and you’re not quite sure what the origin is. And if you’re getting a rumor, if someone says, ‘Oh, did you hear that Bob is suspected of embezzling money?’ your first response is, ‘Where’d you hear that?’
But somehow on the internet, because it’s printed, because it looks so polished, everything has this sort of sheen of authority, people skip that step. So we show them how to do that on the internet through these various techniques and quick searches.
You’ve developed what you call the SIFT method for evaluating information online. What’s the elevator version of that?
The first thing is stop. Stop is a reminder that when you feel something is particularly compelling or interesting, to stop and ask yourself if you know what you’re looking at. And that distinction is important. A lot of people think we mean stop and figure out whether this thing is true or not. And for us, that’s not the first step. The first step is asking yourself, ‘Do I know what I’m looking at?’ That’s where most people go wrong. Most people think, ‘Oh, well, I’m looking at a local newspaper.’ And sometimes the truth is, no, actually it’s a partisan blog. Or they think ‘Oh, I’m looking at a recent photograph from 2023.’ And in reality it’s like, ‘No, you’re looking at a 2011 photograph, something that happened in Germany, not the U.S.’ So the first thing is stop and ask yourself, ‘Do I know what I’m looking at?’ ‘Do I know where it came from?’ ‘Do I know anything about this subject?’
The second is investigate the source. And we’re not talking here about Pulitzer Prize-winning investigations. We’re just talking about, ‘Is this a reporter or is this a comedian?’ Because that’s going to make a difference in how you interpret their breaking news item. ‘Is this a scholarly work? Is this something else? Is this person a conspiracy theorist? Is this person in what we call a position to know through expertise, through professional experience, through being a direct witness to something? Or is this a person that really has no better idea of the situation than you do, and it’s maybe not worth your time?’
If you are looking at that source and they’re not a substantially strong source, then we ask that you go find something else. One of the things we found with students is they often seem bound to the first source that they hit. And what we’re trying to do with the ‘F’ in SIFT, which is find better coverage, is ask you to step back for a second: if the thing that came to your doorstep is not really the best source, or a good enough source for you, go out and do a search. We show the techniques to find better information and get a source that is actually going to respect your time, that you can trust, that is in a position to know.
And then the final piece is trace — which means trace the claims, quotes and contexts to the original source. And this is not always necessary, but one of the things we often found was that students would see a tweet or a post or a TikTok that is citing some piece of information that’s supposedly authoritative. And they would just stop there, and they’d say, well, this says that The New York Times said X. And it’s like, well, you can’t actually do that. The person on TikTok saying, The New York Times said this, that’s not where you stop. You’ve got to go upstream. You’ve got to go find that article.
These days all the talk is about ChatGPT and other AI tools, and the regular internet is feeling like the old technology. How does AI change things?
A large language model (LLM) like ChatGPT isn’t thinking in any sense that we normally define thinking. For any given piece of text, including any question you ask, it’s putting together a model of the things that people would likely say in response to that text. And it is doing that in a statistical way. It’s like your phone’s autocomplete.
If you ask it something like, ‘What are the three reasons for the decline of the Roman Empire?’ it looks at ‘decline of the Roman Empire’ and ‘three reasons.’ And it comes up with some predictive text: Hey, in places where people are talking about the decline of the Roman Empire, and they’re talking about reasons and they use this word ‘three,’ what are some of the sorts of things that people say? And it just kind of does that on multiple levels. So it presents a pretty compelling answer. It can be good at summary, where there’s a lot of text to put together, a lot of text for it to pull from. But it has some flaws. And the biggest flaw is that it doesn’t really have communicative goals. It doesn’t really know what it’s saying. It’s not able to evaluate things in the way a human is.
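The autocomplete analogy above can be made concrete with a toy sketch: count which word tends to follow which in a corpus, then emit the most frequent continuation. This is a deliberately crude bigram counter over a made-up snippet of text; real LLMs use neural networks trained on vastly more data and context, but the underlying idea of statistical next-word prediction is the same.

```python
from collections import Counter, defaultdict

# A tiny, made-up corpus standing in for the web-scale text an LLM trains on.
corpus = (
    "the decline of the roman empire had three reasons "
    "the decline of the roman empire was gradual "
    "three reasons are often given for the decline"
).split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, like autocomplete."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))      # → "decline" (follows "the" most often here)
print(predict_next("decline"))  # → "of"
```

The model has no idea what the Roman Empire is; it only knows which words tend to co-occur, which is why its output can look fluent while having no communicative goal behind it.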
And there are a couple of things wrong with that. Without understanding the point of the thing that you’re doing, it can go astray. And that is not as big a problem for experts in a field, because if you’re an expert in something, you go to ChatGPT and you type something in, you can see pretty immediately, ‘Oh, actually this is a helpful summary.’ Or, ‘Oh, no, this has things wrong.’ But it’s not great for novices.
And that’s the problem. I think people have got this upside down. People think, ‘Oh, ChatGPT is going to help a novice be like an expert.’ But in reality, ChatGPT and LLMs are good for experts because they can see when this thing is clearly spouting out bull-.
One of the key points that we’ve made throughout the book is just because something looks authoritative is not enough. You have to ask, ‘Does this feel like it makes sense?’
ChatGPT makes it possible for anyone to look like they know what they’re talking about. And it gives a sort of surface that looks very impressive. And so it makes it all the more important that when you see something online that you not say, ‘Oh, is this a scholarly tone? Does this have footnotes?’ Those things are meaningless. Now in the world of LLMs, anybody can write something that looks authoritative and has all the features of authoritative texts without knowing what they’re talking about at all. And so you’ve got to go elsewhere. You’ve got to get off the page [to find out more about the source]. And I think it just makes these skills all the more pressing.