Misinformation Resources

As librarians, it is our duty to help our patrons discern fact from fiction by meeting the information needs of our diverse community, both online and in person. Each month, we provide guidance on how to identify fake news, find and analyze reliable sources, evaluate your own biases, and consult with experts and fact-checkers to support the reliability of your sources. Together, we can move beyond fake news and into a new era of information literacy for all.

It’s hard to believe that in just seven months, Americans will be lining up to cast their ballots in another presidential election. As exciting as elections are, we know that staying informed during this time can feel exceptionally overwhelming. Inundated with a constant stream of headlines, social media posts, and competing narratives, our brains can struggle to process and analyze information, and our ability to distinguish fact from fiction suffers. Thankfully, organizations like the News Literacy Project (NLP) are here to help the public, educators, and the media become more news-literate and push back against misinformation.

The NLP is a nonpartisan American education nonprofit dedicated to building a nation equipped with critical thinking skills for the digital age. Founded in 2008 by Pulitzer Prize winner Alan C. Miller, the NLP offers a range of resources for educators, students, and the general public. Its mission is to empower individuals to become discerning consumers of information. Through workshops, online resources, and even a mobile app, the NLP teaches users how to:

  • Identify credible sources: The NLP equips users with tools to assess the legitimacy of news outlets and social media accounts.
  • Recognize bias: News reporting is never entirely objective. The NLP helps users identify and understand the perspectives influencing a story.
  • Spot misinformation and disinformation: The spread of false information online is a serious problem. The NLP teaches users how to identify misleading content and avoid sharing it.

The NLP emphasizes that a healthy democracy is built upon a foundation of informed citizens. By equipping individuals with news literacy skills, it aims to foster a more engaged and critical populace. Listed below are some of the NLP’s key resources:

  • Checkology: A free e-learning platform with engaging, authoritative lessons on subjects like news media bias, misinformation, conspiratorial thinking, and more. Learners develop the ability to identify credible information, seek out reliable sources, and apply critical thinking skills to separate fact-based content from falsehoods.
  • Get Smart About News: A free weekly newsletter that offers a rundown of the latest topics in news literacy including trends and issues in misinformation, social media, artificial intelligence, journalism, and press freedom.
  • RumorGuard: A viral rumor fact checker that offers concrete tips to help you build your news literacy foundation and confidently evaluate claims you see online.
  • Is that a fact?: A podcast that informs listeners about news literacy issues that affect their lives through conversations with journalists and other experts across a wide range of disciplines.
  • News literacy tips, tools, and quizzes: Test and sharpen your news literacy skills with short activities, engaging quizzes, and shareable graphics for learners of all ages.

With the ever-changing media landscape, news literacy has become an essential skill for everyone. The News Literacy Project is a valuable resource for those seeking to navigate the complexities of the information age and become more informed citizens.

A few months ago, we discussed some of the challenges presented by AI-generated content on social media, particularly in the context of misinformation and the many ways in which it is disseminated online. Fast forward to February 2024, and current events have proven that AI is not a power that can continue to go unchecked. In late January, the social media platform X, formerly Twitter, was forced to block searches for Taylor Swift after explicit AI-generated images began circulating on the site. The photos were removed, but not before they were seen by tens of millions of X users. This troubling incident prompted a bipartisan group of United States senators to introduce a bill that would criminalize the spread of nonconsensual images generated by AI, but for the time being, there is no federal law regulating the production and distribution of AI-generated images.

Though the proliferation of AI images has always raised legal and ethical concerns, the problem is now growing faster than our ability to solve it. While lawmakers have long anticipated that AI images would complicate modern notions of privacy and intellectual property, far less attention was paid to informed consent. Addressing these concerns requires a multifaceted approach involving collaboration among policymakers, IT specialists, ethicists, and other stakeholders to establish guidelines and regulations that balance innovation with societal well-being and individual rights. But if recent events have taught us anything, it is that we are not without power in this fight. Ultimately, it was not lawmakers or big tech who protected Taylor Swift, but her fans. X only blocked the account responsible and suspended searches for Taylor after Swifties mass-reported the images.

So, what can we do? Most importantly, we need to learn how to identify AI-generated images. This has become, and will continue to become, more challenging as the technology advances. However, there are some signs and techniques that can help us recognize them. First, AI-generated images may contain inconsistencies or implausible details that are not immediately noticeable. Look for abnormalities in lighting, shadows, reflections, or proportions. AI models often struggle to generate realistic poses or perspectives, so check for odd or physically implausible positions of objects or people in the image. AI-generated images may also exhibit artifacts or distortions, especially around the edges of objects or in areas of complex detail. Pay attention to unusual pixelation, blurriness, or irregularities.

While many of these details can be tricky to spot, others are more obvious. Some AI-generated images carry watermarks or signatures left by the algorithms that generated them, so check for any unusual markings that may indicate AI manipulation. There are also online tools and services specifically designed to detect AI-generated content. Some platforms, such as Is it AI? and Illuminarty, use advanced algorithms to analyze images and identify patterns associated with AI generation. Not all AI images are created and circulated with malicious intent. In fact, AI images can be quite fun to create and share, but if you ever come across photos that appear nonconsensual or explicit, it is important that you report them and encourage others to do so as well. If we all take a page out of the Swiftie playbook and work as a team to protect one another, we can contribute to the larger effort to keep AI in check.

Staying informed about political events during the age of information is crucial, especially during election seasons. While this may seem like a fairly simple task, the proliferation of social media has made it much more difficult to distinguish between credible news and misinformation online. As the 2024 election approaches, it is important that we arm ourselves with the tools necessary to navigate the information landscape effectively and avoid falling victim to fake news. To that end, we thought the start of the new year would be a great time to brush up on some of our digital literacy skills.

The first step in avoiding fake news is to verify the credibility of your information sources. Stick to well-established and reputable news outlets that adhere to journalistic standards, and cross-reference information by checking multiple sources to ensure accuracy and objectivity. Be cautious of websites or platforms with a history of spreading misinformation, and use fact-checking tools to verify the accuracy of claims and statements. Websites like FactCheck.org, PolitiFact, NPR Fact Check, and Snopes specialize in debunking false information. Plug-in tools for web browsers such as Media Bias Fact Check can also help flag potentially misleading content by providing fact-checking insights as you browse.

Listed below are some extra tips to help you spot mis- and dis-information on your social media feeds.

  • Be skeptical of headlines

Fake news often relies on attention-grabbing headlines to lure readers in. Before sharing or believing a story, read beyond the headline and delve into the content. Misleading headlines can misrepresent the actual information contained within an article, so always read the full piece to understand the context.

  • Pay attention to language and tone

Legitimate news sources maintain a neutral and objective tone, presenting facts without sensationalism. If an article is overly emotional, contains inflammatory language, or uses excessive capitalization or punctuation, it may be a sign of biased or fake news.

  • Don’t forget to fact check photos and videos

In the digital age, images and videos can be easily manipulated to convey false information. Before sharing multimedia content related to the election, verify its authenticity. Reverse image searches and video analysis tools can help determine if visuals have been altered or taken out of context.

  • Beware of social media echo chambers

Social media platforms play a significant role in shaping public opinion, but they can also contribute to the spread of fake news. Be aware of the potential for echo chambers, where individuals are exposed to information that aligns with their existing beliefs. Diversify your sources and engage with a variety of perspectives to avoid confirmation bias.

  • Learn to spot misinformation in the wild

Understanding common tactics used to spread misinformation can enhance your ability to identify fake news. Look out for misleading statistics, out-of-context quotes, and the use of anonymous or unverifiable sources. Being aware of these tactics will empower you to critically evaluate the information you encounter.

As the 2024 election approaches, the importance of navigating the information landscape responsibly cannot be overstated. By adopting a critical mindset, verifying sources, and utilizing fact-checking tools, individuals can play a proactive role in combating fake news. Staying informed is a civic duty, but doing so responsibly ensures that the information we consume contributes to a more informed and resilient democratic society. The work is tedious, but every scrap of misinformation that we identify and dispose of is another step in safeguarding our democracy. Always remember that our librarians are here to help you every step of the way!

The Encyclopedia of Misinformation: A Compendium of Imitations, Spoofs, Delusions, Simulations, Counterfeits, Impostors, Illusions, Confabulations, Skullduggery, Frauds, Pseudoscience, Propaganda, Hoaxes, Flimflam, Pranks, Hornswoggle, Conspiracies & Miscellaneous Fakery by Rex Sorgatz

Happy Holidays from Otis Library! To celebrate, we thought it might be fun to take a break from the doom and gloom and offer up a more lighthearted book spotlight. The Encyclopedia of Misinformation [insert longest subtitle of all time] by Rex Sorgatz is an exhaustive historical compendium of deception and delusion. In it, Sorgatz explores heavy topics such as propaganda, hoaxes, conspiracy theories, pseudoscience, deepfakes, fanfiction, and gaslighting in a playful and humorous way. Using a series of vignettes to illustrate his points, Sorgatz raises important questions about the nature of truth and deception in the modern world. From the world of knockoff handbags to Slender Man and something called the “infinite monkey theorem,” Sorgatz demonstrates just how pervasive misinformation can be and encourages readers to be critical of all the information we encounter, even that which seems innocuous.

The Encyclopedia of Misinformation is available for free on Hoopla! Hoopla is a library media streaming platform made accessible to you by Otis Library. Simply download the Hoopla app, set up a free account using your Otis Library card, and enjoy a wide selection of ebooks, audiobooks, movies, TV shows, and more! Looking to learn more about how you can fight misinformation? Check out the series Finding Misinformation: Digital Media Literacy by The Great Courses, also available on Hoopla!

Over the course of the last several years, Artificial Intelligence (AI) has revolutionized various aspects of our lives, from enhancing healthcare and improving transportation systems to personalizing our online experiences. However, this technological marvel has a dark side. When ChatGPT was launched to the public in November of 2022, AI became freely and openly available to anyone with an internet connection, creating unprecedented obstacles for educators on a global scale. Not only can AI discourage creativity and critical thinking, it can also be a powerful tool for spreading misinformation, amplifying rumors, and even manipulating public opinion. In this article, we will explore how AI can be employed to disseminate false information and the challenges it poses for society.

One of the key ways AI spreads misinformation is through its amplification effect. Social media platforms, search engines, and recommendation algorithms use AI to analyze user behavior and deliver personalized content. This technology is incredibly efficient at keeping users engaged and maximizing the time they spend on these platforms. However, that same efficiency can be exploited to propagate false information. Misinformation often spreads faster and wider than true information, mainly because sensational or false claims grab more attention. AI algorithms, designed to maximize engagement, may prioritize content that is emotionally charged or controversial, inadvertently amplifying fake news, conspiracy theories, and biased narratives. And while these algorithms are designed to be objective, they can still propagate bias. These biases can be exploited to promote certain ideologies or worldviews, and when algorithms favor certain types of content or sources over others, they create echo chambers that reinforce existing beliefs and filter out alternative viewpoints.

AI-driven automated bots are another potent tool for spreading misinformation. These bots can generate and disseminate vast amounts of content on social media platforms, making it appear as if there is significant public support for a particular narrative or belief. They can create the illusion of a groundswell of public opinion when, in reality, it may be driven by a small group of actors. In addition, deep learning technology has given rise to deepfakes and synthetic media, which can be used to create convincing fake videos, audio recordings, or text. These AI-generated forgeries can deceive the public by making it seem as though real individuals are endorsing or spreading false information, further eroding trust in the authenticity of media and information sources.

Addressing the issue of AI-driven misinformation is a complex challenge, but it is essential for safeguarding the integrity of information ecosystems. Listed below are some ways that we can combat the spread of misinformation through AI.

Fact-checking and verification: Shake the dust off of your fact-checking skills and stay consistent. AI-generated content can be very difficult to detect, and the technology itself is nowhere near foolproof. Bard, an AI chatbot developed by the tech giant Google, is notorious for turning out incorrect information. If you decide to experiment with AI, be sure to verify facts and key concepts to avoid spreading misinformation.

Support digital media literacy: Take advantage of media literacy programs that educate the public on how to critically evaluate information sources and encourage and support independent fact-checking organizations that aim to identify and debunk false information. Promoting digital literacy can empower individuals to discern between credible and unreliable sources and help them recognize common tactics used in the spread of misinformation.

Utilize reporting mechanisms: Keep reporting false or misleading content to social media platforms. A quick response to reports can help mitigate the spread of misinformation.

Advocate for responsible use of AI and algorithmic transparency: AI developers and users must adhere to ethical guidelines and principles, and stricter regulations may be needed to ensure the responsible use of AI technology. Put pressure on your representatives to implement privacy legislation and ensure that tech companies strive to make their AI algorithms more transparent and accountable. If you have concerns about the use of AI, contact your legislators and let them know.

AI is a double-edged sword, capable of both incredible advancements and destructive consequences. While it has the potential to revolutionize various industries, it also presents significant challenges, particularly in the context of misinformation. Addressing the issue of AI-driven misinformation requires a collaborative effort involving tech companies, policymakers, educators, and the public. By understanding the mechanisms through which AI propagates false information and taking proactive steps to mitigate these risks, we can hope to preserve the integrity of our information ecosystems in the digital age.

By the way, we used AI to write parts of this article. Could you tell the difference?

Some social media users live for a good debate, and that isn’t necessarily a bad thing. Sure, we don’t recommend that you spend all of your free time arguing with strangers on the internet, but there’s nothing wrong with a little civil virtual discourse from time to time. After all, calling out mis- and dis-information when you see it is an important responsibility of any digital citizen. But how does one go about “winning” an argument online, and when is it appropriate to stand down? Listed below are five simple tips to help you set boundaries and avoid frustration when engaging in debates on social media.

1.)    Empathize – Try to understand where your opponent is coming from and calmly articulate your own reasoning. If you are familiar with the person, try to recall any details that you know about them that may help you to determine how their opinion has formed over time and make an effort to explain how events in your own life have influenced your beliefs. This can help to establish common ground and eliminate some tension from the conversation.

2.)    Check your mood – Debating when you’re in a bad mood is never a good idea. Not only will your ability to effectively communicate your opinions be diminished, but you will also be more likely to say things from a place of anger, rather than a place of reason. If you’re not feeling your best mentally, try to refrain from getting behind a keyboard.

3.)    Don’t rush – Take your time when formulating a response. Rushing your responses can lead to errors, and you may even make the mistake of spreading misinformation. Slow down and take the time you need to do proper research and organize your thoughts. Hasty responses are rarely effective.

4.)    Set limits – If you don’t feel as though you’re getting anywhere with your opponent after 3 or 4 replies, you may want to think about standing down. Set reasonable boundaries for yourself and stick to them. Identifying misinformation is important, but mental health comes first.

5.)    Watch the language – Pay attention to your opponent’s language to gauge whether or not they are going to be open to hearing new ideas. It is perfectly okay to walk away from a debate before it begins. Your time is valuable and best spent participating in constructive debates, not petty arguments.

“The USPS package arrived at the warehouse but could not be delivered due to incomplete address information. Please confirm your address in the link. The USPS team wishes you a wonderful day!”

Have you ever gotten a message like this? Typically, the “notice” is accompanied by a link to what appears to be the United States Postal Service website. The website prompts users to enter personal information like name, home address, and phone number. It then asks for a small fee for redelivery, followed by a request for credit card information. While the website may look legitimate, this type of message is almost always a scam. Unfortunately, millions of citizens fall victim to these types of schemes every year. The question is, how do they keep getting away with it?

The answer is very simple: dependability. By this point, many of us have learned how to identify and avoid getting scammed online. You may even feel confident that it could never happen to you. But what happens when the message comes from an organization that you trust? What happens when the URL appears to be legitimate, and the website appears identical to that of the real United States Postal Service? What if you have an important package in transit and don’t want to risk losing it, so you decide to pay the $6 redelivery fee, because surely one of the most dependable government agencies wouldn’t try to swindle you? Unfortunately, criminals have gotten exceptionally skilled at replicating content, meaning that they can design a website that looks and functions very similarly to legitimate websites that they know people trust.

This is an example of something called “smishing”: the practice of sending fraudulent text messages pretending to be from a reputable company or institution in order to persuade individuals to reveal personal and financial information. We can think of smishing as a form of imposter content, or disinformation that is created and presented using the branding of an established news agency or government institution. Imposter content is a very effective way of spreading disinformation. Take the New York Times, for example. The New York Times is widely considered to be one of the country’s most trusted news sources. Considering that many individuals use the New York Times to fact-check, it comes as no surprise that most readers don’t feel the need to verify information published on the NYT website. This means that if a well-executed imposter site pops up and begins spreading falsehoods, readers might not even notice that they aren’t reading a trusted source.

Even the most cautious internet user is at risk of falling for imposter content. When visiting websites (especially those that you have navigated to from an outside link), be sure to take a look at the branding. Does the logo seem distorted? Are the colors the same? Also pay attention to the content. If it is a news site, do you notice any sensational headlines or photos that appear to have been manipulated? Always do a quick fact check if you are looking at a screenshot of a news article. Content creators will often insert screenshots into their videos, and these screenshots can easily be doctored to make them appear to come from a website like CNN or the New York Times, so it is important to verify that they are legitimate. Finally, never enter your personal information on a website that you did not navigate to directly (i.e., do not follow a link from an unverified text message). If in doubt, try entering some of the language from the text or email that you received into a Google search. If the scam has already been reported, it will more than likely show up in your search results. As always, don’t hesitate to contact your local librarians with any questions!

The Fight for Privacy: Protecting Dignity, Identity, and Love in the Digital Age

Danielle Keats Citron

W. W. Norton & Co.

“Intimate privacy is a precondition to a life of meaning. It captures the privacy that we want, expect, and deserve at different times in different contexts. At its core, it is a moral concept.”

Danielle Keats Citron, 2022

This month we will be taking a short break from mis- and dis-information content to focus on internet privacy with a new book spotlight! Over the course of the last three decades, stunning technological advancements have muddled the modern definition of privacy, prompting experts to reimagine and redefine what privacy means in a world where a person’s entire life story can be pieced together using digitally collected data alone. And it’s not just the information that we post consciously. Data is harvested from every app we download and every website we visit to be used by corporations to better target their ads. Now, the ethical implications of this expansive digital surveillance system have been called into question yet again with Danielle Keats Citron’s 2022 book, The Fight for Privacy: Protecting Dignity, Identity, and Love in the Digital Age.

Citron, a professor of law and privacy expert, focuses on a concept that she calls “intimate privacy.” Intimate privacy, she explains, refers to the social norms that protect the more intimate aspects of our lives, such as our health, gender, close relationships, and sex. The invasion of this privacy, Citron argues, can stunt the development of our self-identity and threaten our intimate relationships. As dystopian as all of this sounds, Citron believes that through advocacy, we will eventually be able to strike a balance that allows us to enjoy the benefits of technology while keeping our intimate privacy protected. The tricky part is that this balance hinges upon a collective effort to convince lawmakers that intimate privacy is a civil right that should be guaranteed to all citizens. As is the case with mis- and dis-information, the responsibility belongs to us.

“This is a terrific, though terrifying, exposé about how often our intimate activities and intimate information end up on social media. Professor Danielle Keats Citron makes a compelling case for a ‘right to intimate privacy’ under the law. This beautifully written book deserves a wide audience and hopefully will inspire needed meaningful change in the law.”

Erwin Chemerinsky

University of California, Berkeley School of Law

Visit our online catalog to reserve a copy of The Fight for Privacy.

“The emotional appeal of a conspiracy theory is in its simplicity. It explains away complex phenomena, accounts for chance and accidents, offers the believer the satisfying sense of having special, privileged access to the truth.”

Anne Applebaum, 2020

Twilight of Democracy

The faked moon landing, chemtrails, the death of Princess Diana, a chamber of secrets hidden behind Mount Rushmore, aliens in Area 51. Do any of these tales sound familiar? You probably recognize them as popular conspiracy theories, or conjectures that imply that an organization of influence is responsible for a major event or phenomenon. Most of us have encountered a conspiracy theory at least once in our lives, whether it be in jest or in one of the shadier corners of social media. But while we might understand what a conspiracy theory is, knowing what the term means is just the tip of the iceberg. The bigger, more consequential question is: how does a conspiracy theory come to be, and how can we use that knowledge to identify conspiracies before they gain traction online?

In June of 2020, tens of thousands of social media users were exposed to the sensational claim that Wayfair, a popular furniture and home goods firm, was using high-priced cabinets as a front for a child-trafficking ring. According to the claim, Wayfair had been using cabinets priced in the thousands to sell children directly from their website. Screenshots from the Wayfair website showed a description of the furniture pieces, most of which were given human names such as Samiyah, Annabelle, and Allison. Some of the cabinets were priced as high as ten thousand dollars. News of the alleged human trafficking ring, which originated in the QAnon community, quickly spread through lifestyle influencer and parenting communities before breaking into the mainstream. For the first few days, the internet was divided. Some believed the theory to be plausible, while others felt that it was too sensational to be true. Wayfair denied the allegations, explaining that the price of their industrial cabinets was comparable with other retailers, as was their decision to assign the furniture human names. Still, the Wayfair conspiracy theory has endured. Why?

In her book, Twilight of Democracy: The Seductive Lure of Authoritarianism, Anne Applebaum points to the inherent simplicity of the conspiracy theory to answer this question. Humans create outlandish theories to explain away things that they don’t understand or can’t come to terms with, such as the death of a beloved celebrity or a natural disaster. In other cases, such as that of “Wayfairgate,” conspiracy theories are born out of social isolation. Believers of the Wayfair conspiracy claimed that exposing the company was up to “the people” because the mainstream media was pushing it under the rug. This widespread amateur investigation gave people a sense of purpose and created the illusion that they were part of an inside operation to bring down the alleged perpetrators. The recent influx of misinformation that now circulates online has only exacerbated the intensity of conspiracy theories, many of which pose a real threat to democratic institutions and society as a whole.

The good news: conspiracy theories are fairly easy to identify because of their shock value. Take in the claim and sit with it for a while. Does the claim seem realistic, or does it feel hyperbolic? Is there any solid evidence to back it up? Where did it originate? Who are the people or groups disseminating the claim? Do they have any credentials? Dust off your fact-checking skills and seek credible sources. If you can, share what you find with your friends and followers on social media. Sometimes, a simplified, reasonable explanation of an event is all it takes to quell the spread of conspiracy theories, but be aware that some people will simply not be convinced that a conspiracy theory is a farce. In these cases, it is best not to engage. Instead, focus your energy on getting ahead of misinformation through reason and research.

Earlier this spring, TikTok user Allie Casazza made a bold claim to her nearly 13,000 followers: “I just cured my daughter’s strep throat homeopathically.” Casazza, an influential blogger and self-proclaimed “mom coach,” went on to say that she was able to cure her teenager’s severe case of strep throat with potato juice. Yes, you read that correctly: potato juice. Casazza, who has no medical background, is not the first TikToker to assert that potato juice can “cure” serious illnesses, including liver disease, stomach ulcers, and arthritis. As you may have already guessed, there is no substantial evidence that potatoes treat strep throat, a bacterial infection that, if left untreated, can lead to kidney inflammation or rheumatic fever, a serious condition that can cause permanent damage to the heart.

Casazza’s video quickly caught the attention of outraged healthcare professionals, who took to TikTok to express how dangerous Casazza’s claim was and demand that the video be removed from the platform. The consensus among TikTokers in the medical field was overwhelming: failure to administer antibiotics to children with Strep A can have life-threatening consequences, and under no circumstances is potato juice a suitable substitute. This is not the first time that medical misinformation has circulated on TikTok and Instagram, and it certainly won’t be the last. If you think all the way back to November’s article on confirmation bias, you may recall how exaggerated claims often begin in isolated online communities. This is exacerbated by TikTok’s strikingly accurate algorithm, which continuously feeds users content of a similar nature. These information bubbles can become so intense that members quickly begin to reject any piece of contradictory evidence and will block users that challenge them. It’s a fast track to radicalization, and it’s happening every day.

In 2013, a Pew Research Center survey revealed that 6 out of 10 Americans turn to the internet to find the cause of their own medical conditions. To make matters worse, 35 percent of those who found a diagnosis online did not consult with a medical professional for confirmation. Those figures are undoubtedly higher in 2023. At this stage, it is unrealistic to expect Americans to simply stop using the internet to find information about their health, but we should exercise caution, especially on social media, where Western medicine is often disparaged by users who favor a homeopathic lifestyle. Listed below are some simple ways that you can use the internet responsibly to make informed decisions about your health.

Be sure to cross-check your information

You can put those self-guided research skills to work and search for peer-reviewed literature, but always be sure to consult with a medical professional (or several) before taking or refusing any new medications or supplements.

Evaluate your own digital community

Have you fallen into an information bubble? Are prominent members of your community (influencers, popular TikTok creators, etc.) legitimate? If a content creator claims to be a doctor, be sure to Google their name and verify that they are licensed and affiliated with an accredited medical institution. Venture outside of your comfort zone and evaluate what others are saying around you.

Be on the lookout for profit motives

Most popular content creators are going to try to sell you something at least once. This year, the brand Bloom Nutrition has exploded on TikTok and Instagram, with creators on all sides of TikTok promoting its Greens & Superfoods powder. The brand claims that the powder aids digestion, decreases bloating, and boosts immunity, but those claims lack clinical research support and proven results. TikTok creators get paid to promote the product, the brand sees increased sales, and the consumer is left with a product that doesn’t necessarily harm them but doesn’t help them either.

Foolproof
Sander van der Linden
W.W. Norton & Company, 2023

Last month, social psychologist Sander van der Linden’s book Foolproof joined the Otis Library’s nonfiction collection. Foolproof, the culmination of decades of research and lived experience, aims to tackle the misinformation phenomenon at the root. To do so, van der Linden asks questions such as: Why is the human brain so vulnerable to misinformation? How does it spread throughout online communities? And, most importantly, how can we stop it? Likening the spread of misinformation to a virus, van der Linden argues that we must shift our focus from what we may think of as antiviral remedies, such as fact-checking, to inoculation. Drawing upon basic principles of psychology, he provides readers with practical defense strategies and tools for fighting the spread of misinformation.

“An optimist and fine writer, van der Linden, professor of social psychology at the University of Cambridge and an expert on human belief systems, explains why humans accept something as true or false and how they can fend off misinformation. He begins by explaining why we are susceptible, discusses how falsehoods persist, and then explains how to inoculate ourselves and others. Insightful, convincing, instructive reading.”

Kirkus, 2023

Visit our online catalog to reserve a copy of Foolproof.

When we started this series in September, we did so with the intention of providing our patrons with the knowledge and tools necessary to combat the spread of mis- and dis-information in our online communities. So far, we have talked about how to identify different “fake news” tactics, avoid confirmation bias, engage in meaningful self-guided learning, navigate specialized literature, and spot mis- and dis-information on social media. Nevertheless, we have yet to answer two important questions: what is the difference between misinformation and disinformation, and why does it matter? While the terms may seem synonymous, they refer to two distinct phenomena, both of which should be acknowledged individually in conversations about false or deceptive information.

The main distinction between misinformation and disinformation is intention. Simply put, misinformation is false or inaccurate information that is spread unintentionally. In most cases, the sharing of this information is accidental. Disinformation, on the other hand, is false or misleading information that is spread deliberately and often covertly. The spread of misinformation sounds far less nefarious than that of disinformation, but it is important to remember that sharing false information, whether it is intentional or not, is always a dangerous game. Both misinformation and disinformation cause division and hostility, undermine democracy, and distract the public from having constructive, meaningful conversations about current social issues. Together, they have also diminished public confidence in experts and critically wounded our collective ability to accept truth.

The sharing of misinformation may be more innocent in nature, but it is no less damaging. The truth is, everyone has helped to spread false information at some point during their lives, both online and in person. Mistakes are an inconvenient part of the human condition, but that doesn’t mean that we shouldn’t make every effort to avoid them. The most effective way to do that is to be very cautious with what you choose to share online. Even the smallest details matter, and it is our responsibility as internet users to do the work. We have infinite knowledge at our fingertips – let’s use it to move forward, not back.

See what countries around the world are doing to stop the spread of misinformation: https://www.poynter.org/ifcn/anti-misinformation-actions/#us

Last month, we talked about the dissemination of information on YouTube. In March, it is time to turn our attention to one of the world’s most popular social media platforms: Facebook. If you think back to 2016, you might remember Facebook coming under fire for the role it played in the presidential election, running Russian-sponsored ads that sought to fuel social conflict and manipulate political discourse in the United States. Facebook CEO Mark Zuckerberg quickly dismissed the allegations and continued to deny that the platform played any part in the 2016 election until April of the following year. The data, however, is undeniable. In the few months preceding the election, the 20 most popular “fake news” articles on Facebook garnered 8.7 million engagements. By contrast, the 20 most popular “hard news” articles amassed only 7.3 million engagements.

Even today, with the better part of a decade behind us, mis- and dis-information continue to thrive in the environments created by Facebook, Twitter, and YouTube. The 2016 election emphasized how deeply below the surface of our newsfeeds the roots of false information have grown. Though Facebook has since launched initiatives to stop the spread of misinformation and false news on their platform, the damage has already been done. For as long as humans continue to connect and share ideas on social media, mis- and dis-information will be planted and grow. It will blossom and reproduce like weeds, the likes of which cannot be contained by one person, team, or institution alone. As a collective, we must recognize and accept that our world will never be free of mis- and dis-information, but that does not mean that we should simply scroll by and watch as it flourishes. Listed below are three simple but effective actions that you can take to curb the spread of false information on Facebook. 

1.)    Call it when you see it (but not always)

Calling people out for posting false information on social media is a slippery slope. Chances are, you already have a pretty good idea of which of your Facebook friends will be receptive to new or different information. If someone has a history of dismissing authoritative evidence on social media, you may want to consider avoiding engagement. That may sound contradictory, but offering information to people who consistently resist or respond negatively to it can actually fuel the spread of false or misleading information. Perhaps even more importantly, regularly engaging in contentious conversations online can affect your mental health. If your gut tells you that your comment won’t make a positive impact, it may be best to step away. Alternatively, you can anonymously report the post for false information, or even share and debunk it on your personal page.

2.)    Provide evidence

If you’re going to attempt to tackle mis- or dis-information spread on Facebook, be sure to provide credible evidence. Ideally, that information should come from reputable fact-checking resources such as PolitiFact, FactCheck.org, NPR Fact Check, Snopes, and the Washington Post Fact Checker. Check out some of our previous articles if you’re interested in learning more about how to find and utilize open-access resources!

3.)    Be wary of ads

Just because it looks reputable doesn’t mean it is. Any Facebook user can create a business page and pay to run ads that they have created. Although Facebook is supposed to review ads before posting them, the process has proven faulty at best. In 2022, Facebook and TikTok were accused of approving ads containing “blatant misinformation” about the midterm elections and the voting process. Be sure to fact-check ads the same way that you would regular posts.

On May 4, 2020, an obscure filmmaker named Mikki Willis uploaded a 26-minute video to YouTube, Vimeo, and Facebook. It was the low-budget film “Plandemic,” and by May 7, it seemed the entire world had seen it. Within the short span of three days, “Plandemic: The Hidden Agenda Behind COVID-19” had gone mainstream, with dire repercussions. Mis- and dis-information related to COVID-19 had been circulating online long before the virus touched down in North America, but “Plandemic” marked the beginning of a new age of health misinformation on social media. As the COVID-19 death toll in the United States approached 100,000, virologist Judy Mikovits convinced a startlingly large number of Americans that a scheming cabal of political and medical elites was using the pandemic and subsequent vaccine for financial gain. Mikovits, a little-known researcher who has since been discredited, became the embodiment of the reinvigorated anti-vaccine movement.

The video amassed over 8 million views within the first week of its release, despite efforts by YouTube and other social media platforms to take it down due to inaccuracies. As soon as one copy was removed, another would pop up. “Plandemic” was not the only video containing medical misinformation circulating on YouTube that spring. Between April and June of 2020, YouTube removed over 100,000 videos containing misinformation. According to YouTube’s community guidelines, the company does not allow “misleading or deceptive content that poses a serious risk of egregious harm.” In addition to violent, dangerous, and sensitive content, YouTube restricts content containing misinformation related to health and technology, as well as any content that may interfere with democratic processes.

Though the platform has always had a removal system for content that does not meet community guidelines, the “Plandemic” phenomenon prompted YouTube to ramp up efforts to combat the spread of mis- and dis-information. These guidelines, however, are far from perfect. Because YouTube is intent on enforcing its guidelines “equally for everyone,” it regularly removes content that aims to stop the spread of mis- and dis-information by debunking videos like “Plandemic.” In addition, the removal of misleading or deceptive content often escalates the spread of misinformation because it reinforces the belief that institutions like Big Pharma and Big Tech have a vested interest in gatekeeping valid information and concealing the “truth” from the general public.

Knowing this information is integral to our ability to use platforms like YouTube responsibly. Awareness is key. Listed below are some tips to help you stop the spread of misinformation on social media.

1.) Review YouTube’s community guidelines to learn what types of content they deem unsuitable for their platform and why
YouTube often relies on users to anonymously report or flag content that violates community guidelines. Flagged videos are reviewed and removed if deemed inappropriate. Reporting videos that contain mis- and dis-information is a simple and effective way to stop the spread.

2.) Step away from the share button
Sharing content on social media without determining that the information therein is valid is one of the easiest ways to accidentally share misinformation online. Be sure to fact check the information presented in YouTube videos the same way that you would with a written article.

3.) Beware of logical fallacies and vet your experts
The makers of “Plandemic” presented the film in such a way that it felt real, and they relied on Mikovits, as a figure of authority, to appeal to viewers’ sense of reason. Mikovits was quickly discredited, but not before she influenced thousands of people. Ask yourself these questions: Are there any conflicts of interest? Has the expert ever had a scientific paper retracted, and why? Are their research papers peer-reviewed?

4.) Fact check, fact check, fact check
Our brains interpret audiovisual content very differently than they do written content. It is important to remember that videos uploaded to platforms such as YouTube and Vimeo are not always trustworthy and must be validated. Take note of the content creator’s main points and fact check them on reputable websites such as FactCheck.org or NPR Fact Check.

“I must recognize that in the same situation, I could have been either a killer or an evader – both were human – if I want to understand and explain the behavior of both as best I can.”

Christopher Browning, 1992
Ordinary Men

In 1992, esteemed Holocaust historian Christopher Browning published his seminal work, Ordinary Men: Reserve Police Battalion 101 and the Final Solution in Poland. In it, Browning documents the formation and activities of Reserve Police Battalion 101 of the German Order Police, an ancillary of the German Schutzstaffel (SS). From 1942 to 1943, it is estimated that members of Battalion 101, a group of middle-aged reserve policemen, were responsible for the deaths of over 83,000 European Jews. Browning was conducting research in the small German town of Ludwigsburg when he came across an indictment concerning Battalion 101: an expansive collection of testimonies gathered by the Office of the State Prosecutor in Hamburg beginning in the 1960s. The testimonies, Browning explained, were uniquely candid, propelling him into an emotional space that he had yet to explore. “Never before had I seen the monstrous deeds of the Holocaust so starkly juxtaposed with the human faces of the killers.”

Perhaps the most damaging myth about history is that it is an objective practice. The key to reading historical literature, or what historians refer to as historiography, is to understand that history is never entirely objective. After months spent poring over the grim testimonies, Browning concluded that the members of Reserve Battalion 101, a wary assemblage of working-class husbands, fathers, and sons, did not kill because they were violent anti-Semites; rather, they killed out of obedience to authority and immense pressure to conform to the larger group. Browning’s recognition that, in this situation, he could have been either an evader or a killer was also an admission that Ordinary Men would not be devoid of his own humanity. Browning believed that any study of the Holocaust would be futile without trying to understand its perpetrators in human terms. In endeavoring to understand them himself, Browning fundamentally altered the way in which we study, conceptualize, and discuss the Holocaust, and he did so by rejecting objectivity.

Despite Christopher Browning’s success, his work did not go unchallenged. Two years after the release of Ordinary Men, author Daniel Goldhagen published a scathing rebuttal. In his book, Hitler’s Willing Executioners: Ordinary Germans and the Holocaust, Goldhagen argues that eliminationist anti-Semitism is the only tenable explanation for the mass executions carried out by ordinary Germans. Goldhagen’s thesis, described as “anti-German” by many historians, was widely criticized. In Part III of Hitler’s Willing Executioners, pointedly titled “Police Battalions: Ordinary Germans, Willing Killers,” Goldhagen draws upon the same indictment used by Browning. The Browning versus Goldhagen debate is one of the best examples of history’s inherent subjectivity. It also demonstrates how important it is to read more than one secondary source on any given topic. A non-expert reader might interpret either of these works as absolute truth, but there are no absolute truths in history, only the interpretations of the historians who write it, and they do not always agree. Listed below are 5 tips that can help you get the most out of historiography.

1. Stick to peer-reviewed literature and read a lot of it
The key to any self-guided learning is to read widely, and this is especially true with history. As demonstrated by Christopher Browning and Daniel Goldhagen, just because two historians are using the same evidence does not mean they will make the same argument. Non-expert readers with limited knowledge on a particular topic should make an effort to seek out additional resources for proper context and perspective.

2. Read the footnotes and endnotes
Historians use foot or endnotes to cite their references. A small number following a sentence indicates that there is a corresponding note either at the bottom of the page or at the end of the book. These notes are important sources of information, as they often point to other sources that may be beneficial for a non-expert reader. These sources can help to situate the reader within the proper context of the event and broaden historiographical perspective. Foot and endnotes can also be used for commentary or to elaborate on a point made in the text.

3. Consult primary sources
In history, primary sources are first-hand accounts of a topic. Some examples of primary sources include: newspapers, speeches, diaries, letters, texts of law, survey data (such as the census), photographs, and video. Any primary sources that the author uses should appear in the foot or endnotes. Take a look online and see if you can locate any of these sources in libraries or archives near you. Many primary sources have been digitized and are available online for free. Christopher Browning and Daniel Goldhagen wrote two completely different books using the same primary sources. Sometimes, it can be helpful to have a look at them yourself.

4. Focus on the big picture and ask questions
As you read through the introduction, ask yourself what question the historian is attempting to answer with their book. Is their argument new to the field? Do they provide sufficient evidence? Are they successful in using that evidence to support their argument? How might the authors’ background impact their perspective? Don’t be afraid to be critical!

5. Read the reviews
It can be difficult to judge that which we are unfamiliar with. Book reviews are great resources for experts and non-experts alike. Book reviews are typically short (1-2 page) articles written by other scholars in the field. They will include a brief summary of the book as well as any criticism the reviewer may have. Book reviews can be a great way to identify and avoid unsound historiography.

Open-access secondary sources:
Cambridge University Press
JSTOR Open Access journals
SAGE Open
Institute of Historical Research, School of Advanced Study, University of London
Organization of American Historians

Digital Primary Sources:
Internet Archive
Chronicling America (newspapers)
Library of Congress Digital Collections
Digital Public Library of America
National Archives and Records Administration (NARA)

Over the course of the last decade, alongside the rise of the “fake news” epidemic, the terse expression “educate yourself” has proliferated in digital dialogue, most notably on social media. But while fact-checking may seem like the most obvious answer to the fake news problem, public demand for self-education is merely a one-dimensional solution to the spread of false or misleading information online. From behind our keyboards, we appeal to both friends and strangers to do their own research, but rarely do we stop to think about what self-directed learning actually entails, or the many ways in which it could go wrong. Despite its pitfalls, self-guided research is an important responsibility for all stewards of the internet.

One of the most challenging aspects of self-guided learning is that it forces us to wade through the jargon of specialized fields or areas of study that are otherwise unfamiliar to us. In no discipline is this more difficult than in the sciences. (Unless you are a scientist, of course.) Scientific literature is not only dense, but overflowing with terms and concepts that are entirely foreign to non-expert readers. Students of scientific disciplines receive extensive training in research methods for this reason. With an increasing number of journals embracing open-access practices, more scientific literature is freely available to the public than ever before, but that information is only valuable to the non-expert reader if they have the tools and skills to interpret it. Listed below are 5 tips that can help you get the most out of open-access scientific literature.

1)  Choose reputable scientific journals and read widely

Not all sources are created equal. Seek out journals that produce and publish “good science,” such as those affiliated with reputable scientific societies. If a journal is not affiliated with a scientific society, be sure to verify the editors’ credentials. You can access journal articles for free through open-access databases like ScienceDirect, PubMed, ScienceOpen, Jurn, and CORE.

2)  Read chronologically

Scientific articles typically follow the IMRaD structure. IMRaD, which stands for Introduction, Methods, Results, and Discussion, is a universal framework designed to model the flow of the authors’ research. Even if you only intend to skim the article, it is good research practice to read the sections in order, from introduction to discussion.

Read more about IMRaD

3)  Learn the lingo

Use a highlighter or pen to mark words or concepts that you do not recognize as you read and look up the definitions in a scientific dictionary or glossary.

TSD: The Science Dictionary

Visionlearning Science Glossary

4)  Read the references

Whatever you do, do not disregard references! Authors are required to substantiate their statements by referencing existing published literature. References can be a gold mine of information, especially for a non-expert reader.

5)  Ask questions

Reading scientific literature can be overwhelming. Formulating a set of questions before diving into a scientific article can help you focus not only on the research, but the reason why the research is being done. What problem does it hope to solve? What is the authors’ approach? What methods are being used? Refer back to these questions as you read through the paper to keep your focus on the core issues.

“It is the perpetual error of the human intellect to be more moved and excited by affirmatives than by negatives; whereas it ought properly to hold itself indifferently disposed towards both alike. Indeed in the establishment of any true axiom, the negative instance is the more forcible of the two.”

Francis Bacon, 1620
Novum Organum

In 1960, cognitive psychologist Peter Cathcart Wason conducted an experiment in which he asked participants to identify a rule that could be applied to triples of numbers. Participants were given the numbers 2, 4, and 6 as an example and asked to generate their own sets of triples to fit the rule. Wason periodically gave the participants feedback as they developed their number theories. After five trials, Wason reported that participants typically tested only number sequences that fit their theory of what the rule was, while avoiding triples that would prove their theory wrong.

Have we lost you yet? Putting numbers aside, what Wason found was that the majority of participants deliberately chose information that confirmed their beliefs, while ignoring that which falsified them. Perhaps even more telling is that when Wason’s data was reviewed in 2019, it was found that many of the participants improved their answers after receiving feedback in subsequent trials. By the fifth trial, 23 of the 29 participants were able to solve the problem. Nevertheless, Wason led with the data that proved his theory correct, rather than acknowledging that, given multiple opportunities, participants did not behave the way that he expected them to. Whether he was conscious of it or not, Wason himself had fallen victim to the behavior that we now recognize as “confirmation bias.”

So, what does confirmation bias really mean, and how does it manifest in a digital environment? Confirmation bias is a natural human inclination to seek out, analyze, and store information in such a way that it supports or confirms one’s preexisting beliefs. In short, we see what we want to see. We may also reject or ignore information that does not align with our particular convictions, even that which is supported by credible evidence, because it forces us to face uncomfortable truths. Confirmation bias is especially prevalent on social media, where algorithms, not people, are responsible for making decisions about what types of content we see or do not see. These algorithms create what we may think of as an information bubble, in which users are insulated by others who share similar ideas. These online communities can be exceptionally beneficial, but they can also be dangerous sites of mis- and dis-information spread. The tendency to withdraw into these private digital societies restricts our ability to think outside of the box and comprehend ideas that do not align with our own. As the English philosopher and statesman Francis Bacon observed long before Wason conducted his experiment in the 1960s, humans are naturally inclined to seek affirmation over refutation. We cannot avoid confirmation bias entirely, but we can use this knowledge to lessen its effect.

Overcoming confirmation bias requires consistent effort, as well as a willingness to seek out and process information that does not align with our existing beliefs. This can be uncomfortable work, but it is the most effective way to broaden our collective worldview and slow the spread of mis- and dis-information within our online communities. To avoid confirmation bias, make an effort to pursue contrary opinions and try to understand the rationale behind them. It is also important to consume multiple sources of information that represent multiple points of view. As always, consistent fact-checking should be an integral part of this process – and remember – your local librarians are always available to help you navigate and process information in all environments, whether it be in the library or online.

To learn more about challenging confirmation bias, check out these free digital citizenship lesson plans from Common Sense Education and Facing History:

Challenging Confirmation Bias l Common Sense Education
https://www.commonsense.org/education/digital-citizenship/lesson/challenging-confirmation-bias 

Confirmation and Other Biases l Facing History and Ourselves
https://www.facinghistory.org/resource-library/confirmation-and-other-biases

Imagine: you are sitting at home after a long day at work, phone in hand, scrolling through your Facebook or Instagram feed. You pause to look at a meme posted by one of your friends. It is a photo of Abraham Lincoln accompanied by a quote: “Nearly all men can stand adversity, but if you want to test a man’s character, give him power.” You like the sound of the quote and the message it sends, so you click share, unquestioningly sending it out to the dozens of people who follow your page. However, according to Daniel Worthington, Director of the Papers of Abraham Lincoln at the Abraham Lincoln Presidential Library and Museum, there is no record of this quote in their database. In 2016, the phrase was traced back to a speech delivered by American politician Robert G. Ingersoll in 1883, nearly two decades after the president was assassinated. Abraham Lincoln never spoke those words.

It is the dreaded logical fallacy: a false, flawed, or deceptive statement that sounds logical or insightful, but can be easily disproved through simple reasoning. One of the most effective logical fallacies is the appeal to authority, a fake news tactic that employs the face of a well-known, often widely trusted authority figure, such as Abraham Lincoln, to legitimize a false narrative. A brief Google search for “nearly all men can stand adversity” returns dozens of articles explaining how Ingersoll’s quote came to be misattributed to the sixteenth president. Yet, most social media users do not filter every bit of information that they ingest before releasing it back into the digital universe, especially if it looks and sounds very similar to what they already know to be true. If it looks like Lincoln, and sounds like Lincoln, it must be Lincoln, right?

The logical fallacy is a simple trick of the mind. A bridge from point A to point B that appears stable from either end but has an impassable gap in the very center. An optical illusion designed to deceive the traveler who fails to consult the most recent iteration of the map before embarking on their journey. Fortunately, logical fallacies are quite easy to outsmart if you understand how they function in the digital sphere. The most effective way to identify logical fallacies is through reliable and consistent fact-checking. PolitiFact, FactCheck.org, NPR FactCheck, Snopes, and the Washington Post Fact Checker are all credible fact-checking resources that are freely accessible online. It is important to consult these resources before sharing crowdsourced content online, even when the information seems plausible. Another way that you can identify and stop the spread of misinformation is to inform others when you encounter false claims or statements online by leaving a comment or sending a private message. As always, if ever in doubt, do not hesitate to contact us at the library, where we are happy to answer any questions you may have!

As librarians, it is our duty to help our patrons discern fact from fiction by adequately meeting the information needs of our diverse community, both online and in person. We do this not through persuasion or censorship, but by providing free and open access to authoritative sources supported by credible evidence and expert testimony. Our collections are designed to produce ideas, not promote them. It is in this spirit that we strive to curate resources that support critical thinking and informed decision-making while enhancing media and information literacy.

Despite our ongoing efforts to build and maintain diverse, authentic collections in our physical space, we understand that our patrons will also be turning to online resources to meet their information needs. As a public library, we want all of our patrons to know that our commitment to fighting information disorder extends beyond the walls of our library and into the digital sphere. Going forward, it is our goal to provide you with the necessary tools to combat the spread of mis- and dis-information in your own lives. In the coming months, stay tuned for more guidance on how to identify fake news, find and analyze reliable sources, evaluate your own biases, and consult with experts and fact-checkers to support the reliability of your sources. Together, we can move beyond fake news and into a new era of information literacy for all.
