
Sunday, January 27, 2019

WHY DO WE PREFER FICTION?

HOW MANY PEOPLE IN THE USA THRIVE ON LIES, AND WHO ARE THEY? IT’S NOT REALLY SURPRISING. SEE THE FOLLOWING ARTICLE. I THINK THE KEY IS A STRONG STRAIN OF ESSENTIAL MALICE; I WON’T GO SO FAR AS TO SAY THAT THEY ARE DEMONIC, BECAUSE I DON’T BELIEVE IN DEMONS, BUT I DO BELIEVE IN INSANITY, INNATE VICIOUSNESS, AND LOW INTELLIGENCE. SO, LET’S POSTULATE THAT MANY MORE PEOPLE THAN WE USUALLY THINK OF IN THIS WAY ARE INSANE – INCLUDING THOSE WITH LOW-LEVEL ILLNESSES SUCH AS SCHIZOPHRENIA, A MISSING CONSCIENCE, OR POOR LOGICAL ABILITY – AND THAT TOO MANY OF THE REST ARE UNDEREDUCATED. PONDER THIS: WHAT IS THE PROBABLE EFFECT ON GROUP IQ OF A STRONGLY ENFORCED, MANDATORY SET OF “BELIEFS,” WHETHER RELIGIOUS, POLITICAL, MILITARY, OR OTHERWISE CULTURAL? BE SURE TO INCLUDE HARSH PUNISHMENT OR SHAMING OF CHILDREN IN THIS CATEGORY.

IF WE CAN’T STAND UP AGAINST BRAINWASHING AND AT LEAST PROTEST OUT LOUD AND IN PUBLIC, ARE WE GOING TO SURVIVE AS A DEMOCRACY? I’M AFRAID NOT, AND NOT BECAUSE OF AN INVASION BY THE RUSSIANS, EITHER. IT’S OUR OWN FAULT. DO YOU REMEMBER WHEN DONALD TRUMP SAID AT A RALLY, “I LOVE THE POORLY EDUCATED”? I’LL JUST BET HE DOES. THEY’RE EASIER TO FOOL AND TO LEAD AROUND BY THE NOSE. THEY WILL MORE QUICKLY SAY, “YES, MASTER.” READ THIS ARTICLE. I HAVEN’T COPIED THE GRAPHICS HERE, SO GO TO THE WEBSITE BELOW TO READ IT ALL. IT’S LONG AND REQUIRES CLOSE ATTENTION, BUT IT EXPLAINS A LOT.


http://science.sciencemag.org/content/363/6425/374

Fake news on Twitter during the 2016 U.S. presidential election
Nir Grinberg, Kenneth Joseph, Lisa Friedland, Briony Swire-Thompson, David Lazer

Science 25 Jan 2019:
Vol. 363, Issue 6425, pp. 374-378
DOI: 10.1126/science.aau2706

Finding facts about fake news
There was a proliferation of fake news during the 2016 election cycle. Grinberg et al. analyzed Twitter data by matching Twitter accounts to specific voters to determine who was exposed to fake news, who spread fake news, and how fake news interacted with factual news (see the Perspective by Ruths). Fake news accounted for nearly 6% of all news consumption, but it was heavily concentrated—only 1% of users were exposed to 80% of fake news, and 0.1% of users were responsible for sharing 80% of fake news. Interestingly, fake news was most concentrated among conservative voters.

Science, this issue p. 374; see also p. 348

Abstract
The spread of fake news on social media became a public concern in the United States after the 2016 presidential election. We examined exposure to and sharing of fake news by registered voters on Twitter and found that engagement with fake news sources was extremely concentrated. Only 1% of individuals accounted for 80% of fake news source exposures, and 0.1% accounted for nearly 80% of fake news sources shared. Individuals most likely to engage with fake news sources were conservative leaning, older, and highly engaged with political news. A cluster of fake news sources shared overlapping audiences on the extreme right, but for people across the political spectrum, most political news exposure still came from mainstream media outlets.

In 1925, Harper’s Magazine published an article titled “Fake news and the public,” decrying the ways in which emerging technologies had made it increasingly difficult to separate rumor from fact (1). Nearly a century later, fake news has again found its way into the limelight, particularly with regard to the veracity of information on social media and its impact on voters in the 2016 U.S. presidential election. At the heart of these concerns is the notion that a well-functioning democracy depends on its citizens being factually informed (2). To understand the scope and scale of misinformation today and most effectively curtail it going forward, we need to examine how ordinary citizens experience misinformation on social media platforms.

To this end, we leveraged a panel of Twitter accounts linked to public voter registration records to study how Americans on Twitter interacted with fake news during the 2016 election season. Of primary interest are three simple but largely unanswered questions: (i) How many stories from fake news sources did individuals see and share on social media? (ii) What were the characteristics of those who engaged with these sources? (iii) How did these individuals interact with the broader political news ecosystem? Initial reports were alarming, showing that the most popular fake news stories in the last 3 months of the presidential campaign generated more shares, reactions, and comments on Facebook than the top real news stories (3). However, we do not yet know the scope of the phenomenon, in part because of the difficulty of reliably measuring human behavior from social media data (4). Existing studies of fake news on social media have described its spread within platforms (5, 6) and highlighted the disproportionate role played by automated accounts (7), but they have been unable to make inferences about the experiences of ordinary citizens.

Outside of social media, fake news has been examined among U.S. voters via surveys and web browsing data (8, 9). These methods suggest that the average American adult saw and remembered one or perhaps several fake news stories about the 2016 election (8), that 27% of people visited a fake news source in the final weeks before the election, and that visits to these sources constituted only 2.6% of hard news site visits (9). They also show a persistent trend of conservatives consuming more fake news content, with 60% of fake news source visits coming from the most conservative 10% of Americans (9). However, because social media platforms have been implicated as a key vector for the transmission of fake news (8, 9), it is critical to study what people saw and shared directly on social media.

Finally, social media data also provide a lens for understanding viewership patterns. Previous studies of the online media ecosystem have found evidence of insulated clusters of far-right content (10), rabbit holes of conspiratorial content (11), and tight clusters of geographically dispersed content (12). We wish to understand how fake news sources were positioned within this ecosystem. In particular, if people who saw content from fake news sources were isolated from mainstream content, they may have been at greater risk of adopting misinformed beliefs.

Data and definitions

Fake news sources

We follow Lazer et al. (13), who defined fake news outlets as those that have the trappings of legitimately produced news but “lack the news media’s editorial norms and processes for ensuring the accuracy and credibility of information.” The attribution of “fakeness” is thus not at the level of the story but at that of the publisher [similar to (9)].

We distinguished among three classes of fake news sources to allow comparisons of different operational definitions of fake news. The three classes correspond to differences in methods of generating lists of sources as well as perceived differences in the sites’ likelihoods of publishing misinformation. We labeled as “black” a set of websites taken from preexisting lists of fake news sources constructed by fact-checkers, journalists, and academics (8, 9) who identified sites that published almost exclusively fabricated stories [see supplementary materials (SM) section S.5 for details]. To measure fake news more comprehensively, we labeled additional websites as “red” or “orange” via a manual annotation process of sites identified by Snopes.com as sources of questionable claims. Sites with a red label (e.g., Infowars.com) spread falsehoods that clearly reflected a flawed editorial process, and sites with an orange label represented cases where annotators were less certain that the falsehoods stemmed from a systematically flawed process. There were 171 black, 64 red, and 65 orange fake news sources appearing at least once in our data.
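
Because “fakeness” is attributed at the level of the publisher, classifying a tweet’s URL reduces to checking its registered domain against the three curated lists. Here is a minimal sketch of that step in Python; the list contents are illustrative (only Infowars.com is named in the text as a red site, and the other entries are hypothetical placeholders):

```python
# Publisher-level labeling: a URL inherits the label of its domain.
# List contents below are illustrative, not the study's actual lists.
from urllib.parse import urlparse

BLACK = {"example-fabricator.com"}      # hypothetical entry
RED = {"infowars.com"}                  # named in the text as a red site
ORANGE = {"example-questionable.com"}   # hypothetical entry

def label_url(url: str) -> str:
    """Return 'black', 'red', 'orange', or 'other' for a political URL."""
    domain = urlparse(url).netloc.lower()
    if domain.startswith("www."):
        domain = domain[4:]
    if domain in BLACK:
        return "black"
    if domain in RED:
        return "red"
    if domain in ORANGE:
        return "orange"
    return "other"

print(label_url("https://www.infowars.com/some-story"))  # -> 'red'
```

Labeling at the domain level avoids adjudicating every individual article, at the cost of occasionally mislabeling an accurate story published by a flagged site.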

Voters on Twitter
To focus on the experiences of real people on Twitter, we linked a sample of U.S. voter registration records to Twitter accounts to form a panel (see SM S.1). We collected tweets sent by the 16,442 accounts in our panel that were active during the 2016 election season (1 August to 6 December 2016) and obtained lists of their followers and followees (accounts they followed). We compared the panel to a representative sample of U.S. voters on Twitter obtained by Pew Research Center (14) and found that the panel is largely reflective of this sample in terms of age, gender, race, and political affiliation (see SM S.2).
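
The exact matching procedure is described in SM S.1; as a rough, hypothetical sketch of how such a linkage can work, the following joins voter-file rows to Twitter profiles on a normalized name plus state and keeps only unambiguous matches. All field names and the matching rule are assumptions for illustration, not the paper's method:

```python
# Hypothetical record linkage: exact match on normalized name + state,
# keeping only one-to-one matches to limit false positives.
def normalize(name: str) -> str:
    return " ".join(name.lower().split())

def link_panel(voters, profiles):
    """Yield (voter, profile) pairs that agree uniquely on name and state."""
    index = {}
    for p in profiles:
        index.setdefault((normalize(p["name"]), p["state"]), []).append(p)
    for v in voters:
        matches = index.get((normalize(v["name"]), v["state"]), [])
        if len(matches) == 1:   # discard ambiguous matches
            yield v, matches[0]

voters = [{"name": "Jane Q. Doe", "state": "MA"}]
profiles = [{"name": "Jane Q. Doe", "state": "MA", "handle": "@jqd"}]
print(list(link_panel(voters, profiles)))
```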

We estimated the composition of each panel member’s news feed from a random sample of the tweets posted by their followees. We called these tweets, to which an individual was potentially exposed, their “exposures.” We also analyzed the panel’s aggregate exposures, in which, for example, a tweet from an account followed by five panel members was counted five times. We restricted our analysis to political tweets that contained a URL for a web page outside of Twitter (SM S.3 and S.4). Because we expected political ideology to play a role in engagement with fake news sources, we estimated the similarity of each person’s feed to those of registered Democrats or Republicans. We discretized the resulting scores to assign people into one of five political affinity subgroups: extreme left (L*), left (L), center (C), right (R), and extreme right (R*). Individuals with fewer than 100 exposures to political URLs were assigned to a separate “apolitical” subgroup (SM S.10).
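
The discretization step can be pictured as simple thresholding of the continuous feed-similarity score, with low-activity accounts routed to the apolitical bucket first. In the sketch below, only the 100-exposure threshold comes from the text; the score range and cutoff values are invented for illustration (the actual procedure is in SM S.10):

```python
# Map a continuous left-right feed-similarity score (assumed here to lie
# in [-1, 1]) to one of five political affinity subgroups. Cutoffs are
# illustrative assumptions, not the paper's values.
def affinity_subgroup(score: float, n_political_exposures: int) -> str:
    if n_political_exposures < 100:   # threshold stated in the text
        return "apolitical"
    if score <= -0.6:
        return "L*"   # extreme left
    if score <= -0.2:
        return "L"    # left
    if score < 0.2:
        return "C"    # center
    if score < 0.6:
        return "R"    # right
    return "R*"       # extreme right

print(affinity_subgroup(-0.75, 350))  # -> 'L*'
print(affinity_subgroup(0.10, 40))    # -> 'apolitical'
```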

Results
Prevalence and concentration
When totaled across all panel members and the entire 2016 U.S. election season, 5.0% of aggregate exposures to political URLs were from fake news sources. The fraction of content from fake news sources varied by day (Fig. 1A), increasing (in all categories) during the final weeks of the campaign (SM S.7). Similar trends were observed in content sharing, with 6.7% of political URLs shared by the panel coming from fake news sources.
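
The daily series in Fig. 1A is, in effect, a per-day fraction: exposures to black, red, or orange sources divided by all exposures to political URLs that day. A toy version of that computation, assuming a hypothetical table of dated, labeled exposures:

```python
# Daily prevalence of fake news exposures, on made-up example data.
import pandas as pd

exposures = pd.DataFrame({
    "date": ["2016-10-01", "2016-10-01", "2016-10-02", "2016-10-02"],
    "label": ["red", "other", "black", "other"],
})

is_fake = exposures["label"].isin(["black", "red", "orange"])
daily_pct = is_fake.groupby(exposures["date"]).mean() * 100
print(daily_pct)  # percent of each day's political-URL exposures from fake sources
```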


Fig. 1
Prevalence over time and concentration of fake news sources.
(A) Daily percentage of exposures to black, red, and orange fake news sources, relative to all exposures to political URLs. Exposures were summed across all panel members. (B to D) Empirical cumulative distribution functions showing the distribution of exposures among websites (B), the distribution of shares by panel members (C), and the distribution of exposures among panel members (D). The x axis represents the percentage of websites or panel members responsible for a given percentage (y axis) of all exposures or shares. Black, red, and orange lines represent fake news sources; the blue line denotes all other sources. The distribution for all other sources is omitted from (B) because that set has a much larger number of sources in its tail and was selected by a fundamentally different process.

However, these aggregate volumes mask the fact that content from fake news sources was highly concentrated, both among a small number of websites and a small number of panel members. Within each category of fake news, 5% of sources accounted for more than 50% of exposures (Fig. 1B). There were far more exposures to red and orange sources than to black sources (2.4, 1.9, and 0.7% of aggregate exposures, respectively), and these differences were largely driven by a handful of popular red and orange sources. The top seven fake news sources—all red and orange—accounted for more than 50% of fake news exposures (SM S.5).

Figure 1, C and D, shows that content was also concentrated among a small fraction of panel members for all categories of fake news sources. A mere 0.1% of the panel accounted for 79.8% of shares from fake news sources, and 1% of panel members consumed 80.0% of the volume from fake news sources. These levels of concentration were not only high in absolute terms, they were also unusually high relative to multiple baselines both within and beyond politics on Twitter (SM S.15).
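
The concentration figures quoted here (and plotted in Fig. 1, C and D) answer a simple question: what is the smallest fraction of panel members that accounts for a target share, say 80%, of all fake news shares? A short sketch of that computation on hypothetical heavy-tailed data; the same function, applied to per-website counts, yields the source-level concentration of Fig. 1B:

```python
# Smallest fraction of users (sorted by volume) covering `target` of the total.
import numpy as np

def top_fraction_for_share(counts, target=0.80):
    counts = np.sort(np.asarray(counts, dtype=float))[::-1]  # heaviest first
    cumulative = np.cumsum(counts) / counts.sum()
    k = int(np.searchsorted(cumulative, target)) + 1          # users needed
    return k / len(counts)

rng = np.random.default_rng(0)
fake_shares = rng.pareto(1.1, size=10_000)  # toy heavy-tailed share counts
print(f"{top_fraction_for_share(fake_shares):.3%} of users cover 80% of shares")
```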

The “supersharers” and “superconsumers” of fake news sources (those accounting for 80% of fake news sharing or exposure) dwarfed typical users in their affinity for fake news sources and, furthermore, in most measures of activity. For example, on average per day, the median supersharer of fake news (SS-F) tweeted 71.0 times, whereas the median panel member tweeted only 0.1 times. The median SS-F also shared an average of 7.6 political URLs per day, of which 1.7 were from fake news sources. Similarly, the median superconsumer of fake news sources had almost 4700 daily exposures to political URLs, as compared with only 49 for the median panel member (additional statistics in SM S.9). The SS-F members even stood out among the overall supersharers and superconsumers, the most politically active accounts in the panel (Fig. 2). Given the high volume of posts shared or consumed by these supersharers and superconsumers of fake news, as well as indicators that some tweets were authored by apps, we find it likely that many of these accounts were cyborgs: partially automated accounts controlled by humans (15) (SM S.8 and S.9). Their tweets included some self-authored content, such as personal commentary or photos, but also a large volume of political retweets. For subsequent analyses, we set aside the supersharer and superconsumer outlier accounts and focused on the remaining 99% of the panel.


Fig. 2
Shares and exposures of political URLs by outlier accounts, many of which were also SS-F accounts.

(A) Overall supersharers: top 1% among panelists sharing any political URLs, accounting for 49% of all shares and 82% of fake news shares. Letters above bars indicate political affinities. (B) Overall superconsumers: top 1% among panelists exposed to any political URLs, accounting for 12% of all exposures and 74% of fake news exposures. Black, red, and orange bars represent content from fake news sources; yellow or gray bars denote nonfake content (SS-F accounts are shown in yellow). The rightmost bar shows, for scale, the remainder of the panel’s fake news shares (A) or exposures (B).

