The ability to promote discussion through social media is a critical tool for people and organisations to create much-needed conversations about issues that matter but aren’t talked about enough, in societies around the world: from young Nigerians promoting dialogue about taboo subjects like domestic abuse to a US non-profit creating a viral hashtag about race and equity in the country’s education system. Many even credit the Arab Spring as the first ‘Twitter revolution’, as it is widely acknowledged that social media facilitated interaction and communication amongst participants in political protests.
But although the UN declared internet access a fundamental human right in June last year, the internet now does much more than inform and connect people. Discussions of major events have shown that, particularly where social media is concerned, the internet also has the power to misinform, divide and even harm people.
News coverage is one example: ‘fake news’ is now often wielded, by individuals and organisations alike, as a weapon to promote or push back against competing ideologies online. During the 2016 US presidential election, fake news stories were widely published and circulated with ease across all major social media platforms. A survey of young people aged 10 to 18 in the US found that 31% had shared a news story online in the previous six months that they later found out was false. Although the true influence of fake news is difficult to measure, the sight of fake news articles spreading like wildfire across social media platforms has become a common occurrence.
Young people now rely more on social media than on television as a news source, and this shift has possible benefits, since coverage and commentary on live events arrive much faster online. But the rise of fake news means that genuine solutions for preventing the spread of misinformation on social media are desperately needed. To combat misinformation on its platform, Facebook announced plans in December to create a fact-checking system, but nine months on it is not clear whether this has reduced the scale of fake news circulating on the network.
Other, potentially far more harmful, drawbacks stem from the ability of individuals to present themselves in a different image online and thus behave differently than they would in real life. Some use their virtual persona to spread harassment and abuse: 1 in 5 teenagers worldwide have experienced online abuse, and more than half of those surveyed say that cyberbullying is worse than being bullied in person. Young people in particular have urged major social media companies to do more to tackle bullying on their sites.

Other individuals may use social media to create a persona that they aspire to resemble in real life, often in response to societal expectations about, for instance, body image and career goals being presented on social media as the norm. A survey of young people in the UK found that 35% of girls aged 11-21 are most worried about comparing themselves with others online, and a third of girls are worried about how they look in the photos they post.

As well as facing harassment and societal pressures online, obsession with, or even addiction to, social media is a genuine issue for many people. Researchers at the University of Chicago suggest that social media addiction can be stronger than addiction to cigarettes and alcohol, and can significantly affect a person’s daily and social life, as well as their mental health.
Social media is also used as a tool for radicalisation. Propaganda glorifying the Islamic State aims to recruit vulnerable individuals online, in many cases persuading them to travel to Syria or encouraging them to commit jihad on UK soil. The Home Affairs Committee reports that mosques or religious institutions were involved in only 1 or 2% of radicalisation cases, whereas online radicalisation played a part in almost all of them. Extremist content exists across the internet, from both Islamic extremist sources and far-right groups such as Britain First, whose party page has more Facebook likes than any other UK political party.
To combat the spread of vitriol and extremism, tech giants Facebook, YouTube, Twitter and Microsoft have created a collaborative forum to share best practices and potential solutions, and Google now allows users to report false or offensive information in its search suggestions and boxed-out answers. As for online harassment and cyberbullying, the Crown Prosecution Service will order prosecutors to treat online hate crimes as seriously as offences carried out in person. However, many hate crimes go unreported, and critics say that social media companies have so far done too little to curb the spread of abuse on their platforms.
The steps taken by tech companies so far are a good start, but more robust action is needed to tackle both the spread of misinformation and the harmfully antisocial side of social media. Vigilance when browsing the internet is key. Our Web Guardians™ programme aims to show mothers that, while the internet is an endless source of information, it can also be extremely harmful to their children. To find out more about the programme, go to www.webguardians.org.