Regulations, Good Practice and Censorship
Aim: To investigate the impact social media has had on human interaction.
Example: Tommy Robinson - co-founder of the English Defence League, far-right and anti-Islam activist - confronted a Twitter user who had tweeted "someone just kill Tommy Robinson already". He receives a lot of coverage in the media for his views - this raises the question of how fine the line is between free speech and hate speech in the media.
Current social media regulation: (BBC article)
The government has proposed measures to regulate social media companies over harmful content, including "substantial" fines and the ability to block services that do not stick to the rules.
At the moment, when it comes to graphic content, social media largely relies on self-governance. Sites such as YouTube and Facebook have their own rules about what is unacceptable and the way that users are expected to behave towards one another. This includes content that promotes fake news, hate speech or extremism, or could trigger or exacerbate mental health problems.
Relevant regulatory bodies:
IPSO: Can monitor online copy from the newspaper and magazine industries but has no control over private individuals/businesses.
ASA: Can advise on online advertisements, as can Ofcom and BBFC.
Online streaming and VoD (coupled with increasing piracy), however, make this problematic.
Using social media at work
Example: Justine Sacco. Her tweet received a lot of backlash from other Twitter users. She posted it prior to a 12-hour flight to South Africa, but did not see the reaction until she landed. The tweet was then removed from Twitter and Sacco was fired from her job; the company asked that Sacco not be condemned. Two days later she issued an apology, first to a South African newspaper and then to ABC News (USA).
£40,000 damages in Twitter libel case:

Self-governance: YouTube has defended its record on removing inappropriate content. The video-sharing site said that 7.8m videos were taken down between July and September 2018, with 81% of them removed automatically by machines, and three-quarters of those clips never receiving a single view. Globally, YouTube employs 10,000 people in monitoring and removing content, as well as policy development.
Facebook, which owns Instagram, told the BBC that it has 30,000 people around the world working on safety and security. It said that it removed 15.4m pieces of violent content between October and December, up from 7.9m in the previous three months.
Some content can be automatically detected and removed before it is seen by users. In the case of terrorist propaganda, Facebook says 99.5% of all material taken down between July and September was done by "detection technology".
If illegal content, such as "revenge pornography" or extremist material, is posted on a social media site, it will be the person who posted it, rather than the social media companies, who is most at risk of prosecution.
Regulation: There is (currently) no standard regulation or regulatory body to monitor social media. However, private businesses cannot break the Data Protection Act (by, for example, publishing clients' personal details without permission).
Example: Cambridge Analytica: 'a lesson in institutional failure'
Cambridge Analytica had exploited Facebook data harvested from millions of people across the world to profile and target them with political messages and misinformation, without their knowledge or consent. The story made headlines all over the world. Britain's Information Commissioner obtained a warrant to enter Cambridge Analytica's offices and seize its servers. Facebook's market value plunged by more than $50bn.
In April Facebook admitted it wasn't 50 million users who had had their profiles mined - it was actually 87 million. Mark Zuckerberg was hauled before the US Congress, and in October the Information Commissioner's Office fined Facebook its maximum possible penalty - £500,000 - which Facebook is appealing against (2016 - ongoing).
Regulation: Equally, hate speech is covered by law, and this applies to social media. Hate speech includes:
- Racism
- Homophobia
- Sexism
- Xenophobia
- Islamophobia
Terrorist Attacks:
Increasingly, terrorist cells and individuals are uploading graphic content and sharing it via social media. This is taken down by the relevant organisation (such as Facebook or Twitter), which responds to 'reported' content that may be offensive.
The issue, though, is the response time. For example, ISIS beheading videos, after being uploaded to social media sites, were shared among users so rapidly that authorities were effectively unable to remove them. Users view the content and record it, so even if the site removes the video it is simply posted again by multiple people under unofficial names, making it harder for the site to trace. There is also the issue of the 'dark web'.
'What draws people to the dark corners of the web' (audio):
People who remove offensive content from media are called 'sin-sifters'.
It feels morally wrong to watch someone being brutally attacked online.
The brutal ISIS beheadings were being streamed online. 3,000 people were interviewed: 95% said they watched a portion of the video (roughly 20% of it), while 5% said they watched the entire video. Reasons given included curiosity, wanting to see if it was real, and the sheer brutality. It is a very private compulsion that people don't want their friends/family to know about. One Reddit user, who has had a morbid curiosity about death since he was young, watches death videos so often that he has become desensitised to them. The videos were removed from the site after someone posted footage of the Christchurch mosque attacks.
Christchurch mosque attacks:
Carried out by a white-supremacist, far-right extremist at two mosques in Christchurch; at least 49 people were killed and 48 injured. The attacks were broadcast by the attacker in a live Facebook stream. This was the deadliest terrorist attack in New Zealand's history.
PewDiePie condemned the attacker, who said "remember lads, subscribe to PewDiePie" at the start of the live stream; PewDiePie said it sickened him that the man uttered his name while committing such a deadly attack.
The attacker had also posted a manifesto covering white supremacy, terrorism and anti-immigration views.
Self-regulation: whether professionally or privately, users are advised to comply with the law through self-regulation. This means they should have an awareness of what is, and what is not, illegal, and act accordingly.
Censorship:
North Korea
China:


Comment (Mr Boon): Excellent notes on the whole - the only bit missing is North Korea and China - very interesting case studies when it comes to censorship.