Ringhand: British Parliamentary Committee, Citing “Insidious Ability” to Mislead Online, Calls for Election Law Changes

The following is a guest post from Lori Ringhand:

A report issued by a British Parliamentary committee generated headlines this week for its scathing takedown of Facebook’s data management practices. The report, however, also made detailed suggestions about how to update British election law to cope with the growing challenges presented by unregulated online electioneering activity.

The report, Disinformation and Fake News, was issued by the House of Commons Digital, Culture, Media and Sport Committee. It is the result of one of several investigations in the UK triggered by concerns about foreign interference, data privacy, and the spread of online misinformation in the 2016 Brexit referendum.

The Committee’s description of the problem it set out to examine was grim:

“In a democracy, we need to experience a plurality of voices and, critically, to have the skills, experience, and knowledge to gauge the veracity of those voices. While the Internet has brought many freedoms across the world and an unprecedented ability to communicate, it also carries the insidious ability to distort, to mislead, and to produce hatred and instability. It functions on a scale and at a speed that is unprecedented in human history.” 

The scope of the Committee’s investigation matched its assessment of the severity of the problem. The Committee worked for 18 months, heard from dozens of witnesses, and convened an “International Grand Committee” bringing together officials from nine legislative bodies around the world, all of whom are struggling to update their own regulations in this area.

The Committee made several important recommendations. Many of them focus on creating more transparency about who is disseminating election-related materials online, and how and why particular individuals are being targeted to see that material. These recommendations include extending existing “imprint” rules to online paid political advertisements (which in the UK, as in the US, are not covered by current disclaimer laws), and requiring social media platforms to validate the identity of entities purchasing political ads and to maintain an archive of all such ads. An additional recommendation would mandate that specific consent be obtained from social media users before personal data harvested online can be used for political profiling.

Other Committee recommendations grew out of concerns that spending already thought to be illegal under current law may nonetheless be funding online election-related activity. To address this concern, the Committee recommended strengthening existing law to make clear that foreign individuals, groups, and corporations are prohibited from making any election-related expenditures in the UK, and to more rigorously protect the system from political money laundering by ensuring that the named source of contributions made to electioneering groups is in fact the true source of the funds (an issue at the heart of an ongoing criminal investigation into whether illegal foreign money was funneled to anti-EU groups during the Brexit campaign).

The most controversial part of the report, though, is likely to be the Committee’s recommendation calling for increased regulation of online material itself, and for large social media platforms to be responsible for policing that material on their sites. Under current law, social media platforms like Facebook are not treated as publishers, and so are generally not liable for illegal content (such as discriminatory employment ads, harassment, or copyright violations) appearing on their sites. The Committee recommends changing that, while also expanding the definition of such content to include “harmful” election-related material. The report explains this recommendation by stressing the growing ability of coordinated misinformation campaigns to wreak havoc in elections by spreading false, fraudulent, and digitally manipulated information online, and the apparent unwillingness of Facebook and other large platforms to do anything about it. But the Committee kicked the can down the road on the vexing question of how to define “harmful content” in this context, recommending instead that a new regulatory body be charged with that unenviable task.