Last week, executives from Facebook, Google and Twitter appeared before three different US congressional committees to answer questions about how their platforms were used to influence voters in the 2016 elections.
It is a significant moment. Other media like radio, newspapers and television have never been grilled like this, though Fox News and The New York Times surely influence voters, too.
Perhaps one difference is that the nature of legacy media influence is clear. It mimics political binaries in the US. It has also been regulated for decades, including degrees of transparency around advertising, classification and distribution. Political ads, in particular, are subject to Federal Election Commission rules, including disclaimers that declare who authorised and financed them.
None of these rules applies to social media. For most of its life, it has been perceived as neutral, a mere conduit for streams of thought. The rough convention was that users are the publishers, not the tech company.
No gatekeepers. Anonymity. Content was just content, not endorsements. Traffic or 'engagement' became the primary value. The more speech, the better.
Not surprisingly, it was women and minorities who first pointed out that this was not sustainable. The hardest push for 'Block' and 'Report' buttons came from groups vulnerable to abuse and threats online. This perversely put the onus on individuals to protect themselves, rather than on harassers not to harass, or on forums to be made safe.
It demonstrates how the libertarian principle driving social media, that all speech is equal, means we could hear less and less from those we need to hear the most. As former Google engineer Yonatan Zunger put it: 'If someone can impose costs on another person for speaking, then speech becomes limited to those most able to pay those costs.' Technology ends up replicating socioeconomic differentials rather than dismantling them, as it claims to do. If you can pay, then you play.
The hearings reveal how this model has been exploited in other ways. It is no longer in dispute that Russian troll farms, such as the St Petersburg-based Internet Research Agency, spread inflammatory content via social media. The initial sample of paid ads and metadata shows that both sides of politics have been studiously targeted.
Facebook handed over 3000 ads and removed at least 470 fake accounts. An estimated 15 per