Last week, executives from Facebook, Google and Twitter appeared before three different US congressional committees to answer questions about how their platforms were used to influence voters in the 2016 elections.
It is a significant moment. Older media such as radio, newspapers and television have never been grilled like this, though Fox News and The New York Times surely influence voters, too.
Perhaps one difference is that the nature of legacy media influence is clear. It mimics political binaries in the US. It has also been regulated for decades, including degrees of transparency around advertising, classification and distribution. Political ads, in particular, are subject to Federal Election Commission rules, including disclaimers that declare who authorised and financed them.
None of these apply to social media. For most of its life, it has been perceived as neutral, a mere conduit for streams of thought. The rough convention was that users are the publishers, not the tech company.
No gatekeepers. Anonymity. Content was just content, not endorsements. Traffic or 'engagement' became the primary value. The more speech, the better.
Not surprisingly, it was women and minorities who first pointed out that this was not sustainable. The hardest push for 'Block' and 'Report' buttons came from groups vulnerable to abuse and threats online. This perversely put the onus on individuals to protect themselves, rather than on harassers to stop harassing or on platforms to make their forums safe.
It demonstrates how the libertarian principle driving social media, that all speech is equal, means we could hear less and less from those we need to hear the most. As former Google engineer Yonatan Zunger put it: 'If someone can impose costs on another person for speaking, then speech becomes limited to those most able to pay those costs.' Technology ends up replicating socioeconomic differentials rather than dismantling them, as it claims to do. If you can pay, then you play.
"We would have to hope that teachers and students are being given latitude when it comes to learning how to engage critically with information online."
The hearings reveal how this model has been exploited in other ways. It is no longer in dispute that Russian troll farms, such as the St Petersburg-based Internet Research Agency, spread inflammatory content via social media. The initial sample of paid ads and metadata shows that both sides of politics have been studiously targeted.
Facebook handed over 3000 ads and removed at least 470 fake accounts. An estimated 15 per cent of Twitter accounts, or 48 million, are fake or automated. Google traced more than 1000 YouTube videos to the Internet Research Agency.
The political impact of shareable lies can be hard to extract from partisan feeling, especially when it seems to have benefited one candidate. But the truth is that election results can be a poor measure of anything, much less the success of mischief online. Too many variables are involved, not least of which are susceptibilities in the electorate to such messaging. These must be addressed, as they predated the election and will outlast the hearings on Russian 'active measures'.
So much has been made of foreign interference, and not enough of how the social amplification of misinformation could be so immediate and widespread.
The obvious counterpoint to this is education, both at civic and school levels. We would have to hope that teachers and students are being given latitude when it comes to learning how to engage critically with information online.
The more complex part of blunting responses to inflammatory material is good government. It is not as easy to get someone upset about their government, or politicians in general, when they can see that their healthcare, jobs, house and future are relatively secure. An inclusive discourse helps.
Our leaders essentially need to give voters reasons to see lies for what they are, instead of thinking, 'Hey, that could be true,' when a malicious meme appears on their Facebook feed.
Until this comes to pass, tech giants have choices to make about the content that is circulated on their platforms. They are being exploited by malevolent actors, too.
There is a political case to be made, which will be left to Democrats and Republicans, who have so far been sorely dissatisfied with vague commitments to do better.
An economic case could also be made that an editorial voice, as proposed by Zunger, would maximise user engagement and add value. It would certainly keep social media platforms in the mainstream, where the revenue is, rather than on the fringe.
But there is also a moral case. Should private companies continue to profit from a supposedly neutral model that puts individuals, and even democracy itself, at risk?
Fatima Measham is a Eureka Street consulting editor. She co-hosts the ChatterSquare podcast, tweets as @foomeister and blogs on Medium.