Reading the headlines about the Online Safety Bill might give you the impression that it has been dramatically weakened.
Five years after it was first proposed, Culture Secretary Michelle Donelan has produced a new version of the bill – the third, by my count – and removed a key element, the provisions against so-called “legal but harmful” content.
Campaigners and charities have accused Ms Donelan of watering down the bill, and on the face of it, the criticism seems fair.
Without rules against “legal but harmful” content, someone who is being abused online in the most shocking circumstances – after being the victim of a terror attack, for instance – will find no protection.
The bill is, undeniably, weaker than before.
Yet, as Ms Donelan tried to explain, the weakness has to be compared to the strength of the original product. Yes, the bill has been diluted, but from a point of eye-watering potency. That doesn’t mean it is now worthless and watery.
To understand just what powerful stuff the Online Safety Bill was, consider the practicalities of going after “legal but harmful” content.
This was a new category created especially for the bill, which meant that things it would be legal to say to someone’s face would no longer be permissible online if they caused someone harm.
That sounded like a good idea, until civil servants tried to define the exact meaning of harm. Did it mean hurt feelings? Physical ill-effects? On one person or lots of people? What about jokes? Or journalism?
Even after years of work, no-one was precisely sure. The attempts to avoid unintended consequences reassured very few people.
‘A recipe for trouble’
If that sounds like a recipe for trouble, wait until you hear how “legal but harmful” was going to be enforced on the ground.
Not by the police, nor by civil servants. Lacking the technology and the human resources needed to comb social media for infringements, the government had decided to hand the responsibility for spotting legal-but-harmful content to the tech giants themselves.
Firms such as Facebook and Twitter were suddenly going to find themselves policing this vague concept, facing large fines if they failed to obey.
Many believed that the firms would over-enforce, shutting down any conversation that seemed even potentially harmful, but in truth no-one knew for sure. Even by the standards of new laws, it was perilously uncertain.
Removing the legal but harmful provisions makes the bill less risky. But there’s a catch. The changes only remove those provisions for adults. The bill still requires children to be protected from viewing harmful material.
This means that all the original difficulties are still very much present. How will firms tell which of their users are children? Presumably they will need massive age verification systems, perhaps using AI technology to identify under-18s.
How will that work? How will they tell the difference between a 17-year-old, who needs to be protected, and an 18-year-old, who apparently does not? What will the punishment be if they fail to get it right?
How to define ‘harm’?
Then there’s the question of how harm will be defined. As things stand, MPs will create a list of things they believe are harmful, which the platforms will have to interpret. This will not be smooth sailing.
Even simple measures in the bill are fraught with difficulty. It was recently announced that the updated legislation would outlaw the encouragement of self-harm. What precisely does that mean? Does that include algorithmic encouragement, or just the publication of certain kinds of content?
There are also real concerns among privacy campaigners that the bill might force firms to delve into people’s messages on apps such as WhatsApp, breaking privacy-preserving end-to-end encryption.
The new bill leaves many areas untouched. But if it is passed – and given the strength of feeling in the Lords, that is far from certain – then it will be a sweeping, complicated law of immense significance.
Campaigners and charities will not be happy. Neither will free speech advocates, who see in this bill a charter for censorship.
But the bill will extend the rule of law to many areas that are at present shockingly unregulated, especially when it comes to children.
What’s more, it will give this government and future governments the chance to learn what works and what doesn’t. That is not a popular way of looking at legislation, but it is vital, in this new area, to learn by doing.
The government has wasted five years when it could have been gathering data and learning how to regulate online spaces. In that time, many children’s lives have been irrevocably damaged, even lost.
Justice delayed, it is sometimes said, is justice denied. The same goes for legislation. This bill might not be too little, but it is certainly too late.