Have you ever come across something online that made you feel unsafe — a harassing comment, fake profile, or even disturbing content that platforms seemed to ignore? You’re not alone.
In fact, a study by Singapore’s Infocomm Media Development Authority (IMDA) earlier this year revealed something worrying: more than half of legitimate user complaints about harmful content were not acted on promptly. That’s exactly what pushed the government to act — and now, a new Online Safety Commission is on its way.
What the New Law Actually Does
The newly proposed Online Safety Bill, tabled in Parliament this week, aims to set up a powerful Online Safety Commission by mid-2026.
This commission will have the authority to:
- Order social media platforms to remove or restrict access to harmful content within Singapore.
- Ban repeat offenders from accessing their social media accounts.
- Block harmful online spaces, including group pages and websites spreading toxic or illegal material.
- Give victims a right to reply to false or damaging posts — a first for Singapore.
In short, the law is designed to protect users, not censor opinions. It targets content like child exploitation, online stalking, harassment, doxxing, and abuse of intimate images — areas where global tech giants have been slow to act.
Why Singapore Is Taking a Stand
Digital Minister Josephine Teo said it plainly: “More often than not, platforms fail to take action to remove genuinely harmful content reported to them by victims.”
And she’s right — harmful content has become a modern form of violence. Whether it’s a child being bullied online or a victim’s private information leaked publicly, the damage can be deep and long-lasting.
The government’s message is clear: digital safety is no longer optional.
This move also follows earlier action under the Online Criminal Harms Act (2024), under which Meta was warned that it could face fines of up to S$1 million for failing to curb impersonation scams.
What This Means for Singapore Users
For most people, this new law won’t restrict your freedom — it’ll make your online experience safer. Here’s how it helps you directly:
- Faster response to harmful posts when you report them.
- Less exposure to scams, fake accounts, and online harassment.
- Clear accountability for big tech companies that host harmful content.
Think of it as a “digital neighbourhood watch” — but with real legal power behind it.
What’s Next
The bill will be debated in Parliament soon, and if passed, the commission could begin work by mid-2026. The rollout will happen in phases, gradually including more forms of online harm such as non-consensual sharing of personal data and incitement of hatred or violence.
It’s a bold step — and one that could redefine how governments worldwide approach online accountability.
Frequently Asked Questions
1. When will Singapore’s Online Safety Commission start operating?
If the bill passes, the commission is expected to begin its work by mid-2026, with powers to block harmful content and handle user complaints.
2. Will this law affect what I post on social media?
Not unless your posts involve harassment, hate speech, or illegal content. The law focuses on protecting users, not limiting free expression.
3. What kinds of online harms are covered under the new law?
The initial phase targets harassment, stalking, child exploitation material, doxxing, and intimate image abuse. More categories, such as incitement of enmity, will be added later.