Why in news?
Australia has implemented a first-of-its-kind ban preventing anyone under 16 from using major social media platforms such as TikTok, Instagram, Facebook, YouTube, X, Snapchat, and Threads.
Under the new rules, minors cannot create new accounts, and existing profiles are being deactivated. The historic move is drawing global attention as other countries watch how the ban unfolds and whether it effectively protects children online.
What’s in Today’s Article?
- Australia Sets Global Precedent With Social Media Age Ban
- Why Australia Introduced the Under-16 Social Media Ban
- Concerns Over Rights and Feasibility
- How Australia’s Rule Differs From India’s Approach
Australia Sets Global Precedent With Social Media Age Ban
- Australia has become the first country to legally enforce a minimum age of 16 for social media use.
- Platforms like Instagram, YouTube, Snapchat and others must now block over a million underage accounts, marking a major shift in global online safety regulation.
- What the New Australian Law Mandates
- Under the Online Safety Amendment (Social Media Minimum Age) Act, platforms must:
- Take “reasonable steps” to identify under-16 users and deactivate their accounts.
- Block new account creation by anyone below 16.
- Prevent workarounds, such as fake birthdays or identity misrepresentation.
- Provide a grievance mechanism to correct cases where someone is wrongly blocked or wrongly allowed.
- This shift places direct responsibility on tech platforms to verify user ages and enforce compliance — something never before mandated at this scale.
- Key Exemptions in the Law
- The Australian government has excluded several online services from the age ban:
- Dating apps
- Gaming platforms
- AI chatbots
- This has raised questions, especially as some AI tools have recently been found allowing inappropriate or “sensual” conversations with minors.
Why Australia Introduced the Under-16 Social Media Ban
- The Australian government says the ban aims to shield young users from the “pressures and risks” created by social media platforms.
- These include:
- Addictive design features that encourage excessive screen time
- Harmful or unsafe content affecting mental health and well-being
- High levels of cyberbullying — over half of young Australians report experiencing it
- The government argues that stronger safeguards are required because existing platform policies have failed to protect minors.
- Regulatory Impact: Big Tech Under Pressure
- The new law has forced major companies such as Meta, Google and TikTok to overhaul their systems.
- Meta has reportedly begun deactivating under-16 accounts.
- Platforms that fail to block under-16 users face penalties of up to AUD 49.5 million.
- Although tech companies oppose the law publicly, all have stated they will comply.
- Importantly, children themselves aren’t penalised for attempting to access social media — only platforms are.
Concerns Over Rights and Feasibility
- The Australian Human Rights Commission has criticised the blanket ban, arguing that:
- It may restrict a child’s right to free expression
- It risks pushing children to unsafe, unregulated online spaces
- Enforcement challenges could weaken the effectiveness of the law
- Debate continues over whether this strict ban is the right solution or if more balanced, protective alternatives exist.
- The Risk of State Overreach
- Digital rights advocates warn that child-safety regulations can expand into tools of state control. Examples include:
- Turkey, where child-safety powers were used to remove political posts.
- Brazil, where similar laws restricted election content.
- India, where online speech is already heavily regulated.
- Safety rules can become a gateway to censorship.
- Why Bans Often Fail in Practice
- Teenagers repeatedly bypass restrictions using VPNs, fake ages, and loopholes.
- The internet’s decentralised design — originally meant for resilience — makes enforcing bans extremely difficult.
- Meanwhile, platforms like Twitch host thriving creator economies, complicating blanket restrictions.
- Reactions: Tech Pushback, Parental Support
- Tech companies warn the new rules may be impractical and intrusive.
- Parents and safety advocates widely support the move, citing rising online harms, bullying, and mental-health concerns among teenagers.
- The law is now being closely watched by other governments as a possible model for future regulation.
How Australia’s Rule Differs From India’s Approach
- Unlike Australia’s blanket ban, India does not restrict children from using social media.
- Instead, the Digital Personal Data Protection Act, 2023 focuses on parental consent and data safeguards.
- Key points:
- No minimum age for social media use, but anyone under 18 is treated as a child under the law.
- Platforms must implement a “verifiable parental consent” mechanism before processing children’s data — though the law does not prescribe how this must be done.
- Companies are prohibited from processing children’s data in ways that may harm their well-being.
- Platforms cannot track children, monitor their behaviour, or direct targeted advertising at them.
- India’s model is therefore data-protection–centric, not access-restricting, unlike Australia’s outright ban for under-16s.