Assessing the Social Media Ban: Separating Concerns from the Evidence

Australia’s upcoming ban on social-media accounts for under-16s has prompted strong debate, particularly following comments from NSW Libertarian MLC John Ruddick, who has expressed concerns about free expression, personal freedoms and the potential for expanded identification requirements. As discussion continues, it is useful to examine the legislation alongside the available research to understand what the ban does, why it was drafted, and how it may affect young people and the wider community.

What the Legislation Actually Introduces

The Online Safety Amendment (Social Media Minimum Age) Act 2024, which takes effect on 10 December 2025, requires major social-media platforms to take “reasonable steps” to prevent children under 16 from holding accounts, as outlined in the bill’s summary. The requirement applies to platforms such as Facebook, Instagram, TikTok, Snapchat, X, Reddit and YouTube.

Importantly:

  • The law regulates platforms, not individuals or families.
  • Under-16s are still able to use messaging services, school platforms, and child-specific products such as YouTube Kids.
  • According to the eSafety Commissioner, platforms cannot rely solely on government ID to verify age; they must use a mix of privacy-preserving methods such as behavioural signals and age-estimation technology, with ID required only for users who choose to appeal an incorrect assessment.

These safeguards mean the legislation does not establish a universal identification requirement for all social-media use, though some adults may encounter verification prompts if they are mistakenly flagged as underage.

Understanding the Free Speech Considerations

Ruddick and others have raised questions about the effect of the ban on young people’s participation in political discussion. The restriction applies only to a specific group of commercial social-media platforms; it does not prevent teenagers from communicating through email, messaging apps, school forums, youth organisations, community events or other digital spaces.

Australia’s implied freedom of political communication, as referenced in analyses of the amendment on Wikipedia, limits laws that disproportionately burden political communication. The doctrine nevertheless allows Parliament to introduce reasonable and proportionate restrictions that serve legitimate aims, including child safety. Whether the High Court will find this particular law proportionate remains to be determined, but the purpose of the legislation is clearly framed around reducing harm rather than limiting political speech.

What the Research Says About Online Harm

Several concerns driving the ban are grounded in recent studies. The eSafety Commissioner’s national survey, summarised in “Latest eSafety research reveals social media use is widespread among kids — and so are the harms”, found that:

  • 96% of Australian children aged 10–15 use social media.
  • Seven in ten reported encountering harmful content, including violent material, sexual content, self-harm imagery, hateful posts or risky viral challenges.

This research also found that grooming-type interactions most commonly occur on mainstream social-media platforms.

Separately, an academic study, “Measuring Harmful Content Over Time on Video-Sharing Platforms”, showed that accounts registered as belonging to “13-year-olds” encountered graphic or harmful video content more quickly and more frequently than “18-year-old” accounts, even when they simply scrolled passively.

These findings suggest that the risks are not confined to rare incidents; they are linked to how platforms are designed and how their recommendation systems function.

Parenting, Platforms and Shared Responsibility

Some critics argue that online safety is primarily a parental responsibility. While parents play a central role, the research highlights areas where individual supervision alone may not be sufficient — particularly given that harmful content often appears through algorithmic recommendations or automated feeds outside parental control.

The legislation aims to create a baseline of protection at the platform level, which the Government argues supports, rather than replaces, the role of families.

The debate surrounding the social-media age restriction reflects broader questions about children’s rights, mental health, technological design and the responsibilities of governments and corporations. Ruddick’s concerns highlight important civil-liberty considerations, while the Government’s position is grounded in evidence of widespread exposure to harmful content and the risks associated with online environments.

Proportionality and Practical Impact

The ban applies only to users under 16 and only to certain social-media platforms. Adults are unaffected unless they are incorrectly flagged as underage, in which case an appeal process exists through the mechanisms described by the eSafety Commissioner.

Age-based restrictions are common in Australia across a range of activities — from alcohol and driving to certain film classifications — and are generally assessed on whether they respond proportionately to evidence of risk.

Whether the social-media ban is the best approach will continue to be discussed. Some experts advocate for complementary measures such as safer design standards, stronger moderation, digital-literacy education and greater support for parents. These issues are likely to shape the long-term policy conversation.

A Complex Issue That Requires Ongoing Public Engagement

As the High Court challenge to the law progresses and platforms begin adapting to its requirements, continued public discussion, informed by research and by a clear understanding of the legislation, will remain essential.
