
‘OK, boomers’: The potential challenges and risks of imposing a social media minimum age requirement on under-16s

Amidst a flurry of legislation passed by the Senate on the final sitting day of the Federal Parliament for 2024, Australia has become the first jurisdiction in the world to ban access to social media for children and teenagers under the age of 16.

The new minimum age requirement will not take effect for up to 12 months, but has, predictably, attracted global media attention. It has also sparked criticism from social media companies such as Meta, TikTok, Snap Inc. and X (formerly Twitter); the social media industry body Digital Industry Group Inc. (DIGI); the Australian Human Rights Commission; and a host of child safety experts and advocates, with the Australian Child Rights Taskforce – the peak body for child rights in Australia – detailing its concerns in an open letter to the Prime Minister signed by over 100 academics, 20 leading international experts and 20 Australian civil society organisations.

The legislation is a genuine world-first, although other jurisdictions have introduced or propose to introduce controls on children’s access to social media. For instance, France passed a law in 2023 requiring platforms to restrict access for children under 15 without parental consent and has called on fellow European Union member states to follow suit, while China has imposed strict limits on screentime for minors including restricting teenage users of Douyin (the Chinese domestic equivalent of TikTok) to 40 minutes’ use per day. Several states in the United States have also introduced age restrictions or parental consent requirements, although a number of these laws have been ruled unconstitutional (including in Arkansas and Utah) or are under challenge (such as in Florida). However, the Australian legislation imposes a higher age restriction than all other comparable jurisdictions, will not be grandfathered to permit users with existing accounts to retain access, and contains no exceptions for parental consent.

This article considers the reasons advanced in favour of the minimum age requirement, before turning to the content of the legislation and an exploration of which social media platforms are likely to be affected (and which are not). We also examine some of the potential challenges posed by the new minimum age requirement.

Why this, why now?

The harms that social media can pose, particularly to children and teenagers, are well-documented. On 18 November 2024, shortly before the new legislation was introduced, the Joint Select Committee on Social Media and Australian Society released its final report, Social media: the good, the bad and the ugly. Chapter 3 of the report identifies numerous risks and harms associated with social media use, including:

  • exposure to or engagement with harmful content, such as violent material, sexual content including child sexual abuse and exploitation material, content depicting terrorist acts or crimes, hate speech, discrimination, false information (including mis- and disinformation) and material featuring self-harm or suicide;
  • promotion of unhealthy eating, alcohol, tobacco and drug use, and gambling;
  • users being treated in a nasty or hurtful way online, including cyberbullying;
  • children being targeted by or experiencing harmful contact, such as online stalking, sexual harassment or other online predatory behaviour;
  • privacy violations such as leaked images; and
  • social media-based scams.

The report also noted the various drivers of harm on social media platforms, including platforms’ business models (such as advertising-based revenue models) amplifying the risks and increasing the likelihood of harm. Whilst the report concluded that further research was required into the relationship between social media use and adverse mental health consequences, the Committee also heard considerable evidence about specific harms said to be amplified by social media, such as eating disorders, cyberbullying, self-harm and suicide.

Minister for Communications Michelle Rowland, in delivering her second reading speech introducing the new law, articulated the reasons for its introduction as follows:

‘Social media as we commonly and collectively understand it, has become a ubiquitous and a normal part of life for all of us, and many young Australians take part in this activity without adverse consequence. It can be a source of entertainment, education and connection with the world—and each other. Those things, particularly for young Australians, can be beneficial.

But, for too many young Australians, social media can be harmful. Almost two-thirds of 14- to 17-year-old Australians have viewed extremely harmful content online, including drug abuse, suicide or self-harm, as well as violent material. A quarter have been exposed to content promoting unsafe eating habits.

Research conducted by eSafety found that 95 per cent of Australian caregivers find online safety to be one of their toughest parenting challenges. …

Social media has a social responsibility. We know they can—and should—do better to address harms on their platforms. That’s why we’re making big changes to hold platforms to account for user safety.’

The measure also has widespread support from the Australian public, indicating a greater acceptance of government intervention to protect health and safety (particularly for children), likely reinforced by distrust of social media companies.

Recent polling by YouGov indicated that 77% of Australians were in favour of restricting social media access for children under 16. YouGov polling also indicated overwhelming support for broader measures to hold social media companies to account, with 87% of Australians in favour of stronger penalties for companies that fail to comply with Australian laws and 75% of Australians supporting the planned introduction of a ‘digital duty of care’ to require social media companies to take proactive measures to protect users from harmful online content.

The ban may be a rare example of law reform by which the Government can give Australians something they overwhelmingly want, scoring a political victory in the process, while receiving support from the Coalition Opposition.

 

What does the legislation say?

The Online Safety Amendment (Social Media Minimum Age) Bill 2024 (Cth) (Bill) amends the Online Safety Act 2021 (Cth) (Act) to insert a new Part 4A, ‘Social media minimum age’. New section 63B specifies that the object of the Part is ‘to reduce the risk of harm to age-restricted users from certain kinds of social media platforms’.

Although it is colloquially known as a ‘ban’ and is commonly referred to as such (including in this article), new section 63D is not worded as a prohibition on users under 16 accessing social media platforms.

Rather, it takes the form of a civil penalty provision which states that ‘A provider of an age-restricted social media platform must take reasonable steps to prevent age-restricted users having accounts with the age-restricted social media platform’ (the minimum age requirement). Pursuant to an amendment to the definitions set out in section 5 of the Act, an ‘age-restricted user’ means ‘an Australian child who has not reached 16 years’.

Section 63D prescribes a maximum civil penalty of 30,000 penalty units (equivalent to $9.9 million at the time of writing) for providers that fail to take reasonable steps to comply with the minimum age requirement. This increases to 150,000 penalty units if the provider is a body corporate, currently equivalent to a maximum pecuniary penalty of $49.5 million.
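By way of illustration only, the dollar figures above follow from multiplying the number of penalty units by the value of a Commonwealth penalty unit, assumed in the short sketch below to be $330 (the value implied by the amounts quoted; penalty unit values are indexed and may change):

  # Illustrative arithmetic only: converting penalty units into dollar maximums.
  # The $330 penalty unit value is an assumption implied by the figures quoted
  # above; the Commonwealth penalty unit is indexed and may change over time.
  PENALTY_UNIT_VALUE = 330  # dollars per Commonwealth penalty unit (assumed)

  provider_maximum = 30_000 * PENALTY_UNIT_VALUE         # $9,900,000
  body_corporate_maximum = 150_000 * PENALTY_UNIT_VALUE  # $49,500,000

  print(f"${provider_maximum:,} / ${body_corporate_maximum:,}")  # $9,900,000 / $49,500,000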

New sections 63DA and 63DB, which were inserted following amendments by the Senate, specify that providers of age-restricted social media platforms (AR SMPs) must not collect ‘government-issued identification material’ (including identification documents or a digital ID), or information of a kind specified in the legislative rules, for the purposes of complying with section 63D. A maximum civil penalty of 30,000 penalty units also applies for failing to comply with sections 63DA or 63DB, which again increases to 150,000 penalty units for bodies corporate.

However, section 63DB(2) provides that the prohibition on collecting government-issued identification material does not apply if the provider offers alternative means of age assurance which are reasonable in the circumstances. The Supplementary Explanatory Memorandum to the Bill clarifies that the effect of this provision is that AR SMPs may only use government-issued identification if alternative reasonable age assurance methods are provided, which ‘may include user interaction or facial age estimations’.

If the eSafety Commissioner is satisfied that an AR SMP provider has contravened sections 63D (the minimum age requirement), 63DA(1) or 63DB(1), the Commissioner may prepare a statement to that effect and provide a copy to the provider, and if appropriate, publish the statement on the Commissioner’s website: new section 63J.

Other amendments introduced by the Bill include:

  • granting the eSafety Commissioner the additional functions of formulating and promoting written guidelines for the taking of ‘reasonable steps’ to prevent age-restricted users having accounts with AR SMPs (new sections 27(1)(qa) and (qb));
  • introducing privacy law protections in respect of personal information (as defined in the Privacy Act 1988 (Cth)) held by entities that is collected for the purposes of complying with section 63D (new section 63F);
  • conferring information-gathering powers on the eSafety Commissioner where the Commissioner believes on reasonable grounds that an AR SMP provider holds information relevant to their compliance with sections 63D, 63DA and 63DB (new sections 63G and 63H);
  • empowering the Information Commissioner to prepare a similar statement to that contemplated by section 63J, if the Information Commissioner is satisfied that a provider has used, disclosed or failed to destroy information in a manner that is taken to be an interference with privacy (new section 63K);
  • increasing the civil penalty for failure to comply with industry codes and standards, set out in sections 143 and 146 of the Act, from 500 to 30,000 penalty units (new sections 143(2) and 146(1)); and
  • requiring an independent review of the operation of Part 4A to be conducted within 2 years, which must include consideration of the adequacy of the privacy protections and any other matters determined by the Minister, and the results of that review to be set out in a written report tabled in both Houses of Parliament (new section 239B).

 

When will the minimum age requirement come into force?

Pursuant to new section 63E, the minimum age requirement takes effect on a day specified by the Minister, which must not be later than 12 months after the date of commencement.

Significantly, the original text of the draft Bill stated that the specified effective date ‘must not be earlier than 12 months after the day this section commences’ (emphasis added); in other words, it imposed a minimum timeframe of at least 12 months before the minimum age requirement could take effect. The Explanatory Memorandum to the Bill specified that the purpose of the deferred commencement was to provide the eSafety Commissioner and the industry with sufficient time to develop and implement appropriate systems, as well as to enable the government’s ongoing age assurance trial to ‘inform guidance to industry on what age assurance technologies would be considered ‘reasonable’ and consistent with the minimum age obligation’.

However, following a late-night amendment by the Senate the night before the Bill was passed, the words ‘earlier than’ were changed to ‘later than’, effectively converting the minimum 12-month lead time into a maximum deadline: the minimum age requirement must now come into force no later than 12 months after commencement.

 

Which social media platforms are covered?

The Bill does not name specific platforms that will, or will not, be subject to the minimum age requirement. Rather, new section 63C(1) defines an AR SMP as follows:

For the purposes of this Act, age-restricted social media platform means:

(a) an electronic service that satisfies the following conditions:

(i) the sole purpose, or a significant purpose, of the service is to enable online social interaction between 2 or more end-users;

(ii) the service allows end-users to link to, or interact with, some or all of the other end-users;

(iii) the service allows end-users to post material on the service;

(iv) such other conditions (if any) as are set out in the legislative rules; or

(b) an electronic service specified in the legislative rules;

but does not include a service mentioned in subsection (6).

‘Online social interaction’, for the purposes of section 63C(1)(a)(i), includes ‘online interaction that enables end-users to share material for social purposes’: new section 63C(2). However, it does not include (for example) online business interaction, and an AR SMP may be, but is not necessarily, a ‘social media service’ as defined in section 13 of the Act.

The Minister may only make legislative rules specifying an electronic service pursuant to new section 63C(1)(b) if they are satisfied that it is reasonably necessary to do so to minimise harm to age-restricted users: new section 63C(4). Before making such rules, the Minister must seek and have regard to advice from the eSafety Commissioner, and may also seek advice from any other authorities or agencies they consider relevant: new section 63C(5).

As such, it remains unclear precisely which social media platforms will be subject to the minimum age requirement. However, it is generally anticipated that at a minimum, platforms such as Facebook, Instagram, TikTok, Reddit and X (formerly Twitter) will be captured. In her second reading speech, Minister Rowland confirmed that Snapchat would also likely be captured, despite earlier reports that it could be exempt on the basis that it fell within the definition of a ‘messaging service’ rather than a ‘social media platform’. Whether the Minister will exercise her power under new section 63C to specify additional platforms subject to the minimum age requirement remains to be seen.

 

Which platforms are not covered?

New section 63C(6) provides that an electronic service is excluded from the definition of AR SMP if:

(a) none of the material on the service is accessible to, or delivered to, one or more end-users in Australia; or

(b) the service is specified in the legislative rules.

Similarly, before making legislative rules specifying what is not an AR SMP, the Minister must seek and have regard to advice from the eSafety Commissioner, and may also consult other relevant authorities or agencies: new section 63C(7).

In the Explanatory Memorandum, the government indicated that it proposes to exclude the following from the definition of AR SMP in the first instance:

  • messaging apps;
  • online gaming services; and
  • services with the ‘primary purpose of supporting the health and education of end-users’.

Whilst the government is yet to specify the precise platforms that would fall within these categories, it has been reported that Messenger Kids, WhatsApp and similar messaging apps will be excluded. Minister Rowland stated in her second reading speech that the inclusion of messaging apps ‘could have wider consequences, such as making communications within families harder’, and would be inconsistent with the government’s stated aim of protecting young people and ‘enabling [them] to actively participate in the digital environment where they have grown up’. Minister Rowland also observed that whilst messaging apps are not completely free of risk, users ‘do not face the same algorithmic curation of content and psychological manipulation to encourage near endless engagement’ as is the case with traditional social media platforms.

The exclusion of online gaming services has drawn some criticism. For instance, the online gaming platform Roblox contains an open chat room function, which until recently could be used by users under the age of 13 to speak to other users. Whilst this function has now been changed for safety reasons, children under 13 can still use the chat function to speak to other users within a mini-game or experience hosted on the platform. Online safety educator Kirra Pendergast, who delivers online safety presentations to primary school students, noted that many students raised their hands when asked whether they had ever seen ‘adult content’ on Roblox that made them feel uncomfortable, and considers that Roblox should be categorised as a social media platform rather than a game.

It is not yet clear how the government will determine what constitutes a service with the ‘primary purpose of supporting the health and education of end-users’, or the precise services that will be exempted from the minimum age requirement on this basis. However, in her second reading speech, Minister Rowland observed that the category would likely include ‘services like ReachOut’s PeerChat, Kids Helpline ‘MyCircle’, Google Classroom, YouTube, and other apps that can be shown to function like social media in their interactivity but operate with a significant purpose to enable young people to get the education and health support they need’.

Advocacy group 36 Months has called for YouTube to be exempted on the basis that the intention of the minimum age requirement was to protect children from the harms of social media, not restrict them from accessing online educational tools and entertainment. It has been reported that numerous stakeholders called for an exemption for YouTube because of its educational value. However, it has not yet been confirmed whether YouTube as a whole will be exempt, or whether the exemption will be more restricted (for instance, whether children under 16 will be able to access the ‘YouTube Kids’ service but not the broader platform).

Further, the new section 63D imposes an obligation on providers to take reasonable steps to prevent age-restricted users from ‘having accounts’ with AR SMPs. As such, the minimum age requirement will not prevent users from accessing content on platforms such as YouTube without holding an account with those platforms. The Explanatory Memorandum confirms expressly that the minimum age requirement ‘would not affect user access to ‘logged-out’ versions of a social media platform’, such as users viewing YouTube content without signing into an account or Facebook users accessing some content (such as a business’s landing page) without logging in.

 

Potential implementation challenges

Much remains to be seen about how the minimum age requirement will operate in practice, particularly given the truncated consultation period between the introduction of the Bill and its passage. The ‘rushed’ introduction of the Bill and ‘extremely short’ consultation period have been among the main concerns raised by critics and supporters alike, including in almost all submissions made to the Senate inquiry into the Bill. As Australia’s Human Rights Commissioner Lorraine Finlay and National Children’s Commissioner Anne Hollonds wrote in a joint opinion piece:

The proposed law… is being pushed through Parliament without adequate consultation. It was only released last Thursday [21 November 2024] and raises more questions than it answers. …

There is a lot of detail that is yet to be worked out, which is why the law proposes a 12-month implementation period before the ban itself commences. Despite this, the legislation is being rushed through the Parliament this week, without taking the time to get the details right. Or even knowing how the ban will work in practice.

The Australian public was given just one day to make submissions to the parliamentary committee considering the proposed laws. The committee today [Monday 25 November 2024] held just three hours of public hearings and is required to report tomorrow [Tuesday 26 November 2024]. This is not meaningful consultation and does not allow for a thorough consideration of the views of experts, parents or children.

Putting aside concerns about the legislative process by which the Bill was introduced and passed, there are numerous potential implementation challenges that now confront the government.

Categorisation of platforms

It remains unclear which particular social media platforms will, or will not, be captured by the minimum age requirement. These details will be left to the legislative rules, which are yet to be drafted and are only required to be made if the Minister considers it reasonably necessary to do so.

Whilst there are advantages to the definition of ‘AR SMP’ (or exclusions from that definition) remaining intentionally broad, more nuanced consideration may be required in some circumstances. For example, whilst the government’s current intention appears to be to exclude online gaming services, the Roblox example suggests that further consideration may be required as to whether a gaming platform which has chat functionality, or otherwise facilitates ‘online social interaction between 2 or more end-users’ or allows end-users to ‘link to, or interact with’ other end-users, should be classified as an AR SMP for the purposes of the minimum age requirement.

Age verification

It also remains to be seen how age verification will work in practice. A consortium led by the Age Check Certification Scheme will conduct an age assurance technology trial next year with a view to reporting by mid-2025, to determine the effectiveness of available technologies as potential options to implement the minimum age requirement. The trial will reportedly run from January to March 2025, involve over 1,000 randomly chosen Australians of varying ages and cultural backgrounds, and assess three main forms of age assurance technology:

  • Age verification: use of a person’s identity credentials, such as an uploaded identity document (for example, a driver’s licence or passport) or a digital ID, to verify their stated date of birth;
  • Age estimation: analysing a person’s biological features, such as a real-time photo or video of their face, to estimate their age; and
  • Age inference: use of known details of a person’s life circumstances to infer whether they are an adult (for instance, their marital status, bank records or whether they hold a credit card or mortgage).

All three methods raise concerns regarding reliability. For instance, whilst age verification based on identity credentials is likely to be the most reliable of the three options, it would require some form of proof that the credential is in fact owned by the person supplying it; and in light of new section 63DB, ‘government-issued identification material’ could only be used for age verification if reasonable alternative age assurance methods were also offered.

It has been reported that biometric methods, such as facial age estimation, are the next most likely option after providing ID. However, age estimation through facial recognition technology is not perfect; it is reportedly only accurate to within 3.7 years of a person’s true age and, significantly for the purposes of the minimum age requirement, is less accurate for tweens and teenagers than for adults over 20.

Moreover, facial recognition technology has been found to be less accurate for some population groups than others, including higher error rates for female faces than for male faces, and for darker-skinned faces than for Caucasian faces. For instance, it has been reported that commercially available facial recognition systems have an error rate of ‘0.8% of light-skinned men, compared to nearly 35% for dark-skinned women’, whilst the highly regarded system Yoti (used by Meta) has an average margin of error of 2 years for people aged 13 to 16.

Age inference technologies, meanwhile, are also imperfect. For example, age inference based on a person’s credit card details could inadvertently exclude any user over the age of 16 who does not own a credit card, whilst many teenagers and young adults who are over 16 but still live with their parents could be excluded by a requirement that they provide proof of mortgage or rental documents. Marital status is also, self-evidently, an imperfect method of age verification in a world in which many people over the age of 16 are not married.
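To illustrate how these methods might interact in practice, the sketch below shows one purely hypothetical way a provider could layer non-ID methods ahead of ID-based verification, broadly mirroring the structure of new section 63DB(2). Every name, threshold and margin of error in the sketch is an assumption for illustration only; it does not describe any real platform’s system or any method mandated by the legislation.

  # Hypothetical sketch only: layering age assurance methods so that ID-based
  # verification is never the sole option offered (cf. new section 63DB(2)).
  # All names, thresholds and margins below are illustrative assumptions.
  from dataclasses import dataclass

  @dataclass
  class AgeSignal:
      method: str             # "inference", "estimation" or "verification"
      estimated_age: float    # best estimate of the user's age, in years
      margin_of_error: float  # reported +/- error for this method, in years

  MINIMUM_AGE = 16

  def is_confidently_over_minimum(signal: AgeSignal) -> bool:
      # Treat a user as over 16 only if the estimate still clears the threshold
      # after subtracting the method's margin of error; otherwise escalate.
      return signal.estimated_age - signal.margin_of_error >= MINIMUM_AGE

  def assess_user(signals: list[AgeSignal]) -> str:
      # Prefer non-ID methods first (inference, then estimation); ID-based
      # verification is only reached after the alternatives have been offered.
      for preferred in ("inference", "estimation", "verification"):
          for signal in signals:
              if signal.method == preferred and is_confidently_over_minimum(signal):
                  return "allow_account"
      return "decline_or_escalate"

  # Example: facial estimation suggests roughly 17, but with a 2-year margin of
  # error the check cannot confidently clear 16, so the account is not allowed.
  print(assess_user([AgeSignal("estimation", 17.0, 2.0)]))  # decline_or_escalate

On these assumptions, a 2-year margin of error (comparable to the Yoti figure quoted above) would mean that many genuine 16- and 17-year-olds fail the estimation check and are pushed to a further method, which illustrates why the accuracy figures discussed above matter so much in practice.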

Privacy risks

There are considerable privacy concerns associated with the provision and retention of data for age verification purposes, particularly if that data is held by the social media companies that provide AR SMPs. However, those concerns will likely remain even if the data is not held by the providers; for instance, any entity responsible for storing the data (such as the government or a third-party provider) would likely become a prime target for a cyber-security attack.

Whilst the Supplementary Explanatory Memorandum reveals that new sections 63DA and 63DB were included to protect users’ privacy, it has been reported that their inclusion ‘may well have triggered the countdown to a somewhat unsettling game of Would You Rather for about 20 million Australian social media users: hand over your ID or your facial data if you want to use the platforms.’ The age assurance trial will include evaluating the privacy implications of each potential age verification method; however, it remains to be seen exactly how users’ privacy will be protected, particularly in an age of ever-increasing risks from cyber-attacks and data breaches.

Enforcement challenges

Enforcement of the minimum age requirement will be a critical challenge. In addition to the practical difficulties associated with age verification, it remains unclear how compliance will be enforced against AR SMPs in practice, particularly in circumstances where many of the major social media platforms that would be affected are domiciled overseas and have previously shown little regard for the application of Australian laws to their activities. (See, for example, Meta’s refusal to renew commercial deals with Australian news media companies under the News Media Bargaining Code, and X Corp’s non-compliance with a take-down notice and Federal Court orders to hide footage of a stabbing attack on a bishop in Sydney in April 2024, in proceedings commenced and ultimately discontinued by the eSafety Commissioner.)

Further, the obligation imposed by new section 63D merely requires AR SMPs to take ‘reasonable steps’ to prevent users under the age of 16 from holding accounts. It will be for the eSafety Commissioner to formulate guidelines for social media companies as to what constitutes ‘reasonable steps’ – with new subsection 27(6) making clear that any such guidelines will not have the status of legislative instruments – and, in all likelihood, the meaning of ‘reasonable steps’ will be left to the courts to decide in subsequent enforcement proceedings.

Potential unconstitutionality

Some commentators have also contended that the minimum age requirement could breach the implied freedom of political communication and therefore be unconstitutional, noting that social media is ‘a crucial source of political information and communication for children’ frequently used by school-age political activists such as Greta Thunberg and Anjali Sharma.

Professor Sarah Joseph, writing in The Conversation, observed that the minimum age requirement may be vulnerable to challenge on the basis that it is unworkable or easy to thwart and is therefore not suitable for achieving its purpose, or is not necessary for achieving its purpose given there are other ways of achieving that purpose that impose a lesser burden on political communication.

With respect, we question the implication that the legislation is unconstitutional. Despite the concerns noted above, the policy goals of the legislation are of such public importance that it would likely be characterised as reasonably appropriate and adapted to the achievement of those goals, and would therefore be likely to satisfy the High Court’s proportionality analysis.

By reason of the Senate’s amendments to new section 63E, the implementation period has been changed from a minimum of 12 months (as initially contemplated) to a maximum of 12 months before the minimum age requirement takes effect. This will increase the pressure on the government and industry to address these challenges within the next year, before the requirement comes into force.

 

Other potential drawbacks

Critics of the minimum age requirement have also raised concerns that it lacks an evidentiary basis, is unlikely to work, and will have wide-ranging implications for all social media users in Australia. The open letter signed by the Australian Child Rights Taskforce detailed numerous concerns, including that:

  • a ‘ban’ would be ‘too blunt an instrument to address risks effectively’;
  • the requirement will affect children’s right to access and participation, contrary to the United Nations Committee on the Rights of the Child recommendation that national policies be aimed at allowing children to benefit from engaging with the digital environment while simultaneously ensuring safe access;
  • platforms will be disincentivised from implementing child safety features for any younger users that are able to circumvent ineffective age assurance and ‘slip onto’ the platform (effectively making those platforms even more unsafe);
  • platforms that are exempt from the minimum age requirement also contain safety risks, such as harmful content promoted by algorithms, and those products would not be improved by banning other platforms; and
  • there should be a greater focus on systemic regulation, and supporting and empowering young people to navigate the online world as they transition to adulthood.

Further, although the Explanatory Memorandum and second reading speech state explicitly that the purpose of the minimum age requirement is to protect young people rather than to punish or isolate them, child safety and mental health advocates have raised concerns that it will prevent young people from accessing support networks, particularly vulnerable young people such as migrant teenagers and LGBTQIA+ youths who have found a sense of community or safe spaces online. As Commissioners Finlay and Hollonds of the Australian Human Rights Commission observed:

‘For children in marginalised, remote, or vulnerable situations, social media offers a lifeline. It connects children with disability to peers, resources, and communities they may not otherwise access. It helps LGBTQIA+ youth find acceptance and solidarity. It can improve access to healthcare, particularly for children seeking mental health support. These digital spaces can educate, inform, and remind kids who feel isolated—whether physically or emotionally—that they are not alone.

Children and young people have rights to access information and to freely express themselves as they develop and form their identities. A social media ban directly threatens these rights.’

In what might be described colloquially as the ‘OK, boomer’ effect, many children and teenagers are extremely savvy in the online environment and may well be able to circumvent the minimum age requirement in any event, particularly given the added motivation of gaining access to prohibited platforms or regaining the access that will be taken away from them once the requirement takes effect. It is estimated that huge numbers of children under 13 have social media accounts despite many platforms (such as Facebook, TikTok and Snapchat) requiring users to be at least 13 years old to sign up; for instance, the Norwegian government estimates that half of the nation’s nine-year-olds, 58% of 10-year-olds and 72% of 11-year-olds currently use some form of social media.

 

Where to from here?

Prime Minister Anthony Albanese has accepted that the implementation of the minimum age requirement will not be perfect, ‘just like the alcohol ban for [children] under 18 doesn’t mean that someone under 18 never has access’, but maintains that introducing it was ‘the right thing to do’.

So much may be accepted, particularly given the well-documented harms of social media and the widespread community support for a ‘ban’ on social media for under 16s. However, the potential challenges with implementing the minimum age requirement are many, ranging from the question of which platforms will and will not be included to practical considerations such as the age verification methods to be used, the mitigation of privacy risks, and the challenges of enforcement. Whether the minimum age requirement will truly achieve its desired effect therefore remains an open question, to which there will be no quick or easy answer.

Aparna Jayasekera

Senior Associate

Disclaimer: The information published in this article is of a general nature and should not be construed as legal advice. Whilst we aim to provide timely, relevant and accurate information, the law may change and circumstances may differ. You should not therefore act in reliance on it without first obtaining specific legal advice.
