
Identifying and Protecting Against Misinformation: An Update on Recently Proposed Legislative Measures and the Importance of Media Literacy

This article was first published in the February 2025 edition of the Law Society of Western Australia’s Brief digital journal.

 

The internet is rife with misinformation and disinformation. The World Economic Forum’s 2024 Global Risks Report identified misinformation and disinformation, which it defined as “persistent false information (deliberate or otherwise) widely spread through media networks, shifting public opinion in a significant way towards distrust in facts and authority”, as the most significant short-term global risk.1

Whether it is simply inaccurate stories or anecdotes that go viral, or false information deliberately spread for financial or political gain, critical thinking and media literacy are essential skills. Now that it is easier than ever to generate content using AI (including realistic photographs and videos), it is important to think critically about what we read and share online.

Sometimes the inaccurate ‘facts’ and information we consume can be relatively innocuous – for example, at the time of writing this article, a map was circulating on social media purporting to show ‘the products most Googled in every country’. The results were somewhat eyebrow-raising – ‘IVF’ in Australia, ‘vasectomy’ in New Zealand, ‘patent’ in the USA, and ‘watermelon’ in Japan, to name a few2 – all pretty unlikely. Digging deeper, the results did not match data previously released by Google.3 It turned out the map dated from at least 2015 and actually showed “the most-Googled for object in each country, using the autocomplete formula of ‘How much does * cost in [x country].’”4 – a very different proposition.

This sort of ‘misinformation’ is unlikely to impact anyone’s life or livelihood. Of more concern are health misinformation and financial scams, which can cause real harm.

At the far end of the spectrum, the spread of misinformation (or the deliberate spread of disinformation) can have far-reaching and serious consequences, as occurred in the UK in 2024. There, violent anti-Muslim riots were sparked when far-right groups spread false information about the identity of the suspect alleged to have murdered 3 children and injured 8 others in an horrific knife attack in Southport.5 A false name, and claims that the attacker was an asylum seeker, spread like wildfire on social media. This was partly thanks to algorithms on social media platforms which promote ‘engaging’ content – typically, content that is more provocative.

In the authors’ view, the spread of misinformation and disinformation can pose a real threat to democracy, as occurred when supporters of Donald Trump stormed the US Capitol on 6 January 2021 following his claims that the Presidential election had been “rigged” and “stolen”. But in enacting measures to limit the harm caused, the legislature must walk a fine line between protecting democracy and undermining it by censoring public discourse and freedom of speech. In this arena, regulation must be accompanied by education if it is to be effective.

Combatting Misinformation and Disinformation: the Federal Government’s recent failed attempt

On 12 September 2024 the Federal Government introduced legislation by which it sought to improve safeguards used by digital platforms to protect Australians from the spread of misinformation and disinformation on social media: the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024 (Misinformation Bill).

At a high level, the definition of ‘misinformation’ under the proposed legislation would have captured information that was “reasonably verifiable as false, misleading or deceptive” and “reasonably likely to cause or contribute to serious harm”;6 the definition of ‘disinformation’ would have captured the dissemination of misinformation where there was an intent to mislead or where ‘inauthentic behaviour’ (such as the use of automated systems) was involved.7

Although the Misinformation Bill passed the House of Representatives, and despite attempting to address public concerns regarding government censorship and the potential impact on freedom of expression, it was met with strong opposition in the Senate Environment and Communications Legislation Committee (Committee). On 24 November 2024, just over 2 months after its introduction, it was withdrawn and abandoned.8

Intention of the Bill: to target digital communications platforms

The Misinformation Bill was intended to regulate digital communications platforms, rather than individual end users, largely by adding a new Schedule 9 to the Broadcasting Services Act 1992 (Cth). It had 3 key objectives:9

  • to empower the Australian Communications and Media Authority (ACMA) to require digital communications platform providers to take steps to manage the risk that misinformation and disinformation on digital communications platforms pose in Australia;
  • to increase transparency regarding the way in which digital communications platform providers manage misinformation and disinformation, for example by requiring them to publish their policies, risk assessments and complaint mechanisms for dealing with misinformation and disinformation; and
  • to empower users of digital communications platforms to identify and respond to misinformation and disinformation on digital communications platforms.

This three-pronged approach would essentially have required digital communications platforms such as Meta and X to address, at a systemic level, the environment in which misinformation thrives on their platforms. The Misinformation Bill did not, for example, create an offence of disseminating disinformation, or an avenue by which someone affected by the spread of misinformation could seek a remedy from those responsible. (That would have been left to the tort of defamation and, to the extent the misinformation fell within their ambit, to other regulatory authorities, as discussed below.)

The powers proposed to be afforded to ACMA under the Misinformation Bill were quite extraordinary: not only would it have had the power to implement a standard over an industry-drafted code, but it could also have issued remedial directions if satisfied a code or standard had been contravened. Contravention of a misinformation standard, or of ACMA’s remedial directions to comply with one, risked the imposition of a civil penalty by the Federal Court, including significant fines of up to $7.825m or 5% of annual turnover (whichever was greater).10

The Misinformation Bill would also have given ACMA broad powers to require production of information or documents.11

Criticisms and an alternative proposal

The Misinformation Bill focused on changing the social media landscape by requiring digital communications platform providers to police and take action on their own platforms. However, a cynic might have expected the implementation to go one of two ways: either the bar for what would be considered misinformation and disinformation would have been set so high that there was no discernible change, or, if the civil penalty provisions struck enough fear, social media platforms would have erred too far on the side of caution and over-censored public discourse. There was also concern that platforms would limit publications based on subjective interpretations of what constitutes “misinformation”, and that the definitions of misinformation and disinformation were too vague.

Despite the Committee’s criticisms of the Bill, it reported “broad support for greater efforts to improve digital literacy, including resources to help end-users to identify reliable and authoritative information sources, and to critically evaluate information they consume online”.12

In contrast to the Bill’s proposal to assign responsibilities to digital platforms for implementing and publishing media literacy plans, the Committee was in favour of a “nationally consistent and supported approach to digital and media literacy”, involving the collaboration of key stakeholders such as the government, research and educational groups, media literacy experts, news media and the digital platform industry. The Committee proposed that those efforts could be informed by the work of other regulators, including the Office of the eSafety Commissioner, and from best practices in other jurisdictions.13

Duty of Care Regime

Despite the setback, the Government still appears focused on increasing efforts to regulate internet safety on digital platforms via legislation. The day after the Government scrapped the Misinformation Bill, the Online Safety (Digital Duty of Care) Bill 2024 (Online Safety Bill) was introduced to the House of Representatives.

Amongst other things, the Online Safety Bill seeks to impose “a singular and overarching duty of care onto large providers for the wellbeing of their Australian users”14 and would require large providers15 to conduct risk assessments and prepare mitigation plans to prevent reasonably foreseeable harm from occurring in the first place. The matters a provider must have regard to in conducting a risk assessment include, inter alia, the dissemination of online scams and negative effects on electoral processes.16

A legislated Digital Duty of Care is not a new concept in this arena: it has been implemented in the EU and UK, and was recently recommended by former ACCC Deputy Chair Delia Rickard in her report on the independent review of the Online Safety Act, which the Government received in early November 2024. It was subsequently recommended again by the Joint Select Committee on Social Media and Australian Society in its final report, ‘Social media: the good, the bad and the ugly’.17

The Online Safety Bill, which is also modelled on the EU and UK regimes, proposes to modernise the outdated Online Safety Act by expanding the powers available to the eSafety Commissioner to provide for greater transparency and regulation of digital platforms’ ‘algorithms’. The proposed legislation mandates increased transparency about digital platforms’ algorithm management and harm mitigation, and imposes stronger penalties where platforms breach the duty of care owed.

Whilst the proposed legislation appears to be a step in the right direction, it has faced several criticisms, including:

  1. concerns that, by imposing positive duties on digital platforms, those entities may over-censor content to avoid penalties, leading to the suppression of legitimate content and discussion;
  2. criticism of the definition of ‘harmful content’, the ambiguity of which may lead to inconsistent enforcement or compliance requirements; and
  3. the fact that the legislation is intended to target only large platforms, meaning vulnerable Australians may fall victim to disinformation or other harmful content published on platforms it does not regulate.

The criticisms underscore the challenges facing policymakers when attempting to legislate to improve user safety whilst upholding fundamental rights and freedom of speech.

It is apparent that any way forward needs to include improved and adaptive media literacy programs to ensure Australians are able to make informed decisions about their social media use and what they see online, including how to discern ‘real’ images and videos from those created by AI.

Celeb-bait Cryptocurrency Schemes

The National Anti-Scam Centre reported in December 2024 that Australians had lost more money to social media scams than by any other method.18 In 2024, there were 15,575 reports of social media scams, with $64.2 million lost.19 Social media platforms have been criticised for doing little to crack down on scammers using their platforms, with older Australians more likely to fall victim to scammers on Facebook.

As AI advances, we have also seen a rise in fraudsters using AI-generated “deepfakes” of high-profile celebrities and business figures to market financial products and scam consumers on social media. “Deepfakes” are AI-generated or manipulated videos, audio or photos, often of celebrities. They are almost impossible to distinguish from the real thing and are the leading edge of online disinformation.

In March 2022, the Australian Competition and Consumer Commission commenced proceedings against Meta for declaratory relief, orders for pecuniary penalties, injunctions and ancillary relief for Meta’s alleged contraventions of various provisions of the Australian Consumer Law (Sch 2 to the Competition and Consumer Act 2010 (Cth)) and the Australian Securities and Investments Commission Act 2001 (Cth) for aiding and abetting or otherwise being directly or indirectly knowingly concerned with the publication on Facebook of various “fake celebrity endorsement ads” which were connected with money making schemes.20

The ACCC’s claim relates to various fake endorsement ads published on Facebook since October 2017 using the likenesses of public figures including David Koch, Dick Smith, Andrew Forrest, Waleed Aly, Celeste Barber, Chris Hemsworth, Harry Triguboff, Karl Stefanovic, Mark Ferguson, Mel Gibson, Mike Amor, Nicole Kidman, Scott Pape, Eddie McGuire, Frank Lowy, Russell Crowe and James Packer (to name only a few). The ACCC alleges that Meta’s algorithm and advertising business model are designed to target people who are likely to be misled into believing the advertised scheme is associated with the celebrities featured in the fake ad.21

If the ACCC is successful, it will set a precedent for increasing Meta’s accountability for the ads it allows on its platforms. However, Meta recently announced new rules, set to commence in February 2025, which are intended to apply to advertisers that promote financial products and services (such as insurance products, mortgages, short- and long-term loans, investment opportunities and credit card applications) to Australian consumers on its platforms.22 Under the new rules, advertisers must verify their beneficiary and payer information, including their Australian Financial Services Licence (AFSL) number (unless otherwise exempted), before their ads can be published on Instagram or Facebook.

Legislative Reform

Meta’s new rules will likely come into effect ahead of the passage of the Scams Prevention Framework Bill 2024, which was introduced to Parliament by the Federal Government on 7 November 2024. If passed, the Scams Prevention Framework Bill will introduce major reforms to the telecommunications, social media and banking industries in order to ‘prevent, combat and respond to scams’.23

What Now?

In addition to the proposed reforms described above, there are already regulators with the power to directly target individuals disseminating misinformation where it falls within their authority. For example:

  1. The advertisement of therapeutic goods is heavily regulated,24 and the Therapeutic Goods Administration has the power to issue infringement notices and directions in respect of non-compliant advertising of therapeutic goods;25 and
  2. ASIC has recently taken action against ‘finfluencers’26 – social media influencers who discuss financial products and services – and issued guidelines to assist them to operate within the law.27

Separately, where a person is defamed by misinformation or disinformation spread on social media, they may be able to take their own action under the Defamation Act. However, this can be at great cost to that individual.

While the Misinformation Bill had its flaws, it would, overall, have been a positive step towards addressing the harmful impact of misinformation and disinformation online. It is clear that, despite this setback, efforts are still being made at a legislative level to introduce laws that should, in part, address the spread of misinformation and disinformation online.

The various Bills currently before Parliament are likely to be further developed and enacted in 2025. The profession should monitor these bills as they may provide remedies to users who are adversely affected by misinformation and disinformation.

However, the wheel of progress is slow and – inevitably – reactive.

It is important that users have the tools they need to think critically about what they read online, recognise misinformation when they see it, and help to fight its spread.

How to spot misinformation

The Therapeutic Goods Administration has published guidelines to help consumers spot what it calls ‘dodgy health product ads’,28 which can be summarised as follows:

  • Misleading claims: does the advertisement claim the medicine is 100% effective, harmless or free of side effects? If something seems too good to be true, it probably is. (Needless to say, this applies to get-rich-quick schemes as well.);
  • Check the label: approved medicines in Australia have an AUST L, AUST L(A) or AUST R number on the label;
  • Serious conditions: does the advertisement refer to cancer or other serious conditions? The TGA must give permission for an advertisement for a health product to refer to cancer, mental illness, sexually transmitted infections or COVID-19; and
  • Don’t believe everything said by social media influencers: influencers, especially those who are not health professionals, should not be relied on for health advice and may be promoting a product for financial gain.

In respect of financial information, you can protect yourself by checking the ASIC Connect Professional Register29 to find out whether a finfluencer you follow holds an Australian Financial Services Licence.

More generally:

  1. Stop before you share – read the article or watch the video before you share it; don’t just rely on the headline or title to convey the full story;
  2. Pay attention to how the story makes you feel. Has it made you angry? If it is a political story, does it confirm your low opinion of the ‘other side’? Is it likely to have been written to drive engagement?;
  3. Check the source – is it credible? Is it satire? Many people have unwittingly shared stories from satirical news sites assuming they are true;
  4. Look for other reputable sources reporting on the same story or claim – do they corroborate the claims? Has it been reviewed by fact-checking organisations?;
  5. Be sceptical of photo and video evidence – you can conduct a reverse image search to look for the source of the image. Is it actually what the post or article claims it to be?; and
  6. Identify your own biases. Do you read stories that challenge your own way of thinking, or skip over them?

 

Footnotes

  1. World Economic Forum, The Global Risks Report 2024 19th Edition: Insight Report (Report, January 2024), 18.
  2. ‘These Are The Most Googled Products In Every Country In One Giant Map’, The Language Nerds (Web Page) <https://thelanguagenerds.com/2023/these-are-the-most-googled-products-in-every-country-in-one-crazy-map>.
  3. Camilla Ibrahim, ‘Year in Search: Here’s what Aussies searched for in 2023’, Google Australia Blog (Blog Post, 12 December 2023) <https://blog.google/intl/en-au/products/explore-get-answers/year-in-search-2023>.
  4. Drake Baer, ‘The most Googled products in every country in one giant map’, Business Insider (online, 29 April 2015) <https://www.businessinsider.com/google-cost-searches-2015-4>.
  5. Kara Fox, ‘UK rocked by far-right riots fuelled by online disinformation about Southport stabbings’, CNN (online, 1 August 2024) <https://edition.cnn.com/2024/08/01/uk/southport-attack-disinformation-far-right-riots-intl-gbr/index.html>.
  6. Misinformation Bill, Sch 1 cl 13(1).
  7. Misinformation Bill, Sch 1 cl 13(2); cl 15.
  8. The Hon Michelle Rowland MP, ‘Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024’, (Media Release, 24 November 2024).
  9. Explanatory Memorandum, Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024 (Cth) (Explanatory Memorandum), p 2.
  10. Misinformation Bill, Sch 2 cl 20(5H).
  11. Misinformation Bill, Sch 1 cls 33 – 34.
  12. Environment and Communications Legislation Committee, Parliament of Australia, Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024 [Provisions] (Full Report, November 2024), [6.10].
  13. Environment and Communications Legislation Committee, Parliament of Australia, Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024 [Provisions] (Full Report, November 2024), [6.11] – [6.12].
  14. Explanatory Memorandum, Online Safety (Digital Duty of Care) Bill 2024, 3.
  15. Online services which have a monthly active user base of 10% of the Australian general and/or child populations: Explanatory Memorandum, Online Safety (Digital Duty of Care) Bill 2024, 3.
  16. Online Safety Bill, Sch 1 cl 28F.
  17. Joint Select Committee on Social Media and Australian Society, Parliament of Australia, Social media: the good, the bad, and the ugly: Final Report (Report, November 2024), [5.141].
  18. ‘Scam alert: Social media scams’, Scamwatch (Web Page, 9 December 2024) <https://www.scamwatch.gov.au/about-us/news-and-alerts/social-media-scams>.
  19. National Anti-Scam Centre, Australian Competition and Consumer Commission, Scam Statistics (Scamwatch Report, 6 January 2024) <https://www.nasc.gov.au/scam-statistics>.
  20. Australian Competition and Consumer Commission v Meta Platforms, Inc. (formerly Facebook, Inc.) (No 3) [2024] FCA 890, [1] – [3]; [14] – [51].
  21. Australian Competition and Consumer Commission v Meta Platforms, Inc. (formerly Facebook, Inc.) (No 3) [2024] FCA 890, [31] – [51].
  22. Josh Taylor, ‘Meta to force financial advertisers to be verified in bid to prevent celebrity scam ads targeting Australians’, The Guardian (online, 2 December 2024) <https://www.theguardian.com/technology/2024/dec/02/meta-to-force-financial-advertisers-to-be-verified-in-bid-to-prevent-celebrity-scam-ads-targeting-australians>.
  23. Explanatory Memorandum, Scams Prevention Framework Bill 2024, [1.1].
  24. Therapeutic Goods Act 1989 (Cth), Chapter 5.
  25. Therapeutic Goods Act 1989 (Cth), ss 42YK and 42DV respectively.
  26. ASIC, ‘Permanent injunctions ordered against social media finfluencer Tyson Scholz’ (Media Release, 13 April 2023) <https://asic.gov.au/about-asic/news-centre/find-a-media-release/2023-releases/23-096mr-permanent-injunctions-ordered-against-social-media-finfluencer-tyson-scholz>.
  27. ASIC, ‘Discussing financial products and services online’, Information Sheet 269 (Web Page, March 2022) <https://asic.gov.au/regulatory-resources/financial-services/giving-financial-product-advice/discussing-financial-products-and-services-online>.
  28. Therapeutic Goods Administration, ‘How to spot a dodgy health product ad’ (Web Page, 14 May 2020) <https://www.tga.gov.au/news/news/how-spot-dodgy-health-product-ad>.
  29. Available at <https://connectonline.asic.gov.au/RegistrySearch/faces/landing/ProfessionalRegisters.jspx>.

Taleesha Elder

Senior Associate

Monique Vincent

Associate

Alexandra Hughes

Associate - Corporate

Disclaimer: The information published in this article is of a general nature and should not be construed as legal advice. Whilst we aim to provide timely, relevant and accurate information, the law may change and circumstances may differ. You should not therefore act in reliance on it without first obtaining specific legal advice.
