A recent study by Western Sydney University, Adult Media Literacy in 2024, revealed worryingly low levels of media literacy among Australians, particularly concerning given the deepfake capabilities posed by newer AI technologies.
This deficiency poses an IT security risk, given that human error remains the leading cause of security breaches. As disinformation and deepfakes become increasingly sophisticated, the need for a cohesive national response is more urgent than ever, the report noted.
Because AI can produce highly convincing disinformation, the risk of human error is magnified. People who are not media literate are more likely to fall prey to such schemes, potentially compromising sensitive information or systems.
The growing threat of disinformation and deepfakes
While AI offers undeniable benefits in the generation and distribution of information, it also presents new challenges, including disinformation and deepfakes that require high levels of media literacy across the nation to mitigate.
Tanya Notley, an associate professor at Western Sydney University who was involved in the Adult Media Literacy report, explained that AI introduces particular complexities to media literacy.
“It’s really just getting harder and harder to identify where AI has been used,” she told TechRepublic.
To overcome these challenges, individuals must understand how to verify the information they see and how to tell the difference between a quality source and one likely to post deepfakes.
Unfortunately, about 1 in 3 Australians (34%) report having “low confidence” in their media literacy. Education plays a part: just 1 in 4 (25%) Australians with a low level of education reported having confidence in verifying information they find online.
Why media literacy matters to cyber security
The connection between media literacy and cyber security might not be immediately apparent, but it is critical. Recent research from Proofpoint found that 74% of CISOs consider human error to be the “most significant” vulnerability in organisations.
Low media literacy exacerbates this problem. When individuals cannot effectively assess the credibility of information, they become more susceptible to common cyber security threats, including phishing scams, social engineering, and other forms of manipulation that lead directly to security breaches.
An already infamous example of this occurred in May, when cybercriminals successfully used a deepfake to impersonate the CFO of the engineering firm Arup, convincing an employee to transfer $25 million to a series of Hong Kong bank accounts.
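Automated checks can complement, though never replace, that human judgment. As a purely illustrative sketch (the trusted-domain list and similarity threshold below are invented for the example), a simple phishing-triage heuristic might flag lookalike domains like this:

```python
# Toy lookalike-domain check of the kind phishing-triage tools use.
# The trusted-domain list and 0.8 threshold are assumptions for
# illustration only, not a production configuration.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = ["paypal.com", "microsoft.com", "arup.com"]

def lookalike_score(domain: str) -> float:
    """Return the similarity (0..1) to the closest trusted domain."""
    return max(
        SequenceMatcher(None, domain.lower(), trusted).ratio()
        for trusted in TRUSTED_DOMAINS
    )

def is_suspicious(domain: str, threshold: float = 0.8) -> bool:
    """Flag domains that look almost, but not exactly, like a trusted one."""
    if domain.lower() in TRUSTED_DOMAINS:
        return False  # an exact match is the genuine domain
    return lookalike_score(domain) >= threshold

print(is_suspicious("paypa1.com"))   # near-match to paypal.com
print(is_suspicious("example.org"))  # unrelated domain
```

Heuristics like this catch only the crudest impersonation attempts; deepfaked voices and video, as in the Arup case, still come down to whether the person on the receiving end thinks to question what they are seeing.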
The role of media literacy in national security
As Notley pointed out, improving media literacy is not just a matter of education. It is a national security imperative, particularly in Australia, a nation where there is already a cyber security skills shortage.
“Focusing on one thing, which many people have, such as regulation, is inadequate,” she said. “We actually have to have a multi-pronged approach, and media literacy does a lot of different things. One of which is to increase people’s knowledge about how generative AI is being used and how to think critically and ask questions about that.”
According to Notley, this multi-pronged approach should include:
- Media literacy education: Educational institutions and community organisations should implement robust media literacy programs that equip individuals with the skills to critically evaluate digital content. This education should cover not only traditional media but also the nuances of AI-generated content.
- Regulation and policy: Governments must develop and enforce regulations that hold digital platforms accountable for the content they host. This includes mandating transparency about AI-generated content and ensuring that platforms take proactive measures to prevent the spread of disinformation.
- Public awareness campaigns: National campaigns are needed to raise awareness about the risks associated with low media literacy and the importance of being critical consumers of information. These campaigns should be designed to reach all demographics, including those who are less likely to be digitally literate.
- Industry collaboration: The IT industry plays a crucial role in enhancing media literacy. By partnering with organisations such as the Australian Media Literacy Alliance, tech companies can contribute to the development of tools and resources that help users identify and resist disinformation.
- Training and education: Just as first aid and workplace safety drills are considered essential, with regular updates to keep employees and the broader organisation in compliance, media literacy should become a mandatory part of employee training and be regularly updated as the landscape changes.
How the IT industry can support media literacy
The IT industry has a unique responsibility to treat media literacy as a core component of cybersecurity. By developing tools that can detect and flag AI-generated content, tech companies can help users navigate the digital landscape more safely.
And as the Proofpoint research noted, CISOs, while concerned about the risk of human error, are also bullish on the ability of AI-powered solutions and other technologies to mitigate human-centric risks, suggesting that technology may be the solution to the problem that technology creates.
However, it is also important to build a culture without blame. One of the biggest reasons human error is such a risk is that people often feel afraid to speak up for fear of punishment, or even of losing their jobs.
Ultimately, one of the strongest defences we have against misinformation is the free and confident exchange of information, so the CISO and IT team should actively encourage people to speak up, flag content that concerns them, and, if they are worried they have fallen for a deepfake, report it right away.