We need to learn how to read again
On literacy and the failure of media
In an age of digitised news and politically charged AI research, media literacy is more important than ever. Yet it seems as though a generation growing reliant on AI is failing to equip itself with the proper tools to decode media.
With Donald Trump’s rise to the presidency and his front row of tech billionaires at his inauguration, it’s only natural to wonder (and perhaps fear) what the future of digital industries might look like. While Meta continues to reap its immense harvest of user data, Zuckerberg’s presidential association sows fear in many Democrats and left-leaning citizens across the world. Where does the future of media sit between capitalist cultural monsters and increasing privatisation? And perhaps more importantly: is there space for real media literacy in a world that seemingly only values advertising dollars?

The declining importance of critical thinking and accessible academia has been amplified in today’s society by the convergence of AI tools and algorithmically curated content on social media. For some, especially students, ChatGPT has become habitual. Instead of reading whole papers, students ask the AI chatbot to summarise their content. Instead of writing essays, they ask it to conjure up thesis statements and introductions. Educational establishments are left begging students to actually learn, rather than surrender their thinking to an infinite void of information.
Individually, these instances don’t seem threatening to our intellect. They seem aligned with our search for ‘a good life’, automating and simplifying tasks for us. But these small acts of de-intellectualising, in conjunction with increasingly biased journalism and a lack of accountability, accumulate to soften our brains. We want the easy way out. We don’t want to have to analyse every news source, decode buzzwords and look for biased language. For the average citizen, it’s genuinely tiring. It’s unfair that the responsibility of real, raw education is placed on the individual, and not the media. And misinformation is only amplified by that humble invention, the comment section. A playground rife with gossip and rage-bait, the comment section on political social media content opens a whole new can of worms. It has changed the way consumers think about and interact with not only the content itself, but each other. One’s opinion has the power to influence another’s regardless of what the original content was about. Hence, comment sections are a weapon in the decline of media literacy: full of false statistics, opinions stated as facts, and questionable deregulation, they are a breeding ground for fake information created by humanity itself.
The comment section of any social media platform often falls victim to bandwagon mentality. Since the introduction of algorithmically curated comments, the number of likes and replies a comment receives seems to indicate its legitimacy. If a user has liked a comment, I can assume that they agree with it, and the more likes it has, the more people have agreed with that particular statement. On platforms like Instagram, TikTok and especially YouTube, where political long-form content or reporting has a bigger reach and better credibility, likes on comments climb into the hundreds of thousands. This not only reflects what the public are thinking and agreeing with, but also implicitly tells users what to think. A politically charged comment with a hundred thousand likes is just the small push that could change someone’s mind, and because feeds are algorithmically curated, comments with more likes and engagement are more likely to be shown at the top than those without the same feedback. Combine the ability to engage with comments with the anonymity of those actions, and it’s easy to boost a comment’s visibility with few public repercussions. This weeds out less popular opinions, restricting the range of conversation the public could be having. Those who aren’t educated in media literacy won’t have the proper resources to, firstly, decode the media source, and secondly, differentiate highly liked, opinionated comments from fact. Many are unlikely to fact-check any comments or do further research into the content they’ve just consumed; rather, the average user will take things at face value and put their trust in what is essentially an unregulated public voting arena.
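To make the mechanics concrete, here is a minimal sketch in Python of the engagement-first ordering described above. The weighting, field names and numbers are invented for illustration; no platform publishes its actual ranking code, and real systems weigh far more signals. The point is structural: accuracy never enters the score.

```python
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    likes: int
    replies: int

def rank_comments(comments: list[Comment]) -> list[Comment]:
    """Toy engagement-first ranking: popularity decides visibility.

    Accuracy, sourcing and expertise never enter the score, so a
    confidently wrong comment with 100,000 likes outranks a careful
    correction with 40. The 2x weight on replies is an arbitrary
    stand-in for 'engagement'.
    """
    return sorted(comments, key=lambda c: c.likes + 2 * c.replies, reverse=True)

feed = rank_comments([
    Comment("Confident claim, no source", likes=100_000, replies=3_000),
    Comment("Careful correction with a link", likes=40, replies=2),
])
print([c.text for c in feed])  # the popular claim is shown first
```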
Comment sections are not entirely a controlling tactic performed by political news channels; in fact, I’d argue that turning off a post’s comments is a form of modern-day censorship. But the way they’re structured is telling of a design that prioritises engagement and advertising dollars over any real focus on educating and fact-checking before exposing the public to certain opinions. Facebook’s changes to its speech policies this January are a prime example of a political shift towards prioritising free expression over safety and literacy. An article titled ‘More Speech and Fewer Mistakes’ on Meta’s website outlines the introduction of a ‘Community Notes’ model and the removal of the previous third-party fact-checking program. Quoting a 2019 speech at Georgetown University by Zuckerberg himself, the article reinforces his argument that impeding free speech often “reinforces existing institutions and power structures instead of empowering people”. I’m not sure which existing institutions he’s referring to that aren’t already empowering the people who fund Facebook, but this emphasis on free speech signals a change in priority for social media. Meta goes on to refer to its previous fact-checking program as a mistake, and aims to return to its “fundamental commitment to free expression.” Its stated reason for abolishing the program was that too many biases were showing up in who and what was getting fact-checked, which apparently inhibited legitimate political content. Hence, the natural solution: remove the entire program altogether, the only thing stopping serious misinformation, rather than investigate said biases.
Facebook plans to slowly phase in its Community Notes program, first seen on X (formerly known as Twitter). As stated on Meta’s website, the program will work by “empower[ing] their community to decide when posts are potentially misleading and need more context, and people across a diverse range of perspectives decide what sort of context is helpful for other users to see.” This is intended to provide factual information about political content in a way that is somehow less biased. Instead of flagging political information on social media as potentially misleading through a real fact-checking program, Meta intends to replace it with notes written and rated by contributing users. Essentially, it’s anarchy. For the program to work properly, Community Notes needs settled agreement across a diverse range of perspectives to prevent bias. But this will never really happen. Social media discourse will never come to an amicable enough conclusion for the program to flag information. The internet is also a timeless, bottomless pit. Debates never really ‘end’ on digital platforms. Debates that have been ‘settled’ will be resuscitated by new information, facts and opinions. Debates will stack onto each other until it’s hard to even see what they were originally about. The way Community Notes is structured, from a media analysis lens, is designed to encourage so much free speech that it drowns itself out. By hiding behind a facade of free expression, it seems Meta simply intends to allow misinformation.
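For contrast, here is a drastically simplified sketch of the ‘agreement across diverse perspectives’ requirement quoted above. This is not Meta’s or X’s actual algorithm (X’s open-sourced system infers rater viewpoints statistically rather than from fixed groups); the viewpoint groups and threshold here are invented purely to show why a note stalls whenever one side never rates it helpful.

```python
# Toy model of a Community-Notes-style consensus rule: a note is only
# displayed if raters from every viewpoint group found it helpful.
# Groups and threshold are hypothetical, for illustration only.

def note_is_shown(ratings: dict[str, list[bool]], min_per_group: int = 2) -> bool:
    """Show the note only if at least `min_per_group` raters from
    *every* viewpoint group marked it helpful."""
    return all(
        sum(group_ratings) >= min_per_group
        for group_ratings in ratings.values()
    )

ratings = {
    "group_a": [True, True, True],   # one side finds the note helpful
    "group_b": [False, False],       # the other side never agrees
}
print(note_is_shown(ratings))  # False: no cross-group consensus, the note never appears
```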
This new approach to social media policy is telling of the communication standards billionaire corporations want to set as the norm. Meta wants to give the public free rein to handle themselves in presenting civilised, factual debate and information, but the problem is that the public can’t handle themselves.
It’s frankly quite scary that a major player in online political discourse and dissemination will no longer have basic fact-checking policies. It’s even scarier that the average Australian adult doesn’t possess the basic media literacy needed not only to decode misinformation, but also to differentiate between what’s AI-generated and what’s not.
A 2024 study published by Western Sydney University researched adult media literacy abilities, needs and experiences in Australia. Seventy-seven per cent of adult Australians either don’t know what media literacy means or have never heard of the term; 18 per cent are ‘somewhat familiar’ with it, and only 5 per cent are confident they know what it means. You’d assume that younger generations would be more tech-savvy, but only 27 per cent of Gen Z and Gen Y adults (ages 18 to 39) are familiar with the term. Disadvantaged groups, such as adults in low-income households, those with low levels of education, and women, are less likely to be media literate. While it’s unsurprising that there’s a strong correlation between income, education and media literacy, it’s shocking that such a large percentage of adults in Australia are completely unfamiliar with it. Media is a constant in our everyday lives. We are always consuming it: through social media, news reporting, online articles, even when we’re reading books. It’s everywhere. It’s unavoidable. It will no doubt influence our decision-making in the most minute details of our lives (that’s what advertising is for), but when it starts influencing our politics without us knowing, that’s when we really need media literacy.
Facebook, or more appropriately Meta, has failed us. The education system has failed us. Media has failed us. What is the point of media reporting when the responsibility for real news falls on the individual? And how can this responsibility be fulfilled when the average Australian is not even educated in media literacy? Media is our modern-day scripture written by a billionaire messiah, preaching egalitarianism while concealing an oligarchy. To view fact-checking as censorship is an attack on education. People cannot make informed decisions when, quite frankly, they are not informed at all. Fact-checking is not an infringement on the freedom of speech, and to frame it as such is a sign that social media platforms have failed us as guardians and enablers of expression. Any kind of media will always share a social responsibility in education, and Meta’s push for AI, not only in its policies but in its very framework, tells us it has abandoned that responsibility for advertising dollars and profit. It’s easy to see AI as a neutral, high-tech program that’s going to change the world, and while there are many benefits to integrating AI into everyday life, it’s anything but neutral. Behind its robotic facade is a very human drive to create this thing, and that drive is, for the most part, a global race to become the world’s dominant manufacturer of AI. It’s like racing to the moon all over again, and Trump wants to be the man waving the American flag.
It’s bad enough that false information and biased reporting are inflamed by algorithmically curated, and now unchecked, content in comment sections. But now that this is combined with low media literacy rates and the option of asking ChatGPT instead of doing critical research of one’s own, it’s more important than ever to ensure that our existing and incoming generations are equipped with media literacy tools. Critical education is at stake, and people need to be more informed now than ever before, especially when political campaigns are increasingly conducted on social media. Nothing is more powerful than information, and every member of society should have the right and opportunity to be educated and aware. Increasing media literacy nationally in Australia, let alone globally, would prove difficult given the sheer rate at which technology is progressing. But as educational institutions re-assess their approaches to learning with AI, there’s hope that media literacy will make a comeback.
The monsters of the future look like bad AI art and asking ChatGPT what five times five is. Let’s reclaim our future and tame those monsters. Let’s tend to our cultural anxieties before they grow out of our control.


