Unit 2 Portfolio


What role do algorithms play in shaping public narratives (in journalism)?

Within our contemporary media landscape, algorithms have become pivotal in shaping public narratives. These complex computational processes determine the selection, prioritization, and dissemination of news content, which in turn significantly influences public perception and discourse. As gatekeepers of information, algorithms curate personalized news feeds, but this often leads to the creation of echo chambers that reinforce existing beliefs and limit exposure to diverse viewpoints. This phenomenon raises critical questions about the role of technology in democracy and the free flow of information.

The integration of artificial intelligence in journalism has transformed traditional newsrooms, automating tasks such as data analysis, content generation, and audience engagement. AI-driven algorithms can analyze vast datasets to identify patterns and trends, enabling journalists to craft stories that resonate with their audience. However, this reliance (and often overreliance) on algorithms also poses ethical challenges, including bias in data selection and the potential for misinformation. AI-generated content can blur the line between authentic journalism and fabricated narratives, undermining public trust in media institutions.


A recent development that highlights the influence of algorithms in journalism is the Los Angeles Times' introduction of an AI-generated rating system called ‘Insights’. This feature assesses op-eds and provides alternative political viewpoints, labeling content as Left, Center Left, Center, Center Right, or Right. The initiative aims to expose readers to a variety of perspectives, but because it operates independently of human journalists, it raises concerns about accuracy and transparency: journalists at the paper have expressed apprehension that such automation could further weaken public trust in the media, and have suggested that funding should support traditional journalism instead.


A YouTube video by Big Think called ‘How news feed algorithms supercharge confirmation bias’ further elucidates the profound impact of algorithms on public narratives. It illustrates how algorithms, initially designed to solve complex mathematical problems and to pinpoint minuscule analytical details that would take humans days to notice, have evolved to influence various aspects of daily life, including the consumption of news. The video emphasizes that algorithms can create ‘filter bubbles’, whereby individuals are predominantly exposed to information aligning with their pre-existing beliefs, thereby narrowing their perspective and understanding of broader societal issues.
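
To make the video’s point concrete, here is a deliberately toy sketch in Python of how engagement-based recommendation can collapse into a filter bubble. The article list, topics, and recommend function are all invented for illustration; no real platform works this simply, but the feedback loop is the same in spirit.

```python
# Toy filter bubble: the feed serves whatever topic the user clicked most,
# and each click makes the next recommendation even more of the same.
from collections import Counter

articles = [
    {"id": 1, "topic": "politics-left"},
    {"id": 2, "topic": "politics-right"},
    {"id": 3, "topic": "science"},
    {"id": 4, "topic": "sports"},
]

def recommend(history, candidates):
    """Serve the candidate whose topic the user has engaged with most."""
    return max(candidates, key=lambda a: history[a["topic"]])

history = Counter({"politics-left": 1})  # one early click seeds the loop
for _ in range(5):
    pick = recommend(history, articles)
    history[pick["topic"]] += 1  # engagement reinforces the same topic
    print(pick["topic"])         # prints "politics-left" all five times
```

A single prior click is enough: the feed serves that topic, the click count grows, and every subsequent recommendation repeats it.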


Additionally, the video highlights that algorithms prioritize content based on engagement metrics, often favoring sensational or emotionally charged stories over substantive journalism. This is largely a product of social media platforms, which have conditioned viewers to consume short videos that rely on a bold hook to hold their attention. This prioritization can lead to the amplification of misinformation and the marginalization of critical (but less sensational) news, distorting public perception and discourse. The reliance on algorithms for news dissemination also raises concerns about accountability: the decision-making processes of these algorithms are often opaque, making it challenging to identify and correct biases or errors.


Another crucial factor in algorithmic influence on journalism is the monetization model that most digital news platforms employ. Many news organizations rely on ad-driven revenue models, which prioritize engagement metrics such as likes, clicks, shares, and comments. This economic structure incentivizes algorithms to promote content that maximizes user interaction, often at the expense of journalistic integrity. Algorithm-driven content curation tends to favor emotionally provocative headlines, fueling a rise in clickbait-style reporting. This selection process subtly reshapes public narratives by favoring content that evokes strong emotions (outrage, shock, incredulity) over in-depth, unbiased investigative journalism.
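
The incentive structure described above can be made concrete with a minimal sketch. The metric weights and story numbers below are assumptions invented for illustration (real ranking systems are proprietary and far more complex), but the structural point holds: when a story’s score is a weighted sum of likes, clicks, shares, and comments, the provocative headline wins by construction.

```python
# Minimal engagement-weighted ranker. Weights and data are hypothetical.
WEIGHTS = {"clicks": 1.0, "likes": 2.0, "shares": 4.0, "comments": 3.0}

stories = [
    {"headline": "In-depth report on municipal budget reform",
     "clicks": 900, "likes": 40, "shares": 10, "comments": 15},
    {"headline": "You won't BELIEVE what this politician said",
     "clicks": 3000, "likes": 500, "shares": 700, "comments": 450},
]

def engagement_score(story):
    """Sum each interaction count multiplied by its assumed weight."""
    return sum(story[metric] * w for metric, w in WEIGHTS.items())

# Ranking purely by engagement puts the provocative headline on top,
# regardless of journalistic merit.
for story in sorted(stories, key=engagement_score, reverse=True):
    print(round(engagement_score(story)), story["headline"])
```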


The economic pressures of algorithmic curation are particularly evident in the rise of ‘churnalism’: a practice where journalists rapidly produce articles with minimal original reporting, often based on trending topics identified by AI-driven analytics tools. A 2017 article in Digital Journalism, hosted on Taylor & Francis Online, illustrates how algorithms are not just tools for news distribution; they also actively influence editorial decision-making. Churnalism ‘enables the recycling and repurposing of news like never before, through the aggregation of information driven by algorithms; and models that can capture previously out-of-reach data…’. This suggests that the shaping of a story begins long before the public is even able to access any content. Journalists are becoming increasingly reliant on dashboards that display real-time analytics, showing which stories are performing ‘well’ in terms of engagement. Although this data can be valuable, it also pressures journalists to prioritize virality over depth, potentially diluting the quality of journalism overall.


In addition to shaping what news gets amplified, algorithms also affect whose voices get heard in public discourse. An article by Media Helping Media discusses the different kinds of algorithmic bias found in news production, from selection bias to representation bias to amplification bias. Through these biases, marginalized communities experience uneven news dissemination: issues affecting underrepresented groups tend to receive less algorithmic visibility than mainstream topics that generate higher engagement. This occurs because algorithms optimize for audience retention, often sidelining nuanced or more complex discussions in favor of more universally appealing content. Social justice movements like Black Lives Matter and #MeToo have gained traction partly due to algorithmic amplification, but the same algorithms that boost visibility for viral moments may also suppress ongoing, less sensational discussions about systemic inequalities.

Furthermore, algorithms contribute to the erosion of traditional journalistic gatekeeping. Historically, news editors have acted as curators of public discourse, deciding which stories deserved to grace the front page. Today, this role has largely been outsourced to algorithmic ranking systems that operate without ethical judgement. While this decentralization of power can democratize information access, it also introduces new vulnerabilities. Algorithmic decision-making lacks the human capacity for contextual understanding, leading to unintended consequences such as the amplification of conspiracy theories. The rapid spread of misinformation during events like the COVID-19 pandemic underscores this risk: platforms such as Facebook and X struggled to contain false narratives, demonstrating the challenge of relying on automated moderation without robust human oversight and strict regulation.

Even beyond their role in shaping public narratives, algorithms fundamentally alter the political economy of journalism. As news consumption shifts from traditional media outlets to digital platforms, these spaces gain immense control over which stories become visible. Facebook, Google, and X operate as primary gatekeepers, leveraging sophisticated ranking algorithms to dictate the news cycle. This shift has led to what some media scholars describe as the ‘platformization of journalism’: the increasing dependence of news organizations on tech giants for distribution and revenue.


The ramifications of this shift are profound. News outlets must now optimize content to align with platform-specific algorithms, resulting in the widespread use of SEO (search engine optimization) tactics and headline engineering to boost visibility. A 2023 report by the Reuters Institute for the Study of Journalism found that over 60% of major news publishers now tailor headlines, thumbnails, and metadata based on algorithmic performance analytics rather than editorial judgement. This creates a feedback loop in which news content is molded not by journalistic value but by engagement-driven algorithmic incentives. A stark example of this phenomenon is the 2018 Facebook News Feed algorithm update, which prioritized content from ‘friends and family’ over news organizations. This decision significantly impacted digital media, causing a decline in referral traffic to many journalism websites, while sensational, emotionally charged content flourished. The same algorithmic biases that encourage clickbait journalism also amplify divisive political content, distorting public perception of issues and exacerbating ideological polarization.
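
The feedback loop of headline engineering can likewise be sketched in a few lines. The headline variants and click counts here are hypothetical, not any publisher’s actual tooling, but they show how picking whichever variant clicks best quietly substitutes engagement data for editorial judgement.

```python
# Sketch of headline A/B selection by click-through rate (CTR).
# Variants and numbers are invented for illustration.
variants = {
    "Council passes budget after six months of negotiation": (10_000, 150),
    "Council's SHOCKING budget decision stuns residents": (10_000, 620),
}

def ctr(stats):
    """Click-through rate = clicks / impressions."""
    impressions, clicks = stats
    return clicks / impressions

# Serve both variants, measure, keep whichever clicks best.
winner = max(variants, key=lambda headline: ctr(variants[headline]))
print(f"Published: {winner} (CTR {ctr(variants[winner]):.1%})")
```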


Another overlooked consequence of algorithmic news curation is censorship: certain stories, viewpoints, or entire media outlets can be suppressed through opaque filtering mechanisms. Unlike traditional editorial processes, which are transparent and subject to professional rules and standards, algorithmic filtering lacks accountability. This issue is often referred to as the ‘invisible editor’ phenomenon, where AI-driven moderation determines the boundaries of public discourse without transparency. The 2024 Digital News Report by Reuters found that ‘transparency is considered most important amongst those who already trust the news (84%), but much less for those who are generally distrustful (68%).’ Additionally, many readers across the globe state that they are uncomfortable with AI producing news: in the UK, 63% say they dislike the notion, while only 10% of readers are comfortable with the premise.





[Figure: Public attitudes towards the use of AI in journalism]
Source: Reuters Institute for the Study of Journalism, Digital News Report 2024. Available at: https://reutersinstitute.politics.ox.ac.uk/digital-news-report/2024/public-attitudes-towards-use-ai-and-journalism
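
The ‘invisible editor’ phenomenon described above can also be illustrated with a toy example. The blocklist and stories below are invented; the point is that when filtering criteria are hidden, suppressed stories simply vanish, leaving no audit trail for readers or journalists to inspect.

```python
# Toy "invisible editor": an opaque filter silently drops stories that
# match a hidden blocklist. All data here is hypothetical.
HIDDEN_BLOCKLIST = {"protest", "leak"}  # unpublished, unaccountable criteria

stories = [
    "Local bakery wins national award",
    "Documents leak reveals budget shortfall",
    "Thousands join protest over housing policy",
]

visible = [s for s in stories
           if not any(term in s.lower() for term in HIDDEN_BLOCKLIST)]
print(visible)  # the two filtered stories vanish with no explanation
```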


In addition to curating news, algorithms are increasingly being used to produce journalism itself. News organizations such as Reuters and The Associated Press have integrated automated content generation systems, allowing AI to write financial reports, pop culture and sports summaries, and even breaking news alerts. These articles can appear indistinguishable from those written by humans, which has raised many concerns about journalistic authenticity and editorial oversight. At the same time, AI struggles to capture context, tone, and cultural sensitivities: features integral to any piece of journalistic writing. Oversimplifying complex stories also means that nuance and detail may be left out, depriving readers of holistic perspectives on certain topics. Additionally, because AI models are trained on existing datasets, they may inherit biases from past reporting, reinforcing historical inequalities in media representation.
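
Much of this automated output has historically been produced by template-driven generation, in which software fills slots in a pre-written sentence with structured data. The sketch below is a crude, hypothetical version of that technique (the template, company, and figures are all invented); it also hints at why such systems struggle with context, since they can only report whatever fits the slots.

```python
# Crude sketch of template-driven story generation, of the kind used for
# automated earnings recaps. Template and figures are invented.
TEMPLATE = (
    "{company} reported {direction} of {change:.1f}% in quarterly revenue, "
    "posting ${revenue}M against analyst expectations of ${expected}M."
)

def earnings_brief(company, revenue, prior, expected):
    """Fill the template from structured financial data."""
    change = (revenue - prior) / prior * 100
    direction = "growth" if change >= 0 else "a decline"
    return TEMPLATE.format(company=company, direction=direction,
                           change=abs(change), revenue=revenue,
                           expected=expected)

print(earnings_brief("ExampleCorp", revenue=412.0, prior=389.0,
                     expected=405.0))
```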


Amid all of this, the challenge remains: can algorithms ever truly be impartial, or are they inherently shaped by the biases of their designers and data inputs? These questions are at the core of numerous debates surrounding algorithmic governance. One proposed solution is the implementation of algorithmic audits, in which independent researchers assess the fairness and accuracy of news recommendation systems. Media organizations can also actively work to reinvent algorithmic news distribution in ways that prioritize editorial integrity. Potential solutions include public service algorithms (non-commercial algorithms that highlight fact-based journalism over engagement metrics), hybrid approaches that combine algorithmic efficiency with human editorial oversight, and movements towards transparency and public accountability, meaning that platforms should disclose how their recommendation algorithms function and allow independent audits to assess bias.
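
One simple form an algorithmic audit can take is measuring how recommended exposure is distributed across political-leaning labels. The sketch below uses invented labels, a made-up recommendation sample, and an arbitrary 40% threshold purely for illustration; real audits rely on far larger samples and formal fairness criteria.

```python
# Minimal audit sketch: exposure share per political-leaning label
# in a sample of recommendations. All data here is hypothetical.
from collections import Counter

recommended = ["Left", "Center", "Right", "Left", "Left",
               "Center Left", "Left", "Center", "Left", "Center Right"]

def exposure_shares(labels):
    """Fraction of recommendations carrying each label."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.items()}

shares = exposure_shares(recommended)
for label, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    flag = "  <- disproportionate?" if share > 0.4 else ""
    print(f"{label:>13}: {share:.0%}{flag}")
```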


Algorithms play a profound role in shaping public narratives in journalism, acting as both gatekeepers and amplifiers of information. Although they offer efficiency and personalization, their influence also raises ethical and structural concerns, from reinforcing echo chambers to prioritizing engagement over accuracy and in-depth journalism. As digital media continues to evolve, the challenge for journalists and policymakers alike is to strike a balance between algorithmic efficiency and editorial integrity. Transparency, accountability, and diversity in algorithmic decision-making will be essential to ensuring that public discourse remains both democratic and dynamic.


Within journalism, algorithms have the potential to significantly impact tomorrow’s workforce, today’s youth, both positively and negatively. To collect primary data on how young adults feel about algorithmic bias in journalism, I created a survey through which I could sample people’s opinions, available here. My findings confirmed much of what my chosen sources suggested: a large number of respondents stated that they do not trust social media platforms to provide accurate and balanced news, and 76% believed that news platforms should be required to disclose how their algorithms rank and filter news.




[Figure: Survey results on young adults’ attitudes towards algorithmic bias in journalism]
Source: Google Forms.


Algorithms make it possible to streamline newsroom operations and automate tasks such as data analysis and content curation and generation, allowing journalists to focus on more creative, engaging, and investigative work. This can lead to increased productivity, efficiency, and the ability to reach a broader audience. However, the negative impact lies in our overreliance on algorithms, which may push sensationalism and engagement over journalistic integrity. This shift could undermine the quality of reporting, and may even produce a workforce more focused on clicks than critical thinking. Furthermore, the displacement of human journalists by AI poses risks of job loss and the erosion of traditional journalistic skills, potentially reducing diversity in newsrooms and consequently narrowing public discourse.




Sources


  1. Los Angeles Times. ‘AI-Generated Rating System ‘Insights’ Analyzes Political Alignment of Op-Eds’. Los Angeles Times, https://www.latimes.com/insights
  2. Big Think. ‘How News Feed Algorithms Supercharge Confirmation Bias’. YouTube, uploaded by Big Think, https://www.youtube.com/watch?v=prx9bxzns3g
  3. Johnston, J., & Forde, S. (2017). ‘Churnalism: Revised and revisited’. Digital Journalism, 5(8), 943–946. https://doi.org/10.1080/21670811.2017.1355026
  4. Media Helping Media. ‘Dealing With Algorithmic Bias in News’. Media Helping Media, https://mediahelpingmedia.org/advanced/dealing-with-algorithmic-bias-in-news/
  5. Reuters Institute for the Study of Journalism. ‘Digital News Report 2023’. Reuters Institute, University of Oxford, https://reutersinstitute.politics.ox.ac.uk/digital-news-report/2023
  6. Reuters Institute for the Study of Journalism. ‘Digital News Report 2024’. Reuters Institute, University of Oxford, https://reutersinstitute.politics.ox.ac.uk/digital-news-report/2024/dnr-executive-summary

Comments

  1. Ariya, I really liked your research topic. I think it's great that you chose something personal to you, which allowed you to provide a unique perspective. I also like that you included the survey and results you personally sent out; that is unique and made your research feel validated and credible. I think you have a clear, conversational tone that is hard for a lot of people to achieve. The project is very nicely done, good job! - Kayla Hennessey

  2. I think the research you did for this topic is really insightful and presented in a way that explains it well. I liked the graphs you put in your post further expanding upon your topic. I think algorithms are a great thing, but I agree with your concern when AI is tasked with judging things, such as the rating system. It's interesting to see the bias the algorithms exhibit and how that can impact smaller, marginalized writers. I wonder if it is based on the programmers, and if their bias is what causes the algorithms to behave that way. - Kate Kaplan

