
Our Thoughts Ahead of Washington Cyber Summit

17 Sept 2024

By Thomas Barton, Founder and CEO of Polis Analysis. Thomas will be representing Polis Analysis at the Aspen Cyber Summit in Washington, D.C. on 18 September to discuss the threat disinformation poses to transatlantic security.


This Wednesday I will have the opportunity to represent Polis Analysis at the Aspen Cyber Summit in Washington, D.C. There, I will look to build upon productive conversations about how to tackle the dangers posed by disinformation.


With a global electoral supercycle underway, 2024 has proven a pivotal year in the bid to tackle a host of online dangers intended to covertly shape opinion around the world. Record-low trust in key public institutions and legacy media has led individuals increasingly to seek alternative sources of information online. Whilst freedom of choice must be celebrated, this trend has also opened wider avenues for bad actors to intentionally spread false or misleading content in pursuit of ideological or political goals.


This has only been encouraged by the emergence of fast-developing technologies such as artificial intelligence (AI). Generative AI models have lowered the barrier to successfully disseminating false or misleading content across online communities, whilst commercial Large Language Models (LLMs) continue to produce answers based not on the truth value of content but on its frequency within their training data.


NATO describes cyber activity as an “important field” of information warfare, one whose potency has been significantly enhanced by the birth of the internet and which is proving increasingly influential in the digital age. The organisation cites Russia and China, amongst other state and non-state actors, as purveyors of hybrid strategies intended to “blur the lines between war and peace,” destabilising societies from the inside by sowing doubt within target populations.


Disinformation is also recognized by the US as a common tactic used by malefactors in Foreign Malign Influence Operations (FMIO) intended to shape domestic policy, decision-making, and discourse. Only two weeks ago, the Department of Justice and the Department of the Treasury announced domain seizures and sanctions, respectively, against Russian media and disinformation groups for their attempts to covertly influence November’s US Presidential Election.


Yet, as noted in my reflections following my visit to NATO’s Youth Summit in Miami earlier this year, the UK lags behind the precedent set by its transatlantic allies in fully recognizing the growing use of disinformation as a cyber tool intended to undermine state security and societal cohesion.


Whilst the Online Safety Act 2023 (OSA) was an applaudable first step towards combating digital dangers, the OSA does not do enough to address disinformation, as Polis Analysis argued during the legislation’s evidence submission process. As Parliament’s Defending Democracy Inquiry found, UK law remains riddled with loopholes that allow an “ordinary person” lacking “technological skill” to quickly spread convincing disinformation, which social media companies are currently under no obligation to address or remove. Our vulnerability was clear to see during this summer’s Southport riots, where disinformation played a crucial role in inciting violence across the UK.


There is no silver bullet that will permanently resolve the threat of disinformation – despite its recent prominence, the intentional spread of false or misleading content is by no means a new phenomenon. Multiple studies have highlighted a range of natural susceptibilities within human psychology, trust in media cannot be restored overnight, and the underlying technologies will only grow more capable and powerful with time. Yet, as I will seek to emphasise during my visit to Washington, important steps can be taken towards a holistic approach that mitigates disinformation’s effects.


Multiple studies highlight media and digital literacy as effective means of reducing susceptibility to disinformation, empowering individuals to discern truth from falsehood independently without impeding free speech or demanding a return to reliance on low-trust legacy media. Moreover, whilst AI has certainly amplified disinformation’s danger, the same technology holds the potential to address many of the digital threats we currently face. AI-powered tools are already being used by organisations such as AllSides, FullFact, NewsGuard, and GroundNews to tackle veiled bias and fake news, whilst a study published only last week in the journal Science found that conversations with an AI system named “DebunkBot” reduced participants’ belief in conspiracy theories by roughly 20%.


Effective tools are readily available to help protect the world against disinformation, but as the UK has experienced firsthand, meaningful success will take a concerted, whole-of-society effort across all sectors. As Polis Analysis prepares to advocate for these methods in England’s school curriculum review later this year, I look forward to using this invaluable opportunity at the Aspen Cyber Summit to further discuss and develop these ideas with experts in the sector.




