I thank the Joint Committee on Tourism, Culture, Arts, Sport and Media for the invitation to discuss online disinformation and media literacy. Whereas a number of online harms have been specified and defined in the Irish Online Safety and Media Regulation Bill 2022, online disinformation is currently not referenced.
In May 2021, the draft UK online safety Bill provided for the establishment of an advisory committee on disinformation and misinformation. Subsequently, the House of Lords and House of Commons joint committee on the draft online safety Bill featured an extensive consideration of online disinformation covering topics ranging from vaccine hesitancy to integrity of elections. The joint report outlines that the UK Government aims to tackle the problem of disinformation through strengthened media literacy and includes a requirement for Ofcom, the UK's communications regulator, to establish an advisory committee. However, the report also notes that "the viral spread of misinformation and disinformation poses a serious threat to societies around the world" and that media literacy is not a stand-alone solution. I agree, particularly regarding the establishment of an advisory committee.
The reason I reference the UK online safety Bill is that, as a cyber behavioural scientist, I have worked closely with the UK Government for a number of years, specifically the Department for Digital, Culture, Media and Sport, DCMS, in the area of online harms. I have also contributed to symposiums and reports regarding the EU audiovisual media services directive, AVMSD.
For the purposes of informing Irish legislative or, indeed, any other initiatives to counter online harms such as online disinformation, I will highlight a number of points to the committee, the first of which is the requirement to scope the problem space. Rather than considering online harms on an ad hoc basis, I recommend a research initiative to create an Irish taxonomy of online harms. This would create context for the consideration of any specific harm such as online disinformation. I refer the committee to our research report on the protection of minors, which was commissioned by Ofcom, the UK regulator for online harms. I was academic co-lead on this research and report. Members will see in the document provided our classification of risk of online harm, which was commissioned to inform the regulation of UK video-sharing platforms, VSPs, for example, TikTok, YouTube, Instagram and so forth. Members will note in the table that online disinformation falls under a classification of manipulation.
The second point is to develop the Irish safety tech sector. I have worked closely with DCMS regarding the establishment and development of the UK safety tech sector. Safety tech providers develop technology or solutions to facilitate safer online experiences and protect users from harmful content, contact or conduct. I was one of the expert advisers to the UK Government safety tech report entitled Safer Technology, Safer Users: The UK as a World-Leader in Safety Tech. We developed an evidence-based taxonomy of technical solutions to online harms: everything from child exploitation and abuse to misinformation and disinformation, operating at multiple levels, for example, system, platform, end point and so forth. Safety tech is gaining traction worldwide. I recently presented at both UN and G7 dedicated safety tech events. Last month, we published the first research report on the emerging billion-dollar US safety tech sector.
The third area is to conceptualise a workable framework of solutions, whereby, in the case of online disinformation, the user's exposure to online harms is mitigated by social solutions, for example, media literacy, and by safety tech solutions, that is, solutions that automatically detect and disrupt false, misleading or harmful narratives, and, importantly, to do both of these things while balancing the benefits and risks of online technologies. We can discuss the diagram provided later.
In summary, any consideration of online harms in an Irish context must be informed by an evidence-based approach. My four recommendations are to scope, define and characterise the problem space; to commission research to explore the Irish safety tech sector and ecosystem; to develop a workable interconnected framework of solutions; and to establish an advisory committee on misinformation and disinformation.
Online harms have the characteristics of big data in terms of volume, velocity and variety. Tackling these issues requires an understanding of the threat landscape, clear definitions and classifications, characterisation of the scale of the problem and consideration of workable artificial intelligence, AI, and machine learning, ML, solutions. Online safety and media regulation is extremely important. However, my prediction is that regulation will not be practicable, feasible, workable or, indeed, successful if the recommendations I have outlined are overlooked. I thank members for their time.