Joint Committee on Tourism, Culture, Arts, Sport and Media debate -
Wednesday, 2 Feb 2022

Online Disinformation and Media Literacy: Discussion

This afternoon's public meeting has been convened with Professor Mary Aiken, professor of forensic cyberpsychology at the University of East London and adjunct professor at the University College Dublin, UCD, Geary Institute for Public Policy, to discuss online disinformation and media literacy. Disinformation has been described as verifiably false or misleading information created, presented and disseminated for economic gain or public deception, while media literacy has been described as the tool or skill set required to enable citizens to access, analyse, evaluate and create media content.

On behalf of the committee, I warmly welcome our witness, Professor Aiken, who is joining us in committee room 1 remotely via Microsoft Teams. She is very welcome. The format of the meeting is such that I will invite Professor Aiken to deliver an opening address that is limited to five minutes, which will be followed by questions from members of the committee. As attendees will probably be aware, the committee may publish the opening statement on the Oireachtas website following this meeting.

Before I invite Professor Aiken to deliver the opening statement, I will explain some limitations to parliamentary privilege and the practice of the House as regards references that can be made to other persons in their evidence. Our witness today is giving evidence remotely from a place outside the parliamentary precincts and, as such, may not benefit from the same level of immunity from legal proceedings as a witness who is physically present. Witnesses are again reminded of the long-standing parliamentary practice that they should not criticise or make charges against any person or entity by name or in such a way as to make him or her identifiable, or otherwise engage in speech that may be regarded as damaging to the good name of the person or entity.

I remind members again of the constitutional requirements that they must be physically present in the confines of Leinster House in order to participate in public meetings. I will not permit a member to attend where he or she is not adhering to this constitutional requirement. I ask that those who are attending online use the raise hand button when they wish to contribute. I also remind all those joining today's meeting online to ensure their mobile phones are on silent mode or switched off.

I now invite Professor Aiken to deliver her opening statement, which is limited to five minutes.

Professor Mary Aiken

I thank the Joint Committee on Tourism, Culture, Arts, Sport and Media for the invitation to discuss online disinformation and media literacy. Whereas a number of online harms have been specified and defined in the Irish Online Safety and Media Regulation Bill 2022, online disinformation is currently not referenced.

In May 2021, the draft UK online safety Bill provided for the establishment of an advisory committee on disinformation and misinformation. Subsequently, the House of Lords and House of Commons joint committee on the draft online safety Bill featured an extensive consideration of online disinformation, covering topics ranging from vaccine hesitancy to the integrity of elections. The joint report outlines that the UK Government aims to tackle the problem of disinformation through strengthened media literacy and includes a requirement for Ofcom, the UK's communications regulator, to establish an advisory committee. However, the report also notes that "the viral spread of misinformation and disinformation poses a serious threat to societies around the world" and that media literacy is not a stand-alone solution. I agree, particularly regarding the establishment of an advisory committee.

The reason I reference the UK online safety Bill is that, as a cyber behavioural scientist, I have worked closely with the UK Government for a number of years, specifically the Department for Digital, Culture, Media and Sport, DCMS, in the area of online harms. I have also contributed to symposiums and reports regarding the EU audiovisual media services directive, AVMSD.

For the purposes of informing Irish legislative or, indeed, any other initiatives to counter online harms such as online disinformation, I will highlight a number of points to the committee, the first of which is the requirement to scope the problem space. Rather than consideration of online harms on an ad hoc basis, I recommend a research initiative to create an Irish taxonomy of online harms. This would create context for the consideration of any specific harm such as online disinformation. I refer the committee to our research on the protection of minors report, which was commissioned by Ofcom, the UK regulator for online harms. I was academic co-lead on this research and report. Members will see in the document provided our classification of risk of online harm, which was commissioned to inform UK video sharing platform, VSP, regulation, for example, TikTok, YouTube, Instagram and so forth. Members will note in the table that online disinformation falls under a classification of manipulation.

The second point is to develop the Irish safety tech sector. I have worked closely with DCMS regarding the establishment and development of the UK safety tech sector. Safety tech providers develop technology or solutions to facilitate safer online experiences and protect users from harmful content, contact or conduct. I was one of the expert advisers to the UK Government safety tech report entitled Safer Technology, Safer Users: The UK as a World-Leader in Safety Tech. We developed an evidence-based taxonomy of technical solutions to online harms, covering everything from child exploitation and abuse to misinformation and disinformation, operating at multiple levels, for example, system, platform, end point and so forth. Safety tech is gaining traction worldwide. I recently presented at dedicated safety tech events at both the UN and the G7. Last month, we published the first research report on the emerging billion-dollar US safety tech sector.

The third area is to conceptualise a workable framework of solutions. There is a requirement to conceptualise a framework of solutions whereby, in the case of online disinformation, the user's exposure to online harms is mitigated by social solutions, for example, media literacy, and by safety tech solutions, that is, technologies that automatically detect and disrupt false, misleading or harmful narratives, and, importantly, to do both of these things while balancing the benefits and risks of online technologies. We can discuss the diagram provided later.

In summary, any consideration of online harms in an Irish context must be informed by an evidence-based approach. My four recommendations are: to scope, define and characterise the problem space; to commission research to explore the Irish safety tech sector and ecosystem; to develop a workable, interconnected framework of solutions; and to establish an advisory committee on misinformation and disinformation.

Online harms have the characteristics of big data in terms of volume, velocity and variety. Tackling these issues requires an understanding of the threat landscape; clear definitions, classifications and characterisation of the scale of the problem; and consideration of workable artificial intelligence, AI, and machine learning, ML, solutions. Online safety and media regulation is extremely important. However, my prediction is that regulation will not be practicable, feasible, workable or, indeed, successful if the recommendations I have outlined are overlooked. I thank members for their time.

I thank Professor Aiken. We will now move to the question and answer session with members. Each member has a five-minute slot for both questions and answers. We will start with Deputy Christopher O'Sullivan.

I thank the witness for the very insightful, thought-provoking and important opening statement. We need to listen to the recommendations that were made.

To tease this out a bit further, will Professor Aiken expand on how far we should go in terms of preventing the spread of disinformation? This has been thrown into the limelight in the past week by the fact that Neil Young and Joni Mitchell, both legends and two of my favourite artists, walked away from Spotify.

It has highlighted the fact that the spreading of disinformation is a global problem that is having a massive impact. I am wondering how far it should go. They walked away because of misinformation, disinformation and falsehoods that were being spread by a podcast. Do we have to aim to stamp that out completely and censor where at all possible, or what is the approach? That is a good example to expand on.

With regard to Facebook and social media more generally, the spreading in the last 12 months in particular of anti-vaccine information, false information about vaccinations and conspiracy theories regarding vaccinations was rife and went almost completely unchecked. There seemed to be no checks or preventative mechanisms in place to stop it. I believe many good, decent, intelligent and well-educated people were dragged down rabbit holes and brought to a place where, perhaps, they would not have been if these checks and balances had been in place. Perhaps Professor Aiken would elaborate further on that area.

There is another problem that is even more difficult to address, which is influencers. Instagram is a good example of where there are influencers with 50,000 to 100,000 followers. In their stories they are peddling misinformation regarding vaccines, to use that good example again. It appears that would be difficult to censor or to stop. How does Professor Aiken think we could go about stopping that?

Those are a few everyday examples where we see, particularly on social media, misinformation going unchecked. In the case of Joni Mitchell and Neil Young, it is the artists who are taking on the responsibility to act, as opposed to governments. What are Professor Aiken's thoughts on that?

Professor Mary Aiken

I cannot address all the issues in the couple of minutes we have, but they are important points and extremely topical. First, I wish to go on the record and say that I am not a fan of cancel culture.

I thought Professor Aiken was going to say she is not a fan of Joni Mitchell.

Professor Mary Aiken

While activism online is important, what we see is the rise of the cyber pitchfork mob, where people are effectively deplatformed without due process, a fair hearing or an evidence-based approach. We have seen the rise in this form of cancel culture because there is a paucity of legislation to address these issues. Like nature, the Internet abhors a vacuum and what happens is that people step in to self-regulate. We have seen social media platforms and user-generated content self-regulating. I believe the US Senator Richard Blumenthal recently said that the age of self-policing is over. This is the crux of the problem.

However, there is a very delicate balance to be struck between freedom of speech, freedom of thought, freedom of expression and censorship. These are very complex domains. Personally, I enjoy Joe Rogan's podcast. He brings very difficult issues to a broad population and he has an engaging style. Again, the crux of the issue is these platforms not wanting to be considered as publishers, when in fact they are, and therefore not holding themselves to the same journalistic standards that traditional print media may have. We should note that Spotify and Joe Rogan have moved very quickly to address the issue and have suggested some remedies for the future, such as following an interview with one controversial expert with a balanced or opposing view.

I am conscious of the time so I am sorry for the brief answers.

Deputy Mythen has five minutes.

First, I thank Professor Aiken for taking the time to attend this meeting. Who gets to define what is misinformation and disinformation? Professor Aiken recommended an advisory committee. What would her recommendation be for the composition of that committee? How can we guarantee its independence?

Second, does Professor Aiken think that automatic takedowns of disinformation and misinformation are best practice? How does this method work compared to advisory notes and warnings? She mentioned Spotify. On its system it states that some of the podcasts could contain misinformation and people can decide for themselves. That is how it is getting around that.

Lastly, how resilient is AI in detecting or interpreting misinformation and disinformation?

Professor Mary Aiken

Again, it is a very complex landscape. Indeed, there can be overlap between misinformation and disinformation. The Deputy also raises the valid point about who decides what is true and what is false, and at what point in time that decision can be made. Something that might have been considered disinformation or false previously may turn out to be eminently true as research evolves to support emerging phenomena. The problem with technology is that these things are happening in real time and commentary is taking place in real time. The goalposts are shifting constantly.

A specialist expert group could help advise on what could be considered misinformation and disinformation in an Irish and European context and, importantly, on how it should be approached, mitigated, triaged or subject to staged intervention. It is a complex area and it requires specific Irish consideration.

On the automatic takedowns of misinformation, how does this method compare to advisory notes and warnings?

Professor Mary Aiken

With regard to automatic takedowns, this comes back to the complexity of big data and to the points I am making about the safety tech sector and the ecosystems. In order to identify something as disinformation in a cyber context, it is necessary to have a data classifier system. Before one has the data classifier system, one must create a taxonomy. The taxonomy is the chart that was contained in my opening statement. One classifies an online harm such as disinformation. Then there is a decision tree built around that classifier which identifies, for example, specific disinformation regarding a vaccine that constitutes known untruths. That could then be tagged by a data classifier and that might lead to an automatic takedown. The problem is that to do that in an automated way we have to invest considerable resources in these types of data classifier projects. I am working on one at present for the UK Government which, to the Deputy's point, will automate these solutions. The good news is that in Ireland we have probably the leading expert in Europe, if not worldwide, in my good friend and colleague, Professor Barry O'Sullivan, so if we wanted to architect AI solutions specifically for disinformation, we have the resources available here. He is at University College Cork.

I notice that gambling was not mentioned in Professor Aiken's classification of harm online. Can she give her perspective on this as a category of harm?

Professor Mary Aiken

Is it gambling or gaming?

Professor Mary Aiken

Yes. What is mentioned under mental health and well-being is the specific area of gaming disorder. Gaming disorder is a psychological classification under consideration. Gambling would roughly fall under mental health and well-being, but gambling is specifically dealt with in the UK, and this is a UK taxonomy, under specific legal instruments.

I notice that this committee included gaming in its recommendations but it is not incorporated in the Bill as yet. Is this something the committee would envisage revisiting going forward?

Yes, we would. We discussed this thoroughly, especially in regard to underage gambling in Ireland. They are aiming at the underage. They start young and then cause all sorts of problems.

Professor Mary Aiken

Yes. I want to add one further comment in this context. One of the reasons I am stating that we need to first create an entire framework and consider every potential online harm, every classification, is that, as we then work through the legislative process, we can draw those down and start incorporating them with amendments. We cannot cover every online harm in a Bill or it would be delayed forever but what we can do is have this framework in an Irish context and start tackling each one.

There is also what we call comorbidity between harms, so there might be a crossover between, say, addictive-type behaviours and gaming, or a crossover between self-esteem issues or social isolation and gaming. Creating a sort of classification system around that, one that is automated, would be extremely useful.

I thank Professor Aiken for her presentation. I have two questions. Professor Aiken mentioned that education is not the only way we can deal with this. How would we approach this in an education setting? Would Professor Aiken envisage that it would have to be included in things like teacher training and school issues? There is no doubt but that younger and younger children are now online. How do we ensure their capacity for critical thinking in their engagement online and what sort of things do we need to look at?

Professor Aiken said that when we are looking into the scoping of the research, she recommends that an institute of research here in Ireland might look at this. Is there a particular university, technological university or institute that comes to mind that already has capacity to do this or has a research department in this area that would be best placed to look at this?

Professor Mary Aiken

I thank the Senator for the question. On the point about online harm, the classification system that we have developed for Ofcom covers the categories of sexual, aggression, manipulation, self-injurious, mental health and well-being, cognitive and moral harms. Moral is something that is specifically mentioned in the audiovisual media services directive, in that online harm should not impact the physical, mental or moral well-being of children, defined as being under the age of 18. What we see is that the spectrum of harms touches almost every specialist area of academic subject matter expertise. Is there one institute that exists in Ireland that could address all of these issues? No, there is not. This is transdisciplinary. It is a combination of behavioural sciences and technology. My background is in cyberpsychology, which is interdisciplinary. I am not sure we need to establish an entire research institute because that would be a major undertaking, but we certainly could create a research initiative and funding proposals where individual researchers could tap into their area of subject matter expertise or collaborate to address these issues.

My other question on the education part has perhaps been answered when Professor Aiken referred to it being multidisciplinary.

Professor Mary Aiken

The problem is that we have had 20 years of education and awareness raising with the Internet, and look where we are. When I say that education and awareness raising as a stand-alone entity does not work, that is based on the history. It is an important aspect. The diagram on page 5 of my presentation with regard to the user journey demonstrates that as the user navigates his or her online journey, effectively, the first interface is a framework of social solutions, which are education, awareness raising, media literacy, parental intervention and potentially parental controls, and all of the above in terms of what we have been doing to date. The newer part is that the user then moves through the taxonomy of safety tech solutions. We have had 30 or 40 years of cybersecurity, but cybersecurity protects data, systems and networks; it does not protect what it is to be human online, hence the evolution of the new safety tech sector. I am proud to say I am one of the founder members of this sector in the UK and I have been working in this area for 20 years. We now have a sector which proposes solutions at a system level, platform level and end point, which is the user device level, and automates that process. It works in tandem with the social solutions. Let me point out that media literacy is not a solution when talking to a six-year-old who is below the age of reason, as such children do not have the critical thinking skills.

Thank you. Even as I was saying it, I was wondering about children and critical thinking skills. I was worried I might trip myself into something.

Professor Mary Aiken

Older children would have developed some of these skills. When we look at the safety tech ecosystem, what is pertinent to the point the Senator raises is that the Bill on many occasions references the term “age-appropriate”. What does that actually mean? When we have classifications and we think of Piaget’s stages of cognitive development, they are all based around the real world. There is no comprehensive classification system for what I describe as cyberdevelopment, so how do we decide and enforce what is age-appropriate when we do not have the literature to scientifically create age bounds? For example, what is the best age to give a child a smartphone? Where are the guidelines? That could be part of specialist research to define in an Irish context what is age-appropriate.

I thank Professor Aiken.

I thank Professor Aiken for her contribution and appreciate her taking the time to be with us today. My colleagues have covered many of the issues. Professor Aiken might give us her perspective on the area of online satire, or what is posing as satire but is actually anti-government misinformation. I will give one example. We recently saw a Government Minister accused of trying to have a particular song banned. It was complete nonsense but, looking online, the number of people who actually thought that was a real thing was incredible. It really showed me the frightening power of social media when it can be manipulated. We also have online platforms that deliberately quote politicians out of context, and this creates the frenzy, creates the outrage and creates the pile-on. Has Professor Aiken done any research in that regard? I see some of these pages state clearly that they are satire, but what actually happens is that the link gets shared and the headline is read, but the detail is not read and the terms and conditions are not looked at. Is that an area Professor Aiken and her colleagues have studied or done research on? It is increasingly prevalent in Ireland and I think it is a threat to our democracy.

Professor Mary Aiken

Absolutely. It is interesting the Deputy mentions that in an Irish context and it is why I point to the need for research for classifications in an Irish context. As we all know on this call, we have something in Ireland that we call “slagging”, which does not really exist in other countries in the same way. Therefore, what might be considered cyberbullying or harassment in another country, actually might be just slagging in an Irish context, and there might be cues around that slagging like the use of a smiley face to show that “I am not really serious about this.” That is why it is very important that cultural context is taken into consideration.

The Deputy’s point referred to the targeting of public figures.

The UK online safety Bill at its early stages considered the targeting of public figures as a form of online harm. In an Irish context, it would be very important to think about this. When I look at how politicians are targeted on social media platforms - I am talking to a group here with a vested interest - the question that comes to mind is why any person in his or her right mind would want to go into politics in this day and age. Why would people put themselves through that, where young people can see how they and their family members can be targeted and, as per the Deputy's point, how nuggets of information can be taken and blown out of context and be very harmful and damaging? In cyber behavioural science we call this phenomenon "trolling", which is a term I am sure members of the committee have heard. There is a very interesting research paper, "Trolls Just Want to Have Fun", which found a correlation between trolling and high scores on a measure known as the Dark Tetrad, a combination of the personality traits of narcissism, sadism, psychopathy and Machiavellianism. The researchers found a relationship between those who professed they liked to troll and these Dark Tetrad traits, and concluded that trolling was a manifestation of everyday sadism. Do we want to have platforms where sadists are actively encouraged to deploy for their own amusement online and particularly target public figures? I feel sorry for public figures because they cannot really engage and answer back as that makes it much worse. It is an important area for consideration.

I am a fan of satire, particularly political satire. When it is done well, it is one of the greatest forms of comedy. We have seen brilliant legends in Ireland over the years such as the late Dermot Morgan, Mario Rosenstock and Oliver Callan do this really well. We have also seen online political satire that is tasteful, does not cross the line and is clearly satire. The dangerous development that I have witnessed in the past year or so is the arrival of online content that hides behind satire but, I would say, deliberately sets out to create misinformation under the guise of satire and ultimately is politically motivated. It is an area that deserves further scrutiny. Even in the past 24 hours, I have seen a few examples of this that are really chilling.

In my experience, some of this content is anti-women, anti-establishment and almost exclusively anti-Government. It is an area that the body politic needs to look at but from the world of academia, I would be interested in finding out if any particular studies have been done. Outside of this meeting, I would be happy to correspond with Professor Aiken and point her towards particular areas that I feel are perfect examples of what I am talking about.

Professor Mary Aiken

The Deputy makes a very good point. Parallel to that, we have also seen the rise of cyber vigilantism, where particular people are targeted by groups that come together. That can be extremely distressing for the person on the receiving end of that activity. The subtlety in terms of what could pass for satire and is intended to be damaging is a very nuanced aspect of trolling. There are ways it can be tackled. One could look at repeat offenders, how syndicated or organised the activity is and if there is political intent behind how it is manifesting. It is a very interesting, albeit very nuanced, area in terms of further exploration.

On Professor Aiken's point in regard to potential future candidates to participate in our democracy, over the past number of years, I have noticed in my engagement with potential candidates in my constituency, be that at local authority level or otherwise, that one of the big factors that comes into the conversation almost immediately is the prospect of online abuse and being an online figure of ridicule. My concern is that we will end up in a situation where the people who stand for public office will be the people who are comfortable in the environment wherein the traits of the trolls are the norm. They are not the type of people we want running a country, local authority or other elective office. That point is well made. It is one that the committee needs to take on board.

Professor Mary Aiken

I thank Deputy Griffin.

I call Deputy Munster.

I welcome Professor Aiken and I thank her for her opening statement. My colleagues have already asked some of the questions I had intended to pose. On the research and the advisory committee which Professor Aiken has recommended, how should that be configured? For example, should it be under the new media commission? I would also welcome a brief outline of how the UK Department for Digital, Culture, Media and Sport approached the development of the Government's Safer Technology, Safer Users report.

Professor Mary Aiken

On the advisory group on misinformation or disinformation, I would have to defer to this committee to decide how that might be structured. It could be part of a regulatory group going forward, but if disinformation is incorporated into the Bill sooner rather than later, you might not want to create a structure that would then have to be disbanded, or it could be that members of the commission who are knowledgeable in this area could contribute to that knowledge over time. The point about misinformation or disinformation is that it will continually evolve. We see fake news, deepfakes and so on. There is a constant evolution. Initially, there is potentially a requirement for an advisory group to consider misinformation or disinformation, which ultimately might end up being incorporated in a regulation. That is stage one. Stage two would be a longer role for that level of expertise in a regulatory context. Politicians are the architects of these solutions and they will know best how that should be done.

On the point with regard to DCMS, I was involved in the research, which undertook an analysis and survey of the UK safety tech ecosystem. It was undertaken in conjunction with a specialist company in Northern Ireland, Perspective Economics, which has economists who are very knowledgeable in this area. They did an audit of the online safety technology providers using inclusion and exclusion criteria, for example, considering a company to be a safety tech provider if it does X, Y and Z, and excluding companies that deliver some of these services but are not predominantly safety tech companies. In the report, there is a lot of detail on the methodology of how we created the taxonomy. When we did the entire audit, we then made sense of the data by creating this system, platform and end-user taxonomy, along with the information environment taxonomy. In the report I have just done in the US, we have built on that taxonomy to include another level within the taxonomy, or classification system, which is called cyber-physical. This is very interesting because it is the combination of vision detection software, for example, a camera that might detect a gun on a campus or in a school environment, with the automation of that process, whereby an alert is sent out to law enforcement to stage an intervention, for example, to mitigate active school shootings. That is called cyber-physical. We will continue to build on those reports. Ireland could lead the world in terms of creating its own safety tech sector. The major companies are headquartered here. We have some of the best experts in this area in Ireland.

I would love to see us going far beyond what we are proposing at the moment and coming up with innovative solutions. The point about developing a safety tech sector is that it drives innovation, builds business and creates jobs, while at the same time it delivers solutions to complex social tech problems such as harassment and disinformation.

I thank Professor Aiken. That was very useful.

I thank Professor Aiken, both for her testimony today and for her work in this area over a long number of years. I agree with Deputy Griffin. It is fair to say that abuse online is now the biggest deterrent to young people coming into public life. We are all going to lose out as a result of that.

I was struck by Professor Aiken's use of the phrase "the cyber pitchfork mob". She might recall the case of Presentation College Carlow, where false allegations were made that some male teachers in the school were involved in body-shaming. This took off with the cyber pitchfork mob and there were victims in this, namely, the male teachers in the school. A number of politicians and journalists engaged in attacks in the online space without checking, including the education spokesperson for one party. They never apologised when the allegations were subsequently discovered to be untrue. Professor Aiken's concerns are valid in that regard. She can comment on that if she likes.

We are currently dealing with the Online Safety and Media Regulation Bill 2022. One of the problems with that Bill, as Professor Aiken has noted, is that it does not define online harms. Her proposal around a taxonomy of online harms would be very helpful. The Minister, Deputy Catherine Martin, has set up an expert panel looking at the possibility of an individual complaints mechanism. What advice would Professor Aiken give that panel, the Minister and us as legislators when dealing with that space? Our objective as legislators, which is also Professor Aiken's, is to protect people from online harm. What advice would she give us on the legislation?

My second question is a broader one but as a cyberpsychologist Professor Aiken may be able to answer it. Can the scale of online harms be measured, including here in Ireland? When Twitter and Facebook appeared before us, I asked them how many complaints they get, even with their own community standards, and how many they addressed. There were questions about take-downs earlier. They said they do not have those data or information, although I do not believe that to be the case. Perhaps Professor Aiken might be able to quantify the scale of the challenge.

Professor Mary Aiken

Like everyone else, I do not have access to the data of Facebook or others. We have seen an increase in anxiety and depression across the general population, and specifically in young people, over the last two decades or so. We have seen a surge in eating disorders and self-harm behaviours such as cutting. We have seen a surge in young people presenting for problems around suicide ideation or suicidal thoughts. The background to what is happening to young people and adults is that all is not well in cyber contexts. As somebody said earlier, I have been involved in this area for a long time. I feel very optimistic because this is the first time we have gotten towards a tipping point where people are actually beginning to take the concept of online harm seriously. When the UK created this spectrum of online harm in its White Paper, I was one of the advisers on that process. I wrote a book about it in 2014 called The Cyber Effect. My point at the time was, and still is, that everything is connected. You cannot look at bullying without thinking about harassment, and you cannot look at harassment without thinking about misinformation and disinformation. You cannot consider online harms without actually factoring in aspects like cyberfraud, which are in the classification system I have proposed in my paper. With online fraud, if an elderly person is targeted by a vishing or smishing attack, of which there was a surge during the pandemic, and if they lose their savings, is the harm there just monetary or is that elderly person psychologically devastated because they have been compromised in this way? We need this framework and classification system. Then, one by one we can begin to make sense of these harms and look at legislation that might tackle some, or hopefully in time all, of them and create a better cyberspace for all.

I was not invited to participate in the complaints expert group so I do not know what the protocol was for selecting people on that group but I am sure they will do a good job. The group is probably lacking an expert in artificial intelligence so it may reach out to get some advice in that area. With regard to a complaints system, our remit in Ireland would extend far beyond Ireland when dealing with online harms. If there are 1 billion users of social media and on a given day all of them decide to make a complaint, that cannot be moderated using human intervention alone, even triaging the complaints system. I would be interested to see the findings of that report. Would there be one person on a staff of ten in an office completely swamped by complaints, or would there be smart systems that are able to triage incoming complaints and activate a solution?

I agree that we are probably going to have to use smart systems but how would we go about designing those? Who would design them?

Professor Mary Aiken

If we create a classification system then we will know what we consider to be a harm. Within that classification system the next step is to have a legislative instrument, although it may not just be a matter of legislation. We might also have protocols around good practice and best practice. If the social tech industry has to stage interventions and mitigate and triage online harms in an age-appropriate way, unless we have defined what is age-appropriate and created robust classification and sub-classification systems, it will fail. It will fail because the get-out clause will be that companies can say they did not understand or did not know, or that they thought they did their best. We have to make this crystal clear and the way to do that is to invest in an expert research group, such as Perspective Economics or whoever else can step up to do the work, to help us in an Irish context and create the guidelines and protocols. Large social media companies have resources within them but what about everybody else? That is where the safety tech sector comes into play because now small companies that are developing a platform or an app can reach out to this sector. They may not have the resources of TikTok or Instagram but we could point them to companies that offer these resources at reasonable rates in order to help them comply with the spirit of good practice, or the spirit of the law if there is legislation involved.

Professor Aiken spoke about monitoring the trends and she seems to be attributing a lot of our problems now around eating disorders and mental health to the rise of social media. She is obviously looking at where we are going to go over the next decade. Does she predict a utopia or a dystopia?

Professor Mary Aiken

We have to be careful about attribution. I used the term "associated with". Causation is difficult to establish.

If a young person is using a certain image-based website and he or she develops an eating disorder, that causation is extremely difficult to establish. Undoubtedly, it will be established over time when we see the class and group actions that will specifically set out attribution and causation. At the moment, my expert opinion is that there is a significant relationship between technology and these harms. Hence the need to regulate.

On the question of utopian or dystopian, I hope it will be utopian, but let us take the evolution of, for example, the metaverse. I am sure members are familiar with it; it is the next iteration of social media. What we know about virtual reality, VR, environments is that they are highly immersive psychologically. In other words, people have a presence and it feels real. The cyber-utopian view is for a company to launch the metaverse and only focus on the upside, for example, a fantastic experience. Someone puts on a VR headset and now he or she is at the beach with friends. Well, no. We are in Ireland, so people would be at a concert or a party with friends. What about the downside, though? What about putting on a VR headset and going to an event but experiencing an episode of cyberbullying or harassment? What happens to a child when 300 virtual avatars turn on him or her in that powerful psychological environment? There is no adult who could withstand that sort of harassment and abuse, and certainly no child. When we see these evolutions, we cannot just have a cyber-utopian view. We have to ask what could go wrong. We need to be asking that now about the metaverse. What could go wrong, including for children?

I thank Professor Aiken. I am happy to open the floor for a second round of questions.

I have a follow-on question. I tabled a Commencement matter on the metaverse and Ireland's relationship with it. A part of the challenge for us, as legislators, is that we are constantly trying to keep up with the tech. We are not techies. This challenge involves not only human intervention, but also AI. How are we going to be able to keep up with all of these changes? We see our first responsibility as keeping citizens and residents safe. That is one of our top priorities. How can we do that? Do we empower AI, machine-learning avatars to act on our behalf and make those decisions?

Tied to my question on utopian and dystopian, what does success look like for us? We are going to drive this Bill through and there is legislation forthcoming at European level. How can we ensure that not just children, but everyone feels safe in the online environment?

Professor Mary Aiken

That is a good question. It is not up to us to try to police cyberspace and stage every intervention. It is not up to parents to do that either. In the real world, parents do not man the doors at pubs asking people whether they are underage and telling those who are that they cannot go in. The duty of care has to fall to those who profit in this space. That can work at multiple levels. Duty of care is a broad construct. It can have legislative aspects but also aspects of best practice, good practice and old-fashioned doing the right thing. What success looks like is a reduction in the risk of harm. We cannot eliminate harm in this context, just as we cannot eliminate it in the real world. Children go out, play, climb trees, fall and break an arm or sprain an ankle. That is part of growing up. What we want to do is reduce the risk of harm through social and technological interventions while holding the companies accountable for the protection of minors online, specifically very young children. Success looks like replicating what we have in the real world. In the real world, it takes a village to raise a child. We want to see that replicated in cyberspace.

I asked a question but Professor Aiken did not really answer it. She recommended an advisory committee and spoke about how the expert panel was missing an AI expert. What composition would she recommend for the committee? How can we guarantee that it is independent?

Professor Mary Aiken

Does the Deputy mean how to ensure that the people on the advisory committee are the right people?

Professor Mary Aiken

The requisite skill set needs to be scoped out, that being, knowledge and expertise in specific areas. Based on the submissions that the joint committee has received, it would be able to scope out this terrain and set criteria to determine which people are best placed for these roles.

As to how to establish whether the advisory committee is independent, a due diligence process would need to be engaged in to see where there are potential conflicts of interest. Better still, candidates could be asked to declare not only their expertise, but also where they feel there are potential conflicts of interest. That would be good practice.

In non-technical terms, perhaps Professor Aiken might tell us more about how AI is used and works and how it monitors misinformation and disinformation.

Professor Mary Aiken

None of us is privy to the algorithms that are used by social media companies. That black box is part of the problem. The companies know what they are looking for and have their own community standards. Algorithmically, they triage disinformation – anti-vax content, for example – according to those standards. They train their AIs on data sets of what bad anti-vaxxer content looks like and the AIs then go out and look for that. I do not know the inner workings of these companies, so I am just hypothesising, but where a high-profile individual like an influencer is involved, they might bring in human moderators to determine whether the account should be warned or sanctioned in some way, for example, by deplatforming.

We are not privy to how these companies' AIs operate. As part of an investigative procedure under the commission's powers, people who are expert in artificial intelligence could go to companies and ask to examine how their AIs work and be educated in same. Some members of the joint committee have been involved in sessions with various companies in the social tech area and probably found different levels of co-operation. Some are open and transparent and want legislators to interrogate what they are doing. Others are less so. Perhaps there should be a uniform standard, and part of the remit of a special advisory group could be to tell companies to show it how their systems work in practice.

I thank Professor Aiken. Interestingly, she used the phrase "the cyber pitchfork mob". This has to do with how individuals get redress.

We might take the case of the Presentation College in Carlow where fake information was spread online, including by a number of prominent figures. In circumstances like that, and I have used a real example, how could those who felt that they were at the receiving end of the mob go about feeling that they could get justice?

Professor Mary Aiken

That is going to be very difficult. One could hold the platform accountable by saying that it had facilitated this.

Should we hold the platform accountable?

Professor Mary Aiken

I think both the platform and the individuals, ultimately. There was a good example regarding the abuse of young male footballers following the Euro finals. I am not a football expert but I know that following penalties the footballers suffered harassment and hate speech online. Interestingly, an investigation was done to identify the people behind these anonymous accounts. Effectively, the police then moved to identify the trolls, the people who engaged in the defamation, hate speech, targeting and harassment. The police found out who they were and gave the names of these people to their football clubs and they were barred for life. To me, that is a really interesting form of dealing with the problem because there was interconnectivity between cyberspace and the real world. The point is that there was a real-world consequence for the football fans who engaged in abuse and harassment in that they will never get to go to their favourite football club again to see a match, and I think that is critical for us. That behaviour has consequences is a basic premise of any judicial system.

In terms of anonymous trolls, should there be a requirement that when people sign up to an online service they either identify themselves directly to that service or use a digital intermediary? I may not want to give my details to Facebook but I would give them to a trusted intermediary who validates who I am to that platform. Should that be a requirement?

Professor Mary Aiken

The difference between an anonymous net and what is called a nonymous net or n-net is a huge topic. I often debate this issue and when I argue against anonymity online I am told by libertarians that anonymity is a basic right. No, it is not. It is a 20-year-old invention of the Internet and needs to be questioned.

Professor Aiken has previously referenced the era of self-regulation for social media platforms and now that has become a real issue. In the past, major platforms like Twitter and Facebook have been reluctant to censor posts and we have seen that with conspiracy theories and fake news. Even in the most recent US Presidential elections there was huge controversy over fake news. How can we counteract self-regulation?

Senator Byrne mentioned anonymous accounts. There is also the question of the prosecutions that could follow from that. Different police authorities, including the Garda, cannot investigate appropriately without the actual information from these big tech companies. I would like to hear the professor's thoughts on that matter.

Professor Aiken is a pioneer of safety tech and referred to her engagement at the G7 Safety Tech Summit. What global changes are being implemented? The professor engaged with the Australian eSafety Commissioner. What is being done in Australia? How can we work in a collaborative way to bring about global change? I ask that because the Internet is a worldwide powerful tool. We also need to put some type of safety belt in place to protect people and I would like to hear the professor's thoughts on that.

Professor Mary Aiken

It is interesting that the Vice Chairman mentioned a safety belt. In 1965, Ralph Nader published his famous book, Unsafe at Any Speed, which led to the introduction of safety belts. There were so many crashes where people were very badly injured and then safety belts became mandatory. There is a parallel because the Internet is unsafe at any speed and I think that this is Ireland's seat belt moment. These companies are headquartered here so what we do in Ireland will impact the rest of the world and I do not say that lightly.

For us to approach this entire problem space, we need to do it in a systematic way. We need to define the problems, scope out the situation and create taxonomies. That is a major research effort, which needs to be funded. Once we have built that knowledge base, we then need to audit our safety tech sector, see what companies are here and consider incentives. By that I mean we can create funding rounds to get innovators, especially young people, engaged in this area to come up with solutions. Young people are media savvy and are children of the Internet. The kids who come out of college now are in their 20s and are really well placed to address some of these issues. We can look at seed funding and nurturing innovation. For me, there is a whole raft of solutions. The G7 formally adopted safety tech as a resolution. Germany will chair the G7 summit next year and has declared a specific interest in developing safety tech in Germany and Europe-wide. We are all familiar with how much progress Australia is making and we can learn from some of the extremely interesting things that are being done in Australia.

Over this consultation period, I am sure that the committee members have themselves been somewhat stunned by the depth and breadth of this area. This is an enormous landscape to be navigated in terms of the committee trying to address all of these issues. I mean trying to think about the behavioural, technological, legislative, policy and political aspects of these issues and bringing that all together across a very wide range of harms and in an age appropriate way. That is an enormous task and I congratulate the committee for the work that has been done to date to get the legislation to this point.

How do we tackle it? Ireland could become a centre of excellence for safety tech. Ireland could work to collaborate with stakeholders, be they social media or social technology companies. Ireland could work with academic institutions, policymakers and legislative specialists. Ireland has a small market and we could test different protocols here to find out what works, and whether something is efficient and effective. We could come up with and test technological solutions to some of these issues, with the involvement of privacy experts, ethics experts and artificial intelligence and machine learning, AI/ML, experts. We could start developing our own solutions, which could then become part of our safety tech ecosystem. I am very optimistic that we will succeed with the right will and the right funding. I note that Ireland has a levy in place which could help to fund such initiatives over time. One cannot create a safety tech sector with these solutions overnight. It will take a year to complete the research and it may take two or three years to get a sector up and running, but in the long term it would be incredibly valuable and, most importantly, effective.

This discussion has been very informative. We have been given a lot of information to digest and a lot of thought provoking points have been raised.

We appreciate Professor Aiken's time. We all appreciate that she has a busy schedule. We thank her very much.

We will suspend briefly to allow the secretariat to make arrangements for our next session with representatives from Media Literacy Ireland and the Institute for Future Media, Democracy and Society. I remind those attending via Microsoft Teams to accept the second meeting invitation issued.

Professor Mary Aiken

I thank the Vice Chairman. I thank everybody for their questions and hope the next session goes well.

Sitting suspended at 2.40 p.m. and resumed at 3.03 p.m.

This session is with representatives from Media Literacy Ireland and the Institute for Future Media, Democracy and Society at Dublin City University, DCU, to discuss online disinformation and media literacy. I welcome our witnesses: Professor Brian O'Neill, co-chair of Media Literacy Ireland, who is joined by Ms Martina Chapman, its national co-ordinator. I also warmly welcome Dr. Eileen Culloty from the Institute for Future Media, Democracy and Society.

I will invite witnesses to make their opening statements, which are limited to five minutes, followed by questions from members of the committee. Witnesses are reminded of the long-standing parliamentary practice that they should not criticise or make charges against any person or entity by name, or in such a way as to make him, her or it identifiable or otherwise engage in speech that might be regarded as damaging to the good name of a person or entity. As some of our witnesses are attending remotely outside the Leinster House campus, they should please note that there are some limitations on parliamentary privilege and, as such, they may not benefit from the same level of immunity from legal proceedings as a witness physically present. Members are reminded of the long-standing parliamentary practice to the effect that they should not comment on, criticise or make charges against any person outside the Houses.

Professor Brian O'Neill

I thank the committee for inviting Media Literacy Ireland, MLI, to present its work here today. Ms Chapman and I will be delighted to add further information on the background to this campaign. I want to acknowledge Dr. Eileen Culloty, who is our vice chair, and the significant work of the DCU Institute for Future Media, Democracy and Society in this area, as well as the contribution made by Professor Mary Aiken in the preceding session.

MLI was established in 2018 as an independent, informal alliance of organisations and individuals who are working together on a voluntary basis to promote media literacy in Ireland. Facilitated and supported by the Broadcasting Authority of Ireland, BAI, MLI is an unincorporated body with members drawn from many sectors including the media, communications, academia, online platforms, libraries and civil society. We foster discussion, identify gaps in media literacy provision and try to bring stakeholders together to help fill those gaps. There are currently over 240 members.

The overarching objective for MLI is to empower Irish people with the skills and knowledge to make informed choices about the media content and services that they consume, create and disseminate across all platforms. At the heart of media literacy is the ability to understand and critically evaluate broadcast, online and other media content and services, in order to best manage media use. These critical thinking skills are also at the heart of countering disinformation, the topic of today’s meeting. This is something which MLI has been actively engaged in since 2019 with the national Be Media Smart campaign. This campaign encourages people to stop, think and check that the information they are getting is accurate and reliable.

The campaign originated as part of a European initiative to counter disinformation in the lead up to the 2019 European elections. In 2020, the campaign evolved to focus on accurate and reliable information about Covid-19 and in 2021 the initiative focused on the need to make informed choices about the Covid-19 vaccination based on accurate and reliable information.

The Be Media Smart campaign was delivered across TV, radio, online and the press supported by free air-time, editorial, online ad-credit, social media activity and events from a very wide range of MLI members. RTÉ, TG4 and Webwise made particularly significant contributions in the development stage while additional key strategic contributions were made by Virgin Media, Sky Ireland, Learning Waves, the commercial radio sector, the community media sector, Newsbrands Ireland, the Library Association of Ireland and the online platforms. All the Be Media Smart communication directed people to the campaign microsite for help and support, including advice on how to talk to friends and family who may be affected by misinformation or conspiracy theories in the context of online information sources.

This initiative clearly demonstrated the power of collaboration when MLI members play to their own strengths and marshal the full set of resources available to them. At the peak of the campaign, we estimated that about two million people encountered the "stop, think, check" message and research carried out by Ipsos MORI in June 2020 indicated that 27% of adults recalled the campaign, unprompted. For context, 13% to 17% is considered very good recall for similar campaigns. The initiative has been noted as best practice and the concept has been adopted in six other European countries.

While the work of MLI helps to counter disinformation, it is not the only focus. MLI understands media literacy as an umbrella term encompassing a range of related literacies required to function effectively, safely and ethically in a world where digital communication is an integral part of daily life. We regard this as a lifelong learning process.

MLI uses the media literacy framework set out in the BAI’s Media Literacy policy to help people develop the skills and knowledge to understand and critically evaluate media; to access and use media; and to create and participate via media in a responsible, ethical and effective manner in the creative, cultural and democratic aspects of society.

MLI acts as an enabler for media literacy stakeholders in Ireland. Our work is focused on four key work strands, namely, co-ordination, communication, innovation and promotion.

We would be happy to elaborate further under these headings to describe the activities across these areas.

It is important to note that while media literacy increases resilience to many of the issues associated with digital communications, it should not be seen as a solution on its own. People form beliefs for complex reasons, and skills and knowledge alone may not be enough to guarantee informed decision-making. However, the critical awareness fostered by initiatives such as Be Media Smart, and the embedding of messages such as "stop, think and check" when we consume media, is a vital contribution to the empowerment of citizens, enabling them to better judge the accuracy and reliability of the information they encounter and consume. We look forward to elaborating further in the discussion.

I thank Professor O'Neill for that. I call Dr. Culloty to make her opening statement.

Dr. Eileen Culloty

I thank the Chairman and members for the opportunity to contribute to this discussion on disinformation and media literacy. I am speaking on behalf of Dr. Jane Suiter, the director of our institute and our members in DCU.

As the committee will know, disinformation is not identified as a category of harmful content in the Online Safety and Media Regulation Bill. However, as previous witnesses have noted, there are provisions at EU level to address online disinformation. It will be addressed indirectly through the general framework of the Digital Services Act and, more directly, through the EU's voluntary code of practice on disinformation. Moreover, the European Commission recently established a European Digital Media Observatory, EDMO, with the explicit aim of co-ordinating and facilitating actions to counter disinformation. These include fact-checking, media literacy, research and policy analysis. Ireland is home to one of eight regional hubs that will develop this work. The Ireland EDMO hub comprises the DCU FuJo Institute, the University of Sheffield, the fact-check team at and NewsWhip. A key aim of all these EDMO hubs is to build capacity among the various stakeholders who are responding to disinformation at both national and EU level. However, it is important to note that the funding provided is limited relative to the needs of those stakeholders and the expected remit of the hubs.

Nevertheless, in terms of media literacy, the Ireland EDMO hub will work closely with Media Literacy Ireland and its members to assess needs and develop resources. Echoing what Professor O'Neill said, I would caution against a narrow view of media literacy as a solution to the disinformation problem. Increasing media literacy is necessary but not sufficient for building societal resilience to disinformation. That is because there are many other contributing factors to disinformation far beyond a lack of media literacy, information literacy and digital skills more generally. Yet the importance of those skills extends far beyond addressing disinformation. Media literacy, information literacy and digital skills are fundamentally entwined with issues of citizenship and inequality insofar as they influence people’s capacity to lead full lives in this century.

The Ireland EDMO hub, in addition to that role in media literacy, fact-checking and conducting research, will also play an oversight role in the EU code of practice on disinformation. The code has been in operation since 2018. Ireland is one of the few countries to systematically assess the implementation of the code through research conducted by the DCU FuJo Institute and commissioned by the Broadcasting Authority of Ireland, BAI. To date, we have published three reports on the implementation of this code by the online platforms. Our research identified major shortcomings in the information provided by digital platforms, which make it difficult for us as researchers, policymakers or regulators to understand the nature of online disinformation and the effectiveness, or otherwise, of what the platforms are doing in response. Currently, that code is being revised with the promise of better reporting structures and, potentially, sanctions for non-compliance but the final revision of the code has not yet been published. As such, we expect, through the EDMO hub and at the DCU FuJo Institute, to work closely with the new Future of Media Commission in assessing the future implementation of the code.

In conclusion, I note online disinformation is a highly complex and evolving problem. It is essential that online platforms provide researchers, regulators, policymakers and all the other stakeholders with the necessary data to understand the nature of the problem and whether any of the countermeasures we are proposing and discussing are likely to be effective. To date, this has not been the case. At a societal level, an increased emphasis on media literacy is welcome, but we need a very rounded understanding of the media environment, including the role and importance of protecting independent investigative journalism; the need for a strong evidence base to inform and evaluate policy; the need to respect fundamental rights, including freedom of expression; and the need to support civil society stakeholders, most especially those who represent the interests of groups that are targeted by disinformation campaigns and other forms of online harm.

I thank Dr. Culloty. We will proceed to the question-and-answer session with members. Questions will be taken on the basis of the rota published and members will have five minutes for both questions and answers. I call Deputy Munster.

I have three questions. I will direct my first question to Dr. Culloty. Are the European measures in the Digital Services Act and the EU voluntary code of practice, as well as the work of her institute and the BAI on the code of practice on disinformation, sufficient or should specific regulation of disinformation be included in our Bill? Will the Bill improve matters in terms of online platforms providing the necessary data to researchers and others? What would help improve the current situation?

Professor O'Neill referenced the examination of how media literacy resources are apportioned. How should literacy training resources be allocated? Should we target those most at risk or try to reach as many people as possible? What would be the best use of resources?

I call Dr. Culloty to answer the first two questions.

Dr. Eileen Culloty

Both those questions fit together. At EU level, there is a move away from self-regulation. The code of practice was entirely voluntary. The platforms were under no obligation to participate in it. We have consistently found, through the research we have done, that the information the platforms were providing was at an extremely general level. It was just not useful and, in some cases, it was entirely irrelevant. The big weakness there is that the newer platforms may not have any incentive to join at all. The proposal is to move towards a co-regulatory, rather than a direct regulatory, model. It is unclear how that will pan out. It relates fundamentally to the question of platforms providing sufficient information for everybody involved to understand what is going on. They say they cannot provide this information because it is of a sensitive nature or would not be general data protection regulation, GDPR, compliant. One of the central functions of the European Digital Media Observatory, based in the European University Institute in Florence, is to negotiate with platforms for a GDPR-compliant way to access data. That is currently ongoing and we do not know how that will play out. That is my understanding, and most researchers would be in agreement that the essential issue is there is nothing to compel platforms to share the information we need. At national level, if we are concerned about and discussing online harms, whether it be electoral integrity or harms to young people, it is a pretty crazy situation that there are companies that have this information and there is no instrument to compel them to share it with people who might be able to shed light on whether these harms are manifesting in one way or another.

I thank Dr. Culloty for that response.

I call Professor O'Neill to address the question on media literacy resources and training.

Professor Brian O'Neill

I thank the Deputy for the question. It is a good point. Both strands are in fact really important. The Deputy spoke about the need to have general awareness to ensure we try to maximise the reach of campaigns and some of these key messages. That has been one of the distinctive achievements we have highlighted. It has worked well, being able to bring together a whole-of-media campaign around this.

In the same way, we have highlighted in our own discussions the use of scarce resources. Much of our activity is voluntary and the question is where to focus it. The Deputy is quite right that vulnerable groups and those least likely to be able to access or have available to them high-quality information resources are in need of targeted measures. That is a very important way to progress and something our members are considering closely with regard to our future campaigns. I stress that general awareness raising is also a long-term process. It is not something that is done on a campaign-by-campaign basis but rather something that comes with responsibility for regulatory agencies on an ongoing basis to ensure media literacy is on their radar, getting attention year on year.

I thank Professor O'Neill.

I welcome both the witnesses, who gave really engaging submissions. I will start with Professor O'Neill. When I was a teenager, the Irish Independent developed the tag line, "Before you make up your mind, open it", and as a young journalism student I always had it stuck in my head. Naturally, the newspaper was trying to get people to buy its product but it was a great tag line because it was associating its news product with a depth of thought contained in the pages.

I compliment the witnesses on the "Stop, Think, Check" campaign, and many advertisements - particularly the advertisement from RTÉ - stand out. The witnesses have said they have been successful in the first battle, ensuring we are encouraging such a process. Moving on, however, to the next battle, I wonder if that has been lost. If we are getting people to stop and think, what is their engagement with traditional media, whether that is purchasing a newspaper or an online article? I am curious as to how they obtain their news and information. We can see national newspaper sales in this country are falling off a cliff. As we must ensure we can follow up the "Stop, Think, Check" campaign, what are Professor O'Neill's thoughts in that regard?

I thank Dr. Culloty, who in her concluding remarks spoke about how, at a societal level, the increased emphasis on media literacy is welcome but there must be a rounded understanding of the environment, including the importance of independent investigative journalism and the need for evidence-based research to inform policy. Again, these are aspects of newsrooms that are being cut because they are in financial crisis. The very essence of good journalism is that aspect of investigative journalism. It is what this country's newsrooms, whether in print, on television or on radio, were associated with for many decades. Can what we need to exist come about, given our current trajectory? How will this be addressed in a digital and online world?

Professor Brian O'Neill

I thank the Senator for the question. The "Stop, Think, Check" campaign and the work of Be Media Smart provided specific and tailored resources. It was a multiplatform approach and was supported by different members of Media Literacy Ireland. The diversity was quite extraordinary, for example, reaching from community media to libraries to school library groups and across mainstream mass media. It was focused on bringing people to some key resources to help them develop critical skills about how they engage with and support media. There is a broad, multistranded approach to reaching audiences while focusing on needs as mediated by the individual groups that best know their audiences. That is one of the ways we have certainly tried to make this as effective as possible and reach those in need with the kinds of resources that will be effective for them. Ms Chapman could of course add much more to this because she was the director of operations on many of those initiatives.

Ms Martina Chapman

I am happy to contribute on this if that is okay. It is helpful to think of the Be Media Smart campaign and the effort of countering disinformation as a behaviour change process that will take much longer than the duration of one campaign. The first step in any behaviour change process is identifying that there is an issue, and in this case the issue is around disinformation. The campaign was very broad and targeted the general public. Its call to action was to stop, think and check.

To pick up on Deputy Munster's point about the next steps, to really effect behaviour change we must be much more targeted in our messaging and support for particular groups, especially the groups most vulnerable to disinformation. The natural next step for something like Be Media Smart is to work really closely with organisations that are trusted intermediaries for the groups that would potentially be most vulnerable to disinformation. That may involve the creation of resources specifically for those groups or helping to arrange or develop interventions or events with those trusted intermediaries. The skills required are focused on the person and they will be used differently by individuals depending on their life stage, lifestyle and many other factors, including education and geography. It is quite complex work to get to the next stage and into really targeted support for those groups.

Dr. Eileen Culloty

The question about the Irish media system and particularly journalism and news media is crucial to this. It is interesting that during Covid-19, we saw huge levels of consumption and a return to and pronounced trust in traditional media. There was great uncertainty and much disinformation circulating online but the population at large was returning to traditional, trusted news media. The question, of course, is how news media might capitalise on that. During Covid-19, their audiences might have increased but the advertising market collapsed because everything was closed. The economic issue around the future of journalism is major. This is also recognised on an EU level with the European Democracy Action Plan, which has a sister media action plan. It is one of the prongs in the response to disinformation.

In our original submission on the Online Safety and Media Regulation Bill, one of our recommendations was that the concept of what could be funded under public funding should be extended to include news media. We were thinking that because many traditional newspapers are now online, they are producing new types of content, including video content and documentary, but many of the public funds available through the Sound and Vision scheme are inaccessible to them because they are news media. That seems like an anomaly that could be addressed.

There are other actions that Irish policymakers could take to help news media. If members watched the hearings for the Future of Media Commission last year, it was striking how often these matters arose. One concerns the defamation laws, which news media have continuously called out, along with the Council of Europe and Reporters without Borders, as a major threat to media freedom in Ireland. The threat of defamation and large payouts has a chilling effect. We have also seen an increase in what are called "SLAPP" cases, or strategic lawsuits against public participation, where just the threat of suing a news outlet can make it less likely to report in a particular way or on certain actors. It is a noted tactic among disinformation actors.

Addressing that, but also looking at how we could support, in an independent way, traditional media moving into new types of content, would be a key part of the response to disinformation.

I thank our guests for the very useful testimony and the discussion so far. My question follows on from Senator Cassells's point around trusted media and evidence-based news sources. If I want to go online to The Irish Times, the Irish Independent, the Irish Examiner, or the Business Post, in all of those cases I must pay to go behind a paywall. I do not have a problem with paying for good journalism and news sources, but at the same time I know that if I do not pay I can get any sort of opinion that I want anywhere online. There is that challenge around the model. There is the question, which we are considering, around the public funding of good journalism, and how one can then guarantee the independence of the media if there is public funding behind it. This is a debate we have had in broadcasting, and increasingly it is one that we will need to have in print journalism. Perhaps the witnesses will touch on that.

I am conscious of the great phrase by the American governor and legendary politician, the late Mario Cuomo: "You campaign in poetry; you govern in prose". Unfortunately, as we have moved very much into the era of the soundbite, the quick clip, and the 30-second piece of information, a lot of subtlety gets lost. For us in politics, there are very few issues that are black and white. With a considered debate such as this, for example, one will not generally be able to condense it into a very short news clip. Is that battle entirely lost? How can we encourage people to either engage in more long reads, or to look at the subtlety of some of the issues involved?

Over the past two years with Covid and the disinformation and misinformation that was shared and was a threat to public health, do the witnesses feel that the social media platforms did enough to combat some of the misinformation and disinformation, specifically with regard to vaccines but also with regard to public health messages generally?

Professor Brian O'Neill

There is a range of questions there, all of which raised some important thoughts around the quality of the media environment. From the perspective of Media Literacy Ireland, we are trying to bring together the full spectrum of all interested stakeholders with a commitment to the values of media literacy and what that actually means in our consumption, appreciation, and recognition of good-quality media. This is something that brings us all together and is the criterion of membership when we bring all of these stakeholders together. Internationally, the research will show that this commitment to an ethos of high-quality media, and resourcing and support for media literacy initiatives, does pay off in the long term. Ms Chapman has described this as a long-term behaviour change process. We have to see it in that light, but it is very much a long-term process. We have looked at countries such as Finland and Estonia, for example, where for various historical reasons they have chosen the media environment as something that really does merit that close attention. It has worked for them, with all of these kinds of benefits.

We do of course have the very specific issue-driven campaigns. These speak to the information disorders and the crises we have had to face in the past years. This is where we certainly feel that the Be Media Smart campaign fits in very well.

By way of a response, when looking at the full range and complexity of the media environment let us not forget community media and the role of local radio, which are very distinctive features of the Irish media. These are broad-based and they support journalism and high-quality speech radio of very different kinds. These are very important avenues for reaching communities and for opening new opportunities.

The concern that high-quality journalism would always be behind a paywall has to be something that exercises policymakers, in terms of how to address it with regard to future democratic accountability and the quality of public discourse.

Our focus as media literacy specialists is about user empowerment. It is about supporting citizens, and necessarily the supply issues come into that, including what we can do to support citizenry and citizenship in engaging with media. This is how we would make the association between quality and our commitment and support of media literacy reinforcement. Ms Chapman and Dr. Culloty can add a lot more to the actual detail around the investment in quality media.

Ms Martina Chapman

I would like to follow on from the points made by Professor O'Neill. I served on the Council of Europe expert group on quality journalism in the digital age. Part of the findings of that group was the really critical role of media literacy in all of this. As Professor O'Neill and Dr. Culloty pointed out in their opening statements, media literacy is not a solution in itself to any individual issue that is thrown up in the world that we live in and our dependence on the media and communication technologies, but media literacy skills do underpin so many aspects of how we live our lives right now. A key part of media literacy is understanding how media is created, who owns it, who funds it, the editorial structure behind it, and the ways it is distributed. A lot of those structures have changed, particularly in the past ten years, so there is a bit of a lag between people's understanding of how the media landscape works now compared to perhaps ten years ago. There is a job of work there in terms of media literacy and upskilling citizens to better understand the media they are consuming, to have the skills to make informed choices about it, and to recognise any biases they might have themselves and how these factor into the media choices they make.

Dr. Eileen Culloty

On journalism and trusted media, I would definitely echo what Professor O'Neill has said about the role of community media and regional media in general, and that these are a very important source. In our digital age, it would be fantastic if we could focus more on the concept of community media and what that could be, when it is now more accessible than ever before for people to make and produce their own media. We have a wonderful community media tradition in Ireland. We can see, however, that regional newspapers are closing, and there are massive consequences to that for very basic democratic functions, such as district court proceedings going unreported. The national media are already under strain and cannot take on those functions as well. Such reporting could not be left to somebody running a local Facebook page, so that is a huge issue in itself.

Part of what we are grappling with, therefore, is whether we look at the State coming in to provide direct financial support for this. There are certainly some elements such as the Sound and Vision fund and other schemes. Could they be moved into print journalism too? My colleague, Senator Shane Cassells, has also raised this.

Dare I say that the witnesses are identifying the same problems we have, but we are trying to look for some of the solutions.

Dr. Eileen Culloty

There is still quite a lot of work to do on the way we conceive the problem, not just in Ireland but across the board. Part of the panic around what is happening online and the decline of traditional media is that people are looking at that media with, perhaps, rose-tinted glasses. The traditional media did not always earn public trust. They often deserved public mistrust. We know from extensive research by media and communications researchers - I am not specifically talking about Ireland but in general - that news media were subject to bias, were prone to corporate influence and all these other problems, and the media environment was limited to just those outlets. We should not eulogise, or look back as though that was an ideal time, just because we are very concerned about the present.

The solution should not be just propping up older industries that are massively struggling with digital transition. Instead, with the amazing advancements in digital technologies, we could think about what media could be in the 21st century. At a national level, the debate on something such as public service media is not about how we keep RTÉ going but what public service media should be in the 21st century in a digital world. That might not mean funding a big, centralised organisation, but rethinking what way money is collected to serve the public.

Will Dr. Culloty paint that picture? What does 21st-century public service media look like?

Dr. Eileen Culloty

It would be about funding a wider range of different types of media, such as community media. It could, as Ms Chapman and Professor O'Neill can talk about, involve libraries, which play such a massive role in the information environment and are deeply embedded in communities. We could imagine funding going to different types of bodies that we do not even think of as traditional media bodies, but that play a very important role in the media environment. We already have the model of the sound and vision scheme for funding broadcast content and that of Screen Ireland for funding film and other types of content. Those models can be used. I do not want to say what the model should be, but I think that is where the discussion should be happening with different stakeholders. It is a commercial future, a public future and a community-type media future.

Professor Brian O'Neill

Dr. Culloty has captured the issue very well. I would add to that our focus on skills. Media literacy skills bring together access, critical understanding and creation. We are quite concerned that in some cases we are only scratching the surface of what we are able to achieve with short-term media literacy interventions. This has to be about enthusing and empowering young people so that they become the arrow and not the target of media, to reuse a phrase that was often repeated by our president. It is something that stands the test of time. It is about equipping people with the kind of skills to be able to use the media environment for the purposes of effective communication, learning about the world and as an extension of their own critical faculties. That is a large-scale educational initiative and something we continue to campaign on and try to encourage education policymakers to take on board.

I appreciate the point about education policymakers. I will put this challenge to schools of journalism as well. There is a responsibility on journalists and not just on educators. Certainly, there have to be questions around some of the clickbait journalism and so on that we see now. We are all grappling with these problems and I am looking to hear what the solutions are. Who will take responsibility for this? What is our role as legislators? What are the representatives' roles as educators and campaigning advocates?

Dr. Eileen Culloty

Professor O'Neill mentioned skills, which I think relates to the Senator's last question. One way public money can be used in a way that does not have the issue of interfering with content, because the main concern is to keep that separation, is around things like skills and resources. As newsrooms are cash-strapped, it is very difficult to upskill employees and invest in new technologies. I am thinking of traditional print outlets having to move into audio and visual formats. Public resources and public funds could be available to help them make those transitions.

Journalism trainers and educators have a massive role, not simply in terms of the type of values we instil in journalists but in terms of who is getting onto journalism courses. It was very notable that diversity was a major theme for the Future of Media Commission. It will be very important in Ireland, as we move forward, that the people writing the news and investigating issues come from diverse backgrounds. It is unfair to put all the blame for that on the media and say that they are not diverse enough if the people going into the university courses in the first place are not from more diverse backgrounds. There is an opportunity to use scholarships and things like that, but also to go into schools at a much earlier age to show people that these are careers for them, there are paths for them and the media is for them and not made by certain types of people.

Professor Brian O'Neill

I will add one small point to that. Again, as part of MLI's focus on media literacy reinforcement, we are also trying to bring those media literacy messages back to professional journalism schools, such as mine in TU Dublin or DCU. That has to be about the curriculum preparing future media professionals so that they also engage with what media literacy is about. We cannot assume this is simply and narrowly defined in terms of actual professional media.

In addition, right across the whole spectrum, the Learning Waves programme, for example, is another case where we need to look towards the future needs of a rapidly changing digital media environment. How can communicators across the board be best equipped to use, evolve and develop the media? It is a tricky one in that the role of the State is not to sponsor media. That is not a solution in itself, but it is to encourage these kinds of developments so that our investment, through education, training and programmes such as these, is assisting the next generation of communicators and media specialists.

This meeting is clashing with the other activities of a few members. I will offer a second round of questioning to any members who are still online.

I thank the Vice Chairman. An issue we are also grappling with is around how the State funds public sector broadcasting and public media. We all agree it is important. I will draw an analogy. Those representatives who work in TU Dublin and DCU are in receipt of public funding, but it is done in such a way that the State does not direct them. It preserves academic autonomy for the most part. A model is already in place in the higher education system. Can they envisage a similar model that can effectively develop to ensure that we can be guaranteed good-quality, trusted journalism at national, local and community level, while protecting and preserving that independence? What model should that be? They will be aware that as part of the Online Safety and Media Regulation Bill we will be establishing a new media commission, which will be a very powerful regulator in this space. It will manage some of those funds and it is likely to deal with the Digital Services Act and the Digital Markets Act that will be coming in at European level. Where do the witnesses see its role in this space?

Dr. Eileen Culloty

The Deputy is absolutely right that there are models for public funding.

We have had public sector broadcasting for a long time so we know that models are there. Related to that, however, it is not just about finding the model that people would trust; it is also about the wider market. You have to have commercial media as well. We do not want to create an imbalanced market where the commercial media cannot survive because there are these publicly funded models. The real issue is to have a pluralistic media system because that is where trust can come from. You do not want any one node within the media system to have a dominant role. Other countries are exploring these models. Part of the economic issue in Ireland is that we are a small English-speaking country in a massive English-language market, with the UK on one side and the US, a good bit away on the other side but hugely dominant. As mentioned, there are subscriptions, but it is very difficult for Irish media to compete on subscriptions, whereas in other countries it is not, because their citizens are not also subscribing to The New York Times, British outlets and so on. The economics of commercial media has to be part of the discussion, along with how to fund public types of media.

I call Professor O'Neill.

Professor Brian O'Neill

To add a little to that, the work of the Future of Media Commission looks at this in detail but, as Senator Byrne suggested, providing that autonomy means maintaining that separation in terms of the role of public investment in the future quality media environment. We can look to the extensive experience from the past; the Sound and Vision scheme has been an important pioneering scheme in terms of investment in Ireland's media landscape, as has the experience of independent commissioning. There are models around syndication of content that can provide useful ways to think about that. That would be for those directly involved in looking at these kinds of models. As Dr. Culloty noted, despite being a small media market in an English-speaking media world, close to many of the major players within it, we have built up a successful tradition of independent media. We have a regulator with strong experience of nurturing the different elements of that in a mixed media marketplace. Based on what I can see from the Government's strategy published yesterday around harnessing digital, it is looking towards the media commission continuing this type of work, supporting media literacy but also supporting all the actors contributing to this media landscape.

Dr. Eileen Culloty

I would like to add, in the context of the new media commission, that when we talk about disinformation or democracy, our concern is around news media. That is a very distinct concern in itself. Part of the issue of being such a small country in this large English-language market is that it has a substantial impact on cultural production. In the future, who is going to pay for and provide Irish-language content, historical documentaries or creative works, and where will people find them if they are all on Netflix or paying for other subscriptions and the public model has collapsed? That has to be part of the discussion as well. Part of the audiovisual media services, AVMS, directive is supposed to be that money will be set aside for native production, but there is no guarantee that Netflix is going to prioritise this Irish content. Why would it? It is of no great interest to an international market. Another role for the State would be to think about where people will go for digital content in the future and whether it is all going to be at the mercy of what a corporate person working for a massive international organisation decides about where the Irish-themed content, or the French-themed content, goes.

I might challenge Dr. Culloty on that. Netflix has looked at, and partnered in, the development of some Irish content. Indeed, Netflix subscriptions now feature more content from Poland and the Scandinavian countries. Some stories are universal. Our challenge is to have the content creation facilities in place and the producers - which I know we have - and to partner with those international streamers. We should be using the content levy not just as a levy slapped on companies but to incentivise some of these companies to come and film here and to partner with Irish content creators and independent producers. There are many Irish stories that could be sold all over the world if we package them in a particular way.

Dr. Eileen Culloty

For sure, and for large-scale productions historically we have done quite well in film production. I am thinking of more small-scale storytelling that would typically be found on a public service broadcaster, which does not necessarily have the kind of appeal for an international audience but is still valuable. That is what is at risk of being lost. That is what our public sector content providers and indeed our community media providers already do very well. My concern is we cannot leave that to the market, and certainly not to big international corporate players.

I thank Dr. Culloty. We have no other offerings. I will take this moment to ask one or two questions of Dr. Culloty in regard to her publication around disinformation and manipulation of digital media. She references four aspects of disinformation around bad actors, platforms, audiences and counter-measures. In the Irish context, how are such bad actors best identified and differentiated from those who are unknowingly contributing to misinformation or disinformation discourse? Will Dr. Culloty give us some further information in this regard?

Dr. Eileen Culloty

That is a good point because it gets to the heart of much that is missed when we talk about disinformation. It is very hard to define, and it is quite time-intensive and resource-intensive to monitor an environment and understand what exactly is going on. Much of what we know about disinformation is because investigative journalists have undertaken extensive research. Organisations such as the Institute for Strategic Dialogue, based in London, have entire teams that focus on monitoring the environment. That means they follow all the accounts, using all the tools that are available to them, and this is generally all being done without the co-operation of platforms. It is very difficult in Ireland to see, at any point, what is happening right now because the infrastructure and the resources are not in place to monitor the environment. Alongside the work of the media commission will be the proposed electoral commission, which will deal with disinformation and political advertising around elections and which presumably will have some link to the media commission. That is an issue that will arise because you cannot, in an ad hoc way, produce an understanding out of nowhere of who the bad actors in Ireland are and what the narratives are, without having this infrastructure in place doing monitoring work.

To add to our earlier session with Professor Aiken, we talked about the systems and the taxonomy, the platforms and the end-point users, and building that infrastructure will be a huge challenge. What are Dr. Culloty's thoughts on legal interventions on disinformation in order to preserve democracy, rights and freedoms? Integral to this will be the provisions of the Online Safety and Media Regulation Bill in terms of the category of harmful content. What needs to be strengthened in those areas? Also, what about the characteristics of populations? We talk about young people who are more active on social media versus those who are probably most vulnerable to disinformation, and what types of interventions or safety mechanisms we can provide to protect the most vulnerable.

Professor Brian O'Neill

In Media Literacy Ireland, we always say that there is a range of measures that all work together towards achieving a better environment, better protections, empowerment for users, and a safer, more effective and fulfilling media experience.

In terms of categorisation, this is a fast-moving and fast-evolving environment. It is important to be specific about what we are trying to solve and about the best way of doing that. Of course, we are working with legislators and regulators to review this on an ongoing basis. That is where we as an alliance can try to bring the perspective of a wide range of media stakeholders. That involves our regulatory authorities.

Something that is particularly powerful here is the international sharing of experience from which we have been able to benefit. In the Irish context, we have particular experience of the issues and challenges we face. However, how is this experienced in other jurisdictions? What are the common definitions? We look to setting international standards and defining best practices. In our work, Media Literacy Ireland has been particularly effective in sharing our experiences, which have been picked up by the Council of Europe and by the European Commission in its work supporting this.

I always come back to the point that media literacy has a very long history. It is to the credit of the Members of the European Parliament who really ensured it had a solid position in the public policy agenda, found a key position within the AVMSD and is now very much part of the toolkit of regulators across Europe. That in itself is a very important achievement.

Where would Professor O'Neill identify gaps in provision of media literacy education resources?

Professor Brian O'Neill

There is an ongoing need to secure its sustainability and its long-term presence in the landscape. It cannot be seen as a quick fix at any point. That has been one of our constant struggles. The media literacy community will always remember that. It is great to have attention at a certain point in time, but that attention needs to be sustained in the long term. Ms Chapman might wish to amplify that point because it is something she has experienced for a long time.

Ms Martina Chapman

I have been working in the area of media literacy for almost two decades. In that time, I have seen different issues arise. People often think media literacy can fix things. Ultimately, media literacy alone cannot fix any of these issues. I will use a Covid example to describe what media literacy is like. When Covid first arrived, we were all encouraged to wash our hands because washing our hands helped to protect us from Covid. Washing our hands also helps to protect us from a multitude of other infections. Media literacy is a bit like that. It is a really good first line of defence against a range of issues.

Picking up on the question about vulnerable groups, we are all somewhat vulnerable and it is very difficult to pinpoint particular vulnerable groups because how we engage with media is a very personal thing. It needs to be an ongoing process for us to build resilience. We all need to normalise the idea of developing and updating skills on an ongoing basis. Earlier Professor Aiken mentioned AI, machine learning and virtual reality. They are likely to present real issues for us in the future. I have no doubt we will be having conversations about the media literacy skills we need to help people to develop to manage some of the issues that are coming down the road with those kinds of developments.

Does Dr. Culloty have anything to add on the types of interventions on disinformation? She previously talked about the shortcomings she has experienced in engaging with digital platforms when trying to get hard evidence to support research. I would like to get her thoughts on that.

Dr. Eileen Culloty

One of the major interventions we saw from the platforms was that they started labelling content. They put labels on content and links to public information, such as links to the HSE or government advice. On some occasions, they started removing content. Possibly the most significant development during the Covid pandemic was that the major platforms were suddenly willing to intervene in a way that they never had previously. The problem is that they have not been required to share any information about whether any of that was effective in any way. It is all well and good for platforms to claim they are doing something; there must be some channel to audit whether any of that is actually being done consistently and whether it is effective.

One of the pieces of work we did with the BAI was to look at how the platforms were using AI to apply labels to content. In the Irish context, we found Facebook groups and TikTok pages about the Covid-19 vaccine that were labelled completely inconsistently. In one instance, a fact check might have been applied to a piece of disinformation, while the exact same piece of content could appear in another Facebook group with almost the same members and no fact check was applied. While TikTok would say that all Covid-19 vaccine content carried a label, we found that some of the most popular searches did not carry those labels.

I emphasise that in coming back to something Professor Aiken said earlier about using technology to detect and categorise disinformation. As I did not catch all of what she said, it is possible that I missed some of the nuance. We need to be very cautious about assuming that these technologies can be the solution to this. We know that the platforms, with all their resources, are not able to apply labels accurately and consistently even in a very basic way, which gives rise to concern about whether smaller companies with far fewer resources could do it.

In addition, it is very tricky to define disinformation; it is highly contextual. What is disinformation one day might not be disinformation a week later, as we saw with Covid because the scientific knowledge was evolving. For much of the content that is quite problematic, the facts are not always clear. An example that comes up frequently in content we look at would be claims of links between migration and crime. Academics are still debating the relationship between crime and migration and have been doing so for decades. If academic researchers are still debating what the facts are, it is very difficult to say that a fact checker, a journalist or someone else could come along and say that something is categorically true or false. Extending that, it is extremely difficult to say it is possible to rely on a piece of AI to start categorising disinformation, not least when we consider that one of our major concerns with AI in general is its great potential to discriminate against certain groups, which can cause harm in itself.

I reiterate that it is an extremely complex area in which our core definitions of the problems we are trying to address are not clear. We need to be very careful about the kinds of solutions we come up with. A fundamental thing would be requiring platforms to be open to independent audits, or at least to share data with regulators, policymakers or researchers, so that there could be independent verification of what is happening online and we are not left speculating about what is happening and what might work.

That is an important point. I do not see anyone else looking to contribute, so I thank our witnesses for an informative and engaging session. We know they are extremely busy, but it is important and valuable for the committee to get their expertise and experiences in this vast area. We appreciate their insight.

The joint committee adjourned at 4.10 p.m. until 11.30 a.m. on Wednesday, 16 February 2022.