
Joint Committee on Tourism, Culture, Arts, Sport and Media debate -
Wednesday, 6 Dec 2023

Online Safety, Online Disinformation and Media Literacy: Discussion

Today's meeting, which will be in two separate sessions, is with key representatives to discuss online safety, online disinformation and media literacy. Its timing could not be more pertinent after what we have seen in the past couple of weeks. I thank the witnesses for being with us here. In our first session, we will meet representatives from TikTok, Meta and Google. To this end, I warmly welcome from TikTok, Ms Susan Moss, head of public policy, and Dr. Nikki Soo, safety and well-being public policy subject expert; from Meta, we have the familiar face of Mr. Dualta Ó Broin, head of public policy in Ireland; and from Google, we have Mr. Ryan Meade, government affairs and public policy manager, and Mr. Ollie Irwin of its trust and safety team.

The format of today's meeting is that I will invite the witnesses to deliver their opening statements which are limited to three minutes. I ask them to be cognisant of that limit to give my colleagues on the committee as much time as possible to engage in questions over and back thereafter. As the witnesses may be aware, the committee may publish the opening statements on our web page. Is that agreed? Agreed.

Before we proceed to the opening statements, I wish to explain some limitations in relation to parliamentary privilege and the practice of the Houses as regards references witnesses may make to other persons in their evidence. The evidence of witnesses physically present or who give evidence from within the parliamentary precincts is protected, pursuant to both the Constitution and statute, by absolute privilege in respect of the presentations they make to the committee. Witnesses are reminded of the long-standing parliamentary practice that they should not criticise or make charges against any person or entity by name or in such a way as to make him, her or it identifiable or otherwise engage in speech that might be regarded as damaging to the good name of the person or entity. Therefore, if witnesses' statements are potentially defamatory in relation to any identifiable person or entity, they will be directed to discontinue their remarks.

Members are reminded of the long-standing parliamentary practice to the effect that they should not comment on, criticise or make charges against a person outside the Houses or an official either by name or in such a way as to make him or her identifiable. I also remind colleagues of the constitutional requirement that members must be physically present within the confines of Leinster House to participate in public meetings. Therefore, any member who attempts to attend from outside of the parliamentary precincts will be asked to leave the meeting.

I propose that we proceed with opening statements. We will begin with Ms Moss on behalf of TikTok.

Ms Susan Moss

Gabhaim buíochas leis an gCathaoirleach. I thank the members of the committee for this invitation to speak to them and contribute to this important discussion. I am head of public policy at TikTok in Ireland. I am joined by my colleague, Dr. Nikki Soo, safety and well-being public policy subject expert.

In my opening statement, I will provide the committee with a short overview of TikTok’s commitment to online trust and safety and ongoing response to disinformation.

By way of background, we first invested in Ireland in 2019 and we continue to grow our presence and operations here, with over 3,000 employees in diverse roles, including data privacy and TikTok's trust and safety team. TikTok is a platform where people can find content that entertains and educates them and provides a safe space for them to express their true selves, whether that is reading through BookTok or bringing Irish music to a global audience through #Fleadh.

With a large and diverse community comes a responsibility to provide a safe and trustworthy environment. TikTok's approach, first and foremost, is safety by design rather than safety as a bolt-on. We do this in a number of ways, including third-party engagement with our European safety advisory council, which brings together leaders from academia and civil society.

Our community guidelines reflect our values and establish the kind of behaviour we expect on our platform. TikTok proactively seeks out and removes content which violates these guidelines. We enforce these rules using a combination of technology and our safety experts around the world. In order to support fair and consistent review of potentially violative content, moderators work alongside our automated moderation systems and take into account additional context and nuance which may not always be picked up by technology.

We believe in transparency. For this reason, we publish a quarterly transparency report, as well as the reports required of us under the EU Digital Services Act and the EU code of practice on disinformation.

TikTok is a platform for users aged 13 and above. We have processes in place to enforce our minimum age requirements, both at the point of sign-up and through the continuous proactive removal of suspected underage accounts from our platform.

Many of our safety features designed with teenagers in mind are a first for the social media industry. For example, when a teenager under 16 joins TikTok, their account is set to private by default and their ability to direct message is disabled. TikTok also has a 60-minute screen time limit as a default for under-18s. We also provide a suite of family pairing tools so parents and guardians can participate in their teen's experience and make choices that are right for their families and reflect their developmental needs.
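Expressed as configuration, the teen defaults Ms Moss lists might look something like the sketch below; the structure and names are illustrative assumptions, not TikTok's actual code.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AccountDefaults:
    private_account: bool
    direct_messages_enabled: bool
    daily_screen_time_limit_minutes: Optional[int]

def defaults_for_age(age: int) -> AccountDefaults:
    # Platform is for users aged 13 and above
    if age < 13:
        raise ValueError("under minimum age")
    return AccountDefaults(
        private_account=age < 16,                 # private by default under 16
        direct_messages_enabled=age >= 16,        # direct messaging disabled under 16
        daily_screen_time_limit_minutes=60 if age < 18 else None,  # 60-minute default for under-18s
    )

print(defaults_for_age(14))
```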

On disinformation and media literacy, disinformation is not a new problem but the Internet provides a new avenue to an old challenge. We treat disinformation with the utmost seriousness and are committed to preventing its spread, while elevating authoritative information and investing in media literacy to help build resilience in our community. We place considerable emphasis on proactive content moderation and the vast majority of violative content is identified and removed proactively before it ever receives a single view on TikTok or is reported to us. As part of the Digital Services Act compliance programme, under which the code of practice on disinformation will find a new legislative home, we have implemented a range of measures designed to keep our users safe across a number of key areas, including disinformation.

We welcome the opportunity to engage with the committee and I am happy to answer any questions members may have.

I thank Ms Moss very much. I am sure there will be many questions but we will get to that in a few minutes. I invite Mr. Dualta Ó Broin to give his presentation on behalf of Meta.

Mr. Dualta Ó Broin

Gabhaim buíochas leis an gCathaoirleach. I ask the Cathaoirleach to let me know timewise as I may be a little-----

I sure will; Mr. Ó Broin has three minutes.

Mr. Dualta Ó Broin

I am head of public policy for Meta in Ireland. I have been invited to speak to the committee today about online safety, online disinformation and media literacy. I will provide a brief overview of Meta's approach to these topics. I look forward to the committee's questions.

On online safety, while Meta believes in freedom of expression, we also want our platforms, Facebook and Instagram, to be safe places where people do not have to see content meant to intimidate, exclude or silence them. We take a comprehensive approach to achieving this by writing clear policies - community standards in the case of Facebook and community guidelines in the case of Instagram - about what is and is not allowed on our platform; by developing sophisticated technology to detect and prevent abuse from happening in the first place; and by providing helpful tools and resources for people to control their experience or get help.

We regularly consult experts, advocates and communities around the world to write our rules and we constantly re-evaluate where we need to strengthen them. Our content enforcement relies on a combination of people reporting content, AI technology and human reviews. Our online and publicly accessible transparency centre contains quarterly reports on how we are faring in addressing harmful content on our platforms, in addition to a range of other data.

We welcome that Coimisiún na Meán has finally been established to implement the revised audiovisual media services directive. From early on, we have been supportive of the objectives of the DSA and the creation of a regulatory regime in Europe that minimises harm effectively, protects and empowers people, and upholds their fundamental rights. In August we published details of additional transparency measures and user options which are part of our ongoing commitment to meet our regulatory obligations.

We welcome the publication of the Digital Services Bill 2023 yesterday and we look forward to Ireland having this legislation and regulatory resources in place to meet its obligations under the DSA by the February 2024 deadline. Members will appreciate that as Facebook and Instagram are in the process of being designated for the purposes of the revised AVMSD by Coimisiún na Meán and are now subject to regulation under the DSA, I am somewhat limited in what I can discuss on those matters.

I turn to what the committee has called online disinformation. We have taken significant steps to fight the spread of misinformation using a three-part strategy: we remove content that violates our community standards; we reduce the distribution of stories marked as false by third-party fact checkers; and we inform people so they can decide what to read, trust and share. In Ireland, we removed almost 1,000 pieces of misinformation from Facebook for this reason in the first half of this year. As part of the reduction effort, in the European Union we partner with 26 fact-checking organisations covering 22 different languages. In Ireland, we work with TheJournal.ie. We add warning labels to posts it rates as false and notify the person who posted it, pointing them to the fact checker's article debunking the claim. In the first six months of this year, these labels were applied to 1.1 million pieces of content on Facebook originating from Ireland. We also impose strict penalties on pages, groups, and Instagram and Facebook accounts that repeatedly share misinformation. This includes not recommending them to people and moving all of the content they share, regardless of whether it contains false claims, lower down the news feed so fewer people see it.
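As a rough illustration of the remove-reduce-inform flow described above; the function names, ratings and demotion factor are assumptions for illustration, not Meta's implementation.

```python
# A minimal, hypothetical sketch of the remove-reduce-inform strategy.
def violates_community_standards(post_text: str) -> bool:
    # stand-in for policy classifiers, e.g. harmful health misinformation
    return "drinking bleach cures covid" in post_text.lower()

def handle_post(post_text: str, fact_check_rating: str) -> dict:
    if violates_community_standards(post_text):
        return {"action": "remove"}              # remove: policy violation
    if fact_check_rating == "false":
        return {                                 # reduce and inform
            "action": "keep",
            "ranking_multiplier": 0.2,           # demoted so fewer people see it
            "warning_label": True,               # label links to the debunking article
            "notify_poster": True,
        }
    return {"action": "keep", "ranking_multiplier": 1.0}
```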

I draw Mr. Ó Broin's attention to the time.

Mr. Dualta Ó Broin

I know the committee members have the opening statement in front of them. However, on the point of media literacy, we have been members of Media Literacy Ireland since it was established, and we have a significant range of resources and tools to assist users to understand their experiences on the platform. One of the most significant is the guide about how artificial intelligence powers the user experience, which was published earlier this year.

Mr. Ryan Meade

I thank the Cathaoirleach and members of the joint committee for inviting us to speak on the topics of online safety, disinformation and media literacy. I work with Google as government affairs and public policy manager in Ireland. I am joined by my colleague, Ollie Irwin, from our trust and safety team, who leads our Google safety engineering centre in Dublin. Our mission at Google is to organise the world’s information and make it universally accessible and useful. Information quality and online safety are integral to this mission and a critical part of our responsibility to our users. We want to contribute to a more responsible, more innovative, and more helpful Internet. The products we have built have been a force for creativity, learning, culture and discovery. Products such as Google search have helped educate billions of people around the world by opening up their access to information from across the web. Our video-sharing platform, YouTube, allows users to watch and upload original video content and share this with others.

Our strategy for tackling illegal and harmful content is tailored to each of our products, based on the nature of the service, how it is used, and the specific risks which may arise. We recognise that our products can only be as helpful as they are safe. We are constantly innovating and exploring new ways to keep users of our platforms safe online, including as new AI technology continues to advance.

At Google we are proud to play a part in connecting users to diverse sources of media and news. In line with our mission, Google facilitates access to information and contributes to media plurality by reducing barriers, increasing choice for consumers, contributing to a diverse news landscape, and promoting independent news outlets. We are committed to fighting the spread of misinformation online, because helping people sort facts from fiction has never been more important, something we saw most recently during the disturbing events in Dublin. During incidents such as these, we focus not only on tackling harmful or illegal content, but also ensuring our systems prioritise connecting users with high-quality news from authoritative sources. We also empower users with more information and context, which can help them better evaluate the content they encounter online. When you search on Google, for example, our "about this result” tool allows you to see more information about any result, such as who is behind the information or when Google first indexed the page.

Media literacy is crucial in tackling disinformation and improving online safety, and it is clear there is an unmet demand. According to a report by Ipsos, fewer than one in ten Europeans have participated in any form of online media literacy training, while 60% of Europeans say they are interested in learning more. Since 2018, Google.org has supported more than 75 organisations creating positive online experiences, including many that provide media literacy and online safety programmes for children. In Ireland this includes Barnardos’ online safety programme for schools, which includes Google’s open-sourced Be Internet Legends curriculum. This flagship programme has been updated to add lessons on the new challenges around AI in easy to understand language. Since 2019, it has reached more than 150,000 primary school children aged between eight and 12. As a member of Media Literacy Ireland, we support the Be Media Smart campaign, which is currently running across a variety of platforms, encouraging people to stop, think and check that the information they are getting from whatever source is accurate and reliable.

In March 2021, Google contributed €25 million to help launch the European Media and Information Fund to strengthen media literacy skills, fight misinformation and support fact checking. The objective of this fund, which is independently run, is to help strengthen the media literacy skills of adults and young people, support and scale the critical work of fact checkers, and strengthen the expertise, research and resources to tackle misinformation. At the last count, more than 70 projects have benefited from €11.2 million since applications first opened, including Uisce Faoi Thalamh, a recent report on online disinformation in Ireland from the Institute for Strategic Dialogue.

Will Mr. Meade conclude as he has run out of time?

Mr. Ryan Meade

There is one more paragraph in my opening statement, which the committee can read. I conclude by saying there is no one-size-fits-all solution to media literacy and tackling disinformation, and it is not something Google does alone. Co-operation between academics, policymakers, civil society and technology companies is key. Only by working together can we have the biggest impact. We are committed to playing our part and collaborating to find ways to fight against disinformation. We look forward to seeing what we can achieve together.

The witnesses' brevity is appreciated. I turn to my colleagues, who I am sure have lots of questions lined up. They have five minutes each, with a little latitude, as not everybody may be with us today.

I start by expressing my disappointment that X-Twitter is not here today, and the reason was outlined in its correspondence with the committee. Can we have clarity on exactly what legal issues it referred to? It said "including and because of ongoing legal proceedings". Can we have clarity on that, if possible? It also said in correspondence that it would be willing to answer questions and correspond in writing or in a private session. I do not think it is good enough that it is refusing to attend the public session. I put that on record. Can we also be furnished with the legal reasons as to why it cannot attend today?

Is that agreed? Agreed.

I start by addressing all of the witnesses. I want short answers to some of the questions. First, how is content moderated? Will they tell me what percentage is automated and what percentage is moderated by people? Are those people directly employed, how many are employed in that capacity, and where are they based?

Ms Susan Moss

I start with the number of people we employ in moderation. We employ more than 6,100 people moderating European Union languages. We have a split between AI technology moderation and human moderation. We would not have the level of detail to give a percentage split between the two, but I am happy to follow up on that.

That is 6,100 employed.

Ms Susan Moss

I should also say there is a split between in-house and outsourced.

That is 6,000 employed in moderation.

Ms Susan Moss

Within the European Union languages. There are more globally.

Mr. Dualta Ó Broin

Across the board we have 40,000 people working in trust and safety. Of those, approximately 15,000 are in content moderation.

That is 15,000 in content.

Mr. Dualta Ó Broin

That is a mix of in-house and external. I do not have a breakdown of what that is.

There are 15,000 people and then there is automated.

Mr. Dualta Ó Broin

In our transparency report, we break down how much of the content removed was removed by AI and how much came in through user reports.

However, that might be flagged by AI to somebody for review but that is the only level of breakdown I have. For example, with spam, almost 100% would be removed by the systems. With hate speech, it is approaching 90%.

Will each of the witnesses furnish the committee with that breakdown of automated versus people directly employed and where they are based? I put the same question to the representatives from Google.

Mr. Ryan Meade

A number of those answers are in the transparency report we recently published under the DSA. Rather than cross over with that, we can provide those figures afterwards.

Does Mr. Meade have the figures to hand here?

Mr. Ryan Meade

I will ask Mr. Irwin to talk through the breakdown the Deputy mentions.

It is just the figures, because time is of the essence.

Mr. Ollie Irwin

In answer to the Deputy's first question on automated versus manual, taking YouTube as an example, we can see from our recent transparency report that approximately 95% would be automatically flagged for review and the remaining 5% would be a mix of user flags and priority flaggers.

Okay. What is that 95% or 5% in figures? What does that amount to in people?

Mr. Ryan Meade

That refers to the content, rather than people. Therefore, that is the content that is first flagged by an automated system rather than by a human. Our automated systems and humans work very much in concert. It is a combination of AI technology flagging material and then there are humans in the loop to undertake reviews.

Mr. Ollie Irwin

The AI technology has a confidence score and if it is low, it would then have to go for human review. That is in the case of something that needs contextualisation.
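The routing Mr. Irwin describes, in which high-confidence classifier decisions are actioned automatically and low-confidence cases that need contextualisation go to human review, might be sketched as follows; the thresholds are invented for illustration.

```python
AUTO_ACTION_THRESHOLD = 0.98   # assumption: near-certain violations actioned automatically
HUMAN_REVIEW_THRESHOLD = 0.60  # assumption: ambiguous cases need human context

def route(content_id: str, violation_score: float) -> str:
    if violation_score >= AUTO_ACTION_THRESHOLD:
        return f"auto-remove {content_id}"
    if violation_score >= HUMAN_REVIEW_THRESHOLD:
        return f"queue {content_id} for human review"
    return f"no action on {content_id}"

print(route("video-123", 0.75))  # low confidence -> human review queue
```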

The others were able to give out the figures for how many were actually employed on the human aspect of it. Does Mr. Irwin know how many people are employed in this?

Mr. Ryan Meade

For that, I would refer back to the DSA transparency report, and I will be happy to provide a reply afterwards. The exact figures are in there, so I do not want to cross over with that.

Will the witnesses talk me through the process of having dangerous disinformation removed? If we look at the aftermath of the recent riots, An Garda Síochána has said it contacted companies regarding online coverage, disinformation, the riots and the threats of violence. Coimisiún na Meán also contacted companies similarly. What is the immediate procedure now? I do not want a long drawn-out answer. In layman's terms, what do companies do when they get those reports? What is done instantly? I will start with Meta and work down the list.

Mr. Dualta Ó Broin

I can tell the Deputy at a high level what happened in this instance. We saw the media reports and our law enforcement team made contact with the Garda in that regard. That contact was maintained between the Garda and the law enforcement outreach team throughout.

The law enforcement section from Meta made contact, as opposed to the Garda contacting it?

Mr. Dualta Ó Broin

I would not get too focused on who contacted who. The important thing is-----

No, but I am just asking-----

Mr. Dualta Ó Broin

-----that the contact was there and it was consistent.

Okay. Meta made the initial contact. What happens then?

Mr. Dualta Ó Broin

The purpose of that is to establish what is happening, essentially. Law enforcement can then go through our portal to request certain actions regarding accounts and particular types of content; that is covered in our law enforcement transparency report. However, the discussion at that stage would not have been about removing content in that instance.

What was the next step after that?

Mr. Dualta Ó Broin

That was on the law enforcement side. Completely separate from that, a large team was established across the company to bring together all the relevant experts to ensure decisions could be made as efficiently and as effectively as possible. It was dealing with the content that was coming through in the queues to ensure the decisions were being made quickly and accurately.

Was it the same or similar in TikTok?

Ms Susan Moss

I will briefly take the committee through the steps we took that day. We do not permit misleading or false information on TikTok, especially harmful misinformation that causes significant harm to society or to individuals. We were absolutely confident in our response on that day. We were able to get ahead of content and events as they were unfolding. The first thing we did was to activate our crisis management protocols, both to remove violating content and to prevent the spread of misinformation or disinformation on the platform. We then worked closely and proactively with An Garda Síochána to support it. In addition to that, and more to the Deputy's question around disinformation, we scaled up our external fact-checking operation. We have 15 external fact-checking organisations, all IFCN-accredited. We activated our emergency fact-checking procedure in collaboration with those fact-checkers. We have a fact-checking organisation here in Ireland and it was flagging content not just on TikTok but horizon scanning across the Internet and flagging to us the individual claims it was seeing. That then helped us to prevent that type of content spreading on TikTok. To give an idea of what we were seeing, we saw 25 individual claims. That is not 25 posts; that is 25 individual stories. For example, one of them was that the military was moving into O'Connell Street. We were able to fact-check that and determine it was inaccurate. While content is being fact-checked, out of an abundance of caution, we remove it from being recommended to our community. When it has been fact-checked, we will either remove it or, if we cannot verify a claim, we will label that content to make our audience aware that it contains information that has not been verified. It is very fast-moving, but it really was a confident response from TikTok in this instance.
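The fact-checking flow Ms Moss outlines could be modelled, very loosely, as a set of states; the names and fields here are assumptions, not TikTok's system.

```python
# Pending claims are withheld from recommendation as a precaution; checked
# claims are removed if false or labelled if they cannot be verified.
def moderation_outcome(fact_check_status: str) -> dict:
    if fact_check_status == "pending":
        return {"removed": False, "recommendable": False, "label": None}
    if fact_check_status == "false":
        return {"removed": True, "recommendable": False, "label": None}
    if fact_check_status == "unverified":
        return {"removed": False, "recommendable": False, "label": "unverified"}
    return {"removed": False, "recommendable": True, "label": None}  # verified accurate

print(moderation_outcome("unverified"))
```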

I thank Ms. Moss. Is Google similar?

Mr. Ryan Meade

What I would say in respect of the day of the incidents is that we took a number of proactive steps on becoming aware of them; the first incident obviously being the very distressing stabbing and then the subsequent unrest in Dublin. Both of those triggered our incident protocols, whereby we undertook 24-7 monitoring to see whether these incidents would give rise to material in violation of our policies, either because of incitement to hatred or dangerous disinformation. On the first day, we did not see our platforms being used in respect of those events, although our teams were monitoring it. Other proactive steps taken included locking down reviews and edits in Google Maps around the site of the incident. We have invested in these incident protocols and policies around enforcement and they kicked in very quickly. We subsequently had a number of useful discussions with Coimisiún na Meán just to share information on the steps we were taking and we continued to monitor through the weekend. In general, we did not see material that would have triggered, for example, a notification of threat to life or limb for people using our platforms.

May I ask all three of the witnesses a "Yes" or "No" question related to age verification? Do their organisations request age verification, or require an ID to be uploaded as proof of age, before somebody can create an account on any of their sites? It is a "Yes" or "No" question.

Mr. Dualta Ó Broin

No, but we do it on challenge so if there was a reason we would suspect somebody-----

"No" is the answer from Meta.

Mr. Dualta Ó Broin

----is not of age, then we would.

What about TikTok?

Ms Susan Moss

No, but we have age-assurance policies.

No. Okay. What about Google?

Mr. Ryan Meade

Not at the point of account creation.

There are other Deputies with questions so I will have to move on.

I thank the witnesses for attending to discuss this very serious issue. I join Deputy Munster in saying that it is an absolute disgrace that Twitter X has not agreed to participate in this meeting. All so-called social media giants have a role to play in curbing the spread of disinformation. We can talk about Garda numbers and presence all we want, and about strategy and being able to predict what happened, but the disinformation, the harmful content and the incitement to hate being shared on all social media platforms contributed in a massive way to the events of a number of weeks ago. The scale of disinformation on social media outlets is out of control. We have to accept that. The scale of fake news is out of control. The scale of incitement to hatred is escalating and it is all being ramped up on social media platforms. In their opening statements, the witnesses referred to things such as the importance of a safe space and trustworthiness, and to addressing harmful content with sophisticated technology. However, I have to say this is failing. It is failing to curb the spread of disinformation and incitement to hate. We are talking about multibillion euro corporations here; they have to do more. I know the witnesses mentioned figures for the number of staff working on curbing the spread of disinformation, but the companies have to do more.

I will go to Meta first. Mr. Ó Broin referred to 1,000 pieces of misinformation in the first half of this year.

Surely, that is only scratching the surface. Is that fair to say?

Mr. Dualta Ó Broin

I would say that is misinformation, which violates our community standards. That type of thing would be like saying drinking bleach cures Covid-19, for example. There is a whole lot of other content that would come under other community standards.

In general, in terms of community standards and incitement to hate, misinformation and disinformation, would Mr. Ó Broin say Meta is still only scratching the surface in terms of what is out there?

Mr. Dualta Ó Broin

That 1,000 is a very particular category of content to which I am referring. In our other transparency report, we would have the details of the millions and billions that-----

Would Mr. Ó Broin say Meta is getting most of it or only a small proportion?

Mr. Dualta Ó Broin

It is a constant challenge. We are not going to claim we have this sorted and that we are going to stop investing in this. We have to continue to evolve our operations. The people who want to share this content will also evolve their operations, some of which are quite sophisticated.

In terms of trying to find a solution to this, Meta owns WhatsApp, and a number of weeks ago when this incident kicked off, we know and are all aware of what happened. I read the WhatsApp messages and heard the voice notes calling for immigrants to be killed. We have heard them. I am not sure if Mr. Ó Broin heard them but I certainly did. Would Mr. Ó Broin agree there was a failure to curb this? That is serious incitement to hate. It was out there and it spread like wildfire before anything was done. What is Meta going to do to stop that happening in the future?

Mr. Dualta Ó Broin

WhatsApp is a fundamentally different type of technology. It is end-to-end encrypted. In that case, we cannot scan the content of the messages. We cannot go in and scan what is actually happening in the messages in the same way as we could scan posts on Facebook on public profiles etc. We rely on users reporting the content from WhatsApp in-app. That is not just because of end-to-end encryption. That is because of EU law as well in relation to this-----

However, by the time a user reports something like that, it has gone too far. The group is already on the street. They are out there and activated. Surely, we are going to have to think of another way to curb that type of messaging.

Mr. Dualta Ó Broin

I am sorry to cut across the Deputy. We have taken steps to reduce virality, for example. One of the early things we learned with WhatsApp was that virality was a significant problem. Messages were being shared with thousands upon thousands of groups. Therefore, we reduced the number of groups with which people could share messages. We reduced the number of times that content could be forwarded. We added a feature whereby people could click out of WhatsApp to establish whether a story is true or not by searching on a search engine. There are things we are doing, Deputy.

Mr. Dualta Ó Broin

I am not saying it is-----

It is not enough. That was an audio message, obviously. We heard about the image of the tank rolling into Dublin, which was again widely shared on WhatsApp. Now, that is an image. That is something visual. Surely, that type of thing can be stopped.

Mr. Dualta Ó Broin

Again, if it is shared on WhatsApp within the encrypted space, unless it is reported to us, we cannot take action on it.
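The virality limits Mr. Ó Broin refers to above might look, in rough outline, like the following; the specific numbers are assumptions, not WhatsApp's actual values.

```python
NORMAL_FORWARD_CAP = 5   # assumed cap on chats reachable in one forward action
VIRAL_FORWARD_CAP = 1    # assumed tighter cap for highly forwarded messages
VIRAL_THRESHOLD = 5      # assumed forward count at which a message counts as "viral"

def can_forward(times_already_forwarded: int, number_of_target_chats: int) -> bool:
    cap = VIRAL_FORWARD_CAP if times_already_forwarded >= VIRAL_THRESHOLD else NORMAL_FORWARD_CAP
    return number_of_target_chats <= cap

print(can_forward(0, 4))   # True: within the normal cap
print(can_forward(6, 2))   # False: viral messages limited to one chat at a time
```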

I will refer to TikTok for a second if that is okay. I will refer to another incident that appeared on my TikTok news feed. It does not reflect the things I look at, but it was a video of an individual outside Government buildings discussing the welfare of that poor five-year-old girl who is receiving serious treatment and using that to incite hatred. This was on that stream, and there was a follow-up video. It is all well and good someone like me viewing that but if it is viewed by younger people or people who might be inclined that way, this is inciting hatred. It was there, however. How does Ms Moss explain something like that appearing on the platform?

Ms Susan Moss

There is room for improvement and we recognise that. We do not create a permissive environment on TikTok for this type of behaviour to find a way to grow. To refer back to some of the numbers, we proactively remove 96% of harmful misinformation. More than 80% of that is removed before it gets a single view and more than 61% is removed within 24 hours. We will not catch every single instance of it. If people watching see a post they believe is harmful or spreads misinformation, we would urge them to report it through the TikTok channels.

We might look at some of the reporting from, for example, the Institute for Strategic Dialogue. The Uisce Faoi Thalamh report, which Mr. Meade referred to, looked at the misinformation and disinformation ecosystem in Ireland. It stated categorically that misinformation and disinformation bad actors are struggling to get through on TikTok. I absolutely appreciate that we will not always get it right-----

Ms Susan Moss

That was in the report.

I am sorry; I have to dispute that because these videos are appearing regularly. I had to listen to what this individual had to say, which I was appalled by, but because of that extended view of the video, the second video then came up. That is happening right throughout the platform. Ms Moss talks about figures of 80% and 96%. It is not good enough. The damage is being done. Incorrect information, lies, racism and incitement to hate are being spread. I am sorry, but these are multi-billion-euro corporations. We have to do more to stamp this out. We all have a responsibility. Government absolutely has a responsibility, as do Coimisiún na Meán and this committee. It is getting out of hand and we are going to see more instances like this.

Ms Susan Moss

To that end, we are a signatory to the code of practice on disinformation, COPD. We have extensive reporting obligations under the COPD. It will find a legislative footing under the Digital Services Act. The COPD is a co-regulatory procedure. We are also regulated on disinformation under the Digital Services Act. Therefore, where there is a systemic risk we, as a platform, have a responsibility, and disinformation is included in those systemic risks. There is regulation but we, as a platform, have not waited for regulation. We have gone above and beyond to implement measures to try to catch disinformation. I absolutely accept that it is an industry-wide challenge. There is no finish line to catching this kind of content. We can do better and will do better.

I thank Ms Moss. Do I have time for one more question? This is for Meta. We are receiving a lot of communication from people who are concerned about the concept of shadow banning, particularly with regard to the conflict in Palestine and Gaza. There are theories out there, some of which are, in fact, very hard to dismiss, that much pro-Palestinian content that is trying to portray what is happening in Gaza and the destruction, devastation and death is being shadow banned on the Meta platforms. In other words, some people are finding that their videos and reels are getting far fewer views. Some are finding it very hard to actually find pro-Palestinian accounts. There is real concern. As we speak, I understand there is a discussion on this issue in the audiovisual room upstairs whereby pro-Palestinian activists who are looking for a ceasefire and for the killings and murders to stop are finding it hard to get traction on Meta, in particular. Can Mr. Ó Broin comment on that, please?

Mr. Dualta Ó Broin

The comment is that our intention is to apply our community standards objectively and fairly regardless of what the content is. Speaking generally, because I do not want to speak about the Israel or Gaza conflict in particular, we see this from both sides in conflicts. We will get complaints of this nature from both sides in a conflict.

There are regulators now in place to ensure we are applying our community standards objectively and fairly and with equity across the board, and not just the European Commission but Coimisiún na Meán here as well. It is our intention to apply them fairly and with equal treatment to everyone but the regulators are there to oversee that. I will make a general point. Part of the reason we set up the oversight board in 2019 was that we realised that, in certain instances, it should not be left to platforms like ours alone to make a decision on what should or should not remain on the platform. That is an additional layer of oversight we have, independent of the regulatory structures that are now set up in the EU.

I thank the witnesses. I apologise if I was a bit abrasive in the questioning but we all have to do better in terms of stopping this.

I thank Deputy O'Sullivan. Next on my speaking rota is Senator Sherlock.

I thank everyone for coming for this hearing today. I, too, join Deputies O'Sullivan and Munster, along with the Chair, in expressing frustration that representatives from X are not appearing here today. This is really my first question. I have read the statements carefully and listened to what the witnesses said about the efforts they are making across all four platforms, as I understand it today, in terms of the education piece, or prebunking, as YouTube called its initiative. I want to ask about the platforms' recommendation systems.

The question that has to be asked is whether the model upon which all four platforms operate is undermining all efforts to try to prepare people for the content they will see. Ultimately, it is the model of the four platforms that is pushing the most extreme content on people. What has been done to alter the platforms' recommendation systems? We know from UN investigators that Meta played a determining role in the Myanmar genocide in 2017. In the case of Russia, the European Commission has spoken about how the recommendation system, particularly in Meta, has played an influential role. What changes has Meta made to its recommendation system? Is it the case that the recommendation system is the reason we are having these conversations? It is not only that I post photos of my children, and that is fine; it is that Meta is promoting vile content that poured fuel on the fire of what happened last Thursday week. Maybe I will start with Meta.

Mr. Dualta Ó Broin

The notion that we are just interested in any engagement, and that it does not matter what the content is, could not be further from the truth. Our business model is based on advertisements. Brands do not like to be put beside harmful content, so it is in our direct commercial interest, as well as in the interests of our social responsibility, to remove and reduce as much harmful content as possible from our platforms.

In relation to the recommender system and the search system, we have introduced a mechanism that means certain terms are banned from search. If you are searching for certain things, you will get pop-up notifications about sources of information in relation to that.

In relation to the recommender systems, we brought in options so users can decide whether they want the algorithmically defined feed or a chronological feed based on their friends and family. In addition, we published a document, as referred to in my opening statement, on how the AI system actually works, which is important for people to understand what is happening and why they are seeing the feed they see when they go onto Facebook or Instagram.
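The user option Mr. Ó Broin mentions, an algorithmically ranked feed versus a chronological friends-and-family feed, can be sketched loosely as follows; the relevance score is a placeholder for a learned model, and the data structures are invented.

```python
def predicted_relevance(post: dict, user: dict) -> int:
    # toy stand-in for a learned ranking model
    return len(set(post["topics"]) & set(user["interests"]))

def build_feed(posts: list, user: dict, mode: str = "ranked") -> list:
    if mode == "chronological":
        friends_posts = [p for p in posts if p["author"] in user["friends"]]
        return sorted(friends_posts, key=lambda p: p["timestamp"], reverse=True)
    return sorted(posts, key=lambda p: predicted_relevance(p, user), reverse=True)
```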

I suppose the issue is that it is not working. We saw some of the content that was on Facebook. We spoke about what was on WhatsApp. We have seen what is on TikTok. The efforts do not go far enough.

Mr. Dualta Ó Broin

Coimisiún na Meán and the European Commission will be determining whether the efforts we are making to reduce illegal content and other harmful content on our platforms are sufficient. That is one of the reasons they have been established.

Okay. TikTok might come in on this question.

Ms Susan Moss

I would be happy to do so. To explain slightly, the algorithm on TikTok is slightly different. It is driven by a content graph as opposed to a social graph. The content you see is based on your own interests; in other words, what you have indicated to us you are interested in and what your topics of interest are, as opposed to what a family member or acquaintance is sharing. That is how the algorithm works. An inherent challenge in any recommendation system is ensuring the breadth of content we serve to you is not too narrow, is not too repetitive and is not amplifying the same type of content repeatedly. We work with AI ethics experts and with our own content advisory council to ensure we deliver content that is based on the variety of your own interests. Something we have done, because the Senator asked-----

What if I am interested in hate?

Ms Susan Moss

We do not permit hate, hate speech or hateful ideologies to appear on TikTok. Therefore, that would not come as part of your algorithm or your recommender system.

The Senator asked about some of the recommendations we are doing. We have a non-personalised feed on TikTok. If you do not want to have content served to you based on your interests, you can opt out of personalisation. The other thing we have is a refresh system, so you can refresh TikTok as if you had downloaded it for the very first time and it learns your interests once again, maybe in a different way. We also have a filtering process. If you do not want to see content based on religion, for example, you can put in hashtags in the background to remove that type of content from your feed.
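Taken together, the controls Ms Moss describes might be sketched as below; all field names are invented for illustration, and this is not TikTok's implementation.

```python
def tiktok_style_feed(candidates: list, user: dict) -> list:
    if user["personalisation_enabled"]:
        # interest-based content graph: match inferred topics of interest
        pool = [c for c in candidates if c["topic"] in user["interests"]]
    else:
        # non-personalised opt-out: content based on locality, not interests
        pool = [c for c in candidates if c["region"] == user["region"]]
    # user hashtag filters strip matching content from the feed
    blocked = set(user["filtered_hashtags"])
    return [c for c in pool if not blocked & set(c["hashtags"])]
```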

Those controls and that filtering process require a lot of engagement from a TikTok user. Some very young users, and even older users as well, would not necessarily be able to navigate them. We are very much reliant on individuals.

I am almost out of time but I have one other question that I would like to ask after Google has answered this question.

Mr. Ryan Meade

Recommenders are one way that users find content on some of our products, like YouTube. They can also find it through search or through following topics, etc. It is a system that is valued by users and by creators because it allows them to build an audience based on the content they are producing, which may not be at the very top of people's searches. If it is not pop music or sport but is, for example, agricultural or political, a recommender system allows them to find an audience of people who are interested in that. It is an important tool but, as the Senator has indicated, safeguards are required to ensure it does not end up being harmful.

Before I ask Mr. Irwin to talk about that in more detail, I will mention that we are continually iterating on how recommendations work. Next year, on YouTube, we will be rolling out a product update to ensure that teen users who are watching a sequence of videos which on their own are not harmful, but which in sequence may pose a risk to well-being or mental health, will no longer be able to continually watch that type of content. I refer to the sort of content that may relate to body image and that sort of thing. We are definitely attending to the question of recommender systems. They are valued by users and they are important in terms of the way the systems work, but there are always ways we can improve. I might pass to Mr. Irwin to mention some of the things we have done.

Mr. Ollie Irwin

I will touch on some of the safeguards but I will be brief. For some topics, such as Covid, you will see high-quality information pinned at the top. We have worked with organisations like the WHO and local health bodies in order to ensure that when you are searching for information, this high-quality information is pinned at the top. It is the same for elections. You see high-quality authoritative information pinned at the top. The feed may be different but the high-quality information rises up.

The Senator mentioned prebunking. I would like to just touch on that. We launched the prebunking campaign across central and eastern Europe. It involved videos countering narratives that research was identifying across platforms. For example, we played counter-narrative videos on anti-immigration themes to try to prebunk or debunk some of the myths that were circulating, and people would see these as pre-rolls before videos. That is an experiment in trying to counter bad information with good information.
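The "pinned authoritative information" mechanism Mr. Irwin described a moment ago, for sensitive topics such as health or elections, can be sketched loosely as follows; the topic and domain lists are illustrative assumptions.

```python
SENSITIVE_TOPICS = {"covid", "elections"}
AUTHORITATIVE_DOMAINS = {"who.int", "hse.ie"}   # assumed allowlist, e.g. WHO, local health bodies

def rank_results(topic: str, results: list) -> list:
    if topic in SENSITIVE_TOPICS:
        pinned = [r for r in results if r["domain"] in AUTHORITATIVE_DOMAINS]
        organic = [r for r in results if r["domain"] not in AUTHORITATIVE_DOMAINS]
        return pinned + organic   # authoritative sources rise to the top
    return results
```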

I must move on. I call Senator Warfield.

I thank the Chair. The guests are all welcome here. Cuirim fáilte rompu. It is a pity Twitter did not bother to show up. I echo the sentiment expressed by my colleagues.

I ask the witnesses to set out their organisations' respective plans for the European and local elections next year, focusing solely on paid advertising as opposed to organic political content. Obviously, the concerns here are huge. I wonder if there are plans to black out advertisements. If so, when will this kick in? Will there be a ban on international interference, and maybe on AI advertising as well? How can one tell if an advertisement is created by AI? I would appreciate it if people would be brief on this. There are three sections I would like to go through and this is one of them.

Ms Susan Moss

I am happy to start. Just to confirm, TikTok does not permit political advertising on its platform.

Mr. Ollie Irwin

I will go through some recent safeguards. We launched a new policy just two weeks ago that requires disclosure if generative AI is used in a political advertisement. We already have our election advertisements verification process, under which the actual person placing the advertisement, or behind the advertisement, has to be verified. That verification process also helps with our transparency site. If you go onto our elections transparency site, you will see each advertisement, how much was spent and who the spender was. It is a question of full disclosure based on that verification process. We also have our long-standing policies on misrepresentation. That would address, for instance, a deepfake of someone on something like YouTube.
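A hypothetical sketch of the election-ad checks Mr. Irwin lists, advertiser verification, a generative-AI disclosure requirement and a public transparency record; the field names are assumptions, not Google's API.

```python
def review_election_ad(ad: dict):
    if not ad["advertiser_verified"]:
        return "rejected: advertiser identity not verified"
    if ad["uses_generative_ai"] and not ad["ai_disclosure"]:
        return "rejected: generative-AI disclosure missing"
    transparency_entry = {          # published on the elections transparency site
        "advertiser": ad["advertiser_name"],
        "spend": ad["spend"],
        "ai_disclosure": ad["uses_generative_ai"],
    }
    return ("accepted", transparency_entry)
```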

Are there any plans for a blackout in the week or month in advance of any election date?

Mr. Ryan Meade

No. We would rely on the transparency and verification. Where there are local regulations in respect of blackouts, we would respect those. Where there is no law in place and the electoral system has not decided on blackouts, we would not impose those. In all cases, there will be verification and transparency.

What about taking out an ad as foreign interference?

Mr. Ryan Meade

I think that is covered by what Mr. Irwin said in that the person placing the ad has to be verified as an EU entity.

Mr. Dualta Ó Broin

As a high-level point, we recognise how significant the period ahead is from an elections point of view, not just globally but also here in Ireland over the next two years. We have announced for the US that there will be a blackout but we are not going to put that in place unless, as Mr. Meade said, there are local requirements to do so.

As for transparency, we have been running the ads library since 2017 and ads are stored there for a period of seven years. Interestingly, all ads, regardless of whether they are political, are now in the ads library as well. That is another transparency product we have added.

On the verification requirements, you need to be in the country to run an ad in Ireland. You need to have a location here and a disclaimer, and there is a whole verification process, which I am sure many committee members are familiar with, with steps to be gone through. We know how significant the period ahead is and we are doing everything we can to ensure we approach our role properly.

On AI, we have a similar policy on the use of synthetic media. If anyone uses our AI tools to create something, that will have a watermark on it, and we have a policy on deepfakes as well.

To go back to Google, am I correct in saying you only have to be in the EU to take out an ad in Ireland or do you have to be in Ireland?

Mr. Ryan Meade

For the European elections, we allow advertisers to be in a different place because, obviously, some candidates will be part of EU political groups and they are allowed to place ads.

I want to touch on what we are hearing about censorship and the disproportionate censorship of Palestinian voices. I have seen stories about hiding hashtags and content takedowns. We often see people using workarounds to try to trick the algorithm, such as by using an "@" sign in the word "Gaza". Does that work? Mr. Ó Broin said that complaints of this nature are coming from both sides, but it has been suggested it is more intense on Palestinian voices, given many Israeli accounts might be official entities.

In 2022, following the 2021 Palestine-Israel conflict, a Facebook-commissioned Business for Social Responsibility, BSR, report concluded that based on the data reviewed, Meta's actions in 2021 appeared to have had an adverse human rights impact on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation and non-discrimination and, therefore, on the ability of Palestinians to share information and insights about their experiences as they occurred. Our concern is that there is a huge threat to the flow of information, credible journalism, civil society voices and human rights defenders. I am also conscious that under the DSA, there is a need for transparency and shadow banning is covered. I would welcome a comment from all the companies, given they have all been accused of significant and disproportionate censorship of Palestinian voices. I am not just talking about Meta.

Mr. Dualta Ó Broin

It is similar to the response I gave to Deputy O'Sullivan. Our intention and objective is to apply the policies fairly, regardless of who the user is or where they are coming from. The issue is the content or the behaviour, either of which can be violating, and the intention is to apply that fairly and equitably. It would not make sense if we were to take any other type of approach because we would then be accused of being partisan, which would lead us down a path that would be extremely difficult for us to follow. That is our intention and it is what we are telling regulators we are doing. We are outlining in reports that we are applying these policies objectively and, obviously, regulators will then make a decision on whether we have done so.

Moreover, our community standards are not standing still. They are not set in stone, and that is one of their strengths in contrast with laws, which are. They are constantly evolving, we are constantly learning from things that happen and we constantly update them to ensure they are right where they need to be. We do that by consulting not just law enforcement experts and regulators but also NGOs throughout the world and fundamental rights organisations to ensure we are getting the balance right, but it is a tricky balance to achieve.

Dr. Nikki Soo

At TikTok, we apply our standards equally, regardless of the event. This was, of course, a significant event. It has never been more important to make sure we apply our standards carefully. The Senator asked about using hashtags or faked activity. We deployed our team really quickly, but I have a couple of statistics on this issue to share. Since 7 October, we have removed more than 24 million fake accounts globally relating to the conflict and more than 500,000 bot comments under hashtags relating to the conflict. It is not really about any side as much as making sure the content is accurate and legitimate. Apart from that, in general, wherever we find any content that violates our community guidelines, including violent content and incitement to violence, we will remove it.

Mr. Ryan Meade

I am not aware of any specific allegations in respect of our platform, so if there is anything the Senator can share, we will be happy to take a look at it. Events such as these challenge all the processes and policies that have been put in place, which are very much designed on our side to be as unbiased and even-handed as possible, based on whether the content specifically violates our polices.

I might ask Mr. Irwin to talk about the technicalities behind how we try to achieve that.

Mr. Ollie Irwin

Touching on bias, we have a responsible innovation team that tests our models for bias via adversarial testing. We will be challenging the model to produce outputs and it will then be tuned to address those biases.

Can a user trick the algorithm by using an "@" sign, for example?

Ms Susan Moss

We are alive to those kinds of things, such as using "@" symbols. We have more than 40,000 trust and safety professionals globally and they are absolutely aware of those kinds of tricks to bypass our systems.

Mr. Dualta Ó Broin

The misspelling of hashtags is another method we have seen being used in the past to try to get around our moderation systems. As Ms Moss said, we are alive to it.

Mr. Ollie Irwin

This whole technique goes back to the days of spam.
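The normalisation idea behind this exchange, mapping common character substitutions back to a canonical form before matching against blocked terms, can be illustrated with a toy example; this is not any platform's real matcher.

```python
import re

# common substitutions seen in evasion attempts, e.g. "@" for "a"
SUBSTITUTIONS = str.maketrans({"@": "a", "0": "o", "1": "i", "3": "e", "$": "s"})

def normalise(text: str) -> str:
    text = text.lower().translate(SUBSTITUTIONS)
    return re.sub(r"(.)\1{2,}", r"\1", text)  # collapse runs of three or more repeated characters

print(normalise("sp@@@m"))  # -> "spam"
```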

I thank each of the witnesses for being here and having the trust and confidence in their systems to allow them to be scrutinised in the public domain, something that people in X, or Twitter, are obviously not happy to do. I am sure the public policy team from X is watching these proceedings as we speak. I invite them to come before us, a public committee, in a fully transparent manner. I invite them to drop an email to the owner suggesting he desist from commenting on affairs within Ireland, which he patently knows nothing about. He has personally served to stoke hatred and conflict in recent times in Ireland and he should be deeply ashamed of those actions.

I may be somewhat unusual in believing social media in general is a force for good. As a species, we have been seeking to communicate with one another since time began, using whatever means were available to us at the time. Mr. Gutenberg was berated for doing something so rebellious as to put the spoken word into a book, and the mobile phone is the recent expression of our desire to communicate with one another in an exceptionally sophisticated and technologically advanced manner. If what happened in O'Connell Street had happened 40 years ago or even 30 years ago, which it could have done, and had been organised by people ringing one another to tell them to get to O'Connell Street because something was going down, we would not have had representatives from Eircom before the committee to ask them how that had happened.

It was simply the devices that were available at the time to communicate and mobilise for whatever reason they saw fit. That needs to be said. It is also the responsibility of all of us to ensure whatever kind of social interaction happens in this country, and indeed globally, is done in a respectful manner and with respect for people in all their shapes, forms and expressions, and does not serve to stoke hatred or disharmony. These are things that are unfortunately becoming more common in modern society. I thank the company representatives for being here, for listening and for engaging. We may be somewhat misguided in constantly calling out social media companies on their willingness to remove content and to remove it in a quick, efficient manner. The real nub of the matter, which was referred to earlier by Senator Sherlock, is how social media chooses to share posts with users, be it TikTok, Facebook or Twitter. Frances Haugen, a former employee of Facebook, appeared before this committee last year. She outlined to us what she felt was a very sinister policy deep within Facebook, or Meta, that essentially saw the development of a multiplicity of algorithms to share content the company felt would drive engagement to the greatest possible extent. That is where we really need to look when we are looking under the bonnet of all social media companies.

I have a question to begin with for TikTok and perhaps Meta. When a user initially signs up to use the platform, they are given the choice at that time - as far as I am aware and perhaps the witnesses can clarify it for me - about whether they are served content recommended by an algorithm or simply content emanating from the people they choose to follow or engage with on the platform. Is that the case? Is that option given to people at the very beginning of their engagement? If so, do the companies think it would be a good idea to regularly remind people they have the option to remove themselves from that algorithmically-engineered world and simply engage with the people they have chosen to engage with through becoming friends or whatever the equivalent is on TikTok?

Ms Susan Moss

I thank the Deputy. People can choose to have a non-personalised feed on TikTok. That means the content surfaced to a user is not based on their personal interests or on what they have indicated, through behavioural signals, that they are interested in on TikTok. Rather, the content is served to the user based on their locality. That is one of the options.
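
To make the distinction concrete, here is a minimal Python sketch of a feed service that branches between a personalised ranking and the locality-based, non-personalised feed Ms Moss describes. Every name here (User, interest_signals, local_popularity) is a hypothetical stand-in; TikTok's actual ranking systems are not public.

    from dataclasses import dataclass, field

    @dataclass
    class User:
        user_id: str
        locality: str                     # e.g. a country or region code
        personalised: bool = True         # the choice exposed in settings
        interest_signals: dict = field(default_factory=dict)  # topic -> strength

    def build_feed(user: User, candidates: list) -> list:
        """Rank candidate videos for one user.

        Personalised: rank by the user's behavioural interest signals.
        Non-personalised: keep only local videos, rank by local popularity.
        """
        if user.personalised:
            return sorted(candidates,
                          key=lambda v: user.interest_signals.get(v["topic"], 0.0),
                          reverse=True)
        local = [v for v in candidates if v["locality"] == user.locality]
        return sorted(local, key=lambda v: v["local_popularity"], reverse=True)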

What about Meta?

Mr. Dualta Ó Broin

I would have to check what the actual sign-up flow is in terms of what the options are, but I am happy to come back to the Deputy.

Okay. We are now moving into a space where even the thing we have always trusted when engaging with photo and video content, namely our own two eyes, cannot be relied on any more. It is simply not possible. I have a pretty serious fear around deepfake technology, which is advancing every day in its capacity to do fairly extraordinary things, with video in particular. Are the companies represented today equally adept at innovating, given how rapidly the technology is moving? Deepfake video technology can take anyone in this room right now and have us say pretty much anything the creator wants. What kind of technologies are the companies evolving to ensure that kind of content is not shared? Once it is out there and circulating, it is very difficult to pull it back.

Mr. Dualta Ó Broin

I thank the Deputy. I linked to our adversarial threat report in my opening statement. It covers generative AI and the types of trends we are seeing. The technology has improved, and our systems are scaling, and can scale further, to deal with generative AI as a threat. A lot of co-ordinated inauthentic behaviour, which is where we might see generative AI being used, is detected on the basis of behaviour rather than the content itself. To date, the content has not necessarily proven to be the thing that helps us remove networks of co-ordinated inauthentic behaviour. Second, while bad actors have capabilities on content, the capability to doctor images goes back a long way, though I appreciate it has taken a leap in the last while. The real challenge for bad actors trying to upset the democratic processes in various countries - and some of them succeed - is the behavioural element, that is, getting accounts that can act as the conduit for that content. The content itself has not, to date, been the issue.
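
As an illustration of detection that keys on behaviour rather than content, here is a toy Python sketch in the spirit of what Mr. Ó Broin describes. The features, weights and thresholds are invented for the example and are not Meta's.

    from dataclasses import dataclass

    @dataclass
    class AccountActivity:
        account_age_days: int
        posts_per_hour: float
        shared_ip_count: int        # infrastructure overlap with other accounts
        synchronised_posts: int     # posts made within seconds of peer accounts

    def cib_risk(a: AccountActivity) -> float:
        """Heuristic risk score for co-ordinated inauthentic behaviour.

        Deliberately never inspects what the account posts - only how
        it behaves - mirroring the behaviour-first approach above.
        """
        score = 0.0
        if a.account_age_days < 30:
            score += 0.3
        if a.posts_per_hour > 10:
            score += 0.3
        score += 0.05 * min(a.shared_ip_count, 5)
        score += 0.02 * min(a.synchronised_posts, 10)
        return min(score, 1.0)

    def flag_network(accounts: list, threshold: float = 0.6) -> bool:
        """Escalate a cluster for human review if most members score high."""
        high = [a for a in accounts if cib_risk(a) >= threshold]
        return len(high) >= max(3, len(accounts) // 2)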

I ask Mr. Ó Broin to be brief because we are way over time and I want to give other witnesses an opportunity.

Mr. Dualta Ó Broin

I am sorry. There are extensive details in the report about what we are doing.

Ms Susan Moss

AI can certainly make it more difficult for people to distinguish between what is real and what is fake, so at TikTok we have AI labelling on content. That means our audience can tell whether a video contains synthetic media, in part or in full. We are also constantly developing our systems to proactively identify that synthetic media. In addition, while this is an opportunity, it is also an industry-wide challenge, and for that reason TikTok has joined a framework for the responsible development of AI. Together with industry partners, we are working to ensure AI is designed and applied in ethical ways, in a manner that benefits society and develops AI in a universally accepted way.
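
A rough Python sketch of how such labelling might work: a video carries a label either because the creator declared it AI-generated at upload or because a proactive detector scored it above a threshold. The field names, label wording and threshold are assumptions for illustration, not TikTok's implementation.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Upload:
        video_id: str
        creator_declared_ai: bool   # creator ticks "AI-generated" at upload
        detector_score: float       # proactive synthetic-media detector, 0.0-1.0

    def synthetic_media_label(upload: Upload, threshold: float = 0.8) -> Optional[str]:
        """Return the label shown to viewers, or None if no label applies."""
        if upload.creator_declared_ai:
            return "Creator labelled as AI-generated"
        if upload.detector_score >= threshold:
            return "Flagged as possibly AI-generated"
        return None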

Mr. Ollie Irwin

Google was the first major technology company to launch its own AI principles, back in 2018. At a technical layer, Google DeepMind is developing SynthID, which applies a digital watermark to generative AI-created images. We also have a number of policies that require disclosures. We have "About this image", where a user can click on an image and see its origin and how long it has been online. A lot of these tools are there to empower the user to better understand what they are seeing. We also have Bard, our own experiment in generative AI. It has classifiers built on our safety policies both at the prompt level, that is, the input level, and at the output level, so it does not produce content that is violative of our policies.
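
To illustrate the two-stage arrangement Mr. Irwin outlines, here is a schematic Python sketch of a chat endpoint with a safety check at the input (prompt) level and again at the output level. The classifier and model calls are stand-in stubs for the example, not Google APIs.

    BLOCKED_TERMS = {"build a weapon", "dox"}   # stand-in for a trained classifier

    def violates_policy(text: str) -> bool:
        """Stub safety classifier; a real system would use trained models."""
        lowered = text.lower()
        return any(term in lowered for term in BLOCKED_TERMS)

    def generate(prompt: str) -> str:
        """Stand-in for the generative model call."""
        return f"[model response to: {prompt}]"

    def safe_chat(prompt: str) -> str:
        # Input-level check: classify the prompt before generation.
        if violates_policy(prompt):
            return "Sorry, I can't help with that request."
        # Output-level check: classify the draft answer before returning it.
        draft = generate(prompt)
        if violates_policy(draft):
            return "Sorry, I can't share that response."
        return draft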

May I ask one very brief question?

Something that always fascinates me around the issue of misinformation and hate speech is who in the organisations present gets to decide what is hate speech, as opposed to an impassioned speech from a protestor at the end of O'Connell Street, or somebody saying Ireland is at war.

The witnesses should reply in a word.

These are really critical decisions. Who in the organisations gets to make them?

Mr. Dualta Ó Broin

The word is "depends", unfortunately. It really depends on whether it is a clear-cut case or whether it requires greater scrutiny. If it is a clear-cut case-----

Who is it eventually escalated to, if there is indecision about whether this is just somebody making an impassioned speech or somebody who is actively, deliberately engaging in incitement to hatred?

Mr. Dualta Ó Broin

It would be the organic content policy team in our company.

Dr. Nikki Soo

Similarly, it would be our content policy team and trust and safety, but in consultation with fact checkers and experts like Tech Against Terrorism, because it really depends on the topic and we want to ensure we get it right.

Mr. Ryan Meade

In the European Economic Area, Google Ireland Limited is the governance entity, so the policies that apply are ultimately under the governance of the board of Google Ireland here in Dublin. On an operational level, we have a major trust and safety hub in Dublin, but we work with colleagues around the world to develop and refine those policies. It is exactly as the Deputy says; one needs to have a robust policy that is enforceable consistently.

Yes, and it needs to be somebody who has an in-depth knowledge of the political landscape of whatever country or region the particular content is being shared in. That is important as well. I thank the Cathaoirleach.

I welcome the witnesses. I was watching for a while and apologise for being late, but I was at another meeting.

First, my own view - and I have been quite vocal on it - is that we need to put minimum ages on access to social media accounts for children. The reality is that various published reports say kids as young as eight or nine years of age have accounts. They are not being signed up by their parents; they are actually able to open the accounts themselves. Responsibility needs to be put on all social media companies to put proper security procedures in place. Significant profits are being made from the business, and there is a responsibility on the companies to safeguard our children. That is their responsibility as companies that have put their platforms out there.

There is also a responsibility on parents. I am not trying to abdicate that, and I am speaking as a parent of three young kids who is quite afraid of what is ahead for them. My oldest young lad is 12. Thank God he is the only child in his class who does not have access to a phone. All the rest of them have phones and are on all the various platforms - Snapchat, you name it - and are sending photographs from them. It is critical that responsibility is put on the companies by Coimisiún na Meán, and I have put that forward on a number of occasions. We need to set a minimum age of at least 16 years for children to have any access to any of the platforms.

The witnesses are in here speaking to politicians. We are talking about hate, and we are the subject of it. Every one of us here has been a subject of it. How do social media companies deal with material that is targeted at those in public life? Do they treat us differently from the general public? The reality is that they do. The companies do not treat us correctly or protect us, and we are the people in public life who put our faces out there.

More directly to Facebook, or Meta, does it have a policy that gives more protection to the general public than to those in public life? Does it allow people to put up knowingly false information, whether about negative character and ability claims, sexual orientation, gender identity or anything else negative about a person? Does Meta have a policy that allows such material to stay on its platform if it concerns somebody in political life, whereas it would be taken down if it concerned a private individual?

Mr. Dualta Ó Broin

We allow a higher level of criticism for public figures on our platform.

Mr. Dualta Ó Broin

That is the way in which we have developed our policies.

Explain to me why Meta thinks we should have a higher level of criticism or be entitled to a higher level of criticism, even if something is false information. Please explain that to me.

Mr. Dualta Ó Broin

The Senator mentioned a number of different types of content. There is a high likelihood that some of that content would be removed. With regard to criticism of a public figure, the reason is that, having consulted experts, NGOs and advocacy organisations at a global level, which is where we try to set our standards, we found there is a desire among the advocacy and NGO communities to be able to criticise politicians and to call out some of their practices. That is why the policy has been set in that way.

On the broader issue, we have been engaging with the task force on safe participation in political life. We were privileged that they came out to Ballsbridge last month and spent half a day with us talking through all of the measures that we have in place to protect politicians, including the tools and resources that we have in place. We are committed to engaging with the task force, the Oireachtas, and any Member individually in ensuring that their experience on the platform is what it should be.

Does Mr. Ó Broin think it is right that knowingly false information - material Meta's policy specifically identifies as something that would be taken down for a private individual - should be left up because we are in public life? We are all private individuals with families, and we are all entitled to the same protection.

Mr. Dualta Ó Broin

One of the things we did earlier in the year was to bring our expert on that policy in to present in Leinster House and to explain why we put the policy in place. I am not an expert in that policy, so I cannot give the Senator exactly the type of detail that our expert could, but that is what we did. Unfortunately, the event was not that well attended. He also presented to the task force on safe participation in political life on the same subject.

Could Mr. Ó Broin forward the name of the person who does that policy piece with regard to what Senator Carrigy has raised, and perhaps we could invite them in to have a discussion? That would be a good follow-up on what the Senator has raised.

I am astounded that a company would allow that to happen, that it allows people who are in public life representing the general public to be targeted or tarnished with stuff that is lies. It is false information and damaging to their good names, and it is allowed to be left there because they are politicians. They are also people who have families, kids, wives, husbands or whatever, and I do not think it is acceptable that such a policy is in place. There will be fewer people going into public life.

That is why the task force was set up, due to the online threats, attacks and the reduction in the number of people putting themselves forward for public life. I am going to ask the other two organisations the same question. If they think that is acceptable and allowable, why would anyone go into public life? Why would anyone put themselves forward to be that target, and be abused? Maybe their kids could be targeted online because they are a person in public life. What is the policy of the other two companies, TikTok and Google?

Dr. Nikki Soo

I will take this first. I completely appreciate that no one should be subject to abuse. As to our specific policies, we call out abusive behaviour, including threats and degrading statements that might embarrass or mock an individual. This extends to public figures. Apart from that, I understand what the Senator means about being in the public eye. While some critical comments of public interest may be allowed, any sort of serious abuse is prohibited. We will remove blackmail, doxxing or anything like that. Such material violates our community guidelines, and we do have those in place.

In addition, we are trying to foster a sense of community on our platform. One of the features we have prompts people whenever they try to post something that is potentially violative or negative: a reminder comes up that this is possibly not a very nice thing to say. We found that 40% of people actually do change their comment. That is one of the ways we try to encourage our users to be mindful.
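
Here is a minimal Python sketch of that kind of pre-posting nudge: a stub classifier flags a potentially unkind comment, and the user is asked to reconsider before it is published. The terms, wording and callback are invented for the example.

    def looks_unkind(comment: str) -> bool:
        """Stub classifier for potentially violative or negative comments."""
        unkind_terms = {"idiot", "loser", "shut up"}   # illustrative only
        return any(term in comment.lower() for term in unkind_terms)

    def submit_comment(comment: str, confirm) -> bool:
        """Post a comment, nudging the user first if it looks unkind.

        `confirm` is a callback asking whether to post anyway; in the
        figure Dr. Soo cites, roughly 40% of prompted users change
        their comment instead.
        """
        if looks_unkind(comment) and not confirm(
                "This may not be a very nice thing to say. Post anyway?"):
            return False   # user chose to rethink or rephrase
        print("posted:", comment)
        return True

    # Example: the user declines the nudge, so the comment is withheld.
    submit_comment("you are an idiot", confirm=lambda message: False)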

We have a range of other tools to empower users to control their experience. With reference to what the Senator mentioned, those are a few of the things we have in place in terms of policies and tools.

Mr. Ryan Meade

I would say that our policies apply across the board. If we have a policy on threats or on incitement to violence or hatred, we apply it across the board. In some cases, when undertaking a review, we have to apply a certain amount of public interest balancing, and the context of the content can be important. We attempt to avoid infringing on people's right to engage in legitimate political expression. I would be happy to look at any specific examples the Senator has, but we are certainly working all the time to ensure our policies are applied consistently, in respect of both protecting people from harm and allowing for valid political discourse.

I want to be clear on this. I seem to be getting a difference between the companies with regard to how they deal with politicians. Would I be right in saying that?

Mr. Ryan Meade

I cannot speak for any other company, but my understanding is that we would not have a specific policy that deals with public figures. It is more that, with regard to the public interest test we apply across our policies, the status of someone as a public figure may be relevant.

So it depends on whether the information is relevant to their role as a public official rather than to their private life, their appearance or anything like that. The latter would all be taken down. Would I be right in saying that?

Mr. Ryan Meade

I could not prejudge any review but I will say that the policies will be applied consistently. That is our aim in all cases.

Would I be right in saying that about TikTok, that across the board, there is no differential between-----

Dr. Nikki Soo

Yes, we treat everyone equally. Everyone has to abide by our community guidelines.

Mr. Dualta Ó Broin

It is the same for us. The community standards are the same. Similar to what Google has said, if it is a public figure, that enters into the equation.

So one company treats everyone the same, and the other two companies treat a public figure differently, with less stringent standards?

That is the case even if the information in question relates to their private lives and has nothing to do with their roles as politicians and nothing to do with policy. Such information is allowable.

Mr. Dualta Ó Broin

We would be happy to look at examples and to set up an engagement with the Senator or with the committee. The key is the specific examples of the types of content involved.

The key is that Meta treats public figures in a different way from how it treats private individuals. That is what Mr. Ó Broin is saying.

Mr. Dualta Ó Broin

It is.

That is what sets Meta apart from the other technology companies represented here today. Am I right in saying that?

That was why I tried to ask the questions in that way. There seem to be three different levels, to be honest, from what I am hearing. At the higher end of the scale, at Meta, it appears from what I have heard that politicians are treated differently. Even if a story has something to do with a person's private life, it is open season.

Mr. Dualta Ó Broin

That is not the case. I can say that is not the case.

To a certain extent, it is. That is why people will not get involved in public life.

The Senator must draw to a conclusion. We now come to me. I will begin by thanking our guests for being in front of us. This meeting was planned for October, as our guests know, but was moved back for one reason or another. I appreciate their coming before the committee. Like all of my colleagues, I am disappointed that X will not participate in this public engagement in the same way the other companies are engaging. We appreciate that engagement.

It is always interesting to be the last speaker at a meeting because so much has been teased through. To back up what Senator Carrigy has said, I hope our guests will take away how delicate and fragile our democracy is at the moment, and the impact their companies have on that fragility. We saw that in the most explosive, if people will forgive the pun, and exposed way on 23 November. While I accept our guests are saying their companies are doing a certain amount, as they asserted in their presentations, there is no doubt in my mind that the technology available was an enabler that helped to gather those crowds in our capital city in a short space of time. Everybody else, including the Garda and, arguably, Coimisiún na Meán, was trying to play catch-up. That technology enabled the people who went out to cause destruction and mayhem on 23 November. It was organised through WhatsApp groups and on various other platforms. I certainly feel that enough is not being done to curb that behaviour. I must say that: enough is not being done to curb that behaviour.

I will also make a comment that relates to what Senator Carrigy is trying to extrapolate. I hope that more is done. Our democracy is in question. We talk about trying to get women involved in politics. I have no doubt the behaviour we are talking about affects the gentlemen within this organisation and in public life. However, we are trying to meet quotas and get women involved in politics but will have no chance of doing so if platforms continue to allow insults and personal attacks. They are not just coming from fake accounts or robots. They are coming from people we know on the ground and who live in our communities. Those insults and attacks do not breach community standards. The community standards are horrifying when you are exposed to them and start to dig into them.

Mr. Ó Broin said that Meta's community standards are continually evolving and that it is engaging with its European partners and others globally. In the past three, four or five years, can he identify any marked difference in what it considers a breach of community standards? How has that tightened up to protect citizens over the past four or five years? Mr. Ó Broin mentioned that Meta has evolving guidelines around community standards. Can he point to anything tangible over the past three or four years to show that those guidelines protect people?

Mr. Dualta Ó Broin

The one that springs to mind relates to synthetic media and the way in which artificial intelligence, AI, is used on the platform. That is the example that immediately springs to mind. It has come in within the past year.

I am picking up on comments that people have made. Mr. Ó Broin said Meta is relying on users to make reports on WhatsApp. That is very worrying.

Mr. Dualta Ó Broin

That is not just as a result of end-to-end encryption. It is the legal situation from an EU standpoint in respect of private messages.

Is that likely to change? Will it change with the new guidelines coming down the road? Will that in any-----

Mr. Dualta Ó Broin

Unless there is a fundamental change to the way in which the EU sees private messaging-----

It will not happen.

Mr. Dualta Ó Broin

-----I cannot see change happening. Perhaps others know better.

I will pick up on a comment made by Mr. Meade about stopping children watching videos that are harmful. In the shortest way possible, because I am tight on time, will he talk us through and tease out how Google does that?

Mr. Ryan Meade

I can certainly follow up with more detailed information. It is a new product feature on YouTube. We know that young people may watch videos which, in and of themselves, are not in any way harmful and are purely innocent content. However, our expert advisory group has indicated that the pattern of watching the same types of video over and over again can be harmful. For that reason, we are making it impossible for users to do so without a break. That will be rolled out-----

Is that to warn users they need to take a break? Is it warning them of harmful content?

Mr. Ryan Meade

The content itself is not harmful. What we are trying to interrupt is a pattern of harmful viewing, if you like. There is nothing wrong with the content itself and if it were against our policies, it would be removed. We are trying to interrupt a flow that someone might get into and which might have a damaging effect on their mental health.
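
A simple Python sketch of that pattern interruption: count consecutive views in the same category and show a take-a-break prompt once a streak limit is reached. The limit and message are invented for illustration; the actual details of the YouTube feature are not given here.

    from typing import Optional

    class SessionWatcher:
        """Interrupt long streaks of same-category viewing with a break prompt."""

        def __init__(self, streak_limit: int = 5):
            self.streak_limit = streak_limit
            self.last_category: Optional[str] = None
            self.streak = 0

        def on_watch(self, category: str) -> Optional[str]:
            """Record one view; return a break prompt when the limit is hit."""
            if category == self.last_category:
                self.streak += 1
            else:
                self.last_category, self.streak = category, 1
            if self.streak >= self.streak_limit:
                self.streak = 0   # reset once the interstitial has been shown
                return "Time for a break? You've watched several similar videos in a row."
            return None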

The following question is for all the witnesses. Have they identified any patterns around the sources of misinformation? Are there patterns in the misinformation coming through? Have our guests recognised any particular sources? They may not be able to name those sources, but I ask the question broadly, if the witnesses could speak to it. If they see a pattern, can they identify it and remove it more quickly, so that the algorithms do not get the chance to give it the spread it currently has? I might start with one of the TikTok representatives.

Ms Susan Moss

We have created a repository, or media library. Where we have information that has been fact-checked, we put it into the library. That makes it quicker and more efficient for our moderators to verify, correct and ban content as a result. That is helpful, and it comes from our external fact-checkers.

I will touch on the Dublin riots and some learnings around disinformation in that regard. We believe we responded well to the Dublin riots but we, as a company, need to double down on political extremism and toxicity. We have working groups around that and they are already operationalised. That was one of our learnings around disinformation and the outcome of the riots.

I will come to Meta.

Mr. Dualta Ó Broin

That is one of the reasons I included the reference to the adversarial threat report. That is a series that has been going on for six years now and which looks at the networks and types of behaviour we are seeing, where they are originating and how they are funded. We are looking at the patterns and the other platforms people are using to spread this type of information. When we see those coming back, our recidivism policies kick in to remove them. That is an enormous repository of information not just for us but for any regulators or other companies. They can learn from what we have done.

I ask Mr. Ó Broin to forward the details of the policy expert he said was in Leinster House to talk to public representatives about that variation.

Mr. Dualta Ó Broin

I would just ask that that is handled through me. We might make a decision depending on what the ask is and what the format of the event is. That would be important for that particular individual.

I ask Mr. Ó Broin to forward the information to the clerk to the committee and we, as a committee, will take it from there in a private session. Is that okay?

Mr. Dualta Ó Broin

Yes.

I thank Mr. Ó Broin. A number of months ago, we read in our trusted and reliable newspapers about the sad fatality of a young person. It was clearly identified that TikTok trends were involved. I use the term "trends" because I do not want to call them "dares", but I do not know the correct terminology. Our guests know to what I am referring. What is the correct term? Is it "trends"? Okay. Such a trend led to the death of a young person in this country. I raised the issue at the time with representatives of TikTok. What is being done to ensure that trend is discontinued and young people are not so exposed?

Ms Susan Moss

I thank the Cathaoirleach. I am enormously conscious that there is a grieving family at the heart of this and that a 14-year-old girl is no longer with her family. In the circumstances, I cannot comment on the individual case but I will say that TikTok will comply with any formal inquiry that might arise. I might hand over to Dr. Soo, who works in this area.

Dr. Nikki Soo

I am happy to summarise what we do around challenges and trends. We take all alleged harmful trends and challenges very seriously and review them. Any time something like-----

I presume for a challenge or a trend to gather momentum it must have been circulating for quite some time.

Dr. Nikki Soo

I cannot speak to that. It really depends on what we are looking at. Sometimes something goes viral.

Dr. Nikki Soo

It is hard for me to speculate on something like that.

I am not asking Dr. Soo to comment on the case. In general terms, I am assuming that a trend or challenge that can be harmful to young people cannot emerge in five minutes; it takes time.

Dr. Nikki Soo

If something becomes popular on TikTok, we actually moderate it again. If it has gone through our moderation - AI and human moderators - and we have found that it is a popular video, it will actually go through human moderators one more time.
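
Schematically, the escalating pipeline Dr. Soo describes might look like the following Python sketch: automated moderation first, with one more human pass once a video crosses a popularity threshold. The stubs, field names and threshold are assumptions for illustration.

    def ai_classifier_flags(video: dict) -> bool:
        """Stub: automated classifier catching clear-cut violations."""
        return video.get("clear_violation", False)

    def human_review_flags(video: dict) -> bool:
        """Stub: human moderators catching context-dependent violations."""
        return video.get("contextual_violation", False)

    def moderate(video: dict, views: int, popularity_threshold: int = 1_000_000) -> str:
        """AI pass first; popular videos get one more human pass."""
        if ai_classifier_flags(video):
            return "removed (automated)"
        if views >= popularity_threshold and human_review_flags(video):
            # Popular videos are re-reviewed even if previously cleared.
            return "removed (human re-review)"
        return "kept"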

How long does that process take?

Dr. Nikki Soo

It depends on the popularity of the content.

Are we talking about hours or days?

Dr. Nikki Soo

In terms of challenges, we proactively remove 92.3% of violative content within 24 hours, and our proactive removal rate for related content is 98.3%.

So, it is 24 hours. I am talking about the time because that is what impacts young people most and allows for circulation. We are talking about 24 hours before something might be taken down and picked up by moderators as being harmful.

Dr. Nikki Soo

I do not think that is what I am saying here. I do not want to speculate on that figure, but I would say that, on average, it takes up to two hours to remove a piece of content if it is violative. That is what I would say.

Therefore, within two hours moderators will pick up on it and can take stuff down in that time.

Dr. Nikki Soo

Yes, but that is on average and not relating just to this.

This has been a really interesting and important discussion. I do not think we are in any way done with this. There is a long way to go and a lot more to do.

We have a split session today. We now need to take a short break, after which other witnesses will come before us. We will certainly follow up on some of the suggestions and ideas mentioned today. I thank all the witnesses for being with us. I propose that we suspend briefly to allow witnesses to withdraw before resuming in public for our second session.

Sitting suspended at 3.12 p.m. and resumed in public session at 3.14 p.m.

We are now in our second session, discussing online safety, online disinformation and media literacy. I am delighted to welcome Dr. Leo Pekkala, deputy director of the Finnish National Audiovisual Institute, KAVI. I thank him for joining us today. The format of the meeting is that I will invite our witness to deliver an opening statement, which is limited to three minutes and which will be followed by questions from my colleagues. As Dr. Pekkala is probably aware, the committee may publish his opening statement on its web page.

Before inviting Dr. Pekkala to deliver his opening statement, I wish to explain some limitations in relation to parliamentary privilege and the practice of the Houses as regards references witnesses may make to other persons in their evidence. The evidence of witnesses physically present or who give evidence from within the parliamentary precincts is protected pursuant to both the Constitution and statute by absolute privilege. However, witnesses who give evidence from a location outside the parliamentary precincts are asked to note that they may not benefit from the same level of immunity from legal proceedings as a witness giving evidence from within the parliamentary precincts does and may consider it appropriate to take legal advice on the matter. Persons giving evidence from outside the jurisdiction should be mindful of their domestic law and how it may apply to the evidence they give. Witnesses are reminded of the long-standing parliamentary practice that they should not comment on, criticise or make charges against any person or entity by name or in such a way as to make him, her or it identifiable or otherwise engage in speech that might be regarded as damaging to the good name of the person or entity. Therefore, if witnesses' statements are potentially defamatory in relation to any identifiable person or entity, they will be directed to discontinue their remarks. It is imperative that they comply with any such direction.

Members are reminded of the long-standing parliamentary practice to the effect that they should not comment on, criticise or make charges against a person outside the Houses or an official either by name or in such a way as to make him or her identifiable.

I ask Dr. Pekkala to deliver his opening statement.

Dr. Leo Pekkala

I am grateful for the invitation to appear before the committee. KAVI has a legal obligation to advance media education and promote media literacy. KAVI's department for media education and audiovisual media, which I head, also functions as an independent media regulatory authority tasked with overseeing age restrictions for audiovisual programmes.

We are responsible for implementing the national media education policy, which states that media literacy is a civic skill for all. We approach media literacy comprehensively, as the ability to comprehend and analyse media, to create media, and to engage in a secure and responsible manner in media environments. This approach also considers the perspective of active citizenship across all these aspects. It is hard to stress enough the importance of having a national policy for promoting media literacy. The policy is the backbone of all media literacy work done in Finland and supports different organisations in their work to promote media literacy. The current government programme of the Prime Minister, Petteri Orpo, states that critical media literacy and awareness of cyber risks should be boosted in order to reinforce broad social resilience. Our ultimate goal is to promote a peaceful society and a functioning democracy with a stable economy, and by doing so we hope to work towards a good life for everyone.

I will outline some examples of successful media literacy initiatives. We have been organising a media literacy week since 2012, and the Finnish games week has been running for as long. We have a media literacy school website which collects and shares teaching and learning resources for media education and which is free for anyone to use. Another simple, fairly recent initiative has been "media literacy coffee breaks" for government civil servants, in which civil servants gather to discuss current issues in media literacy under the Chatham House rule. In this way, we have managed to create cross-sectoral co-operation within government structures.

I wish to conclude by reflecting on some issues in the international discussion of media literacy. There seems to be an ongoing obsession with measuring the success of media education programmes or projects. However, it is unclear what is being measured and how. Typically, the measurements we see, like the media literacy index, focus on technical skills, access or other external factors, but not on critical thinking skills, which are the most important.

Another problem is solutionism, where media literacy is seen as the solution to all problems in society. However much I believe that promoting media literacy is really important, I do not believe we can solve all of society's problems by investing only in media literacy.

Achieving results in media literacy requires policy support, time, patience and resources. If you think media education is expensive, try ignorance.

I thank Dr. Pekkala. We are delighted to have him. As he may know, we had a session before he joined us with some of the tech companies here, namely, Meta, TikTok and Google. This fits in neatly with what we were discussing and the wider work of the committee.

I thank Dr. Pekkala. It is good to get this insight from a Finnish perspective. What does he think media literacy means?

Dr. Leo Pekkala

To us, it means access to the media, the capability to produce and use media, being part of the media discussion and being an active citizen in society. It is a holistic perspective, as we see it. It is not just a technical skill; it is a kind of competency. I do not really believe in digital natives: young people may have technical skills, but whether they have critical thinking skills is another issue. To be media literate, you also need critical thinking skills connected to whatever technical skills you may have.

I understand the access to media part of it, using digital media, having the technical skills and so on. The critical thinking part is a bit more difficult. Will Dr. Pekkala elaborate on how KAVI addresses that critical thinking issue and ensures responsible use of social media platforms?

Dr. Leo Pekkala

We try our best. You need a multifaceted approach. One way we have tried to address the issue is by working with both governmental and non-governmental organisations, given we believe you need multiple channels. We co-ordinate the overall objective, but we do not tell organisations how they should reach their goals. We believe it is better to have different ways of speaking about media literacy and what it means. It requires a lot of education and discussion, and it never ends. We believe you can never really say you are now media literate and no longer need to think about it. It is a long process, closely connected in our system to the comprehensive education system, where media literacy is part of the curriculum as a transversal skill. It also extends outside school, and we work mostly outside school, where we meet different age groups.

That is interesting. In Finland, therefore, there is a great focus on education, and media literacy is taught within schools. Obviously, as Dr. Pekkala said, a lot of that is about the technical aspect and access to these platforms. One of the reasons this session is being held relates to the incidents of violence and upheaval in our capital city a number of months ago, and the call on social media platforms and social media giants to take some responsibility for their role in the spread of misinformation and disinformation. How is that education done within the school system? Am I correct in assuming that there is an effort within education, and the teaching of media literacy to young people in particular, to encourage them to take responsibility for their own actions and for the type of content they share or engage with?

Dr. Leo Pekkala

In the school system and the national core curricula for comprehensive education, it starts in early childhood education and continues to second level. Media literacy is seen as developing a set of competencies and skills: young children can learn what can be done with various devices and how stories can be created. You learn to understand that behind every story there is always someone with some kind of idea. Later, in the teenage years, you start to create stories and little movies. You have to develop the skill all the time, but when you learn to create, you also learn to understand that all the different stories and pictures you see have been created by someone. That, we think, helps foster the discussion. Teachers also discuss issues, help students understand who is behind them and show them how to check facts if they see something that is probably not true. Nevertheless, a lot of media use is for fun, so we do not want it to be too serious. Playing games, for example, is not always bad; it can also be good fun and can develop certain skills.

The Deputy asked about the responsibilities of social media platforms, especially the very large ones, and there is a problem in that regard. Of course, for a small country such as Finland, we do not have any of the headquarters within our country and legal system, so we have to see what happens in this space in Irish or Dutch law. We have little power in that respect and that is why we have to invest in education. It is not only school education but also something that happens outside school, in hobbies and with grandparents. That is why we also target the adult population, given they are important and we cannot focus only on children and young people.

I thank Dr. Pekkala for joining us. What he just said points to my first question. Often, when we hear about online safety, we hear it in the context of young people, but I am equally worried about my mum and dad, for example, at home and the type of media they might see on YouTube, where they get their political content. Am I right to be equally worried about older people even though, in the main, younger people seem to take up the focus in online safety?

Dr. Leo Pekkala

The Senator raises a very important question. As I said, we generally think our children and young people get some kind of education during their school years and their compulsory education, so we are not too worried about them. There is also, of course, a huge population of adults who have never received this kind of education, especially the ever-growing senior citizen population, who are often the grandparents of children and young people and spend a lot of time with them out of school.

It is really important to reach out to them. One way we have been doing that is through work with non-governmental organisations and public libraries. Public libraries are really important, especially for senior citizens. I myself have used public libraries since I was a child. Libraries are a good place to reach the adult population and to share information. Senior citizens in particular quite often need technical help to understand how to use the different services that society increasingly offers digitally. While they are being guided in how to do their banking or deal with tax issues, they also get information about fake news, disinformation and so on. In an ideal world, that works really well. It does not happen in every library and everywhere, so there is a lot of work to be done, but that is one example of a success story.

That is really helpful. Some of our public libraries are like Internet cafés; the computers are in more use than the books, and it is probably a great environment for this. Dr. Pekkala made a good point about banking because, to mention my mum and dad again, two-factor authentication is an awful difficulty for them. That is a practical example of educating people on those issues and using it as a chance for media literacy.

Does Dr. Pekkala have any other example of projects, both for young people and older people, that have been successful?

My final question concerns the institute's legal duty to provide media education. Is that unique in a European context? I am sure there is research we can do here, but is it unique in Europe for such a duty to exist?

Dr. Leo Pekkala

First, two examples of success that the institute has run over the years, and still runs, are the media literacy week and the games week. Both are organised in such a way that the institute only co-ordinates the event and its themes. The media literacy week, for example, has more than 50 organisations running different campaigns, and there are hundreds of local events across the country during the week. They keep going, mostly on the basis of volunteer and NGO work. Those are really good examples. One thing I will stress is continuity. One week in the year is definitely not enough for almost anything, but this is a themed week that focuses on these issues and raises in the media the point that we are celebrating this theme this week, while the work continues throughout the year. We need perseverance here, and a long-term plan. The media literacy policy helps in our case because it creates the framework for the work we do.

With regard to the legal duty, we are not the only one. When the institute was established in 2012, I was asked to come and start up this office. We were one of the first, but the Nordic countries all now have a governmental office with more or less the same legal duty. There are certain differences in the detail, and some other countries have this too, but it is not very common. There could be more, because we have clearly seen the benefit of having a legal framework to support our work. I always encourage my international colleagues to work towards a real policy that lasts across parliamentary elections and seasons, so that the policy continues even if the parliament or government changes and is more of a long-term thing.

I thank Dr. Pekkala and have one question for him. I really appreciate his contribution so far. My question concerns the platforms. They appeared before the committee today. Will Dr. Pekkala talk a bit about what responsibilities the online platforms have in Finland and what tools regulators have to deal with any issues that may arise?

Dr. Leo Pekkala

So far, there are no online platforms located in Finland that would meet the definition in the audiovisual media services directive. The government organisation responsible for determining legally whether we have, or might one day have, such a service located in Finland is the Finnish Transport and Communications Agency, which is under the Ministry of Transport and Communications, whereas we are under the Ministry of Education and Culture. At the moment, we do not have any platforms.

In terms of what the platforms do, we have very little jurisdiction except, of course, over the kinds of things that fall under the audiovisual media services, AVMS, directive or under criminal law. Platforms must remove content if it is illegal. Otherwise, we are on the outskirts of that, legally, at the moment. I hope that helps.

Yes, Dr. Pekkala did and I thank him. That is my line of questioning finished. Does Senator Carrigy want to come in? No. I thank Dr. Pekkala most sincerely for his attendance. I know he is a leading light and a shining light when it comes to media literacy and online media literacy, particularly in schools and in education. I really appreciate him sharing those thoughts and experiences with the committee. Go raibh míle maith agat.

Dr. Leo Pekkala

I thank the Chair. To finish, I will say that Coimisiún na Meán in Ireland is doing great work and building up its organisation. I have good colleagues there and I trust that Ireland will be and already is one of the leading countries in Europe in this. There is a lot of work being done there. I thank the committee for the invitation and the discussion.

Apologies, can I come in for a second?

Did the Senator want to come in?

If Dr. Pekkala is not rushing off, a colleague has just thought of a question or comment he wants to make. This is Senator Carrigy.

I thank the Chair and apologise for coming in only now; I had to go out to something else. I have a question about something I mentioned at the earlier meeting and that I feel very strongly about as a parent of young kids, namely children's access to social media accounts. When we discussed the Online Safety and Media Regulation Bill, we discussed having a minimum age for social media accounts, putting the responsibility on the companies to implement it and putting stringent controls in place to prevent young children from opening or setting up accounts. What is Dr. Pekkala's view on that from an EU perspective?

At the time, this was being looked at from an EU-wide point of view but I feel Ireland should go ahead rather than wait for the EU to make a decision. While Coimisiún na Meán is looking at this, I believe we need to go ahead and implement it here ourselves. The reality is that kids as young as eight and nine years of age are actively on social media where there is a significant amount of harm and disinformation. What are Dr. Pekkala's views on that?

Dr. Leo Pekkala

This is a very complicated issue, as I am sure the Senator is well aware. The general thinking in Finland is that, at the moment, it seems almost impossible to get reliable age verification systems in place. Until we have such systems, it will be very difficult to restrict access.

Our solution at the moment is to invest in education as much as possible, educating the children themselves, because even rather young children can understand these topics, and most children do not actually want to see things they do not like. We invest in educating them, but we also invest in educating the parents and try to keep the discussion going. It is hard work, and one can never succeed 100%.

I do not really think that strict restrictions on giving mobile devices to children below a specific age would work, because there is always a way to find access to the content. The only solution is education, but it is complex, it takes time and one can never have a full 100% result.

Unfortunately, there is no silver bullet in this case either, but the platforms could do much more if they revealed their algorithms openly to researchers and governments and invested more in these issues. I believe there is more work to be done.

I thank Dr. Pekkala and Senator Carrigy. I thank Dr. Pekkala so much for being with us today. It has been a very fruitful engagement. I am sure we will be coming back to Dr. Pekkala for more engagements in the future.

I thank the members. I propose we suspend the meeting briefly to allow witnesses to withdraw before resuming in private session to deal with the housekeeping matters and correspondence. Is that agreed? Agreed.

The joint committee suspended at 3.42 p.m., went into private session at 4.03 p.m. and adjourned at 4.09 p.m. until 1.30 p.m. on Wednesday, 13 December 2023.