Joint Committee on Children, Equality, Disability, Integration and Youth debate -
Tuesday, 16 Apr 2024

Protection of Children in the Use of Artificial Intelligence: Discussion (Resumed)

We have received apologies from Deputies Sean Sherlock and John Brady. The agenda item for consideration this afternoon is engagement with stakeholders on the protection of children in the use of artificial intelligence.

We have had a few sessions on this already that people may have tuned into. Joining us at the meeting this afternoon are representatives of X, Ms Niamh McDade, head of government affairs, UK and Ireland, and Ms Claire Dilé, director for government affairs, Europe. From TikTok, we have Ms Susan Moss, head of public policy and government relations, and Ms Chloe Setter, child safety public policy. From Meta, we have Mr. Dualta Ó Broin, head of public policy, and Mr. David Miles, director of safety policy, Europe, the Middle East and Africa. You are all very welcome to the meeting.

I will go through the normal housekeeping matters first. For anyone joining us on Teams, please be aware that the chat function is to make us aware of any technical issues or urgent matters. It is not for people to make any general comments or statements during the meeting. I remind members of the constitutional requirement that they must be physically present within the confines of the Leinster House complex in order to participate in public meetings. I will not permit a member to participate where he or she is not adhering to that constitutional requirement. Therefore, if anyone attempts to participate from outside the precincts, he or she will be asked to leave the meeting. For anyone who is joining us on Teams, please ensure that you confirm that you are on the grounds of Leinster House before making your contribution.

I remind members that witnesses who have agreed to come before the committee are doing so to try to assist members with the subject matter under discussion. Please bear this in mind when asking questions. We will suspend the sitting at around 4.45 p.m. for a short break.

In advance of inviting the witnesses to deliver their opening statements, I want to advise them of the following regarding parliamentary privilege. The evidence of witnesses physically present or who give evidence from within the parliamentary precincts is protected pursuant to both the Constitution and statute by absolute privilege. Witnesses are reminded of the long-standing parliamentary practice that they should not criticise or make charges against any person or entity by name or in such a way as to make him, her or it, identifiable or otherwise engage in speech that might be regarded as damaging to the good name of the person or entity. Therefore, if their statements are potentially defamatory in relation to an identifiable person or entity, they will be directed to discontinue their remarks. It is imperative that they comply with any such direction.

The opening statements will be followed by questions and answers with members. The witnesses will be called in the following order: Ms Claire Dilé, Ms Susan Moss and Mr. Dualta Ó Broin. I call Ms Claire Dilé.

Ms Claire Dilé

I thank the Chair and members of the committee for the invitation to attend today’s meeting on the topic of the protection of children in the use of AI. I am director of government affairs for Europe at X and I am joined today by my colleague, Ms Niamh McDade, our head of government affairs for Ireland and the UK.

As we aim to build a global town square and provide everyone with the ability to connect, debate and share information, we are committed to ensuring a safe environment for all our users. X’s purpose is to serve the public conversation, and we believe that freedom of expression and platform safety can and must coexist. We welcome the opportunity today to discuss X's work to keep users, especially young people, safe on the platform.

I will start by saying that X is not the platform of choice for children and teens and we do not have a line of business dedicated to children. Users on X must be at least 13 years old and if a person tells us they are under 13, they will not be able to sign up for an account. According to our data, in the first three months of 2024, users aged between 13 and 17 accounted for less than 1% of X's active account holders in Ireland. Although minors represent a small fraction of X's user base, we are fully committed to the protection of this group, which is a more vulnerable audience online, and have a number of tools and policies to protect them on our service. X's age assurance process combines self-declaration of age with additional technical measures to ensure that the account holder's age is genuine and that appropriate controls are in place to protect children. By default, 13- to 17-year-olds have high privacy, safety and security settings in place on their accounts. For example, they will not see sensitive media, including graphic and adult content, and their direct messages are closed and location is turned off. Additionally, advertisers cannot choose to target this age group.

We believe verifying users’ ages and soliciting parental consent for app downloads could play a pivotal role in addressing age verification. This approach could leverage existing processes, filtering all inappropriate apps for minors. It would also act as a privacy enhancer across the ecosystem by avoiding the need for personal information sharing at the individual app level.

X is also a proud member of several child protection initiatives, such as the Tech Coalition, WeProtect, the Internet Watch Foundation and the Children Online Protection Lab, and we continue to welcome opportunities for co-operation with child protection NGOs.

Regarding X rules, we remain steadfast in our commitment to keeping everyone on X safe. Our rules require users to ensure the content they upload and their behaviour complies with our rules as well as all applicable laws and regulations. We also confirm that AI-generated content is subject to X rules and we enforce policies irrespective of the source of creation or generation of such content. We take the opportunity of this hearing to confirm to the committee that X remains committed to the fulfilment of the Digital Services Act compliance obligation and intends to fully comply with relevant legislation in relation to AI.

A number of our policies are particularly relevant to the protection of children, and our supporting statement that we sent prior to the hearing provides further details on our rules. In particular, we confirm that X has zero tolerance towards any material that features or promotes child sexual exploitation. If child sexual exploitation content is posted on X, we simply remove it. Fighting this kind of content on our service is our number one priority as a company, and our policy covers media, text and illustrated imagery, as well as computer-generated and AI-generated images.

We have strengthened our enforcement with more tools and technology to prevent bad actors from distributing, searching for or engaging with CSE content across all forms of media. We also remove any account that engages with CSE content, whether it is real or computer generated. Our priority is that we are able to catch it and take action regardless of whether it has been generated using AI. In parallel, we continue to invest in human and automated protection and content moderation.

Turning to AI-generated content, at the outset, we want to clarify that X does not have a generative AI product live in the EU or Ireland at this time. With respect to misleading media, we have developed and continue to expand important resources. For example, our synthetic and manipulated media policy prohibits users from sharing synthetic, manipulated or out-of-context media that may deceive or confuse people and lead to harm. Furthermore, our community notes product addresses a wide range of sophisticated media types, including AI-generated content, by allowing contributors from a diverse group of people on X to write a note on a specific image or video; that note is then shown automatically on other posts containing matching media. This community-led approach has significantly increased the scale and speed by which potentially misleading media is detected and labelled on the platform. There are currently more than 100,000 active contributors to community notes in EU countries, which accounts for 35% of the global contributor base. To demonstrate the impact of this product, in the last month in the EU region, there have been 130 million note impressions on community notes.
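
To illustrate the media-matching step described above, the following is a minimal sketch in Python, assuming a perceptual-hash comparison; the function names and thresholds are illustrative assumptions, not X's actual Community Notes implementation.

```python
# Illustrative sketch: a note written on one image is surfaced on other
# posts whose media perceptually matches it. The hashing scheme and
# distance threshold are assumptions, not X's production system.
import imagehash            # third-party perceptual hashing library
from PIL import Image

notes_by_media_hash: dict = {}   # image hash -> list of note texts

def attach_note(image_path: str, note_text: str) -> None:
    """A contributor writes a note on a specific image."""
    h = imagehash.phash(Image.open(image_path))
    notes_by_media_hash.setdefault(h, []).append(note_text)

def notes_for_post(image_path: str, max_distance: int = 5) -> list:
    """Return notes whose media perceptually matches this post's image."""
    h = imagehash.phash(Image.open(image_path))
    matches = []
    for known_hash, notes in notes_by_media_hash.items():
        if h - known_hash <= max_distance:   # Hamming distance between hashes
            matches.extend(notes)
    return matches
```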

Finally, we come to our recommendation algorithm. First, it is important to note that on X we give users a clear choice over their use of recommendations. People have two options to view posts in their timeline, either "for you" or "following". Under the "following" tab, they will only see posts from accounts they follow, and under the "for you" tab, they will see posts recommended for them from both within and beyond their networks. Every day, people come to X to keep up with what is happening and the "for you" tab aims to deliver them the best of what is happening in the world right now. This requires a recommendation algorithm to distil the millions of daily posts on X down to a handful of top posts that ultimately show up in the "for you" timeline to make it easier and faster for users to find content and accounts relevant to their interests. Recommendations may amplify content, so it is important they are surfaced in a responsible way. Our recommender systems are designed to exclude harmful and violating content by integrating with visibility filtering systems, and we have several ways of preventing potentially harmful or offensive content and accounts from being amplified, including using machine learning technology and reviewing user reports.
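
As a rough illustration of the pipeline described above, the sketch below scores candidate posts and excludes anything flagged by a visibility-filtering system before ranking; the data structures are invented for illustration and are not X's recommender.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    score: float           # relevance predicted by a ranking model
    violates_policy: bool  # set upstream by safety/visibility-filtering systems

def build_for_you_timeline(candidates: list, limit: int = 10) -> list:
    """Distil a large candidate pool down to a handful of top posts,
    dropping anything flagged as harmful or violating."""
    eligible = [p for p in candidates if not p.violates_policy]
    return sorted(eligible, key=lambda p: p.score, reverse=True)[:limit]
```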

To increase transparency and accountability, X open sourced its recommendation algorithm on GitHub in March 2023, ahead of the Digital Services Act coming into force, so that anyone can consult it, and X's engineering team published a blog post on our website explaining to the public, in a simple way, how the algorithm works. We welcome feedback on the recommendations people using X receive. For the "for you" timeline, feedback can be provided by selecting "Not interested in this post" or "Not interested in this topic". We use this as a signal to recommend less of that type of content to the user.
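
A minimal sketch of how a "Not interested" selection could be used as a downweighting signal; the decay factor and per-user topic weights are assumptions for illustration only.

```python
def record_not_interested(user_topic_weights: dict, topic: str,
                          decay: float = 0.5) -> None:
    """Downweight a topic after the user selects 'Not interested'."""
    user_topic_weights[topic] = user_topic_weights.get(topic, 1.0) * decay

def adjusted_score(base_score: float, topic: str,
                   user_topic_weights: dict) -> float:
    """Scale a post's ranking score by the user's remaining interest in its topic."""
    return base_score * user_topic_weights.get(topic, 1.0)

# Example: after two "Not interested" taps on a topic, its posts score at 25%.
weights: dict = {}
record_not_interested(weights, "topic_x")
record_not_interested(weights, "topic_x")
print(adjusted_score(1.0, "topic_x", weights))   # 0.25
```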

Additionally, controls are important both in helping people on X to curate their own experiences and in providing good feedback for our recommendation system. A variety of options are available for people using X to control what they do and do not see on our service. Features include, but are not limited to, mute and block features, the option to filter notifications, extensive privacy and safety settings, as well as the ability to turn off autoplay for video.

We submitted a longer supporting statement before today's meeting, which members will have received. We have much more to say about our policies and rules in regard to this topic but, in the interests of time, I will stop now. I thank members for the opportunity to appear before the committee today. I look forward to the discussion.

Ms Susan Moss

I thank the Cathaoirleach and all members of the joint committee for the invitation to attend. I am head of public policy at TikTok and I am joined by my colleague, Ms Chloe Setter, child safety public policy lead. We appreciate the opportunity to appear before the committee today on this important topic of the protection of children in the use of artificial intelligence. At TikTok, we strive to foster an inclusive environment where people can create, find community and be entertained. More than 2 million people in Ireland use TikTok every month, which not only demonstrates how popular the platform is, but also underlines the responsibility we have to keep TikTok safe. Safety is a core priority that defines TikTok. We have more than 40,000 trust and safety professionals working to protect our community globally. We expect to invest €2 billion in trust and safety efforts this year alone, with the majority of our European trust and safety professionals based here in Ireland.

Artificial intelligence, AI, plays an integral role in our trust and safety work. We know that content moderation is most effective when cutting-edge technology is combined with human oversight and judgment. The adoption and evolution of AI in our processes has made it quick to spot and to stop threats, allows us to better understand online behaviour and improves the efficacy, speed, and consistency of our enforcement. Nowhere is that more important than the protection of teenagers.

Leveraging advanced technology, all content uploaded to our platform undergoes moderation to swiftly identify and address potential instances of harmful content. Automated systems work to prevent violative content from ever appearing on TikTok in the first place, while also flagging content for human review for context and closer scrutiny. We make careful product design choices to help to make our app inhospitable to those who may seek to cause harm. For example, we meticulously monitor for child sexual abuse material, CSAM, and related materials, employing third-party tools such as PhotoDNA to combat and prevent its dissemination on our platform.
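
The hash-matching approach mentioned above compares fingerprints of uploaded media against a database of known illegal material. The sketch below is schematic only: it uses a generic cryptographic hash in place of proprietary perceptual-hashing tools such as PhotoDNA, and the function names are hypothetical.

```python
import hashlib

# Hypothetical store of fingerprints of known violative media supplied by
# trusted partners. Real systems use perceptual hashes (e.g. PhotoDNA) so
# that near-duplicates also match; SHA-256 here is purely illustrative.
KNOWN_VIOLATIVE_HASHES: set = set()

def fingerprint(media_bytes: bytes) -> str:
    return hashlib.sha256(media_bytes).hexdigest()

def screen_upload(media_bytes: bytes) -> str:
    """Block and escalate known matches; route everything else to moderation."""
    if fingerprint(media_bytes) in KNOWN_VIOLATIVE_HASHES:
        return "blocked_and_reported"      # e.g. escalated to NCMEC
    return "queued_for_moderation"
```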

Developing and maintaining TikTok's recommendation system, which powers our For You feed, is a continuous process as we work to refine accuracy, adjust models and reassess the factors that contribute to recommendations based on feedback from users, research and data. TikTok's For You feed is designed to help people to discover original and entertaining content. A number of safeguards are in place to support this aim. For example, our safety team takes additional precautions to review videos as they rise in popularity to reduce the likelihood of content that may not be appropriate for a general audience entering our recommendation system. Getting these systems and tools right takes time and iteration. We will continue to explore how we can ensure our system is making a diversity of recommendations. I understand that the introduction of new disruptive technologies inevitably triggers unease and artificial intelligence is no exception to this rule, prompting legitimate concerns around the legal system, privacy and bias. It is, therefore, incumbent on all of us to play our part in ensuring that AI reduces inequity and does not contribute to it.

We have robust community guidelines in place governing the use of AI-generated content on our platform. TikTok supports transparent and responsible content creation practices through our AI labelling tool for creators. The policy requires people to label AI-generated content that contains realistic images, audio or video, in order to help viewers contextualise the videos they see and prevent the potential spread of misleading content.

We are also currently in the process of testing the automatic labelling of AI-generated content.
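
One plausible way automatic labelling could work is by reading provenance metadata attached to an upload, for example Content Credentials of the kind defined by the C2PA standard. The sketch below assumes hypothetical metadata field names and is not TikTok's implementation.

```python
def needs_ai_label(metadata: dict) -> bool:
    """Flag uploads whose provenance metadata indicates AI generation."""
    generator = str(metadata.get("generator", "")).lower()
    return metadata.get("ai_generated") is True or "generative" in generator

def apply_labels(upload: dict) -> dict:
    """Attach an 'AI-generated' label before the video is published."""
    if needs_ai_label(upload.get("metadata", {})):
        upload["label"] = "AI-generated"
    return upload
```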

Listening to the experiences of teenagers is one of the most important steps we can take to build a safe platform for them and their families. It helps us avoid designing safety solutions that may be ineffective or inadequate for the actual community they are meant to protect. Last month we launched TikTok's Global Youth Council, a new initiative that further strengthens how we build our app to be safe for teens by design. The launch comes as new global research with over 12,000 teenagers and parents reveals a desire for more opportunities to work alongside platforms like TikTok.

At TikTok, we aim to build responsibly and equitably. We work to earn and maintain trust through ongoing transparency into the actions we take to safeguard our platform because we know that saying "trust us" is just not enough. For example, we have established a dedicated transparency centre here in Ireland where vetted experts can securely review TikTok's algorithm source code in full. We also recognise the need to empower independent critical assessment and research of our platform. TikTok provides transparent access to our research API in Europe which is designed to make it easier to independently research our platform and is informed by feedback that we are hearing from researchers and civil society. To empower continued discovery on TikTok, we recently announced a dedicated STEM feed that will give our younger community a dedicated space to explore a wide range of enriching videos related to science, technology, engineering, and mathematics.

Protecting teenagers online necessitates a concerted and collective endeavour, and we share the committee's dedication to protecting young people online. For our part, we will strive to continuously improve our efforts to address harms facing young people online through dedicated policies, 24-7 monitoring, the use of innovative technology and significant ongoing investments in trust and safety to achieve this goal. We thank members for their time and consideration today and welcome any questions they may have.

Thank you, Ms Moss. I invite Mr. Ó Broin from Meta to make his opening statement.

Mr. Dualta Ó Broin

I thank members for the invitation to appear before the committee today to discuss the subject of the protection of children in the use of AI. My name is Dualta Ó Broin. I am head of public policy for Meta in Ireland. I am joined by my colleague David Miles, who is safety policy director for Europe, Middle East and Africa with Meta.

While Meta believes in freedom of expression, we also want our platforms, Facebook and Instagram, to be safe places where people, particularly young people, do not have to see content meant to intimidate, exclude or silence them. We take a comprehensive approach to achieving this by: writing clear policies, known as community standards in the case of Facebook and community guidelines in the case of Instagram, about what is and is not allowed on our platform; developing sophisticated technology to detect and prevent abuse from happening in the first place; and providing helpful tools and resources for people to control their experience or get help. We regularly consult with experts, advocates and communities around the world to write our rules and we constantly re-evaluate where we need to strengthen them.

AI plays a central role in reducing the volume of harmful online content on Facebook and Instagram. Our online and publicly accessible transparency centre contains quarterly reports on how we are faring in addressing harmful content on our platforms, in addition to a range of other data. This includes how much content we remove, across a broad range of violations, and how much of that content was removed before any user reported it to us.

There are some violation areas where AI is extremely effective. I refer, for example, to fake accounts, where over 99% of violations are identified by our AI systems. An example of a more difficult violation area for AI is bullying and harassment. In this area we removed 7.7 million posts from Facebook and 8.8 million posts from Instagram in the fourth quarter of 2023. Of these posts, 86.5% on Facebook and 95.3% on Instagram were identified by our AI systems and removed before they were reported to us by a user. One of the reasons that AI is not as effective in this harm area yet is that bullying and harassment can be quite contextual and not as immediately apparent as a fake account. That said, our systems are constantly improving. The same metric for the bullying and harassment violation for the fourth quarter of 2022 was 61% in the case of Facebook and 85.4% in the case of Instagram.
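
As a worked example of the figures just quoted, the proactive share of those removals can be computed directly; the snippet below simply restates the Q4 2023 statistics given above.

```python
# Bullying and harassment removals, Q4 2023, and the share identified
# proactively by AI before any user report (figures as stated above).
removed = {"Facebook": 7_700_000, "Instagram": 8_800_000}
proactive_rate = {"Facebook": 0.865, "Instagram": 0.953}

for platform, total in removed.items():
    proactive = total * proactive_rate[platform]
    print(f"{platform}: ~{proactive:,.0f} of {total:,} posts actioned proactively")
# Facebook: ~6,660,500 of 7,700,000
# Instagram: ~8,386,400 of 8,800,000
```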

In addition to the actions we take to remove harmful content, we have built over 30 tools and features that help teens have safe, positive experiences and give parents simple ways to set boundaries for their teens. We have included a link to the timeline of these tools in our written submission. Further information about these tools and features and how they work can be found in our Instagram Parent Guide and our Family Centre. Additional resources on supportive online experiences can be found in our Education Hub for Parents and Guardians. While these centres and guides give parents the ability and resources to navigate our tools and products, we understand that it can be overwhelming for parents to stay on top of every new feature and product across every application.

In the US, the average teenager uses 44 applications on their phones.

We believe that a significant step forward can be taken at a European level to ensure that parents only need to verify the age of their child once and that their child will then be placed into an age-appropriate experience on every single app. In Meta's view, the most efficient and effective way in which this would work would be at the operating system or app store level, although there are other alternatives. This would not remove responsibility from every app to have processes in place to manage age effectively and my colleague, Mr. Miles, can go into the steps that we at Meta take. The question of age verification is complicated; however, we believe that the time has come to move forward with an effective solution that addresses the concerns of all stakeholders, including parents.

I will skip the section on education in the interests of time but am happy to answer any questions on it.

As set out in our submission to the Justice Committee in March, as part of Meta's commitment to transparency, we have published more than 20 AI system cards that explain how artificial intelligence powers recommendation experiences on Facebook and Instagram. In that submission, we described the way in which we use these systems to improve the user experience and make it safer, and we also described the tools and controls available to users to control their experiences.

Finally, it is sometimes claimed that Meta is financially motivated to promote harmful or hateful content on our platforms to increase engagement. This is simply untrue. This content violates our policies and prevents our users from having enjoyable experiences. As a company, the vast majority of our revenue comes from advertising. Our advertisers do not want to see such content next to their ads. It is clear therefore that we are financially motivated to remove content which violates our policies as quickly as possible once we become aware of it.

I hope this gives members of the committee an overview of some of the uses of AI by Meta, and we look forward to their questions.

I thank Mr. Ó Broin. I will open the meeting up to questions and start with Senator Seery Kearney.

I thank the Cathaoirleach and thank the witnesses for taking the time to be here, and in person, which I very much appreciate. My remarks will probably be more pertinently directed to TikTok and Meta because I take the point that X is more for older users.

I come from a place where my view of social media is that, by design, it pushes operant conditioning in behaviour modification. It uses the circular model of trigger, action, variable reward and investment. It involves deliberate manipulation and capture for the purpose of behaviour modification. It captures attention, and it is that attention which is monetised as the commodity to which advertisers can have access. That is the basis of the advertising-based business model here.

I read the advisory of the US Surgeon General, entitled Social Media and Youth Mental Health, and I am concerned about young people and their access to smartphones, which I think should be banned. Brain development has changed. Mental health has been affected by the amount of time spent on social media as well as the content that is thereon. I also read the report from Coimisiún na Meán on video platform services and online harms, and its evidence review.

I met Coimisiún na Meán last week and it urges a whole-of-society response to this, and I completely agree with that. This is not all at the witnesses' platforms' doors by any means. It is for us, as policymakers and legislators, to take action, as it is for the platforms as the technology companies. Parents and caregivers, and children and adolescents themselves, need to set their own boundaries, so I am not abdicating responsibility to anybody else. Do the witnesses acknowledge that harm, that is, the behavioural and developmental modification that has occurred as a consequence of social media and its design? I looked at some of the statistics on mental illness among college students with the advent of the smartphone.

Since 2010, in the United States, anxiety has increased by 134%, depression has increased by 106%, ADHD is up by 72%, bipolar disorder is up by 57%, anorexia is up 100% and substance abuse addiction is up 33%. I have a colleague, Mr. Roderick Cowen, who speaks a lot about how cognitive resilience is left behind in the design of social media and how we need cognitive security for young people in its design. Consequently, we need transparent risk assessments that come from this basis of a mental health focus, and impact assessments need to be published. I do not believe there has been transparency around that.

Design and development decisions within social media companies need to prioritise safety and health, including children's privacy and age verification. I note that WhatsApp reduced its minimum age recently. I would not approve of that. The response to reporting of problem content, for all of the companies but particularly X, is abominable.

Social media needs to come with a mental health warning. If an individual of any age, but particularly a young person, is on a social media platform, a warning needs to come up that he or she has been on for 15 minutes and needs to take a break from it. There needs to be a health warning that comes up and limits the time people spend, but that is in contrast to the business model, which is about attention holding. I would like to know what the witnesses' position is and whether they acknowledge the harms. I would also like to know about cognitive resilience and cognitive security and what steps they are taking in that regard. We can talk about all the harmful content. That is all agreed, but that is a distance from the fact that their design is something that has brought behaviour modification.

Ms Setter wants to respond. Anyone else who wants to reply should indicate.

Ms Chloe Setter

I thank the Senator for her question. We totally appreciate concerns around screen time addiction and the well-being of our younger users - those aged 13 and upwards - on TikTok. As with anything in life, we need to find that balance. We believe that digital experiences should bring joy. They should be positive experiences. There is a great opportunity for young people to learn, to socialise, to discover new things and to develop new digital skills. Having a positive relationship with apps, platforms and the online world is not simply about measuring time. It is about the quality of the time spent and what they are doing. If they are learning things or if they are socialising, there are different types of experience that young people can have.

I am sorry, but that is not what I am asking. I am asking about the fact that a young person spending a significant amount of time on social media is interrupting his or her development of social skills with other human beings, not learning things on social media. I agree there are positive possibilities here but that is not what I am getting at.

Ms Chloe Setter

Absolutely. There is currently no collective agreement among experts on the amount of time that is good but we have been working with external experts on this. We have worked with the Digital Wellness Lab at Boston Children's Hospital to develop a screen time limit, and all under-18s on TikTok have a 60-minute default screen time limit. Once they reach the 60 minutes, they have to input a passcode if they want to keep watching. The reason we do that is based, as I said, on working in consultation with experts on how young people approach these kinds of things and being intentional about the amount of time they spend. Giving them those nudges, those pieces of information such as that they have spent 60 minutes, helps them develop critical thinking and gives them a reminder. We also surface other take-a-break reminders. We do not send push notifications, for example, to 13- to 15-year-olds after 9 p.m. or to 16- to 17-year-olds after 10 p.m.
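
A minimal sketch of the default limit and passcode gate described above, assuming a simple per-day usage counter; this is illustrative structure, not TikTok's code.

```python
DEFAULT_LIMIT_MINUTES = 60   # default daily screen-time limit for under-18s

def can_keep_watching(age: int, minutes_used_today: int,
                      passcode_entered: bool) -> bool:
    """Under-18s who pass the daily limit must actively enter a passcode."""
    if age >= 18 or minutes_used_today < DEFAULT_LIMIT_MINUTES:
        return True
    return passcode_entered
```

Under family pairing, described below, the passcode would be held by the parent rather than the teen, so the gate cannot simply be overridden.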

I refer to research we have done, for example, with Internet Matters, an NGO. We did work with teens and parents in Ireland, the UK, France, Italy and Germany. We asked young people and parents about what worked for them in this sort of scenario and they told us they wanted more data about their usage. We introduced a dashboard which gives them a weekly notification or recap of how much time they have spent and what time of day they were spending time, and gave them screen time breaks, which we have done with the screen time limit.

A key thing to mention here is perhaps family pairing. We have some excellent family pairing tools that allow parents and guardians to customise their teens' experience. They can set the screen time limit.

You can set the time of day. It can be weekends only or school holidays only. It is a very customisable experience. In that scenario, the young person would need to get the passcode from their parents. They cannot simply override it. There are things parents can do to help, such as to have those conversations about what a healthy amount of time is. That might vary depending on the child, the time of day or what sort of day it is, a weekday or so on.

Mr. Dualta Ó Broin

I thank the Senator for the question. In a minute, I might ask Mr. Miles to go into the approach we take to designing our products generally and some of the tools in respect of time limits and so on. There are two things I will mention. The Senator mentioned risk assessment. This is one of the requirements of the Digital Services Act. Systemic risk assessments have been submitted to the European Commission. In our case, they have also been provided to Coimisiún na Meán. Part of that is that providers are required to assess the risks that might arise from their services in a whole range of areas, including mental health and the protection of children. Providers are also required to describe how they are mitigating those risks. Obviously, the regulators will look at whether we are assessing and mitigating the risks correctly. They are not going to be made public immediately. That will be done on an annual basis. The first are to be published in October or November of this year. I just wanted to say that the risk assessments are already provided for. They are covered in the Digital Services Act.

Is that in the context of self-harm, suicide and toxic body image or does it relate to the actual function of social media, the scrolling, which is, in and of itself, a variable reward mechanism that changes cognitive function?

Mr. Dualta Ó Broin

The Digital Services Act goes into quite a lot of detail as to what risks need to be considered and what feeds into those risks. The design of the products and the recommender systems are among the things repeatedly mentioned and explicitly called out in the Digital Services Act. That is there. It is just not public yet. It is being considered by the European Commission.

On the whole-of-society approach, while I am in no way trying to absolve any responsibility on our part to do what we need to do in this space, this is why we are particularly proud of the funding we have provided towards the establishment of the FUSE programme. That is a global first and something Ireland can be very proud of. It follows the UNESCO whole-of-education approach to bullying and harassment, which includes the online element. That is designed to build resilience and the healthy relationship teens and the whole school community can have with the online world, including social media. I will ask Mr. Miles to say a few words on design and so on.

Mr. David Miles

It is a very good question. It is clearly a concern among society and among parents. I have been in the online safety space for approximately 20 years, since well before children were a dominant factor on the Internet. There is no question that the youth demographic has grown dramatically not just in Europe, but worldwide, so things need to change, which is only right. The first thing to say on the design of the products is that, if you look back to where they were eight to ten years ago, you will see that they were designed predominantly for adults. There has been a significant shift as that youth demographic has grown. I have been involved in NGOs, UNICEF and other environments in that kind of space and I know that a spectrum of harms has emerged that we have to deal with. There are some really important cognitive issues that need to be dealt with. We work really closely with experts, researchers and academics in this area and have done so for many years. I have worked with the UK Council for Internet Safety's evidence group for nearly ten years. We try to take a really evidence-based approach. In the last four years, I have implemented what are effectively codesign workshops with children and parents to look at how they are using the technology and to get their input and participation in how we design feeds and other functions. As a result, we have implemented close to 30 different tools and functions over the last two and a half years based on that. What was really interesting about getting young people involved is that there were some things we assumed would work well but which did not while there are others they really took on board, which has really informed how we have done things.

I refer to the take a break prompt, which is a really standard concept in terms of taking a break and being prompted to take a break after a certain amount of time. There is also quiet mode, which is shutting down from one's followers so that one can have a rest from that overnight and so on. Even with the parental supervision tools, we have worked with the United Nations Convention on the Rights of the Child and the Irish Data Protection Commission's guidance on children's fundamentals to implement the best interests of the child framework, which puts the participation and the rights of young people into the design of those products. For example, if one implements a parental supervision tool on Quest or on Instagram, both the child and the parent opt into that and it is a teachable moment when they can talk about that. It is about features and functions like that.

I will give a final example. Last week we announced an on-device nudity filter, which will effectively put an interstitial over every single nude image and will warn anybody sending an image that they need to think about what they are sending. That is a really big move for us globally. We believe it will deal with a lot of issues in schools around unsolicited images. It also complements our strategy around StopNCII, Stop Non-Consensual Intimate Image Abuse, which is about women's safety and taking such content down. We are working with the National Center for Missing and Exploited Children, NCMEC, on this. It allows us to hash images and have them sent to NCMEC or to a women's safety organisation to try to have that content taken down. The technology we use is multi-platform, so we open source it to other platforms so the content can be taken down across multiple platforms.

We as a business are evolving very quickly. The committee's concerns are justified. In areas like suicide and self-harm we rely on an expert committee of more than 27 experts in that area to guide us on getting those controls right. In January, as a result of their advice, we no longer show any suicide, self-harm or eating disorder content really in any form through recommender systems, reels and stories. We have made a dramatic move in that area to cut that back with the guidance of experts to make sure that youngsters are not cognitively affected but also that they can signpost to our many partners and experts to get the assistance they need.

The nature of social media has changed in the past ten years and I feel we are responding to that. We will always need to do more because it is a fast-moving space. I thank the Senator for the question.

We will move on. Senator Tom Clonan is next.

I thank the witnesses for coming here today. I have listened very carefully, particularly to Senator Seery Kearney's questions. The Senator has covered a lot of ground that I would like to cover so some of my questions might be fairly specific. With regard to X, reference was made to child sexual exploitation and to the fact that X first of all removes material and then reports it to NCMEC. Does X report any of that to law enforcement?

Please forgive me as a layperson who is a 58-year-old middle-aged man who is not very tech savvy, but my next question is on the anonymity and the lack of identification on X. I have used X and Twitter as a journalist for 13 or 14 years and as a politician. I find the vast majority of very negative trolling or abuse comes from accounts where the person hides behind anonymity. Is that something that can be changed? Would the witnesses see that as a desirable thing or an undesirable thing? Would it be a curb on freedom of expression? Is it possible? Is there any other environment in which people cannot be identified? On the road people have a manner or means by which to identify other road users and there is a kind of social contract around most forms of communication, but if one has absolute anonymity, in my subjective experience, that anonymity is generally associated with any of the negative experiences I have had on the platforms.

With respect to Meta, the witnesses answered the question about putting up notifications telling users they have used a certain amount of time. Some of what the witnesses describe is reactive in terms of how companies respond to possibly unanticipated or undesirable outcomes from technological innovation that is designed to drive attention, engagement and interaction. Do the companies have a fundamental philosophical or ethical position to which they adhere and, if so, what is it? Is it rooted in educational philosophy or classical moral philosophy such as Platonic or Aristotelian philosophy? Where is their ethical or philosophical domain and is it proactive?

I am sure all the witnesses heard about the recent publication and broadcasting of findings from research about the extreme grooming, self-harm, eating disorders, body image and suicidal ideation that appear to be phenomena across the different platforms. Is this something companies are proactively trying to remove? The witnesses said they identify this material and take it down. Is there a way of blocking it entirely? Forgive me if some of the questions sound like stupid questions. I am speaking as a layperson, although I do use all of the platforms except for TikTok because I saw what it did to Simon Harris, God help him. He is the TikTok Taoiseach. It has taken over his life entirely. It has gone in an unanticipated direction.

Ms Claire Dilé

The Senator's question is a very good one. Child sexual exploitation is the most serious violation of all but it is also a criminal offence, so obviously we work with NCMEC because we send it information so that it can prosecute criminals and build up a bank of images of child sexual exploitation on social media that will help us be more effective at fighting the phenomenon. We co-operate with law enforcement because, as I was saying, it is a criminal offence, so it is very important for us to be able to work together with law enforcement agencies so that they can identify the people who post this kind of material on X and then prosecute them. That is a very important point.

Does X do mandatory reporting to law enforcement or does it wait for law enforcement to come to it?

Ms Claire Dilé

We share information with law enforcement when it comes to child sexual exploitation content. We also have an obligation to share information with law enforcement in cases of imminent threat to life or physical harm to a person. We also have a legal obligation to do so under the Digital Services Act for child sexual exploitation or any threat to the life of a person, so we would share with the point of contact in the EU member state and, by default, with Europol. We also share with the Irish law enforcement agency.

It is also a good question when it comes to anonymity. There is no such thing as anonymity on X. It is more the possibility for people to use pseudonyms. The reason we do it is that some people might feel more comfortable using X without disclosing their real identity, for example, if they are not comfortable disclosing their sexual orientation, if they are a whistleblower or if, for one reason or another, they do not want to use X under their real identity. In some countries, this might be more important for them. I assure the Senator that even if someone uses a pseudonym, we work with law enforcement and, when we co-operate with law enforcement, on request, we would give it some information to enable it to identify the person behind the pseudonym.

For instance, it can be IP logs, an email address or a phone number, so even if a person is using a pseudonym on X, if we have a request from law enforcement we will still give the information needed to identify a potential offender. That is the first thing. The second thing I want to say is that the fact people have anonymity does not mean they have impunity on social media. Even if they are not using their real name, our rules will still apply and, if they are violated, enforcement can go up to and including account suspension. I will leave my colleague, Ms McDade, to answer the question on suicide and self-harm.

Ms Niamh McDade

I thank the Senator for the question. It is such an important topic. To clarify, X prohibits any content that promotes or encourages suicide or self-harm, as well as eating disorders. It is a clear violation of our rules and policies. We respond to user reports on that and we also remove content on a proactive basis via our safety team if they see that content. In addition, we have partnered in the past with different organisations to trigger a prompt to appear when a person searches for key terms associated with suicide and self-injury and lots of different terms in that space, for example. That is a key intervention point where we will direct users in Ireland, for example, to the Samaritans to provide them with help and support. We have that step in place as well, but such content is a clear violation of our rules and policies.

To give some figures on that, in 2023 more than 900,000 posts and 8,000 accounts were removed for violating our policy around promoting suicide and self-harm. Those are some useful figures there as well. We also recognise there are communities on the platform that support people, such as those who are recovering from an eating disorder or engaging in conversations around suicide and difficult conversations there. Some of that conversation can be helpful to individuals. We are careful to protect some of those conversations but we prohibit any content that would encourage that. It is a careful balance and that is why our content moderation team and safety team are geared towards striving to have balance between essentially removing violative content, but allowing conversation to happen on that basis as well.

Ms Chloe Setter

The Senator mentioned a range of different harms and potential harms there. I wanted to set out that generally we work to remove content that can be violative on the platform proactively as much as possible, so using technology and artificial intelligence to detect content. We believe we are fairly successful in doing that with regard to the proactive removal. Data from our third quarter of last year, for example, indicates 96% of all content removed was done proactively without a report from a user, with 77% of that having zero views and 91% of it removed within 24 hours. That is across all potential violative content.

If I may, I will focus a little on child sexual abuse because I think we all agree it is a particularly egregious crime. One of the top priorities for us as a platform is to ensure we do not have that content on the platform and we do not provide a safe place for predators. TikTok is designed, essentially, to prevent the risk of harm from child sexual abuse. We do not allow under-16s to do direct messaging or private messaging and that is purely because we recognise the harms that can happen in private spaces, like grooming. We do not allow under-18s to livestream, again because there is a higher risk of violations happening when something is happening live. One cannot download the content of anyone under the age of 15 and all content that goes onto TikTok goes through a review using automated technology to look for child sexual abuse content. It looks for hashes of known child sexual abuse images. All content uploaded to the app goes through that process and that helps to prevent people trying to reupload violative and illegal content. When we become aware of such content we, like colleagues, report to NCMEC and take immediate action to terminate accounts. We do this proactive detection on a voluntary basis. In our most recent reporting to NCMEC, 83% of the reports were what is called "actionable" and that is something we are really striving to increase. That basically means it is quality information that can be handed to law enforcement and help to protect and safeguard individual young people who may be being abused. The industry average is approximately 50% and we are trying really hard to ensure we give quality information in order that it can help protect children in real life.
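
A schematic sketch of the age-based feature gating just described, with direct messaging restricted below 16 and livestreaming below 18; the field names are hypothetical and a real system would apply many more checks.

```python
def features_for_age(age: int) -> dict:
    """Switch higher-risk features on only above the stated age thresholds."""
    return {
        "direct_messaging": age >= 16,   # private messaging disabled for under-16s
        "livestreaming": age >= 18,      # going live disabled for under-18s
    }

# Example: a 15-year-old account gets neither feature.
print(features_for_age(15))   # {'direct_messaging': False, 'livestreaming': False}
```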

The platform is designed to be inhospitable to offenders. We block key words that are known to be used. We work with partners and external agencies to know what those key words are. We block known URLs to child sexual abuse content, provide deterrence messaging if someone is searching for that type of content and do not encrypt our messaging spaces, which makes them more inhospitable to would-be offenders. We are part of a number of partnerships and expert agencies on tackling child sexual abuse and welcome the ongoing debate in Europe around regulations to tackle child sexual abuse material. If there are further questions, I and my colleague Ms Moss are happy to answer.

Mr. Dualta Ó Broin

The Senator mentioned philosophy. In our transparency centre, we describe our philosophy in respect of the values we employ when we are thinking about the safety of the platform and the privacy of the users.

It might be interesting to the Senator to know we follow the United Nations Guiding Principles on Business and Human Rights, so we have an annual human rights report which goes through the risks in our business from a human rights perspective and how we are thinking about and mitigating those. I am happy to share links to those reports afterwards.

On being proactive, there are two things. In addition to taking down content as rapidly as possible, there is something we are doing to impact on the behaviour of users. If a user posts a comment on Instagram, for example, which looks like it could be abusive or against community standards, he or she will be served with an interstitial asking if they are sure they want to post it because it looks like it might violate community standards. That is successful in trying not to have abusive content put up in the first place. Second, Stop Non-Consensual Intimate Image Abuse, StopNCII, and Take It Down are two successful examples of preventing abusive content being uploaded. StopNCII allows users who think they are at risk of a previous partner or somebody else sharing intimate images of them to upload those images to the service. That means those images cannot be uploaded to our services. Those are two examples of where we stop it getting on to the platform in the first place.
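
To illustrate the pre-posting nudge described above, a minimal sketch assuming a text classifier has already produced a "likely violating" score for the draft comment; the threshold and names are placeholders, not Meta's systems.

```python
def should_show_interstitial(violation_score: float,
                             threshold: float = 0.8) -> bool:
    """Return True if the draft comment should trigger an 'are you sure?' prompt.

    violation_score is assumed to come from a text classifier scoring the draft
    against community standards (0 = benign, 1 = almost certainly violating).
    """
    return violation_score >= threshold

# Example: a high-scoring draft prompts the user before posting.
if should_show_interstitial(0.91):
    print("This looks like it may violate our community standards. Post anyway?")
```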

Mr. Miles might talk about NCMEC.

Mr. David Miles

We detect more than 85% of all cyber tips. We have invested hugely in detecting such content. It is heavily automated, inevitably. There is some human moderation but we can do that at scale. Of that content, 99% is removed before a user even sees it. When it goes to the clearing house, it is categorised by severity and passed back to law enforcement, which acts on that severity scale. That is really important in the way we do it.

Hotlines play a key role. The public report these kinds of things to hotlines. The INHOPE network and IWF are important partners to us, as are organisations like the WeProtect Global Alliance, which are dedicated to tackling child sexual exploitation and abuse, which is probably one of the most heinous crimes.

Sadly, there is a large familial offline dimension to child sexual abuse that has to be tackled too. We have emphasised moving towards preventing this kind of content being shared because every image shared revictimises the victim, even if it does not lead to contact offending. We have invested heavily in safety alerts and pop-ups so that if somebody searches for a term in one of our apps, they are told the content is illegal and signposted towards Stop It Now and a range of hotlines and helplines that can help them think about what they are doing and caution them. We need to deter them from sharing this kind of content, which they might have shared out of poor humour or bad taste but which nevertheless revictimises the victim.

Law enforcement plays a key role. To give an indication of the numbers, in the two years from 2020 to 2022, we dismantled 27 abusive networks. In January 2023, we disabled 490,000 accounts for violating our child safety standards. It is at significant scale and that is why artificial intelligence is really important, as is the expertise we have in our team to try to take this content down and work closely with law enforcement. I hope that answers the question on NCMEC.

They are depressing figures. I commend Meta's success in identifying the posts and taking them down. It is dismaying. I thank the witnesses.

The questions I had intended to ask are similar to those of other members, so I will try not to repeat them. I thank all the witnesses for attending. We cannot talk about AI without talking about the platforms' offerings to children. Fianna Fáil held its Ard-Fheis at the weekend and the Tánaiste described the impact on children from social media and from being constantly online as the new public health crisis of our time. That is how serious this is. I agree with him and support his warning to social media giants to get underage children off the apps, or else the Government will force this in order to deal with the new public health crisis. That is what the Tánaiste is saying. I speak to parents daily and there are huge concerns. We have sent out clear guidance to schools to help parents, and soon we will see funding to support the banning of smartphone use during school time. This is happening.

As an example of how children are affected, a constituent of mine runs a well-known physiotherapy clinic. He will not mind me saying its name, the Realta Clinic. He has been advocating for standard posture training for children in schools to prevent posture-related problems in later life, and the posture problem is directly related to children being on their phones at all times. This is a long-term problem and we really need to work to see what we can do about it. The Meta representatives spoke about working with communities throughout the world and holding workshops, which is very good, and they referred to the 30 types of tools they have introduced, which is fine, but I am a firm believer that when we speak about harmful content and its reporting, there has to be a timeframe whereby it can be introduced quickly.

Others have spoken about fake accounts, for example. How can we sort them out? There are issues in that regard. As one representative said, these platforms are businesses. There were references to mental health, body image and other issues. There is great concern and there needs to be change. I agree with the Tánaiste that this is the public health crisis of our time. While it is great to have the representatives before the committee, at the end of the day our children are our future and they are now very adept with smartphones. What steps are the platforms taking to get underage children off their apps? That is the biggest concern facing us.

I could go on with further questions but, as I said, they are similar to those of previous speakers. I again thank the witnesses for attending.

Ms Susan Moss

I might hand over to Ms Setter in a moment but will address some of the Deputy's initial points. I agree schools are a place for education, not for smartphones and the Internet. On reporting timelines, when an item of content on TikTok is reported, the average time it takes to action that report is two hours. That gives an idea of how quickly we are working in this space. Ms Setter might speak to age verification and the Deputy's concerns in that regard.

Ms Chloe Setter

Age assurance, or age verification, is a very complex and sensitive issue that requires a multifaceted, holistic approach. I can talk through how we at TikTok try to tackle the issue of underage users. We see it as an ongoing process that begins at the start, namely, at the point of download. If the device’s account settings, whether on the phone, iPad or whatever, are correct, technically an under-13 should not be able to download the app in the first instance. We recognise, however, that that is not always the case, so the next phase we have is a neutral age gate.

By “neutral”, I mean it does not give any information about what age you need to put in and it just requires a date of birth. Again, we recognise that not everyone is truthful and young people try to find ways to circumvent the systems, so that is why the second phase is the detection to try to find these accounts which we believe to be held by those under 13. All of our moderators are trained to look for this and to flag and suspend those accounts if they see them. We receive a lot of reports from parents telling us they think their child is on the app and should not be, and users can report in the app itself. We also use technology to look at keywords, bios and different information to help surface those accounts. We take a safety-first approach, so if there is any uncertainty, we always suspend the account and the onus is then on the user to verify their age using a variety of methods like ID, credit card, facial age estimation and so on.
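
A minimal sketch of a neutral age gate of the kind described: the screen asks only for a date of birth and reveals no threshold, while the check itself enforces the minimum age. The code is illustrative only.

```python
from datetime import date
from typing import Optional

MINIMUM_AGE = 13

def age_on(birthdate: date, today: date) -> int:
    years = today.year - birthdate.year
    if (today.month, today.day) < (birthdate.month, birthdate.day):
        years -= 1
    return years

def passes_neutral_age_gate(birthdate: date, today: Optional[date] = None) -> bool:
    """The prompt itself gives no hint of the threshold; the check does the work."""
    today = today or date.today()
    return age_on(birthdate, today) >= MINIMUM_AGE
```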

It is a challenge the industry faces. To give the committee a sense of the effort we are putting in, we remove on average 20 million suspected underage accounts every quarter globally, so we are removing a lot of these accounts and trying to make sure that we get them off the platform as soon as possible. We do not want to have under-13s on the platform, which is designed for those aged 13 and upwards.

There is currently no agreed best practice or position on age assurance as to what "good" looks like. It is something that is very much in the minds of regulators, and we are meeting with regulators across Europe and the world on this topic pretty regularly. There are also standards being developed in this space by the ISO to look at what best practice looks like and what competence levels apply. It is very much an evolving space so we are very much in the mode of listening, assessing and trying to make sure we have the best current approach for now, but also looking at what we can do in future.

Mr. Dualta Ó Broin

I thank the Deputy for the question. I saw the coverage of the comments from the Tánaiste and the Ministers, Deputies Foley and Donnelly, over the weekend, so we are very aware of the concerns in that space. I would mention the FUSE programme again. It is great to see that the Minister, Deputy Foley, and the Department of Education are now bringing that on and further developing it, which is fantastic to see.

In the age verification space, I will ask Mr. Miles to talk through what it is we do at the moment. We have been hearing this not just from the Minister, Deputy Foley, and the Tánaiste but across Europe, and not just from policymakers but from regulators as well. There are a number of initiatives under way and a number of working groups have been set up by the European Commission to look at the question of age, and that is also tied to the question of a potential European ID.

There are complications in this space. For example, there are privacy issues to be dealt with and resolved. That is why we are taking the position of trying to ascertain the ideal outcome of all of this, which is that the entire ecosystem would be able to rely on a reliable signal in regard to the age of the user. There are a number of points where that could be done. One is at the app store level and there are potentially others, including the telcos and the devices themselves. We believe that if that were mandated at a European level, that signal could then be transferred not just to us but to every single app, including the smaller apps that are just starting and can sometimes flare up in terms of popularity among younger users. That would be a step forward and would be a resolution of the age verification question.

We would still have huge responsibilities to ensure all users are then placed into an age-appropriate experience, but it would move us beyond this question of how to solve age verification at a European level. We believe it would be more effective for Ireland to advocate with the European Commission, which is looking at this area and considering how best to bring forward a harmonised EU-wide age verification solution, than to proceed member state by member state, which brings us back to fragmentation, the very issue we are trying to avoid. Mr. Miles might address the steps we take at the moment in terms of managing age.

Mr. David Miles

We ask for a birth date on sign-up and, clearly, you have to be 13 or over to be on Facebook and Instagram. We use artificial intelligence – what are called age classifiers.

They are incredibly effective at monitoring the first few months of a young person's activity. We also place young people into a private-by-default setting in which they are limited in the amount of messaging they can do and the people they can follow. That private-by-default setting means certain things are not visible to other people, so they start out in a restricted position.

What we find in those first few months, for example, is that if a child is ten and somebody posts something saying "happy 10th birthday" but they say they are 13, we will quickly spot that, and we take a lot of content down. Artificial intelligence has allowed us to do that at scale in recent years. The technology was not there a few years ago, so we are very encouraged by that.

We are also the first company to roll out age estimation globally for those under 18 who want to change their age. What is really interesting is that we used a technology from a UK company called Yoti that estimates age effectively by having people do a video selfie. A total of 90% of the young people who tried to change their age and used age estimation stayed on the platform and proved to be authentic users. It really stopped them from trying to change their age part-way through that 13-to-18 journey. It is those kinds of things that are very important.

The other really important thing is that we are focused on delivering age-appropriate content for 13-, 14- and 15-year-olds. A 13-year-old is a very different person from a 16-year-old. In fairness, France and the UK have both tried to implement age verification for pornography and both have stopped doing so, so it is a complex issue. However, if we can get age verification right in the way Mr. Ó Broin talked about, through industry standards, we think the delivery of age-appropriate content will make the user experience a lot better and will mean regulators can be satisfied we are delivering age-appropriate content to authentic users, which is very important.

We will suspend the meeting, and when we resume, Senator McGreehan will be next.

Sitting suspended at 4.42 p.m. and resumed at 4.52 p.m.

We are resuming in public session. We will get straight back to the questions. Senator Malcolm Byrne is next.

I am grateful for the opportunity to attend this meeting. I have dealt with a number of the witnesses previously at the media committee. I appreciate much of the work they do, but they will forgive me for saying that I am not convinced, particularly when we talk about the capacity of AI to protect children online in the circumstances concerned. Even though Mr. Ó Broin talked about how Meta's AI is able to pick up many of the questions, we are not guaranteed the safety of children or young people on the platforms. I appreciate that it is never possible to have a 100% guarantee, but I think a great deal more can be done.

I have three questions. The first is for Ms Moss and Ms Setter. They will be aware that a "Prime Time" programme is due to air on RTÉ this evening. I obviously have not seen the programme, but I understand that it will show how it was relatively easy for somebody purporting to be a 13-year-old to set up an account, access TikTok and see content that may not be deemed appropriate. It is as easy to do it with the other platforms, but TikTok has a significant teenage population on its platform. Are the witnesses not concerned about that?

Ms Susan Moss

I thank the Senator for his question. We are aware of the report. I have not seen the full report, which will not be aired until this evening. I want to assure every member of this committee, as we assured Coimisiún na Meán this morning, that we are looking into this as a matter of urgency. I want to say that we do not permit the glorification or promotion of self-harm or suicide on TikTok. However, RTÉ's investigation has revealed some elements that are of concern to us and we have taken immediate action on those. Context is important here. Like the Senator, I have only seen the reporting of this matter this morning. As a result, I can only speak on the basis of what I have seen. We estimate RTÉ to have seen 1,000 videos during its experiment. From those, it flagged ten videos of concern to us through screenshots.

The majority of them were not violative but we absolutely recognise that they should not have been shown to anyone under the age of 18. Of the content that was shown to us by RTÉ, two of the videos were violative, and we have taken action on them, and one was not violative. As far as I am concerned, one negative experience is one experience too many. We have gone back and are looking at it and taking action. RTÉ's experiment has to be seen in the context in which it was conducted, in the sense that it is not, as far as we are concerned, representative of how an individual normally experiences TikTok. As far as we can tell from the reporting, RTÉ looked at a number of videos but skipped videos unless they were related to mental health. The average person does not interact with TikTok that way. They have a variety of interests, from camogie to interior design. No one individual looks only at mental health content. I need to see the full reporting in its entirety, but I stress that we are committed to continually looking at this area and at how we can strengthen our processes, particularly for our younger users.

I do appreciate the point Ms Moss is making. My concern frequently relates to a phrase used today by the Australian eSafety Commissioner, which is that online content, once seen, cannot be unseen. I appreciate there may be hundreds of videos about camogie or interior design, but the issue is the dangerous material that people have seen. All present will be very familiar with the work of the eSafety Commissioner in the context of the Online Safety and Media Regulation Act. In many ways, we wanted the position of the online safety commissioner here to be based upon that model. Her office today issued formal notices to Facebook and X over the misinformation or disinformation relating to some of the riots and trouble that has been going on in Sydney in recent days. It does not appear that the AI picked up on some of this, but what was highlighted included extreme and gratuitous violent material that was available to be viewed by teenagers. Some of this disinformation and misinformation originated from Ireland and Irish X accounts, and it is currently being shown on Australian television.

I will put this point to Meta and X. I get that AI is picking up a lot of the misinformation and disinformation and I get that there is human content moderation. Clearly, however, there are concerns that teenagers and the wider population are being exposed to serious misinformation and disinformation, including, in the words of the eSafety Commissioner, "extreme and gratuitous violent material". There was a specific instance of that today. These are real-life examples. No matter what Meta and X are doing, they are not succeeding. I ask Mr. Ó Broin or Mr. Miles to provide Meta's perspective on this issue.

Mr. Dualta Ó Broin

I can come in on that point. I have not seen the report in question. Obviously if something like that is happening, it should not be happening. We have very strict policies in place in respect of extreme and gratuitous violence. In addition to those, we have policies in place in respect of under-18s seeing that type of material. In some instances where content is not in violation, a warning screen will be placed in front of that content. It depends on the severity and nature of it. I will come back to the Senator on the Australian example-----

I appreciate that. I acknowledge that there are efforts to address this from Meta's perspective, particularly in terms of human content moderation. I know from our engagement with Meta at the media committee we have acknowledged that. It is still a concern, however. The concern is that once it is seen, it cannot be unseen. If a teenager or, in some instances, a younger person who accesses Meta's platform sees it, that can have profound consequences.

Mr. David Miles

That is one of the reasons that back in January we really tightened the threshold there. If a youngster is even considering suicide or self-injury we will take the content down and send it for human review straight away. That aims to stop that situation where something has been seen and cannot be unseen. That is really important.

The other thing, in addition to the experts we have, is that post the Molly Russell inquiry, we are running a workshop with the Molly Rose Foundation. Those kinds of cases can help to inform our thinking and that of others in terms of the way that we deal with these issues and in creating nuanced approaches to address things like suicide notes or the livestreaming of those kinds of things. We need guidance on those kinds of things to try to make sure that we get them right. We are very restrictive in that area now and we feel that we are making good progress. There is always more we can do but it is an area that we feel particularly strongly about because suicide can stem from bullying, harassment and from other harms too. It is important that we catch those behaviours as quickly as possible.

Mr. Dualta Ó Broin

If I might add to that, a large part of content moderation work is realising that there will never be a "job done" moment where we can say it is all finished. There will always be evolving threats and there will always be incidents arising around the world. One of the issues with a set timeframe, for example in relation to individual complaints, is that the way our systems work is by prioritising the most harmful content and getting it reviewed and removed as quickly as possible. That is how we design our systems to address that question. I will look into the example the Senator gave and come back to him.

I appreciate it. I will be honest with X and say that I have at least been somewhat impressed by the efforts of TikTok and Meta, but I have not been impressed on the part of X. I raised this at the media committee before, that is, the fact that it is getting rid of human content moderation. It is a really serious problem. The model whereby recommendations are driven by likes and shares is a serious problem. X needs to change its recommender or algorithm system. Other platforms need to do so too, but X in particular does. I talk to school groups and youth groups and ask them about the content they see. The platform they express the greatest concern about is X. They tell me what they see on X and the content is far more gratuitous, far more violent and far more sexual than that on other platforms. One sees far more trolls, bots, misinformation and disinformation on X. Its AI model is not picking it up. Whatever about not picking it up for the wider population, not picking it up for children and teenagers is really dangerous. What I am asking is whether the witnesses accept that this is a serious problem for their platform, more so than for any of the others. What action is X taking?

Ms Claire Dilé

I thank the Senator for his question. First of all, what happened today was a terrible incident and, for the people who came across this content, of course it was terrible. Our objective is, and should be, to be proactive. I agree with the Senator that we can make more of an effort in making sure that the content is both detected and removed as quickly as possible. I also agree with him when he says that one system on its own, such as AI, will not be able to catch everything. In the case of our platform, we have a lot of content that is text, and AI might not be as good at catching text content as it is at catching images.

The Senator is right that we have to make more progress and invest a lot more in technology and people. Members might have seen that we announced about two months ago that we are launching a centre of excellence for moderation in Austin. We want to grow and, in particular, we want to have moderators of the service who are fully employed by X. We really want to get better at moderation, relying both on people and on technology. I agree with the Senator that we are not-----

I am sorry for interrupting Ms Dilé but how can she argue that when in November 2022, prior to Elon Musk taking over X, Twitter as it was then employed 5,500 human content moderators? Today it employs 2,500. That to me is not a sign of a commitment to human content moderation.

I agree that technology can do a lot of good stuff, but it will not capture nuance and everything else. Does Ms Dilé not accept that more than halving the number of human content moderators shows that X is not committed to this and that it could become over-reliant on AI?

Ms Claire Dilé

As I was saying, we are putting in place a variety of tools and policies in the context of enforcement action. Technology is one of these. With AI, we are getting better at working at scale, because it is true that we have to work at scale and AI assists us in that context.

We went through a transition last year. This has resulted in lay-offs. This is public knowledge and Senator Byrne is aware of that. Now we are in a place where we are rehiring people in-house. We are building in that direction. I assure the Senator that the direction of the company is to hire more people to do this work.

I am sorry to interrupt again, but could Ms Dilé quantify what she means when she talks about hiring more people? Will X be returning to 5,500 human content moderators, which was the kind of number it had prior to the Musk takeover?

Ms Claire Dilé

I am sorry, but could Senator Byrne repeat his question?

Ms Dilé says X is hiring more people. What are the numbers? For instance, by the end of this year, how many human content moderators will X have globally and within the European Union?

Ms Claire Dilé

I cannot give Senator Byrne this information because I do not have a specific number, but I am happy to follow up in writing with more information on the questions posed.

I might ask that that would happen.

One of the problems with some of the content that viewers, including teenagers, are seeing is that the algorithm for the recommender model works on the basis of likes and shares. Given that about 20% of accounts on X are bots or trolls, if I am a bot or a troll and I want to push a particular agenda, that is what I do. Does Ms Dilé not accept that is part of the problem?

Ms Claire Dilé

The way the recommender system works is that, first, you can use a timeline that is recommended for you and algorithmically organised, so it is non-chronological, but you can also decide to switch that off and not follow an algorithmically organised timeline. The system then draws on a number of pieces of information, such as the people you follow and your interests, and it also aims not to show you a tweet you have already seen. Content that is in violation of our rules will not be recommended to people. For example, if we find content that has violated our rules, we do not push it to people; we de-amplify it in the timeline.

In addition, there is an element of our policy enforcement called "freedom of speech, not reach", under which content involving certain policy violations, for instance violations of our policies on civic integrity, hateful conduct or harassment, is drastically de-amplified on the platform. With the recommender system, we de-amplify everything that has been found to be in violation of our rules. We try to show content to people who have an interest in it, but we do not want to show people content that violates our rules. When it comes to sensitive media, adult users of X can decide whether or not they want to see it. They can decide, for instance, whether they would like to see violent content relating to the war in the Middle East. They can decide to see it or not. If users are under 18, however, they will by default be in a locked environment in which they will not see such content. There will be filters on the content or the media and, if they try to click through, they will not be able to see it. That is our way of trying to protect the younger audience, but also of making sure that content in violation of our rules is not pushed to people. When we apply the "freedom of speech, not reach" label, it is no longer possible to share the content except, as Senator Byrne says, in very limited situations.

I appreciate the protections for children, but part of the difficulty there is that X is placing a lot of responsibility on the user. That is important in particular when one thinks about 13- to 18-year-olds who may not be fully aware of how the systems work.

It is a general challenge on the educational side. I still have to ask why the platform does not take down some of those accounts far more quickly. I accept the point about anonymity or pseudonymity and understand why certain people like to be or have to be anonymous but about 20% of the accounts on the site are multiple accounts, bots and trolls that are constantly engaging in disinformation or incitement to violence. The platform continues to allow them to exist without taking action. It expects 13-, 14- or 15-year-olds to know that they need to switch off and while I accept that some of them will, it is the wrong approach. The responsibility lies with the platform to take action. The regulator will need to take a far greater role here but I would hope that the platform would be a bit more proactive. I am saying this on foot of my experience, having watched this over the last number of years. TikTok and Meta know what my criticisms are but it must be said that those companies are making efforts to address some of the concerns. Having listened to our guests today, we are not getting that from X.

Ms Claire Dilé

I will focus on the protection of minors because that is the topic of our discussion today. First, minors represent less than 1% of our audience but we have a responsibility to protect them. We know this and we take it very seriously. We know that it is very important. We know that they are a vulnerable audience and we want to be very careful with them.

First, people on our service who are between the ages of 13 and 18 are put into a protected environment that is different from the environment for people over 18. For example, they will not receive images directly from other users by default; they would have to opt in to receiving images from other users. We do not allow precise geolocation of these users or localisation of their posts. All of these settings are turned off by default. We protect their posts by default. Their posts are only visible to their followers and are only searchable by them and their followers. We do not target them with ads. It is completely forbidden to micro-target minors on our service with advertisements. In addition, we make sure they are in a protected, age-gated environment so that they will not have access to any graphic or sensitive media by default. They do not have to go and tick the box; it is ticked by default, and they will not have access to this content. It is our responsibility to make sure the environment they are in is more protected than the adult environment. That is very important to us.

Senator McGreehan is next.

Thank you. Our guests are very welcome to today's meeting. Many topics have been thrashed out already today. We would all agree that social media is an absolute cesspit and that X is the worst, to speak plainly. X is the worst, in my experience, but social media generally is not always a nice, comfortable place. While I do not want to be presumptuous about ages, most of us grew up in the era of disposable cameras and Nokia phones and are grateful for that because we did not have to put up with so much content coming at us. I have been listening to the conversation today and am wondering if we are coming at this from the wrong angle. There is no safe way to lie in the middle of a road, especially for a child. We could give guidance. We could tell a child to wear bright colours, do it at off-peak times or wear a light, but social media is really not a safe place. RTÉ did a "Prime Time Investigates" programme on TikTok focusing on self-harm and suicide content. It looked at the content that a 13-year-old who logs on sees. The researchers did not search for topics, like or comment on videos. They did not engage with any content. They just watched videos shown by TikTok on the "For You" feed and that feed went straight into content that is dangerous for a child, including content about depression, self-harm and suicide.

Welcome to TikTok as a 13-year-old. It is absolutely frightening. I have four young fellas. One was on the verge of wanting to be on social media. It is not safe to lie in the middle of the road. Car companies make their products safer. They have a responsibility to do that. We hear of safeguards and that the companies are doing various fancy things with algorithms and so on. We also talk about age verification and account verification but we wait for others, such as the European Commission, to do it. Do the witnesses not think that, because their companies make so much money from the products those businesses have created, they have a responsibility to be the best they can be and to put forward proper age verification? You would not let a child lie in the middle of the road. Would the witnesses be comfortable with their own 13-year-olds being on their platforms? Do they believe they are safe places? I would not send a teenager down a dark alley and yet we are allowing this. While it is also the fault of parents, adults and policy, the companies have created these products and responsibility lies with them. There are a great many incidents on all of the platforms, although, to be honest, Facebook and Instagram are probably the safest places. Do the companies not have a fiduciary responsibility to their clients and users not to put them in the middle of the road and to do their damnedest to verify ages, to ensure that young users are not getting racy content and not to send 13-year-olds, young vulnerable minds, straight to self-harm on their first day on TikTok?

While it is a long time ago, I remember being a 13-year-old. It is a scary and lonely place. If someone had shown the 13-year-old Erin self-harm and suicide content, I fear what the 14-year-old Erin would have become. I fear what would have happened if I had been educated about self-harm and suicide in that way. We did not get that in the Cooley Mountains in County Louth. You did not get it from your local papers or from school, but we now get it from TikTok, and it is on the companies' platforms, the product they are providing to us and to our children. That is more of a statement, but I would like some opinions.

Ms Susan Moss

On the Senator's point, we do have an obligation. We have both legal obligations and our own obligations on the platform. Both the Digital Services Act and the Online Safety and Media Regulation Act 2022 oblige companies like TikTok to take measures to protect minors from content that might impair their physical, mental or moral development. Article 28 of the DSA places an obligation on all platforms to address the safety of minors so, in the first instance, we have legal obligations. In Ireland, both criminal and civil sanctions apply to companies like ours. Ms Setter will address some of the more specific items the Senator mentioned.

Ms Chloe Setter

I accept the Senator's concerns. We really do hear them. I am deeply motivated by those concerns too. In my role as child safety lead, I am often asked whether I would allow my child on TikTok if they were 13. My child is seven months old so I have a while to go yet. We have designed the app using the concept of safety by design. Alongside that, we use technology and human moderation to enforce a safety approach. Teen safety is a very complex space. Our understanding of it is evolving and the research is continuous.

I joined TikTok a year ago. I worked in tackling child exploitation for various charities for more than 15 years. I have also worked directly with victims of sexual abuse and other abuses, so I am deeply motivated to tackle these problems. I joined TikTok because I want to make the digital world safer and I want the platform to be a place where my 13-year-old or my nieces, who are of a similar age, can enjoy content. I am confident that the resources and the measures we are taking are robust. I cannot say they are perfect. That is why we have a continuous approach to safety. We work with experts and have the youth council to advise us. We are listening directly to young people about the experiences they are having. That is ultimately what we have to keep doing: listening to young people, taking their advice on board, working with experts, taking their views on board and developing resources that help people on the platform who are struggling or having a difficult time. It can be a place where such people actually find help. We have a great responsibility in that sense. We have developed many well-being resources around suicide and self-harm with the Samaritans and other expert organisations in order that the platform can be a place where people find community and safety. We have put a great deal of effort and resources into that. However, I take the Senator's concerns. We do not take this lightly.

Mr. Dualta Ó Broin

I am happy to say that Meta takes our responsibilities in this space extremely seriously. We have outlined some of the steps we take in this regard in our opening statement. We also take a great many other measures in the space. We have our legal obligations in Ireland and in Europe. A number of regulators are looking at the measures we have in place and at whether they are sufficient. It is going to be an ongoing and evolving issue. As I said to Senator Byrne previously, content moderation and safety online is never going to be something that is done or solved. It is a question of constant vigilance. Where improvements can be made, they should and will be made as quickly as possible.

Mr. David Miles

We work with experts in this field and they have helped us enormously. Earlier in the year, we had a real crackdown in this area, even on just viewing this content. That was a result of those experts' guidance. Content moderation is required in addition to technology. You need to be on it and you need trained content moderators who understand the issue. We have two global experts in suicide and self-harm on our payroll, and there is a dedicated part of our safety team dealing with suicide, self-harm and eating disorders. That is really important. Those people have come to us from clinical psychology and well-being environments to advise us, and they manage the expert group. We will keep making changes, evolving and listening to young people to make sure they feel safe on our platform and are signposted to some of the really great and trusted organisations in this area that do amazing work on the ground, such as SpunOut. Those organisations make a very valuable contribution. We do a lot of signposting towards those groups in local languages within the app to make sure young people get the support they need.

Deputy Creed has his hand up but we need him to turn on his camera and confirm that he is within Leinster House if he wishes to ask a question.

I thank the Chair. I am in Leinster House. I am trying to keep an eye on the Joint Committee on Justice through the monitor as well. I have been listening intently while watching a muted operation from the other committee room. I have a number of questions. They are probably somewhat similar to my colleagues' earlier lines of questioning. However, I will first welcome our guests and thank them for their presentations. Nothing I am going to say is a reflection on them as individuals, but I have to confess that I have a rather jaundiced view of the social media platforms. It is not fair to target one. Many people are active across multiple platforms; it is the cumulative impact we should be concerned about. On that impact, it is not as if the jury is out; the jury is in. My colleague Senator Mary Seery Kearney outlined in graphic detail what the jury has found in respect of the impact of social media.

I put that to one side and then look at the content of our guests' presentations which, if they were being scored by media advisers, would tick all the boxes in terms of the terminology used, such as trust and transparency, empowerment, content moderation, etc. These are all the buzzwords we want to hear. The truth is that the jury is in. Enormous quantifiable damage is being done. There is widespread concern about unquantifiable damage in terms of cognitive impairment, particularly because of the age at which young people are active on social media platforms, and the consequences of the level of engagement. It is nothing strange for young people to be on social media platforms for six, seven or eight hours per day, and that is not healthy. I contrast the presentations the committee received with both the jury findings and the fears we have in terms of the impact social media is having on individual citizens, children and society at large. It is corrosive and worrying. In a way, this level of engagement is useful only up to a point and, as legislators and regulators, we need to be much more aggressive.

In that context, I have a couple of questions. Much is made of the digital age of consent in Ireland being 16, while a person is legally able to open an account on TikTok, Snapchat or whatever at the age of 13. Is there a case to be made for an alignment between the age of digital consent and the age at which people should be allowed to open an account? The point was made that so many million people were taken off some platform, as a vindication of the company's commitment, but nobody should be on a platform if they are underage. The State does not give a driver's licence to 12- or 13-year-olds, so why should we allow or accept a situation where a social media company can say that it did not validate the application properly but that the child told the company he or she was 13 or 16 years old? There are ways and means. If social media companies were serious about this, they could verify and validate applications and disallow accounts that are not valid. The business end of social media is about the number of accounts held and the attractiveness of those accounts to advertisers.

I confess I will take it with a grain of salt, but I would be interested in the witnesses' response on the issue of the digital age of consent and its alignment with the age at which an account can be opened, and on how, notwithstanding all of the advances in technology, artificial intelligence and so on, people can still drive a coach and four through the application process, say "I am 13" or "I am 16" when they are eight years old, and open an account. It is just absurd that we accept that kind of ráiméis as an explanation for non-compliance with the law. I fear the damage being done as a consequence of early interaction on social media platforms is both quantified and unquantifiable, if those are not mutually exclusive terms. I believe that much of the damage we know is being done will be far outweighed by the damage we are not yet aware of, in terms of cognitive impairment in particular.

My second question is in respect of advertising on social media platforms. Less than 12 months ago, An Garda Síochána appeared before the Committee on Justice to discuss this issue. Most of us have seen cases before the courts in recent times of money mules who have been recruited by advertising on social media platforms to make available their bank accounts in order that money laundering is made much easier. The money is popped into the account and popped out to another bank account and young people, unwittingly and unknowingly, who have been targeted on social media platforms through advertising are before the courts.

This can lead to criminal records and all of the consequences that flow from those. My question around placing advertisements on social media platforms is somewhat akin to the issue of opening an account. What is the level of validation used by social media platforms when somebody wants to advertise on their platforms? How is it that these criminals can advertise on social media platforms, and can recruit - I think the terminology used is "herders" - unsuspecting, innocent teenagers as young as 15 years of age to have their bank accounts used for criminal purposes? What information do the social media platforms have on these people? Do they share that information with law enforcement in Ireland, with An Garda Síochána, and across the world with law enforcement in general? Should it not be the case that advertising for those purposes should not be allowed on platforms? Are the platforms obliged to identify a legal entity or an individual as somebody with a passport and PPS number before they can place an advertisement on social media platforms? Those are my questions for the moment.

Who wants to answer those points first?

Ms Chloe Setter

I am happy to address the points on age and on digital consent. I thank the Deputy for the questions. On the age of digital consent, GDPR requires every organisation that processes personal data to have a legal basis for doing so. Consent is one of the six legal bases allowed by the GDPR. For change to happen, in terms of the digital age of consent and the age at which people can come on the platform, that would need to be determined by additional regulation. That is something for regulators to decide, perhaps.

On the issue of age, we were the platform that highlighted that we remove 20 million suspected under-13 accounts on average each quarter across the world. When someone signs up to the platform in the first place, there is currently no off-the-shelf solution that would completely solve the issue of knowing the ages of users immediately.

No, I do not accept that.

Ms Chloe Setter

I totally understand how that sounds.

How is it that the State can identify ages and does not give out driver licences to 13-year-olds?

Ms Chloe Setter

The way in which-----

If the platforms were on the high street and engaged with users face to face rather than online, they would not end up doing this.

Ms Chloe Setter

Basically, the only way to verify someone's age is for him or her to provide identity documents. There are a number of issues around that. For example, many young people do not have identity documents. One in three people around the world do not have any identity documents. This is particularly the case-----

Then do not allow those people on the platform.

Ms Chloe Setter

-----for younger users. If we wanted to check the age of a user we would have to check the age of every single user, and every single user-----

We should. The platforms should do that.

Ms Chloe Setter

That may well be what regulators decide.

So they should.

Ms Chloe Setter

However, there are considerations around privacy. Not all individuals are comfortable sharing their passport information, birth certificate or driver licence with tech platforms, so there needs to be either an intermediary or a different system set up that builds trust in that. Unfortunately, it is not generally the case that people want to do that. We have to give credence to people's privacy concerns on this topic and to the question of who should handle that data.

I am sorry to say, I think that is----

Will the Deputy allow the witness to answer the question as we will run out of time. I have questions as well and have not come in yet. To be fair, can we allow the question to be answered without interruption?

Ms Chloe Setter

As I said, this is a really evolving space. Standards are currently being developed by the International Organization for Standardization, ISO, and the Institute of Electrical and Electronics Engineers Standards Association, IEEE, about what good looks like in terms of age assurance and verification.

It is quite a nascent space in terms of the technology. There are a number of safety tech companies that provide that kind of third-party service. Our colleagues from Meta mentioned Yoti. We also use Yoti at TikTok. It is a facial age estimation tool that helps us to have more confidence in the age of our users. We have to acknowledge that people have concerns, rightly or wrongly, about providing their personal data to tech platforms and how it is stored and used. There are also concerns about not everyone having identity documents. We do not want to create a situation where people are excluded from online life because they do not have identity documents. What I am saying is that it is complex. I am not saying we are trying to avoid scrutiny in this space. We are working with regulators on this. We are speaking to the French regulators, who have developed their own proposal. The Spanish have developed their own proposal. We are working with them and other industry bodies to look at what we can do. We ultimately do not want under-13s on our platform. We want people in the right age category, so they have the right age experience and enjoy themselves on the platform. It is not in our interests in respect of advertisers or in any other way to have underage accounts on the platform.

Does anybody else want to come in on those points?

Ms Niamh McDade

I will come in quickly on the point around ads, just to confirm that fraud and scams, including any financial scams, are a violation of our rules and policies. Scams more broadly on the platform, not just in paid advertising, are also a violation of our rules and policies. We will action that content through proactive enforcement, we also accept user reports and we engage with law enforcement on that topic as well. To provide an example of some of our engagement in that space, alongside TikTok and Meta we are part of a voluntary charter on tackling online fraud and scams in the UK. That is something that could certainly be considered for Ireland as well. Part of that is direct engagement with law enforcement, piecing together what it is seeing and what we are seeing and trying to bridge any gap in information sharing. It is certainly an interesting space. This is something we absolutely prohibit on our platform. There is no space for that type of content. I think there was a specific reference to money muling, which is also a violation of our policies.

Does Meta want to add anything?

Mr. Dualta Ó Broin

Similarly, advertising of that nature would be in violation of our policies. Committee members might not be aware that, since November 2023, no ads have been shown to under-18s on Facebook or Instagram in the EU, the EEA or Switzerland. That remains the case. That is the only additional point.

If I can follow up, no ads are being shown to under-18s where you believe you know the age of the applicant. However, you accept that there are many people on your platform whose age you cannot really verify. You are basing that on the fact that they have stated their age, when we know there are many people on your platform who are not the age they said they were. Following up on the previous speaker in respect of the advertising issue, am I understanding correctly that it is a reactive process rather than a proactive one? Anybody can place an ad without being validated as to who they are, with clear forms of identification as to their legal status, the purpose of their advertising and so on.

Ms McDade might come in first on that and then I will bring Mr. Ó Broin back in.

Ms Niamh McDade

I thank the Deputy for the question. Yes, we have an onboarding process for advertisers on the platform, meaning they go through a range of different checks. I am happy to follow up on the specifics of that with our advertising team and the safety team involved. We have a process in place and advertisers go through a number of checks before they are onboarded to advertise on our platform. In addition to that onboarding process, they must abide by our terms and conditions for advertisers which, as I said, prohibit fraudulent and scam ads and all of that, as well as by our wider rules and policies.

Mr. Dualta Ó Broin

As was discussed previously, there are challenges in respect of age verification. These are not things we are making up just to absolve ourselves of our responsibilities. There is a huge amount of work being done at the regulatory level, at European level, to discuss the appropriate place for age verification, taking the competing interests of privacy and safety into account, and to identify the right way forward. We believe the right way forward would be to ensure that verification can be done once by a parent and that the signal would then be transferred across to every other app in the ecosystem, not just the larger apps but also the smaller apps.

That does not absolve us of responsibility, however. When we receive that signal, we need to ensure that those users are in the appropriate age experience for their age group.

Thank you. I have a few questions I want to ask. This is more for TikTok and it relates to the algorithms. It is true of all platforms that if you post something and you are getting a large number of negative comments, the algorithm picks this up. What it nearly wants is negative feedback. How are children being protected from that? Part of what is going to be covered on the programme to be broadcast on RTÉ tonight is that we could have young people with something going on in their lives, whether it is worry about school or whatever, and all of a sudden it goes from that to them seeing really negative content. I understand from all the platforms about the wider, really serious things around child sex abuse and all of that. What we are trying to get to the bottom of here is the underlying negativity. The more screen time children and young people are exposed to, the greater the chance that they are exposed to these negative algorithms. It becomes a vicious circle and they kind of go down a rabbit hole. That certainly seems to be the case, not just from what is to be reported tonight but from what this committee has heard from other groups on this topic. They were saying there tends to be a ripple effect and that it gets out of control.

Teenagers and preteens in particular are at a very vulnerable age and, let us be honest, there are many people under the age of 13 who should not be on the platforms but who are. There is all the stuff around body image. As someone in my age category, I often think that we have to try to get through to younger people that this stuff online is not the real world and that people are not walking around looking like that. You often have to get to a certain age in your life to understand that. We have to make sure that all the protections that can be in place for younger people and teenagers are there. Specifically on the algorithms, when it is identified that an algorithm is driving this negativity, why can it not be stopped or banned? Maybe it is not as simple as that, but I cannot understand why it would not be the case.

There has been a good bit of discussion on age verification and ID verification. One thing I have always thought, for all platforms, is that people should have to provide some level of ID. We could avoid all those bot accounts if people had to say who they are, but it is also relevant in respect of age. I accept what has been said about people not wanting to put forward their documents, but I do not know if we should be given that choice. It is not as though it is going to be shared in the public domain; it would be between the company and the person, when they want to sign up, that they would have to provide some level of ID. It would erase a huge number of the problems.

Specific to X, I wanted to pick up on a question Senator Clonan asked earlier around information being shared with the Garda in certain cases. If someone is sharing very harmful content, but the account is anonymous or it is not obvious from their picture or username who they are, and I screenshot that content and go to the Garda, and let us say it comes into a case, at the point the Garda contacts X, does X have to reveal who that person is? If the person tries to remove their account or content, is it still accessible in some wider system, cloud or whatever? I would just like to get some clarity on that. Perhaps someone would like to respond first on the algorithms.

Ms Susan Moss

I will take that one. I certainly recognise your concerns and will try to address everything you have mentioned as quickly as possible, starting with the potential for content that is popular for the wrong reasons to be surfaced to younger users.


On the Cathaoirleach's point about the algorithm, it is important to clarify that the algorithm on TikTok - each system is different - is driven by a user's own interests and actions, that is, whether someone likes a video, watches it, skips it or indicates they are not interested in it. It is not driven by the popularity of comments, shares or likes, because TikTok is driven by what we call a content graph and not a social graph.

Around the algorithm more generally, it is definitely important that we as a company offer choice to our community, and we do that in three ways. First, we offer a non-personalised feed, which is our obligation under the Digital Services Act. That means the content is not based on users' interactions. Second, we offer a feed to our younger users that is entirely dedicated to STEM. It is endorsed by our Government here and includes the Department of Health SciComm Collective account, which features young Irish scientists explaining complex themes. The content is verified by an NGO for its age-appropriateness and also for its accuracy. Third, we ensure our community has the ability to refresh their feed. They can refresh the TikTok feed and will be served content as if they had just joined for the very first time.

I accept the Cathaoirleach's point. There is an inherent challenge in all recommender systems in ensuring the breadth of content we are showing to young people is not too narrow and repetitive. There is content that, seen in isolation, may not be violative, but we definitely recognise that seeing too much of it is not a good thing, whether that is extreme fitness or content around dieting. It is not good for anyone's self-esteem. What we do in that instance is use what we call a dispersion mechanism. We try to stop a filter bubble from forming, so users see a variety of content that reflects their variety of interests. Those are some of the approaches we take.

Mr. Dualta Ó Broin

On recommender systems, Mr. Miles and I have spoken previously about the way in which they are used to make the experience safer, especially in the context of non-violating content in the suicide and self-injury, SSI, space, which is now not going to be seen by teen users. I also draw attention to the system cards we published last year, which break down exactly how these systems work and how AI powers them. There is a great deal in our transparency centre around the system cards and how they work. It is not just one system working in isolation but several systems working together, sometimes at different times. I come back to the point around what is in the Digital Services Act. We are required to assess the risks our technology may pose across a number of headings and to put steps in place to mitigate those. That is part of the conversations we are having with the regulators at the moment.

I thank the witnesses. What about the criminal aspect? That is something I am very interested in.

Ms Claire Dilé

I can start with this and then give the floor to Ms McDade to speak about the recommender system. There are two situations in our co-operation with law enforcement. The first is proactive referrals. Under the DSA, when there is a risk of harm to the life of a person or of bodily harm and we become aware of the situation on our platform, we have to disclose the information to law enforcement. In that case, we give it what we have, which is IP logs. If we have an email address and the phone number of the person, we also provide them. Law enforcement will then, for instance, work with phone operators or other services to find out who the person is, but we have the obligation to disclose the information we have. We also co-operate with law enforcement voluntarily, and we have an online portal available to it through which it can make removal requests to ask us to remove content. It can make information requests about a user and can also make preservation requests. In a case where a user deletes their account, we have an obligation under the GDPR to keep the data only for a certain number of days, because keeping it longer would put us in breach of the GDPR. We keep the data for that number of days but, if we receive a preservation request from law enforcement, we keep the material for twice that time. We keep it for 90 days so that law enforcement can carry out its investigation and, when it has enough information about the person, it can make a request for information to our platform.

Okay. I am more interested in someone removing the information knowing they are going to get in trouble. There is an obligation on the company to disclose that information to the Garda if it comes to them with a request.

Ms Claire Dilé

Yes. It applies regardless of whether they remove the content. If law enforcement has good reason to believe a person will commit a criminal offence, even if the person deletes information or their account, law enforcement will contact us, we will have a conversation with it and, based on its information, we will disclose the information it needs for its investigation.

I thank Ms Dilé. Senator Seery Kearney is next.

I thank the Cathaoirleach. I thank the witnesses for the engagement, which has been very good. I appreciate that they sit there and take a lot of stick from us, so I have two positive things to say. First, I congratulate Meta on the creation of Threads. It is brilliant. It is safe. It has shown itself to be a very positive experience compared with X, which sadly is a vile experience. I say that as a politician. It is just vile from beginning to end. It is a necessary evil but, any time I have ever reported anything, it has never been found to violate X's community guidelines, so I do not really have any time for X, to be perfectly honest.

TikTok's algorithm is doing something very positive at the moment. My husband has a private TikTok account. He never posts or creates anything, but it is good for us to look up cat videos and all that kind of funny stuff. We have an eight- going on nine-year-old daughter who likes TikTok because she hears other people talking about it, and she is allowed to see it so long as she is sitting beside one of us. She saw a filter she liked, so he let her do a video under supervision. The algorithm immediately picked up on the fact she was underage and closed down his account, or at least he had to go back and verify it. I wanted to say I had real-time experience of that happening. It was a fantastic thing to happen in our house, because we got to say, "See? TikTok don't want you there." That put an end to that demand, which was fantastic.

I wanted to acknowledge those two things. I am aware there is an awful lot with Meta. I know from the minutes of the previous meetings there are an awful lot of very positive things that go on. However, I have a couple of beefs. One is WhatsApp’s age requirement being reduced. That is unforgivable. I do not know why the company would take such a retrograde step at this time. I really do not understand.

I push all the witnesses again on safety. I am aware there is a time limit and the user has to put in a code if that time limit is hit, but putting in a code is not flagging the fact there are mental health implications. We can even look at time spent with friends by age group. I appreciate it is about smartphones full stop, but since social media the number of minutes of engagement children have with other real, live children has diminished to a frightening extent. If one goes into any coffee shop, there are people sitting with their children and they are on a phone and their child is on a phone. There is this lack of human engagement that is just frightening and, I believe, a major contributor to issues with mental health, anxiety and all of that. The message comes up after an hour. I would need to be convinced an hour is enough. At that point it should tell the user they need to go and talk to a human being. We have warnings on the sides of cigarette packets and all sorts of other places.

It is a fact that the mental health of younger people is being affected by the length of time spent on, and the addictive nature of, social media. I want to push the issue of what the companies can do to address that mental health element when they have a business model that runs contrary to it. With all the safeguards that are being talked about, they have a business model that runs on keeping users there. By the time users become adults, they are already well addicted. We changed how we ring doorbells and press lift buttons after Covid. We change our behaviour. Young people are all thumbs when typing and we are not. I am still typing with all my fingers.

That behaviour has changed, but that is only symptomatic of what is going on cognitively. There is a need for very clear mental health flags here.

I ask our guests to be really brief because we only have five minutes left.

Ms Chloe Setter

I agree about the change in habits in society. It is quite stark when one looks back. When I was young, I was always being told off for watching too much television and not interacting, because I was sitting in my room watching TV. Now, the shift we have seen is from TV to online. I would go back to the previous point I made. Time is an important factor, but it is also about quality. When we speak to young people, we find that many of them consider that they are interacting when they are online. They do not see themselves as not interacting, because that is often how they communicate with their friends and the people they know. There are so many positive skills that can be developed by younger users that will hopefully serve them well in life, including digital skills, creating content, using filters and using these complex tools that I struggle to use myself.

I do not deny that, but we see what can happen on social media. A person can just go on X and put up a post. I mute everything and never see what is on there because it is just so appalling. People can, through a screen, say and be who they are, but their in-person behaviour could be very different. We see teenagers with heightened levels of anxiety in our schools. We have had to bring in mental health supports and that is not all related to Covid. I appreciate that a little of it is due to the pandemic, but a good deal of it has to do with the fact that teenagers are communicating by means of devices as opposed to in person and are not using cues like eye contact. All of those types of social cues are being lost.

Ms Chloe Setter

Definitely, and that is why the balance is important. That is why we have the screen time limit and the family pairing tools that allow parents to decide. This is because, obviously, each teenager is different. An hour might be too much for one but not enough for another. It also depends on the content. If they are on the STEM feed and learning about science and maths, parents might be more inclined to let them be on their devices than if they are looking at cat videos.

There is also a sense of community that can be found online and surveys have shown that young people of colour, and LGBTQ+ individuals, for example, have found a great sense of community, belonging and authenticity from the communities they find on TikTok. That is a really huge part of TikTok. I am in a same-sex relationship and have just had a child. I look at TikTok and see loads of great, inspiring content about same-sex parenting, for example. That is something I had never seen online before and it has really helped me in my own personal circumstances. There are positives but there is a balance to be struck. That is why we invest in those family pairing tools to help families to decide for themselves what is best for their teenager.

I will give the last word to Mr. Miles.

Mr. David Miles

It is really interesting that screen time has emerged as an issue in the last year or two. Prior to Covid, the concern was about video games and violent content. There has definitely been a shift. The Senator might be right that there was a Covid effect. The French Government, for example, has established a screen time committee to look at this. The work of people like Professor Przybylski of the Oxford Internet Institute is very informative. He has done some really interesting studies on what the optimum amount of screen time is. There is quite an amount of work going on in the academic environment. Academics are asking whether we have a problem, how we quantify it, what good screen use is and what the outcomes are. I am happy to share links with the committee because that work is really interesting. The committee that President Macron formed, of which we are a part, is discussing these issues right now. It is discussing whether there is an optimum and whether we can get proper clinical guidance on this and think about it in a mature way, recognising that there is real concern. It is also asking whether platforms can bring in tools which reflect that guidance, and I am optimistic that we can probably move in that direction, given that it is a significant concern.

The French Government requires parental consent for children under 15 to access WhatsApp, which should be introduced here too.

Ms Claire Dilé

I do not think that the Senator had any specific question for X, but I would like to make a general comment on the question of how we deal with GDPR consent and age verification. Of course, we apply the GDPR age of digital consent according to each country. In France, it is set at 15 years of age and we respect that in that country.

In those cases, we ask for parental consent before people can use our services. We are also part of the special committee on screen time. We testified before that committee on what the appropriate amount of screen time is for people.

On the issue of age verification, we are not opposed to it on our platform as such. The issue is more that we do not really have a technical solution in place. In countries like the UK, France and the US, this conversation is a bit more advanced. In France, for example, it was a matter for the CNIL, the data protection authority, and ARCOM, the audiovisual regulator, to come up with guidance on the systems for social media platforms to use. There was a lot of discussion around the system of double anonymity. At the time, we thought that a system of double anonymity might work, but we are still awaiting a recommendation on that. It has still not been published. The discussion at EU level is on an age-appropriate design code and we are all part of that. We need to continue to have this broader discussion with all interested parties, including the social media platforms, the app stores, search engines and the wider community, to see what solution will be able to bridge concerns around better privacy and better protection. The issue is about making sure we know who our users are and what age they are. When we reach that point, and reach a conclusion on this, we will be able to use a system that has been vetted by the data protection regulator as well.

Mr. Dualta Ó Broin

I know we are out of time, but I just want to thank the Senator for her kind words on Threads, which were great to hear. In relation to WhatsApp, I can follow up with more detail, but essentially it is designed globally as a 13+ service. The European region was an outlier and we have brought it into line with that. The change was notified in February. It was discussed with the relevant regulatory bodies and we believe it brings the service into line with the expectations of its users. We appreciate and recognise that concerns have been expressed and we take those concerns very seriously.

I thank all of our members, especially those who stayed for the whole meeting. I also thank all of our guests for coming in. I know it can sometimes be difficult for social media companies to come before the committee and we really appreciate their willingness to be here. I propose that we publish the opening statements on the Oireachtas website. Is that agreed? Agreed.

The joint committee adjourned at 6.03 p.m. until 3 p.m. on Tuesday, 23 April 2024.