I thank the Cathaoirleach for the invitation to meet with the committee today. I am the online safety commissioner at Coimisiún na Meán and I am joined by Karen McAuley, our director of policy for children and vulnerable adults, and Declan McLoughlin, our director of codes and rules.
Coimisiún na Meán is a new regulator for broadcasters and online media, established in March 2023. Our broad remit includes regulating online platforms based in Ireland and carrying out the previous functions of the Broadcasting Authority of Ireland. I will focus mainly on our work in relation to online safety, particularly the protection of children.
One of my priorities on appointment was establishing a youth advisory committee made up of young people and their representative organisations. We consulted the committee on our draft online safety code in January. Our broadcasting code of programme standards and our children’s commercial communications code protect children in the broadcasting space from inappropriate communications.
Media literacy is another important aspect of our work. We want children to have the skills to engage online and manage the risks. Earlier this month, we supported Safer Internet Day which focused on young people’s views on technology and the changes they want to see. We have helpful information on our website about online safety and how to make complaints to platforms and seek support in relation to harmful content. Yesterday, we opened a contact centre.
We are putting in place the online safety framework in Ireland. This framework has three main parts, the first of which is the Online Safety and Media Regulation Act, which is the basis for our draft online safety code. This code recently went out for public consultation and that consultation process closed at the end of January. The second part of the framework is the EU Digital Services Act, which became fully applicable on 17 February, and the third part is the EU terrorist content online regulation, for which we have been a competent authority, together with An Garda Síochána, since November 2023.
It is important to note that the era of self-regulation is over. Our online safety framework makes platforms accountable for how they protect users, especially children. Our draft online safety code proposes measures such as age verification and parental controls. It also proposes that complaints are dealt with in a timely manner. Among the supplementary measures we are proposing are safety by design and recommender system safety. We are responsible for regulating services which have their EU headquarters in Ireland and the European Commission also plays a role in relation to the largest platforms. Co-operating with our counterparts across Europe and globally is important.
The committee’s decision to include a focus on the protection of children in the use of artificial intelligence, AI, is welcome. AI is an increasing feature of children’s lives. It was not designed for children, nor does it use a safety-by-design approach. It presents both opportunities and risks. We recognise that many online services use technology such as AI. The following examples take account of measures set out in our draft online safety code which are directed towards ensuring children are protected from harmful content. We also published an expert report on our website in December regarding online harms. We propose that platforms introduce effective age verification to ensure that children do not access age-inappropriate content. We are not proposing to specify the techniques that platforms use, as we recognise that technology evolves. However, mere self-declaration of age is not an effective form of age verification. AI can be used in age estimation, where platforms can use AI to make inferences as to a person’s likely age.
AI-driven recommender systems can present risks, including the amplification of harmful content online, the recommendation of age-inappropriate content, disinformation, the facilitation of inappropriate relationships between adults and children, and excessive amounts of time online. One of the supplementary measures we have proposed in our draft online safety code is for platforms to take safety measures to reduce harm caused and to conduct a safety impact assessment. AI can be a useful tool for content moderation to improve online safety. It can recognise and remove illegal or harmful content. It can also limit the exposure for human content moderators. However, AI can also make inaccurate or biased decisions. One of the measures we consulted on in our draft online safety code is timely and transparent decision-making in relation to content moderation.
Since being appointed, I have had invaluable opportunities to meet children and young people and organisations representing their rights, as well as Government Departments.
A consistent message is the importance of children, young people and their parents and guardians being supported around online safety through information and education. Facilitating us all to develop AI-related competency, including the ability to recognise which content has been generated by AI and why, can empower us all online. Generative AI is present in systems such as chatbots and interactive games and, more worryingly, has even been offered as a friend on social media. While AI can help children learn and play, it also poses risks. Children may place too much trust in AI systems, which may provide unsafe or false information, and children can come across content that is age-inappropriate.
There are growing concerns in relation to AI-generated content, particularly the manipulation of imagery through deepfakes and AI-generated child sexual abuse material. Harmful and illegal content, AI-generated or otherwise, will come under the scope of regulation through our online safety code and the Digital Services Act and should be addressed by the platforms in line with those rules. In addition, a separate EU AI Act is being adopted. If AI is to work for children, children need to be front and centre in its design. Given children's right to be heard in matters affecting them, they need to be afforded opportunities to participate in decision-making about how AI can serve their interests and how the risks of AI can be mitigated. We are a new statutory body and this is our first meeting with this committee. I want to assure members of our commitment to use our functions to serve children well. We are happy to take members' questions.