
Joint Committee on Children, Equality, Disability, Integration and Youth debate -
Tuesday, 20 Feb 2024

Protection of Children in the Use of Artificial Intelligence: Discussion (Resumed)

As we now have a quorum, we will begin in public session. We have received apologies from Senators Ruane and O'Sullivan. Today we will continue our engagement with stakeholders on the protection of children in the use of artificial intelligence, AI. We had our first session on this last week. We are joined today by representatives of Coimisiún na Meán, including Ms Niamh Hodnett, the online safety commissioner, Ms Karen McAuley, director of policy for children and vulnerable adults, and Mr. Declan McLoughlin, director of codes and rules. They are all very welcome and I am sure they tuned in to our proceedings last week.

Before we begin, I must go through some housekeeping matters. I advise anyone who is joining us through Microsoft Teams that the chat function is only to be used to make us aware of any technical issues or urgent matters and should not be used to make general comments or statements. I remind members of the constitutional requirement that they must be physically present within the confines of the Leinster House complex in order to participate in public meetings. I will not permit a member to participate where he or she is not adhering to this constitutional requirement. Any member who attempts to participate from outside the precincts will be asked to leave the meeting. In that regard, I ask members who are joining us via Microsoft Teams to confirm that they are on the grounds of the Leinster House campus before making their contribution.

The evidence of witnesses who are physically present or who give evidence from within the parliamentary precincts is protected, pursuant to both the Constitution and statute, by absolute privilege. Witnesses are reminded of the long-standing parliamentary practice to the effect that they should not criticise or make charges against any person or entity by name or in such a way as to make him, her or it identifiable or otherwise engage in speech that might be regarded as damaging to the good name of the person or entity. If their statements are potentially defamatory in relation to an identifiable person or entity, they will be directed to discontinue their remarks. It is imperative that they comply with any such direction.

I now invite Ms Hodnett to make her opening statement, after which we will have a question-and-answer session with members.

Ms Niamh Hodnett

I thank the Cathaoirleach for the invitation to meet with the committee today. I am the online safety commissioner at Coimisiún na Meán and I am joined by Karen McAuley, our director of policy for children and vulnerable adults, and Declan McLoughlin, our director of codes and rules.

Coimisiún na Meán is a new regulator for broadcasters and online media, established in March 2023. Our broad remit includes regulating online platforms based in Ireland and carrying out the previous functions of the Broadcasting Authority of Ireland. I will focus mainly on our work in relation to online safety, particularly the protection of children.

One of my priorities on appointment was establishing a youth advisory committee made up of young people and their representative organisations. We consulted the committee on our draft online safety code in January. Our broadcasting code of programme standards and our children’s commercial communications code protect children in the broadcasting space from inappropriate communications.

Media literacy is another important aspect of our work. We want children to have the skills to engage online and manage the risks. Earlier this month, we supported Safer Internet Day which focused on young people’s views on technology and the changes they want to see. We have helpful information on our website about online safety and how to make complaints to platforms and seek support in relation to harmful content. Yesterday, we opened a contact centre.

We are putting in place the online safety framework in Ireland. This framework has three main parts, the first of which is the Online Safety and Media Regulation Act which is the basis for our draft online safety code. This code recently went out for public consultation and that consultation process closed at the end of January. The second part of the framework is the EU Digital Services Act, which became fully applicable on 17 February, and the third part is the EU terrorist content online regulation, for which we have been a competent authority, together with An Garda Síochána, since November 2023.

It is important to note that the era of self-regulation is over. Our online safety framework makes platforms accountable for how they protect users, especially children. Our draft online safety code proposes measures such as age verification and parental controls. It also proposes that complaints are dealt with in a timely manner. Among the supplementary measures we are proposing are safety by design and recommender system safety. We are responsible for regulating services which have their EU headquarters in Ireland and the European Commission also plays a role in relation to the largest platforms. Co-operating with our counterparts across Europe and globally is important.

The committee’s decision to include a focus on the protection of children in the use of artificial intelligence, AI, is welcome. AI is an increasing feature of children’s lives. It was not designed for children, nor does it use a safety-by-design approach. It presents both opportunities and risks. We recognise that many online services use technology such as AI. The following examples take account of measures set out in our draft online safety code which are directed towards ensuring children are protected from harmful content. We also published an expert report on our website in December regarding online harms. We propose that platforms introduce effective age verification to ensure that children do not access age-inappropriate content. We are not proposing to specify the techniques that platforms use, as we recognise that technology evolves. However, mere self-declaration of an age is not an effective form of age verification. AI can be used in age estimation, where platforms can use AI to make inferences as to a person’s likely age.

AI-driven recommender systems can present risks, including the amplification of harmful content online, the recommendation of age-inappropriate content, disinformation, the facilitation of inappropriate relationships between adults and children, and excessive amounts of time online. One of the supplementary measures we have proposed in our draft online safety code is for platforms to take safety measures to reduce harm caused and to conduct a safety impact assessment. AI can be a useful tool for content moderation to improve online safety. It can recognise and remove illegal or harmful content. It can also limit the exposure for human content moderators. However, AI can also make inaccurate or biased decisions. One of the measures we consulted on in our draft online safety code is timely and transparent decision-making in relation to content moderation.

Since being appointed, I have had invaluable opportunities to meet children and young people and organisations representing their rights, as well as Government Departments.

A consistent message is the importance of children, young people and their parents and guardians being supported around online safety through information and education. Facilitating us all to develop AI-related competency, including the ability to recognise which content has been generated by AI and why, can empower us all online. Generative AI is present in systems such as chatbots and interactive games and, more worryingly, has even been offered as a friend on social media. While AI can help children learn and play, it also poses risks. Children may place too much trust in AI systems. It may provide unsafe or false information. Children can come across content that is age-inappropriate.

There are growing concerns in relation to AI-generated content, particularly through the manipulation of imagery, through deepfakes, and AI-generated child sexual abuse material. Harmful and illegal content, AI-generated or otherwise, will come under the scope of regulation through our online safety code and the Digital Services Act and should be addressed by the platforms in line with those rules. In addition, a separate EU AI Act is being adopted. If AI is to work for children, children need to be front and centre in its design. Given children's right to be heard in matters affecting them, they need to be afforded opportunities to participate in decision-making about how AI can serve their interests and how the risks of AI can be mitigated. We are a new statutory body and this is our first meeting with this committee. I want to assure members of our commitment to use our functions to serve children well. We are happy to take members' questions.

I thank Ms Hodnett for her opening statement. I am sure she was tuned in last week when there was a really good discussion. I am sure there will be a number of questions today and we will start with Deputy Sherlock.

I am delighted the coimisiún is present today. We had a very good session last week that was really instructive for us as members. This is a natural dovetailing of that. My first question is simple and is about the pronunciation of Coimisiún na Meán. Is it "coimisiún na meán" or "coimisiún na maan"? Does it depend on whether I am from Munster or Connacht?

Ms Niamh Hodnett

I hope all the questions are as straightforward as Deputy Sherlock's. I say "coimisiún na maan" as I have Munster Irish but "coimisiún na meán" is the pronunciation if you have Connacht Irish. It is the Irish for "media commission". We have not been given an English name which we are quite proud of, so we use our Irish title at all times.

So if I say "coimisiún na meán", am I on safe ground?

Ms Niamh Hodnett

Yes.

Míle buíochas.

I will go straight to the issue of effective age verification. To quote directly from Ms Hodnett's statement, the coimisiún is proposing:

that platforms introduce effective age verification to ensure that children do not access age-inappropriate content. We are not proposing to specify the techniques that platforms use...

How interventionist is the coimisiún going to be on that? I want us to get beyond codes and protocols and into the hard legal space. If Internet providers or the Googles of this world are not adhering to codes of conduct, what are the legal mechanisms that are open to the coimisiún to go after entities that are not protecting children adequately? In other words, I see Ms Hodnett's statement on effective age verification as leaving it in the hands of the ISPs, or the industry as it were. If the coimisiún is not more interventionist than that, they will be out the gap. They still will be getting away with the same behaviours if the coimisiún does not have a stick to beat them with, so to speak. That is my second question.

I need Ms Hodnett's language to be harder than that, as otherwise I am fearful that children will not be protected, as the coimisiún is taking on the might of a global multibillion euro industry. The coimisiún is quite a small regulatory house although I do not mean that in a pejorative sense. How will the coimisiún take them on without the weight of legislation behind them, as well as the means to put in place very large fines? Can Ms Hodnett tell me more about what the coimisiún is proposing in that sense?

Ms Niamh Hodnett

I thank Deputy Sherlock. That is the kernel of our issue and we are very satisfied with the powers we have been given under the Online Safety and Media Regulation Act. It is important to say the era of self-regulation is over. This is the introduction of effective regulation and our online safety code is not optional but is a binding code. The code has to be legally robust and the measures we have to include must have a strong legal basis. A breach of the code or non-compliance with the code is underpinned by Part 8B sanctions of our Online Safety and Media Regulation Act. We can impose fines of up to 10% of relevant turnover or €20 million, whichever is the greater. Indeed, there can even be criminal sanctions for some of the more egregious breaches of non-compliance with the Act. It is important to say that self-regulation is over and we are now into an era of effective regulation. The Digital Services Act, which is our other online safety tool, took full effect across Ireland over the weekend. There we play a role, together with the European Commission, and fines of up to 6% of worldwide turnover can be imposed under the Digital Services Act. Our third tool, namely, terrorism content online regulation, is underpinned by sanctions of up to 4% of relevant online turnover.

That is very reassuring. As a relatively new regulatory body, the coimisiún is giving a lot of time and thought to these issues. I can clearly see that in the submissions to us today. On the effective age verification, do I have it right when I say the thinking within the coimisiún is that it is up to the industry to produce the verification process but if that is not robust enough in the coimisiún's view, it will be back to state it is not good enough and wants something more stringent than that?

Ms Niamh Hodnett

Yes. The age verification measure is a mandatory measure. The online safety code has a suite of mandatory measures that have to be imposed within it. They are derived from Article 28b(3) of the audiovisual media services directive, together with the Online Safety and Media Regulation Act. One of those measures has to be an age verification measure. We are requiring that the platforms would have effective age verification measures in place for access to the platforms and robust age verification measures in place for access to age-inappropriate content such as pornography or extreme or gratuitous violence. While we are not mandating what particular form, as each platform is slightly different and currently has different forms it uses for age estimation or age assurance, we are leaving it open to the platforms not as a collective but for each platform individually to decide what is the appropriate form of age verification for it to meet that obligation and then to report to us on the measures it is using in that regard.

That is very reassuring. I speak both as a typical TD representing my constituents and as a parent. There is an onus on parents to step up to the mark here and to police but it is important to have the weight of law behind you as a parent or as any adult who is supervising children. My last question within the time allowed to me is in relation to AI recommender systems. In her opening statement, Ms Hodnett stated "One of the supplementary measures we have proposed ... is for platforms to take safety measures to reduce harm caused and to conduct a safety impact assessment." Can she deconstruct or unpack that for me? Does that give me hope that the coimisiún has a serious and robust role to play in respect of ensuring that AI-driven recommender systems can be policed?

Ms Niamh Hodnett

In respect of the binding obligations under Article 28b(3) of the audiovisual media services directive that I referenced, recommender systems, or dealing with the amplification of harmful content, are not included there. We propose to include recommender systems as a supplementary measure. Consequently, they will not be in our first code but these would be supplementary measures that we think are important to have in place to address the harm caused by the amplification of harmful content through recommender systems. These supplementary measures will need further engagement with the European Commission through a notification process under the TRIS Directive and we have been talking with the European Commission about these issues on an ongoing basis since November. They may also be addressed by the European Commission under the Digital Services Act. Indeed, the European Commission opened an investigation yesterday into TikTok in relation to its recommender systems.

Ms Hodnett is telling me that this is an iterative process. While there is no hard law or EU law, there may be law coming down the tracks regarding recommender systems? No such law exists to police recommender systems as it stands. Is that the current state of play or am I misreading Ms Hodnett?

Ms Niamh Hodnett

In respect of our first draft online safety code, we propose recommender systems to be a supplementary measure and therefore, not one of the suites of measures to be included under the audiovisual media services directive or our Act. However, the Digital Services Act does call out recommender systems. It is already a binding obligation for the large platforms, for example, to deal with the mitigation of harmful content as part of their need to do a risk assessment of their platforms and to introduce mitigation measures.

These are binding obligations already imposed on the very large online platforms under Articles 34 and 35 of the Digital Services Act. In addition, Article 28 places an obligation on all platforms to address the safety of minors.

I beg the Cathaoirleach's indulgence; this will be my last question. Ms Hodnett states it is a supplementary measure. To me, that means it is an add-on, rather than being core. Is it possible to police recommender systems under Irish or EU law in order to have such systems turned off, rather than on, as the default for any user of a platform, if that makes sense?

Ms Niamh Hodnett

It does make sense. The Digital Services Act states that platforms have to take mitigation steps to address the risk caused by recommender systems. It is not as black and white as defaulting to on or off. In our draft online safety code we have proposed as a supplementary measure that the platforms would carry out a risk or safety impact assessment specifically in respect of recommender systems. Some of the measures on which the platforms will report to us will include whether they should default to off or on. We have a long list of measures that could form part of this recommender safety plan, but those are our supplementary measures. The Digital Services Act does not state in black and white whether the systems should be on or off; it just says that the risks posed by recommender systems in terms of the amplification of harmful content online should be mitigated by the large platforms and that the platforms should take steps to protect minors.

I thank our guests for coming in. I echo what my colleagues have said about how useful and interesting this exchange is for us, and on many levels how reassuring. I congratulate the witnesses on child safety online getting an "A" on the Children's Rights Alliance scorecard, as reported on "Morning Ireland" this morning. It is good to hear good news. The State's face has been saved with that high score. I thank the witnesses for that.

Aside from their role, has anyone taken a case against any of the tech giants that are located here? Have any Irish citizens taken any cases here? Has there been litigation for damage or vicarious damage to children or vulnerable adults? By extension, what is the position in respect of persons harmed in another jurisdiction? Senator Seery Kearney is probably rolling her eyes listening to a lay person asking these questions.

If a person is caused harm by these platforms and the AI systems and recommender systems in another jurisdiction, does he or she take legal action here because the companies have their headquarters here or is that happening abroad? I do not know whether our guests are aware of developments in these areas.

I have an interest in vulnerable adults. There was reference to artificial intelligence drawing inferences about the age of a person. I assume those AI systems could also be used to make inferences about a person's vulnerability or otherwise. Have such systems been used to protect vulnerable citizens or to target them for a particular type of content or messaging?

Ms Niamh Hodnett

Our role is to regulate the platforms. We have not been involved in litigation between private individuals and the platforms and I am not aware of such litigation being extant. Our role is to regulate the platforms under our three regulatory tools, namely, the Online Safety and Media Regulation Act, the terrorist content online regulation and the Digital Services Act. We do not regulate individuals; regulation operates very much at the platform level. The platforms have obligations under the terrorist content online regulation or the Digital Services Act and we police those obligations. We recently stood up a supervision and enforcement team that investigates platforms for breaches of those obligations or binding rules. Yesterday, we established a contact centre through which members of the public can raise issues, such as guidance on raising a complaint with a platform or possible breaches of the Digital Services Act. This is not the same as our individual complaints framework, to which my colleague, Karen McAuley, might speak, which we are seeking to put in place a year after the online safety code is in place. It is, however, the start of that road or ladder in terms of being able to contact us. As of yesterday, individuals can contact us through our contact centre or by email to raise issues relating to potential breaches of the Digital Services Act or for guidance on how to raise a complaint with a platform. That is not the same as the litigation to which the Senator referred. We are not aware of such litigation.

As regards the AI systems and vulnerable adults, I am not aware of the actual impacts of AI systems being used to monitor behaviour, or otherwise, in respect of vulnerable adults. I will raise it with Karen McAuley but I am not sure if-----

Ms Karen McAuley

I will speak to the complaints piece first. To follow up on what Ms Hodnett has said, we are seeking to take a suite of measures under the Digital Services Act, our own legislation and the forthcoming online safety code. As Ms Hodnett stated, we opened a contact centre yesterday to support people with queries they may have in respect of their rights and to allow them to raise concerns they might have with reference to the DSA. For information, the phone number of our contact centre is 01 9637755. Members of the public can also contact us at usersupport@cnam.ie. That is what is live at present.

For context, there are several other pieces to this. The second is a provision under the Digital Services Act relating to trusted flaggers and national digital services co-ordinators. Coimisiún na Meán, as the national digital services co-ordinator here in Ireland, can award the status of a trusted flagger to a third-party entity. Last week, we published an application form and guidance for organisations that may wish to apply to us to be a trusted flagger. Trusted flaggers would be people who have necessary expertise and competence as regards identifying and notifying illegal content to the platforms, rather than to us. The idea is that trusted flaggers who identify illegal content on a platform have a priority pathway to notify the content to the platform in order that action can be taken by the platform. That is the second strand.

My apologies if my reply is a bit long-winded. The third stream relates to nominated bodies. Under our legislation, namely, the Broadcasting Act 2009, as amended, we are required to set up a nominated body scheme whereby third party external entities will be selected by an coimisiún to notify us of harms or issues related to platforms and compliance with our online safety code. That is a third stream we are working on. We have started work to develop that scheme this year.

With regard to individual complaints, members will be aware that sections 139R to 139ZB, inclusive, of the Online Safety and Media Regulation Act make provision for an individual complaints scheme. Our plan for this year is to start work to develop that individual complaints scheme with a view to having a scheme drafted by the end of the year. It is a process that we need to go through that is provided for under the legislation in terms of consultation and the issues the scheme will need to cover. We hope that by early next year at the latest, we will be in a position to present that scheme to the Minister. At first instance, the scheme will focus, again with reference to the Act, on complaints relating to children and harmful content affecting children online. I hope that gives the Senator a sense of the breadth of options that are available and coming down the line.

With regard to vulnerable adults, we are aware, including from submissions we received in response to our call for inputs last year about how we develop the online safety code, as well as the consultation we have conducted on our draft online safety code, that many individuals and organisations are rightly concerned about online harms for vulnerable adults.

My colleagues may wish to speak a bit more about the area of vulnerable adults in respect of the code. We are aware that, at a minimum, with AI tools and recommender systems it is not only children and vulnerable adults who may be more at risk but also individuals who have protected characteristics. Some people, by virtue of, for example, having a disability, a particular ethnicity, nationality or immigration status, or being LGBTQI+, women and so on, may be more at risk. It is less about the technology itself than about its uses, its impact and how it is applied.

Mr. Declan McLoughlin

I will make some supplemental remarks on the wider issue of vulnerable adults. At the core of the code is the concept of regulating harmful or illegal content. The code sets out a number of definitions and types of content. The Irish legislation refers in particular to content related to cyberbullying, humiliation, eating and feeding disorders, and the areas of self-harm and suicide. The European directive, which we draw on, talks about age-inappropriate content and the vulnerability of children who are not in a position to meaningfully engage with content that might be more suitable for adults. The basis of the code is dealing with people in vulnerable situations. The various mechanisms in the code around terms and conditions, limitations on uploading content, age verification and so on are there to address people in vulnerable situations and manage those situations within the framework of the code.

In terms of AI being used to identify vulnerable adults and other people, my general understanding is that there are some initiatives in that field but I will have to come back to the committee with more details on that. I know from meeting with some of the social media platforms that in certain instances they will make alterations to how their systems work in order to protect some of the users of their services. Again, I do not have the details of that right now but I will follow up with a supplemental response on the examples we have.

Forgive me but I have to go to another committee now. I really appreciate the presentation and responses.

Deputy Dillon is next.

I thank all of our witnesses and guests for joining us today. This is a very topical issue. As a young father, it is great to see Coimisiún na Meán set up now, with Ms Hodnett and her team in place. I wish her the very best of luck in her role.

I will follow on from Senator Clonan's point and talk about the principles around safety by design. All of these platforms use different types of technology, different algorithms and so on. What specific measures are being taken to ensure these platforms are abiding by the principles of safety by design? What type of feedback is Coimisiún na Meán receiving in that regard during its engagement?

Ms Niamh Hodnett

Safety by design is really important to us. As part of our consultation on the draft online safety code, which we published in December, we proposed in our guidance that safety by design should underpin all measures being adopted by the platforms. In order to ensure that there is not a whack-a-mole scenario after the fact, safety really needs to be baked in at the outset - at the design of the product, algorithm or service - because it is very difficult to address the issue, or retrofit it, afterwards. As part of the guidance, we also consulted on having a child-centred approach in the context of Article 24 of the EU Charter of Fundamental Rights, which puts the child's rights at the centre. It is one of the specific supplementary measures that we consulted on, in addition to having it in guidance around our online safety code. We also have safety by design as a supplementary measure, to be taken on board by the platforms in the design of their products. In the same way that they would do a data protection impact assessment or a privacy impact assessment during the design of a new product or service to ensure it is GDPR-compliant, a similar safety impact assessment would be undertaken at the outset too.

Does Coimisiún na Meán review that risk assessment? Is it presented to the organisation so that it can be validated and stress tested? Is it stress tested vis-à-vis protecting those under a certain age or others who are vulnerable, in the context of what Coimisiún na Meán is trying to do? New technology is being expanded continuously. The apps on our phones are being updated continuously so how does Coimisiún na Meán keep up with that?

Ms Niamh Hodnett

We regulate the platforms but we do not regulate any particular form of technology. We have tried to use a technology-neutral approach in our draft online safety code, as is the case with the legislation. We are currently reviewing the 1,300 submissions we received to the consultation on the draft online safety code. In January, we consulted our youth advisory committee, which we stood up in December, on the draft code.

Once the code is in place and is binding, our platform supervision and enforcement team will be monitoring compliance with obligations under the Digital Services Act, the terrorist content online regulation and the online safety code. Platforms will report to the team in relation to that. We have significant powers under the Act to request information, carry out audits and monitor compliance with the various obligations.

In terms of personnel, Ms Hodnett mentioned an enforcement and supervision team. How many people are on that team or is An Coimisiún continuing to build it?

Ms Niamh Hodnett

We are continuing to grow capacity at the moment. When we were established in March of last year, we had the 40 staff of the Broadcasting Authority of Ireland. We have a broad remit. In addition to online safety, all of the previous functions of the BAI are within our remit. We have grown to 90 people at this point and we have sanction to grow up to 160 people. We are very much in capacity-building mode. We have now put in place our director and principal officer levels and have finished the recruitment process for assistant principal officers who will join us in the coming months. We are currently advertising in the market for higher executive officers. As each new level comes in, we will grow to capacity so that by the end of the summer we will have the 160 people for whom we have sanction. We may have to go back to the well at that point and look for sanction to grow further in order to be able to do everything we want to do in our work programme.

That is great to hear.

One of the major issues that arises is anonymity and related prosecutions. We have seen an enormous rise in the number of anonymous accounts used to attack individuals. There are also concerns around individual complaints mechanisms and subsequent prosecutions that would be processed by An Garda Síochána. I ask Ms Hodnett to outline how Coimisiún na Meán intends to use its full suite of powers and strategies to deal with anonymity. What powers will be utilised to address those who post harmful content and engage in harmful behaviour?

Ms Niamh Hodnett

We regulate the platforms. We create the binding obligations with which the platforms have to comply, including the removal of, or reduction in, harmful content as well as the prohibition on illegal content. We do not regulate individuals and would not have access to the IP addresses behind anonymous accounts, for example. That would be a matter for law enforcement and An Garda Síochána. That said, we work very closely with An Garda Síochána and have been doing so since our establishment. We have very good engagement with An Garda Síochána in relation to a number of matters that have arisen over our tenure.

On the individual complaints framework, we have been taking in contacts and complaints under the Digital Services Act since 17 February. Individuals are raising queries with us about how best to make a complaint and we would always say to flag it to the platform first. We are aware that there is a lot of complaint fatigue, or flagging fatigue, among the public, but now that the era of self-regulation is over, we will be holding the platforms to account and making sure they deal with complaints in a timely and diligent manner, as is their legal obligation under the Digital Services Act. We have launched a campaign entitled "Spot It. Flag It. Stop It.", which underlines the importance of reporting content, because flagging illegal content to a platform changes its obligations. It affects the platform's exemption from liability and its exemption from having to generally monitor what is out there. Once content is flagged, the platform is on notice that it may be illegal and has to take steps to address the complaint. It has to reach a decision and provide that decision to the user who flagged the content for removal.

Anonymity is, however, one of the factors behind many of the harms we identify in our reporting. We published a report on online harms in December of last year, at the same time as we consulted on our draft online safety code. It points to a number of factors that give rise to harmful content online, one of which is anonymity. That said, there can be times when anonymity is also useful for people online, such as when they are trying to explore new communities or find out information.

Again, however, the issue here is around prosecutions of those who continue to create multiple anonymous accounts. That is a real concern. When we had the platforms in to appear before the Joint Committee on Media, Tourism, Arts, Culture, Sport and the Gaeltacht, they were, in one way, accepting of arguments around freedom of speech and of those who have multiple accounts with different aims. However, that led to very long delays and issues around the identification process. Is Ms Hodnett concerned that it is very difficult to enforce against, or secure prosecutions of, those who actually create these accounts? Is it the responsibility of the regulator or of the platform itself to manage those who create accounts by any means? The fact is that people do not need to verify their identity with a passport or some other document confirming they are who they say they are, which hampers any subsequent follow-up to a potential investigation. In my view, that is still a gap that platforms need to address.

Ms Niamh Hodnett

Where our work touches on that is that, under the Digital Services Act, the platforms have to carry out a risk assessment of their platforms and take mitigation steps to address those harms, so it may arise in that context. Otherwise, with regard to verification, we do propose age verification for minors accessing age-inappropriate content such as pornography or extreme or gratuitous violence.

Is that done through a cookie, or how is that age verification carried out?

Ms Niamh Hodnett

We have left the manner and form of age verification open to the platforms themselves to implement. We are conscious that each platform may have a different approach and that technology may overtake us if we were too prescriptive in mandating one particular measure. There are also European measures afoot as to whether there should be an overall European approach to age verification. I will hand over to my colleague, Ms McAuley, who is on that committee.

Ms Karen McAuley

With regard to age verification, the European Commission, through DG Connect, has recently set up a task force to explore the whole area. It involves organisations across member states, including our counterparts and also, for example, counterparts of the Data Protection Commission. It is a complex issue. There is not yet consensus on the best way to approach it, and we saw that as well in the submissions we received in response to our call for inputs last year. As Ms Hodnett said, it is an evolving space. Part of our thinking is that, rather than prescribing a particular methodology, we put the onus on the platforms and services to identify what approach to take, as appropriate to their services, and to be able to demonstrate its effectiveness. They will therefore need to gather, monitor and share data in order to demonstrate that effectiveness. It is not enough for them simply to say so.

To come back to the Deputy's point, however, there is a huge amount of international interest in the whole area of age verification, but it is one where there is not yet a shared approach or shared understanding. The Deputy will be aware that at European level, an EU consent project is live and there is also work to develop an age-appropriate design code that will take this area into account. An EU identification proposal is also being examined. There are, therefore, a number of initiatives under way to try to pinpoint an approach that is effective and has due regard to the various rights that are engaged, including those of children.
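The approach the witnesses outline, leaving the verification method open while requiring platforms to demonstrate its effectiveness with data, is in effect the specification of an interface rather than an implementation. A minimal sketch, with entirely hypothetical names and a toy self-declaration method standing in for whatever a platform might actually choose:

```python
from abc import ABC, abstractmethod


class AgeVerifier(ABC):
    """Hypothetical interface: the regulator specifies the outcome, not the method."""

    @abstractmethod
    def is_adult(self, user_id: str) -> bool:
        """Platform-chosen check (ID document, age estimation, third party, ...)."""

    def effectiveness_report(self, samples: list[tuple[str, bool]]) -> float:
        # Platforms must gather, monitor and share data demonstrating
        # effectiveness; here, the share of sampled users classified correctly.
        correct = sum(self.is_adult(uid) == truth for uid, truth in samples)
        return correct / len(samples)


class DeclaredAgeVerifier(AgeVerifier):
    """Toy implementation based on self-declared ages (illustrative only)."""

    def __init__(self, declared_ages: dict[str, int]):
        self.declared_ages = declared_ages

    def is_adult(self, user_id: str) -> bool:
        # Unknown users are treated as minors by default.
        return self.declared_ages.get(user_id, 0) >= 18


verifier = DeclaredAgeVerifier({"u1": 25, "u2": 14})
print(verifier.is_adult("u1"))  # -> True
print(verifier.effectiveness_report([("u1", True), ("u2", False)]))  # -> 1.0
```

The design choice mirrors the policy position: any concrete verifier can be swapped in behind the interface, but every verifier inherits the obligation to report measurable effectiveness.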

Is there any type of timeline associated with when Coimisiún na Meán would like age verification to be implemented?

Mr. Declan McLoughlin

As soon as possible is the obvious answer but, as the Deputy knows, a Europe-wide process can be very consultative and very detailed. I know the European Commission is actively addressing it. As Ms McAuley said, there is also an international group. The commission is also a member of the Global Online Safety Regulators Network, with which we are engaging. It is, therefore, a very active issue on which people are very focused. It is difficult to give a timeline, however, because we are dealing with finding the right technological solution, one that guarantees privacy, which is hugely important because age verification raises a range of different issues, but that will also be effective and, very importantly, will be something people trust. We saw that in some of the submissions on the draft code. There were a lot of concerns about the potential risks of age verification processes, risks to privacy but also risks to people's freedom to engage privately on the Internet, on websites and the like. There are, therefore, many thorny issues. It would be very difficult to give a definitive timeframe.

I will make one additional point, although it probably does not directly address the Deputy's question about prosecutions, which are a real challenge and have been a challenge in the courts themselves for individuals trying to vindicate their rights. The online safety code, once it comes into place, will require content flagging and reporting mechanisms. Even if someone does not know who the provider of the potentially harmful content is, he or she can still flag and report the content as illegal or harmful, and the video sharing platform service then has an obligation to react to that flagging and to respond on what it did to address the concerns about the content, whether that means agreeing it is problematic and taking steps, or saying it is not problematic and giving the person an answer. There are real issues with prosecution. However, in terms of our focus on limiting harms insofar as is humanly practical, the content flagging and reporting system is a mechanism. We have also received requests for platforms to report on how effective it is. We have an obligation to assess the effectiveness of any of the mechanisms and, if things are ineffective and there are bad-faith actors not meaningfully engaging with their obligations under the code, we have the significant sanctions Ms Hodnett talked about earlier. There is a suite of different elements in there. Prosecution is a real challenge and has been for a while. I am not sure the commission on its own will be able to solve that problem, but we absolutely agree that it needs to be dealt with in a way that addresses the rights of the people who are impacted, but also the right to free speech and to engage privately online.

I thank Deputy Dillon. I call Deputy Costello.

I thank our witnesses for coming in. It has been quite an interesting discussion, this week and last week as well. I welcome the news regarding the commission and its actions following the applicability of the Digital Services Act over the weekend. Given that it is an ongoing investigation, however, the less we say about that, the better.

I welcome that the witnesses said several times that the era of self-regulation is over because, in reality, self-regulation is not regulation at all. We have seen platforms shirking their responsibility. We have seen platforms engage knowingly, I would argue, in dangerous and exploitative business practices, with our children very much on the receiving end. That has caused quite significant harm. I welcome, therefore, Coimisiún na Meán's commitment to regulation. The witnesses are very new in their roles; they are beginning their journey. I absolutely welcome the regulation. A nervousness I have, however, is that in many other areas we have brought in regulators and they have not been as sharp-elbowed or sharp-toothed as we needed them to be. The witnesses are very new, so I am not saying this about them, but I am expressing the hope that, going forward, the sense of strong regulation we are getting from them really comes true.

I had some notes on other regulators but I will leave that aside and not start talking about others. One of my concerns is about regulatory capture, particularly when the witnesses are talking about people coming to consult with them with their safety plans and about safety by design. We have seen with other regulators that when consultation like that happens, these things become frozen. The regulators will say that they helped to design it so they know it is fine, even though problems emerge later and they are not willing to look at it. I want assurances from the witnesses that in such cases, they will be willing to open up and peek under the bonnet and revisit previous decisions or have an independent system where they can be looked at again.

Is there a timeline for when Coimisiún na Meán hopes to have the online safety code finished, ready to go and enforceable? Has there been any engagement between the youth advisory committee and the commission? Is the commission involved in these things? Is it able to act as a conduit so that the voices of young people are heard and they have an opportunity to engage with the commission?

I had questions on the trusted flagger but I think they have been answered.

I wish to say something about the AI-driven recommender systems. We received some very clear evidence from the experts last week on the dangers these pose and the view that they should be turned off by default. I would recommend that and I would welcome the commission's views on it.

Ms Niamh Hodnett

The Deputy asked a number of questions. If I have forgotten any of them, he should pull me up on it. The era of self-regulation is definitely over and we are now into effective regulation. As I said, we have a suite of regulatory tools, namely, the Online Safety and Media Regulation Act, which is underpinned by sanctions of up to 10% of relevant turnover or €20 million, whichever is the greater; the Digital Services Act, where we act together with the European Commission in respect of the large platforms and which is underpinned by sanctions of 6%; and the terrorist content online Regulation, where we act together with the Garda Síochána and which is underpinned by sanctions of up to 4% of relevant turnover. These are significant financial sanctions which move the dial regarding the conversation about online safety and effective regulation.

There are also provisions in our Online Safety and Media Regulation Act for criminal sanctions to be taken in respect of directors for particular measures. That is another significant tool. It is not just the criminal sanctions and civil sanctions. Under the Act, we can request information or appoint auditors to monitor what is going on.

Our online safety code will be very much a binding online safety code, with set measures contained within it. We are currently reviewing the 1,300 submissions we received from our consultation in December. We will be deciding what we need to change, not in a regulatory capture way but in light of all the submissions we have received. Are we staying with our draft online safety code or are we making amendments in light of the submissions we have received, as part of that consultation process, from NGOs, platforms and also the European Commission? At that point, we will then decide if we move to adopt our online safety code or if we need to engage in a further process with the European Commission under the 2015 technical regulations information system, TRIS, directive. Under this directive, if we are going to impose additional obligations, such as our supplementary measures, we are required to engage with the European Commission in a process that typically takes three to four months and take on board its comments before we can move to final adoption. We will certainly have the online safety code this year, but there are still a number of steps we need to go through before we can finalise that.

We consulted our youth advisory committee on our online safety code in January last. All three of us were involved in that consultation. We had a very positive meeting with our youth advisory committee, which represents youth aged under 25. Half of the members are individuals under 25 and half represent children and young people's views. We took comfort from their very positive feedback on our draft online safety code. It is not one and done. We are maintaining an ongoing conversation with members of our youth advisory committee on a range of matters and will continue to meet them this year as the code progresses to update them on how their views have been taken on board.

As of yesterday, we are part of the digital services board provided for under the Digital Services Act. We are the digital services co-ordinator in Ireland. There is one such body in every EU member state. As part of that digital services board, we feed through Ireland's view on the Digital Services Act. We have a special relationship with the European Commission under that Act. We were one of the first to have a co-operation agreement with the European Commission because so many of the large platforms have their Europe, Middle East, Africa, EMEA, headquarters here in Ireland.

Mr. Declan McLoughlin

It is very clear that the organisation needs to be effective in meeting its very significant regulatory obligations. There is obviously the application of the Act and the effective use of those powers in a proportionate and impactful way. It is also about how the organisation works. As a new organisation, we are looking at how we want to operate as a regulator. What expectations do we have of our staff? What are our values? That is something we are still teasing through, but we have started that work. Some of the key values that have come through are independence, being impactful and, while it sounds like motherhood and apple pie, being courageous as a regulator. We are looking at this from the point of view of what our powers are, but we are also having an internal dialogue about how we ensure not only that we enact our powers but also that we have a strong culture that supports success in how we do our business. That is also worth emphasising as an organisation.

Ms Niamh Hodnett

I return to the Deputy's previous question on AI-driven recommender systems. I stress that we are very concerned about the amplification of harmful content online. It is not expressly provided for in our Online Safety and Media Regulation Act, nor is it one of the express obligations that can be imposed under Article 28b(3) of the audiovisual media services directive. Nonetheless, we feel it is a necessary measure to protect minors from harmful content online. As part of the supplementary measures we feel are necessary to provide for children's safety online, we consulted on a requirement for a recommender safety plan to be in place to mitigate those risks.

Since Saturday, when the Digital Services Act became fully applicable in Ireland, there are now obligations on all platforms. Since the end of August 2023, all very large platforms have been obliged to offer a recommender system that is not based on profiling, for example, a chronological recommender system. That obligation is set out under Article 38 of the Digital Services Act. In addition, platforms are required to be transparent about how their algorithms are used. They are also required to address the risks posed by algorithms and recommender systems, including the risk to the rights of the child under Article 24 of the EU Charter of Fundamental Rights. They are required to conduct a risk assessment and put mitigations in place. Those obligations are contained in the Digital Services Act in addition to the obligation to protect the safety of minors.
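The alternative Ms Hodnett refers to, a feed ordered by recency rather than by engagement profiling, can be illustrated in a few lines. The post data and field names below are invented purely for illustration and do not reflect any platform's actual system:

```python
from datetime import datetime

# Toy posts: an engagement score a profiling model might predict, plus a timestamp.
posts = [
    {"id": "p1", "posted": datetime(2024, 2, 19, 9, 0), "predicted_engagement": 0.9},
    {"id": "p2", "posted": datetime(2024, 2, 20, 8, 0), "predicted_engagement": 0.2},
    {"id": "p3", "posted": datetime(2024, 2, 18, 7, 0), "predicted_engagement": 0.7},
]


def profiled_feed(posts):
    # Engagement-optimised ranking: amplifies whatever the model predicts
    # the user will interact with, regardless of recency.
    ranked = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)
    return [p["id"] for p in ranked]


def chronological_feed(posts):
    # The non-profiling alternative: newest first, no behavioural signal.
    ranked = sorted(posts, key=lambda p: p["posted"], reverse=True)
    return [p["id"] for p in ranked]


print(profiled_feed(posts))       # -> ['p1', 'p3', 'p2']
print(chronological_feed(posts))  # -> ['p2', 'p1', 'p3']
```

The contrast in output shows the point at issue: the profiling feed surfaces the content the model predicts will hold attention, while the chronological feed simply shows what is newest, breaking the amplification loop.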

I thank Ms Hodnett. I apologise; I need to go to another committee.

I welcome the witnesses to the committee and wish them the very best of luck in their roles in Coimisiún na Meán. We have been discussing this matter in both Chambers and in the committee rooms of this House. I have given many interviews on radio stations about the call for an online media commission and its importance. The commission's work is not underestimated by Members of these Houses.

I am a parent of four young, very inquisitive and normal boys. The thought of the online world is petrifying. Thankfully, most of us grew up in a world with none of that. We were lucky if we had access to the Encyclopaedia Britannica in a neighbour's house. That is as far as it went for us. We are battling a whole new world. I am not on top of all of this as a parent and as somebody who navigates the online world for my own work.

I apologise if these questions have been covered already; I missed the start of the meeting. What measures will be taken to ensure the responsible use of AI on online platforms and to enhance child safety and privacy? How can parents and guardians actively participate in that safeguarding? Given that we cannot close down every door, how can we empower everybody to navigate these changes, for our children's and indeed our own online experience? Ms Hodnett mentioned that Coimisiún na Meán would be consulting the youth forum. As the changes come through and the commission implements its code of conduct and so on, will it consult the Department of Education on rolling that out in schools, to empower and educate our children, given that this is a critical part of our education system? How will we, as parents, see that coming home, and how will we be able to support our children and educate ourselves on the new online world that will be the norm in future?

How will the regulator work with a platform to prioritise children's safety, especially in terms of monitoring content and harmful interactions? Has the commission a role? Forgive my lack of knowledge of the old commission’s setup. Will the new commission have a role in monitoring or will it wait for a complaint to be made first? Is the commission proactive or reactive?

Ms Niamh Hodnett

The Senator asked many deep questions, so if I forget any, she might revert to us and my colleagues and I will answer them.

We believe this is a whole-of-society issue. We are one part of it as the regulator and we alone will not solve it. There is a role for all of us to be more civil online and to think about digital civility in our interactions. Regarding education, good work is being done in schools by the Department of Education, Webwise and CyberSafeKids, which appeared before the committee last week. Parents need to be empowered to know how to engage with parental controls and protect our children online and what conversations we need to have with them. Most importantly, our children need to be empowered, as they are the digital natives who are managing their education, communication and social lives online. There are many positive benefits online that we want them to enjoy, for example, connection and education. There is also a role for the Legislature in passing the rules that allow us to enforce our regulation. The commission is not alone in this, as it is a whole-of-society issue. Ireland is not alone in this either. As Mr. McLoughlin and Ms McAuley alluded to, we are working closely with the European Commission, the Global Online Safety Regulators Network and a range of regulators in other countries. We have been meeting the Department of Education, the Department of Children, the Department of Health and many other Departments since our establishment to understand how best to engage with all of them. We have a role in media literacy.

With our online safety code, we are imposing binding obligations for parental controls. Under those obligations, the platforms would have to have parental controls that were easy to use and easy to find. They would also be subject to binding media literacy obligations to educate children and parents in how to use those tools effectively.

We have been working closely with Webwise, which does great work under Oide and the Department of Education. Webwise has useful tools on its website for the primary school and junior cycle curriculums. Some of them are known as the Respect Effect. Webwise is working on materials for a programme for teachers in respect of AI, including education about introducing AI and knowledge about AI.

A draft EU AI Act is being finalised at the moment. It will impose transparency obligations relating to AI. In the case of generative AI, for example, it will have to be stated whether content has been generated by artificial intelligence, so that children, and all of us, can recognise when that happens. That is not the case at the moment and it is difficult for us to tell.

Mr. Declan McLoughlin

The benefit of the Online Safety and Media Regulation Act is that it comes at the issue from a wide range of perspectives. There is the online safety code and its related provisions, including enforcement provisions. There is statutory guidance, which allows the commission to pinpoint societal issues, first and foremost those set out in the Act, such as suicide, self-harm and eating disorders, and tease out what that means and what best practice should look like in a social media and video sharing platform context. Importantly, there are other functions, such as the nominated bodies. This provision under the Act allows the commission to develop a scheme whereby certain bodies in society, for example, civil society bodies, are essentially appointed as nominated bodies with certain powers and functions under the Act that allow them to tell the commission whether there are consistent issues with a platform, for example, non-compliance with the code. There are provisions for us to undertake audits. There is also individual complaints handling.

When it comes to empowering children and parents, the strong provisions in the Act around media literacy are important. Coimisiún na Meán is in the Media Literacy Ireland network, which comprises a wide range of stakeholders that are focused on media literacy through empowering people to critically understand engagement with the online world, create content and protect themselves. That is reflected in the online safety code, under which it is proposed that the video sharing platform services produce annual action plans on media literacy and report to the commission on same.

The commission is empowered to apply a range of different perspectives to this issue. There is the hard regulation, engagement with the platforms and, regarding the public, there is media literacy. The Act is flexible in terms of responding to changes in technology and harm. The commission can lever a range of strong powers to address issues like empowerment and non-compliance.

When will the commission reach a point where sanctions for infringements can be applied? We have an important year ahead with elections and so on. If a deepfake or the like was put up online and not taken down, when would we feel that an avenue was open to someone and that sanctions would be applied in respect of that infringement?

Ms Niamh Hodnett

Once the online safety code is in place, we can apply sanctions for the infringement of that. The Digital Services Act has been in place since Saturday, 17 February, and is fully applicable. We or the European Commission can take enforcement action against the large platforms. Of the platforms that are still large but do not fall into the category of “very large online platforms” under the Digital Services Act, we are the responsible regulator for the ones that are established in Ireland.

The Senator asked whether we were proactively monitoring. We have established a supervision and enforcement team, which will proactively deal with the various platforms within its remit and escalate issues to the enforcement team where an investigation is warranted. Our contact centre, which we established yesterday, is taking complaints about breaches of the Digital Services Act. We just stood it up yesterday, so it is still new for us, but we will be assisting people who wish to raise complaints or queries. Where breaches of the Digital Services Act come to our attention, it helps us with market monitoring and building a picture of various platforms’ compliance with their obligations.

It is good to hear everyone’s perspective, so I thank the witnesses.

I will begin by showing my appreciation of the fact that, although the commission has only been in place for a year, a large amount has already been accomplished. The commission has been forthright in the media in discussing and explaining matters. I appreciate that. I am optimistic about what lies ahead.

In 2016, Dr. Cathy O’Neil published her book, Weapons of Math Destruction. In that, she flagged the idea that big data would increase inequality and undermine democracy. We also had people like Professor Mary Aiken discussing the cyber effect and publishing a book at the same time. The confluence of both started me thinking about what life was like for our child. Children are open to direct bullying and their presence on a platform in and of itself brings about an opportunity to be bullied. Even if they are not bullied overtly, their self-esteem is attached to how many likes they get and whether a certain group at school liked or did not like something. Behaviour starts being modified to chase the dragon that is the social media presence and the amplification of same. My child is eight years old and she comes home from school talking about YouTubers. I have to sit her down and say that those people are not living in the real world. When the girls at school talk about so and so or we watch YouTube on the big screen in the kitchen – I do that so that I can keep an eye on it, have conversations with her in the same way as my mother would have had with me about TV programmes and try to keep up with the culture – I have to explain that a lot of what she is seeing is not real and is not our everyday lived experience and that her chats with others in school or her own expectations about what life should look like should not be based on that.

There is not the same regulation of that content, but children pursue self-esteem through the lens of what is on those platforms.

The next issue relates to the content itself, which we have talked about. I have great faith in the coimisiún's regulation, and its response relating to the day of the riots was exemplary. What I have heard of how it clicked into action is brilliant, and I commend our guests on their assistance and on getting on to the platforms, reminding them of the issues and reminding them to have manners on the day. A large volume of content was taken down and, for all we saw, a large volume did not get to be seen in the way it could have been. There is that content, and then there is the pulling of people into an echo chamber of their own beliefs. I attended a public meeting on the referendums last night with one of our councillors. A woman stood up, held up a book that she claimed was the Constitution and pointed out that the Bunreacht is not really the Constitution. We have all been served with that document. That woman absolutely believed what she was saying; she said it in an absolutely authoritative manner. At the same time, I was standing there thinking that arguing with her would be difficult, but what she was saying had come from an echo chamber of communities on platforms that emphasise certain issues. That comes from recommender systems pulling people into reinforcing their own beliefs.

Those are the two obvious issues, but there is a third one, which relates to a cultural shift. I cannot but recommend the coimisiún's website, which is brilliant. It sets out what is illegal and how that relates to the various legislation, and it is all incredibly well done. We have been lured into a friction-free world whereby, if I need to look something up, I can have it straight away. If I go onto search engines where privacy is respected, it does not happen in the same way; they will not fill in my credit card details or do all these things of the instant world we live in. There is also, however, a business model at the heart of all this that is about capturing our attention and selling it. We are the product being sold to advertisers. Are we saying enough in our literature to educate people about that? I fully believe children should not have phones under any circumstances. There are heaps of things online that they should not be looking at. They should not be on YouTube. The idea that ten-year-olds are in their rooms unsupervised on these websites is horrific. There is, however, a cultural shift towards instant gratification, a lack of attention, behavioural modification and so on. How are we going to capture that? Switching off the recommender algorithm is one way of undermining that business model, but if that applies only to children and we do only that, can we enforce it? Are there workarounds? Alcohol companies that are not allowed to advertise their alcoholic products are now flooding the sides of rugby pitches with their 0.0 products, so they have got around that. How can we anticipate the workarounds? Is enough research being carried out? I am trying to solicit our guests' opinions on where they think there might still be gaps, so that we can try to move ahead of them.

Ms Niamh Hodnett

I thank the Senator. There is a lot to unpack there. The chilling effect on democracy is one of the concerns. In addition to the protection of minors, as a policy division, we have focused on different specific areas for 2024. Children and vulnerable adults are an area of focus, with Ms McAuley heading up our efforts as director of policy. We also have a director who is focused on democracy, elections and disinformation, because we are very conscious that this year there are so many elections, including local, European and national, and referendums.

From all the groups we met, including various groups that represent minorities, women and people originally from other countries, many of them did not want to go forward for election. We engaged with the task force on safe participation in political life because a lot of individuals did not want to go forward as candidates because they were concerned about the implications of incitement to hatred or violence or hate speech being directed towards them just for being who they are, and that has a chilling effect on democracy that we are concerned about. We do think the amplification of harmful content through recommender systems is yielding this chilling effect on democracy or is causing polarisation. The amplification of harmful content is also raising toxic beauty issues that can lead to, for example, eating or feeding disorders, while depressing content can lead to self-harm or suicide. That is why we have proposed in our supplementary measures to address the recommender systems through having a break in this amplification of negative content in some way with a recommender safety proposal or a recommender safety impact plan to be in place.

For the large platforms, under the Digital Services Act, there is already an obligation to mitigate the risks that are brought about by the amplification of harmful content through recommender systems and to have available an alternative recommender system that is based not on profiling but on, for example, a chronological feed.

In respect of bullying and self-esteem, a focus of our online safety code is to address cyberbullying. Again, eating and feeding disorders would be at the more extreme end and have an impact on self-esteem. The issues the Senator raised highlight that we are just one cog in the wheel. We are the regulator and we can enforce binding obligations, whether under parental controls, media literacy or dealing with complaints in a timely and diligent manner, but there is a whole-of-society role regarding online safety. This applies to the education of children by teachers in our schools, and it is also something for us all, as a society, to think about. When we engage online, it should be done in a more empathetic and civil way and we should think about the impact on others of what we post. It cannot be addressed purely through our blunter tools of regulation, but we are here to move away from that era of self-regulation and to put effective regulation in place to ensure the obligations that are there are being complied with by the platforms.

Mr. Declan McLoughlin

On the whole-of-society approach, when we were developing the proposal and draft code, we carried out some surveys of public attitudes. We asked a range of questions, one of which related to where the responsibility for keeping social media users safe should lie. It was an unprompted question, so the answers were off the top of people's heads. People responded with social media companies, the users themselves, parents and families and the Government and politicians, while the media commission came in probably lower than it should have, given we are probably not fully in the public mind just yet. We then asked whether enough was being done to keep social media users safe and whether each of the groups they had identified was doing enough. In the case of the Government, the commission, social media groups, Internet service providers, users and society, the consistent message was that more needs to be done.

It comes back to that point about a cultural shift. There is a strong demand whereby people know where the responsibility rests, namely, with themselves, the regulators, the Government and other parties, and the clear message in this, albeit one, survey is that everyone needs to do more. It speaks to that cultural shift that needs to take place.

There is no doubt about that. The coimisiún's representatives will be more sensitive than most to the boundaries of its powers and where the gaps lie beyond that. My concern is that people do not know. As I said last week, when I first came into the Oireachtas, I had been in the GDPR space for a few years, had seen when it went terribly wrong and had to address it for companies when that happened. The Data Protection Commission is fantastic. Whatever about its critics, it has done an amazing job in a short space of time. It had significant pressure on it in Ireland. In a way, our guests get the benefit of its experience in seeing how it can be done. When I first came into the Oireachtas, I felt as though I was wearing a tinfoil hat. I never allow push notifications or sign up to emails or anything like that which could suck you in to staying online. I find it horrific that two people go out for dinner and sit at a table, both on their phones, with no human interaction. We are each being siloed off into our own worlds in which we are marketed to.

I do not think people know this. I am not sure it falls to Coimisiún na Meán to educate people in this. Its job is to keep people safe from the effect of it and to mitigate, regulate and enforce with regard to the platforms being responsible in how they do it. The fact of what is being done is not spoken about. Whose job is it to do this? I am not abdicating any responsibility as it is the Government's job. The Data Protection Commission can preserve our data but once someone takes out a presence on a platform, they are signing up to a contract that will go on for years. They have no idea what rights they have signed away that are hidden in this. People are drawn in, not knowing what they have engaged in, because they see only the benefits, which are great. I have contact with my cousin in Australia and we swap pictures of our children and talk about our lives in the messaging service. We do all of this and we see the uplift of it, without realising it, and maybe we have the presence of mind not to be bothered. I would much rather sit and have coffee with somebody than this. There are people who are susceptible to being drawn in. I see it with my own child's resistance when I say "but". They have it. They know YouTubers. They do whatever. We are against the tide. How do we communicate that this is happening? The cultural shift has moved to this friction-free world. We want everything instantly and we will buy into WhatsApp, which had the audacity last week to say "No" to the Minister, Deputy Foley, with regard to age verification. However, it has obligations. The companies will try to avoid regulation where possible and deny what is really going on.

Ms Niamh Hodnett

We published a call for input last summer to ask people what harms we should address and what measures we should put in place. We got a lot of informative views from this, which raised many of the issues Senator Seery Kearney has set out. Our policy division has a focus on children and vulnerable adults. It also has a focus on democracy, elections and disinformation, a focus on codes and rules, and a focus on illegal content such as terrorism and child sex abuse material.

We also established a user engagement division. In this we have a colleague who is focused on education, user awareness and media literacy as well as a colleague who is over the contact centre and a colleague who is over handling complaints. We are a relatively new organisation. We are growing in size and capacity. As we build on this capacity and get the basic building blocks of regulation in place this year, which is what we are trying to do, we will be able to build up the educational role to be able to look at ways of doing it. We have engaged with the Minister, Deputy Foley, on looking at ways to empower schools and children with regard to online safety. We will also look at other avenues such as, perhaps, how to provide toolkits of assistance to people to guide them with complaint handling or how to raise a complaint. We are constantly looking at how we can build the education role through our user engagement division.

Mr. Declan McLoughlin

It is also worth mentioning again briefly that we are part of a global online safety regulators' network. This includes the eSafety Commissioner in Australia, the regulators in Korea and South Africa, Ofcom in the UK and ARCOM in France. It is an expanding group. There are also observer members from the United States and other jurisdictions. This network has established a user awareness and training working group. While we have the very definite focus in our own organisation which Ms Hodnett has just outlined, we are not going it alone. The eSafety Commission in Australia, as we all know, has greatly developed its training and awareness resources. It is one of the powerful impacts of that organisation as a leading light on how to engage citizens. We will learn from it. We will also learn from other jurisdictions so that together we build a common understanding of the most impactful way to let citizens know their rights. The contact centre is one example of this. One of its main purposes is that people can call the contact centre to explain they have a problem and ask what they can do. The contact centre means that people are not going around in the dark trying to find out how to resolve an issue. There is somebody at the end of the phone who can help them. This is the first important step that we have prioritised. We are doing work on our own but we are also engaging internationally with leaders in this field such as the eSafety Commission. It is very definitely a very strong part of our remit and our focus.

Clearly Coimisiún na Meán is very focused.

Ms Karen McAuley

I would like to make an additional point. Earlier Ms Hodnett mentioned the meeting in mid-January with our youth advisory committee to consult it on our draft online safety code. I hope its members will not mind me saying that one of the issues they spoke about a great deal, which was very helpful, was the importance of education and awareness raising, including on rights in a digital environment, for children and young people and for parents and guardians. I have in mind some of what the young people spoke to us about, and which Ms Hodnett spoke about in her statement, which is the importance of children being part of the conversation in helping us to find solutions that work for them. We are speaking about education for children with regard to online safety. Regardless of how we frame it, we should hear from them about what works for them. They know what kind of messaging and forms of communication they will engage with so that the messaging really taps in to them and suits where they are at. This was very important for the young people who said it.

It was a very good initiative to engage them. One of the great things about the DPC is that it publishes where there has been enforcement. Without giving away particular details, it publishes trends of where it has had to intervene. Does Coimisiún na Meán envisage doing this? If not, I highly recommend that it does. From a practitioner's perspective I was able to advise industries or particular sectors of our economy that this had happened; they might not want to be compliant or they might be dodging out of it but I could say it had happened. There was the carrot and stick of good reputation, forward planning and protection of privacy, which are all fantastic, but if companies do not do this and do not have this vision they could look at the DPC website, which had criminal prosecutions and the other trends. I imagine that Coimisiún na Meán will set out in an annual report the sort of things it will engage in and that will come up.

Criminal prosecutions were mentioned. Who will be in court? This has been a problem in health and safety. It may be the directors of the company but nobody ever goes to jail. It is always a fine. There is not quite the same stick that there would be if it involved the Non-Fatal Offences Against the Person Act and people on the street. Who will go to court? We would all like to think of Mark Zuckerberg being pulled into the Criminal Courts of Justice but that will never happen.

Ms Niamh Hodnett

On Senator Seery Kearney's first point, the trends will be very beneficial. We have not yet had an enforcement case but the work of the Data Protection Commission in publishing its previous enforcement decisions is instructive. It shows trends and it also shows guidance. It can encourage others on how best to come into compliance without having to run the rigours of an enforcement case. There is a lot of merit in this approach. It is one we will certainly bear in mind as we move forward with our own enforcement powers.

With regard to criminal prosecutions they are against an organisation but they can also be against senior directors or officials in the organisation where they are the guiding will and mind. It is both the organisation and individuals.

They are the directors in Ireland. When we are dealing with multinationals whose headquarters are in Ireland, what will that look like?

Ms Niamh Hodnett

Again, we have not yet got our online safety code in place. It is only once that is in place that there will be enforcement. First we will be monitoring for compliance with the obligations. We will be looking at whether it would be appropriate to go a civil or criminal route with regard to the particular potential breach involved. It is very premature at this point in time.

The civil powers of injunction are very good.

Ms Niamh Hodnett

Yes, as are the civil administrative sanctions. The fines can be quite high.

Mr. Declan McLoughlin

To make another point on the trends and analysis, our director of research started this week. There will be a research unit that will build out with staff in the coming months.

Research is something that we have identified as being really important to inform not only our own work but also the public. We will also be hiring data analysts, data engineers and so on to interrogate data which will feed into the wider range of work. I do not want to speak for what we might do in the future but I expect research very specific to our remit on online safety and wider research will be part of our future outputs and activities.

It is very comprehensive. Brilliant. I really wish you well. It is exciting. Thank you, Chair, for your indulgence.

The Senator has been here for the whole meeting in fairness. Could I ask for the contact details and for the helpline again? I am not sure if there is an advertising campaign in relation to the helpline. There is. That is great. It would be good for people because at least it is a first step when someone is in the middle of a situation. We will also advertise them on our website along with the opening statements but the commission might take the opportunity to say it again.

Ms Niamh Hodnett

We have an advertising campaign now, "Spot it. Flag it. Stop it." That is our campaign to raise awareness that the Digital Services Act is now directly applicable. If someone spots harmful content online, they should flag it with that platform. We would always say to go to the platform first and that will help to stop it. We ran a campaign on the media literacy point of "Stop, think and check" on disinformation as part of Media Literacy Ireland. It encourages people, when they read information online, to think about how reliable it is and to stop, think and check it.

Our contact centre opened yesterday. It is open from Monday to Friday, between 8 a.m. and 6 p.m. The number is 01 9637755. There is also an email address, usersupport@cnam.ie. We also have further information on our website, cnam.ie, and there is a section on online complaints. It has a detailed section on how to go about flagging a complaint. If it is a matter of life or death, we would say to always contact law enforcement, that is the gardaí, first.

That is great. It is good for people, and even ourselves, to have that information. When you go into politics you go into a certain role but if we come across this, what is it like for children? It is a scary environment. I agree with much of what Senator Seery Kearney said about YouTube. It is the big thing for kids. The big ambition is to be a YouTuber. It is so far removed from anything that any of us grew up with.

I thank our guests so much. It has been great. I am glad we had the time to have a really good detailed discussion. I thank everyone for coming in and all the members for being here and their engagement.

Do I have agreement to publish the opening statements to the website? Agreed.

The joint committee adjourned at 4.33 p.m. sine die.