
Joint Committee on Media, Tourism, Arts, Culture, Sport and the Gaeltacht debate -
Wednesday, 19 May 2021

General Scheme of the Online Safety and Media Regulation Bill 2020: Discussion (Resumed)

The committee is in public session and I welcome my colleagues and our guests. We have one item of committee business to be addressed before I call on our witnesses to present. I ask my colleagues to note the draft minutes of our public and private meetings on 12 and 13 May. Is it agreed that they are formally agreed and that there are no matters arising? Agreed.

This meeting has been convened with representatives from Facebook Ireland Limited, Twitter and TikTok in the sixth of our public hearings to discuss the general scheme of the online safety and media regulation Bill 2020. I welcome the witnesses, who are joining the meeting remotely via Microsoft Teams: Mr. Dualta Ó Broin, head of public policy at Facebook Ireland Limited, Mr. Ronan Costello, senior public policy manager with Twitter, and Dr. Theo Bertram, director of government affairs and public policy at TikTok. They are all very welcome.

The format of the meeting is such that I will invite witnesses to make opening statements, which will be followed by questions from members and colleagues on the committee. As the witnesses are probably aware, the committee may publish the opening statements on its website following the meeting. I will call each organisation to deliver its opening statement in the following order: first Mr. Dualta Ó Broin from Facebook, then Mr. Ronan Costello from Twitter and then Dr. Theo Bertram from TikTok. Opening statements will be limited to three minutes per organisation and I ask the witnesses to adhere to this as much as possible. Before I invite them to deliver their statements, I have some housekeeping to do and I hope they can bear with me.

Witnesses are reminded of the long-standing parliamentary practice that they should not criticise or make charges against any person or entity by name or in such a way as to make him, her or it identifiable, or otherwise engage in speech that might be regarded as damaging to the good name of the person or entity. Therefore, if their statements are potentially defamatory in relation to an identifiable person or entity, they will be directed to discontinue their remarks. It is imperative that they comply with any such direction. As our witnesses today are attending remotely from outside the Leinster House campus, they should please note that there are some limitations on parliamentary privilege and, as such, they may not benefit from the same level of immunity from legal proceedings as a witness who is physically present. Members are reminded of the long-standing parliamentary practice to the effect that they should not comment on, criticise or make charges against a person outside the House or an official either by name or in such a way as to make him or her identifiable.

I also remind members of the constitutional requirement that members should be physically present in the confines of Leinster House or the Convention Centre Dublin in order to participate in the public meeting. I will not permit a member to attend where he or she is not adhering to this constitutional requirement. Therefore, if any member attempts to do so from outside the precincts they will be asked to leave the meeting. I ask members to identify themselves when contributing for the benefit of the staff of the Debates Office preparing the Official Report. I also ask them to mute their microphones when not contributing to reduce background noise and feedback. I ask them to use the "raise your hand" button when they want to contribute and, of course, to keep their mobile phones on silent or, better again, switched off.

Our guests are very welcome and we are eager to hear from them today. As they know, the legislation we are working on is very important and their contributions today will be a huge input into it. They will be afforded three minutes per statement and then we will hand over to members, who will have five minutes each. I ask the witnesses to be as concise as possible in their answers to the questions directed to them. It would be much appreciated. Please do not feel offended if I have to interrupt to keep the meeting very focused, which I intend to do. I now invite Mr. Ó Broin to make his opening statement on behalf of Facebook Ireland Limited.

As we have a technical glitch we will suspend the meeting for a few minutes to rectify it.

Sitting suspended at 12.39 p.m. and resumed at 12.43 p.m.

I thank our guests and my colleagues for their patience. We have addressed a technical glitch so we will now be able to hear everyone. Without further ado, I invite Mr. Ó Broin to make his presentation.

Mr. Dualta Ó Broin

I thank the members of the committee for inviting Facebook to participate in these hearings as it conducts its pre-legislative scrutiny of the Bill. As members will be aware, Facebook has been calling for regulation of harmful online content since 2019. We do not believe it should be left to companies such as ours alone to decide what should and should not be allowed to remain online.

Members may have had an opportunity to review our written submission, which contains our seven main recommendations for the Bill. I do not intend to try to cover all of those points in the three minutes available to me. Instead, I will focus on three key areas. The first of these relates to the establishment and appointment of the regulator. When the Government announced its intention to introduce legislation and appoint an online safety commissioner in March 2019, Ireland took a leadership role in the ongoing global debate on how to appropriately regulate harmful and illegal online content. The rapid establishment of a fully resourced and staffed regulator is crucial for several reasons. One of these is to ensure existing EU law is implemented, but another is to provide the research and evidence base for decisions to be taken by policymakers and legislators in Ireland. In our submission, we call for the rapid establishment of an online safety commissioner and the prioritisation of existing EU online safety law.

The second point is that the safety of our users is a priority for us. The idea that it is in our interest to allow harmful content on our platform is categorically untrue. Creating a safe environment where people from all over the world can share and connect is core to our business model. If our services are not safe, people will not share content with each other and, over time, will stop using them. Advertisers do not want their brands associated with harmful content, and advertising is Facebook's main source of revenue. For this reason, in consultation with subject matter experts for more than 15 years, we have developed policies, tools and a reporting infrastructure that are designed to protect all of our users. We continue to evolve these policies and launch new safety features. To give an example, recently there has been a focus on abuse directed at sportspeople and other public figures on Instagram. Last month, we launched a message filter on Instagram which redirects abusive messages so people never have to see them. This filter, which we hope will be effective in tackling this challenge, is rolling out on Instagram in Ireland now.

I commend the secondary school students who appeared before the committee recently to give evidence in respect of online bullying. It has long been our view that while we will continue to play our part in addressing and removing this type of content from our platforms, online bullying requires a multifaceted, multi-stakeholder approach. That is why we have invested heavily in Ireland’s National Anti-Bullying Research and Resource Centre in Dublin City University. The centre’s FUSE programme, which we are pleased to fund, takes an evidence-based approach to online safety and gives students, teachers and parents the tools to speak about and address online bullying. As the Oireachtas Committee on Education, Further and Higher Education, Research, Innovation and Science heard from the National Anti-Bullying Research and Resource Centre last week, FUSE has, since its launch in 2018, engaged with over 130 schools across all counties, with positive results.

I look forward to today’s engagement and I hope to be able to respond to as many of the members’ questions as possible.

I thank Mr. Ó Broin. He was much quicker than I had anticipated. I invite Mr. Costello to make his presentation on behalf of Twitter.

Mr. Ronan Costello

I thank the committee for its invitation to participate in today’s session. In this opening statement, I will provide our observations on the Bill while briefly discussing the international regulatory context and Ireland’s unique role at this milestone of Internet governance.

The open Internet has been an unprecedented engine for economic growth, cultural development and self-expression. At Twitter, we define the open Internet as a global and borderless digital space where anyone can freely access information and express themselves, and where businesses can start, innovate and compete with relative ease. However, we believe this vision of the open Internet faces challenges that we all acknowledge, Twitter and the members of this committee alike. Its fruits are being undermined by the consolidation of industry, the regression of civil and respectful discourse and the subversive spread of harmful disinformation. Twitter asks that we approach the regulatory response to these challenges from a shared foundation: the laws we make should protect, not diminish, the global and open Internet. They should nurture fair competition, not choke it. They should encourage diversity in approaches to content moderation. They should mandate a degree of interoperability between services. Institutions such as the EU and the Irish Government, which are now at the frontier of this new era, should be mindful that the regulatory models and penalties they enact will be exported across the world to serve political agendas of all kinds.

With regard to the general scheme of the Bill, we wish to highlight three primary concerns. First, the global and borderless nature of the Internet is best preserved by the establishment of regional and global standards that reflect its structure. In our view, the harmonisation of regulatory standards will safeguard fair competition because companies of small to medium size will be better able to shoulder a single compliance regime. Fragmentation benefits only the largest players. As Ireland is part of the EU's Digital Single Market, it should consider delaying the enactment of provisions that may overlap with proposals currently under development in the context of the Digital Services Act, the Digital Markets Act and similar initiatives at EU level.

With the proposals contained in the general scheme of the Bill, Ireland is setting a global benchmark. We contend that several of the sanctions, as currently envisaged, create an unhelpful international precedent. I refer specifically to the extent of proposed financial sanctions, the provision for service blocking and the criminal liability of senior management. Already, we see countries imposing punitive financial penalties to make the business environment difficult for platforms with unwelcome positions on freedom of expression. Service throttling and blocking are often used to limit citizen access to news, information and minority or opposition perspectives. Non-compliance is met with custodial threats for company employees. These actions challenge the foundational principles of a free Internet. We humbly ask that Ireland acknowledge this unfortunate trend and reflect its support for the open Internet in the sanction regime set out in the general scheme.

Finally, with respect to the proposed financial sanctions, it should be recognised that there is a large cohort of platforms for which sanctions of the order of those set out in the general scheme would be an existential concern. In the platform sector, content policy like that outlined is often competition policy by other means. The implementation of equitable sanction regimes is a key part of promoting fair competition. The alternative is a landscape too costly for all but the largest players. I look forward to discussing the Bill with committee members today.

I thank Mr. Costello very much. Finally, I call on Dr. Bertram to speak on behalf of TikTok.

Dr. Theo Bertram

On behalf of TikTok, I thank the Chair and members of the committee for inviting us to attend. We are grateful for the opportunity to provide feedback to the committee on the general scheme of the Bill and I am happy to answer questions on any topics members may choose. I am sorry that I cannot be there today in person. It may not sound like it, but I am an Irish citizen thanks to my mum. I have many fond memories of travelling to Dublin, not so much of the ferry crossing, but once I got there I always loved it. Having worked in tech for the last decade or so, I am also proud that Dublin has become one of the most important global destinations for the digital industry. As the director of government relations and public policy for Europe at TikTok, I am very pleased that Dublin is a major global hub for our business.

For those who are not familiar with TikTok, it is a global, short-form video platform that provides its users with a vibrant, creative experience in a fun and safe environment. Our mission is to inspire creativity and bring joy. Since beginning our journey in Ireland, we have gone from strength to strength. Our Europe, the Middle East and Africa, EMEA, Trust & Safety Hub, established in Dublin in 2020, is led by Cormac Keenan, with many senior global leaders based here. The hub is designed to enhance TikTok's localised approach to content policy while also supporting our ongoing objective to put safety at the heart of all that we do. Members of the committee may have noted our recent announcement that our European transparency and accountability centre will be based in Ireland, which will allow experts and policymakers like the committee members to come and see at first hand how we moderate and deliver content. At least one committee member has been to our transparency centre in the US, at least virtually. We also have a data privacy team based in Dublin, which is overseen by the Irish Data Protection Commission. TikTok has hired more than 1,000 people in the past year, bringing its total headcount in Ireland to more than 1,100. Our ambition is to continue growing. We have also announced our intention to establish a data centre in Ireland. This will involve a further $500 million investment that will create hundreds of jobs.

TikTok is dedicated to protecting its users from misinformation and disinformation on the platform. In fact, we were one of the first platforms to create a Covid information hub dedicated to disseminating information from the WHO, and we are actively working with the Irish Government and Irish health authorities to build awareness of the importance of vaccines in the fight against Covid-19.

Youth empowerment and education is a huge focus for us, and our youth portal is a place where teens and parents can build their digital literacy skills as part of the growing TikTok community. TikTok is also the lead sponsor of this year's St Patrick's Festival and we are contributing €500,000 to various community aspects of this year's campaign to support local artists and live events.

Ensuring a safe online experience is of paramount importance at TikTok. Our platform is designed to inspire creativity and joy, and we know people are at their most creative when they feel safe. That is why we support the objectives of the draft legislation. TikTok welcomes a systemic approach to regulation, one that looks at systems and processes rather than individual pieces of content.

TikTok is also encouraged by the commitment in the general scheme of the Bill to a proportionate, consistent and fair approach that recognises the different nature of regulated services and the rapidly evolving technological environment. There is an important balance between online safety and protecting fundamental rights such as freedom of expression, and the enshrinement of democratic values within the proposed media commission's constitution is a positive step in that respect, helping to strike the right balance. I thank the committee for the opportunity to discuss these issues today and I look forward to answering any questions.

I thank Dr. Bertram. We have run out of time. I thank all the witnesses who made presentations. The overarching theme coming through is their support for this proposed legislation and for the commission, which I am thoroughly delighted to hear. We will now take questions from members. I call on Deputy Mythen. He has five minutes for questions and answers.

I thank the witnesses for attending. I welcome the submissions and look forward to their answers, as they are an integral part of the global communications network.

I will start with a general observation. I do not think there is a public representative who has not had a friend or colleague harassed online or who has not had worried parents contact his or her constituency office to complain about their children suffering because of alleged abusive content published on one of the witnesses' social media platforms. I know they are here to comment on the general scheme of the online safety and media regulation Bill. I welcome that, but the public are rightly worried when existing protections are not fit for purpose. That erodes people's confidence in any further protective measures, especially when they perceive that global companies such as the ones before the committee are not acting fully in their interests.

The witnesses stated they have experts in many fields from around the world working on their teams. They all outlined the systems used to protect people who use their sites, but there are ongoing and significant complaints about abuse, bullying and harassment on the platforms. One of the consistent complaints we hear concerns the length of time it takes for this type of content to be removed. What are the current timeframe and objective from when a complaint is made to the removal of harmful content? What mechanisms are in place in the organisations to protect staff, especially their mental health, because we know they have to go through a lot of disturbing and violent content?

Last week, we heard from the Children's Rights Alliance about the importance of an individual complaints mechanism for the safety of young people online. What is the perspective of each of the witnesses on that issue, and could they explain the reasoning behind it?

Could the witnesses also flesh out why they want to remove interpersonal communications services from the scope of the proposed legislation? We know children and young people are accessing harmful content on online platforms, including but not limited to violence, pornographic images and child sex abuse material. How is this content still accessible on the platforms? At present, what steps are taken if a child is under age, and how do the companies verify the age of users?

Those are very good questions and I want to make sure Deputy Mythen gets the answers he requires. We have four and a half minutes so I will ask each of the witnesses to be as concise as possible in responding to the specific questions that have been asked.

Mr. Dualta Ó Broin

There are quite a number of questions there. I will focus on one or two and if I do not get to the answers now, I might get to them while replying to other members.

On the length of time involved, the goal is to get content which violates our community standards removed as quickly as possible. We do not have a strict turnaround time on that. Our goal is to get to it as quickly as possible. With the advances we have made in artificial intelligence, AI, in recent years we are getting to a lot more content before it is even reported to us, but that is an ongoing process and we will continue to evolve and improve our services.

I wish to respond to the point about interpersonal communications services because that is important. Currently, there is one line of justification in the general scheme. Interpersonal communications services are being treated as just another category of service that the media commission should regulate, but we see them as being a very distinct category of service where there is a huge amount of existing Irish and EU law and ongoing judgments from the European Court of Justice on what private companies and what-----

Mr. Ó Broin's screen has frozen so I might stop him there and move on to Mr. Costello.

Mr. Ronan Costello

That is no problem. I can jump in there. I will address content moderation in general, given that it was the first issue the Deputy raised. I included an annexe in my opening statement that addresses some of this. It goes to the heart of one of the things that Twitter wants to emphasise, which is that we have been much more proactive in the past four or five years in reducing the burden on users to feel that all content moderation is initiated by them reporting content to us and that that is always the starting point of content moderation. That is not the case at all any more and, hopefully, into the future that burden on users will reduce further. At the moment, 68% of the content that we remove for violating Twitter's rules on abuse is proactively surfaced by machine learning. This goes to a key point, which is that the most effective and scalable way to do content moderation now and into the future, although user reports are always helpful from a contextual point of view, is to deploy machine learning in such a way that it gets better and better at identifying abusive behaviours. Abusive behaviours are common across accounts, whether they are spamming a particular person or spamming a hashtag to try to get a topic trending. The better we can be at deploying machine learning and looking at abusive behaviours, the smaller the burden there will be on a user to report things to us and, hopefully, the better their experience will be on the platform.
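To illustrate the behaviour-based approach Mr. Costello describes, the following is a minimal sketch of how repeated mentions of one person, or the flooding of a hashtag, might be flagged. It is illustrative only and not Twitter's actual system; the data model, field names and thresholds are all assumptions.

```python
# Illustrative sketch of behaviour-based abuse detection: flag accounts by
# how they behave (repeatedly targeting one user, flooding a hashtag), not
# by who they appear to be. Thresholds and fields are hypothetical.
from collections import Counter
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    author: str
    mentioned_user: Optional[str] = None
    hashtag: Optional[str] = None

def flag_abusive_accounts(posts, mention_limit=20, hashtag_limit=50):
    """Return authors whose recent activity matches simple abuse patterns."""
    per_target = Counter((p.author, p.mentioned_user)
                         for p in posts if p.mentioned_user)
    per_hashtag = Counter((p.author, p.hashtag)
                          for p in posts if p.hashtag)
    flagged = set()
    # Pattern 1: spamming a particular person with repeated mentions.
    flagged.update(a for (a, _), n in per_target.items() if n >= mention_limit)
    # Pattern 2: flooding a hashtag to try to force a topic to trend.
    flagged.update(a for (a, _), n in per_hashtag.items() if n >= hashtag_limit)
    return flagged
```

In a production system these counts would be one signal among many feeding a learned model, but the principle is the same: the trigger is the pattern of behaviour, not the account's stated identity.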

I thank Mr. Costello very much. Dr. Bertram has a minute and a half.

Dr. Theo Bertram

On the length of time, we removed 89 million pieces of content in the last six months of last year.

Of that content we removed, which makes up approximately 1% of all the content uploaded, 92.4% was removed before a user had reported it, 83.3% before there was a single view and 93.5% within 24 hours of the video being created. I think length of time is important. It is right that the committee holds our feet to the fire on that. I also believe, however, for our platform at least, that the number of views is probably even more significant. A video that stays online for a few hours but is only seen by a small number of people is probably less impactful than a video that is online for a very short period but is seen by more. That is why our moderation system focuses on where the views are going.

The welfare of moderators is a really important issue for us. Ireland is one of the destinations for tech companies because of the great pool of talent there. There is a highly competitive industry to attract that talent. We want to be the best at it. We provide a mixture of mental and physical wellness supports for our trust and safety workers. We provide professional support, wellness talks around stress, a programme now around building resilience and a whole range of other things. I am happy to explain more.

I thank Dr. Bertram. I have to cut him off, but there will be other opportunities if he needs more time. This is just the first round in which to tease out the points he is making. I thank Deputy Mythen. I will move on to Senator Malcolm Byrne, who has five minutes.

I thank the witnesses. I will preface my remarks by saying that social media companies can be a force for good. We were able to see much of that during the course of the pandemic. Today, however, we are primarily focusing on the question of online safety and how we combat online harm.

I agree with Mr. Costello that we need to ensure democratic values are built into the Internet and into online expression and that we should not, in most instances, seek to curb freedom of expression. However, with freedom of expression comes responsibility. In the first instance, our role as legislators is to protect our citizens and those resident here in the online public space and, indeed, cyberspace. I hope the witnesses would want to do that.

I will follow up on Deputy Mythen's question in order that we can get an idea of the scale of the challenge. I have a number of questions and I will put them all together. In the context of 2020, perhaps each of the witnesses might identify the number of individual complaints their companies received in Ireland, that is, complaints from individuals about content, and then the number of profiles they removed as a consequence. I do not want percentages; I would prefer numbers.

I was conscious of Mr. Ó Broin's point, as covered in the media this morning, around delays to the legislation and the importance of the online safety commissioner. From a Fianna Fáil point of view, we are talking about asking the Minister to establish the office of the online safety commissioner immediately on a non-statutory basis. In other words, that office would acquire powers as time passes. Specifically, if we establish the office of the online safety commissioner on a non-statutory basis here, would each of the companies fully co-operate with it?

I will turn to the question of anonymous accounts and bots, which is obviously an area of concern, and the question of ensuring that individuals are able to identify themselves. This is in reference to digital identifiers. Down the line, we may be able to use blockchain and so on to do this. I refer, however, to using a system whereby somebody must prove his or her identity either directly to the companies or through a digital intermediary.

Mr. Costello might tackle my final question on the challenge around multiple accounts because it is more directed at Twitter. He spoke about using machine learning and so forth, which is very important. In a situation where, for instance, an individual is operating nine Twitter accounts or if some individual seems to be linked to multiple accounts, perhaps coming from outside the State from somewhere like Belgrade, for example, do alarm bells sound in Twitter? Most of us might operate our own Twitter account and possibly a business account or one or two others. I do not know how some people have time to operate multiple accounts. Surely, there is a system of alarms whereby Twitter would immediately look into tackling a situation like that.

The Senator has left only a minute and a half for our guests to answer that very comprehensive set of questions. I ask our guests to be as direct as possible in their answers to the questions they are asked. I will begin with Mr. Costello on this occasion.

Mr. Ronan Costello

That is no problem. I am happy to jump in there. I will first address the questions that were directed at us regarding pseudonymity and such. The key issue here is that pseudonymity is often conflated with abuse. It is not a precursor or a prerequisite to people being abusive online. From a Twitter point of view, more specifically, it is not a shield against our rules. For the purpose of enforcing our policies, Twitter's approach to real name accounts and pseudonymous accounts is exactly the same. A violation is a violation, no matter the account's identity.

As I mentioned previously, and this addresses the Senator's earlier point about people having multiple accounts that may be engaging in abusive behaviour, our focus is on how the account is behaving, not on the identity the account may have, although that is a data point we can take into consideration. The Senator mentioned, for example, that we can identify one account, which is reported to us by a user or by a trusted partner or non-profit organisation, and then detect that it is connected to several others. That would be a violation of our multi-account abuse policy.

The key point here is that pseudonymity is not a shield against our rules. It does not protect someone from the enforcement mechanisms we have because the enforcement mechanisms are so focused on how accounts behave rather than how they appear.

Okay, there is absolutely no time left in this slot but I will indulge our witnesses for one minute each. I will go to Dr. Bertram first.

Dr. Theo Bertram

I have those numbers of videos and accounts globally but not for Ireland. I will be happy to write to the committee with the numbers. On the online safety commissioner, we are very happy to work with the committee on that and to explore what co-operation means.

There is anonymity and there is accountability. The crucial thing is that we have accountability for all our users so that if someone does something that is in breach of our rules or if he or she breaks the law, we are able to identify that person. I do not think that means a person cannot be anonymous as long as he or she is accountable. We are in some ways a different platform to the other companies in that ours is a video-driven platform. A person with a video camera to his or her face is less likely to be anonymous, as that is harder to do, and, hopefully, less likely to drive that kind of conflictual and controversial content.

I thank Dr. Bertram. We will move to Mr. Ó Broin.

Can we get those global figures before the Chairman moves on to Mr. Ó Broin?

Dr. Theo Bertram

Yes. The total number of videos we removed in the second half of last year was 89 million. That is approximately 1% of all the videos uploaded to TikTok. I can break that down for the Senator further if that is helpful.

We might come back to that, Chairman.

That is fine.

Dr. Theo Bertram

I do not want to take up too much time. We can send the Senator detailed numbers for Ireland.

Mr. Ó Broin and Twitter might also send the actual numbers.

I am afraid Mr. Ó Broin only has one minute.

Mr. Dualta Ó Broin

That is no problem at all. I will try to be as brief as possible.

On the Senator's question around user reports, I do not have the figure. We publish, on a quarterly basis, the amount of content we remove in each of the categories of our community standards and the proportion that is discovered before a user reports it to us. Again, a key focus is prevalence and how much of this content is actually being viewed on our platforms.

Establishing the online safety commissioner immediately on an administrative basis would be a very good signal to send not just to everyone, but also, in particular, to other member states. It would signal that Ireland is taking the leadership role presented by the audiovisual media services, AVMS, directive and taking a very proactive approach to regulation in the space. We would be very keen to co-operate with such a body immediately.

There is a balance to be struck on the point of anonymity. The approach we take is that we have a real name policy. If a user or anyone has any doubt about whether somebody is who he or she claims to be, it can be reported and we will verify the identity of the individual.

I must move on. I ask members to be specific and concise in their contributions in order that our guests get an opportunity to answer their questions fully.

I welcome our guests. As has been stated, we are setting a global benchmark with this proposed legislation, so it is important that we get it right. I welcome the companies' opening statements and commitments. I also welcome the financial support that Facebook is giving to the National Anti-Bullying Research and Resource Centre, but I do not feel it is sufficient. Social media, as it evolves, influences the lives of young people in the country. In that context, the platforms need to take more responsibility and ensure they are a safe place for our teenagers. Last year, the Minister for Justice, Deputy McEntee, brought forward legislation, the Harassment, Harmful Communications and Related Offences Act, which is very welcome, but tighter regulation and greater education from the platforms themselves are urgently required. Online abuse has a serious impact on the mental health of the younger generation, especially teenagers. Are the three companies willing to contribute immediately to a national campaign across all schools, in conjunction with the Department of Education, that will educate pupils about the threats and abuse that exist online and that will do so when they are at a young age? There is a responsibility on them, as companies that are making money from these platforms, to fund such a campaign.

I was disappointed with the comment about the open Internet. I do not believe we can have an open Internet, and that is the problem. There is a responsibility on the companies regarding the information on there. Take newspapers or radio stations: they have a responsibility to stand over any information or news they put out as being 100% factually correct. In recent months, even journalists who have had articles published have been abused and threatened online, which shows that we, as a country, are going in a bad direction. There is a serious responsibility on the companies.

Senator Malcolm Byrne mentioned anonymous accounts. That is an issue we need to look at. As politicians, all of us are trolled by fake accounts every day of the week and abused and threatened whenever we put anything online. The companies have a responsibility to deal with that. One contributor mentioned a multi-account abuse policy. Will our guests explain this and how it is implemented?

Dr. Theo Bertram

I am very supportive of the idea of a national campaign across schools. If we can work together as industry, Government and the NGO sector and child safety organisations, which have most expertise in the area, it would be a very good idea. The Senator is right that education and prevention are key to tackling online bullying. I also agree that we have responsibility in that regard too.

Mr. Dualta Ó Broin

We take our responsibility on this incredibly seriously. DCU's FUSE programme is available to all schools free of charge. It has been rolled out to 130 schools to date. One thing I was struck by last week was when one of the student contributors stated that only 40% of students surveyed were aware that you could report material online. It is obviously on us to increase awareness of how to report content online. The FUSE programme is available to all schools. I think DCU reached out to the schools that were before the committee to offer the programme to them. All schools should get in touch with DCU. The idea is for everyone to work together, namely, us, taking our responsibility very seriously and showing what we can bring to the table, the NGOs, as Dr. Bertram mentioned, and everyone else. That is how we can make real progress.

The Senator asked about funding for an education campaign.

Mr. Dualta Ó Broin

Certainly, we already fund the anti-bullying centre in DCU and we would welcome the opportunity to do more.

Mr. Ronan Costello

In addition to our support for regulation that protects the open Internet, we and our CEO have said for several years that the company's number one priority is to promote healthy public conversation, because we understand that you cannot have freedom of expression and you cannot have an open public forum online if people do not feel safe expressing themselves. The two priorities go hand in hand. You cannot have one without the other.

On politicians' experience on the platform, we absolutely appreciate that this is a challenge, in Ireland and in every other country where our platform operates around the world. On the specific question of multi-account abuse, this goes to our focus on the behaviour of abusive accounts. If we detect that an individual or individuals are running several accounts with the sole purpose of distorting the conversation by manipulating hashtags or abusing others, that sets off behavioural signals for us that indicate it is something that should be looked at. That is why my statement mentioned that 68% of the content we ultimately remove from the platform is proactively surfaced by machine learning, because the machine is able to detect those kinds of behavioural signals. We absolutely appreciate that.

Because we are cognisant of the experience of people like the Senator and those in high-profile positions, who may face overwhelming notifications at times, we are experimenting with features such as safety mode, which was discussed a couple of weeks ago at a company event. By using behaviour-based analysis, we may be able to detect that an account is getting an awful lot of incoming messages. Safety mode would enable the account to automatically block incoming messages from accounts it is not familiar with, that is, accounts the owner does not follow and that do not follow the owner. In other words, it applies where there is no reciprocal mutual relationship.
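To make the rule Mr. Costello outlines concrete, here is a minimal sketch of a safety-mode check: during a surge of incoming messages, block senders who have no reciprocal relationship with the account owner. The data model, function name and surge threshold are assumptions for illustration, not Twitter's actual implementation.

```python
# Illustrative sketch of a "safety mode" rule: during a notification surge,
# auto-block senders with no reciprocal relationship to the account owner
# (the owner does not follow them and they do not follow the owner).
# All names and thresholds here are hypothetical.

def should_auto_block(sender: str,
                      owner_follows: set,
                      owner_followers: set,
                      incoming_last_hour: int,
                      surge_threshold: int = 100) -> bool:
    """Return True if the sender should be blocked under safety mode."""
    surge = incoming_last_hour >= surge_threshold      # behaviour-based trigger
    stranger = (sender not in owner_follows
                and sender not in owner_followers)     # no mutual relationship
    return surge and stranger

# Example: an unfamiliar sender during a surge of 250 incoming messages.
print(should_auto_block("unknown_account", {"friend1"}, {"friend2"}, 250))  # True
```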

I must ask Mr. Costello to conclude. We have to move on. Deputy Fitzpatrick was supposed to be next. In his absence, we will move on to Senator Hoey.

I thank the witnesses for attending. I have three questions on areas of interest to me. The first relates to content moderation, which is a huge area in the context of online safety. What led TikTok to decide to keep content moderation in house? Why does Facebook think it is acceptable to outsource content moderation? What are Facebook's thoughts regarding practices whereby employees of the outsourced services must sign non-disclosure agreements about their work? When we discuss things such as online safety, it is very hard to get a clear picture of what is happening if some of the people at the coalface are unable to talk about it.

Last week, the Irish Society for the Prevention of Cruelty to Children, ISPCC, was before the committee. It stated that online platforms are effectively facilitating bullying because nothing is done when complaints are made or because there is a really long period between when complaints about content are made and when action is taken. The ISPCC gave the example of a mother trying to get content about her child taken down. It took months, if not over a year, for that content to be removed. What do the witnesses say in respect of the claim that the platforms are effectively facilitating bullying because it takes so long to get action on complaints?

My final question is for TikTok. What is it doing at an algorithm level to prevent harm? What are its engineers or operations doing with the algorithm? I am an out member of the LGBT community. I follow LGBT TikTok accounts because that is what I am interested in, yet transphobic and anti-trans material is pushed at me all the time. I report that material. What is TikTok doing, because it seems to me that the algorithm is not working? I do not want transphobic content; I am a member of the LGBT community. What machine learning or training is being done to mitigate harm?

I ask our guests to be concise in their answers.

Mr. Ronan Costello

I will answer the final question because I think the first two were directed at the representatives from the other platforms. As I mentioned, our emphasis on proactivity in content moderation has been developed over recent years and will continue to be developed in the future, such that the burden on users to initiate the reporting process to us is increasingly reduced. If we can train an algorithm or use machine-learning techniques to better identify abusive behaviours on the platform, such as toxic interactions and so on, we will be better able to tackle how people are exposed to content and how low-quality content appears or is amplified on the platform. In doing so, we hope that we can encourage and promote a constructive and healthier public conversation on Twitter.

Dr. Theo Bertram

On content moderation, we use both in-house and outsourcing. We are very proud of our in-house work. For both the outsourcing and in-house elements, we have requirements regarding wellness and welfare.

I heard the ISPCC's testimony and found it really moving and compelling. It identified a particular problem. In the example of the 13-year-old girl the CEO highlighted, he was pointing out that the problem with bullying is that it can be difficult to spot. It is not instantly visible to one moderator. It is one thing when we look at an account in the round, in a holistic way, but what happens when someone is doing one thing on one platform, another thing on a second and a third thing on another platform again? From the point of view of the platforms, they do not see that broad picture, which is why the CEO of the ISPCC, as I understood his case, was saying that something needs to be done about that. We believe that the solution for us is something we are developing called the community partner channel, which is about trying to get organisations, such as, I hope, the ISPCC, to come in and give us that holistic view. Nevertheless, I recognise there is an issue we all need to address and it is a challenging one.

Finally, on the algorithm, I do not think I have enough time to do justice to the question in this short session. I would love to talk to the committee about it in more detail. While homophobic comments in any form are not allowed on our platform, we go beyond that. We do not just ban anti-LGBT, homophobic or hateful ideas but also do not allow, for example, any content that promotes conversion therapy. Similarly, the idea that no one is born LGBTQ+ is not allowed on our platform either. The way the algorithm works is a really interesting challenge. One thing we want to try to do and learn as a second-generation company is not to build filter bubbles. We are not just going to give users content that is always exactly the same as the things they have previously agreed with. We seek to diversify the content, but that should never promote to the user homophobic content-----

I apologise but I have to interrupt Dr. Bertram in order to allow Mr. Ó Broin to respond.

Mr. Dualta Ó Broin

I will focus on the content moderation question. We are aware of the hearing of the Joint Committee on Enterprise, Trade and Employment last week on this question and I think it was also discussed by that committee again this morning. I am happy to share a copy of the letter we sent in response to the Tánaiste when he wrote to us outlining the concerns that had been expressed to him. We have also written to the Health and Safety Authority and are happy to engage with that regulatory body, or, indeed, any regulatory body in Ireland, in regard to the issue. Based on the hearings last week, it may be the case that we will be invited to appear before the enterprise committee to discuss the issue with our partner companies, but that invitation has not yet arrived.

It is not all bad. We all use our guests' platforms to get our messages across. In the context of some of the great social events and movements in recent years, social media has been used really effectively. On issues such as the marriage equality referendum and climate action, social media platforms have played a really important role. On the other hand, there is what Senator Hoey outlined, namely, the toxic abuse that happens online.

My first questions are for Mr. Costello. Some of the horrible anonymous online abuse aimed at journalists in recent weeks has come to the fore. These are journalists who are just doing their job and they are subjected to harmful anonymous abuse. It is really important that journalists be allowed to use these platforms to get across accurate information and news. How can we counteract the abuse they get from anonymous accounts? Mr. Costello might also respond to Senator Malcolm Byrne's question on the scale of the problem and the number of videos that have been taken down.

Turning to Mr. Ó Broin, I presume I can ask questions specific to Instagram and WhatsApp since, as I understand it, they are part of the same company. Online bullying in WhatsApp groups is very prevalent and I imagine it is much more difficult to moderate because what are, in effect, chain messages are sent on from group to group or person to person. Screenshots are often taken and sent between groups. How is that counteracted? Is there a method of regulating and monitoring it? This is an important issue. Instagram has become a platform used for the spread of misinformation, more so than the others. I refer, in particular, to the vaccine roll-out and the effectiveness of vaccines. Instagram account holders with quite large followings are unchecked, unmonitored and unregulated in how they spread misinformation. These are not just expressions of opinion but rather false information that is contrary to public health. How can that be counteracted?

My final questions are for Dr. Bertram. I could be wrong, and he can contradict me if that is the case, but I assume that TikTok is one of the most popular platforms with the younger generations, particularly those who are still in school. Dr. Bertram may have heard that when secondary school students appeared before the committee, they asked that safe online behaviour, to protect against harmful material, be taught to them by online activists. Would it be possible for the likes of TikTok to reach out to these online activists and resource them in order that they can visit schools and teach students about safe online behaviour? As Dr. Bertram may have heard, the students who appeared before the committee do not believe that parents or teachers have that connection.

Dr. Theo Bertram

I fully agree that the best people to convince young people of how to stay safe online are their peers and heroes. At TikTok, we already work with a large number of creators to help create safety videos. If I could tie this suggestion in with the one made earlier, perhaps there is something we can do with TikTok creators in Ireland to help build a set of videos on how to stay safe online for all schools in Ireland.

Mr. Dualta Ó Broin

I recognise the work that the Webwise ambassadors are doing. They are excellent young people who engage with other young people in this space. It is worth looking into their work.

On misinformation, even before the start of the pandemic, we worked with the HSE to identify what we could do to support getting out authoritative messages that supported its campaigns. On both Instagram and Facebook, we have also financially supported the Department of the Taoiseach's campaigns. We address the issue by removing misinformation that can lead to real-world harm, such as claims that drinking bleach can cure Covid. Below that threshold, where an instance of misinformation does not pose that risk of real-world harm, it is referred to our network of fact checkers, which comprises 130 organisations throughout the world-----

Quite often, posts are reported but are not taken down and no action is taken.

Mr. Dualta Ó Broin

If it is found to be false by our fact checker, we will put a warning screen in front of it which will say that what the viewer is about to see has been found to be false by an independent fact checker. In that way, we are trying to educate and inform people about what the facts are. That is why we have the Covid-19 information hub, which contains all the authoritative information. It is not just that we have done that; this is an ongoing thing. We are working with health authorities around the world, so if there is a new trend or piece of evolving content which is of concern to them, we will react to that. It is incredibly important.

WhatsApp, as an end-to-end encrypted service, is different. It falls within that category of communication service I was talking about at the outset. As an end-to-end encrypted service, it is more difficult because the content cannot be seen by us. Only the sender and the receiver have access to it, but there are reporting tools in place there. We can keep taking down content, but it comes back to addressing the wider issue and working with the NGOs, the committee, schools and academics. The FUSE programme is one example of that. I am sure there are other programmes out there.

I ask Mr. Ó Broin to conclude because I still have to let Mr. Costello in.

Mr. Ronan Costello

The abuse journalists and politicians get on the platform is a challenge. It is a challenge in Ireland and in all countries where Twitter has a presence. It is a challenge because those individuals are in high-profile positions and are putting content out there which gets a reaction, and some of that reaction is absolutely unacceptable. We want to be as proactive as possible to reduce the burden on politicians and journalists to be the ones who feel they have to reach out to us before something is done about that.

Both they and the users who follow them benefit from their presence on Twitter. Twitter is uniquely political in its content. It is uniquely full of journalism and journalistic content. The more proactive we can be in that respect, through machine learning and all of the tactics we are trying to deploy and get better at, and the more of the burden we can take off those users, the better.

Deputy Munster has five minutes and I ask her to tell us to whom her questions are directed.

I will direct my first couple of questions to Mr. Ó Broin. They probably require only yes or no responses. Did I hear Mr. Ó Broin correctly when he said Facebook has no specific turnaround time for the removal of content?

Mr. Dualta Ó Broin

That is correct. We try to get to harmful content as quickly as possible-----

Does Facebook keep a record of individual complaints received in Ireland?

Mr. Dualta Ó Broin

Is Deputy Munster referring to the number of complaints received in Ireland?

Yes. Does Facebook keep a record of the number of individual complaints?

Mr. Dualta Ó Broin

This question has come up, not just in Ireland, but across many other countries. We do not have-----

Facebook does not keep a record, is the answer.

Mr. Dualta Ó Broin

At this stage, but we are working to provide that.

In recent weeks, we have had in numerous groups which advocate for children and their rights. They were saying children who have been victims of cyberbullying, for example, struggle to have the content removed. It seems to me Facebook is in favour of a systemic complaints system, but all the groups we have had in are of the opinion that an individual complaints mechanism is vital. Can Mr. Ó Broin explain his reason for favouring the systemic as opposed to the individual mechanism?

I recently saw media reports which stated Facebook intends to introduce end-to-end encryption on its private messaging service. The Irish Society for the Prevention of Cruelty to Children, ISPCC, has said this will make Facebook's child abuse detection tools completely useless. Could Mr. Ó Broin comment on that too? In its current form, the Bill does not cover encrypted services.

One of Twitter's main concerns seems to be around sanctions on social media companies and the effect they would have on the viability of businesses. Does Mr. Costello not agree that significant sanctions are required in order to discourage social media platforms from breaching the legislation, if doing so may lead to financial gain?

Mr. Dualta Ó Broin

On the systemic approach to regulation, it comes down to the question of effectiveness. We believe regulation which looks at the systems and allows the regulator to come in and examine how we are assessing risk and designing our services will be more effective than individual complaints.

Oversight on an individual complaints basis is what the external oversight board is doing. It is a body completely independent of our platform. It comes down to the question of how one defines the category of complaints or appeals a regulator could look at. We operate a trusted-----

If Facebook does not have a specific turnaround time to remove harmful content and does not keep a record of individual complaints received, surely that would add weight to having an individual complaints mechanism to which people could go if dissatisfied with the way Facebook operates.

Mr. Dualta Ó Broin

I can see where the NGOs are coming from in terms of an individual complaints system, but given the scale at which the platforms operate, it is difficult to see how the regulator would operate such a system. If Deputy Munster will allow me, within the legislation there is the idea of a super complainant scheme, which would allow NGOs, as they do with us, to bring trends or themes they are seeing to the regulator. It is inevitable the regulator will use that as the basis for its investigations and audits of our services. That is where one is likely to see more improvement, in terms of looking at how the complaints system works across the board, what the policies are, how the content is removed and so on.

It comes down to the question of the effectiveness of the regulation.

If Deputy Munster is satisfied, we will move on to Mr. Costello to answer the last part of her questions.

I did not get an answer on what the Irish Society for the Prevention of Cruelty to Children, ISPCC, had said about end-to-end encryption, namely, that it would make Facebook's child abuse detection tools completely useless.

Mr. Dualta Ó Broin

I apologise for missing that. I am trying to keep everything as succinct as possible. We made the announcement in March 2019 of our intention to move our messaging services towards an encrypted basis and since that time, we have been engaging with law enforcement and NGOs around the world, which have expressed concerns about how effective our enforcement tools will be in that environment. We have not just launched this product or made the switch. We are engaging with people, because we want to hear their concerns and understand how we can improve the prevention of harm. That is the conversation we are having at present and one which we welcome with law enforcement here in Ireland.

I want to make sure Deputy Munster has her final question answered, so will move on to Mr. Costello.

Mr. Ronan Costello

The question was on the sanctions and the points we have made about them. I absolutely agree the Bill and the law, when passed, should have sanctions which are meaningful and effective; otherwise, that would be a problem. Our issues with the sanctions, as currently defined, can be split into two categories. The first is the international precedent. Unfortunately, we are seeing a trend internationally whereby sanctions in terms of the throttling and blocking of services and the criminal liability of senior management are incorporated into legislation in countries around the world, and they are often used as the first port of call when platforms are deemed not to be compliant with a piece of legislation.

When countries in the EU or further afield pass laws with such models, they are often exported to other countries, which then point back to the EU and say that because a country there did it, it should be fine for them to do it too. Unfortunately, however, the laws are enforced in a different way, with different political agendas and different motivations.

The other issue we have with, for example, financial sanctions is the potential that they would present an existential issue for start-up platforms or SMEs that are trying to establish themselves, or even platforms that are well known but may not have the same commercial success or footprint as the largest platforms. There is a concern that those financial sanctions should be proportionate and certain thresholds should be met before they are implemented.

I apologise that I could not be here earlier. I was speaking in the Dáil. I happen to think that the concept of social media is quite positive in that it has given us a level of communication which, in my early life, I certainly did not think would be possible in the early 21st century. I believe it does more good than harm in society. I may be unusual in that but it is a view I have always held. That is why it is exceptionally important for those who provide the ever-evolving array of social media platforms, reaching out and connecting to an ever-increasing number of people, both here in Ireland and throughout the world, to act responsibly.

If we place significant value on the communicative power that social media affords people, particularly as it allows for voices that did not previously have a role in shaping society, policy and, ultimately, the way humanity evolves, then we must reach the conclusion that social media has been exceptionally powerful in giving voices to people who did not have them 20 or 30 years ago. The Irish political landscape was essentially denuded of the voice of the individual and social media has done very important work in allowing people to have their say in how our country and our world develop. That is why it is so important that when people engage in social media, they can trust they are entering an environment that is as safe as possible, on a par with the normal engagement that takes place outside of social media, and that they and their rights be sufficiently protected. Ireland should lead the way here. We should be very proud that a number of the world's largest social media companies have decided to base themselves here. I am very supportive of the proposed legislation, which will, as best it can, create that safe environment, particularly for young people who find themselves in a space that is often challenging because of bullying and so on.

Our guests are representative of the social media industry, if we can call it that. Do they acknowledge that there is an issue with providing a safe environment online? How do they propose that we resolve it? Legislation is necessary - I have no doubt about that - but my fear has always been that when legislation such as that proposed is brought forward, the pendulum swings much too far in the other direction. Those very voices I mentioned, which can often call power or the performance of a government into question, may be silenced or suppressed. How can we strike the balance between creating a safe environment where normal, respectful discourse can take place and, at the same time, not shutting down those voices that need to be heard and have been heard throughout the world in places, for example, where there has been terrible conflict? Social media has been the only mechanism for people to reveal to the world the kinds of atrocities that have been committed in such places.

How can we strike that balance? Will the proposed legislation before us do so? I recall legislation put forward five years ago which was designed to criminalise people who cause offence on social media, but who can determine what offence is? Who will be the arbiter of what is or is not in good taste, or what causes or does not cause offence? These are big questions we have to ask ourselves in the context of how we should regulate social media in future. I would like to hear our guests' views on how to strike that balance. For me, it is the big question regarding what we are proposing to do here.

I think we would need an hour to allow for responses to all the questions the Deputy put forward. I am afraid that only 45 seconds of his allocated time remains. I will indulge our guests a little. I would love to allow more conversation but we will tease out the issues.

Dr. Theo Bertram

I agree with much of what the Deputy said. I will not go over it because I would not be able to put it as eloquently as he did. There are two key issues. The more objective we can be in defining what harms are, the better. As the Deputy said, we have to think of the marginal voices, not just the majority. There will be important cases where we may not remove certain content from the platform because we are protecting voices that are uncomfortable for some people to hear but where it is important that they are heard. It is difficult to say what level of exposure teenagers should have to difficult content that deals with, for example, abortion and issues such as that. The question is how we should protect them from that but also how we can ensure they are educated about those issues. These are all difficult questions, involving balancing different sets of rights and so on. It is about objectivity and always ensuring that marginal voices are protected.

Mr. Dualta Ó Broin

Ireland has a significant opportunity to play a leadership role by establishing a regulator that will, in effect, be a global first. It is a really important issue. To return to Senator Malcolm Byrne's comment on establishing the regulator on an administrative basis, that would be very positive.

Striking a balance is something we have been working on for 15 years. A recent example relates to the decision of the external oversight board regarding Donald Trump. The balancing of fundamental rights on both sides of that argument is very difficult to achieve. In introducing regulation, we should not seek to silence one group or stifle debate. Debate, and the exchange of views to which it gives rise, is very positive. We think the general scheme of the Bill allows the regulator to play a role whereby it would have oversight of the systems, ensure we are doing everything we can in regard to fairness and impartiality, and keep our users safe.

Mr. Ronan Costello

I agree with Dr. Bertram, Mr. Ó Broin and the Deputy. The reason I emphasised the point on international precedent is that I agree Ireland is uniquely placed at the moment. It is in a great position to set a benchmark for how the Internet is governed and regulated. It is a point of pride for the industry, and even for me, to be talking about the issue. It is for that reason that it is so important that Ireland seek to strike the balance that was talked about and pursue objectivity, both in how it views content moderation and in how it views the industry. Whatever legislation is produced at the end of this process should work for companies of all sizes. Three companies are represented at this meeting, but there are many thousands of others, with many different content moderation models, that are not represented here, such as Reddit, Wikipedia and so on, which pursue community moderation. We are experimenting with that ourselves.

I echo the comments of the other contributors and agree regarding the points on objectivity and balance.

I confirm that I am within the Leinster House complex. I apologise that I missed some of the meeting. I was speaking in the Dáil. The issue I am about to raise may already have been covered.

I had to go through a verification process to verify my Twitter account. When I was seeking to verify my official Facebook account, I had to provide my passport and other information. Why is that not a standard requirement for all users? Certainly, the vast majority of the abuse I have received online over the years has been from anonymous accounts. There is a mechanism in place to remove anonymous accounts. Why is that not being done?

Mr. Ronan Costello

There are two things I would say in response to the Deputy's question. First, verification on Twitter has been around for several years now. It is there to authenticate and confirm that a person is who they say they are. Hopefully, over the next couple of months, users will see us roll out developments in that area. For example, the public verification process will be reopened so that increasing numbers of individuals can apply for verification and can confirm that they are who they say they are, and also that a real individual is behind the account. That will go some way towards addressing the issue.

We are also nuancing what verification means by potentially rolling out different kinds of labels to enable a user to verify that they are acting as an organisation or a company, an individual, a Government representative, a State-backed media entity, etc. This endeavour to nuance verification labels and to confirm identities is something that we are definitely undertaking at the moment. Hopefully, users will see some progress on that which will mitigate some of the abuse that people perceive is coming from anonymous accounts.

Why has it taken so long? If I went into a town hall meeting with a cloak over my face and started to abuse everybody there, I would be taken and put outside the door immediately. Twitter has been around for more than ten years. Why has it taken until 2021 for this development to be announced?

Mr. Ronan Costello

To build on the Deputy's analogy, we do the same thing. A person's identity on Twitter has no bearing on the accountability that they have when we enforce our rules. If someone is engaging in abuse, they will be bounced from the platform and the content they have posted will be removed because it is a violation of our rules that they behaved in that way in the first place, regardless of the identity under which they did it. To use Dr. Bertram's phrase, there is anonymity and there is accountability, and one is not a shield to the other.

The processes do not work, however. Last week, I reported a number of cases of abuse just to see what would happen and, as I expected, Twitter did not see any problem with the abuse that was reported. Twitter's systems simply do not work. It should stop trying to pull the wool over people's eyes. What is going on is a disgrace and Twitter has been presiding over it for years. It is not good enough.

Mr. Ronan Costello

I am happy to follow up with the Deputy on the specific instances that he mentioned and the cases he reported. We do not tolerate abuse, harassment or the range of behaviours that are covered by our rules. If, last week, the Deputy reported specific instances of what he believed to be definite violations, in response to which he did not receive a satisfactory answer and was told that there were no such violations, we will follow up with him on that after the meeting.

It is the same no matter who one speaks to, whether it is politicians or other users. I am a verified user. God help the unfortunate teenager who is receiving abuse and is not being listened to either. It is a scandalous situation. Twitter must act now - not soon or in the future, but now. It should have acted ten years ago. What is going on is scandalous and it has been going on for far too long.

I now call Senator Warfield.

I echo Deputy Griffin's comments.

I have a question for the representatives of Facebook. Today, Twitter made the same call as Facebook and requested that the proposed legislation be delayed until the enactment of the EU Digital Services Act. The Digital Services Act does not make any major policy changes in the harmful content area. Therefore, I do not see how a delay would have any impact whatsoever. Any amendments that may be needed could be made at any time in the future. The call for a delay contradicts Facebook's white paper on charting the way forward on online content regulation, which makes strong calls for states to regulate platforms. Indeed, Mr. Ó Broin stated that Facebook has been calling for this for many years, that it should not be left to social media companies alone and that there should be regulation. That is completely at odds with how Facebook has lobbied the European Commission over many years on privacy and the code of conduct on hate speech. What Facebook is telling us here publicly is not what it sought when it lobbied hard behind closed doors against regulation. That is quite contradictory. I feel the call to delay this legislation is essentially what Facebook thinks it can get away with publicly. I would appreciate a comment on why we should delay this legislation.

Mr. Dualta Ó Broin

I am happy to respond. I was also confused by the headline on RTÉ this morning. Coming back to our position on this legislation, if the fully resourced and staffed media commission could be established today, we would be more than happy to engage with it both on the AVMS directive aspects and on national legislative measures right across the board. There are aspects of it, such as the requirements on interpersonal communication services, which we do not think are feasible to implement. However, given where we are now and that the Digital Services Act is coming down the track - on the timeline I have set out, it is likely it could be in place before the media commission is fully staffed and this legislation is fully implemented - the priority should be to implement the EU online safety law that was agreed in 2018 and to focus on that. We have identified 17 areas in which the Digital Services Act and the digital services co-ordinator's role are likely to overlap with the role set out in the general scheme of the Bill. It might be more prudent and a more efficient use of resources-----

The Digital Services Act does not make any changes whatsoever in respect of harmful online content. In fact, online safety legislation in Britain has been delayed by the British Government. It has been suggested that Mark Zuckerberg threatened to pull investment from the UK. Records from a 2018 meeting show that he said he might look elsewhere in Europe to invest. We have seen the legislation delayed there. I do not appreciate the call to delay this most important aspect of the proposed legislation and to focus on the AVMS directive instead.

Mr. Dualta Ó Broin

If it were possible to introduce the national legislative measures today, we would engage with those rules as a company established in Ireland. The European Commission spokesperson has confirmed that not only will the Digital Services Act address illegal online content, but it will also address the systems that platforms have in place for harmful online content. Therefore, given that the Digital Services Act is coming down the track, it may be more prudent to focus on that. If the Government and the members decide to progress with the proposed legislation before us, we simply ask that a separate part of the legislation be created so that the national measures and the Europe-wide measures can be separated out and there is clarity for citizens in Ireland and those in the EU in respect of their rights and the requirements placed on service providers.

Do I have time for one more question?

Of course.

I will address the following question to whichever of the witnesses wishes to respond. The future commission may want to standardise the transparency reporting of social media companies and could ask for the reports to be in the form of national reports or state reports. Currently, the companies publish these statistics in the manner in which they wish.

Would our guests welcome a standardised transparency reporting system?

Who would like to answer that? We might go to Mr. Costello first on that question.

Mr. Ronan Costello

On transparency reporting, I would note that Twitter has been publishing transparency reports on a roughly biannual basis for coming up to nine years now. In the past year and a half, we launched a dedicated transparency reporting website. Ideally, every six months, if not more often in the future, updated transparency reports should be on the website, covering not only the legal requests that the company has received and how it has handled them, but also enforcement of our rules, including information about how many accounts we have actioned for violating rules around abuse, harassment, etc.

With respect to the commission requiring certain transparency measures, we would obviously comply with that. If that is the law in the EU, we will comply with that, no problem. I would say, however, and this goes to regulators making space for different approaches to content moderation, that if reporting is standardised and the figures and data points we are required to disclose are standardised, that necessarily puts different companies up against one another in a monolithic way. For example, a large company may have a huge volume of enforcement actions that it has taken and, therefore, smaller companies that approach content moderation in different ways may appear in a negative light by comparison. There has to be a recognition that different companies have different approaches to policies, content moderation and transparency.

I welcome all our guests. This Bill concerns the regulation of all media. The main thing I noted in our guests' opening statements was that they touched a lot on advertising and their business models. Advertising and its regulation are just as significant when it comes to online harm. The Press Ombudsman, Mr. Peter Feeney, stated that social media is sucking away the resources of newspapers and traditional media as sites such as Facebook draw readers' attention to a newspaper story that is read online. The newspaper receives no revenue from the reader but Facebook receives the online revenue. I agree with the ombudsman's view. Traditional newspapers that do all the news reporting and employ journalists saw their advertising revenues fall through the floor last year because of the pandemic. Advertising revenues for the national titles in this country have fallen by over €300 million, from a high of approximately €375 million in 2007 to approximately €60 million last year. Google and Facebook secured advertising revenue of €425 million in this space in 2019. How much did the platforms each of our guests represent receive in advertising revenue last year? Do they agree with a system whereby the social media companies pay the print and television companies for monetising the news content that is posted to the social media platforms? If social media suck the life out of traditional media, as the ombudsman stated in his report published last year during the pandemic, the sanity, calmness and objectivity of traditional media will be gone and a wasteland will be left behind.

On the issue of advertising and harmful online content, I agree with Professor Conor O'Mahony, the special rapporteur on child protection, who said this Bill does not go far enough in tackling the meaning of "online harm" in particular areas such as online gambling. A serious amount of money is spent online by gambling companies on different platforms. Ms Tanya Ward of the Children's Rights Alliance has spoken about the harm of online advertising by cosmetics companies. Do any of the companies our guests represent target marketing at children under 16? Would they agree with a code that bans advertising targeted at under-16s? Can we start with the representative of Facebook on those four questions? I ask for a concise answer to each.

The Senator has asked direct questions that require specific answers, including figures and all the rest. We will begin with Mr. Ó Broin.

Mr. Dualta Ó Broin

I thank the Senator. I can get the advertising revenue number for him and send it on. There is a fundamental misunderstanding as to how content appears on our platforms. We do not pull or put content onto our platforms. In many cases, news organisations make a decision to put their content onto our platform. There is a widely recognised-----

Mr. Ó Broin is diverting there. This issue has been tackled in Australia and our guests know that governments are looking at this. If the traditional media are dead, social media companies are not going to write the news. It was a fair question. Does Mr. Ó Broin agree with a system whereby social media companies would be paying for the monetising of news content online?

Mr. Dualta Ó Broin

Will the Senator allow me to expand a little on the issue?

This is incredibly important and we agree that it is important. We are exploring ways in which we can work with news organisations around the world. We have done it in the UK and it is rolling out in Germany now. The US was the first place in which we rolled out a news tab, whereby we have made deals with publishers for specific content that appears in that news tab. It is not the case that we are averse to such a system or that we are walking away from it and throwing our hat in the air. However, a system must be put in place that recognises where the value is added and how the online ecosystem works. That is why I was trying to make the point at the beginning about how the content appears on the platform. We must figure out how to monetise the traffic that is diverted off our platforms onto the news media sites. That is where the crux of this will be, and the news media organisations that figure out how to harness the power of that traffic will come out with a sustainable business model.

We will have to move on to allow our other witnesses an opportunity to answer the question. I do not feel the question was answered, but anyway, we will move on to Mr. Costello.

Mr. Ronan Costello

As I mentioned before, journalists get a lot of value from Twitter and we want to make sure that they see and feel that value and that we are investing in that. I will flag a couple of issues. I do not think we are involved in the conversation the Senator was having with Mr. Ó Broin but over the past couple of weeks, we have announced that we are going to offer journalists and news organisations the means by which they can generate revenue directly from their Twitter accounts. People can pay them directly for the work they are producing. Over the next couple of weeks and months, we will be introducing ways for journalists, news organisations and organisations of all kinds to introduce tiered subscriptions such that a user can have super followers who pay for exclusive access to content that is distributed through Twitter. In that sense, we are trying to find ways to encourage and reward the people who get the most value from, and contribute the most to, Twitter.

Dr. Theo Bertram

Picking up on some of the Senator's questions, we do not allow ads for gambling. On advertising revenue, we are a much younger company than the others represented today so we are at a much earlier stage. I do not think we have published our accounts in Ireland yet but when we do, I will be happy to share them. I cannot give the committee those numbers yet.

We are a platform with 15-second videos. Some of the things the Senator described are more common on text-based platforms where that content is shared. We absolutely want to build a sustainable long-term relationship with the news industry. We are just at the very start of engaging with news providers to work out how they can put their content on our platform and how we do that in a long-term, sustainable way. We must have a well-financed news industry and I hope we can play a part in that.

We will have to leave it at that, Senator Cassells.

Can we get an answer from Facebook and Twitter about the targeting of marketing and advertising to children under 16?

Mr. Dualta Ó Broin

We have strict rules about targeting that are different from the normal rules that apply to people who are over 18. The audiovisual media services, AVMS, directive brings in rules for that area in particular, which is another reason it would be great to have a regulator established in Ireland to enforce those rules.

Mr. Ronan Costello

From the point of download in the application stores, Twitter makes clear that it is suggested for users who are 17 or older.

The app is not targeted at or pitched to a younger audience such as children. We suggest an age of 17 or older.

Dr. Theo Bertram

All ads are set to "off" by default. We do not allow ads aimed directly at minors or ads featuring content that is likely to appeal primarily to minors.

I am located in the convention centre. I thank our witnesses for joining us. As my time is limited, I want to hear from each of them and get an understanding of their positions on what Deputy Griffin touched on, that is, online anonymity and the anonymous accounts out there. These trolls bully with abandon, spread racist and misogynist abuse and attack people based on looks, weight, age, gender and disability. It is not only young people who are targeted; God forbid you are a public representative, because you will be specifically targeted. I have spoken to many constituents and it is clear to them that reporting to the social media platforms does not work and that the cost of legal remedies is out of reach of normal people, who need the law to be updated in this regard. We need to make our social media platforms known more for the good they do in society than for the toxic, unsafe hellholes they can be for so many. Anonymous accounts provide a shadow under which these people can hide, and they facilitate and encourage online abuse. As it stands, tech companies - forgive me if I am wrong - do not know who millions of their users are. No matter how good these companies' intentions, the lack of basic information means that any attempt they make to police their platforms and bring offenders to justice is a painful process for them. Basically, their hands are tied in respect of prosecution before they even start. That is a huge concern. No one should be prevented from using another name, but we have to make it harder for online abusers to hide in the shadows and cause the mayhem they cause. Information on users is critically important in this respect. I would like to see tech companies make more of the issue without always needing governments to intervene. I have heard a lot of words such as "prudent", "proactive", "enforcement" and "exploring" used. We need to see real action in this regard. Deputy Griffin talked about three simple steps in respect of verification of identity. That is hugely important, and we need to start moving swiftly on it. Furthermore, every user should be made to verify his or her identity. That would be a hugely important step across all social media platforms.

Finally, we need to give users the option to block communication, comments and other interactions from unverified users as a category if they so wish. We need to see a timeline as to when this will be implemented. Ms Alex Cooney, CEO of CyberSafeKids, has said self-regulation simply does not go far enough. Again, I have heard from constituents about this. Families are seeking support, and it is often a lengthy and difficult battle to find a solution to the problem. What are the witnesses' positions on this and what specifically do they intend to do about it?

Mr. Dualta Ó Broin

Fake accounts are an ongoing issue which we address. We removed 1.3 billion fake accounts between October and December 2020. As a rule, we have a real-name policy on Facebook. As I explained earlier, we do not require ID when the account is being set up, but if anyone has any concern regarding the authenticity of the account, and if it is reported to us, we will require-----

On that point, how many complaints about online anonymous abuse has Facebook received from users on its platform?

Mr. Dualta Ó Broin

I do not have that figure to hand, and we would not have that-----

That is central to what is needed, though, in terms of a regulator ensuring Facebook is competent in removing these types of users.

Mr. Dualta Ó Broin

On a broader point, we expect the regulator to come in and require us to prove what steps and measures we are taking to combat the abuse the Deputy is talking about. We recognise that for elected representatives this is particularly challenging. We hold training sessions on an ongoing basis. There is actually one on Friday for all elected representatives and their staff in Leinster House.

As for the tools, Deputy Dillon mentioned blocking. I know he is talking about unverified accounts, but we have blocking tools in place for particular comments, particular words and so on, so we are not walking away from this. We expect ongoing and tough conversations with the regulator about this but we think that at the moment we have the right balance, in that when the account is set up ID is not required but if there is any suspicion of inauthenticity, from either a user report or our systems in the background, a flag is raised regarding inauthentic behaviour. We think that is the right balance.

Mr. Ronan Costello

I will restate that pseudonymity is not a shield against enforcement. A violation of our rules is a violation of our rules, regardless of the stated identity of the account. This is demonstrated by the fact that successful criminal prosecutions have arisen from instances in which Twitter has complied with law enforcement where law enforcement has deemed that a criminal threshold of abuse has been crossed and that user information was required. Pseudonymity is therefore neither a barrier to Twitter's enforcement of its own rules nor a barrier to accountability for criminal behaviour. I would say-----

May I ask Mr. Costello a question in that regard? The Minister, Deputy McEntee, recently brought in Coco's law, formally the Harassment, Harmful Communications and Related Offences Act 2020. How can Twitter help to investigate alleged breaches of this Act by anonymous accounts?

Mr. Ronan Costello

We have a dedicated website - I am happy to send the link to the Deputy - for law enforcement to submit such requests where it is seeking either content removal or basic user information to further an investigation. That website has been up for several years. Because of Dublin's position as Twitter's headquarters for the Europe, Middle East and Africa region, it is here that we host a Europe-wide training session, typically annually, which includes An Garda Síochána. We work with law enforcement to train its members on how to submit requests that are properly specified so that we can expedite a case and work with them in as efficient a manner as possible. There is a dedicated website for such law enforcement requests, which may arise from the legislation the Deputy mentioned.

I will ask Mr. Costello the same question as I asked Mr. Ó Broin. How many complaints has Twitter received in the Irish jurisdiction about online abuse associated with criminality in respect of which it has taken enforcement action?

Mr. Ronan Costello

I do not have that figure to hand. I mentioned that we have the website of the Twitter Transparency Center, which is updated roughly six-monthly. That includes global figures-----

These are basic questions about self-regulation. If we want to build confidence in the system about how social media companies are operating, these are basic, level one questions we are asking.

Mr. Ronan Costello

The Twitter Transparency Center has developed significantly over the years to include a lot more information than it did when transparency reporting began in 2012 and I expect it to develop further in the future.

We will have to leave it at that, I am afraid. I have indulged the Deputy for a little longer than his five-minute slot. I thank him. I appreciate his good line of questioning.

It comes to me now. I thank our guests for their presentations. They have been most helpful to our work and our deliberations. To me, social media platforms are currently like the Wild West - anything goes. I do not accept some of the statements today on the swiftness with which harmful online content is dealt with. This comes not only from my own experience but also from the experience of many of the witnesses we have had before us when it comes to protecting children.

I would also make the observation that I see the witnesses' platforms not just as platforms but as moving more into the space of publishers. When we think about our traditional media outlets, such as newspapers, television and so forth, there are really strict criteria regarding what they can put out in terms of fact-based information, and there is also the question of objectivity. I do not see that currently on social media platforms. That is a serious part of the work with which we must deal.

The witnesses who have appeared before us repeatedly referred to the lethargic attitude and reluctance of social media platforms to remove harmful comments and content that was extremely upsetting and harmful to young people and children. I have a specific question in respect of a matter to which Mr. Ó Broin has already alluded, namely, the fact that there is no timeline for the removal of content. I will put the same question to Twitter and TikTok. Do they have specific timelines in terms of how quickly they react and remove content?

Will the witnesses outline in a sentence or two their interaction with An Garda Síochána? When An Garda Síochána goes to the social media platforms with content it believes is harmful and constitutes harassment of some of their users, how quickly do they respond? What hoops does An Garda Síochána have to jump through to actually get the response it deserves?

I will go first to Mr. Ó Broin. I do not expect he has anything more to say in terms of timelines for the removal of content.

Mr. Dualta Ó Broin

As quickly as possible. Our community standards enforcement report details content which is removed before it is even reported to us. We can go through all those figures.

I will stop Mr. Ó Broin there. He made a very good point about community standards. Can he explain to the committee exactly what Facebook's community standards are? I was part of a very interesting debate with the Irish Independent last weekend in which two very reputable and highly respected journalists, Mr. Kevin Doyle and Mr. Philip Ryan, talked about exactly that point of community standards. They have received continuous verbal assaults online, no more than politicians. They spoke about that very point of going to community standards or making a complaint to Mr. Ó Broin's platform and getting the response that it did not breach the community standards. Could Mr. Ó Broin explain to the committee, with some specific examples, what constitutes a breach of community standards for Facebook?

Mr. Dualta Ó Broin

Certainly. Community standards are the rules that govern what is and is not allowed on our platform. They have been developed over the last 15 years. To give an example of-----

Can Mr. Ó Broin provide the committee with specific examples of what breaches of those community standards are?

Mr. Dualta Ó Broin

Certainly. The Harassment, Harmful Communications and Related Offences Act 2020 was referred to. The non-consensual sharing of naked images has not been allowed on our platforms for a decade. That is an example of where the community standards have run ahead of legislation.

If somebody makes a complaint to Facebook that he or she has been the victim of relentless verbal assaults, does Mr. Ó Broin consider that a breach of its community standards?

Mr. Dualta Ó Broin

It depends on the context. Obviously, if an individual reports it and states that he or she deems that type of behaviour or content to be bullying, that is important context which we take into account when making a decision.

Does Mr Ó Broin have any concerns in terms of that feeding into the erosion of democracy when it comes to journalistic experiences of online platforms?

Mr. Dualta Ó Broin

It is very important that we have a robust news media environment in Ireland, and journalists have an incredibly important role to play in reporting on and outlining where our faults are and where we can do better. It is our job and our responsibility to ensure that our systems and services, and the tools and processes we have in place, are exactly where they need to be to provide as safe an environment as possible. We hear-----

I am going to stop Mr. Ó Broin there. I must say, I have major reservations over what exactly Facebook's community standards are. I do not believe he fully answered the question. I must move on, however. Can Mr. Costello answer some of the queries I have raised?

Mr. Ronan Costello

I would actually go back to something that was said towards the start of this session about how much engagement content must attract before it is reported to us. How is content amplified on the platform? How is it surfaced on the platform? These are things we are trying to address specifically with machine learning and more proactive measures, such that we are prioritising and up-voting or up-ranking, if you want to call it that, the constructive and civil discourse on Twitter and de-amplifying or down-ranking the harmful, toxic and unhealthy engagement.

Can Mr. Costello answer the specific question on how long it takes for Twitter to remove harmful content about which it has received a complaint from a concerned individual?

Mr. Ronan Costello

It is the same as what Mr. Ó Broin said. It depends on the policy under which the content is reported. Again, in that sense, our increasing leverage of machine learning has enabled us to prioritise reports more efficiently based on their severity.

So, we do not have a timeline.

Mr. Ronan Costello

Not specifically, no.

I will put the same question to Dr. Bertram. How long does it take for TikTok to remove harmful content where, for instance, somebody feels they are being subjected to continual verbal assault or where somebody is being victimised on the platform?

Dr. Theo Bertram

Of the videos we remove, 93.5% are removed within 24 hours.

I thank Dr. Bertram for answering the question. I hope my colleagues will indulge me for the last couple of seconds with regard to the witnesses' engagement with An Garda Síochána, which I believe is extremely important. We know An Garda Síochána obviously receives many complaints, including from journalists and concerned parents. I will ask Facebook the same question in terms of its swift reaction and response to An Garda Síochána. What hoops must the Garda jump through? Is it a very direct process for An Garda Síochána?

Mr. Dualta Ó Broin

We have a law enforcement outreach team based in Dublin that engages with An Garda Síochána and a portal by means of which it can request information from us. All that is detailed in the law enforcement transparency report, which is a separate part of our report.

How quickly does it get that information in response?

Mr. Dualta Ó Broin

I would not have details on individual cases, but it would not be in our interest to hold information back if there is an appropriate legal basis to provide it.

I ask the same question to Mr. Costello.

Mr. Ronan Costello

Similarly, as I mentioned, we have a specific law enforcement portal through which reports can be submitted and then the public policy team can act as a liaison if reports need to be escalated further or if more context is required. That portal is there-----

It is a very direct process in terms of Twitter's engagement with An Garda Síochána.

Mr. Ronan Costello

As I said, we have hosted the training events for that on a Europe-wide basis and on an Irish-specific basis in Dublin.

Again, is Mr. Costello telling me Twitter has a very direct relationship with An Garda Síochána if it has concerns or issues around complaints about the platform?

Mr. Ronan Costello

It would know very well how to submit the reports and that there is a dedicated law enforcement team on the other side of those reports.

I thank Mr. Costello. I put the same question to Dr. Bertram.

Dr. Theo Bertram

Our law enforcement response team is led globally from Dublin and is very well staffed. We have a very good relationship locally. Speed of response depends very much on the urgency indicated by law enforcement. If the issue is an imminent loss of life or a threat to life, it is treated, obviously, as an absolute priority and the response is extremely quick.

Colleagues and guests, we have come to the end of our session. We have had a very robust discussion that will be most helpful in terms of the work we as a committee must carry out in our pre-legislative scrutiny. I am sure it is only the beginning of many engagements we will have. I thank the witnesses for their participation and also my colleagues for being so engaged in this debate. We will now conclude our business for today.

The committee stands adjourned until 12.30 p.m. tomorrow, when we will be meeting virtually here in committee room 3 to continue the scrutiny of the general scheme of the online safety and media regulation Bill 2020 with representatives from RTÉ, TG4, Sky Ireland and Virgin Media Ireland. Go raibh míle maith agaibh, gach duine.

The joint committee adjourned at 2.29 p.m. until 12.30 p.m. on Thursday, 20 May 2021.