
Joint Committee on Justice and Equality debate -
Wednesday, 9 Oct 2019

Online Harassment and Harmful Communications: Discussion (Resumed)

Before I make introductions, I ask members and visitors to please switch off their mobile phones. They interfere with the recording equipment in the committee room.

The purpose of today's meeting is to continue a series of engagements on the issue of online harassment and harmful communications. We are joined by the head of public policy at Facebook Ireland, Mr. Dualta Ó Broin, and the lead counsel for content and regulatory matters, Ms Claire Rush. They are very welcome.

The representatives of Twitter Ireland are Ms Karen White, its director of public policy, Europe, and the manager of public policy, Europe, Mr. Ronan Costello. They are both very welcome. Representing Google Ireland is Mr. Ryan Meade, its head of government affairs and public policy, while representing the Internet Service Providers Association of Ireland is Ms Ana Niculescu, its chief executive. They are very welcome.

I will shortly invite the witnesses to make their opening statements. I propose to invite them to do so in the order in which I introduced them. There is no hierarchy; it is just the order on the list. Before I do so, I draw their attention to the issue of privilege.

Witnesses are protected by absolute privilege in respect of the evidence they give to the committee. However, if they are directed by the committee to cease giving evidence on a particular matter and continue to so do, they are entitled thereafter only to qualified privilege in respect of their evidence. They are directed that only evidence connected with the subject matter of the proceedings is to be given and asked to respect the parliamentary practice to the effect that, where possible, they should not criticise or make charges against any person, persons or entity by name or in such a way as to make him, her or it identifiable. Members of the committee are reminded that under the salient rulings of the Chair, they should not comment on, criticise or make charges against a person outside the Houses or an official, either by name or in such a way as to make him or her identifiable.

I invite Mr. Ó Broin to make his opening statement on behalf of Facebook Ireland.

Mr. Dualta Ó Broin

I thank the committee for inviting us to appear. I am head of public policy for Facebook Ireland and I am joined by Ms Claire Rush, our lead counsel on content and regulatory matters. We are based at Facebook's international headquarters in Dublin.

We recognise the real concerns of citizens and Oireachtas Members regarding the fast-moving and evolving nature of harmful content on the Internet, including on our platform. We take our role in keeping harmful content off our services very seriously. We recognise that if society were designing the Internet as it exists today from scratch, it would not ask companies alone to make the judgments that Facebook must make regarding harmful content. We, therefore, welcome the fact that governments and policy makers around the world are taking an active role in addressing harmful online content. It is with their help, including that of the committee, that the rules which govern the Internet can be updated in a way that allows people the freedom to express themselves and allows entrepreneurs to build things, while also protecting society from broader harms.

Our community standards are the rules which govern what is allowed on our platform. We draft and update these rules in consultation with a wide range of experts, including NGOs and academics from around the world, including Ireland. They cover a wide range of content issues, including hate speech, bullying, harassment, graphic violence and nudity. In many cases, our community standards go further than national laws, for example in the area of non-consensual intimate images, NCII. We have zero tolerance for the sharing of NCII on our platforms. Once we are made aware of these images, we remove them and use media-matching technology to prevent further sharing or re-uploading of the images. However, decisions about content can be complex and we do not always get it right. That is why we are establishing an external oversight board which will adjudicate at a global level on decisions we make on content. The decisions of the oversight board will be binding on the company.
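
To make the media-matching technology Mr. Ó Broin mentions concrete, the following is a minimal sketch of one common approach, perceptual hashing: known violating images are reduced to compact fingerprints, and new uploads are compared against that list so re-uploads are caught even after resizing or small edits. This is an illustrative toy only, assuming Pillow is installed; it is not Facebook's actual system, and the function names and threshold are hypothetical.

```python
# Toy perceptual "average hash" matcher - an illustration of hash-based
# media matching, not any company's proprietary technology.
from PIL import Image


def average_hash(path: str, size: int = 8) -> int:
    """Shrink the image to an 8x8 greyscale grid and set one bit per
    pixel depending on whether it is brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Number of bits on which two fingerprints differ."""
    return bin(a ^ b).count("1")


def is_reupload(upload_hash: int, banned_hashes: set[int],
                threshold: int = 5) -> bool:
    """Treat an upload as a match if its fingerprint is within a few
    bits of any previously banned image's fingerprint."""
    return any(hamming_distance(upload_hash, h) <= threshold
               for h in banned_hashes)
```

Production systems use far more robust algorithms (Microsoft's PhotoDNA and Facebook's open-sourced PDQ are public examples), but the pipeline shape is the same: hash once on removal, then check every subsequent upload against the stored fingerprints.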

We submitted a detailed response to the consultation convened by the Minister for Communications, Climate Action and Environment, Deputy Bruton, earlier this year. In the interest of informing this discussion, I will summarise two of its points. We outlined how, in our view, an effective and efficient system of oversight by a regulator might operate. On the subject of notice and take-down, we recognised there can be a role for a regulator to review certain incidents where content is first reported to a service provider but not removed. We look forward to the publication of the related Bill and to engaging in future consultation opportunities.

Every piece of content on Facebook, including photos, text posts, comments, profiles and pages, can be reported to us for violating our content policies. In each case, one can report the content by clicking on the link in the top right-hand corner or, in the case of a comment, by pressing one's thumb on the comment for a few seconds if using a mobile phone.

If the content is found to be against our community standards, it is removed. We are also investing heavily in artificial intelligence, AI, so that we can more rapidly detect harmful and illegal content on our platforms, and it has been very successful in certain areas. However, there are types of content which are more challenging for AI, such as bullying and harassment, where context can be important. This is why we depend on reports from our community of users.

Our most recent community standards enforcement report, in which we publish quarterly breakdowns of the content that we have removed, demonstrates the efforts we are making to tackle a range of illegal and legal but harmful forms of content. For example, in the first quarter of this year we removed 5.4 million pieces of child sexual abuse material globally, 99.2% of which we removed before it was reported to us. The report also demonstrates the improvements we are making in developing AI tools to deal with challenging areas of harmful content such as hate speech, where our proactive detection rates have increased from 38% in the first quarter of 2018 to 65.4% a year later.

Our community standards recognise that bullying and harassment take place in many different places and can have many different forms. We do not tolerate this type of behaviour as it prevents people from feeling safe and respected on Facebook. In addition to removing content, we give users tools to help them protect themselves against bullying, such as blocking other users and controlling who sees their posts, and we operate a bullying prevention hub which gives young people, parents and teachers the tools and resources to address the complex issues which bullying presents.

In Ireland, we work with experts to inform our safety policies and deliver online safety programmes. In the past 12 months we have invested €1 million in a partnership with the national anti-bullying research and resource centre in DCU and SpunOut.ie. The main goal of this partnership is to help raise awareness of online safety and tackle the issue of online bullying among young people by offering online safety and anti-bullying training to every secondary school in Ireland. The programme is under way, with more than 100 teachers from schools across Ireland attending the first set of training sessions in September. In addition, the research carried out through the schools' training programme will inform SpunOut.ie's online safety resources for teenagers.

I hope that this gives committee members a brief overview of the steps we are taking to address harmful content on our platform. We have put these measures and more in place because we want users to feel safe and secure when they are using our services. Claire and I look forward to your questions and we would be happy to follow up in writing on any point which is of interest to the committee members.

I now invite Ms Karen White to make the opening statement on behalf of Twitter Ireland.

Ms Karen White

I thank the committee for its invitation to Twitter to participate in today’s session. My name is Karen White and I am director of public policy for Twitter in Europe. I am joined by my colleague, Mr. Ronan Costello, public policy manager for Twitter in Europe.

Twitter is an open, public service. Our singular mission is to serve the public conversation. We serve our global audience by focusing on the needs of the people who use our service, and we put them first in every step we take. Twitter is committed to improving the collective health, openness, and civility of public conversation on our platform. Our success is built and measured by how we help encourage more healthy debate, conversations and critical thinking. Conversely, abuse, malicious automation and manipulation detract from our purpose. We provide people on Twitter with a range of tools so that they can control and manage the type of content and accounts they see, ranging from being able to keep an account private to blocking, muting or reporting other individuals on the service. We also give people control over what they see in search results, through safe search mode, which is enabled by default. This excludes potentially sensitive content from the search results such as spam, adult content and the accounts an individual has muted or blocked. We strive to provide an environment where people can feel free to express themselves and we recognise that if people experience abuse on Twitter it can jeopardise their ability to do this.

An individual using the service is not permitted to promote violence against or directly attack or threaten other people in a range of protected categories. A person may not engage in abusive behaviour, which is an attempt to harass, intimidate or silence someone else's voice. We do not allow individuals to use hateful images or symbols in their profile image or header and individuals using the platform are not allowed to use their username, display name or profile bio to engage in abusive behaviour such as targeted harassment or expressing hate towards a person, group or other protected category. Under this policy we take action against behaviour that targets individuals or an entire protected category with hateful conduct.

With regard to self-harm and suicide, after Twitter receives a report of such behaviour, it will contact the reported user and let him or her know that someone who cares about him or her identified that he or she might be at risk. We will provide the reported user with available online and offline resources and encourage him or her to seek help. In response to certain keyword searches relating to these issues, using Twitter's search function, we direct individuals to online prevention resources. This service is available in Ireland, where we have partnered with the Samaritans, to whose website and support services we direct individuals.

Twitter does not allow individuals on the service to post or share intimate photos or videos of someone which were produced or distributed without their consent. Such material is sometimes referred to as "revenge porn". This content poses serious safety and security risks. We inform our users that sharing explicit sexual images or videos of someone online without their consent is a severe violation of their privacy and the Twitter rules.

People who do not feel safe on Twitter should not be burdened with reporting abuse to us. Earlier this year we made it a priority to take a proactive approach to abuse. Today, through the use of technology, 38% of the abusive content on which we take enforcement action is surfaced proactively for human review. We have made meaningful progress towards creating a healthier service. Since we announced our focus on improving the health of the conversation occurring on Twitter, we have seen a 16% year-on-year decrease in reports from people about other users allegedly abusing them on Twitter. We have seen a 45% increase in the number of account suspensions for those who attempt to create new accounts following the suspension of their original accounts. Over the first quarter of 2019, this amounted to more than 100,000 account suspensions for these reoffenders. We are suspending three times more abusive accounts within 24 hours of a report and are taking down two and a half times more private information, with a new, easier reporting process.

We have always provided an appeals mechanism for individuals who have had enforcement action taken against their accounts but earlier this year we launched a new feature that allows people to appeal within the Twitter app itself. This change has meant that we have been able to respond to people 60% more quickly than we had previously.

We have well-established relationships with law enforcement agencies, including An Garda Síochána. We have continuous global coverage to address reports from law enforcement around the world and have a dedicated online portal to swiftly handle requests from law enforcement. We provide regular training on our policies and procedures and have publicly available guidelines for law enforcement on our website.

The committee has asked for our recommendations on the proposed legislative changes. It is important that legislation is as consistent as possible with existing legal frameworks to avoid uncertainties and discrepancies. The effectiveness of any legislative solution relies on it being proportionate, technically feasible, and flexible, particularly given the diversity of companies within the digital ecosystem. In this context, the committee will need to consider and assess how different legal and illegal harms manifest themselves across different platforms and varied jurisdictions.

We all share the objective of protecting our systems of due process and our commitment to freedom of expression. Preserving these tenets in regulatory proposals can be achieved by collectively ensuring there is clarity on the obligations of all stakeholders, thereby avoiding an outcome whereby companies could overreach or erroneously remove content that should otherwise be kept online. A clearly defined scope in that regard will assist Twitter and, I imagine, others.

In order to ensure people can continue to express themselves freely and safely on Twitter, we must continue investing further in our proactive technology and safety tools, as well as developing policies which keep pace with the changing contours of the public conversation we see on our service. We stand ready to work with this committee as we continue to explore options to ensure that all people are protected from online harassment and harmful communications. I thank members for their time. We look forward to taking their questions.

Mr. Ryan Meade

I thank the Cathaoirleach for the opportunity to contribute to the committee's deliberations on the topic of online harassment, harmful communications, and related offences. I work with Google in Ireland as government affairs and public policy manager and am based in our EU headquarters here in Dublin. Google supports all efforts by legislators and governments to engage with stakeholders in considering appropriate protections, remedies, and forms of redress for individuals who are the victims of online harm. A range of governments, technology platforms and civil society groups are currently focused on how best to deal with illegal and problematic online content.

There is broad agreement on letting people create, communicate and find information online while preventing people from misusing content-sharing platforms like social networks and video-sharing sites. We recognise that there can be a troubling side to open platforms and that bad actors have exploited this openness. We take the safety of our users very seriously, and we are committed to ensuring that inappropriate content that appears on our platforms is dealt with as quickly as possible.

Now 21 years old, Google has grown from a small start-up to a global company with legal obligations in each of the countries in which it operates. We work hard to protect our platforms from abuse and have been working on this challenge for years, using both computer-science tools and human reviewers to identify and stop a range of online abuse, from get-rich-quick schemes to disinformation, to the utterly abhorrent, including child sexual abuse material online. We respond promptly to valid notices of specific illegal content, and we prohibit other types of content on our various services. A mix of people and technology helps us to identify inappropriate content and enforce our policies, and we continue to develop and invest in smart technology to detect problematic content hosted on our platforms.

As well as making significant investment in technology and human resources, we have engaged with policymakers in Ireland and around the world on the question of the appropriate oversight for online content-sharing platforms. Google is supportive of carefully crafted and appropriately tailored regulation that continues to address the challenges of problematic content online. We are keen to work constructively with legislators to build on the existing legal framework and to build trust and confidence in the systems and procedures that ensure online safety.

Having considered the committee's issues paper, our comments today are directed towards those aspects that concern the role of Internet service providers in preventing online harassment and certain harmful communications. We have submitted a longer written statement which outlines all of these points in greater detail. In this statement, we have also provided some comments on the approaches taken on this issue in other jurisdictions where Google operates and which were mentioned in the committee's issues paper.

In the statement, we suggest a number of central principles that should be considered for approaching oversight of content-sharing platforms and problematic content online. These include clarity, suitability, transparency, flexibility, overall quality and co-operation. I have set them out in more detail in the longer statement and can refer to them later if the committee wishes.

In framing any measures in this area, it is important for legislators to have regard to and build on the existing legal framework. We operate in an environment where extensive regulation of online content and actions already exists and is being enforced. Many laws, covering everything from consumer protection to defamation to privacy, already govern online content. From consumer rights legislation to the new EU Audiovisual Media Services Directive, online behaviours come under the scope of a diverse and evolving set of legislation, multi-stakeholder initiatives and regulators.

Specifically relating to regulation of Internet service providers as a means of combating online abuse, their distinctive role is reflected in the EU legislation that underpins the regulation of electronic commerce in Europe. The eCommerce directive provides strong incentives for service providers to establish and operate efficient notice and take-down procedures. A service provider that does not operate such procedures will be exposed to potential legal liability for unlawful content hosted on its platform. Many service providers, including Google, have developed extensive infrastructures which provide efficient tools for the reporting and removal of illegal content.
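
To make the incentive structure concrete, the sketch below shows the bare shape of a notice-and-takedown flow of the kind the eCommerce directive encourages: the provider logs the notice, assesses the reported item, and removes it expeditiously if found unlawful, thereby preserving its liability protection. This is a minimal sketch under stated assumptions; the in-memory store, function names and reason codes are hypothetical and do not reflect any company's real system.

```python
# Minimal notice-and-takedown sketch; all names are illustrative.
import datetime

CONTENT: dict[str, str] = {"post-123": "some user upload"}  # hosted items
NOTICES: list[dict] = []                                    # audit trail


def assess_notice(content_id: str, reason: str) -> bool:
    """Placeholder for assessment by trained reviewers against law and policy."""
    return reason in {"csam", "incitement"}


def handle_notice(content_id: str, reason: str) -> str:
    """Log the notice, assess the report, and take down unlawful content."""
    NOTICES.append({
        "content_id": content_id,
        "reason": reason,
        "received_at": datetime.datetime.now(datetime.timezone.utc),
    })
    if content_id in CONTENT and assess_notice(content_id, reason):
        del CONTENT[content_id]  # acting expeditiously on actual knowledge
        return "removed"
    return "retained"


print(handle_notice("post-123", "csam"))  # -> removed
```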

The eCommerce directive has the advantage of setting out different requirements for different types of Internet intermediaries, rather than being aimed at a particular business activity. It has led to the growth of a wide variety of services and business models, and is flexible enough to cover the multiplicity of activities and content types online. For example, an online news site can contain content authored by the news organisation, along with material licensed from third parties and also user-generated comments - the news site will be directly responsible for the editorial content it publishes but will have different legal responsibilities with respect to user comments that the website is hosting as an intermediary. This online intermediary liability regime has fostered the huge economic and cultural benefits of the Internet while ensuring platforms are taking appropriate and speedy actions in removing unlawful content online.

In addition to legal regulations, Google has over the years developed extensive community guidelines and content policies that offer clear rules on what it does not allow on its platforms. These often go above and beyond the law and we employ thousands of staff around the world, working 24 hours a day, to ensure violations are acted upon. Companies have also worked together to address these challenges, for example, with the Global Internet Forum to Counter Terrorism, a coalition sharing information on curbing online terrorism.

We continue to improve on our processes and our technology to enforce these rules. We continually review and update our policies based on new trends and invest in new machine learning, ML, technology to scale the efforts of our human moderators.

ML is helping us detect potentially violative content and surface it for human review. For example, YouTube removed 9 million videos during the second quarter of 2019, 7.9 million of which were first flagged through our automated flagging system. Of those videos, 81.5% had no views at the time they were taken down.
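
As an illustration of the flagging pipeline described here, the sketch below scores each upload with a classifier and queues high-scoring items for human moderators, who make the final decision. This is a hypothetical stand-in, not YouTube's real system; the scorer and threshold are assumptions.

```python
# Toy "automated flagging for human review" pipeline; illustrative only.
from typing import Callable

REVIEW_QUEUE: list[tuple[str, float]] = []  # (video_id, score) for moderators


def triage(video_id: str,
           score_fn: Callable[[str], float],
           flag_threshold: float = 0.8) -> str:
    """Flag likely-violative videos for human review; humans decide removal."""
    score = score_fn(video_id)
    if score >= flag_threshold:
        REVIEW_QUEUE.append((video_id, score))
        return "flagged_for_review"
    return "no_action"


# Example with a dummy scorer that treats one known ID as likely violative.
print(triage("vid-42", lambda v: 0.93 if v == "vid-42" else 0.1))
```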

We thank the committee for providing us with an opportunity to contribute to its deliberations on the topic of online harassment, harmful communications and related offences. Addressing problematic content is a shared responsibility across society, in which companies, Governments, civil society, and users all have a role to play, and it is appropriate that this committee is hearing from a variety of voices on this topic. We hope the committee will give our suggestions for approaching oversight of content-sharing platforms due consideration and look forward to further discussion.

I call Ms Ana Niculescu, who is here on behalf of the Internet Service Providers Association of Ireland, ISPAI.

Ms Ana Niculescu

I thank the committee for the invitation to address it. ISPAI is a not-for-profit organisation and the legal entity delivering the hotline.ie service, which is the national reporting mechanism where the public can anonymously, securely and confidentially report suspected illegal content online. hotline.ie's remit is distinct and limited to combating online material which is simultaneously harmful and illegal, namely child pornography, also known as child sexual abuse material, CSAM. The hotline.ie service has been effective in having CSAM swiftly removed from the Internet because it operates within a clearly defined legislative framework where the harm is clearly illegal. Additionally, it is a transnational crime, which allows for decisive international action when CSAM is hosted outside Ireland. There is no burden of proof in respect of consent, as engaging a child in sexual activities is prohibited under all circumstances and any presumed consent would be null and void. When talking about CSAM, it is important to note that we are talking about materials showing the extreme abuse of children, the majority of them infants to 12-year-olds. For example, 84% of the material classified as child pornography from January 2017 to June 2019 showed children estimated to be aged four to 12, with 7% aged three and younger.

hotline.ie has been working in an environment predominantly governed under the Child Trafficking and Pornography Act 1998, amended by the Criminal Law (Sexual Offences) Act 2017. It is clear national legislation that comprehensively defines what would constitute child pornography and corresponding offences.

The broader context, commonly referred to as industry self-regulation, consists of the following, as a non-exhaustive list. The EU directive on combating the sexual abuse and sexual exploitation of children and child pornography provides for an international framework. The e-commerce directive, also transposed into Irish law, sets out the requirements and exemptions for intermediaries' liability for illegal content. There is a national co-ordinated multi-stakeholder approach, whereby hotline.ie operations and procedures are agreed and overseen by the Department of Justice and Equality. Since inception, we have been working in conjunction with national law enforcement and in co-operation with Internet companies operating in Ireland for the removal of CSAM from the Internet. Our hotline.ie analysts are internationally trained by INHOPE and Interpol in content and age assessment. There are national reporting mechanisms such as hotline.ie in more than 40 countries worldwide, coming together as the International Network of Internet Hotlines, INHOPE, allowing for international co-operation and co-ordination. When child pornography is hosted in Ireland, we will issue a notice for takedown to the Internet service or content provider and will notify the paedophile investigation unit of the Garda National Protective Services Bureau. In most cases, the notified material is removed from the Internet within 24 hours. When CSAM is hosted outside Ireland, hotline.ie will still notify the paedophile investigation unit and thereafter the INHOPE member hotline in the relevant jurisdiction, so that CSAM may be swiftly removed at source.

INHOPE figures show that 64% of CSAM reported across different international jurisdictions in 2018 was removed within 72 hours. As previously outlined, in Ireland it is removed within 24 hours. However, slower international removal times may be due to CSAM being hosted in countries without a hotline presence, a lack of designated points of contact and of a streamlined notice and takedown process with Internet companies, or countries with inadequate or deficient legislation.

The self-regulatory model is effective in the circumstances in which hotline.ie operates and may be useful to look at as a reference point for other types of content but it would warrant further scoping.

For those who may not be aware, the hotline.ie service will be 50% co-funded by the European Union through the Connecting Europe Facility's safer Internet programme until June 2021. The other 50% is co-funded by ISPAI member Internet companies, comprising Internet service providers, hosting providers, search engines and mobile and telecommunications operators. Hotline.ie was established as one of the key recommendations of the Government's working group on the illegal and harmful use of the Internet in 1998. It provides a free, secure and anonymous online reporting mechanism; content assessment expertise to ensure reported material is subject to objective evaluation in accordance with rigorous standards by reference to Irish law; a triage function, through assessing public reports and notifying only content that is most likely illegal under Irish law, thereby reducing the burden that would otherwise be placed on law enforcement by, for example, assessing complaints that may not prove to reside in Ireland; and the notice and takedown service, which is recognised worldwide as a vital and efficient tool in the removal of child sexual abuse material at source and which, in turn, reduces its availability on the Internet, disrupts the cycle of sexual exploitation and prevents further victimisation of children. In my written statement I have provided statistics that are worth looking at.

I stress that we welcome the committee's engagement with a variety of experts and stakeholders and we are grateful for the opportunity to contribute to these discussions. We are also supportive of the Government's attempts to address the spectrum of harms and recognise that different harms may require different legislative and regulatory responses, taking due account of the fact that the Internet is an ever-evolving, technically challenging and complex environment. Superimposed on that is a continuum between offline and online behaviour, which contributes to the complexity of developing effective responses and remedies. I am happy to provide further information in writing as required. I thank committee members for their attention.

In bringing in speakers I will refer to them by their first name. We will all be relaxed. We had a little discussion about the order earlier and we will go with the order I suggested. I invite Deputy Jim O'Callaghan to speak.

I thank the Chairman and I thank everyone for coming in this morning. So that the witnesses are aware of the function of the committee, we are all Members of the Oireachtas considering whether there needs to be greater regulation. When we talk about regulation, what we mean is the extent to which social media companies should be liable under law for harmful information they store and which they and others publish. Do the witnesses accept that regulation in Ireland should increase beyond what is provided for in the e-commerce directive? I will begin with Mr. Ó Broin on behalf of Facebook.

Mr. Dualta Ó Broin

I thank the Deputy for the question. I will allow Ms Rush to go into the specifics on the intermediary liability point.

Ms Claire Rush

I thank the Deputy for his question. As outlined, including in Mr. Meade's statement, the e-commerce directive already provides for a liability regime for the removal of unlawful content. What we are here to discuss is a broader level of harm, some of which may be illegal and some of which may be lawful or permissible under the terms of the law. We are here to engage on how these can be appropriately defined, how an appropriate notice and takedown regime can be defined and how there could be a role for a regulator in developing standards or guidelines to which Internet service providers or online intermediaries should have to adhere. Naturally, as part of this there would need to be a reasonable sanctions approach. We appreciate this and we understand it entirely. The nitty-gritty detail would be a matter for the Legislature.

Our position is that there would be a phased proportionate approach that would look at systematic adherence to the standards or guidelines a regulator might lay out as opposed to focusing on individual instances of content per se.

Can I take it from her answer that Ms Rush accepts regulation should increase from what it is at present?

Ms Claire Rush

There is room for a new regulatory regime.

I will ask Ms White the same question. Does Twitter accept there should be further regulation?

Ms Karen White

The issue of regulation really is one for law makers. We have certainly appreciated the collaborative approach of the committee, other committees and various Departments on this issue. I joined Twitter more than five years ago and remember one of the first consultations I undertook was with the Law Reform Commission. There has been a very collaborative approach. As our CEO has outlined, and as we have said on the record many times, we view regulation as a net positive. It can have very good outcomes. In this sense, we certainly view our role as educators and we work with Governments on trying to educate on the technologies available and how our services operate. Ultimately, it is for law makers to decide the breadth of that-----

To summarise, Twitter favours regulation but it wants to be consulted in respect of the changes, which is a reasonable request.

Ms Karen White

We definitely respect the consultation process. A number of the self-regulatory measures already in existence, including the EU code of conduct on illegal hate speech of which Twitter was a founding member, the code of conduct on disinformation and others, are certainly bearing fruit. There are a lot of models in existence, particularly in Europe, that can be leveraged. When moving to a place of reviewing any existing legislation or introducing new legislative measures we must ensure we are not fragmenting the legislative frameworks in place in Europe.

Is Google in favour of increased regulation?

Mr. Ryan Meade

As I said in my opening statement, we operate under a series of regulations and we think there is always scope for legislators to look at how the frameworks can be improved to get better outcomes for people who use our services. In this sense we are in favour of regulation and looking at what is in the existing legal framework and what scope there is for improving it where new regulations are introduced. In the document we submitted we make suggestions on principles that could apply to this additional oversight. Specifically, I referred to the e-commerce directive to emphasise that it provides an opportunity for legislators to oversee notice and takedown procedures. In a sense, this is what is happening not just in Ireland but elsewhere. People are looking at how we can use the regulation.

With regard to the principles I mentioned, a number of people referred to clarity. Platforms as much as users would benefit from clarity on what the definitions are, what is actually expected of platforms and how we can operate them.

Another key point is suitability and that the form of regulation that applies is suitable to the specific service. We are sitting here together today but we all have quite different products and services that operate in different ways and present different issues and challenges. We suggest legislators should definitely take this into account.

Google engages in multijurisdictional business. Its businesses operate in various countries irrespective of boundaries. This can create difficulties for the company when it comes to the application of laws. Is it in agreement with the president-elect of the European Commission, who has indicated she wants to introduce a digital services Bill that would apply throughout the European Union and would have an impact in this regard?

Mr. Ryan Meade

The proposed digital services Bill would take a look at some of the intermediary liability systems in place at present. As other witnesses have said, that regime has been in place for approximately 20 years. Society and technology have moved on and it seems quite appropriate that legislators continue to look at it. We do not have an issue with legislators doing this. As I said in my opening statement, we should be cognisant of the benefits the intermediary liability regime has provided for the flourishing of online services.

I will ask Ms Niculescu the question that I asked the others, although she is in a slightly different position. Should there be increased regulation beyond what exists?

Ms Ana Niculescu

It is important to have clarity on the results sought from industry and to have clear definitions and criteria to identify the harms that fall within the scope of those definitions. We could use examples from other jurisdictions, such as the German Network Enforcement Act, which provides for a tiered system. It is also important to look at the model in New Zealand, where the first tier is user education and empowerment, with a culture of reporting, self-regulation is the second tier, and there is also oversight from a statutory body.

We are looking, in part, at harmful publications and communications. It is a very vague term. As I said last week, people should be allowed to be critical of politicians. That could be categorised as harmful and is not what we are talking about here. An example which we will all agree is harmful, illegal and should not be facilitated is child pornography, an issue raised by Ms Niculescu. I am sorry to be blunt about this but do the witnesses agree that their businesses have increased the incidence of child pornography significantly throughout the world?

Mr. Ryan Meade

I am not certain that I would accept that premise. As I said in my opening statement, the openness of the Internet and the availability of these communications technologies have troubling aspects, one of which is the proliferation of this and other illegal material. I am probably not qualified to give an answer on whether any company has-----

I am sorry to interrupt. Mr. Ó Broin gave a figure in his opening statement, that in the first three months of this year, Facebook removed 5.4 million items of child sexual abuse material. Does Google have any figures that it can give?

Mr. Ryan Meade

I do not have the figures in front of me but I know that child sexual abuse material is the issue that we take most seriously. We collaborate with law enforcement around the world and report every incident to the National Center for Missing and Exploited Children, NCMEC, in the USA. We have made our AI and machine learning technology available to others so that they can use the technology that Google has developed to identify and remove any such material on other platforms.

Do Ms White and Mr. Costello think that social media companies should have any specific liability or penalties imposed on them for the unknowing facilitation of child pornography?

Ms Karen White

The e-commerce directive lays out the legal framework relating to illegal material being hosted on our service and generated by users. We have a zero tolerance policy for child sexual exploitation, CSE, material. If we find it, accounts are suspended immediately and the content is reported to the National Center for Missing and Exploited Children. We work with a range of organisations from the Internet Watch Foundation to INHOPE, which runs 46 hotlines across 40 different countries. We work closely with law enforcement on these matters. The chief executive of the Internet Watch Foundation stated in a previous report that 1% of the content the foundation had removed for violating CSE policies was found on social media services in 2017. It is a very low percentage in that respect but is nonetheless a grave issue. In our most recent reporting period, for which we provided figures of our enforcement actions in the Twitter transparency report which is published biannually, we took action on 29,824 pieces of content for violating our CSE policies. We are certainly open to further collaboration in this area but we take the issue of child safety on our service very seriously.

We all agree that we want to ensure that there is no child pornography on the Internet. We had a witness here last week, Mr. Joe McCarthy from UCD. He suggested that before a person is able to open an account with Facebook, Google or Twitter, he or she would have to provide details about his or her identity.

If such a rule were in place, would that not have a significant deterrent effect on people putting child pornography on the Internet, because they could be identified?

Mr. Dualta Ó Broin

We have a real name policy at Facebook. We require our users to use the name that they use in real life. We believe that the policy reduces the number of instances of people violating our community standards.

How does Facebook check that?

Mr. Dualta Ó Broin

There is AI working on different patterns of behaviour relating to fake accounts. If somebody makes a comment about a politician and the politician feels that the account is not a real person, he or she can flag that to us and we can then put that account into a cross-check where the account has to be authorised. The account holder then has to provide proof of identity to us. While I understand the conversation about verification of accounts and the reasons for it, it raises a wider question about the operation of the Internet. We would essentially be required to verify all users globally before they could use our systems. There are a number of considerations, including whether the system of identification is available in each country. Are people ruled out from using our services because they do not have access to an ID? In Ireland, for example, there is a cost to obtain ID. That is not to say that we do not take the issue of child sexual abuse extremely seriously. We take proactive steps. As others have said, we have zero tolerance for this type of behaviour. There are genuine concerns about how proof of identity would operate and whether us holding all of that information about more than 2.7 billion people globally would respect the data minimisation principles within the general data protection regulation, GDPR. That is not to say that we are not looking at this as something that we have to do better. We are looking at artificial intelligence to determine whether it can assist us in verifying accounts. We talk about fake accounts and removing them. In the first quarter of this year, we removed 2.2 billion fake accounts from our platform, 99% of them before they were reported to us. We take the issues of authenticity and child sexual abuse extremely seriously.

I want to let my colleagues in and so will finish here. If it were the case that there was criminal liability or penal sanction such as a fine for social media companies that happened to facilitate, unwittingly or otherwise, the publication of child sexual abuse material, would that not have a deterrent effect in reducing it and changing the companies' actions?

Mr. Dualta Ó Broin

I will allow Ms Rush to come in on this point. The Deputy is getting close to proposing a monitoring obligation covering all content on the platform. That would fundamentally change the basis on which our companies operate, the liability regime and the e-commerce directive.

Ms Claire Rush

To echo what everybody is saying, existing regimes are in place. While they may not be that visible or well known to the person in the street, who may not know that we have reporting and notice and takedown and that we remove content when we have notice of it, this is something that we have been doing for a long time. We have always taken it seriously and are always responsive to these reports. That system of notice and takedown and liability upon notice has allowed Internet companies to develop and flourish. That is not to say that there is not room for new conversations about broader responsibilities for other types of harmful content, but we need to be careful about adjustments or alterations to the fundamental nature of the intermediary liability regime as it has been operating across Europe since the e-commerce directive of 2000.

There are a number of points that I wish to tackle in my questions to our witnesses today. First, to put my cards on the table and be very clear, I believe the principle under which the companies operate is wrong. I do not accept this made-up term of "intermediary". To me, the companies are publishers and should have the liabilities of publishers. As an industry, it has pulled off an amazing trick over many decades, which has enriched the companies and their shareholders to a vast degree, and the cost is too great. I want to be absolutely clear and our witnesses can respond to it. I am aware of the response about intermediaries and publishers and the distinction the companies always seek to draw on this point.

All of the companies' representatives have collectively outlined the takedown process. Will each of them give me the figure their company spends globally on that process? Let us start with Google, which I understand, roughly, give or take the small change, is a €100 billion plus revenue company. Is that roughly correct, give or take? Let us take the parent company, Alphabet.

Mr. Ryan Meade

We are a profitable company. Alphabet is a very profitable large company.

Will Mr. Meade give me the figure that his company is currently spending on takedown activity?

Mr. Ryan Meade

I am not sure if I have that figure but I can tell the Deputy the number of people we put against it, which is approaching 10,000 globally.

Will Mr. Meade give me a rough figure? Is it a billion?

Mr. Ryan Meade

I do not want to mislead the committee, so I am not going to give the Deputy a number, but I would have to follow up-----

Is it half a billion? Does the company spend two billion?

Mr. Ryan Meade

I honestly do not know. I would be happy to follow up.

Did Mr. Meade not think this was worth finding out?

Mr. Ryan Meade

It is worth mentioning that we have ramped up our operations on this. We have a very significant operation here in Dublin. We have hundreds of people working on this, both on developing the policies that we apply but also on enforcing them. We have some very cutting-edge research happening in the US, where we are leading the way-----

I appreciate that but my time is limited. Mr. Meade said all that in his opening statement.

Mr. Ryan Meade

I know. The Deputy is looking for a number and I do not have that number.

If Mr. Meade went to that much effort, I am just wondering why he does not know how much his company spends on its defence, which is that it takes the material down.

Mr. Ryan Meade

If we have that figure publicly available, I am happy to give that to the committee.

I would appreciate that. I ask Facebook the same question. For the record, and I am open to correction, Facebook has a value of €40 billion.

Mr. Dualta Ó Broin

Similar to Mr. Meade, I do not have that figure.

Will Mr. Ó Broin confirm to me that Facebook is about €40 billion of a revenue business operation?

Mr. Dualta Ó Broin

I bow to the Deputy's research on that point, but similar to Mr. Meade, I do not have that figure to hand.

It was not important enough to find out how much Facebook spends on taking-----

Mr. Dualta Ó Broin

Pardon me for cutting across the Deputy, but what I would say is that we have 30,000 people working across the globe on different sites on a 24-7 basis to respond to reports and to remove harmful content from our services. I take the point that the Deputy has made on the spend, and as Mr. Meade has said, we are also happy to look into it and to come back to the committee.

To be clear, I do not have a problem with businesses making profit. Absolutely not. I believe businesses should make a profit. I am a business person. I do not ever advocate the idea of wholescale censorship or that we should live in a restrictive world. The Internet does good things. My problem with Internet companies is that the harm cannot just be discarded.

Each of our witnesses made their way into work somehow this morning, whether by private car, bus or whatever. None of them would expect the company that made the motor car to build it only to the minimum standard, recalling it if there is a problem but not doing all the things necessary to keep them safe on their journey. The problem with what the Internet companies do is that, if we are honest, they are really publishers who have more impact on the world today than probably all broadcast and print media combined, which is a terrific compliment to the way the companies have been built up. Yet, for the purposes of making profits, they want to have no obligations regarding those impacts, apart from saying that they will take material down when the damage has been done.

I go back to my central question to our witnesses again. Do the companies believe that at any point they should accept the principle of being a publisher of material? I ask each of the witnesses to reply in whatever order they like.

Mr. Ryan Meade

The Deputy has made clear his stance that he considers our companies to be publishers. It is open to legislators to decide to apply the same rules that publishers work under to the various online services. This would need to be taken through to its conclusion. What would it mean and what impact would it have on the operation of the open Internet? We have developed our services on the basis that they are open and for everyone. One of the great features of our services is that the same Gmail, Drive or whatever service we offer is available anywhere in the world on the same basis. That brings with it a huge volume of material. On YouTube, for example, there are hundreds of hours of video material uploaded to our services every minute. If we were to be considered publishers, each item would have to be reviewed before appearing on the platform. It would be open to legislators to make that a requirement but it would fundamentally alter the availability of that service. It is up to legislators to decide what the rules should be. The point I would make, in common with the other members of the panel, is that legislators should also take account of the benefits the liability regime has provided.

Just because we do not consider it appropriate that the regulation should be the same for publishers as for the platforms, that does not mean the platforms cannot be regulated. As I said in my opening statement, there are regulations that apply to platforms and it is obviously open to legislators to decide if they are appropriate, if they can be strengthened, changed and made more clear and whether we can get better outcomes for users.

I do not know if the Deputy and I will agree on the question of publishers versus platforms. Perhaps we can agree, however, that just because we are not regulated in the same way as publishers and are not subject to exactly the same regime, that does not mean there is no ability for legislators to regulate.

I ask our other guests to respond.

I ask for a response from Twitter.

I appreciate that Internet service providers are in a slightly different position but I would like to hear from Twitter and Facebook.

We will ask Twitter to respond and will ask Ms White and Mr. Costello to reply, please.

Ms Karen White

I thank the Deputy for the question. At Twitter, safety touches every aspect of the organisation, whether that is our engineering or policy teams or those who construct-----

I am sorry to interrupt and without meaning to be rude, I asked a very straightforward question. Does Twitter consider itself a publisher?

Ms Karen White

We do not. We do not exert editorial control.

Twitter would not want to be considered a publisher under any circumstances.

Ms Karen White

It is important to note that Twitter is a live public service. We do not exert editorial control over any of the user-generated content that we see on our service. That is not to say that we do not take our responsibilities with regard to the safety of our users very seriously. When it comes to issues like abuse, illegal material, terrorist-related content and child sexual exploitation, we have in recent years worked to leverage the technologies that we have to proactively identify some of that-----

That is what I do not understand because that is where Twitter blurs its own line every single time. The company makes editorial content judgments every time it makes one of these decisions in the same way that a newspaper or broadcast organisation makes editorial judgments when it transmits, yet it says it is not a publisher.

Ms Karen White

What we do in those instances is enforce our own terms of service and the Twitter rules that govern the types of behaviour that we do and do not-----

Those are editorial judgments.

Ms Karen White

It is making judgments on whether or not a piece of content or behaviour is violative of the Twitter rules or terms of service.

That is very similar to what an editor of a newspaper would do. I ask the other witnesses to respond on that issue.

Mr. Dualta Ó Broin

If I may make a very simple point, Facebook publishes material on the Internet. We publish our parents portal, our youth bullying prevention hub and our suicide prevention material. That is material that we author and put on the Internet. We provide a platform for over 2 billion people so that they can put their views on the Internet. We would not consider ourselves a publisher of that user content.

It is on the company's website.

Mr. Dualta Ó Broin

Similarly, we operate the same system that-----

Mr. Ó Broin accepts that Facebook makes editorial judgments by removing material but he does not consider that it publishes material.

Mr. Dualta Ó Broin

Once we become aware of the content and it is reported to us as being of concern for violating our community standards, we then make a decision as to whether that is or is not the case.

To my mind the real distinction is that if Facebook were deemed to be a publisher, many people would be lining up to take legal action over certain content. By being deemed an intermediary, it escapes this consequence. This is why, in my opinion, this is like the wild west of the 21st century. There is a very interesting historical analogy. In the United States at the start of the previous century, European copyright was not accepted. This meant, in effect, that authors could not be paid. The American Government and the publishers, some of which are now the biggest names in publishing in the entire world, did not accept European copyright. The publishers made their money in that way. When they got to a certain size, they of course turned around and said that they wanted copyright enforced and regulation. It suits Internet companies to grow to a certain size and then look for a bit of regulation to prevent entry into the market.

There is one main stumbling block, if one takes an honest look at it. This is the bit I can never quite get. It reminds me of the chief executive of a cigarette company sitting before the US Senate and saying that smoking is not harmful. The witnesses answered one of Deputy O'Callaghan's questions about whether their companies increased the proliferation of child pornography. The honest answer to that is "Yes". Whether the companies accept that they are individually responsible for that or that they should be legally responsible for it, the Internet did increase that proliferation. There is no getting around that. My point is that, by not being declared publishers, the companies avoid the liabilities with which almost everybody else has to deal. I do not believe a decision in this regard should be made in Ireland alone. It should be broader and decided at least at the level of the EU. The traditional western countries need to operate together and to look at this issue. There is a role for the Internet and for what the companies do, but there is also a role for regulation.

I will conclude with this because I am conscious of time. There are two things that really changed my mind. It is not so much about the area of pornography or anything like that. There was one horrendous incident in Ireland involving Facebook, and I do not want to go into too much detail about it out of respect for the family involved, in which the public shared multiple images of an horrific incident. There is no comeback for that family. They cannot unsee it. It does not matter that it was taken down in 24 hours, which in fairness it was. People should not profit off that level of absolute horror being inflicted on someone. The other issue, unfortunately, also involves Facebook. Somebody went on a murder spree and, using the live-streaming facility Facebook developed to make further profits, live-streamed himself murdering people. I cannot understand, from a moral perspective, how people are comfortable with defending themselves against liability for that. That is my issue. That is why I believe there should be a change. Any of the witnesses are more than welcome to reply to me on those points, but that is where I will leave it for now.

Would Facebook like to respond?

Mr. Dualta Ó Broin

I will go first and then pass over to Ms Rush. We are subject to rules and regulations as it stands. We are here today and, in all of our opening statements, we have said that we are open to further regulation. We have been saying quite publicly since March of this year that we recognise the need to work with governments globally, nationally and at EU level to figure out how, as a society, to deal with harmful content. We have a major role to play in that. I am not saying that it is a societal issue from which we will just stand back. We have a major role to play in it. We will play that role in working with legislators around the world to deal with this issue.

I know the Deputy does not want to go into detail on the particular case involving the car accident. With regard to the Christchurch incident, which he raised, we are doing everything we can to learn from that incident and to ensure that it can never happen again. For example, one of the weaknesses in our systems was that the artificial intelligence, AI, system did not recognise a video being made because the shooter framed it in a first-person view, that is, he had a camera on him.

We are working with the Metropolitan Police in the UK and with the American armed forces. We are inputting their weapons training footage into our AI, which will ensure that type of incident can never happen again. This is a rapidly evolving area and we have rapidly evolving harms on the Internet. We are committed to doing absolutely everything we can to make our platform safe. I do not accept the premise that we profit from these types of incidents being on our platform. It is the opposite. We hear from both our advertisers and our users all the time that they do not want this type of material on our platform. The advertisers do not want their advertising anywhere near this type of content. We are extremely serious about removing it. I will ask Ms Rush to come in on the legal points the Deputy raised.

Ms Claire Rush

I do not have too much to add to what Mr. Ó Broin has said. I thank the Deputy and note and appreciate his concerns in this regard. One of the points he made is that the heightened layer of responsibility arising from being a publisher would result in people queueing up at the door to take claims and so on. As Mr. Meade mentioned earlier, an obligation to preview, pre-screen or pre-moderate every single piece of content against all the laws in force anywhere the content might be available or shared would, regardless of the operational burden and cost, have a significant chilling effect. Companies would be incentivised to take down anything that is remotely dubious or questionable from a legal perspective. They would have to err on the side of removing everything, which could have a very detrimental effect on the availability of content. That is just one perspective. I absolutely appreciate all the other points the Deputy has raised, to which Mr. Ó Broin has responded. We are here because we accept and know that this is an opportunity to find a better way forward with regard to a future regulatory model, one which would learn from and build on other models and which would be conscious of EU developments and of approaches that have worked in other countries. I hope that we can constructively engage in such a discussion with the committee.

Mr. Ryan Meade

With regard to the point on editorial control, it is important to point out that, as Ms White said, platforms have the ability to enforce their terms of service and to create content policies. In our case, these policies are our community guidelines for YouTube, our advertising policies and so on. That is provided for under the existing legal framework. It allows the companies to exercise a degree of moderation and to impose their own standards with regard to what they will or will not allow on their platforms. From our point of view, that has served the purpose of allowing us to keep our users safe on our platforms and to provide them with a more positive experience. The Deputy's point is that this is a form of editorial control which would lead to us being defined as publishers. What we do, and I believe all the platforms have similar systems of enforcing guidelines, is allowed for under the existing legal framework. If it were not allowed for, we would be in a much worse situation in respect of platforms' ability to take proactive measures to keep users safe.

Mr. Ronan Costello

To build on what Mr. Ó Broin said about what our business models rely on or how they are structured, Twitter's model is that users express themselves on the platform through tweets. Tweets are the lifeblood of the platform. Users' comfort in expressing themselves on the platform is dependent on them feeling safe and their feeling safe on the platform is dependent on them having confidence that Twitter will be responsive to issues they report to us and that we will be proactive, where necessary, in leveraging machine learning and so on to surface content for review proactively. If people stop tweeting because they do not feel comfortable expressing themselves on the platform, the lifeblood of the platform is extinguished. There is therefore no reason for people to be on it or for advertisers to use it.

The very model of the platform is dependent on them having the trust and confidence in us to invest in safety.

I would love to respond but I respect my colleague's time so I will leave it for another occasion.

I thank the witnesses for their contributions. Many of them should be here on the politicians' side of the table because I am hearing a lot of political speak, waffle and diversion from the questions, which is interesting.

I have a couple of questions. The stuff about publishers or facilitators is very interesting. As far as I can see, it is a moveable feast from the witnesses' point of view. Facebook was involved in a case with a company called Six4Three that came to court in 2018, in which it said that it was a publisher, that it had to make decisions on what not to publish and that it should be protected because it is a publisher. However, it is not a publisher when it comes to abusive posts and so on. That is an interesting aside.

All the witnesses said there were different aspects of the regulation, that sometimes they have to make one decision and another decision on other occasions. It seems that the entire regulatory framework is unclear. From our point of view, and the point of view of the public, we could do with that being clarified. That would be important.

I ask all the witnesses, starting with Facebook, whether they have the technology to track the sharing of identical and equivalent illegal posts. If I post something that is illegal and it is shared widely, can that be tracked the whole way to the end of the process?

Would Mr. Ó Broin and Ms Rush like to take that question first?

Mr. Dualta Ó Broin

I am happy to take it. We have media-matching technology. I refer to a piece of content that is an image or a video. If we get that piece of content, and this is the way we would deal with child sexual abuse material, images that promote terrorism or images of violence, we can put it into this system. The term we use is that we bank the image. That prevents the re-upload of the image across our services.

The Deputy mentioned a previous case where we would have used this technology. Once we became aware of the image, we banked it and made sure that image could not be re-uploaded. If there is another image taken from a slightly different perspective, that will not be caught by that. We would need to identify that image and start the process again of removing it and ensuring that it cannot be re-uploaded to the system.

Across industry we talk about the database of hashes. These are images which have been banked and a specific fingerprint or hash has been added to the image. Across industry that ensures that images cannot be re-uploaded. We use that in child exploitation.
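
To illustrate the banking mechanism Mr. Ó Broin describes, the following is a minimal sketch in Python. The names are illustrative, and the hash function is an exact digest rather than the perceptual, PhotoDNA-style matching real systems use, so, unlike production systems, this sketch would not catch re-encoded copies.

```python
import hashlib

class HashBank:
    """Illustrative store of fingerprints ("hashes") of banked images."""

    def __init__(self):
        self.banked = set()

    def fingerprint(self, image_bytes: bytes) -> str:
        # Real systems use perceptual hashes that tolerate resizing and
        # re-encoding; an exact digest keeps this sketch self-contained.
        return hashlib.sha256(image_bytes).hexdigest()

    def bank(self, image_bytes: bytes) -> None:
        # Record an image that reviewers have judged to violate policy.
        self.banked.add(self.fingerprint(image_bytes))

    def is_banked(self, image_bytes: bytes) -> bool:
        # Check an upload against the bank before accepting it.
        return self.fingerprint(image_bytes) in self.banked

bank = HashBank()
bank.bank(b"<violating image bytes>")              # image reviewed and banked
print(bank.is_banked(b"<violating image bytes>"))  # True: re-upload blocked
```

As Mr. Ó Broin notes below, an image taken from a slightly different perspective produces a different fingerprint and has to be identified and banked afresh.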

Is that done automatically?

Mr. Dualta Ó Broin

Obviously, there are different ways in which we can become aware of the image. Our artificial intelligence, AI, classifiers are looking for images of, for example, terrorist-related content on our platforms, or somebody could report it to us. That part of it might not be automatic but once the image is put into the system and it has that hash put against it, that prevents it from being uploaded onto our systems.

Once the image is identified, is that hash put against it automatically?

Mr. Dualta Ó Broin

A decision would need to be made as to whether it constitutes that type of content. It is not totally automated but, once it is in the system, it is completely automated.

After the decision has been made.

Mr. Dualta Ó Broin

Yes.

It has been discussed and it has been decided it might be something that deserves to be hashed.

Mr. Dualta Ó Broin

I will ask Ms Rush to clarify that point.

Ms Claire Rush

There are two ways in which that can work. One is that, across industry, there is a shared database of hashes, so if Google or Twitter were to input, say, 100 images into that database, once that content is in the database, if someone tried to share one of those images on Facebook it would be caught and immediately removed. The second way is the one Mr. Ó Broin just described. If new images were uploaded, our AI classifiers would look for certain thresholds or signals to see if an image meets a certain threshold of violation. If it ticks the box, so to speak, of being violating material, it would also be caught, hashed and banked.

There are two mechanisms, one of which is the shared process while the other is for new images we discover.
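
A rough sketch of the two mechanisms Ms Rush outlines, assuming a shared industry database and a classifier score; the threshold and all names are invented for illustration and are not any company's real pipeline.

```python
SHARED_INDUSTRY_HASHES = {"a1b2c3..."}  # hashes contributed by all signatories
VIOLATION_THRESHOLD = 0.95              # illustrative classifier cut-off

def handle_upload(image_hash: str, classifier_score: float) -> str:
    # Mechanism 1: the image is already banked in the shared database.
    if image_hash in SHARED_INDUSTRY_HASHES:
        return "remove immediately (matched shared hash database)"
    # Mechanism 2: a newly discovered image the classifier scores as violating.
    if classifier_score >= VIOLATION_THRESHOLD:
        SHARED_INDUSTRY_HASHES.add(image_hash)  # caught, hashed and banked
        return "remove, hash and bank (new violating image)"
    return "allow, or queue for human review if borderline"

print(handle_upload("a1b2c3...", 0.10))  # caught by the shared database
print(handle_upload("ffee99...", 0.99))  # caught by the classifier, then banked
```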

How long does the process of identifying and hashing the image take?

Ms Claire Rush

I apologise but I cannot speak to that because I do not know the length of time.

I highly doubt that, but I do not know the speed of the technology.

Would the Deputy like to move that question on to our guests from Twitter? Perhaps Ms White or Mr. Costello would like to comment.

Ms Karen White

On the speed, I would need to speak to our experts to find out how long the process takes and revert to the committee. Our technology is very similar. In the case of child sexual exploitation, we use PhotoDNA, creating digital fingerprints of the material, and any related material can be removed automatically and immediately from our systems. In the case of terrorist-related content, for example, when certain content is identified, as Ms Rush and Mr. Ó Broin noted, it is hashed and given a taxonomy of the type of content before it is shared within our hash-sharing database.

We deploy other measures at Twitter when we identify rule-violating content, such as content that promotes terrorism; one example is our URL-sharing project. If we identify URLs on our service that direct to another service, whether it be YouTube or others, we will share the URLs with the companies in question to let them know we have removed the content from our service because it violates our rules and promotes terrorism, and to suggest they might wish to review the content those links point to on their own services. It is a multi-pronged approach.
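
The URL-sharing project Ms White describes can be pictured as a simple cross-platform notification. The structures and the print transport below are hypothetical stand-ins, not Twitter's actual interface.

```python
from dataclasses import dataclass

@dataclass
class RemovalNotice:
    url: str         # the removed link
    reason: str      # e.g. "promotes terrorism"
    removed_by: str  # the platform that actioned it

def notify_host(notice: RemovalNotice) -> None:
    # In practice this would be a secure feed or API between companies;
    # printing stands in for the transport here.
    print(f"{notice.removed_by} -> host of {notice.url}: removed for "
          f"'{notice.reason}'; please review the linked content.")

notify_host(RemovalNotice(
    url="https://video-host.example/watch?v=123",
    reason="promotes terrorism",
    removed_by="Twitter",
))
```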

Ms White has no idea how long the timeframe is from when the content is identified to when it is removed.

Ms Karen White

No. As I stated, I would need to speak to the experts within the company and follow up with the committee.

If the Deputy would like to hear a response from Google, I will call Mr. Meade.

Mr. Ryan Meade

Similarly, we use photo-hashing and a database shared among the industry to identify material that should not have been uploaded in the first instance. Like the other companies, we develop artificial intelligence, AI, classifiers that are able to identify content likely to be violative of our policies. The classifiers are getting more efficient and better all the time, although one note of caution is that AI is better at identifying certain types of content. Some types of content are readily identifiable by machines, whereas others will require a human review to detect. Nevertheless, the technology is constantly developing.

On the timeframe, as with the other companies, different pieces of content may take different lengths of time to action. Given there may be questions over context or issues that need to be escalated and further reviewed, timeframes can differ. As I stated earlier, of the videos we removed from YouTube in the second quarter of this year, a total of 80% were removed before they had received a single view. That gives an idea of how quickly the majority of violative content is removed.

Before the Deputy asks his next question, I inform Ms Niculescu that although the previous question was not particular to the ISPA, if at any time she wishes to make a contribution, she should indicate to me and we will be delighted to hear from her.

I turn to a slightly different issue. In all their submissions, our guests note the high percentage of material that is identified as violative and removed. There is no mention, however, of how much content the companies monitor. For example, while a total of 38% of abusive content on Twitter is removed, that could be 38% of ten, 10 million or 100 million posts. To what total figure do the percentages relate?

Ms Karen White

One of the main areas of focus for the company over the past two years, in particular, has been the proactive identification of potentially rule-violating content. In the latter six months of 2018, we received more than 11 million reports.

One thing that leveraging technology has allowed us to do is de-duplicate reports. For example, a few years ago, if we received 1,000 reports about the same piece of content, it would be a cumbersome process and we would review all 1,000 reports. We have become much smarter with our technology and we will now review only the one tweet that has been reported 1,000 times. Accordingly, we process reports much more quickly. As I noted in my opening statement, we take three times the amount of action within 24 hours on reports we receive.

Nevertheless, our objective is to reduce the burden on people in order that they will not have to report such content to Twitter because we find it ourselves and take action. We will never take any sort of action on certain types of content automatically or suspend accounts automatically. It is about finding potentially rule-violating content and surfacing it for human review. The results are very positive in that regard, given that we process reports and potentially rule-violating content much more quickly than we did previously. Much of that is due to leveraging the technology we use.
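
The de-duplication Ms White mentions above, where 1,000 reports about one tweet collapse into a single review item, might look like the following sketch; the data structures are illustrative only.

```python
from collections import defaultdict

# (tweet id, reporter) pairs as they arrive; four raw reports here.
reports = [("tweet_42", "user_a"), ("tweet_42", "user_b"),
           ("tweet_42", "user_c"), ("tweet_99", "user_d")]

review_queue = defaultdict(set)  # tweet id -> distinct reporters
for tweet_id, reporter in reports:
    review_queue[tweet_id].add(reporter)

# Reviewers see each tweet once, with the report count kept as a signal.
for tweet_id, reporters in review_queue.items():
    print(f"review {tweet_id} once ({len(reporters)} reports)")
```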

My question may have been unclear. How much content does Twitter review every day? Does the figure of 38% represent 11 million posts?

Ms Karen White

We do not break down our figures by daily reports-----

Or weekly, for example.

Ms Karen White

-----or by individual countries. The figure relates to the number of reports we received globally for six months. That is broken down into the different categories of enforcement action we took. In the case of abusive behaviour and hateful conduct, for example, we took action on more than 500,000 accounts in the latter six months of 2018.

Some 500,000 accounts.

Ms Karen White

Yes, but on those two specific policy areas. There are a number of additional areas, such as terrorist related content, the figure for which stands at more than 250,000 enforcement actions taken against accounts in the latter six months of 2018. There were approximately 30,000 for sensitive content and something similar for child sexual exploitation. Significant action is taken against reported content.

The figure is approximately 1 million accounts in total for the latter six months of 2018.

Ms Karen White

It could well be but I would need to do the maths on the overall figure. It is somewhere in that region.
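
For reference, the per-policy figures quoted above can be tallied. The child sexual exploitation figure is an assumption based on "something similar" to the 30,000 for sensitive content, and the quoted figures are minimums, so the true total is somewhat higher.

```python
# Enforcement actions quoted for the latter six months of 2018.
actions = {
    "abusive behaviour and hateful conduct": 500_000,
    "terrorist-related content": 250_000,
    "sensitive content": 30_000,
    "child sexual exploitation (assumed)": 30_000,
}
print(sum(actions.values()))  # 810000, i.e. in the region of 1 million
```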

I assume that the figure is similar for Facebook.

Mr. Dualta Ó Broin

From what I can gather, the Deputy seeks the total number of reports versus the total number of actioned pieces of content.

Yes, it would put some context on the figures provided. We were given percentages of the content that has been actioned, but knowing that Facebook actioned 40% of what it has seen does not tell us anything unless we know the figure the 100% represents.

Mr. Dualta Ó Broin

I do not have to hand the total number of reports globally, but I do have the amount of content we have removed and the percentages thereof that were removed as a result of us finding the content before anyone reported it to us. That is within the scope of the AI to which Ms White referred. AI is good in instances where the violation is clear cut and the line is not blurry, but it is not as good in areas such as bullying and harassment. While our proactive detection rate on terrorism or child sexual abuse material is 99%, for bullying and harassment it drops to 20% or 14%, but we are committed to developing the technology. As Ms White noted, we try to ensure that people will not have to view the content and that it is removed from the platform before anyone does.

The reason that action against content related to terrorism or child sexual abuse is so proactive is that there is a legal requirement to remove it. They are criminal offences and society says they are wrong.

Could the reason that the rates are not as high for bullying be that there is nobody saying that someone will go to jail if this content is disseminated?

Mr. Dualta Ó Broin

I will let Ms Rush comment on the legal point. The point I am making is that it is more difficult for AI to proactively identify instances of bullying, harassment or hate speech than it is for-----

I feel sorry for this AI. The witnesses are all talking about how hard it works. What about all the people Facebook employs and pays next to nothing? For example, GPL Technologies employees in Ireland are paid €25,000 to €30,000 a year, while Facebook pays €154,000. Moreover, what about the people in Manila whom Facebook pays $2.50 a day? Are they AI?

Mr. Dualta Ó Broin

No, they are part of the solution. The 30,000 people we employ globally-----

In Manila and places like that?

Mr. Dualta Ó Broin

We have sites right across the globe. I am happy to provide a full list of them.

How much does Facebook pay them?

Mr. Dualta Ó Broin

We pay them a competitive rate for the markets in which they are operating.

Mr. Dualta Ó Broin

I will ask Ms Rush to comment. The Deputy mentioned a couple of legal points around bullying and harassment.

Ms Claire Rush

I have personal experience of doing that content review work. I have been working in this area for many years. The role of content reviewers has really changed because of the evolving landscape and the technological advances we have been able to make. Some eight or nine years ago, all the content reviewed by content reviewers would come from human reports, that is, people clicking on a piece of content and indicating that they did not like it because they thought it was hate speech or violence. That would be reviewed manually. The volume of content has increased. Some forms of harm are particularly egregious and abhorrent. Each type of harmful content presents its own challenges in some ways, but it is easier to develop the technology for areas like terrorism and child sexual exploitation imagery, CSEI, because they are very visual and clear cut. Context does not really come into it. That is different from a piece of hate speech, for example. It is much harder to train classifiers or machine learning to tell if a hateful slur is being used in a derogatory way or in a comedic, satirical or self-referential way. That is why the rates of AI detection are lower for bullying and those types of content.

To return to the previous point, it is increasingly the case that content reviewers are not just reviewing reports from human users. The AI is always trying to learn and become more precise. Where it cannot take automatic action to remove content once it matches it, it will sometimes send content to humans for review as a double-check.
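
The routing Ms Rush describes, with confident matches actioned automatically and lower-confidence detections double-checked by a human, can be sketched as a simple threshold rule; both thresholds are invented for illustration.

```python
AUTO_REMOVE = 0.99    # e.g. an exact match to banked CSEI or terrorist content
SEND_TO_HUMAN = 0.60  # context-dependent harms such as hate speech or bullying

def route(match_confidence: float) -> str:
    if match_confidence >= AUTO_REMOVE:
        return "remove automatically"
    if match_confidence >= SEND_TO_HUMAN:
        return "queue for human review as a double-check"
    return "no action"

print(route(0.995))  # remove automatically
print(route(0.75))   # queue for human review as a double-check
```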

I thank the witness. I have a final question. I will start with Facebook. Does Facebook weigh the content it removes differently depending on the number of followers the associated user has? Is that a factor in the decision? There was a Channel 4 documentary in 2018 called "Inside Facebook: Secrets of the Social Network" which said that pages posted by an English Nazi were left in place because the user had a lot of followers. As such, there was a lot of revenue to be generated. Is that a factor in Facebook's decision on whether to review and remove posts? Would different rules apply to my posts because I have far fewer followers?

Mr. Dualta Ó Broin

The only factor we consider when reviewing content is whether it violates our community standards. That is it. We do not consider popularity, virality or anything like that. If it violates our community standards it comes down.

Is that regardless of who they are?

Mr. Dualta Ó Broin

That is regardless of who they are.

Ms Karen White

Twitter's rules are applied rigorously and consistently across the board, regardless of who makes the report or who is reported. There is a global set of standards and they must be applied consistently and rigorously across the board. There is no incentive for us to leave rule-violating content or illegal material on our services.

There is no incentive except that it gets a lot of views and is seen and circulated a lot.

Ms Karen White

There is absolutely no business incentive to leave rule-violating material on our service. As Mr. Costello pointed out, it does little to garner trust among our users or advertisers. There is absolutely no incentive to leave it on the service.

Mr. Ryan Meade

Similarly, our community and content policies are global. One of the reasons they are so difficult to frame in the first place is that they have to be applied equally to all users. It can be tricky to develop a policy that is enforceable in all cases so that one reviewer looking at a piece of content in light of the policy will make more or less the same decision as another. We do a lot of work on testing our policies before we introduce them. For example we updated our policy on hate speech earlier this year. As others have said, these are areas where context is important. Before we launched that policy we carried out extensive testing with reviewers in different parts of the world to ensure to the greatest degree possible that it was enforceable, in the sense that when a rule-violating piece of content was identified, a decision could be made on it that would not be arbitrary. In other words, a similar decision would be made regardless of who the reviewer was or the identity of the user posting the content.

The identity of the user is an important question. Whether it is someone who has 50,000 friends or 5,000 friends makes a big difference in how far the post goes. Does someone with a large number of followers go through the process more slowly?

Mr. Ryan Meade

I do not think that is the case. There is one wrinkle, which is that our policies sometimes make allowance for public figures. The allowable speech about public figures can be different from what is allowed concerning a private citizen. Extra nuance can be required when a reviewer is deciding whether content violates our policy, depending on whom the speech is aimed at.

Can a public figure say more hateful things?

Mr. Ryan Meade

No. In order to prevent unwanted chilling of speech that relates to public figures, a more nuanced decision may be required. For example if the subject is the head of a government or something like that, we do not want to be in the position of suppressing legitimate political comment.

As such, if I make a racist comment, that is different from Joe Bloggs making one and it will be treated differently.

Mr. Ryan Meade

No, that is not what I am saying. It is the other way around. We do not tolerate hate speech under any circumstances, but the decision about whether it violates our policies may be slightly different if the speech is about a public figure. If so, it may be considered to be legitimate political comment. If the same thing was said about someone who is not a political figure it might be considered in violation of policy.

I thank the witnesses.

I thank the witnesses for coming before us. I want to start with Twitter. In recent days we have heard about the experience of a family, Ms Fiona Ryan and Mr. Jonathan Mathis, who felt they had to leave the country because of the content published about them on Twitter's platform. They received death threats and racist abuse and were subjected to shocking commentary online. They made complaints to An Garda Síochána. Can the witnesses respond to that? Do they feel Twitter's current standards dealt with their situation well?

Ms Karen White

I thank the Deputy for the question. I sympathise with anyone who has been subjected to targeted abuse, harassment or violent threats, whether online or offline. It is absolutely abhorrent and unacceptable.

I cannot speak to the individual circumstances of any one particular account or piece of content on the service, but I want to reassure the committee that we have robust policies in place at Twitter, particularly around abusive behaviour, hateful conduct and violent threats. When we are made aware of that type of content, or we find it ourselves-----

What does Twitter do then?

Ms Karen White

-----there is a range of enforcement actions that we can take. Those actions have changed over the years. Previously, Twitter had a binary sort of system whereby a user was in or out. If a user broke the rules, his or her account was suspended. That led to people trying to create new accounts, or to the behaviour being taken to other services as offenders tried to find new platforms. We now have a range of enforcement actions we can take, and the changes we have made have been based partly on feedback from safety organisations, including the European Network against Racism, and from the European Commission, working as part of the EU code of conduct on illegal hate speech. It was identified that simply shutting out the content or the individuals operating at the extremes was not the way to go. Instead, we had an opportunity to try to educate people, to try to bring them back into compliance with our rules.

We now do a number of things. We can lock an account. We can tell a user that he or she has violated our rules and what specific rule has been violated. We can ask a user to delete the content in question, or to verify an email, telephone number, or many other things. Such a user's account would be locked for a period of time. The objective of these enforcement actions is to try to educate people and bring them back into compliance. We have found that 65% of people who are placed in a limited state of functionality are in that state only once. That enforcement action is having a real world impact.

If someone tweets something that is evidently racist and results in a family questioning whether it can remain in a country, or having to leave it, is deleting that particular tweet sufficient enforcement? Should there not be a greater consequence for the person who publishes that tweet?

Ms Karen White

The enforcement action of requesting somebody to delete a tweet-----

Does Twitter think that is sufficient?

Ms Karen White

The purpose is to try to educate that particular user that they have broken the rules. Consistent rule violations will result in permanent suspension. If a user engages in violent threats, for example, that could lead to the permanent suspension of their account. If law enforcement was to trigger an investigation on the back of behaviour like that, we can work with it as part of its investigation. There are different enforcement actions that we can take depending on the rules that have been violated.

Does Ms White not agree that, from the point of view of someone who has been made subject to hateful content and the consequences of it, a simple deletion of the offending tweet is a really weak enforcement consequence for the person who has brought such hatred into other people's lives?

Ms Karen White

That is the enforcement action that we can take currently. Progress in this area, relating to the type of behaviour the Deputy is talking about, is incredibly tough. There is a wider, societal issue that needs to be addressed here. Simply removing the content from a service is not necessarily, in all instances, going to change the intolerance we see online.

With respect, Twitter promotes that content to a considerable audience. Would Ms White agree that Twitter is one of the-----

Ms Karen White

I would not agree with that. Could the Deputy clarify what he means when he says Twitter promotes that content?

If someone with a considerable number of followers brings hatred or information like that to a big audience, it promotes a racist message to a large audience. Does the simple deletion of the offending tweet rectify and remedy the consequence for a person who feels he or she has to leave the country?

Ms Karen White

There is a range of things that happen when the type of behaviour to which the Deputy refers is identified on the service. The enforcement action might be asking for the content to be deleted, or suspending accounts if violent or other threats are made. Our rules are clear relating to abusive behaviour and hateful conduct.

Mr. Ronan Costello

Another function that the platform performs is a feature of the fact that it is a public platform. Not to reference any particular case, but when someone tweets something that the majority of Twitter users, be it in Ireland or in other countries, find distasteful or offensive, a number of tweets will reject the premise and content of that original tweet and thereby create a majority which outnumbers those who agreed with the original tweet.

That is kind of a mob response. That promotes a kind of mob response. Is that Twitter's policy?

Mr. Ronan Costello

I do not think so.

Ms Karen White

Counter-speech is very important.

That is not helpful to mature debate. Trying to create two binary and extreme sides responding to each other hardly promotes-----

Mr. Ronan Costello

People who disagree with an original tweet that they find distasteful or offensive are creating a consensus of solidarity around the opposing point of view. This notion of counter-speech has been established for several years now. As Ms White referenced earlier, we have been part of the EU code of conduct around illegal hate speech. Indeed, in January this year, we held a workshop in the office which was participated in by the council, the Commission and a range of non-governmental organisations, NGOs, from across Europe. The purpose of that was to create a counter-speech campaign that would be rolled out across Europe and would reject hate speech and encourage people to take an alternative view. At the early part of that workshop, we had attendees who were a part of successful referendum campaigns in Ireland in the past few years. They talked about how they had promoted a positive and inclusive narrative in Irish society by creating counter-narratives around conversations that could have taken an altogether negative turn.

In a lot of their responses, our guests are referring to Twitter being an educator and to general societal issues. The net response is to broaden the fudge, with respect, and that seems to be Twitter's public policy. The response to many of these issues is that they are complicated, multinational and difficult. Twitter maintains that it is a platform, not a publisher. When the argument is brought down to the family who was affected on the platform, the response from Twitter was to delete the tweet and that was it. Surely Twitter's enforcement mechanisms can be improved. Would our guests agree that the e-commerce directive and the current legislative framework do not go far enough in mandating companies to protect citizens who are subjected to that content?

Ms Karen White

As I said, I really sympathise with any individual or family who is subjected to the type of behaviour that the Deputy outlines but, in order to tackle these complex issues, a multi-pronged approach is needed. There are enforcement actions on one hand, which include removing content that is violative of Twitter rules, or illegal in a particular jurisdiction. It is also a matter of challenging viewpoints, ideologies and prejudices online. As Mr. Costello pointed out, counter-speech does have a valuable impact within that. Counter-speech and organising EU-wide anti-hate speech campaigns, whether through the European Network against Racism or the Council of Europe's No Hate Speech campaign, are fundamental components of the commitment Twitter made to the EU code of conduct on illegal hate speech many years ago.

My point is that a multi-pronged approach is needed.

One measure is not going to stop racism and intolerance within society.

Was the counter-speech Ms White is advocating worth it for that family? Was the counter-speech online and in the media worth it? There has been a lot of solidarity in the media and among the public, but should they have had to go through that experience and rely on the kind of counter-speech Ms White refers to? I would argue that Twitter should have stopped it before we had to have this conversation about counter-speech. I accept that in other, broader debates counter-speech may be important, but not when a family is at the centre of the issue. In the context of the issue this family faced, Ms White's approach of resorting to counter-speech misses the point. They were in the middle of this. They would be pretty concerned that Ms White is advocating counter-speech as a way to deflect from their experience. As far as I am concerned, Twitter should not have allowed them to become the epicentre of this very difficult experience.

Ms Karen White

It really is not about using counter-speech as a defence-----

That is what Ms White is saying.

Ms Karen White

I am using it in the broader sense to say that-----

Everything is broad in a fudge. Ms White raised counter-speech when I raised this family's case, so clearly she is using that as a defence on this issue.

Ms Karen White

I also raised the enforcement actions we can take-----

Twitter deleted a tweet.

Ms Karen White

-----and our policies, including working with law enforcement if investigations relating to violent threats are triggered. Counter-speech, as I said, is one measure in what should be a wider multi-pronged approach to what are wider societal problems in many cases. We need to challenge the hateful discourse that we are seeing. Counter-speech is one measure within that approach. I was not laying everything at the feet of counter-speech, or implying that it has the potential to remedy all of the problems we see online. It is simply one measure that can be taken. As academic studies have shown and as the European Commission has attested to, we see positive results when those who engage in abusive behaviour such as racist activity online are challenged or called out. There is some benefit to being able to have these conversations in the public domain and challenge these ideologies, intolerances and abuses. This is just one measure that can be taken.

Of course we need to challenge them, but I am sure that family would disagree that they have to be the centre of that challenge. I would like to broaden the discussion. Last week witnesses mentioned account verification as a key issue for all of the platforms represented here. Of course freedom of expression is extremely important, but does anonymity on these platforms allow for a lot of the hateful issues we are talking about? Have the witnesses' companies looked at moving their account structures to a verified model? A lot of other platforms, such as membership clubs, have proper verification systems that are relatively simple and accurate. I accept that some people will circumvent those, but the majority will not because the barriers would not be as soft as the current ones. The view of law enforcement and the NGOs that testified here was that account verification could play a huge role in creating a better online space. Do the witnesses agree with that? I know they do not agree with the publisher concept, which I have issues with. These are currently community platforms. How would the witnesses feel about a legal obligation to verify all accounts and a responsibility to provide information relating to them? I will ask each witness to respond.

Mr. Dualta Ó Broin

I outlined some of this earlier when I was responding to Deputy O'Callaghan. Facebook has a real-name policy, although that is not the same as authenticating every person on the platform. We require people to use the name by which they are known in real life. If an account is suspected to be false, we will act on that and ensure either that the person verifies it or, if he or she is not who he or she claims, that it will be removed. We also take strong measures against fake accounts that people try to establish on our platform. In the first quarter of this year, we removed 2.2 billion accounts from our platform. That figure does not include the accounts we stopped or prevented from being set up in the first instance. They are accounts that managed to be set up but were caught within minutes.

I mentioned a couple of the issues relating to what such a system would look like. At the global level at which we operate, what would the system of authentication or verification look like and what sort of requirements or burden would it put on the user to be able to access our services? We are not dismissing such issues out of hand but rather they are the types of questions that arise. We are working hard on the issue of age verification for children's accounts, which is just a small subset of the wider question of authenticating accounts. The types of steps we are taking in that regard include enabling AI to detect where somebody has submitted an age of, say, 20 but where it is clear the pattern of their behaviour does not match that of a 20 year old. While we do not have the answer just yet, we are examining and exploring the area. As some of the other guests and I noted, it is for legislators to make a decision on what is required. We can offer our opinions on what such a system might look like and we are happy to have more in-depth conversations on the matter.
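
The age-signal idea Mr. Ó Broin mentions, flagging accounts whose stated age does not match behavioural signals, might reduce to something like the following sketch; the estimate, tolerance and function are invented for illustration and say nothing about how Facebook's detection actually works.

```python
def flag_age_mismatch(stated_age: int, estimated_age: float,
                      tolerance: float = 5.0) -> bool:
    # True if the behaviour-based age estimate is far from the stated age.
    return abs(stated_age - estimated_age) > tolerance

# An account claiming to be 20 whose activity pattern suggests a much
# younger user would be surfaced for verification.
print(flag_age_mismatch(stated_age=20, estimated_age=12.0))  # True
```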

Would Facebook support a statutory obligation with regard to user verification?

Mr. Dualta Ó Broin

That depends on what it looks like. What would it look like in an Irish context, an EU context and a global context?

It would mean that the user base would be more reflective of the population, rather than being skewed as it currently is, which Mr. Ó Broin mentioned. A total of 2.2 billion accounts, the equivalent of one third of the world's population, were created, which means a large number of such users remain in Facebook's system, as I am sure Mr. Ó Broin will accept.

Mr. Dualta Ó Broin

Yes. We are quite open about that. We believe there are cases where people have set up accounts for their dogs, for example, which have not been flagged-----

I read that Facebook recently filed a court motion in the US in which it referred to itself as a publisher. The motion stated, "neither Facebook nor any other publisher can be liable for failing to publish someone else's message." Has Facebook changed its stance on whether it is a publisher?

Mr. Dualta Ó Broin

We are responding today in the context of the e-commerce directive and the question of intermediary liability. I do not have to hand the specifics of the case to which the Deputy referred but I can revert to the committee. I can guarantee there will be a clear distinction between Facebook and publishers but I do not have the brief to hand.

In the past two weeks, Facebook filed a case in the US where it referred to itself as a publisher. Given that Facebook has used the term "publisher" as a defence in a court motion, surely we can engage with it on the fact that it may be a publisher of content in other contexts. Does Mr. Ó Broin accept that?

Mr. Dualta Ó Broin

I am not aware of the details of the case.

The motion, which was filed in court, stated, "neither Facebook nor any other publisher can be liable for failing to publish someone else's message."

Mr. Dualta Ó Broin

Ms Rush might comment on the matter.

Ms Claire Rush

It would depend on the overall context of the brief.

I am not familiar with the details of it in depth, and taking one paragraph in isolation might not give the full picture.

Is Facebook reconsidering its public position as to whether it is a publisher?

Ms Claire Rush

Our position is as we have outlined it here this morning.

Facebook is stating, in motions it has filed in court, that it is a publisher.

Ms Claire Rush

I am happy to go away and look at that and get more information from my US colleagues. The US system, as regards content rules, is very different from the one in the EU. As a sub-point, and to underline what we reiterated earlier, we consider ourselves to be in line with the obligations of the e-commerce directive, and that is the framework we follow here.

The recent ruling of the European Court of Justice in the Eva Glawischnig-Piesczek case arguably places a greater burden on companies. Do our guests agree with that ruling?

Ms Claire Rush

We have a number of concerns about the ruling. As of now, it is a general judgment and is being referred back to the national court in Austria for further guidance and information. It remains to be seen how its specifics and details will pan out. It does, at the outset, raise significant questions around exactly what we are talking about here today, namely, the responsibility and role of online services for moderating content, their role or responsibility over illegal content and obligations of monitoring and so forth. There are also very real freedom of expression concerns with how it could be applied in practice.

Another element that has been discussed widely, including in the media in the aftermath of the decision, and a concern we share, is that the judgment tends to cut against the widely accepted principle that a court in one jurisdiction should not have the ability to determine what is available, or should be seen or accessible in other jurisdictions. There are some concerning notes in the judgment and we are still considering them.

Freedom of expression is very important. Do our guests accept that, in all the documents from each of the companies before us today, there is a combined fudge? The documents refer back to the e-commerce directive, which seems to be placed above everything else. The companies seem to allow an element of harmful content by using the e-commerce directive as a net defence, even though it has more to do with the regulatory impact on the companies' profitability than with the freedom of expression aspect. Why will these companies not suggest something? For example, we all accept that the e-commerce directive was outdated before all of these companies arrived here. Should there be another legislative move in that space?

Mr. Dualta Ó Broin

Freedom of expression is one right but the system in place at Facebook is a rights balancing one. Our community standards are an exercise in rights balancing. There are rights to be considered on the one hand, but if a user engages in certain types of behaviour, other things will also be taken into consideration in coming to an outcome. Freedom of expression is important, but we are demonstrating Facebook's experience of rights balancing over the past 14 years.

We are also saying that Facebook would welcome governments, legislators and policymakers critically reviewing where we have drawn the lines in our community standards. We are open to being told where we are and where we should be going further. There may be places where we are doing more than legislators and policymakers would have been satisfied with. The goal is to come to an acceptance of how, as a whole, we are going to deal with the issue of harmful online content. As I said, that is not putting the onus back on society. We have a significant role in this and intend to play it, but we need input and guidance. People such as those on the committee will ultimately decide what will and will not be allowed.

What about the user verification issue for the other companies?

Ms Karen White

This is a much debated topic in Twitter and has been for many years. Anonymity is ingrained in the DNA of the company. Some of the earliest users availing of our policy on anonymity have been human rights activists, journalists, whistleblowers and people operating in countries where they need anonymity to do their work safely. We have also seen how it serves a positive purpose for young people, for example, who may feel isolated and would not want to speak out on Twitter without being able to do so anonymously. We have seen many positive use cases. It is important to note that anonymity does not, by any stretch of the imagination, serve as a shield against the Twitter rules. We have introduced robust policies over the last year relating to fake accounts, so one cannot set up accounts with the purpose of misleading, engaging in deceptive behaviours or disrupting the service. We have rules relating to impersonation accounts, so one cannot impersonate a brand, an organisation or others on the service.

Anonymity is incredibly important to the company. As I said, it is ingrained in our DNA but it does not mean that one-----

Verification is different from anonymity. A user can be verified without his or her name being made public. Would Ms White accept that the verification process is slightly different from anonymity?

Ms Karen White

Is it verifying on the basis of a government ID or some such form?

Yes. People can be verified and still avail of a platform, but be publicly anonymous.

Ms Karen White

Absolutely, they can avail of pseudonymity in that sense. Is the Deputy implying that the verification would be based on verifying a government ID or the like?

I am implying that if somebody is subjected to significant racist abuse, the Twitter platform allows it to happen and it has a huge impact on the person, then, whether the abuser is anonymous or public, Twitter cannot go back and add another condition on top of freedom of expression by saying that not only is freedom of expression important, but so are anonymity and even the absence of verification.

Ms Karen White

I can appreciate that.

Then Twitter says it is not responsible for it and has nothing to do with it as it is not a publisher but simply a platform that adheres to certain community standards. However, many people are affected in that car crash when they are subject to it.

Ms Karen White

What I was trying to get at with my question regarding verification was to be clear about what the Deputy meant by it. I was not clear on whether that meant verifying a government ID, which would be the most meaningful way to establish somebody's identity. I wanted to ensure we were clear about what we meant when talking about verification.

Twitter would not support verification.

Ms Karen White

By means of government ID?

I mean a greater threshold of verification than Twitter has at present.

Ms Karen White

One of the challenges with requesting more personal data from individuals, such as a government identification, is that it can pose a number of risks. We have always pursued an approach where we try to collect as little personal data as possible about individuals who use our service.

Twitter collects a great deal of data to sell to advertisers.

Ms Karen White

It is public data. I am referring to sensitive personal data, such as the data that would be contained in a government ID, for example. We would have to examine that against the data minimisation principles of the general data protection regulation, GDPR. We have always sought to collect as little personal, sensitive data about our users as possible. Holding that type of personal data could open us to a variety of challenges, not least cybersecurity challenges. There is much debate still required about verification and what is meant by true and meaningful verification.

Mr. Ryan Meade

On account verification, it does vary depending on the service. We carry out account verification in certain circumstances where we feel it is proportionate, for example, when we introduced new policies for election advertising for the European Parliament elections. That involved a process of account verification so that anyone who wanted to advertise had to provide documents and so on. The degree of data that was required to be collected in that process was quite significant. Our view is that it was proportionate to apply that in that case but perhaps not proportionate in every case.

In terms of a universal obligation, there are certainly countries in which we operate that require people to verify their accounts with us, using a government ID or some other form, in order to use our services. There are certainly countries where such an obligation would exclude large numbers of people. In particular, it would exclude people who are perhaps not on as good terms with the government as others, shall we say. That is just the nature of those places. That may not necessarily apply here in Ireland, but it does in many countries in which we operate. That is a consideration we have to take into account.

I will not repeat the issues concerning data minimisation. That point would have to be considered against the GDPR.

Speaking for Google, we do see a role for private or anonymous use of the Internet. There are use cases for various reasons. For example, we offer an incognito mode in our browser whereby people can use the Internet without their identity being known. Overall, it is useful to hear from the companies on this but, primarily, one would have to put that question to the people who would be affected by it, namely our users. I know that next week the committee will hear from civil liberty groups and so on, and that is a really important question to tease out. At present, our view broadly is that there are circumstances in which it is proportionate to require account verification, but I am not sure we would go as far as to say it is appropriate in all circumstances.

I thank Deputy Chambers for his questions. Deputy Connolly is next and she will be followed by Deputy Kenny, Senator Higgins and then myself. Ms Rush jumped ahead of what I was just about to say. Would it be appropriate to take five minutes for a bathroom break, please? Ms Rush has led the way.

A vote is due to be called in the Seanad and I do not know if that will affect the timing of the break.

No, I think a toilet opportunity is appropriate at this point. I reassure the Senator that we will accommodate her, vote or no vote. We are suspending for five minutes.

Sitting suspended at 11.45 a.m. and resumed at 11.50 a.m.

The next member of the committee on my list is Deputy Catherine Connolly.

Cuirim fáilte roimh na finnéithe go léir. Leanfaidh mé ar aghaidh i mBéarla, so ná bíodh imní orthu. I said the witnesses are all very welcome and that I will continue in English so they should not worry, at least on that score. I appreciate the effort they have made to come before the committee. It is very important. We all need to learn, so I welcome their effort and the collaborative spirit they have shown. At the end of the day, however, this is a balancing exercise, is it not, between freedom of expression and the right to privacy and the right not to be harmed? We had a number of witnesses before us last week, including a representative of Rape Crisis Network Ireland; Professor Joe Carthy from UCD; Garda representatives; and a representative of the Irish Society for the Prevention of Cruelty to Children. There were two common themes, or points of agreement, which I will put to all the witnesses. Do they accept there is a need for a digital safety commissioner - yes or no?

We will start with Facebook.

Mr. Dualta Ó Broin

I do not want to give a very long answer, but in our submission we outlined how an office such as that of an online safety commissioner-----

If Mr. Ó Broin does not mind my interrupting, my question is whether he agrees with the four organisations that came before us last week - yes or no? I am not giving my opinion on this, but they say there is a need for a digital safety commissioner. Do all the witnesses agree with that - yes or no? Perhaps a nuanced answer is necessary.

Mr. Dualta Ó Broin

I will have to qualify my answer. We-----

The answer is "Yes", then, but qualified.

Mr. Dualta Ó Broin

Yes.

That is all right.

Mr. Dualta Ó Broin

That is the answer because there are a number of legislative proposals going around and we-----

Perhaps Mr. Ó Broin could come back with the clarifications on the Chairman's time but, for the moment, on my time, the answer is "Yes" with a qualification. I understand that. I realise that this is a complex area.

Ms Karen White

We would welcome the idea of a digital safety commissioner.

Mr. Ryan Meade

We support the idea and have made our views known in that regard.

Great.

The next common theme among last week's witnesses was that self-regulation has not worked. Would today's witnesses agree with that? We will start with Google this time.

Mr. Ryan Meade

The Deputy has not asked for a yes or no answer, and I cannot give one. On self-regulation-----

Has self-regulation worked?

Mr. Ryan Meade

The question assumes we operate in a situation in which self-regulation is the only form of regulation. We operate under a mixture of legal regulation, legal obligations and self-regulation. In most cases, what the Deputy refers to as self-regulation is, I assume, the process by which we implement and enforce our own guidelines. That form of self-regulation is really about our having the freedom to develop policies on what we do and do not want on our platforms. I expect that that form of self-regulation will always be there because platforms will always-----

Mr. Meade is very good. I thank him for that clarification. I should have clarified the matter myself. It is very important. One element is the legal part, which I will come back to, and the witnesses have all mentioned it, but has Google's self-regulation worked?

Mr. Ryan Meade

It has brought great benefits to users, but there are obviously large areas in which we can improve and are making improvements.

It has not worked as well as it could, then.

Mr. Ryan Meade

That is fair. Technology and processes move on. Also, the nature of the harms moves on. There is definitely an improvement that can be made, not only by us but also by working with governments and legislators to get-----

I will come back to Mr. Meade on how Google would improve that self-regulation. Do the other witnesses think self-regulation has worked?

Ms Karen White

Yes. Many of the self-regulatory models we have in place are bearing fruit.

I will speak about one very briefly, as I am conscious of the Deputy's time. The EU code of conduct on illegal hate speech has been in existence for a number of years. Initially, the main signatories were the companies represented here, but the flexibility it has allowed has meant that many additional companies have signed up to the code of conduct over the years. The European Commission and Commissioner Jourová have spoken about the effectiveness of the code. More than 70% of illegal hate speech is being removed by the signatories and more than 80% is being removed within 24 hours. It is just one example of an effective self-regulatory-----

Ms White believes that self-regulation is working.

Ms Karen White

Yes.

What about Mr. Ó Broin?

Mr. Dualta Ó Broin

As Mr. Meade has outlined, we are subject to regulation in a number of areas.

That has been pointed out so we will not repeat it and just answer the question.

Mr. Dualta Ó Broin

There have been a number of positive initiatives in self-regulation. However, there are areas which we have identified where we believe further regulation or further regulatory oversight is required.

Does Mr. Ó Broin wish to point those out now?

Mr. Dualta Ó Broin

They are harmful content, political advertising, data portability and privacy. We are quite open about that and open to discussing it.

Ms Ana Niculescu

If we are talking about self-regulation in the area of both harmful and illegal content, we can say it is working. When it comes to harmful content, however, it is an undefined, unclear and complex area and there is scope for further investigation.

The opening statement from Facebook referred to setting up an independent oversight board. Has that board been established?

Mr. Dualta Ó Broin

No, not yet.

When will it be established?

Mr. Dualta Ó Broin

We hope it will be considering its first cases in the first half of next year.

It has not been set up. Mr. Ó Broin said, "That is why we are establishing an external oversight board ...", so it will be set up next year.

Mr. Dualta Ó Broin

Yes.

Who will be on it? All the witnesses will be asked the same question, so I thank Mr. Ó Broin for telling us this. Excuse my manner, but we spoke to people last week and will be speaking to more next week, and ultimately we must tease it out. I am acutely aware of the importance of the right to freedom of expression and the dangers that are inherent in a system that seeks to stop it. I know that and do not need to be told it. I am acutely conscious of what governments can get up to in seeking to stop it. Within that, we must look at people being harmed online, and that is where I am coming from. Now there is an oversight board and it will be set up next year. Facebook is probably ahead of the posse, or are there other oversight boards?

Mr. Ronan Costello

We have a global trust and safety council.

We will come back to that. I will stay with the oversight board for the moment to get my head around it. Who will be on it?

Mr. Dualta Ó Broin

I apologise for taking up the Deputy's time in trying to clarify this.

There is no need. It is me who is apologising for pushing on this. Who will be on the oversight board and what is its function?

Mr. Dualta Ó Broin

We have not released the names of who will be on it yet. Essentially, we are trying to set it up so it is independent of the company. We have engaged in a global consultation campaign, going around the world talking to people and asking them: "How would you do this?" and "How would you set up something which-----

Has it been set up previously for any of the platforms?

Mr. Dualta Ó Broin

Not that we are aware of.

It is the first of its kind, then.

Mr. Dualta Ó Broin

Yes.

It is going to be set up next year and will be completely independent.

Mr. Dualta Ó Broin

The way it will relate to us is that there will be a trust which will act as the go-between. There will be the board, the trust and Facebook. Obviously, we will have responsibilities in respect of providing information and we will provide funding through the trust. We are trying to set this up in an open and transparent way.

What is the purpose of the board and what will it be examining?

Mr. Dualta Ó Broin

That is a good point. The board will be looking at the most complicated and contentious cases. Where a system of regulatory oversight might be able to look at cases that come up daily, weekly or monthly, this board will be looking at the most contentious ones. It will be a small number.

Who will decide what the most contentious cases are?

Mr. Dualta Ó Broin

The board can decide, or users can appeal.

Is this after the damage is done?

At what point will the board look at these very contentious cases?

Mr. Dualta Ó Broin

By its nature, having time to reflect on extremely difficult cases means the board will not act extremely quickly.

I understand. It will have a limited role. If the information has been posted and it has caused harm, presumably it will have been taken down before the oversight board does anything.

Mr. Dualta Ó Broin

Basically, we will make a decision in line with our community standards, as we always do, and then there will be the option of an appeal to the external oversight board.

Is there an ethics board in Facebook?

Mr. Dualta Ó Broin

I will have to come back to the Deputy on that.

What about Twitter?

Mr. Ronan Costello

As I mentioned earlier, we have a trust and safety council, which, if I am not mistaken-----

What is it called again?

Mr. Ronan Costello

The trust and safety council. It is an advisory group made up of non-profit organisations from across the world. It is represented in Ireland by spunout.ie. In the years since it was launched, the members of the group have provided feedback on policy and product development consultations. We have gathered them twice in San Francisco, last year and the previous year, where over two days we recapped the year's progress and looked at the challenges that arise.

That is not independent, is it?

Mr. Ronan Costello

It was set up by Twitter.

It is monitored by Twitter.

Mr. Ronan Costello

The members are asked to join and they can then accept or decline that request.

However, they do not have a role in the ongoing monitoring of harmful-----

Mr. Ronan Costello

They do not review content on a case-by-case basis.

Does Google have an ethics board or an oversight board?

Mr. Ryan Meade

We do not have similar structures, but we follow similar processes when it comes to developing our policies, convening NGOs, academics and others. We also work on an ongoing basis with our community of trusted flaggers. These are NGOs and experts in particular subject areas who are able to bring matters to our attention through a privileged route and help us to respond to issues more quickly. We are certainly looking with interest at the different models. At the moment, we do not have a formalised structure of that nature.

Amnesty has made some findings about Twitter's non-publishing of data. I ask for clarification. Having the maximum data available, and transparency, is in all our interests, helping us to formulate policy and legislation. We asked the same questions last week about data collection by the Garda. What data about harmful content has Twitter published? Where can we find it? What can we learn from it?

Ms Karen White

I thank the Deputy for the question. We have made great advances in the area of transparency.

Perhaps we will judge that and Ms White can tell us. Is Amnesty wrong in claiming that Twitter has not published the data? Has it changed? Has Twitter published or not?

Ms Karen White

Yes, we have.

Good. Where can we find that?

Ms Karen White

It can be found within the Twitter transparency report. We have published that report biannually for more than eight years, having taken our lead from Google, which first published a global transparency report. Within it can be found specific data relating to law enforcement requests for information, government requests for the removal of content-----

Therefore, this is a global transparency report.

Ms Karen White

Exactly.

Global information.

Ms Karen White

Exactly, and that is broken down by country. Regarding-----

To be parochial, is it broken down in the context of Ireland?

Ms Karen White

For certain types of content.

What types of content are included for Ireland?

Ms Karen White

It would detail the information requests that we have received from law enforcement, where law enforcement is requesting personal data about a particular individual, or where it has requested the removal of certain types of content. Similarly, those data exist for Government requests we have received.

Can people access that report on Ireland?

Ms Karen White

Yes.

How long is the report?

Ms Karen White

It is housed online. We do not have physical copies as a result of the amount of data it contains.

It is a global report but it is very easy to navigate.

I am not concerned about the navigation; I am interested in the content. What information will it give me on the types of harmful content on Twitter's platform? How frequent is it? What actions are taken to remove it? We need to know this in order that we can learn.

Ms Karen White

I completely understand that and it is very important. As stated, it is broken down very clearly by the different types of request for data that we have received.

These would be requests from the Garda.

Ms Karen White

Exactly. There is also a specific section that relates to terms of service enforcement. I previously referred to figures relating to the number of reports we have received - more than 11 million across six different categories - on matters ranging from our safety policies, including hateful conduct and abusive behaviour, right through to child sexual exploitation and terrorism. It details the various figures with regard to reports received and actions taken.

I may be wrong in this, but I understand from the Irish Council for Civil Liberties that Amnesty International requested Twitter to publish data on the abuse perpetrated on its platform and the company has failed to do that so far. Is that a wrong statement?

Ms Karen White

As I say, within the Twitter transparency-----

Is that a wrong statement?

Ms Karen White

We have had a long-standing partnership with Amnesty International and have taken on board many of its recommendations, not just in the context of transparency but also in relation to disclosure about our own terms of service and rules enforcement. One of those recommendations relates to increasing our transparency over the terms of service enforcement figures included within the Twitter transparency report. At the beginning of 2018 - it may have been in the previous report - we began disclosing more details about our own terms of service enforcement related to things like hateful conduct and-----

What is the nature of the harm on any of the platforms? I am just focusing on Twitter.

Ms Karen White

Sure.

What is the nature of the harm? What data is Twitter publishing on that harm? What is it learning so that it can prevent that harm? That is the type of report I would like to see in a transparent manner. Is Twitter doing that? Is Amnesty happy with Twitter in their collaborative talks?

Ms Karen White

I cannot speak-----

Does Amnesty believe Twitter is making great progress?

Ms Karen White

I cannot speak on behalf of Amnesty, but those data are very important because they tell us a number of things. On the back of many of the enforcement actions we have taken, we have seen a 16% drop in reports relating to abusive behaviour. The data certainly show that our enforcement actions are having very positive results.

I have difficulty with all of that. I preface what I say by saying that freedom of expression is vital. We interfere with freedom of expression at our peril. Within that framework we need transparency; we do not need jargon. We need honest communication of information. I am not hearing that. I would not go away reassured that the self-regulation by any of the companies represented here is working. I have listened to the submissions - I will stay on to listen to the others - and they seem to be reactive. I am hearing that in the past year things have been set up. Twitter has referred to robust policies in the last year. The reaction seems to depend on some outrage on our part over somebody suffering. I will not go into any of the personal details. That is what I am hearing. In addition, our guests all referred to the e-commerce directive, which is greatly outdated. None of them mentioned that it is greatly outdated. They have all used it as a shield. I understand that new legislation is about to be introduced. How will that affect the companies represented here? Have they looked at it?

I interrupted Ms White. What is the nature of the harm? Does it appal her on a personal level? Does she say, "Good Lord, if it was my child accessing that ..." or whatever? Can she point to things Twitter has done as a result?

Ms Karen White

Just to clarify, the objective of the Twitter transparency report is to provide transparency for people so that they know the scale of the enforcement actions we are taking as well as the unique reports we are receiving. I appreciate why there may be the impression of reactivity when we say that we have introduced measures in the past year.

Sometimes we are reacting to the changing behaviours we see online. Twitter is a window to the world. I have worked with Twitter for more than five years and nobody could have anticipated how online services and the Internet in general would be used by terrorist organisations.

I understand that it is a very fast-changing area. I appreciate the fact that Twitter is before us and I hope its representatives will come back again because we need to work together. However, I would have been more impressed if they had stated that the e-commerce directive was greatly outdated. It is the law at the moment and we are complying with it but, within that law, we are passive and there is no obligation on us to be proactive. None of our guests stated this. They seem to be saying they only have to act when it is brought to their attention. Few of them were even born at the time the directive was introduced.

Facebook has a proactive approach to teachers. It has a programme involving over 100 teachers and the first set of training sessions started in September. We are commencing something in response to the outrage. How many teachers are in Ireland and how many will take part?

Mr. Dualta Ó Broin

It is going to be offered to every post-primary school. It is one example of a number of things we are doing globally. We approach any issue through policies, partnerships with experts and enforcement. Political advertising is an area where we felt we had to act. We identified a harm and we went beyond anything required in legislation.

What is required in legislation is between minimal and non-existent, so it is not really a positive to state that Facebook is doing more.

Mr. Dualta Ó Broin

I am making a point in response to the question of whether we are reactive or proactive in addressing harmful content on our platforms.

What about Google?

Mr. Ryan Meade

I was born when the e-commerce directive came into being. In fact, I was working in this industry at the time.

I was not referring to Mr. Meade personally.

Mr. Ryan Meade

I am sorry. I had taken it as a compliment. Technology moves on and legislation needs to be reviewed, which is happening. Any review needs to look at why legislation exists in the first place and whether there are good things about it that have had positive impacts. I trust the legislators will do that.

As we try to balance the challenge against freedom of expression, which is crucial, it would help if providers came forward to identify the problems, their extent and their context, but they are not doing that.

Mr. Ryan Meade

It is difficult to do it in this context. We will commit to looking at other ways in which we can do it. Under the code of conduct on hate speech, we have regular monitoring where NGOs, including Irish NGOs, sit with us on a quarterly basis and look at how we deal with the material. Google is looking at ways to provide extra transparency. We launched a transparency report a number of years ago, to which we have added material over time.

I agree that we need to work together. Some 20 years ago, the e-commerce directive involved a complex balancing of rights and freedom of expression. It is appropriate that we continue to look at that balance and that legislators, companies and other stakeholders get involved in it.

As service providers in this area, we are hoping to get across our experience of the benefits of the legislation for the open Internet in general.

I wish to focus on the moderators for a few minutes. I do not know what they are called now but I refer to the people who have to look at the terrible images to which we referred. Can we drop the phrase "revenge porn"? It is not acceptable to me. It is used in one of the documents we have seen today. It is not only I but other organisations which have drawn attention to it and asked for it not to be used. They are sexually explicit images of abuse, not revenge porn. I would not like to be the person who had to look at these images. Can Facebook, Google and Twitter tell me if the people in question are part of their entities? Are they outsourced? What do the companies learn from them? Have they listened to them? What is the mechanism for them to report back on terrible images they have seen? I ask our guests to be precise because the Chairman will stop me in two minutes.

Mr. Dualta Ó Broin

We refer to them as our community operations team and there are 30,000 of them.

Are they directly employed by Facebook?

Mr. Dualta Ó Broin

There is a mixture of people who are directly employed by Facebook and contract staff.

The latter are outsourced.

Mr. Dualta Ó Broin

Yes.

What is the breakdown?

Mr. Dualta Ó Broin

I do not have it to hand.

Can Mr. Ó Broin send the details into the committee?

Mr. Dualta Ó Broin

In Ireland it is 50-50, with a total figure of 5,000.

Are the outsourced staff and direct employees paid the same rates?

Mr. Dualta Ó Broin

They have different roles. A person who would be doing the same role as an employee of Facebook would be paid the same but-----

I know Facebook is a for-profit company but Mr. Ó Broin's answer is difficult for me. One of the most important ingredients of this is the person who has to watch these things and make decisions. Facebook, however, is outsourcing the function and paying people less. Mr. Ó Broin is doing all sorts of things with lovely language.

Mr. Dualta Ó Broin

They go through an extremely long process of training before they start reviewing content and they have psychological supports on hand around the clock.

I read that but how can the company monitor that if half of these people are outsourced?

Mr. Dualta Ó Broin

We have very close relationships with our outsourcing partners.

I have a close relationship with my husband but that does not mean I know everything. I sit on another committee and organisations which come before us have an obligation to give us information, just as we have an obligation to put them under pressure. However, the three companies here today are for-profit organisations. Can Mr. Ó Broin give me the information on how many people are outsourced?

Mr. Dualta Ó Broin

I have given the figure, as I understand it, for Ireland. Out of 5,000 staff in Ireland, roughly half are outsourced.

Some of these have the same roles and others do not. We do not know how the outsourced staff deal with this terrible stuff. Does Facebook have a mechanism whereby they can report back and maybe suggest different ways of doing things?

Mr. Dualta Ó Broin

We take our responsibilities towards those employees extremely seriously. I can submit to the Chair written details of exactly how we approach them.

What is the situation with Twitter?

Ms Karen White

We refer to these individuals as colleagues. They are employees of Twitter and many of them sit beside me in our Dublin office.

They are all employees of Twitter.

Ms Karen White

Yes.

Twitter does not outsource.

Ms Karen White

Some of them would be contractors.

So they are not employees of Twitter.

Ms Karen White

There is a combination of both.

Can we have the breakdown?

Ms Karen White

We have more than 1,500 people globally who work on issues related to moderating the content on the service. It is very important to point out that our approach relies on a combination of human review, that is, those who are reviewing the reports, but also-----

I understand that, and artificial intelligence and so on.

Ms Karen White

It is a very important point to make in this context.

No, what I am asking, and I will ask Google the same question before the Chair stops me, is for the breakdown in Ireland. What are the terms and conditions? How does Twitter monitor how its very valued colleagues who are outsourced manage this terrible job? In fact, they are not colleagues in the case of Twitter because they are outsourced. What reporting mechanism is available to these people? These are factual questions.

Ms Karen White

We have people reviewing content in more than eight different offices from San Francisco to Dublin to-----

There is an office in Dublin. How many people work in the office in Dublin?

Ms Karen White

I do not know off-hand the specifics of how many people review content in Dublin.

Could Ms White share that information with the Chair in due course?

Ms Karen White

Absolutely.

Can she provide a breakdown of how many of these people work directly for Twitter and how many are outsourced? Is that possible?

Ms Karen White

Does the Deputy mean globally?

Particularly locally, but globally would be good.

Ms Karen White

I would be happy to follow up with the committee in that regard. We put very strong employee resilience programmes in place for-----

I am sure that is the case but these are obviously the most important people.

Ms Karen White

Yes, I agree.

An intelligent organisation would cherish them, treat them very well and learn from them. There would be a proactive approach to this terrible, harmful content that we need to deal with. I do not hear that happening proactively. I am sorry but I have to move on to Mr. Google, so to speak.

Ms Karen White

If I could just make one point.

Of course but the Chair is stopping me.

We will conclude with Mr. Meade and then we will have to move on.

Ms Karen White

I think the Deputy is drawing the implication that we do not cherish these people, who are doing an incredibly difficult job. We do cherish them, because we appreciate how difficult that job is, and that is why we put resilience structures and programmes in place. I just want to clarify that.

Many witnesses come here and clarify and reassure but I am never reassured. I like to see it on paper. I like to see the breakdown and the protections that are in place.

The witnesses will provide that information. Mr. Meade will be the last to respond.

Mr. Ryan Meade

I do not have a breakdown with me so I will not be able to provide one today. We have a pretty large trust and safety presence here in Dublin. This is our team that develops and enforces our policies. They work with vendors who provide moderators - human reviewers - who look at content. It is done both by full-time Google employees and staff employed by vendors. Just to explain briefly-----

What is the breakdown?

Mr. Ryan Meade

As I said, I do not have a breakdown.

Will Mr. Meade be able to provide a breakdown?

Mr. Ryan Meade

I am not sure but if I can I will, certainly.

One of the big reasons we use vendors is that, as other speakers have said, the nature of the harms and threats that need to be dealt with varies over time. At a particular point in time, there may be a great need for language skills related to a particular location or coverage in a particular area. It is very important that we work with vendors so we are able to scale our activity to provide both 24-7 coverage and coverage in all of the languages in which this material might arise.

I am not reassured by those types of answers. I have to tell Mr. Meade that.

Mr. Ryan Meade

If the Deputy could tell us what would reassure her, we will do our best.

We are out of time.

I will sum it up. We would appreciate it if the witnesses could provide in writing the breakdown of the statistics in respect of personnel employed and their roles and responsibilities, both domestically and globally, if possible, in each of the three companies. If that information can be provided through the Chair, we will circulate it to the members. It will also be helpful to members in our consideration of our report and recommendations to have a better understanding of each company's respective profile.

I thank all the contributors for what they have shared with us this morning. I read in detail all the written contributions. I will start with a few questions on the issue of explicit images, which, according to the submission, is the main focus of hotline.ie.

While child sexual abuse material seems to be its focus, I imagine hotline.ie also receives reports of more general sexual abuse material. The witnesses emphasised that child sexual abuse material is clearly illegal and therefore the issue of consent does not apply. I would like to tease out the issues of consent and freedom of expression and where the platforms stand in that respect and also how they deal with those issues. We all regularly come across content online which is not considered appropriate or normal. We do not know from looking at this content whether the people depicted in the images gave their consent. A couple of decades ago, page 3 of many daily newspapers featured a picture of somebody wearing very little or no clothing. That was considered normal at the time but it is no longer considered normal and these pictures no longer feature. One of the reasons for this is that much of this material has moved onto the Internet. The reality is that society has deemed it not to be appropriate. If it is not appropriate in a newspaper, surely it is not appropriate in formats like the various Internet platforms. I understand completely that what makes these platforms unique and what makes them work is freedom of expression and the ability to put something online immediately. At the same time, there is an element of responsibility around this, including as regards consent. I would like to get Ms Niculescu's views because we have not heard from her in regard to checking that consent, or whether consent is checked regularly, in respect of images other than child sexual abuse material.

Ms Ana Niculescu

I thank the Deputy for his question. The mandate of hotline.ie is to handle reports of suspected illegal content. In assessing those reports, we have to apply the provisions of the Child Trafficking and Pornography Act. If content does not meet the threshold which is clearly detailed in the Act, it will not be classified as child pornography. We prefer to use the term "child sexual abuse material" because of the serious nature of the content. As I mentioned in my opening statement, we are talking about infants to 12 year olds. We also get reports in respect of self-generated content but in terms of the age group, that would involve those aged over 14 years and it is not as severe as the sexual abuse of a two year old infant. I am not sure that-----

If hotline.ie gets material that involves a young female aged between 14 and 18, what does it do?

Ms Ana Niculescu

If the imagery meets the threshold which is set out in the Child Trafficking and Pornography Act, that image or video would be classified as child pornography under the Act. If it does not meet the threshold, it is not illegal. The hotline would not be in a position to act against that image.

So hotline.ie is very much determined by whether the material falls into the category of being legal or illegal in regard to child pornography and so on.

Ms Ana Niculescu

Yes. If it is classified as child pornography, it will be actioned irrespective of where the content is hosted in the world. If it is not classified as child pornography, no further action will be taken.

In regard to the companies that also come across such material, if such material falls outside the category of child pornography but is close to it, what actions do they take?

Mr. Dualta Ó Broin

I ask Ms Rush to respond.

Ms Claire Rush

This is a good example of where there is a threshold of illegality, and there is no question about that. We all agree and know what type of content is illegal. We have experience, however, of seeing different types of content being shared and uploaded on our services and platforms, some of which is equally harmful and has no place on our platforms. I am referring, for example, to an area such as the non-consensual sharing of intimate imagery. Even if an image was initially taken with consent, it may not be shared or posted on our platform subsequently without the consent of the person originally involved. We would take such an image down and also bank it to prevent it from being uploaded again. Some of the content referred to earlier, such as implied sexual or sexually suggestive content involving minors, would also breach our policies and we would take that down too.

From my knowledge, and listening to the earlier conversations, it is being suggested that people putting up that kind of content are abusing the Facebook platform.

Ms Claire Rush

Yes, that is the case if they are breaching the rules. Some people, however, do it unintentionally, such as minors uploading something and not realising what they have done. We would not necessarily apply a penalty to their accounts. We might just send educational messages to let those people know that sharing that type of content is not appropriate and that they should be more careful and refrain from that type of behaviour in future.

All the various platforms and companies have large legal sections. How many cases have been taken against users who have abused the platforms in that way? Has Facebook made such legal challenges?

Ms Claire Rush

There could be a whole range of situations. If we are talking, however, about someone uploading harmful or abusive content on our platform that is illegal, then it is for the State to take an action if someone has breached criminal law.

This involves criminal law only. Facebook does not-----

Ms Claire Rush

Is the Deputy asking if we take legal action against someone who has breached our policies? No, we do not.

Facebook does go to civil law in such instances.

Ms Claire Rush

No, that is not the position.

For example, let us take a case where I put something up about Deputy Connolly suggesting she is involved in criminal activity. She could take a civil case against me, even though that content appeared on the Facebook platform. Deputy Connolly, in that hypothetical scenario, however, could not take a case against Facebook. Equally, it seems that Facebook would also not take a case against me for abusing the platform.

Ms Claire Rush

It would be difficult for us to get a successful suit in that instance because there has just been a breach of our platform rules. If the content shared was lawful, even if it breached our policies, I do not think taking a civil action against each and every user who made that misstep would bear much fruit.

Before Ms Rush goes on, Ms Niculescu had indicated. She may have something to say that is relevant to this theme.

Ms Ana Niculescu

I want to clarify one thing regarding the reports we get. In most cases, the context is lacking, so we need to judge the image on its own. Sometimes we only get one image reported, with no comments or further description from the person making that report. In assessing that one image, we need to ensure that the image itself is manifestly illegal and meets the threshold set out in legislation. When there is more context, then we can make judgments regarding inappropriateness. When we are talking about harmful content, the language we use is very important as it helps us to conceptualise the problem and also to forge responses. We must ensure we are referring to the same thing. To avoid any confusion, hotline.ie only deals with illegal content online. We think more can be done regarding self-generated content and content which may be age inappropriate, but that does not fall within our current mandate.

Who set out that mandate?

Ms Ana Niculescu

The Department of Justice and Equality.

That is interesting.

Would Deputy Kenny like a response from the other service providers?

I ask the other witnesses to address Deputy Kenny's range of questions, from where the representative from Facebook left off.

Ms Karen White

On non-consensual nudity, which is how we classify this type of content within our policies at Twitter, I mentioned "revenge porn" because it is a commonly known phrase. We have an absolute ban on the posting of that type of content on our services. There are times when users post rewards or bounties for the posting of that type of content. In those cases, we institute an outright ban on those accounts and those users are suspended from the service. We take that issue very seriously.

Twitter, however, has not taken legal action against anybody.

Ms Karen White

No, we have not.

Mr. Ryan Meade

Google has a range of products with different policies. Regarding YouTube, which is our primary content-hosting platform, we do not allow pornographic material of that nature. It would be in violation of our terms. On non-consensual sharing of explicit imagery, that would fall into a narrow category of material that we would delist from search results. Our Google search engine is an expansive index of the web and we do not apply our content policies to what is on the web more generally. If the material, however, can identify a person or is personal information that has been shared without consent, we would delist that content from being searchable.

How is consent checked?

Mr. Ryan Meade

There is no question that-----

That is the big issue.

Mr. Ryan Meade

There is no easy answer to that question. We have to take account of the facts available to us and make a determination. In many cases, not all of the facts are readily available.

Again, Google has never taken legal action against anybody.

Mr. Ryan Meade

That is not our practice. If users breach our terms of service, the penalties for that are set out in those terms of service. I am not a lawyer, but I am not sure we would have a legal basis to take court action, because there is no contract as such.

It just seems bizarre. We are in here and have a certain legal privilege because we are in the Houses of the Oireachtas. It would seem the Internet almost has a similar legal privilege in that people seem to be almost able to do whatever they like and nobody can touch them. That seems to be the case, and the companies responsible for putting up and hosting this material are not prepared to take legal action.

Mr. Ryan Meade

Legal recourse is available to users. There is no question about that. Similarly, we often get requests to provide information if a legal investigation is launched by law enforcement agencies. We will co-operate with those law enforcement agencies based on the appropriate judicial process. Some cases involving online as well as offline behaviour reach successful conclusions in the courts. Our view is that what is unacceptable offline should be unacceptable online as well and the law should reflect that. One of the reasons we are here today is that the committee is considering whether the current laws which apply to offline behaviour are sufficient against similar behaviour online. That is a worthy endeavour.

The other area I want to move to is the area of hate speech, abusive commentary and racism. We have all seen live feeds of people going around towns in Ireland, making comments such as "look at all of the black people in this town, isn't this terrible?", and using that in a way to enable the promotion of a certain type of racism. It is almost like the modern version of the Ku Klux Klan has found a new place to operate on the Internet and they seem to be free to do so. Many complaints have been made about several individuals and organisations that engage in that type of behaviour. Very little seems to be happening to address this issue because it is ongoing.

We can look it up right now on the computer. I am sure we will find those people are there livestreaming some commentary which is very racist, abusive and dangerous. I do not understand how the organisations represented here today cannot take responsibility. If they are responsible, they have to extend that responsibility beyond just stating that they will take down such material. It then happens the next day and is taken down, and then the next day and so on.

At what stage does a company reach a point when it accepts that it must go further than simply continually removing the content, and that it needs to be proactive rather than reactive? There seems to be no limit where that line is crossed.

To whom is that question addressed?

I would like Facebook to answer first.

Mr. Dualta Ó Broin

I will not refer to any particular case. When something meets the threshold of hate speech, that is one category of content which would violate our community standards and be removed. There are several different nuances within the type of content being shared on the platform. There is material that obviously violates the terms, but we must also respect that there may be political views within that, criticising asylum seekers or the migration of people. It is not a clearcut process where one can run through each and every piece of content and say that it is all hate speech and must all be taken down. We take this very seriously, which is why we review each of these cases extremely carefully. Having said that, we have said where our line is. We are open to hearing whether our line should go further. I will ask Ms Rush to speak on the legal situation in Ireland on hate speech. Facebook has set out its policy and we enforce against it. If there is a view that we are not doing that, or that we should take down more, we are open to hearing what that would look like.

Ms Claire Rush

As Mr. Ó Broin mentioned at the outset, hate speech, as a large bucket of harmful content, is one area where our policies arguably go further than current legislation in Ireland. That is an area that might be ripe for future consideration. It can be difficult when reviewing a piece of content without the surrounding circumstances to know whether, in theory, it would rise to the level of incitement. It is very hard to know that when there is a vitriolic or malicious attack on a person. Our position is that making derogatory threats or statements or dehumanising speech is not allowed, even if the content is perfectly lawful within a jurisdiction, and we will still take it down if it violates our policies.

One example of a fairly successful self-regulatory regime is the EU code of conduct on countering illegal hate speech. It tests us on our responsiveness in taking down illegal hate speech as swiftly as possible. We go through rounds of testing, most recently at the end of 2018. After that test, the European Commission observed that the most serious types of hate speech, that is, credible threats, had the highest rate of removal, at 82%. With more nuanced speech, where it was difficult to determine whether it constituted incitement or merely insulting language, a greater balance had to be struck, and it seemed as though the signatory companies were taking that balancing exercise very seriously when assessing the content.

That is just taking each piece of content. If the same content is coming from the same organisation or individual, is there precedent for the company to say that this organisation or individual will no longer have access to the platform? That would be a start.

Ms Claire Rush

Yes. Exactly.

I suggest that the companies should go further than that, but that they should take court action against them for abusing the platform. Facebook is saying it does not agree with that.

Ms Claire Rush

We do not allow them on the platform, so there is no case to take. If criminal, dangerous or hate organisations start to have a presence on Facebook, we will remove them immediately.

I do not mean this flippantly, but Deputy Martin Kenny referred to the KKK earlier. If it wanted to set up a benign page which had nothing whatever to do with hate speech, we would take that down as its ideology is contrary to our principles and policies. We would not allow the organisation itself to have a presence on the platform, nor would we allow any other user to engage in praise, support or representation of that organisation.

We will move on to Twitter next. First, I noticed that Deputy Pringle left. Perhaps someone might tweet or text him as we are taking a photograph with our guests at the end and I do not want all the members to go so that the Chairman is the only one left standing. I invite all the members to regroup. We publish photos with our witnesses in our reports at the end.

I will let him know.

I invite Twitter to respond to Deputy Martin Kenny.

Ms Karen White

We have talked quite a lot about hate speech and how we apply our policies at Twitter. Twitter has a hateful conduct policy. It outlines that it is against our rules to promote violence against, threaten or attack others on the basis of a range of protected categories such as gender and race. Twitter's focus is on behaviour over content in particular. It is much easier to identify certain types of abusive behaviour, and to do so at scale, than to look solely at hate speech, which is highly subjective and often context dependent.

Twitter is somewhat unique in that it is a public communications platform. We are looking at the behaviours of different types of accounts. Our systems can proactively detect if, for instance, one account is @-mentioning another account repeatedly where that account does not follow them. That might signal to us that someone is engaging in some level of abusive behaviour, or that there is some sort of dog-piling attack on another account. Sometimes we take a variety of these signals in combination to give us an indication that we need to review an account for hateful conduct or abusive behaviour.

I can assure the Deputy that our policies are applied consistently across the board. When violations are found, actions are taken. There are various actions we can take, which I outlined, beginning with the outright suspension of an account. By leveraging our own technology, we have vastly improved our ability to stop repeat offenders coming back to Twitter by setting up another account, and we are finding new ways to stop them returning to engage in the same behaviours. There are those on our service who set up what we classify as sole purpose accounts, which engage in nothing more than abusing others. That type of behaviour is reviewed against our policies and we take appropriate action.

We also introduced a change to our hateful conduct policy on dehumanisation. Research shows that dehumanising language leads to offline violence. We have made a change to our hateful conduct policy in regard to religious groups and we will continue to iterate on it.

Mr. Costello might talk to some of our efforts with NGOs with whom we work.

There seems to be a contradiction in what Ms White says. Earlier, in reply to Deputy Connolly, she spoke of the idea of counterspeech, that Twitter allows a conversation to happen so that others with a counter view can come in and have their say. That might be all right at one level, but at another I wonder if it is part of a big picture which Twitter is trying to create.

From the point of view of the platform gaining traction and popularity, be it Twitter, Google or something else, if there is controversy, a heightened conversation or a major debate, it means people are interested, watching and keying into it. As such, creating these two counterviews and giving them amplification helps the platforms gain traction. Many people have the view that, although the platforms claim they are watching over something, they are really just letting it run. To date, there has been at least some evidence of this.

Ms Karen White

It is unfortunate that that perception exists. We take on board that it is our responsibility to try to change it and to educate people on the work the company has undertaken over the past couple of years to improve the health of public conversation on Twitter, and on the various mechanisms we have put in place to help empower our users so that they feel they have a safer experience online. There is still work that we need to do as regards perceptions.

There is no business incentive to leave rule-violating content on our service. Counterspeech works. There are different categories of content that we address. There is content that is illegal and content that is not illegal but is in violation of the Twitter rules. In certain jurisdictions, our terms of service and rules go far beyond the laws. Then there is content that is neither illegal nor a violation of the Twitter rules. We find that counterspeech in particular can have a positive impact on that type of speech, as it is a space where people offer diverse views that challenge prejudices, intolerance and racism online. They call others out, saying that certain types of speech are wrong, and show a more tolerant and respectful viewpoint. It is within the context of speech that is neither illegal nor a violation of the Twitter rules that counterspeech can have a positive impact.

One of the difficulties is that all of the platforms, in particular Twitter, provide an abrupt form of conversation. It is short. One cannot get into the detail of an argument on Twitter. One just puts out a paragraph at most. From that point of view, it is difficult to see how counterspeech can work, given that all one ends up with is a whole lot of people shouting at one another. Ms White called it a "perception", but for most people it is the actual experience.

I understand the Chair wishes to get through this, but the other issue is-----

It is not just the Chair. We have to vacate this room by 1.15 p.m. The Chair has questions and Senator Higgins will be back to avail of her opportunity.

I wish to raise a couple of other issues. Young people in particular use many of the platforms. We hear of young people who are self-harming, sometimes to the extent of suicide. Much of that comes from pressure they have been put under online. We often discover in the aftermath that various complaints were made but the material was not removed in spite of people's requests.

The committee is undertaking work on the issue of spent convictions. The people in question have moved on with their lives. Someone in my area had mental health issues when young and got into trouble. However, if one were to Google that person's name today, a media report would come up. There was a request to have that search result taken down, but the response was that it was in the public interest to leave it even though it was hurtful and damaging to the individual in question. Anyone looking at this logically would not see a public interest in the matter at all.

A large amount of work is to be done by all of the organisations before us if they are to gain people's confidence in these matters. How and where are the organisations moving as regards the right of a person to move on with his or her life and the issue of young and vulnerable people being victimised online?

I would like a quick response from each of the three organisations. We will take Mr. Meade first.

Mr. Ryan Meade

I might also answer the Deputy's previous question. He was right to mention livestreaming as a category of content that needed special caution. At YouTube, we have preliminary hurdles that people need to jump through before they get the ability to livestream. For example, they need to have a certain number of followers and cannot be brand new. They cannot just create an account and suddenly start livestreaming. If there is a violation of our policies, their ability to livestream can be taken away. We have taken the approach that livestreaming is something that needs special attention.

Regarding the types of livestreaming the Deputy mentioned, he may or may not be aware that we updated our policy on hate speech this year, which has resulted in violations by people who were livestreaming. In one fairly high-profile case, we terminated the account. The Deputy may have seen that there has been quite a reaction to that. This shows that there has been progress in the development and implementation of our policy. There is no question but that we will suspend accounts and remove people's opportunity to use these tools if they violate the policies.

The question on spent convictions is an interesting one, in that it cuts across privacy law. The right to be forgotten has been established in European privacy law. The practical implication of this is that platforms, including Google Search, must assess requests from users to delist information. The information may still be out there on the Internet, but we must assess the request to delist it from our search engine. The law also requires us to apply a balancing test. As such, there has to be a consideration of whether other issues need to be taken into account. Naturally, any decision we make on this basis is subject to dispute, in that people may say that we made the right or wrong decision. In such cases, there is recourse to the Data Protection Commission, DPC, as the matter is handled under privacy law. The individual in question can make a complaint to the DPC if he or she is not happy with our decision. As Mr. Ó Broin stated, private companies like us may not always be the right arena in which to make such decisions, but that is what is required of us under the law. We need to consider requests as they come in, apply the test and make a determination on whether information should be delisted.

Perhaps the Twitter delegation will address the last question.

Ms Karen White

There are a variety of ways in which individuals who use our service can request that content be removed. Each request needs to be reviewed on a case-by-case basis. For example, there are procedures to deal with claims of defamatory content.

Either Mr. Ó Broin or Ms Rush may respond now.

Mr. Dualta Ó Broin

It is similar for Facebook. There are a number of ways in which content can be removed from the platform.

We all take the issue of suicide and self-harm extremely seriously. I am happy to follow up with the Deputy on the steps we are taking in that regard.

I thank Deputy Kenny for his questions. As everyone can see, Senator Higgins has been called to a vote in the Seanad, so we will juggle. I will take my own opportunity at this stage.

Senator Higgins is anxious to be able to ask her questions.

Of course. She was present for quite a part of the meeting and is well entitled to that.

It is important to acknowledge that, while the focus of our engagement over a number of hours today has been on what we all accept are the worst excesses of human usage of each of the platforms' sites, the services they provide are very valuable, appreciated and utilised responsibly by the overwhelming majority of their respective customers.

That needs to be said because, otherwise, we are giving a very skewed image and perception of the respective sites with which our guests are involved. I am of the view that this acknowledgment is important, but the focus of our address is harmful online activity. Although we are some steps away from producing our report and recommendations, I get the sense that there is a general unified opinion coming from all the members that the platform companies are facilitating significantly harmful postings, including criminal acts, that have given rise to serious injury and worse. I hope our guests are equally committed to addressing and, where possible at all, eradicating that activity.

I wish to reflect on the contributions of those who appeared before us last week. They were very strong in their recommendations. The committee is very mindful of the importance of working together. The respective component parts of the industry must work with legislators to try to reach a better situation where safety is paramount, most especially in respect of children, but where there is safety for all users if at all possible.

Some of our guests keep referring to rules. There has been talk of Twitter rules and Facebook rules. When the rules have been broken, the companies respond. It is almost a competition from our perspective. In many of those instances of rule-breaking, and most certainly in the most serious of the cases that have been reported to us, we would like to see a finding that people have broken the law. It is not enough just to regard some of this content as inappropriate. We would like to see some of the cases that have been related to us treated as involving illegality.

I am mindful of the contribution last week of the Irish Society for the Prevention of Cruelty to Children. I am very grateful to Mr. John Church, its chief executive. I will just instance one of the cases that was brought to its attention. A 16 year old girl told Childline that she had sent images to a former boyfriend, who then shared them with others without her permission. With the images circulating widely, the girl told Childline she could not face going back to school and was contemplating suicide. I do not expect a response on that today but, in the context of the exercise we have embarked on, we are very mindful of the harm, hurt and serious consequences involved and we want to try to find a means of address. We have only the power of recommendation to the Minister and the Government, but we will not shirk our responsibilities in outlining the strongest recommendations if we believe they are the most appropriate.

Professor Joe Carthy was mentioned. He is the founding director of the UCD centre for cybersecurity. He made an analogy with automobile legislation. Every car has a number plate. Every car is registered to someone. He said that while it is perfectly acceptable to use the Internet in an anonymous fashion, it is not acceptable to abuse, bully or libel others anonymously. He said it is akin to allowing people to drive cars without number plates.

This is a key area in respect of which legislation could make a huge difference. Professor Carthy believes that we should require social media platforms and online providers to register their users and that users should provide appropriate evidence of identity, and not just after the event or in following through when something outrageous has happened. He stated that users who can show that they have been bullied, harassed or abused on a social media platform should be legally entitled to find out the identity of the user carrying out the inappropriate behaviour. I would welcome a brief response to that. I know that if Professor Carthy was here he would like to ask our guests about this matter. I invite a response from Mr. Ó Broin. Our guests have probably all looked at a recording of last week's meeting so they know what I am talking about.

Mr. Dualta Ó Broin

It is an issue that is referenced in the Law Reform Commission's report. I refer to the ability of people to identify who is posting information about them.

Yes, and the platforms having a statutory responsibility to ensure that whoever is opening up a new Facebook account is who he or she purports to be.

Mr. Dualta Ó Broin

I will ask Ms Rush to comment. As I understand it, however, we provide that information to a third party through something called a Norwich Pharmacal order, which gives us a legal basis on which to share information about an account. That is the current situation in Ireland.

Ms Claire Rush

I do not have much to add except to say that there is a process already in place for that, with which we are compliant, if we receive an appropriate order through a judicial process. Deputy Howlin wants to place that on a statutory footing in his proposed Bill. That is something we already comply with in any event and do so in a large number of cases. We are open to discussing how that would work in practice.

Ms Karen White

We follow similar procedures in that respect, with due legal process and working with law enforcement, for example, if we receive a court order. We have already had much debate with other members of this committee about the introduction of verification and some form of digital ID. The European Commission is looking at the issue. It must weigh up the benefits of introducing a harmonised system of digital ID across Europe. We are very much looking at how something like that could be applied globally, because that is what we would need to do.

I have mentioned some of the challenges in respect of collecting additional data, for example, government ID. Introducing that kind of verification mechanism, where we might have to register individuals who use our service, could also have severe implications for human rights activists and dissidents, and in countries like Russia, where similar requests have been made to us. There is a balance to be struck and much more consultation and debate are needed on the benefits. I understand many of the arguments made by Professor Carthy. I looked at last week's session. We take the points the Chairman has made on board. Unfortunately, there is no simple solution, but we can commit to continuing discussion and collaboration to find a solution in this very important area.

We will accept that "collaboration" is an important word in the context of these matters.

Mr. Ryan Meade

I refer back to my earlier response to Deputy Jack Chambers. The question of proportionality is really the big issue when it comes to verification of accounts. There are cases in which it is a proportionate response to require verification of accounts, but I am not sure we would go so far as to say that universal verification of accounts for anyone engaging in activity online is a proportionate response. On the other hand, in many ways, that is for legislators to decide. Professor Carthy may have a different view and he may feel that is proportionate. That is a matter for debate.

The only thing I would say is a variety of voices would need to be heard on that issue. As the Chairman rightly pointed out earlier, we are focusing on the activity of a minority and the worst activity of that minority. There is a vast corpus of Internet users out there who are benefitting from these services daily and their interests must be taken into account as well. Apologies for the fudge, but I encourage legislators to look at the proportionality of that as a measure.

That is okay. I hear what Mr. Meade is saying. I wish to make one other point. Senator Higgins need not worry; she will get her bite at the cherry.

The representatives from Facebook cited, in their opening statement and in their contribution since, the extent of their response to material that is inappropriate and that is brought to their attention internally or reported to them. I am going to refer to a particular case of which I am aware, and to at least two Facebook pages that existed for a number of years. I refer to the recent outrageous and dreadful attack on Kevin Lunney, a director of Quinn Industrial Holdings. At least one of the Facebook pages in question referenced, by name, the senior executives of the company and allegations were made against them. Other posts included mock-ups of posters calling individuals traitors and warning of consequences. The intention in this regard was to misrepresent and label those executives. Some of them were referenced on the page.

When I sat with these people, including Kevin Lunney, earlier this year, I was mindful that we were embarking on this particular exercise. My question would have been put to our guests in any event, regardless of what happened to Kevin recently, which was an absolute disgrace. My question is the same as it would have been if the attack had not taken place. The executives in question made repeated requests of Facebook to remove the offensive material. I understand that Facebook removed it the day after Kevin Lunney was kidnapped and attacked. Facebook described it as a violation of authenticity guidelines. I wonder how our guests will explain the meaning of the latter. In any event, it was action after the event. It was much too late.

The material was hugely offensive. It was intended to harm the good name and reputation of the people involved and, as a Deputy representing the constituency of Cavan-Monaghan, I believe it fed into a view that in some way those people, the named parties, and the businesses they were involved in, were a legitimate target for the worst excesses of what I can only describe as the most ill-informed people on this island. How does Facebook respond to the example I have cited? I would have raised it with Facebook even if the attack had not taken place because the pages should have been removed. The vile content and its intent and purpose are an affront to any and every one of us. I invite Mr. Ó Broin and Ms Rush to respond.

Mr. Dualta Ó Broin

I thank the Chairman for the question. We were all shocked by the events in Cavan and we extend our sympathies to Mr. Lunney. We have been working with the Quinn Industrial Holdings team for a number of years and it has made a number of reports to us about content. We have removed a significant number of posts from a number of pages.

There is a focus on a particular post. I will not go into the details. When it was reported to us initially, it was reviewed and found not to be violating our community standards. It was then brought back to our attention through the media on 20 September. At that point, it was put into an authenticity check. It was then reviewed and found to be violating our community standards. The post and the page on which it appeared have been removed from the platform.

Had been removed.

Mr. Dualta Ó Broin

Have been removed.

Were they removed by Facebook?

Mr. Dualta Ó Broin

By Facebook.

With respect, is it not obvious that Facebook needs to review its community standards? The series of attacks and outrageous actions that have taken place, clearly targeted at the executives in Quinn Industrial Holdings, not only put their safety at risk but also put at risk the jobs of the approximately 850 people who depend on their expert oversight of the respective companies. As another member stated, it would be to return these fields to fields of rushes. The community is so dependent on the organisations that it would be an absolute tragedy if any or all of these companies were to fold. Even to have them put at risk is an outrage and an affront to the community that depends on them. If the post met the community standards, Facebook needs to examine those standards.

Mr. Dualta Ó Broin

On the general point about our community standards, they are constantly evolving. We are constantly being informed by experts about where they should be. That is just a general point, not one on the specific question. Having looked back at the post, we believe we should have taken the wider context into account when it was initially reviewed. That did not happen, and the post was found not to be violating our community standards.

Ms Karen White should note that I appreciated very much Deputy Chambers' reference to the Ryan–Mathis case. Only for "The Late Late Show" last Friday, we would not really have known about the gravity of what had taken place in regard to the couple's experience. I commend Ryan Tubridy and "The Late Late Show" on having brought it to public attention. I really believe this is an important part of the exercise in moving towards better, safer and respectful utilisation of these important platforms. What has Twitter done regarding the references to the couple, who were carrying out a job of work as parents in creating an income stream?

Ms Karen White

As I mentioned previously, I sympathise with the family, which was subjected to a range of behaviours, not just on Twitter but also on other services. Where we found that there were behaviours and content in violation of our rules, we took action.

There has been a focus on one specific account. I am sure the Chairman will appreciate why I cannot get into a conversation about it, but that is not to say it was the only account about which Twitter received reports or proactively surfaced content and took action. Therefore, I believe there is a broader issue.

I thank Ms White. I will hand over to Senator Alice-Mary Higgins. Who else would I give the last word to?

The Quinn case and the Ryan family case have been very well delved into. The issue of targeted harassment is one I hope to return to in a moment, specifically in conversation with the representatives from Twitter. It is notable that in its testimony today, Facebook has told us it is not confident there is any lawful weight to its guidelines. Twitter, with regard to the Ryan case specifically, said that if there had been any legal action, it would have co-operated in the case. Even in the representatives' own words, they are strongly making the case not only for legislation on hate speech but also for broader legislation on online regulation. Representatives from both Facebook and Twitter pointed out that they would have moved further if required to by law. This underscores the need for legislation in this area and the fact that self-regulation clearly has its limits. Community standards have little concrete weight when it comes to persons' lives and persons who may be the subject of online abuse, regardless of whether they choose to be part of the online community. That is important because the duty of protection is not simply to users but to all citizens. That is a wider concern. The duty of protection also extends to those who may not be citizens but who are resident in our country.

Let me point to an issue raised by Deputy Brophy. Follow-up information and granular detail may be required. Deputy Brophy asked about the resources and number of staff allocated for taking down offensive or harmful content. I would like a breakdown also on the resources devoted to brand protection. All the organisations represented today have a large number of staff working on brand protection, for example, to ensure their advertisers' branding is not being misused. I am aware there are those whose work is dedicated specifically to targeting and watching out for issues that may be of concern to advertisers. I do not know the extent to which those issues, in terms of brand protection removals, are subject to notification requirements. Are there staff permanently allocated to deal with these issues? In providing us with the information on staffing and resources allocated for taking down content that violates standards, the representatives might give a specific breakdown on the proportion of resources allocated to deal with harmful content and the proportion triggered by notifications, by comparison with the resources allocated to deal with issues such as brand protection.

To add to that, the Senator will probably have noted that Deputy Connolly asked for that information. We had positive responses from Facebook and Twitter on providing a written reply. Mr. Meade is working towards that.

Mr. Ryan Meade

If I can, I will.

I thank Mr. Meade.

It was in respect of staffing, in particular, that Deputy Connolly raised the issue.

I am concerned about notification and the strong emphasis placed on the e-commerce directive and the terms "suitably notified" and "appropriately notified". It has been pointed out time and again that the e-commerce directive is wildly out of date. In many cases, individuals are not in a position to notify because they may not be aware of the offensive or harmful content related to them. I want to focus a little on the facilities associated with the targeting of information. If I do not see abusive content about me, I am not in a position to notify the relevant organisation about it. This leads to the question of the targeting of information.

Twitter has a policy whereby it can block content but it tries to support a person's experience on the site. I may not be concerned simply about my experience on the site but about abusive content about me. As I understand it, Twitter changed its rules, and perhaps there has been a further change, whereby if I block somebody and assume that person no longer has access to my information, that person can still see me and speak about me but I do not see what he or she is saying. That change in the blocking provision is a very serious concern and relates to stalking, harassment and so forth. It is not just about my experience on the site; it is about misinformation about me.

Ms White indicated that Twitter never automatically suspends accounts. I have spoken to people who have been the subject of very targeted campaigns of harassment, where a person who had been suspended as a consequence immediately created a new account and resumed the harassment. This is a real concern. When the Rape Crisis Network Ireland, RCNI, spoke to this committee, it emphasised that this should not apply only to repeated activity but that a single incident should be enough. Stalking is one of the concerns that may be included in this Bill. Someone can repeatedly re-register on new accounts but Twitter will wait until that person does something offensive or dangerous. That approach creates a very chilling fear for a person. I will not bring the people involved into committee testimony but I know there are people who have been repeatedly targeted by accounts which effectively threatened them and, when those accounts were suspended, a new one immediately appeared. How can Twitter improve its blocking mechanisms to address those concerns?

Ms Karen White

The Senator raised several issues and I will do my best to address them as quickly as I can. She mentioned a strong emphasis on notification. I have tried to communicate to the committee this morning that we are trying to reduce the burden on victims having to report to Twitter. People absolutely need a mechanism to get in touch with Twitter to tell us if they have been subjected to certain types of behaviour. In response to the Senator's point that if someone cannot see something, that person cannot notify us, we made a change some years ago to allow bystander reporting. If a person sees something online, it goes back to the "see something, say something" principle. If people see someone being bullied in the playground, they should say something to the teacher. The same principles apply online. We allow bystander reporting.

In addition, I have highlighted that 30% of the action that we take on content deemed to be abusive behaviour is taken proactively. That is without any reports; it is content that Twitter has surfaced for human review. We hope that figure will increase so that we can further reduce the burden on victims having to report content to us.

We had a problem with repeat offenders at Twitter. It was a bit like playing whack-a-mole in that we would suspend an account and another account would be opened. We did significant work with our trust and safety council which advised us of the need to remedy this situation. We have stopped repeat offending by more than 100,000 accounts over the past year.

Our actions are having real results. I do not mean to imply that we have fixed the problem in its totality but we are slowing it down. Our actions are stopping repeat offenders opening new accounts or engaging in multiple account abuse, where one individual has multiple accounts with the sole intention of abusing others on the service. We rely on behaviour-based signals. There are several signals that may not be visible to individuals who use our service or to the public at large but that operate behind the scenes, such as whether an email address has been verified or whether an account is mentioning other accounts in large volumes. That would indicate to us that an account is potentially violating our rules on behaviour.

That seems slightly at odds with the earlier comments on suspending accounts and hoping that people learn from their experience and change, because it seems Ms White is saying Twitter will block repeat accounts as soon as they appear if somebody is suspended.

Ms Karen White

Yes. If we believe an account holder has opened another account, there are several ways we can detect that. He or she may use the same email address, IP address or telephone number. We automatically suspend those accounts once we have made a determination that somebody who has been suspended is trying to circumvent the Twitter rules to open another account. That person will be suspended from the service.

What about the change in policy whereby I can put up harmful content about anybody and can follow them to comment on what they are doing but they may not see that? Previously, I understood blocking meant that person would not see my content if I did not wish to engage with that person on Twitter and did not want him or her to see my content.

Mr. Ronan Costello

If an individual blocks another individual on Twitter, that person can no longer see the other person's profile and timelines and can no longer direct tweets at or send that person a direct message. If the person happens to be in a group direct message conversation, the person who has been blocked is ejected from the group conversation. Blocking on Twitter is a definitive way to cut ties with someone on the platform.

Muting is the low-friction way of controlling interaction with someone else on the platform. That may be what the Senator is referring to. If the Senator mutes someone, she will not see that person's content in her timeline, but if she navigates to the person's profile, she will be able to see what he or she is saying.

Publication is a fundamental issue that came up again and again. The appendix to Facebook's presentation was optimistic when it suggested an amendment to the Bill providing that "communication" does not include the provision of information society services. I hope and expect that would be roundly rejected because it is a very ambitious attempt by Facebook to remove itself from its responsibilities. There are questions about what a platform and a publication are, which need to be teased out, but there is a communication function and communication is taking place. That was acknowledged elsewhere. This is not a neutral platform for freedom of expression. It is a commercial platform on which certain messages can be promoted, highlighted and amplified. It is not a neutral tabula rasa where people place their tuppence worth. What steps does Twitter believe need to be taken to change its financial model? I accept that there are serious concerns about freedom of expression. I do not know that we should automatically move towards identification of persons online. We need to consider what happened in China and avoid government-mandated identification being used in that way.

If I am paying to have my message amplified, then there is a different responsibility, including in terms of the facilitation of the anonymous targeting of information. On the question of harmful communications, which I know is a wider issue here, Senator Lynn Ruane and I previously put forward a measure, which is in legislation but has not been commenced, on the use of information to target vulnerable persons, such as an 18 year old with, for example, an eating disorder. There is a wider issue around the targeting of messaging.

The Senator must ask a question because we are about to finish.

On the financial model, in terms of targeting and its facilitation, do the witnesses believe a much higher standard of transparency and legal liability for harmful content should sit on platforms when they benefit commercially from amplification through advertising?

In terms of algorithmically rewarding harmful content by facilitating advertisements on sites that carry harmful clickbait content, particularly in the case of YouTube, which is represented here today through its owner, should there be a higher legal standard of accountability for yourselves in respect of that content?

I ask each of the three groups to quickly respond to the two points, if they can, after four and a half hours.

Mr. Ryan Meade

In terms of targeting, what we try to do on Google and YouTube is to make it very clear when something is an ad, so it is not hidden in our search results. In Google search particularly, the ads are clearly labelled at the top, and on YouTube the ads are clearly labelled as ads. That is a very important principle for us. We do not take payment for boosting content. Users have the ability to click in and see why an ad was displayed to them.

I welcome the Senator's question because she addressed an important matter. I think the question is what standards should apply to platforms, as platforms, rather than whether we can simply take the standards that exist for publishers and apply them to platforms. That is an encouraging line of thought.

Google has sought to say that the websites where its ads appear are the publishers, and it has thus evaded accountability with that response in the past.

Mr. Ryan Meade

Can the Senator elaborate? In what sense?

When a Google ad appears on, for example, a local newspaper site, Google has been very clear that it regards itself as not in any way accountable. What should the accountability measures be for Google?

Mr. Ryan Meade

If an ad has been placed through Google then it is subject to our advertising policies, which we enforce.

There are other advertising products, which I think we have discussed in the past, whereby Google acts in a slightly different way by facilitating the auction for the ad, but the ad is purchased from a third party. There are multiple third parties involved in this market. Google is not the only-----

Please address the general question of legal accountability for harmful content within those ads. I do not need to hear about the full mechanism for advertising.

The panellists will have to give a crisp reply.

Mr. Ryan Meade

I wish, after four hours, I could give a crisp reply.

I have never used the gavel but I am tempted. I ask Mr. Meade to please respond if he can.

Mr. Ryan Meade

Senator Higgins has me at a loss. I do not have a crisp reply but if I think of one while the others are responding I will come back in.

Very good. Twitter is next.

Ms Karen White

We apply an extremely high standard when it comes to content moderation practices in enforcing our ads policies.

With regard to targeting on Twitter, it is very important that one understands the policy framework we have in place. One cannot target any sort of inflammatory content. One cannot take ads that are targeted based on political beliefs, hate speech or anything of that kind.

The algorithmic reward mechanism is particularly relevant to Twitter, that is, what gets algorithmically rewarded.

Ms Karen White

I ask the Senator to please elaborate on what she means.

In terms of what content is more likely to appear and what content is given a higher profile, does Ms White know what I mean?

What is rewarded when one has a number of followers, whatever that number is, be it a multiple of thousands?

Ms Karen White

We have been working to try to surface more healthy conversations in the application of algorithms and how we deploy them on Twitter. In tackling certain types of behaviour, like troll-like behaviour - which in some instances, as I have mentioned before, is neither illegal nor a violation of the Twitter rules but still has the potential to distort and detract from healthy conversations on Twitter - we are trying to surface healthier conversation and down-rank the type of behaviour that distorts and detracts from the conversation. That is one way that algorithms are used. In that sense the content is not removed from the service. It would still remain there but we are ensuring we are not giving more prominence to unhealthy types of conversation, particularly in-----

I want to move the debate on.

Ms Karen White

I have tried to answer the question.

Perhaps we might-----

No, the Senator is opening up an exchange that I am unable to accommodate.

A written answer might suffice, in terms of the mechanisms.

It might help.

Ms Karen White

I am happy to do so.

The companies will be in correspondence with us. It would be important to get a written answer on that. Would the representative of Facebook like to respond?

Mr. Dualta Ó Broin

It might be of greatest benefit if we respond in writing and set out our policies. I would have to look into what the specific rules are. We have very strict rules on what is and is not allowed in our advertising. I take the point that has been made. Let me write to the Senator and, if it is not exactly what she needs, then she should write back to me, by all means, and I will come back to her again on the matter.

Do the companies believe there should be a higher level of legal accountability for them in respect of advertising or algorithmic targeting? I do not simply want information on their internal mechanisms. I have asked about the level of legal accountability because that is what we are considering.

We have not had an opportunity to speak about WhatsApp but we know it is one of the main mechanisms and it is owned by Facebook, which was not spoken about in the presentation. I ask Mr. Ó Broin to elaborate on that matter.

I am sorry, Chairman, but I want to ensure that the written replies relate to my questions.

To summarise, the companies will write to us on these issues; Mr. Ó Broin will not just correspond with the Senator.

Mr. Dualta Ó Broin

The issues are incredibly complex and trying to respond to them, under the pressure of the committee, is probably not fair.

The companies will also write to the committee.

Ms Karen White

Yes, certainly.

We will circulate the correspondence to the Senator and to all of the Members. That would just about sew it up, Mr. Meade, because your organisation is going to write to us now anyway.

Mr. Ryan Meade

I genuinely apologise to Senator Higgins. She had an interesting line of questioning and I am happy to meet her separately about it as well.

I am sure that the committee can look to the written answers.

Mr. Ryan Meade

Yes.

We most definitely will in the consideration of our report and recommendations.

There is only one thing I am afraid of when I leave here today and that is that I might have vexed the Green Party. If I did, how could I ever face Deputy Ryan? He may make one brief intervention.

I do not have a question to ask. I wish to notify the Chairman and the committee that the Joint Committee on Communications, Climate Action and Environment will host an International Grand Committee on Disinformation and Fake News on 7 November in the Seanad Chamber. The debate will be on how to achieve international collaboration in the regulation and governance of platforms, and will also address harmful speech versus free speech and how to protect both. I am very hopeful that all three companies here will send representatives. Parliamentarians from all over the world will attend, as will leading thinkers on the subject. The day before, we will have a non-governmental organisation, NGO, event in the Westin Hotel, and we extend an invitation to the members of this committee. The Joint Committee on Communications, Climate Action and Environment will work in collaboration with this committee on this very critical issue.

The Joint Committee on Justice and Equality does not charge for advertising, unlike our guests.

I thank Deputy Eamon Ryan, Senator Alice-Mary Higgins and Deputy Martin Kenny for being here at the end. I thank each company for its opening statement and participation. I shall seize on the word "collaboration" and say that I hope this will not be the last time we engage or, more importantly, that the companies engage with the representatives of these Houses and the Government. We have already outlined to the companies the objectives of this committee in setting out on this course of hearings.

I believe this is a journey on which we can travel together. That is the most important message that we can deliver publicly. I thank Mr. Dualta Ó Broin and Ms Claire Rush from Facebook Ireland; Ms Karen White and Mr. Ronan Costello, Twitter Ireland; Mr. Ryan Meade, Google Ireland; and Ms Ana Niculescu, the Internet Service Providers Association. We will invite them to the launch of our report sometime between now and Christmas.

The joint committee adjourned at 1.45 p.m. until 9 a.m. on Wednesday, 16 October 2019.