Joint Committee on Communications, Climate Action and Environment debate -
Wednesday, 1 Aug 2018

Moderation of Violent and Harmful Content on the Facebook Platform: Discussion

By virtue of section 17(2)(l) of the Defamation Act 2009, witnesses are protected by absolute privilege in respect of their evidence to the joint committee. However, if they are directed by the Chairman to cease giving evidence on a particular matter and continue to do so, they are entitled thereafter only to qualified privilege in respect of their evidence. They are directed that only evidence connected with the subject matter of these proceedings is to be given and asked to respect the parliamentary practice to the effect that, where possible, they should not criticise or make charges against any person or an entity by name or in such a way as to make him, her or it identifiable. I also advise that any submission or opening statement made to the committee will be published on its web page after the meeting. Members are reminded of the long-standing parliamentary practice to the effect that they should not comment on, criticise or make charges against a person outside the Houses or an official, either by name or in such a way as to make him or her identifiable.

I remind members and delegates to turn off their mobile phones or switch them to flight mode. Mobile phones interfere with the sound system and make it difficult for the parliamentary reporters to report the proceedings of the meeting. Also, television coverage and web streaming will be adversely affected.

I welcome our two witnesses: Ms Niamh Sweeney, head of public policy at Facebook Ireland, and Ms Siobhán Cummiskey, Facebook's head of content policy for Europe, the Middle East and Africa. I invite Ms Sweeney to make her opening statement.

Ms Niamh Sweeney

I thank the committee for asking us to be here today to discuss some of the issues raised in the recent "Dispatches" programme that aired on Channel 4 on 17 July 2018. I am head of public policy for Facebook Ireland. My colleague, Siobhán Cummiskey, is Facebook's head of content policy for Europe, the Middle East and Africa, and we are both based in our international headquarters in Dublin.

We know that many who watched the "Dispatches" programme were upset and concerned by what they saw. Siobhán and I, along with our colleagues here in Dublin and all around the world, were also upset by what came to light in the programme, and we fully understand why the committee wanted to meet us.

The safety and security of our users is a top priority for us at Facebook, and we have created policies, tools and a reporting infrastructure that are designed to protect all our users, especially those who are most vulnerable to attacks online, such as children, migrants, ethnic minorities, those at risk from suicide and self-harm, and others. It was deeply disturbing for all of us who work on these issues to watch the footage that was captured on camera at our content review centre in Dublin, as much of it did not accurately reflect Facebook's policies or values.

As our colleague, Richard Allan, said during an interview with the "Dispatches" team, we are one of the most heavily scrutinised companies in the world, and that is right. It is right that we are held to high standards, and we also hold ourselves to those high standards. "Dispatches" identified some areas where we have failed, and Siobhán and I are here today to reiterate our apologies for those failings. We should not be in this position and we want to reassure the committee that whenever failings are brought to our attention, we are committed to taking them seriously, addressing them in as swift and comprehensive a manner as possible, and ensuring we do better in future.

First, I would like to address one of the claims made in the programme, that it is in our interests to turn a blind eye to controversial or disturbing content on our platform. This is categorically untrue. Creating a safe environment where people from all over the world can share and connect is core to our business model. If our services are not safe, people will not share with each other and, over time, will stop using them. Nor do advertisers want their brands associated with disturbing or problematic content, and advertising is Facebook's main source of revenue. We understand that what I am saying to the committee now is undermined by the comments that were captured on camera by the "Dispatches" reporter. We are in the process of carrying out an internal investigation to understand why some actions taken by CPL were not reflective of our policies and the underlying values on which they are based. I will explain who CPL are as I go on.

We also wish to address a misconception about reports that relate to an imminent risk of self-harm or suicide. During the programme, a CPL staff member was asked if a backlog of reports could include reports about people who were at risk of suicide. The staff member's answer was that it could, but this was wrong. Suicide-related reports are routed to a different queue so we can get to them quickly. "Queue" is the word we use for the list of reports that we have coming in to us. Reports about suicide or self-harm are considered high priority, and almost 100% of the high-priority reports during those two months of filming were reviewed within the set timeframe.

This is a somewhat shorter opening statement than we have submitted in writing, but I will touch on all the main points covered in the written version. I will address the actions to fix specific content errors that were highlighted by the programme. "Dispatches" highlighted a number of issues. I want to take the committee through what we have already done and are continuing to do to improve the accuracy of our enforcement.

As I highlighted at the outset, some of the guidance given by the trainers to content reviewers during the "Dispatches" programme was incorrect.

As soon as we became aware of these mistakes, we took immediate steps to remove those pieces of content from our platform in line with our existing policies. These included a decision not to remove a video depicting a three year old child being physically assaulted by an adult. This was a mistake because we know that the child and the perpetrator were both identified in 2012. The video should have been removed from the platform at that time. We removed this video as soon as "Dispatches" brought it to our attention. Our policies make it clear that videos depicting child abuse should be removed from Facebook if the child in question has been rescued. In addition to removing the specific piece of content, we are now using media matching technology to prevent future uploads of the content to the platform.

We do not allow videos of this nature to be shared except in a very narrow set of circumstances, namely, if the video is shared to condemn the behaviour, the child is still at risk and there is a chance the child and perpetrator could be identified to local law enforcement as a result of awareness being raised. According to Malaysian news reports, that is what happened in this particular case. A neighbour recognised the child in the video, having seen it on Facebook. In that instance, once we know the child has been brought to safety, it is our policy to remove the video and prevent it from being re-uploaded to our platform by using media matching software. In the relatively few cases where this kind of video is allowed to remain on the platform, in line with what I have described, we apply a warning screen for users and limit its distribution to only those who are 18 years of age or older. We also send this content to an internal Facebook team known as the law enforcement response team which can contact local law enforcement.

We recognise that there are a number of competing interests at play when it comes to this type of content, namely, the child's safety and privacy, the effect of that content on those who may view it and the importance of raising awareness of real world happenings. However, on foot of the concerns voiced by safety NGOs and others following the "Dispatches" programme, we are actively considering a change to this policy and have started an extensive consultation process with external organisations, including law enforcement agencies and child safety organisations, to seek their views on the exception we currently make for children we believe to be at risk or who could be brought to safety.

It is important to make absolutely clear that we take a zero-tolerance approach to child sexual abuse imagery. Whether it is detected by our technology or reported to us, we remove it and report it to the US-based National Center for Missing and Exploited Children, NCMEC, as soon as we find it. NCMEC leads the global co-ordinated effort to tackle child sexual abuse imagery of which we, other tech companies and law enforcement agencies around the world, including An Garda Síochána, are a part. We also use photo and video matching technology to prevent this content from being uploaded to Facebook again and we report attempts to re-upload it to law enforcement agencies where appropriate.

One of the other examples highlighted by the programme included a decision not to remove a video of teenage girls in the United Kingdom who were filmed fighting with each other. This video has since been removed from our platform. It is our policy to always remove bullying or teenage fight videos unless they are shared to condemn the behaviour. Even content shared in condemnation appears behind a warning screen and is only visible to people over the age of 18. The user must click through this warning screen if he or she wants to continue to view the content. In the example highlighted in "Dispatches", the person who shared the video did so to condemn the behaviour. However, it is our policy to always remove such content, regardless of whether it is shared to condemn it, if the minor or his or her guardian has requested its removal. When we learned from the "Dispatches" team that the mother of one of the teenagers involved was deeply upset and wanted this video removed, we immediately deleted it and took steps to prevent it from being uploaded to the platform again.

There was also a decision not to remove a post comparing Muslims to sponges and a disturbing meme that read, "When your daughter's first crush is a little negro boy". These were both violations of our hate speech policy and should have been removed by the reviewer. Hate speech is never acceptable on Facebook and we work hard to keep it off our platform. These posts were left up in error and were quickly removed once we became aware of them via "Dispatches". The meme in particular violates our hate speech policy as it is mocking a hate crime, that is, depicting violence motivated by a racial bias. We have deleted it and are using image matching software to prevent it from being uploaded again. The post comparing Muslims to sponges violates our hate speech policy as it is dehumanising.

We are increasingly using technology to detect hate speech on our platform which means we are no longer relying on user reports alone. Of the 2.5 million pieces of hate speech we removed from Facebook in the first three months of 2018, 38% was flagged by our technology. In 2017, the European Commission monitored the compliance of Facebook and other tech companies as part of the code of conduct on countering illegal hate speech online and we received the highest score, removing 79% of potential hate speech, 89% of which was removed within 24 hours.

We are also making some changes to our processes and policies to address the issues raised. The first is that we will now flag accounts of users suspected to be under 13 years of age.

We do not allow people under that age to have Facebook accounts. If someone is reported to us as being under 13, the content reviewer will look at the content on the profile - meaning text and photos - to try to ascertain the user's age. If the reviewer believes the person is under 13, the account will be put on hold and the person will not be able to use Facebook until he or she provides proof of age. Since the "Dispatches" programme, we have been working to update the instructions for reviewers to put a hold on any account they encounter if they have a strong indication the user is underage, even if they have another reason for undertaking the review.

As I flagged, our policy in respect of non-sexual child abuse videos is under review. We have started this consultation process with external organisations to decide if it is appropriate to continue with our policy of allowing these videos on our platform in the limited circumstances I described, namely, when they are shared to condemn the behaviour and the child is still at risk.

We are also taking actions to address training and enforcement of our content policies. We recognise the responsibility we have to get our training and the enforcement of our policies right. Content review at this scale has never been done before, as there has never been a platform where so many people communicate in as many languages across so many countries and cultures. We work with reputable partners to deliver content moderation services because it enables us to respond more quickly to changing business needs. For example, we may need to quickly increase the number of staff we have in different regions, and the outsourcing model enables us to do that. As I stated, CPL Resources is one of our outsourcing partners here in Dublin and we have worked with the company since 2009. However, in light of the failings highlighted by "Dispatches", we are making changes to substantially increase the level of oversight of our training by in-house Facebook policy experts and to test even further the readiness of our content reviewers before they start reviewing real reports.

We are in the process of carrying out an internal investigation with CPL to establish how these gaps between our policies and values and the training given by CPL staff came about. The investigation is being led by Facebook, rather than CPL, due to the extremely high priority we attach to this. It began in earnest on Monday, 23 July, as out of an abundance of caution and concern for their well-being, CPL encouraged the staff members directly affected by the programme to take some time off. We immediately carried out retraining for all trainers at our CPL centre in Dublin as soon as we became aware of discrepancies between our policies and the guidance that was being given by trainers to new staff. Ongoing training will now continue with twice-weekly sessions to be delivered by content policy experts from Ms Siobhán Cummiskey's team. CPL is also now directly involved in weekly deep-dive discussions with Ms Cummiskey's team on our policies covering issues like hate speech and bullying. All content reviewers will continue to receive regular coaching sessions and updated training on our policies as they evolve. We have also revised the materials used to train content reviewers to ensure they accurately reflect our policies and illustrate the correct actions that should be taken in all circumstances. This has been done both for CPL and for all of our content review centres globally. These materials have been drafted and approved by Facebook only and will continue to be updated by us as our content policies evolve.

We are also seconding highly experienced subject matter experts from Facebook to CPL's office for a minimum of six months to oversee all training and provide coaching and mentoring. We are introducing new quality control measures, including new dedicated quality control staff to be permanently assigned to each of our content review centres globally. We are also conducting an audit of past quality control checks at CPL, going back for a period of six months to identify any repeat failings that may have been missed. This will include temporarily removing content reviewers who have made consistent or repeated errors from this type of work until they have been retrained. We will also continue to deploy spot testing at our review centres. If we find any irregularities in the application of certain policies more broadly, we will test for accuracy using targeted spot-checking of all content reviewers to improve accuracy. We have for several months been in the process of enhancing our entire on-boarding curriculum and are continuing to do so. The enhancements to our curriculum include even more practice, coaching and personalisation to help content reviewers focus on areas where they may benefit from additional upskilling.

As I have been speaking for some time, I will conclude with some comments on the Digital Safety Commissioner Bill 2017. I would like to share our thoughts on the 2016 proposal by the Law Reform Commission, LRC, to create a digital safety commissioner with statutory take-down powers. As the committee is no doubt aware, the LRC's proposal also provided a foundation for Deputy Donnchadh Ó Laoghaire's Private Member's Bill, the Digital Safety Commissioner Bill 2017. We understand the motivation behind the establishment of a digital safety commissioner and have discussed it with many of our safety partners in Ireland. We also understand the appeal of having an independent statutory body that is authorised to adjudicate in cases where there is disagreement between a platform and an affected user about what constitutes a "harmful" communication, or to provide a path to appeal for an affected user where we have, in error, failed to uphold our policies. We also acknowledge the draft Bill's efforts to ensure its scope is not overly broad in that an appeal to the digital safety commissioner could only be made by an individual where the specified communication concerns him or her. We see great benefit in a single office having the ability to oversee and co-ordinate efforts to promote digital safety, much of which has been captured in the Government's recently published Action Plan for Online Safety 2018-2019.

Only through a multi-pronged approach, of which education is a critical part, can we begin to see positive changes in how people engage and protect themselves online.

In addressing the nature of harmful communications, the Law Reform Commission report states that while there is "no single agreed definition of bullying or of cyberbullying, the well-accepted definitions include the most serious form of harmful communications, such as ... so-called "revenge porn"; intimidating and threatening messages, whether directed at private persons or public figures; harassment; stalking; and non-consensual taking and communication of intimate images". We agree with the Law Reform Commission with respect to all of these types of communications. The sharing of non-consensual intimate images, otherwise known as revenge porn, as well as harassment, stalking and threatening messages, are all egregious forms of harmful communication and are banned both by our community standards and, in some cases, the law. We fully support the Law Reform Commission's proposals to create new criminal offences to tackle non-consensual sharing of intimate images and online harassment where those offences are clearly defined and practicable for a digital environment. We have also taken steps to improve how we tackle the sharing of non-consensual intimate images on our platform. More information on this was shared in a Facebook newsroom post in April 2017.

However, beyond the egregious examples I have outlined, the proposed Bill is unclear as to what precisely constitutes a harmful communication. No definition is included in the draft legislation, but from the drafting of the Bill, it appears that this concept is intended to be broader than content that is clearly criminal in nature, much of which I outlined and on which we are in full agreement with the Law Reform Commission. The exact parameters are left undefined and this will lead to uncertainty and unpredictability. In its 2016 report, the Law Reform Commission states:

The internet also enables individuals to contribute to and shape debates on important political and social issues, and within states with repressive regimes, the internet can be a particularly valuable means of allowing people to have their voices heard. Freedom of expression is therefore the lifeblood of the internet and needs to be protected.

Later, the report notes:

Thus, balancing the right to freedom of expression and the right to privacy is a challenging task, particularly in the digital and online context. Proposing heavy handed law based measures intended to provide a remedy for victims of harmful digital communications has the potential to interfere with freedom of expression unjustifiably, and impact on the open and democratic nature of information sharing online which is the internet's greatest strength.

We agree with the Law Reform Commission's analysis. While it would clearly not be the intention of this Bill to impact on free speech in Ireland, the commissioner's ability to issue a decision ordering the removal of harmful communications should be considered in light of the potential for limiting freedom of expression. It is important, therefore, to have a clear definition of what constitutes a harmful communication included in the legislation.

Facebook has put community standards in place for a reason. We want members of our community to feel safe and secure when they use our platform and we are committed to the removal of content that breaches those standards. I thank the committee for meeting us today to discuss these important issues.

I thank Ms Sweeney for her presentation and for coming before the committee. As we will take questions from several members before asking her to respond, Ms Sweeney may wish to take a note of the questions.

Yesterday, news broke in respect of Facebook's ongoing investigation into alleged efforts to influence the mid-term elections in the United States. Will Ms Sweeney update the committee on this, including on how Facebook identified the issue and whether it was reported to the FBI?

This is the second time this year that the committee has asked representatives of Facebook to appear before it. The first time was to discuss a case involving Cambridge Analytica and a clear breach of trust. Today, we are dealing with a second breach of trust, for which Ms Sweeney has apologised. I accept her apology and I also note Facebook's welcome intervention in the recent Irish referendum when it suspected political interference through its platform. Notwithstanding that, is it not time that we regulated social media?

Ms Sweeney referred to the Bill proposing the establishment of a digital safety commissioner. The committee earlier agreed to proceed with detailed scrutiny of the Bill and we would welcome an input from Facebook into our deliberations.

Channel 4's "Dispatches" programme showed a clear betrayal by Facebook of its own standards. While we understand that it is not possible to moderate everything uploaded on the Internet, the programme showed cases of illegal, abusive and suspected abusive behaviour where Facebook did not meet its own standards.

Ms Sweeney stated Facebook is carrying out an internal investigation to understand why some actions taken by CPL were not reflective of Facebook's policies and the underlying values on which they are based. In his 2016 memo entitled "The Ugly", Facebook's vice president, Andrew Bosworth, set out that anything that achieves growth for Facebook is de facto good even if it means somebody dies as a result of being exposed to bullies or through a terrorist attack planned using Facebook platforms. There is a clear disconnect between what has come from the top at Facebook and what Ms Sweeney has said here today. I would like her to expand on that point.

Facebook has 7,500 content moderators and four clinical psychologists worldwide who are employed to support them. Does Ms Sweeney think that is enough?

I understand from Ms Sweeney's presentation that Facebook does not have an objection, in principle, to a digital safety commissioner but that there are a few issues that need to be worked out.

Will Ms Sweeney bank those questions? I will call Deputy Stanley next.

I thank Ms Sweeney and Ms Cummiskey for attending today to account for themselves and the company. I was one of a number who requested that representatives of Facebook be brought before the committee to deal with the very serious issues that arose in the Channel 4 programme.

The context in which this has happened is interesting. We need to tease out some of the issues today. The issues of child protection, hate speech and racist material have arisen. Facebook is in our pockets because it is on our phones and it goes into every sitting room. It is present throughout the globe. Facebook has a huge responsibility to protect people who are connected to it.

I read the opening statement that Ms Sweeney supplied to the committee. The company has either been incapable or unwilling thus far to deal with the issues of regulation and to stop this type of content from appearing on the platform. Do Ms Sweeney and Ms Cummiskey agree the days of self-regulation are over? Facebook is a private company. A number of private companies are running platforms. This is a sovereign state and one of many throughout the globe. We have to ensure, as lawmakers, that people are protected. Thus far, Facebook and some other private companies have failed to do that. Do Ms Sweeney and Ms Cummiskey agree the days of self-regulation are over?

One of the pieces of content mentioned in the programme - the one showing a serious assault on a three year old boy - has been up on the site since 2012. The statement the committee received the other day refers to errors, content that should not be present and incorrect guidance that was given. Surely to God it is not an error when stuff like that is up on a platform for six years. It shows a complete inability or unwillingness on the part of Facebook. Will Ms Sweeney and Ms Cummiskey address that issue for me?

Ms Sweeney said Facebook is one of the most heavily scrutinised companies in the world. It is everywhere so there will be a lot of scrutiny of it. However, Facebook is not the most regulated. There is an absence of regulation at state level. Thus far, Facebook has been incapable or unwilling to remove certain content. One of the CPL moderators, who is employed by Facebook, stated on the programme that if Facebook starts censoring people too much people will stop using the platform and that it is all about money at the end of the day. The view of one of the chief executives on this issue was referred to by the Chairman.

To my mind, some senior person is in charge of moderators, gives direction to them and sets out the code of ethics for them. This is not stuff they think up in their own heads. Is this the ethos of Facebook? How did the senior person who is in charge of the moderators and gives them direction on what they should look out for on the platform come to the idea that if people were censored too much, they would stop using the platform and that it was all about money at the end of the day? I understand Facebook is a private company that has to make money from advertising. However, there has to be a balance. That statement is striking and does not fall out of the sky. A moderator did not wake up some morning and decide that this was the approach that should be taken. He or she is employed by Facebook and given directions by it. We need to know why that is happening.

Sinn Féin's Digital Safety Commissioner Bill has passed Second Stage in the Dáil. Deputy Donnchadh Ó Laoghaire proposed the Bill and we believe it is very important. Bills passed on Second Stage will always be improved as they pass through the Oireachtas and we welcome Facebook's views on the Bill. It is welcome that Facebook acknowledges the need for it. That is very positive. The Bill has broad support across the House. The Minister for Communications, Climate Action and Environment, Deputy Denis Naughten, spoke on RTÉ radio yesterday about the need for it and signalled his support for its principles. We look forward to it proceeding quickly.

Ms Sweeney has said Facebook does not allow children under 13 years of age to have a Facebook account. Her explanation of how that is enforced is a little flimsy. I recognise that it is difficult to enforce, but we need a better explanation of how the company intends to do this in the future. She has also said Facebook limits the distribution of certain images to those over 18 years of age and that it is possible to do this in practice. How does Facebook do it?

Ms Sweeney has mentioned that some of the staff involved have been advised to take time off. Has Facebook asked for them to be fired, or is it enforced time off? Are they simply sacrificial lambs to allow Ms Sweeney to come here and tell us that they have been given time off to show how seriously Facebook is taking the matter, whereas, in fact, responsibility goes further up the chain in the company?

Does Facebook harvest data from children aged under 13 years?

I welcome Ms Sweeney and Ms Cummiskey and thank them for attending. They have always made themselves available to the joint committee and Members whenever they have raised issues. I have considerable sympathy for them in finding that they have to appear before the committee again to address issues of such a serious nature. While I accept that they also found the material deeply disturbing, I am not prepared to give the company the fool's pardon it may seek from us. While the material is deeply disturbing, it is material that was recognised and available to people within Facebook. If it had only been brought to its attention and that of others by the "Dispatches" programme, we might be able to accept the explanation. However, as it was used as part of the training manual, it should not be disturbing to anybody within Facebook. What, in the first instance, should be disturbing is that the material formed part of a training manual.

I watched the programme again two nights ago. If I am not mistaken, I saw the Facebook logo on the slides used. The training manuals were used by CPL. Ms Sweeney has, to some extent, indicated that Facebook was at a remove from CPL. It is a well worn path taken by companies, when they seek to excuse themselves, to hit the subcontractor. However, was the material prepared by Facebook and, if so, within what department?

If it was instead prepared by CPL, what oversight did Facebook have of this training content? This is the only way for us to understand to what extent Facebook is culpable in allowing material like this to remain on its site.

According to Ms Sweeney, whenever failings are brought to Facebook's attention, it addresses them. If it is setting a standard based on training manuals with this kind of content, it will find that standard to be unacceptable when the matter is brought to its attention at a later stage. As such, I am trying to understand how or why people within Facebook were shocked by this.

Ms Sweeney referred to the CPL staff member on the programme and the suggestion that it was "in our interests to turn a blind eye to controversial or disturbing content on our platform". She stated: "This is categorically untrue." The Chairman referred to the memo from Mr. Andrew Bosworth. He stated:

So we connect more people. That can be bad if they make it negative. Maybe it costs a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools. And still we connect people. The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good.

When Ms Sweeney appeared before us previously alongside her colleague from the US, I asked him when Mr. Bosworth had left the company or whether, like the CPL staff on the programme, he had been given some time off. I suspect the answer is still the same, in that he has not been given time off or been removed from his position. Someone as senior as he has set out the clear policies at work within the witnesses' company. Even if material is ugly, disturbing, destructive, about death or the bullying of children, it is de facto good because it is effectively the cocaine that attracts users to this material.

Ms Sweeney suggested that this material would be off-putting to people and damaging to Facebook, but the opposite is the case. This is the kind of material that attracts lots of eyeballs and certainly outrages people, but in their outrage they copy, paste and share. Ms Sweeney stated: "unless they are shared to condemn the behaviour". That is the catch. Disturbing material evokes an unbelievable reaction in the minds of those who view it, they express outrage and they share it, meaning that millions more people get to see it. Facebook is not concerned about what they are seeing, only about them remaining on its platform for a number of minutes longer. It is about capturing attention with the cocaine of the business. It is attractive and interesting, holds people's attention and is addictive. More and more people get to see this material and, all the while, Facebook has a business model that is based around the couple of billion people who log in each month and view this kind of material.

Facebook has always said that its business is about connecting people, communities and so on. That is for sure, but we also saw some home truths the other day when the stock market was in play. Ripples went around the world when Facebook posted its latest earnings and viewership figures. The company saw 20% wiped off its value in one day, not because of any reduction in users but because it did not meet its expected growth targets. That 20% ran to billions of dollars. As such, I am more inclined to believe Mr. Bosworth because, at the end of the day, this is about eyeballs, capturing people's attention and retaining them online. Facebook makes vast sums of money on the back of selling advertising. Ms Sweeney has tried to present this in a simplistic way by saying that Facebook does not position advertising beside this harmful material.

It is not always about positioning the advertisement beside the harmful material. It is about retaining the person, and the people with whom he or she will share the information, online long enough to allow Facebook to put something into their news feed that will attract their attention and thus generate profits for the company. I have much more to say but I will leave it at that for now.

I invite Ms Sweeney and Ms Cummiskey to respond to those questions following which I will take questions from Senator O'Reilly, Deputy Lowry, Deputy Eamon Ryan, Senator Lombard and Deputy Bríd Smith in that order.

Ms Niamh Sweeney

As Ms Cummiskey is our content and policies expert, I will hand over to her at times. On the first question regarding the news that emerged yesterday about co-ordinated inauthentic behaviour, we have reported the matter to the FBI. We shared the information with it and Congress before the news emerged yesterday. We subsequently deleted all of the accounts involved once the authorities had had an opportunity to get a handle on what was involved. We have talked a little about this before, particularly in the context of the referendum because we deployed the same artificial intelligence here for the referendum as has been used in the identification of many of these accounts. I use the term "co-ordinated inauthentic behaviour", which, while a mouthful, best captures how it works. In terms of what was identified with the Internet Research Agency, as it was known, it was not so much that the content they were sharing violated our policies but that they were not who they said they were. Often when accounts are operating at scale like this, multiple accounts are created, sharing the same content at the same time, and these are the types of triggers that our system is built to detect. Once some of those clues were offered up, it was possible to connect the content to some of the accounts involved in the Internet Research Agency. A lengthy blog post was published yesterday in which we set out what we found and how we find it. It provides more detail which I hope will be helpful.

We accept the need for greater regulation in this area. Many countries have struggled to get to grips with content regulation and the regulation of political campaigning online and so on. We are keen to continue to engage with the committee, in particular on the Bills presented by Deputies Lawless and Ó Laoghaire. We are happy to engage with the committee in that regard. I thank the Chairman for the information that the detailed scrutiny process is scheduled for the autumn.

Deputy Dooley is correct that this was a clear betrayal of our standards. Ms Cummiskey has not yet had an opportunity to comment but I would like to set out her background for the committee. She is a human rights lawyer who has worked in Ireland and abroad. She spent a lot of time with the Irish Traveller movement and she was involved in the prosecution of the first ever hate speech case in Ireland. This is all deeply upsetting for her and her team because they are not just people who have been hired to do a job and who get to grips with the policies; they have a history and depth of experience that even I do not have, which is the reason they get hired for these roles.

On the point regarding Mr. Andrew Bosworth's statement, many of us would like to go back and hit delete such that it would never have been posted. As stated by my colleague, Mr. Joel Kaplan, on the previous occasion, Mr. Bosworth's views as expressed in that post do not represent the views of the company. We would never stand over them and the matter was taken up with Mr. Bosworth at the highest level by Mr. Mark Zuckerberg. Mr. Bosworth, or Boz as he is known within the company, has a reputation for posting provocative material to get a conversation going. This is my understanding of what happened in this instance. The post is still live and still has comments. Ms Cummiskey, I and others have weighed in multiple times over the years since it was first posted to express our disgust with the notion. I do not think it represents Mr. Andrew Bosworth's views but I can understand why the issue was raised here today. It is a difficult one for us to talk about but I can assure the committee that his views do not represent the company's views.

On the question regarding the 7,500 reviewers and the four clinical psychologists, there are four clinical psychologists who devise our wellness and resiliency programmes that are deployed across all of our centres globally. There are other mental health staff who are located in each centre. There are nine located in the CPL office in Dublin.

I do not think four is enough. They just devise the wider programmes and staff are on site to deliver wellness classes and one-to-one counselling. They also advise staff on how to access resources when they need them. Deputy Stanley's questions are probably more in Ms Cummiskey's area but perhaps she could add to the replies to the Chair's questions.

Ms Siobhán Cummiskey

Ms Sweeney has covered it.

Ms Niamh Sweeney

Grand. With regard to the digital safety commissioner Bill, I say again that we are very happy to continue to engage on that and we thank members for the opportunity to do that.

On the point about people who are under 13, it was indicated that members would like a better explanation of how this would be enforced and how we would limit distribution of disturbing content to those over 18 only. Since our creation, the policy has been that a person must be 13 or over to use the platform. It was only if an account was reported to us specifically for being held by an underage person that we would investigate and put the account on hold if there were indications that the user was underage. There has been a change arising from the programme: if a reviewer reviewing a report for any reason suspects an account holder is underage, the account will be put on hold and information requested. The limited distribution for those over 18 is based on the age given to us by users. Unless we are led to believe users are underage, we go by what they tell us. Various principles laid down in data protection law, such as data minimisation, explain why that is the case, and we can talk about that if it is helpful.

I hope I did not give the impression that there was forced time off; there was no forced time off for the staff involved. This has been very difficult for everyone and I have to imagine it has been very difficult for the people who were filmed. It would not be a comfortable position for anybody to be in and the level of scrutiny in the weeks since the programme was aired has been intense. The idea was to give these people a chance to take some time out and regroup. It was not forced time off and no one is being thrown under the bus here. Content review is a very difficult job. The programme clearly highlighted serious issues with our policies and the training materials being used, as well as enforcement of policies, but people reviewing very difficult content all day, every day, will sometimes reach for a turn of phrase that does not seem appropriate. In very difficult circumstances, we can all imagine how we might reach for humour at a time when it does not seem appropriate outside. Certainly, I would not blame any of my colleagues because it is such a very difficult job to do. I want to leave members with the firm impression that we are taking full responsibility for this and implementing the changes. I do not want to pre-empt the outcome of the internal investigation but I also do not want to suggest we are heading towards laying the blame firmly at someone else's door.

Ms Siobhán Cummiskey

I thank the committee for having us here today and reiterate our apology for the failings we saw in the "Dispatches" programme. I reassure people that we are here to speak about that, explain it and follow up where we see those failings. I will start with Deputy Stanley's question around the responsibility we take in terms of child-related content. To reiterate the words of my colleague, Ms Sweeney, we take a zero-tolerance approach to child sexual abuse content and we work with image matching technology to ensure that known child exploitation images, as they are called, can never be re-uploaded to the system. We work with the National Center for Missing and Exploited Children in the US to do that.

With regard to the content of the "Dispatches" programme, I reiterate that we were very concerned by what we saw. This type of content is very upsetting and disturbing. I will explain to the committee how we approach this type of content. When it comes to child physical abuse, in the vast majority of circumstances the content is deleted. It is not just deleted as we also use image matching technology to ensure it is never re-uploaded. In a very narrow and limited set of circumstances, we will allow that content where the child is still at risk and there is a possibility of the child being brought to safety.

In that circumstance, we will take two actions. First, we will age gate and provide a warning screen on that content. Second, we will provide that content to our internal law enforcement response team to see if they can identify where it has come from and if they can contact local law enforcement.

Since the programme, that policy has been under review and we have started a broad consultation on the potential changing of that policy. In that consultation, we will meet law enforcement and child safety organisations. In fact, we have already met An Garda Síochána to discuss that policy and get their very useful feedback and insights on same.

People use Facebook for many reasons. The majority of people who come to Facebook use it for very positive things, share very positive things and use Facebook to connect with their family and friends. A small number of people try to abuse our platform, which we take very seriously and we will take action against them where it is brought to our attention.

On the matter of people who are under 13 years of age, Deputy Stanley asked how Facebook identifies if somebody is under the age of 13. We use a number of indicators. For example, we will look at the images that they have uploaded to see if we can tell whether they have lied about their age and are under 13 years of age. We also look at the images of their friends to see if there are indications there. We also see if, perhaps, some of their friends and contacts have had their accounts removed for being under age. There are a variety of different factors and indicators that go into that.

As Ms Sweeney has said, one of the remedies that we have put in place since the "Dispatches" programme is that we have made sure to put a hold on any account reported to us for any reason where we believe it belongs to a person under 13 years of age. Also, anyone on or off Facebook can report an account to us as belonging to a person under 13 years of age.

Ms Niamh Sweeney

Deputy Stanley asked specifically why the Malaysian example stayed on the platform for six years. He has hit on one of the major gaps that we have since identified, which is that we were not closing the loop effectively enough between a video being left up with a view to raising awareness so that the child could be rescued and what happened after the child had been identified. There are major gaps in our system there. That is one of the things that we are trying to close off so that-----

Ms Sweeney has identified this matter as a gap and highlighted how some staff were upset after seeing all of this. She has said that Facebook will leave up stuff at times in certain circumstances to help identify the people who are involved in unlawful activity and stuff like that. Surely the image of a child being abused being left available for viewing on a platform for six years indicates that there were no procedures in place or that procedures were deliberately ignored. There is no explanation. Were procedures in place? Even if a person was half asleep in the corner, he or she would have noticed such things happening. Never mind the fact that Facebook is supposed to have thousands of people working on this matter, Ms Sweeney's comments do not explain how this happened.

Ms Niamh Sweeney

The Deputy is right that there was a problem with the procedure. It might help to explain that the recording concerned would not have been consistently in front of reviewers for six years or highlighted on a daily basis. There might be one reviewer who would see it one day. The Deputy is right.

One of the options that we are considering as part of the wider consultation on whether we retain or modify the policy, that is, the policy whereby, in narrow circumstances, we might leave up a video that has been shared where a child is still at risk, and an option that we discussed with the Garda on Monday, is leaving the material up for a specified period, once it is known for sure that it is new material that has not been previously identified, after which the material would come down either way. That is one of the options we are considering. The other option-----

The material was shown for six years.

Ms Niamh Sweeney

I cannot defend that and I will not defend it because it should not have been six years. The Deputy is right.

I wish to make a brief comment on this matter. Should the first port of call not be the law enforcement agency? Should it not be making the call? It is not Facebook that should be judge and jury on what stays online in order to find the perpetrator. Surely it is law enforcement, be it the Garda in Ireland or Interpol, that should be the first port of call in making this decision, not Facebook.

Ms Siobhán Cummiskey

That is a fair criticism. It is a fair view, in this instance. That is exactly why the policy is under review, to see if that is the change that we should make. Is it that we should have time limits on it or is it that we should not allow this in any circumstance? By way of explanation and to give the committee some of the thinking behind it, what we read in local Malaysian media reports is that this child was identified to law enforcement when a neighbour of the stepfather involved viewed the content on Facebook and contacted local law enforcement.

That is the reason for the policy. In the narrow set of circumstances where a child is still at risk, is there a way somebody could help to bring that child to safety? It is difficult because there are so many interests at play. In this circumstance the welfare of the child is our top priority. One is balancing the interest of the child to be brought to safety against the interest of the child to have privacy in the matter. At the same time people are viewing the content. Weighing up these interests-----

Is that not where the relevant agencies responsible for child protection such as Tusla and the Garda in Ireland's case should be leading the way, not Facebook? Facebook should report the matter to the relevant agency and it is the agency's call. If the agency wants Facebook's assistance by leaving something online, Facebook could work with it, but Facebook being the sole arbiter in that regard is just not acceptable.

Ms Niamh Sweeney

The key point concerns who is the relevant agency. It is not always clear from the video where it is taking place. In the particular video that was flagged, there was speech from which one could tell that it was Malaysian, but that is not always the case. It is not always appropriate to take it to the Garda if one knows that it is not Ireland but does not know where in the world it is happening. The main difference-----

Facebook knows the account that posted the material.

Ms Niamh Sweeney

That does not necessarily mean-----

It might not be a bad place to start and let law enforcement carry on from there.

Ms Niamh Sweeney

If it is Facebook Live, it is much easier to pinpoint. However, such a video could be coming from any source. We have had contact with Tusla since the programme was broadcast and are in the process of setting up a meeting with its representatives. The Deputy is correct that these are issues we must revisit, but sometimes it is not quite as simple as just reaching out to the relevant law enforcement agency. There is a co-ordinated global effort led by the NCMEC in the United States to deal with child sexual exploitation. There is no equivalent organisation here to deal with any type of child abuse video or image that is not sexual in nature. However, we are re-examining this issue from scratch and I do not say that by way of an excuse or a defence.

Has Ms Sweeney answered Deputy Timmy Dooley's questions, before I call Deputy Michael Lowry?

Ms Niamh Sweeney

No, we have not reached them yet.

Perhaps Ms Sweeney might do that and then I will call Deputy Michael Lowry.

Ms Niamh Sweeney

Deputy Timmy Dooley said his main concern was that the material flagged was part of our training material. He also referred to other concerns. Many of the measures we have outlined speak specifically to the concerns he raised. The material used in the training decks was unapproved. The people who were delivering the training had changed it. We were unaware of this. We should have been aware of it and it should not have been possible for them to change it; therefore, part of the overhaul of the wider curriculum will mean that decks will not be alterable. They will only be-----

To clarify, in this instance, did CPL create the material without Facebook's knowledge?

Ms Niamh Sweeney

It had a deck to begin with, but then it updated it.

What did it have?

Ms Niamh Sweeney

A PowerPoint training deck.

CPL sourced material other than the material provided by Facebook.

Ms Niamh Sweeney

Yes and we should have been aware of it. One of the gaps we are trying to identify through the internal investigation is how it had come about that CPL had changed the material and we were unaware of it.

Can Ms Sweeney state categorically that nobody in Facebook was aware that the material had been changed or that this material was being used?

Ms Niamh Sweeney

I can state categorically that anything that was inaccurate in the decks had not been approved by Facebook staff.

Ms Siobhán Cummiskey

There are some examples we had not seen previously. One was the meme we saw, the cartoon image that we had never seen before. There was also the one with the child abuse imagery. It was a mistake and inaccurate. It should never have been used in a training presentation.

To clarify, Facebook never sanctioned use of the video of the child being abused as an example of material it would retain but mark as disturbing and put beyond the reach of someone under the notional age of 18 years.

Ms Siobhán Cummiskey

We are looking into that matter. There is an internal investigation to examine how they ended up in the training deck and exactly what happened along the way. The investigation is ongoing.

For the purposes of clarity, Facebook is not categorically denying that it was aware of the material.

Ms Niamh Sweeney

It is not so much whether it would ever be appropriate to use the video in a training deck but how it was being used.

The Deputy will recall from watching the programme again last night that the slide on the screen said "CHILD ABUSE VIDEOS" and then it was said that they should never be deleted and never ignored. We do not know what happened either side of that. That is what we are trying to establish. If it is exactly as presented, then that was incorrect. It is never a case of never deleting. In the vast majority of circumstances it would be a case of deleting except for those-----

I will simplify because this is important. Can Ms Sweeney categorically say - and she does not have to - whether it is possible that somebody in Facebook was aware that this video of the child being abused by its stepfather was being used as part of an overall training programme?

Ms Niamh Sweeney

I cannot answer the question. It is possible that somebody knew it was part of a deck. It is not possible that such a person knew it was being used incorrectly. He or she would have known that. If the video was being used to demonstrate the correct application of the policy there would not be an issue with that. We are trying to figure out what happened either side of the clip that was used as to whether there was an omission that would have explained why it was handled in that way. Either way, the decks were not approved. They included material we had never seen before. To be honest, I think-----

I will be honest with Ms Sweeney; I am none the wiser. What I am trying to understand is how material like that gets used to assist in training moderators. It was very clear to me when I looked at it again that this was an example of material that Facebook would not delete even if it was requested to delete it-----

Ms Niamh Sweeney

That is wrong.

-----and that, so long as it was put beyond the wall and it was indicated that it was disturbing material, anyone over 18 years of age could look at it and share it all around the place. I have concerns based on what Ms Sweeney has told me.

Ms Siobhán Cummiskey

I can tell the Deputy categorically that was a mistake and that example, that particular video, should never have appeared in that training presentation. I can categorically tell him that was a mistake, it should never have been there and it is no longer there.

Ms Cummiskey and Ms Sweeney are not prepared to say that nobody in Facebook was aware that it was being used by CPL however.

Ms Siobhán Cummiskey

As that is something we are looking into as part of the investigation, I do not have that answer for the Deputy right now. I am sorry that I do not have the answer for him right now but-----

Just for completeness, when the investigation Facebook has under way is completed perhaps it will provide the material to this committee, insofar as it can. Would the witnesses make a commitment on that?

Ms Niamh Sweeney

We will certainly follow up with the committee insofar as we can. Again, we would not share personal details or anything of that nature.

Facebook will report back to us. Are there any other outstanding questions from Deputy Dooley? I am conscious of other members trying to get in.

Ms Niamh Sweeney

On the claim about decisions being driven by financial interests, I understand that I am in a weakened position trying to make this point today because of what was captured on camera, but how this individual came to think this way is again part of the wider investigation. I agree it did not fall out of a tree into his lap. There is obviously some gap between what our actual policies and values are and how they were communicated to this individual. If the Deputy looks at the reaction of Retail Excellence Ireland and Core Media, which had questions and which paused their advertising spending, it is very clear that advertisers do not approve of this. In reality, if financial interests were driving all of our decisions, we would probably remove a lot more than we do, but I understand that on a day like today it is going to be hard to make that point. I can promise the Deputy that in the three years I have been at the company and the six years Ms Cummiskey has been there, that has never been communicated as a driving ethos.

On the stock market effect, the Deputy probably was not listening to the earnings call - why would he have been - but one of the points our chief executive officer made on that call was that he had flagged in the third-quarter earnings call of last year that we were going to vastly increase the number of people working on safety and security, both reviewers - there are now 7,500, up from 4,500 last year, so we have vastly scaled up there - and those who work across different parts of the company but who are exclusively focused on safety and security. He also flagged on this call that those investments are going to have an impact on the overall performance of the company. That was well flagged in advance and he stood over it on this occasion. Again, the ups and downs of the stock market certainly do not dictate how we approach our jobs.

I thank the witnesses for attending and for their willingness to attend this special meeting at short notice.

They have been asked to come before us on foot of a Channel 4 programme that members found very disturbing. The public reacted to the programme with anger, disbelief and trepidation, particularly in the context of the unknown. People are not fully aware of the extent of what is out there in the shadows and readily available.

Ms Sweeney has accepted that the decision not to remove the video in question was a mistake. I accept that this matter has been covered in a previous answer. It is inexcusable that something which was known to Facebook in 2012 was still available for viewing in 2018. It does not matter whether those who were viewing this vile act were under or over the age of 18 because the video should not have been available.

Ms Sweeney has accepted that areas of failure were identified in the "Dispatches" programme. How many other areas of failure have not yet been exposed? In my view, the ability of Facebook to police, edit and control content must be questioned seriously. Public trust and confidence in Facebook's efforts to self-regulate have been shattered. Facebook's ability to self-regulate has been badly damaged and must be questioned.

Everybody accepts that the innovation of Facebook has been phenomenal. The company has experienced spectacular growth. It has billions in turnover and it makes massive profits. In the public mind, what was initially seen as an ingenious way of communicating is now in danger of running out of control. There is a big fear among members of the public about this global entity. People are wondering what other damage it can do.

As Ms Sweeney stated, the reach and influence of Facebook are enormous. It has become a tool of abuse in a number of instances in recent times. It is a threat to consumers and society, particularly young people and the vulnerable. I will conclude by asking a question that gets to the nub of this issue. Does Ms Sweeney accept that the sheer numbers associated with the level of diverse activity on Facebook and other social platforms make it impossible for the company to control content and to self-regulate?

I thank the Deputy for the clear question he asked at the end of his contribution.

I join others in welcoming Ms Sweeney and Ms Cummiskey. I thank them for attending. I cannot sufficiently impress upon them the absolute horror of the public, including people we meet, about the availability of violent images, hate speech and pornography. The risks associated with this kind of thing are a cause of great concern to parents, teachers and the general public.

I have some specific questions. There is an obvious question that will be on the minds of those who are watching us live today and those who will review our proceedings in the print and broadcast media tomorrow. If Facebook can remove all of these images and take corrective action now, why did it not have a system of monitoring in place to prevent this action from being needed at this point? In other words, if Facebook can cure its ills now, why was it not able to cure them previously? Many people would react with incredulity to the concept that Facebook did not know about this until it was revealed in the Channel 4 programme and the subsequent media coverage. That would be disturbing on one level if it were true. If it is not true, then it is disturbing on a different level.

I ask the witnesses to address the public concern in that regard. It is the first question in the minds of many of those watching these proceedings.

It is admirable that an increase in staffing is being considered but, in light of the billions of Facebook users, the billions of euro in turnover generated and all the other factors involved, is the proposed increase too limited? Is an insufficient amount of money being put into the staffing of the monitoring section? I ask the witnesses to comment on that specifically. Are they satisfied by the proposal to increase the staffing level from 7,500 to 20,000? Are there plans for a further expansion of the section? That staffing level does not seem adequate in light of the global responsibility of the section. The staffing level at the time the witnesses last appeared before the committee was insufficient to address the political concerns in connection with Cambridge Analytica. The controls at that time were inadequate. Will the proposed increased controls be sufficient?

The witnesses stated that there are three options when dealing with flagged material: deleting it, ignoring it or marking it as disturbing. What is the effectiveness of marking a post as disturbing? How is that justified and within what limits is it done? Disturbingly, that categorisation may be an incentive or encouragement for some users.

I wish to revisit the issue of it being financially attractive to keep people online such that they are exposed to advertisers, and the memo of Mr. Andrew Bosworth in that regard. Do the witnesses consider that to be the case? Some evidence suggests that keeping people online no matter what is financially attractive and, sadly, that that is being done. Why is Facebook not far more proactive in terms of immediately eliminating cyberbullying or other such behaviour at source? Would it have a far more aggressive policy in that regard were it not for the financial attractiveness of keeping people online?

As a parent and former primary school teacher, I am interested in the very serious issues in regard to children under 13. Many parents watching this meeting are very concerned by the issue. Is Facebook sufficiently vigilant in terms of its method of identifying those under 13? I ask the witnesses to expand on that insofar as they can. A trainer featured on the "Dispatches" programme stated that "we just like pretend that we are blind" in that regard, which is a very scary quote. Is Facebook pretending to be blind? Is it adequately vigilant in terms of the method of assessment? If the witnesses cannot answer that today, I ask that they send the committee a briefing note because it is an area of enormous concern.

Will there be major changes in the training provided? The issue has been well discussed. When will such major changes be made and how will the committee know the process has been completed? Is the staffing level sufficient to provide such training? Will it be offered immediately?

Cyberbullying relates to my first point on children under 13 but is a distinct concern of itself. A study by the Law Reform Commission indicated that 16% of the large sample of young people surveyed had met somebody online. That is a shocking statistic on which I ask the witnesses to comment.

If I understood the witnesses correctly, Facebook accepts and welcomes the concept of a digital safety commissioner. Am I correct in that regard? Implicit in such welcome is an acceptance that there must be external as well as internal regulation. I am hopeful that a digital safety commissioner will be put in place and that the legislation in that regard, which will be scrutinised by the committee, will be accepted. However, even if that is done, it would not excuse Facebook from carrying out major in-house reforms.

Is it necessary to leave offensive material online such that it can be reported to the Garda in this country or the relevant law enforcement agency in other countries?

Could it not be either reported or taken down immediately? What is the merit in leaving it up? There would have to be an awfully shocking or compelling case made for leaving it up for the purposes of law enforcement, particularly in light of the harm or collateral damage that could be done. Should it not be a case of having to inform the law enforcement agencies, full stop?

I thank Facebook for agreeing - following on from what happened at a previous meeting - to provide data in respect of the volume of spending during the recent referendum campaign here. This is a very progressive initiative whereby we are setting standards that could and should be applied in other jurisdictions. The "Dispatches" programme is so worrying because instead of setting standards, we are seen to be failing to uphold them. The programme reflects badly on Dublin, on Ireland and on Facebook. In addition to the ethical issues that were raised, this is also an issue for our country.

When the committee met in private session, members discussed the fact that we will have to bring this matter to the attention of our European colleagues because the EU, as a collective political system, is the rule maker and Facebook is the rule taker. What is done in the context of making rules must be reflected in and must inform EU and Irish domestic legislation. I mention this because in recent days the House of Commons's Digital, Culture, Media and Sport Committee published its interim report, Disinformation and 'fake news', which, interestingly, looks at some of the issues we are examining. We should seek a meeting with that committee in the autumn in order that we might share our analyses. We have a particular interest in sharing analyses because we have to work out how we are going to regulate this online activity post Brexit. This is an issue the House of Commons committee raised in its report. I suggest that what I have outlined is one action that this joint committee could take. We should visit the committee in London or invite its members to come here in order that we might share with them details on how we regulate. This is particularly important in this instance because the "Dispatches" programme revealed that much of the issue involves either regulation or the taking down of UK-specific material. This is one of the reasons I would like us to travel to London to meet our counterparts.

Before I question the representatives from Facebook, I hope to get agreement from members that we could send a cross-committee message to Google asking it to do what Facebook has done in offering to share material relating to online advertising during the referendum. The material, or the ability to get that information, would be of interest to the House of Commons committee because I am aware that it is looking for similar material on the UK referendum.

I shall now turn to the specific issue of take-down material, which was the cornerstone of the "Dispatches" programme. While I was shocked by the content, some of which is personally very upsetting, it raises the question of the need for a new online business model. The House of Commons report makes the same case. We need a new business model for online social media companies. Under the current system, the consumer is the product. There is much talk about community and the community of users. In truth, however, the online media and social media companies are selling our attention spans and our addiction to mobile phones, particularly through the endorphin rush we get from material on news feeds, on our friends' feeds and so on. Unfortunately, that business model skews our attention towards material which induces anger or, I would argue, material which does not make for an attractive form of engagement. As both the Internet of things and the volume of data evolve, we will be obliged to move away from that business model because too much power will be held by certain companies that have access to all our data and that will know everything about us. The business model in which we are the product has to change and we must, therefore, have citizen ownership of data.

We also need to move towards a liberal online system which is respectful of different views but which is not libertarian in outlook whereby, in a sense, anything goes.

That is something we have learned over the centuries in terms of how our press works. Newspapers and television and radio companies are not allowed to put up whatever material they want. There are rules and regulations and it is increasingly clear that we need such regulation online, as well as in traditional media.

In terms of a new business model, I agree with the analysis of the House of Commons, which stated we need real transparency with regard to age. Has Facebook engaged with the UCD Geary Institute on the transfer of information and data during the referendum? When could we expect such information to be available?

With regard to transparency, why is it not possible for Facebook to clearly delineate users' ages? Should it not change the business model in that regard? Rather than what we saw in "Dispatches", where uncertainty was allowed as to whether one was dealing with a child online, we need real transparency in order that everyone knows whether a child is aged under 13. We need real transparency in respect of who the people on the network are.

What is Facebook's understanding of the number of fake accounts on its network? Is the figure still 10% or 20%? How effective has it been in removing fake accounts? If one does not know with whom one is dealing in the online world, that undermines its entire business model.

Another question relates to those in Dublin who work in this area. I am interested in those directly employed by Facebook, as well as those working for CPL and other contractors. What is the staff turnover in the area? How long does someone typically stay in that sort of job? How difficult is it to get people to work in the area? Can the witnesses give me an indication of the typical pay for those working in Facebook and as contractors? I do not want to highlight individuals and want to respect people's privacy, but it would be interesting, as background information, to know the details of pay, conditions and, critically, turnover. If turnover is high, it is a sign there is a real problem.

This is important because the issues raised in the programme made me think of the horrific case of Eoin McKeogh, a video of whom was posted online in respect of taxi fare evasion. He had to go through the courts in order to get justice. Max Schrems came here to look for justice and if he had been listened to, rather than fought against from an early stage, we might have avoided some of these difficulties. In a terrible example, images of a young girl attending a concert in Slane some years ago were shared online. She was traduced in the way the images were allowed to be posted.

Given our experiences, we have an obligation to address this issue. We have experience over five or seven years of how the system has not worked, in terms of taking down material. It is in memory of those who have tried to get justice that we should act.

How many times has Facebook been prosecuted in recent years in the courts in the UK or Ireland in cases where someone has expressed concern about material which has not been taken down? I understand that in the context of Irish defamation law, Facebook is not directly responsible for material but if it is informed of something, it then becomes responsible. Are there data on the number of cases in which Facebook has been involved where material was not taken down quickly enough?

We will take a five-minute break. Do the witnesses want to answer the questions first?

Ms Niamh Sweeney

I will answer the questions first. I agree with Deputy Lowry that six years is too long for us to have left the video on the platform given that the child had been brought to safety in 2012.

I hope I am answering the right question; I wrote down my answer but forgot to write down the question. The issues the Deputy raised are serious and are being taken seriously. It should be remembered that the undercover reporter involved was at the CPL location for seven weeks and we saw one hour of footage. I hope that much of the remaining footage captured Facebook's system working, with people executing policies properly and making the right content decisions. We obviously do not have access to that content, but I do not want to suggest to the committee that the system is entirely broken.

The hate speech test referred to in our opening statement was overseen by the European Commission. We have had some success in self-regulating across all of these issues. Success is perhaps the wrong word to use but we have been most effective in addressing child sexual exploitation imagery, for which there has been a global co-ordinated effort of which we are a key member. However, where there is human review there will be human error and, therefore, I cannot tell the committee today that there will be no examples of mistakes being made in the future. Unfortunately, that is the situation we are in. We are trying to deploy technology more often and with greater effect, and that will improve over time.

Deputy Eamon Ryan made a point about some of the upsetting videos that have circulated in Ireland in the past. That content pre-dated the media matching technology or image and video matching technology that can prevent that kind of video from being re-uploaded. I think the specific example from Slane would not happen today because it would be prevented from being re-uploaded. That does not mean that people would not continue to store the video and share it peer-to-peer but it certainly would not appear on our platform again.
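To make the mechanism Ms Sweeney describes a little more concrete: media matching systems generally work by computing a fingerprint of media already identified as violating and checking each new upload against that list. The sketch below is illustrative only and is not Facebook's actual implementation; it uses an exact cryptographic hash, whereas production systems typically rely on perceptual hashing so that re-encoded or lightly edited copies still match, and all of the names in it are hypothetical.

```python
import hashlib

# Fingerprints of media already confirmed as violating. Real systems use
# perceptual hashes so near-duplicates are also caught; this exact-match
# version is only a simplified illustration of the idea.
known_violating_hashes = set()

def fingerprint(media_bytes: bytes) -> str:
    """Compute a fingerprint for a piece of media."""
    return hashlib.sha256(media_bytes).hexdigest()

def register_violating_media(media_bytes: bytes) -> None:
    """Add confirmed violating media to the block list."""
    known_violating_hashes.add(fingerprint(media_bytes))

def should_block_upload(media_bytes: bytes) -> bool:
    """Return True if the upload matches known violating media."""
    return fingerprint(media_bytes) in known_violating_hashes

# Usage: once the original video is registered, a byte-identical
# re-upload is rejected before it ever appears on the platform.
register_violating_media(b"<original video bytes>")
assert should_block_upload(b"<original video bytes>")
```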

Mr. McKeogh had to go through the courts to get access to justice. Ms Sweeney should acknowledge that was a mistake, in hindsight.

Ms Niamh Sweeney

I am not familiar with the details of that court case but I accept he undoubtedly had a difficult time.

On the numbers and level of diverse activity and whether we think it is possible to control it, one point we have not made yet is that the vast majority of people who use our service do not use it in the way we are discussing today but rather use the service to connect with friends and family or to stay in touch with local politicians or businesses. They do not encounter much of the material that we are discussing. I would love to carry out a quick poll of the people in this room to find out who uses Facebook. How many use Facebook because of the shocking content on it or because there are other useful reasons to use it? I will not put anyone on the spot but it is important to note, as part of the wider conversation, that the ordinary experience a user would have on Facebook does not generate millions of reports. Many millions of reports are generated about the other kinds of content we are discussing but I do not want those present to leave this meeting today thinking that two billion people have a problem every day and that we are dealing with the reports generated from those two billion people.

Ms Siobhán Cummiskey

I thank members of the committee for their questions. Deputy Lowry asked why a video had been left on the site when Facebook had known since 2012 that the child in it had been identified to law enforcement. That is a valid question. As soon as we became aware that the child and the perpetrator had been made known to law enforcement, we took action to delete that content and use image matching technology to stop it from being uploaded. However, our failing was that we did not know. We should have known and we absolutely accept that is our failing. We are doing a number of things to make sure such a failing does not happen again. We are reviewing this policy to determine whether there is another way to address this and whether there should be time limits. That consultation is being carried out with experts in this area. Any update to our policy is done in consultation with international experts, including academics, safety organisations and anti-racism organisations.

I will answer the questions asked by Senator O'Reilly in turn. I want to be clear and state that we do not allow hate speech on Facebook. We have set out our policy on that clearly. We do not allow hate speech against people on the basis of ethnicity, religion, race or sexuality. We divide hate speech into three tiers - tiers 1, 2 and 3 - which are set out in our community standards on our website. We do not allow calls for violence against people on the basis of ethnicity, migrant status, etc.

We do not allow dehumanising speech and so on against those people either. I encourage members to have a look at our policies and we are quite clear on that. As Ms Sweeney mentioned, we also have been audited by the European Commission on our ability to remove content that is illegal or violates our policies on hate speech.

On the Senator's valid question as to whether we have enough people and are taking enough action, it might be interesting for the committee to know that we use a combination of human review and technology to tackle bad content. We issued our first transparency report on the removal of content in April. Moreover, in the first three months of 2018 we took action on 3.4 million pieces of graphic content on Facebook, using technology in the majority of circumstances. We also use real people who speak the language to review particular types of nuanced and contextual content, such as hate speech and bullying, and we are increasing our staff in that area. We accept we need more people to do this. We are increasing the number of our safety and security staff from 10,000 to 20,000 to make sure we do that well.

There was also a question about marking content as disturbing. It is important to know that sometimes, people want to use social media to bring attention to the terrible things that happen in the world. We want to allow people to bring attention to those things in a way that is appropriate and limited. In certain limited circumstances, therefore, someone can share content that otherwise might be disturbing if he or she is doing so to shed light on these terrible things. We want social media to have a role in bringing terrible things to light but we do that in a responsible way. Marking this content as disturbing makes sure that people under the age of 18 cannot view it and that a user must click through a warning screen.

It also is important to note that the vast majority of people who use Facebook never come across disturbing content. I have been on Facebook for 11 years and I have never seen content of that type or anything with a warning screen on it in my news feed. I have never seen anything disturbing. That is reflective of most people who use the service. We are trying to take action against the people who are involved in the worst forms of sharing content.

We take a number of actions to protect those who are under 13. We accept that we were not doing this well enough, and since the Channel 4 "Dispatches" programme we have been making sure to put on hold any account that is brought to our attention in any way. I was interested in the Senator's background as a teacher. We have had teachers using our contact form to report their entire class as being under 13. We encourage people to do that. We do not want children under 13 on our service and do not allow it.

I will address the explicit question on what we are doing about training and then conclude, because I believe Ms Sweeney has answered the other questions. We are doing three main things. First, we are increasing oversight at the CPL site and all our outsourcing sites. We are seconding Facebook employees to the CPL site. I was asked about the timeline for this measure and it has already begun and will continue to take place for at least the next six months. We will then review and see if it needs to be in place permanently.

In addition, we have made sure to correct any errors in the training documentation and retrained all the trainers at CPL as soon as we became aware of the "Dispatches" programme. That is an ongoing process, however, not something that ended with one training session. It is going on constantly. There are twice-weekly meetings between my team and CPL to provide direct support and allow CPL's staff to ask questions. That is in addition to the other training processes that we have in place.

A minute ago, Ms Cummiskey said Facebook does not allow children under 13 on its network, but one of the many interesting things revealed in the "Dispatches" programme is that it does. Their presence is not encouraged or wanted but it does happen. In a sense, it is allowed. Going back to the fundamental reappraisal of the business model, has Facebook looked at ways in which it could be certain that no one who is less than 13 years old has a profile or page on the platform? It does happen. The programme clearly showed that in effect it is allowed, even if it is not desired.

Ms Niamh Sweeney

That has changed, which we highlighted in our opening statement and answers to questions. Prior to this, an account would only be put on hold if it was reported specifically for having an underage user. It is now the case that if a reviewer has reason to believe that a user is underage, he or she will put the account on hold regardless of why it was brought to his or her attention in the first place.

On that point, I heard Ms Sweeney say that this applies when an account comes to Facebook's attention. Could Facebook not consider a pre-emptive measure at the point of entry to make that situation de facto impossible, rather than relying on a review or on somebody making a report?

Ms Niamh Sweeney

This is a matter the Minister of State, Deputy Jim Daly, brought to our attention at the start of this year. Others have also looked at this. It would necessitate a requirement that any social media account be linked to a public services card or some other official identification. For us to be sure, before anybody signs up for the service, that he or she is not under 13 years of age, we would have to take identification material from every user who signs up. In a reply to an earlier question, I mentioned a key principle in data protection law captured in paragraph (c) of article 5 of the general data protection regulation, namely data minimisation. It provides that the data gathered must be adequate, relevant and limited to what is necessary for the purposes of the processing. There are others who might make that case to the Deputy more convincingly than I can - he might say I have other motivations - but an insistence that anybody who wants a social media account must provide some State-backed identification would run into difficulty because of the limitations on what we can ask people to provide. It is a bit like a vendor insisting that one share one's date of birth in order to buy something online even though there is no reason the vendor would need it. I understand this is not a simple, straightforward area and many people have different views on it.

Is that the reason Facebook still has false accounts, in a sense, because it cannot, as a result of data privacy rules, ensure that it has authentic people in every case?

Ms Niamh Sweeney

There is probably a distinction worth making here between fake accounts and fake names. We remove millions of accounts every day. I do not have the exact figures to hand. We release numbers on a quarterly basis on the number of fake accounts we have deleted. That speaks more to the issue the Chairman raised with respect to those inauthentic co-ordinated accounts sometimes known as bots, although I hate using that word because bots can be good as well. They are useful in everyday life, such as when we have automated interactions with different businesses online. Those are bots as well. There is the notion that we have fake accounts that do not have a real person behind them, that are being operated from a bunker somewhere, and while I will not reach for the obvious clichés, those fake accounts are picked up by our systems.

Then there are people who operate under a fake name. The only way we would know that is happening is if it is reported to us because, to all intents and purposes, these accounts are real in every other way. There is a real person operating behind the account and connecting with people he or she knows online. They are just not using their name. In that instance, if it is reported to us, we would ask the person to provide proof of identification and, if the person does not provide it, he or she will lose access to his or her account. If the person does provide it, however, and, for example, it is Eamon Ryan who has changed his name to Tim Lombard, we will accept his name is Tim Lombard and he will be able to continue to use the platform.

A few other questions were raised in that round of questions, one of which related to meeting people online. I was not aware of the number Senator O'Reilly shared but I am not surprised by it. Did the Senator say 60% of people have met somebody online whom they do not know in real life?

Sixteen per cent.

Ms Niamh Sweeney

One, six?

Ms Niamh Sweeney

That is lower than I would have anticipated. Was it specifically in a threatening situation or simply anybody at all?

No, they had met online. Each one was not itemised.

Ms Niamh Sweeney

I am not surprised by that because many young people are engaged in gaming online where they would end up competing with a person they would not necessarily know but who is very good at a particular game, or there are many LGBTQ support groups. Organisations would have online fora where people who have yet to come out can engage with each other. Those are the good ways it can happen but, obviously, there are many bad ways as well. That speaks to the Government's action plan around education and having a central resource where all information can be accessed on how to avail safely of the opportunities presented by the Internet while also protecting oneself and being aware of the risks of being online.

Deputy Eamon Ryan raised many questions. Regarding his question on the referendum data, I have engaged with the Geary Institute. I had a long conversation with Diane Payne from the Geary Institute last week. We spoke for an hour and she followed that up with an email, to which I have not yet responded because I was preparing for this meeting.

I will follow up with her on that issue, but because of the time of year, it will probably be the end of August before we can get going on it in earnest.

The Geary Institute has not yet agreed to work with us. It probably has other things it would like to do with us that I am not in a position to promise just yet, but I will continue the conversation with Ms Diane Payne today.

The age question was addressed.

I do not know what the figure is for the turnover of staff, but I can check. I am not sure if I will be able to tell members afterwards, but I do know that across all of our outsourced centres, the rate of pay is 20% above the minimum wage in every jurisdiction. It is my understanding the rate of pay for entry level jobs in Ireland is somewhere above the average industrial wage, but our staff also benefit from full health insurance and all of the perks that go with being in a normal Facebook office, which include three meals a day and all of the other resources made available to us on site.

I do not know the answer to the question about the number of cases before the courts. In the cases in which members will have seen a prosecution brought for defamation on our platform or somewhere else online, the way it works is that the defamed person takes a case against the defamer. If the identity of the defamer is not obvious, the court system can be used to compel us to give the true identity of the person if they are not using their real name. It is important to state it is not the case that our hand has to be forced, but there are legal reasons we cannot disclose somebody's identity, other than what they have revealed online, without the legitimacy of a court order. We are usually caught in the middle, but the person who has been defamed takes up the matter with the person who has defamed them and we work with the courts to provide information, as required.

There are cases in which we have not moved quickly enough to remove material. I do not know the number of such cases.

If Ms Sweeney could provide it in written form, it would be appreciated.

Ms Niamh Sweeney

Okay.

There was one other point which was made by Deputy Eamon Ryan. It was about the amount of time people spend on the platform and whether there is a case for a new business model. We are funded by advertising. That is the reason the service is free. There are people who could probably apply their minds better than I to come up with a different way to do it, but it is difficult to imagine how a platform such as ours could have grown otherwise. It requires a huge investment to keep developing and providing the service and building the infrastructure needed. That costs a lot of money, which must come from either advertising or some other source.

An announcement is being made today that we have built new tools to help people to spend their time better online; it was probably made while we have been sitting in this room. It is about investing engineering resources to build a system whereby, if someone decides they want to spend only 20 or 30 minutes on our platform over the course of a single day, they can set a limit and will receive a reminder when they have hit it. We are trying to make it easier for people to control their time online because I know that it can be difficult to self-regulate in that regard.

I am conscious that we need to take a five-minute break.

I will be very brief.

I ask the Deputy to be very brief.

On that point, Ms Sweeney is right. Facebook is a hugely powerful and useful platform. I use it to advertise online because I can target people. We have sought a similar level of transparency from all parties. Members should start to seek such transparency in their own parties, given that we are looking for it from others.

On the power of Facebook, I will give a personal example. The other day I was on Airbnb looking for a house in County Mayo. The next day, on Facebook, Airbnb sent me advertisements for accommodation in the same location. Facebook has a power of control over data that no one else has. Google, Facebook, Twitter, Airbnb and LinkedIn have the power of an oligopoly which we cannot allow to go unchecked and unregulated. There is a real question surrounding the business model. As the use of data and automation increases, it is a power that will kill every other business model. There are issues for policy makers in how we regulate it.

I am suspending the sitting for five minutes to give people a break. When we come back, we will take questions from Senator Tim Lombard and Deputy Bríd Smith.

Sitting suspended at 2.20 p.m. and resumed at 2.25 p.m.

That was useful, but as Facebook moves towards home energy management and directing vehicles, my concern is with its possession of data and material. When that kind of crossover occurs, the power of knowledge and information, and the ability for that information to be corrupted by external forces, become a real concern.

Ms Niamh Sweeney

It is a highly regulated space. In time, people will become more used to how the technology works.

To illustrate the point, when Deputy Eamon Ryan researched a house on the Airbnb website, I presume he did not book it, or maybe he did. One of the failings of the system is that advertisers sometimes do not realise someone has already bought a dress, for example, and the person will continue to receive advertisements for dresses. The Deputy went on Facebook the next day and was served an advertisement for Airbnb. In that case, Airbnb does not know who the Deputy is. It knows that somebody who was logged in to a Facebook account went to its website and looked at houses in Mayo. This information allows it to tell Facebook it would like to serve an ad for houses in Mayo to the member of our service. However, it does not know who that member is. One of the things people find most worrying about this is that they think they are being watched. The way it works is that Airbnb will have embedded a piece of code on its website which pings a unique Facebook identifier back to us without it ever knowing who the Facebook user is. Users can operate in anonymity on the web without the websites they are visiting knowing who they are. This service allows the advertiser to reach Facebook users but there is a kind of Chinese wall in place that keeps the user's identity hidden.
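To illustrate the flow Ms Sweeney outlines, a retargeting pixel of this kind can be thought of as a small piece of code on the advertiser's site that reports an event together with an opaque browser identifier, which only the platform can map back to an account. The sketch below is a simplified illustration under that assumption; the names (AdPlatform, receive_pixel_event and so on) are hypothetical and this is not Facebook's actual pixel API.

```python
# Illustrative sketch only: hypothetical names, not a real advertising API.
# The advertiser's page fires an event ("viewed houses in Mayo") together
# with an opaque browser cookie. The platform alone can map that cookie to
# an account, so the advertiser never learns who the user is.

from dataclasses import dataclass, field

@dataclass
class AdPlatform:
    # Internal mapping from opaque browser cookies to account IDs.
    cookie_to_account: dict = field(default_factory=dict)
    # Interests recorded per account, keyed by advertiser and topic.
    interests: dict = field(default_factory=dict)

    def receive_pixel_event(self, cookie: str, advertiser: str, topic: str) -> None:
        """Called when the advertiser's embedded snippet fires."""
        account = self.cookie_to_account.get(cookie)
        if account is None:
            return  # Not a logged-in user of the platform; nothing to record.
        self.interests.setdefault(account, []).append((advertiser, topic))

    def audience_for(self, advertiser: str, topic: str) -> list:
        """Build the audience internally; the platform serves the ads itself
        and these account IDs are never handed back to the advertiser."""
        return [acct for acct, items in self.interests.items()
                if (advertiser, topic) in items]

# Usage: the platform knows cookie "abc123" belongs to account 42; the
# advertiser only ever supplies the cookie and the topic of interest.
platform = AdPlatform(cookie_to_account={"abc123": 42})
platform.receive_pixel_event(cookie="abc123", advertiser="airbnb.example", topic="houses-in-mayo")
print(platform.audience_for("airbnb.example", "houses-in-mayo"))  # -> [42]
```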

Facebook would not throw away such valuable information.

Ms Niamh Sweeney

No, it definitely would not. The Deputy is right that this is new territory for many people. However, it is a highly regulated space and Facebook cannot operate with abandon in that space.

The ad I received was for houses in the village in which I looked for a house, not Mayo in general.

I will bring in Senator Lombard here because I am conscious he and Deputy Smith are waiting to ask questions.

I welcome the representatives of Facebook to today's hearing. At their previous appearance a few months ago, I asked a question about hate speech on social media platforms. I acknowledge the written response I received indicating that 3,000 staff had been added to the community operations team, bringing the total number to approximately 7,500. Ms Sweeney referred to a figure of 20,000 people. Will she explain the difference between those two figures? Does the figure of 7,500 refer to monitors? What is the profile of the other 12,500 people?

This has been an interesting debate and the reaction of members of the public in the past few weeks has been amazing. They regard the lack of regulation of the Internet, Facebook and other companies as a significant issue. They believe that the less regulation there is of these entities, the more profitable are their operations. I take into consideration a previous statement to the committee that an increase in the number of people involved in regulation would reduce Facebook's profitability. That is why there has been a dip in the value of the company on the stock market. Will Facebook executives, specifically the company's aforementioned vice president, look forward to increased regulation if it affects the bottom line? Will the general public be well served by such regulation?

Do those with responsibility for corporate governance in Facebook hold the view that less regulation means more profit and makes the company more desirable to the stock market? Is it Facebook's view that employing more staff will cost it more out of its own pocket? There are two competing views here: more regulation means Facebook will make less profit. How will Facebook square that circle? The general public's view is that Facebook is unregulated in many ways, as has been shown by the events of the past few weeks and the data breaches of the past few months.

We also spoke about the Facebook Live feature. Many people have expressed concerns to me about whether it is possible to regulate something that is instant. A person can broadcast a live feed from a nightclub or anywhere else. How can this live feature be monitored? How effective is a 24-hour time lag on the monitoring system if the footage is already available on the site? As Facebook changes and evolves, will the lack of regulation of its live feature prove a stumbling block? I cannot see how Facebook or any other entity could monitor a feed that is going out live to the web.

Facebook Live, as an entity, cannot be monitored and this issue cannot be solved because there is no 24-hour time lag within which the company could identify that there is an issue with a feed. These are big issues being discussed in society at large.

Deputy Eamon Ryan raised the serious issue of people looking up something online and then receiving advertising feedback the next day. This process needs to be explained because members of the public believe Big Brother is watching every move and angle. We famously saw the picture of the chief executive of Facebook in his own headquarters with a sticker placed over the webcam because he was not sure whether it was feeding images of him to the Internet. People fear they are being watched online without their knowledge.

On the issue of underage access to the platform, specifically by those under 13, certain forms of identification, as opposed to PPS numbers, are required to buy a phone, apply for a credit card and open a bank account. Modern technology facilitates virtually everything. I do most of my banking online and send most of my documents to my bank online. Why can Facebook not ensure that people of a certain age profile do not access the platform?

I welcome that Facebook has announced it is introducing a limiter to make people aware when they are spending an excessive amount of time online. That is very important because conversations in living rooms have died as a result of the number of people looking at their phones and being online. Today's announcement is positive but Facebook should go a step further by requiring users to upload their ID online and then verify their account. Such a measure would also help to solve the problem of fake accounts and similar issues. It would create the line of command and the traceability that are required.

There are major issues here, which require regulation whether at a national or European level.

I will bring in Deputy Bríd Smith whom I thank for her patience.

I thank the witnesses for their attendance. I had intended to ask a series of questions but I was intrigued to hear Ms Cummiskey state that Facebook does not allow hate speech and will not allow people to call for violence against anyone else on its platform. One of the things nobody else has mentioned regarding the "Dispatches" programme was the special status given to Mr. Tommy Robinson and the Britain First party for a long time before that. It was not until the party was banned and some of its leaders were sent to prison that its Facebook pages were taken down. Nobody can deny that there was a plethora of hate messages on the Facebook pages of Tommy Robinson and Britain First, an openly fascist party which continually attacks migrants and migrant communities online, physically and politically. I ask the witnesses to explain to me the reason Facebook applies light-touch regulation to political parties until they reach a certain level of popularity - I believe Tommy Robinson's page had approximately 900,000 followers - at which point they are elevated to a different category where Facebook deals directly with the posts, rather than their being dealt with by ordinary moderators. According to the policy in some of Facebook's documents, if a page appears to have five or more hate messages, it violates the company's rules. Nevertheless, the more popular pages, such as the far-right page I cited, are protected from these rules. If that has happened in this instance, will it happen again or is it happening repeatedly? That issue needs to be discussed.

The other thing that emerges from this is that there are different strokes for different folks across the globe in Facebook's policies. I will take one example from a piece on propublica.org, which was written well over a year ago, when much of its content was drawn to Facebook's attention. It is good the witnesses have come in with their hands up today but it raises the question as to why this did not happen before. What is really going on beneath this? Mark Zuckerberg talked about helping people to understand the world in a better and more inclusive way. I agree with others who say that Facebook is a wonderful tool for the population of the planet.

However, we must look at what drives it. Is it Mark Zuckerberg's original idea or is it revenue and profits, 98% of which are generated from advertising?

I invite the witnesses to consider two decisions by Facebook. Following a terrorist attack in 2017 in London, a US Congressman wrote a post in which he called for the slaughter of radicalised Muslims and wrote, "Hunt them, identify them, and kill them. Kill them all. For the sake of all that is good and righteous". A month earlier, a #BlackLivesMatter activist and poet named Didi Delgado wrote a post stating, “All white people are racist. Start from this reference point, or you've already failed”. That was all she said. Her post was removed immediately, while the Congressman's was not removed and remained up.

Another comparison worth looking at concerns Palestine and Palestinians. While Facebook states it protects people from extremism, 70% of Palestinian Facebook pages are removed and Palestinian activists use a hashtag, #CensorPalestine, to express their frustration at this. In the case of Israeli posts expressing hatred, however, something like one anti-Palestinian post goes up every 46 seconds and they are not removed. Is this related to the fact that an official meeting took place in September 2017 between Facebook and the Israeli Government, which threatened to ban Facebook from the country unless it did something to curtail the activity of Palestinians? There is no curtailment of hate against Arabs and Palestinians from Israeli Facebook posts. This shows that Facebook policies appear to favour the powerful and the elite over those who are oppressed. That is also the case with migrants. Facebook has a policy, which Ms Cummiskey repeated, that it does not allow hate speech. However, there are many instances of posts containing hate speech against migrants. We quoted several that were exposed by the programme. Ms Cummiskey said there is a quasi-protected category for migrants. They are sort of protected, but not really. For instance, Facebook protects against calls for violence and dehumanisation in general but, in one document, allowed migrants to be referred to as filthy but not as filth, because Facebook distinguishes between the adjective and the noun when used in a post. That does not deal with the fact that this is one of the most vulnerable groups on the planet. We have seen them drown in the Mediterranean, held behind barbed wire and so on, but Facebook has a quasi-category in which it places migrants for protection. These are questions relating to how Facebook monitors things.

I return to the related question of training. Deputy Dooley may have asked this already but were the training materials used for moderators created by Facebook or by CPL? Will that continue to be the case? It appears as though Facebook has created a big headache for itself by outsourcing moderation. The Facebook representatives here have stated that these workers earn about 20% above the national minimum wage, but outsourcing is obviously being used to save money for the company. There is no other reason to use outsourcing. If that were not the case, Facebook would directly employ the moderators and make the company directly accountable for moderation, rather than being able to say that CPL made all the mistakes. If that is the case, this committee should invite CPL before it to question it on how it does its business. Ms Sweeney and Ms Cummiskey seem to be saying that they did not know what was happening and that CPL changed the criteria for training. That is a real problem. If Facebook had not outsourced this and was directly responsible for moderation, this might not have happened. Maybe it should reconsider the question of outsourcing. These concerns have been raised previously.

I have some questions on Facebook's submission to the committee. It stated "Ongoing training will now continue with twice weekly sessions to be delivered by content policy experts from Ms Siobhán Cummiskey's team". From this, I assume this has not happened in the past but will happen now. Is that the case?

What sort of training has Facebook offered its moderators up to now? Is this a new type of training Facebook is offering or is the company engaging in a review? Ms Sweeney stated that all content reviewers will continue to receive regular coaching sessions and updated training on Facebook policies as they evolve. Has that not happened before? Will Ms Sweeney and Ms Cummiskey please explain how these wonderful ways of monitoring things have suddenly dawned on Facebook? Ms Sweeney also stated that Facebook will continue to deploy spot-testing at its review centres. How often does Facebook carry out such tests? Is enough consideration given to spot-testing? Why is Facebook suddenly stating that it has to increase the frequency of this testing?

The other question I want to ask about moderators relates to how they are paid. Deputy Eamon Ryan asked a similar question. There are reports of moderators in the Philippines being paid $350 per month and those in the United States being paid $15 per hour. Ms Sweeney represents Facebook in Dublin. The company is based in Dublin and it pays its taxes through Dublin. It gets off scot-free from paying a huge amount of tax across the globe on the basis of the advantage it enjoys because it is based in Dublin. Last year, its total revenue was $40 billion, 98% of which was from advertising. When one looks at those figures, it is extremely difficult to believe the statement that the company is not driven by financial interest. We are talking about a huge amount of revenue from advertising. It can become like a contagion when the attitude is that the company is making loads of money so it will keep going because that is what delivers. The latter is different from the attitude in Mark Zuckerberg's statement in the context of helping people to understand the world around them. As Facebook pays so little tax because it is based in Dublin, would it consider co-operating with the introduction of a levy that would ring-fence millions, if not billions, to help address the negatives associated with the operation of Facebook, in other words to help with, for example, mental health services or with the negative impact on minorities?

Does Deputy Dooley have a quick question? If he does not, I am happy to go back.

I have a number of final points to make.

The Deputy should be brief because Deputy Stanley wishes to pose a question.

I will be brief. Ms Sweeney referred to the possibility of appointing a digital safety commissioner but I get the sense there is some reticence. There are parallels in that reticence with the language of the Taoiseach. The Taoiseach has made his interest in Mr. Zuckerberg and Facebook known. He has shown great excitement about both. On the most recent occasion on which I saw the Taoiseach on television in the United States with Mr. Zuckerberg, I almost expected him to use his mobile phone to take a selfie. The Taoiseach seemed so excited about their meeting. Leaving that aside, has Facebook lobbied the Government? Will Ms Sweeney or Ms Cummiskey outline any communications Facebook has had with the Government regarding the specific matter of appointing a digital safety commissioner?

I will bring Deputy Stanley in very briefly.

Some of my questions were not answered. I have some very direct questions. If Facebook does not harvest data from children under 13 years of age, how are those data filtered out? Ms Sweeney referred to the internal investigation being carried out by Facebook in respect of CPL. Is the investigation a vote of no confidence in the CPL staff who work for Facebook? Will Ms Sweeney explain that, because it cuts to the nub of the issue? Will she address the fact that moderators felt free to direct staff to leave content up by saying that it was all about money?

I forgot to state that the digital age of consent is 16. Ms Sweeney and Ms Cummiskey have mentioned the age of 13. Does Facebook intend to change its parameters and policy to take account of the changes envisaged on that issue?

I have one minor question on the training, which I forgot to ask. Will Facebook make the training material available to the committee and the public in order that we can all see how the company does its business, what criteria it uses and how and why these should be applied? It would be very useful for the committee and the public to have access to all the training material.

I will bring the representatives from Facebook back in again on the final round of questions.

Ms Niamh Sweeney

I might address the last question first and work backwards because if I do it the other way, we will lose out. On the training manuals, in April this year we published our comprehensive policies for the first time. Prior to that, we had always published our community standards: there was an external-facing version and a much more complex set of internal standards that dealt with the specifics of various situations, operating on a very fine line between what violates and what does not. Due to calls from many of our safety partner organisations for greater transparency - I talked about this in the opening statement - we published the full guidelines in April of this year. A considerable amount of very detailed information is available now. In a couple of very rare exceptions, we do not say exactly where the line is because that would just tell people who want to game the system how to do it. However, for the most part, everything we have is online.

Deputy Dooley asked about perceived reticence. There is certainly no reticence. What I said - I was actually quoting the Law Reform Commission-----

I interrupt Ms Sweeney to advise that there is some interference from phones. To assist with broadcasting, I ask members and witnesses to put their phones into flight mode.

Ms Niamh Sweeney

On the notion of potential downsides for freedom of expression, I was quoting the Law Reform Commission's report from 2016. Those were the report's words rather than mine. The Law Reform Commission is very alive to these challenges. We had meetings with Ray from the Law Reform Commission and his team over the two years they were preparing that report. Unfortunately, the draft legislation did not include a definition of harmful communication. When the detailed scrutiny happens, other organisations will wish to come in and speak to that. The committee might have more faith in them than it has in us today.

On lobbying, when the Taoiseach met our CEO, Mark Zuckerberg, in Menlo Park in November, we encouraged the adoption of the Internet content governance advisory group's report of 2014, much of which was captured in the action plan for online safety published last month. We also discussed the potential impact on freedom of expression of the Law Reform Commission's proposed legislation.

On the digital age of consent being set at 16 and whether we would change our minimum age, that is not happening. WhatsApp has set a minimum age threshold of 16, but it is still 13 for Facebook. There was much confusion as this debate was playing out. There are certain conditions attached to consent within the general data protection regulation. For example, it is not permissible to bundle consent or to ask for more than one thing in one go. There cannot be a detrimental effect if consent is not granted; a company cannot deny use of a service because somebody does not consent to the processing of their data. Consent is also reserved for very specific types of data processing.

I am getting into the nuts and bolts of it now. There are six different bases for processing data: consent, contractual necessity, legitimate interest, a legal obligation to do so, and one or two others. Most of our processing is done on the basis of contractual necessity, which means that to provide customers with the service we are contracted to provide, we must process the data. For example, if Deputies Dooley and Stanley and Senator Lombard were all friends with each other, for Facebook to surface the posts of Deputy Stanley and Senator Lombard in Deputy Dooley's newsfeed, we must process the fact that they are friends. It is that kind of thing as against anything more specific. It extends from there to how we surface an advertisement from Airbnb in Deputy Dooley's feed because Airbnb has our code embedded on its website. That accounts for the majority of it.

There is also legitimate interest, which reflects the fundamental right under European law to conduct a legitimate business. Some data are processed on that basis.

Certain types of processing must be done on the basis of consent. They would include the processing of a person's religious views, political views and biometric or health data. In our case that would include facial recognition, which is a feature that a user must opt in to. Any Facebook user will have been required to make that decision at some point between early April and 25 May of this year. As that involves the use of a user's biometric data - a facial template is created if he or she agrees to it - we must have specific consent just for facial recognition.

If someone wants to fill out the fields in his or her profile to, for example, say whether he or she supports Fine Gael or Fianna Fáil, or whether he or she is interested in men, women or both, that is considered an indication of political or religious views or sexual orientation. If someone wants to put anything in those fields, he or she must consent to the processing of those data. For those aged between 13 and 15 years, this works in one of two ways: either their parents consent to the processing of those types of data or the service is offered to them without such processing.

I have answered most of the questions. Deputy Bríd Smith asked many questions on, for example, migrants. Ms Cummiskey might address those.

I asked two questions but I have still not received answers to them. Does Facebook harvest data from those under 13 years of age and, if not, does it have a mechanism to filter those data out? Ms Sweeney stated - in her written submission and verbally - that the internal investigation into what had happened with CPL is being carried out by Facebook rather than CPL. Does that mean that Facebook has no confidence in CPL's employees to conduct the investigation? I have asked these questions twice.

Ms Niamh Sweeney

I apologise to Deputy Stanley for that oversight. Regarding the investigation, I actually stated that we are conducting it with CPL. In some instances, CPL would conduct its own internal-----

That is not what is stated in the written submission.

Ms Niamh Sweeney

It is. The phrase "with CPL" is used. Sometimes, CPL might conduct its own investigations. Due to the priority we have attached to this, we are taking the lead but we are doing it together. We accept full responsibility for this situation. Where there were mistakes in the training material-----

The submission states, "This is being led by Facebook, rather than by CPL, due to the extremely high priority we attach to this."

Ms Niamh Sweeney

Yes, and just above that the phrase "with CPL" is used. There is no question of CPL being frozen out. We are not throwing it under the bus. I accept full responsibility for everything of which Facebook should have been aware. As we have mentioned, there will be far greater oversight, including through the secondment of full-time Facebook policy experts to the CPL site for a minimum of six months to oversee everything that is happening at training level.

To answer Deputy Dooley, the training presentations were created by Facebook and modified incorrectly by CPL staff. Modifying those materials will no longer be possible. If there were mistakes in those presentations, we should have been aware of them. Even if someone at CPL incorrectly made the adjustments to the presentations, we should have known and our own audits should have picked up on them. I am not suggesting for one minute that we can outsource responsibility in that regard.

I am not sure what Deputy Stanley means by harvesting the data of people under or over 13 years of age. One must be at least 13 years of age to have a Facebook account. If we become aware of someone under that age having one, he or she will lose access to it.

Does Facebook have a specific mechanism to try to prevent that happening? Ms Sweeney stated that there is a general policy to the effect that anyone under the age of 13 years is not allowed to have a Facebook account, but she also outlined how flimsy the measures to prevent that are. They are anything but 100% effective.

My question specifically relates to data on children. As anyone who has ever worked in advertising knows, the first rule of advertising is to get money out of adults through their children. Is there a specific mechanism to do that?

Ms Niamh Sweeney

To do what?

To filter out data that may be put online by children. Can their data be harvested for use by advertisers?

Ms Niamh Sweeney

The Deputy's question implies that we would be able to filter out certain data collections for underage users. We would never do that because if we became aware of an underage user, he or she would lose access to his or her account. It is as simple as that. As to whether there is a way for us to be sure that someone is not underage without receiving a report or coming across his or her account in another way, it cannot be done without there being much more intensive data collection from every single user. We cannot isolate a child from an adult without knowing what everyone's age is. That is probably a conversation for a different day when other organisations are before the committee. Many people have different views on this.

Can Facebook develop systems to improve its-----

I want to get answers to Deputy Bríd Smith's questions.

Could I also get answers in respect of the matters I raised?

Ms Niamh Sweeney

I will answer Senator Lombard first. He asked about the discrepancy between the 7,500 reviewers and the 20,000 people working on safety and security. He is right in that 7,500 people are involved every day in content review. The remainder of the 20,000 includes people like Ms Cummiskey's team of approximately 15 individuals in Dublin.

It is a global team. There are people on it in Austin, Texas, Menlo Park and India. They do not actively review content; they formulate the policies. Anything escalated by our content reviewers because it is unclear how it should be treated is dealt with by Ms Cummiskey's team. The engineering teams build the systems for the content reviewers in order that they can escalate a piece of content in a particular direction within the company. Other people are working on the artificial intelligence that was built to detect the fake accounts about which we spoke earlier. It is much broader than that, but the focus of all of these 20,000 people, be they on the engineering or policy formulation side, is on safety and security.

On the question regarding regulation and whether we accept the need for further regulation, we do accept it. That is where this-----

The more news Facebook makes and the higher its media profile, the better it is for its profits. The vice president of Facebook made a very brash statement a few years ago which, I understand, is still live and has not been taken down. It is a little bizarre that, despite the company having disassociated itself from the statement, that post remains online. Is it a case of there being no such thing as bad publicity? Even though this has been a media fiasco for Facebook, its profile will be raised and it might even see a bounce in its fourth quarter off the back of it.

Ms Niamh Sweeney

I doubt it.

The point I am making is that the more regulation that is put in place, the less profitable Facebook will be. Is the company conscious of that?

Ms Niamh Sweeney

I do not know whether that is a natural follow-on. We are heavily regulated in particular spaces, data protection being one of them. We accept there is a need for greater regulation in this space, whether it comes first at national level or at European level, but I do not agree that it will be a bad thing, if done sensibly. We welcome that we have been invited to contribute to the detailed scrutiny of the Bills proposed by Deputies Lawless and Ó Laoghaire.

I ask Ms Sweeney to respond to Deputy Bríd Smith's questions.

Ms Niamh Sweeney

I will come back to Senator Lombard's questions on Facebook Live, time permitting. I will hand over to Ms Cummiskey at this point.

Ms Siobhán Cummiskey

On the cross-checking discussed in the "Dispatches" programme, I wish to explain how this works. There are pages that have a large number of followers or a lot of traffic to them. We apply what is known as a cross-check to those pages. This means that before a page is taken down, it is reviewed again to ensure the correct decision is being made. This does not mean that any page receives differential or preferential treatment. We are agnostic as to what those pages are and what they consist of. If they do not violate our policies and they have a large volume of traffic to them, they will undergo a cross-check, which means they are checked twice before they are removed. The "Britain First" page was removed by us last April because it continuously violated our community standards. Everyone is subject to those community standards.

Ms Niamh Sweeney

On the notion that it is in our interests to leave up pages that have huge numbers of followers because they generate revenue for Facebook, this is not the case. Once an account accumulates a set number of violations, the account holder loses access to his or her page or account. This is true no matter who the account holder is. There was no exception made for Britain First and that is the reason it lost its account in March.

I should add that members' accounts are also subject to cross-check, not because of the number of followers they have but because they are public figures.

I am in trouble so.

Ms Niamh Sweeney

That does not mean members' pages are monitored. If Deputy Dooley is reported to us for hate speech, we want to ensure that the reviewer who arrived at that decision is correct because the Deputy is a public figure representing people. We are often accused of left-wing bias and right-wing bias, and we are often accused of both on the same day. We want to make sure that if the account holder is a public figure and the content is public speech, we are getting the decision right, but that does not mean there is a different standard for Deputy Dooley than there is for Niamh Sweeney. It will be the same outcome either way.

Is Ms Sweeney saying that Tommy Robinson's page had fewer than five violations of hate advocacy?

Ms Niamh Sweeney

No. Ms Cummiskey can speak more to the exact number of violations. I am not sure that we reveal that number because it tells people how to game the situation. Tommy Robinson knows where the line is drawn when it comes to hate speech. He is very good at gaming it and staying just about on the right side of it. When Britain First lost its page, it was because it hit the limit on the number of violations permitted.

The limit is five.

Ms Siobhán Cummiskey

It is not necessarily five; it differs in different circumstances. A known terrorist, for example, would be removed immediately without any strike. In some cases, if a person uploads child exploitation imagery, it is one strike and he or she is out. There is no single, set limit on how many times a person can violate the policy. It depends on the nature of the violation. An egregious violation can result in a person's entire page or profile being removed.

That raises very interesting questions about how the company formulates policy.

Ms Siobhán Cummiskey

It does and we have a real responsibility to get our policies right. That is why we consult on them and do not create them in a vacuum. We consult others on whether the lines are in the right places. We also offer an appeal; therefore, if a page or a profile is removed, people will have had the opportunity to appeal. That has been the case for many years. In April we announced that by the end of the year we would roll out post-level appeals. Therefore, if a post or a photo is removed in certain categories of content, it will be possible to appeal the decision wherever the person is in the world.

I cannot corroborate it, but I cannot imagine there would not have been many complaints about the Britain First page before it was taken down. Coincidentally, it was taken down when some of the group's leaders were sent to prison. I will leave it at that, but I will check it out. I am sure the content was more egregious than what would have been allowed for in the company's policy.

Were there other questions to be answered?

Ms Siobhán Cummiskey

We removed the Britain First page once we noticed it had continuously violated our policies. We did not leave any content on it that violated our policies. We would not do so. In circumstances where something is reported to us, we review the report against community standards and remove the content, no matter who the user is, if it violates community standards.

Ms Niamh Sweeney

The page might stay up, but the particular violating content would be taken down. Once the threshold is hit, the whole page comes down.

The Deputy asked a question about some of the measures we are introducing. Some of them are new, while others are being continued and stepped up; we said this in a couple of cases. The twice-weekly checks were already in place, although perhaps not at the same frequency or with the same level of oversight by Facebook staff seconded to the CPL centre. Members asked how often there were spot tests. Spot testing will be stepped up considerably and its frequency will increase greatly. We will also look back at the audits for the past six months to see whether we missed anything. The overall level of oversight will change.

I am not aware of a specific meeting with the Israeli Government. We regularly meet governments all over the world, including the Irish Government, as well as departmental officials, elected representatives and Opposition spokespersons. That would not be unusual. Legislation that certainly was not favourable to us passed through the Knesset a couple of weeks ago. Nobody on either side is happy, if that is in any way reassuring for the Deputy. It is not that we favour one side over the other; both sides have issues with our platform.

I can prove that is not the case as there is an anti-Palestinian post every 46 seconds which is left up, but more than 90% of all Palestinian pages are removed. That is not treating both sides the same. Clearly, there is a preference for the powerful and the elite over others, as many studies or publications demonstrate.

Ms Niamh Sweeney

I can promise that that is not the case. No preference is shown to governments or the powerful elite. I do not have any of the statistics referenced by the Deputy or comparisons from the other side, but I can promise that our policies are applied evenly across the board.

Perhaps we might move to the rest of the questions asked, including those about Facebook Live.

I asked a question about CPL and outsourcing. Is it at the heart of the problems?

Ms Niamh Sweeney

No. Outsourcing helps us to fix the problem in the sense that we have hired 3,000 people just to review content in the past year. We are hiring massively across the company. When I joined in 2015, there were 12,500 staff. We are now close to 30,000 and the number is growing rapidly all of the time. If Facebook's internal human resources function were to try to hire 3,000 people for the purposes of moderation, it would greatly slow the rate at which we could hire. Millions of CVs come to the company every week. Outsourcing means that we can scale rapidly in one region if there is an uptick in reports relating to it. It gives us much greater flexibility.

The problem is that the programme clearly identified gaps between our policies, how they are being executed and how training is delivered. We need to fix that. I would be the first to admit that it is not a perfect system.

On the question of a sectoral tax, I do not know that sectoral taxes ever worked particularly well. We invest heavily in resources to address issues like bullying, self-harm and all of that. Later on this year we will make a sizeable announcement, with a local partner, about an anti-bullying programme. There will be more information on that after this summer.

What about the Facebook Live issue?

Ms Niamh Sweeney

Senator Lombard is right that it is difficult to monitor the issue. Most of the time Facebook Live is used in a good way and highlights great injustices. Most people became aware of Facebook Live when somebody in the US began filming after their partner, who was in the driver's seat of their car, was shot by law enforcement. It can be seen how Facebook Live is a useful and, in some ways, powerful tool.

One use for which we were criticised early on, and where we had to make serious changes with the engineering capability again brought to bear on the matter, was the rare but real case where a person started a Facebook Live broadcast and gave a clear indication that they were about to take their own life. It was not possible at that point for our reviewers to come into a live feed midway through and look back at what had happened from the start, because it is not always obvious; one might be presented with a certain scene without seeing all the parts of it. We had to reconfigure the technology drastically so that people could alert us to an imminent threat of self-harm.

People often ask why Facebook leaves a video up when a person is clearly showing that this is where he or she is headed with it. We do not make our policies in a bubble. Ms Cummiskey's team consults very widely and globally because there are regional differences in views on how to approach these issues. Among the very clear feedback we have received is that while a person is still filming, he or she has not yet acted, which means there is still a chance for intervention. Therefore, if we shut off a feed while a person is in the middle of a video where he or she has clearly indicated that is where he or she is headed, that can almost be more damaging in itself when there is still a chance for intervention. In fact, there have been many cases where somebody has reported a video while the Facebook Live broadcast has continued, which means it becomes a high-priority report that is reviewed much more quickly than other content, law enforcement has been contacted and people have been saved as a result. The Senator is right that it is a very challenging space. We do not monitor what happens; we rely on user reports. That is one of the ways we try to deal with it.

I thank Ms Cummiskey and Ms Sweeney for their time and engagement, which have been very good. We look forward to engaging with Facebook and the other platform providers on the Digital Safety Commissioner Bill 2017. I ask that Facebook give us the results of its investigation and reviews as a follow-up to this committee hearing and that it convey the progress it has made. We will do our own work as a committee, and we will proceed with legislation and regulation. We would also like a follow-up from Facebook on the progress it has made in terms of self-regulation.

I propose that the committee publishes the opening statements and the submissions received on its website. Is that agreed? Agreed.

The joint committee adjourned at 3.10 p.m. until 2 p.m. on Tuesday, 4 September 2018.