
Joint Committee on Communications, Climate Action and Environment debate -
Tuesday, 17 Apr 2018

Online Advertising and Social Media (Transparency) Bill 2017 and the Influence of Social Media: Discussion

I draw the attention of witnesses to the fact that by virtue of section 17(2)(l) of the Defamation Act 2009, they are protected by absolute privilege in respect of their evidence to the committee. However, if they are directed by the committee to cease giving evidence on a particular matter and they continue to so do, they are entitled thereafter only to a qualified privilege in respect of their evidence. They are directed that only evidence connected with the subject matter of these proceedings is to be given and they are asked to respect the parliamentary practice to the effect that, where possible, they should not criticise or make charges against any person, persons or entity by name or in such a way as to make him, her or it identifiable.

Any submission or opening statement made to the committee will be published on the committee's website after this meeting.

Members are reminded of the long-standing parliamentary practice to the effect that they should not comment on, criticise or make charges against a person outside the House or an official either by name or in such a way as to make him or her identifiable.

I remind members and witnesses to turn off their mobile phones or to turn them on flight mode as they interfere with the sound system.

I welcome our witnesses today. I propose that the main witnesses speak for no more than five minutes.

I ask witnesses to please indicate in advance if they wish to share speaking time. Is that agreed? Agreed.

I invite Deputy James Lawless to give a brief overview of his Bill. I would appreciate it if he could keep his contribution to five minutes.

I welcome Mr. Kaplan and Ms Sweeney from Facebook and thank them for joining us today. As this is the first hearing of the scrutiny of my Online Advertising and Social Media (Transparency) Bill 2017, I shall briefly outline the legislation for the benefit of committee members and all interested parties before moving on to the Facebook engagement.

The Irish Electoral Acts were written largely in 1992 and have seen little update since. All of that happened before the existence of the Internet and broadband, and certainly before social media. As such, the Acts do not cover the online space at all.

The Broadcasting Act is an anomaly in the sense that political advertising on television and radio is completely prohibited. We have governance on paper, materials, election posters and a complete ban on television and radio advertisements yet a total vacuum exists in the online regulatory space. The Broadcasting Authority of Ireland, BAI, has said that online advertising and social media advertising is out of scope. The Standards in Public Office Commission, SIPO, has said that it does not have a role under the current legislation. The Data Protection Commissioner has an interest in personal data but not in political online campaigning. Also, the Referendum Commission of Ireland, for the forthcoming referendum, has said that online campaigning is out of scope for it as well. We have a situation where the Internet remains like the Wild West electorally.

The reason the funding of campaigns has always been regulated is the fundamental principle that democracy should not be available to the highest bidder. We have expenditure limits, campaign monitoring, governance of fundraising and a ban on television advertising. All of that subscribes to the basic principle that there should be as level a playing field as possible and that anybody can compete in politics at the entry level. Again, none of this applies online.

As we have seen in recent elections around the world, such as the US presidential election, the Brexit referendum and contests in many other jurisdictions - most notably in the Cambridge Analytica case - it has been possible to enter the online domain, spend an awful lot of money and disproportionately affect the outcome of elections using very sophisticated techniques. It is important that we move to close that gap in Ireland. My Bill passed First Stage in the Dáil and Second Stage before Christmas. It seeks to do a number of things to regulate the sector. The Bill embraces online advertising and views it as a good thing; it sees political online advertisements as democratising to some extent, given the low barrier to entry. It seeks to put transparency front and centre of any political advertising. It requires a transparency notice, which means the advertiser must state who they are, identify the publisher, identify the sponsor and state whether a targeting algorithm has been applied. There is a ban on bots. Where multiple fake accounts artificially promote a particular viewpoint, bots can bombard somebody to make something appear unpopular or, conversely, amplify something to make it appear very popular. Bots skew online discourse, which has a negative impact on democracy.

When I was preparing my Bill I engaged with many civic society groups, including the National Union of Journalists, the Irish Council for Civil Liberties, Amnesty International, Uplift and the Transparent Referendum Initiative. All of these organisations broadly support my Bill and view its measures as very important.

I thank Facebook, with which I engaged at the outset last year on this legislation. I have found it to be fair, co-operative, collaborative and very keen to see the legislative measures implemented. Twitter, Google and the other social media platforms have also been involved.

The Bill is needed; there is a broad consensus across the technical and civic society spheres on that point. Unfortunately, the Government remains ambiguous about the legislation. I hope it will come around, and I think it will. The parliamentary legal advice has deemed the Bill to be relatively solid and, with some amendments, it will be tidied up. We just need to get on with the process. Today is our first engagement and I look forward to the session ahead. I know that the Clerk to the Committee will arrange further engagements with other witnesses over the coming weeks and months. It is a matter of when, not if, the Bill becomes law.

Again, I welcome the delegation from Facebook and I look forward to the engagement.

I welcome Mr. Joel Kaplan, Vice President of global policy at Facebook. He is accompanied by Ms Niamh Sweeney, who is the head of public policy with Facebook Ireland. I invite him to make his opening presentation.

Mr. Joel Kaplan

I thank the Chair and members of the committee for asking us to be here today to talk about the events that have come to light in recent weeks and the steps that we are taking to address them. Ms Sweeney and I are accompanied by Mr. Gareth Lambe, Vice President and Head of Facebook Ireland, Mr. Richard Allan, Vice President for policy in Europe, the Middle East and Africa, or EMEA, Ms Eva Nagle, regulatory counsel and Ms Claire Rush, content counsel. Ms Sweeney, Mr. Lambe, Ms Nagle and Ms Rush are all based at our international headquarters located in Dublin.

I shall start by echoing what has been said by our chief executive officer, Mr. Mark Zuckerberg. What happened with Cambridge Analytica represents a huge violation of trust, for which we are deeply sorry. We now serve more than 2 billion people around the world who use our services to stay connected with the people who matter most to them. We know we have a responsibility to the Facebook community and that people will only feel comfortable using our service if their data is safe.

As our second largest office globally, Facebook Ireland plays an important part in providing that service, with more than 2,500 people working across multiple teams at our international headquarters located in the docklands of Dublin. As Facebook has grown, people everywhere have gotten a powerful new tool to stay connected to the people they care about, make their voices heard and build communities and businesses. It is now clear that we did not do enough to prevent these tools from being used for harm as well. We did not take a broad enough view of our responsibility, and that was a mistake.

I will not repeat the detailed timeline in my written statement but I will highlight some key points. In 2014, we made some big changes to the Facebook platform to restrict the amount of data that developers can access and proactively review the apps on our platform. This means that today a developer cannot do what Dr. Kogan did four years ago. In 2015, when we learned that Dr. Kogan had shared the data we banned his app from our platform and demanded that all improperly acquired data be deleted. Last month, when we learned from the media that Cambridge Analytica may not have deleted the data as it had certified, we banned the organisation from using any of our services.

The people impacted by this matter were predominantly in the United States. However, as members will have heard from the Data Protection Commissioner, Ms Dixon, we understand that 15 people in Ireland installed Dr. Kogan's app. Also, up to 44,687 people in Ireland may have been friends with someone who installed the app and, therefore, may have been affected. We are notifying all of those people. We have a responsibility to make sure that what happened with Dr. Kogan and Cambridge Analytica does not happen again. As a result, we are limiting the information that developers can access using Facebook login and we are putting additional safeguards in place to prevent abuse. We are also in the process of investigating every app that had access to a large amount of information before 2014. If we find that someone is improperly using data then we will ban him or her and tell everyone affected. We are making it easier to understand which apps people have allowed to access their data and turning off the access to information on any app that they have not used in 90 days. We have expanded Facebook's bug bounty programme so that people can also report to us if they find misuses of data by app developers. We know that there is a lot of work to be done and that this is just the beginning.

Over the past four weeks we have co-operated with Ms Dixon, as our lead regulator, and her office as they have sought to establish what happened and how Irish and EU users may have been affected by Cambridge Analytica. We have shared detailed information with that office about our past and current practices. We continue to engage with them on same. To be frank, the Data Protection Commissioner has been critical of us in recent weeks. We recognise and understand those criticisms. We could and should have done better when responding to concerns. We are committed to doing better going forward.

On 25 May, the General Data Protection Regulation will take effect. Under the regulation, Ms Dixon and her team will become our lead supervisory authority. Transparency is a fundamental requirement of the GDPR. Our teams have worked hard, for the past 18 months, to make key changes to our products that will give our users greater control over their data and visibility as to how they can exercise that control. We take the GDPR very seriously. It places additional standards on companies, which is a good thing. We need to rebuild trust with our users and compliance with the GDPR is critical to that. We have been working with the Data Protection Commission on our final products. We welcome the time invested by that office in an ongoing engagement with Facebook and the feedback that we have received on our GDPR implementation plan.

Finally, I will comment on the Online Advertising and Social Media (Transparency) Bill 2017. I know that Deputy Lawless met our team when he first proposed his Bill. We very much appreciate that he sought to engage with us on his legislation from the outset. We fully understand and appreciate what his Bill seeks to achieve and we are aligned with its goals. It mirrors, in a large part, what we are trying to achieve with the new advertisement transparency tools that we have announced. We agree that when it comes to advertising on Facebook people should be able to see all of the advertisements that a page is running. Also, when it comes to political advertisements, all advertisers should be verified and any advertisements that they run should be clearly labelled to show who paid for them. We are working hard to build out these transparency tools and roll them out globally. However, it takes time to do so and, most important, it takes time to get right. I am able to report the following to the committee today.

As of 25 April, we will add Ireland to our pilot programme for the first phase of our efforts on transparency, namely, the "view ads" tool. This will enable Irish users to see all the advertisements every advertiser is running on Facebook at the same time. We hope it will bring greater transparency to advertisements running in the context of the forthcoming referendum on the eighth amendment.

As I am sure the members will have comments and questions, I will stop there. I thank the committee for inviting us to be here.

I thank Mr. Kaplan for his contribution. I will begin with my own questions. Mr. Kaplan can make a note of the questions as we go around the members of the committee before returning at various intervals.

I posed a question to the Data Protection Commissioner about this issue earlier. Mr. Kaplan stated that in 2014, Facebook said it was changing its platform to significantly limit the information apps could access based on recommendations from the data commissioner. Those recommendations were made in 2012. Can the witnesses explain why Facebook delayed putting those changes in place for two years?

Mr. Kaplan also stated that in 2015, Facebook learned from The Guardian that Dr. Kogan had shared data from his app with Cambridge Analytica. It sounds as though it very much appreciated that information. This week's edition of The Economist states that Facebook threatened to sue The Guardian if it published the story. That sounds as though Facebook valued its corporate reputation over users' privacy rights. Will the witnesses comment on this?

Is it the case that Facebook only took the matter seriously after The Guardian published its story and the company came under intense public scrutiny?

Mr. Kaplan stated that in 2015, Dr. Kogan shared the data from his app with Cambridge Analytica so Facebook banned his app from its platform and demanded that Dr. Kogan and all entities to which he gave the data, including Cambridge Analytica, confirm they had deleted the data. Did Facebook check or confirm that this had happened? The witnesses say that only last month, Facebook learned that the data may not have been deleted. Can the witnesses confirm that these data were not shared with other parties prior to their deletion, if indeed that occurred?

I thank Mr. Kaplan and Ms Sweeney for their attendance.

I will begin with a couple of comments. I would like the witnesses to address a number of matters arising from the written submission provided to the committee before the meeting. It notes that "As our CEO explained last week, Facebook is an idealistic and optimistic company" and then goes on to state:

it's clear now that we didn’t do enough to prevent these tools from being used for harm as well [as good]. We didn’t take a broad enough view of our responsibility, and that was a mistake.

It is as though the submission is looking for the fool's pardon, as we say in Ireland - that Facebook has been somewhat naive, is almost the victim and has been taken advantage of by these app developers.

I will read into the record a memo with which the witnesses will be familiar. It was circulated internally in their company by one of its senior vice presidents, Mr. Andrew Bosworth. I will quote several sections of it rather than go through it in detail. It states:

We connect people ... So we connect more people. That can be bad if they make it negative. Maybe it costs a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools. ... The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is *de facto* good. It is perhaps the only area where the metrics do tell the true story as far as we are concerned.

... That’s why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in. The work we will likely have to do in China some day.

Later, it goes on:

The best products don’t win. The ones everyone use win. ... We do have great products but we still wouldn’t be half our size without pushing the envelope on growth. Nothing makes Facebook as valuable as having your friends on it, and no product decisions have gotten as many friends on as the ones made in growth. Not photo tagging. Not news feed. Not messenger. Nothing.

I am sure the witnesses are familiar with the latter; it has been well circulated in the media. I accept that it was written in 2016, two years after the 2014 watershed to which the witnesses referred as addressing many of the concerns. When did Mr. Bosworth leave the company or when was he fired? Is he still with the company? It seems to me that if one reads that memorandum, it speaks to a DNA within Facebook of pushing the envelope of growth over everything else.

Facebook makes much of the fact that users own their data. Perhaps the witnesses can assist me, but my understanding is that any data I put onto my page is something I own and over which I have control. However, there is a second set of data. The Data Protection Commissioner confirmed earlier that Facebook tracks individuals when they leave its site, so there is another dataset; there is something that it captures on a user's activities once he or she leaves the Facebook environment. Who owns that data? Where is it stored? Can the witnesses provide me with my data, or users' data generally? Are both sets of data made readily available?

We have been told that people are tracked after they leave Facebook. Do the witnesses think that the company should continue to be allowed to do that? Is it a practice that should be regulated, and one that we should look at preventing? I raise this issue with the witnesses because they are here, but I accept that Facebook is one of many and others effectively do the same thing. There is a marketplace and companies need to go with the flow.

There was a discussion earlier about opting in and opting out. Deputy Eamon Ryan focused on the issue. We accept that the company provides the Facebook service through advertising rather than a paywall, that it must make money and that the accounts have to stack up. From now on, however, should there be a clear opt-in window for any service that allows me to access all that information for free, so that I know what I am opting into - where my data goes, what data goes, what it is likely to be used for, what silo it will go into and what might happen to it in future? That would bring transparency into the interaction between user and company. Then we would not have the concern about whatever nefarious activities are going on beneath the bonnet.

I will ask the witnesses to respond before bringing in other members.

Mr. Joel Kaplan

The Chairman asked about the delay between the recommendations made in 2012 and the action to lock down the platform in 2014. The bottom line is that we took too long. That is one of the mistakes we made. However, it is also important to put it in context. The Data Protection Commissioner spoke about this at some length. The audit we underwent in 2011 and the follow-up in 2012 were quite comprehensive. We received something like 60 recommendations that required a response from Facebook and, in many instances, significant modifications to our service. This was one of the issues raised. We did provide some additional education and changed the location of the controls that allow users to share their information with an app developer, but we ultimately concluded, after around 18 months, that we should just turn that capacity off. I wish that we had come to that conclusion sooner but, as the Data Protection Commissioner noted, it was something of an iterative process.

The next question related to what action Facebook took after the article published by The Guardian in 2015. We moved immediately to ban Kogan's app from the platform and to demand of him and anyone he gave the data to that they delete it.

We were told relatively quickly by Cambridge Analytica that it had complied and deleted the data. However, the Chairman has hit immediately on another place where I think we did not do as well as we should have, which was that we took their word for it that they had deleted the data and we should not have done that. We should have pursued that with an audit, which we are undertaking now as soon as the ICO permits us, and we should have notified the people who were affected. We are now taking all of those steps, although I wish we had taken them in 2015. We are taking them not only with respect to Cambridge Analytica, but we are going back and looking at all of the apps that are similarly situated, meaning they had access to large amounts of data prior to the change to the platform in 2014. We are now going back and looking at all of those apps, seeing if any of them are engaging in any type of suspicious activity. If they are, we will conduct a full audit, and if they have misused our data, we will tell the people affected.

I hope that answers the Chairman's questions about whether we ever confirmed or checked in 2015. The answer is that we did not. We received assurances and we should have checked.

I expect Facebook will be doing this in the future. From the public's point of view, the issue is to know their data is protected. This is not just a retrospective review or audit. The public who use the Facebook platform need to be assured that their data will be protected and that Facebook will put in these safeguards.

Mr. Joel Kaplan

That is exactly right. We are doing that with all of the apps that were in the same position as Kogan's app. We have also taken additional steps in recent weeks to further lock down the access by developers to the platform so we do not recreate this kind of problem in the future.

On the first question about Mr. Zuckerberg's comments and mine in my written statement, we do not believe we are a victim here. We believe we have a broader responsibility to the people who use Facebook. We made mistakes, as the Chair and I were just discussing. We want to make right on the mistakes we made with respect to this app but also go further and give the people who use Facebook the confidence that their data is safe. No, we do not believe we are a victim in any way here.

On Andrew Bosworth's memo, Mr. Bosworth has not left the company and has not been fired. His memo did not reflect the company's values, and Mr. Zuckerberg has spoken directly to this. Mr. Zuckerberg did not agree with it then and he has been very clear that he does not agree with it now.

With respect to online behavioural advertising, or rather the data that is collected when somebody is logged out or is not a Facebook user, the Deputy is exactly right that that is how the modern Internet works; it is the architecture of the Internet. When someone goes to a website, they will find present on that website code from a number of different sites - it could be Google, Twitter, LinkedIn or Facebook. When someone is logged out or is not a Facebook user, the data that is transmitted back to us from that code is quite limited: mainly the IP address, the type of browser the person is using and things of that nature. For non-Facebook users, we do not use that data for advertising purposes, we do not associate it with an individual, unique user ID and, in most instances, we do not store it for more than a few days.

We are talking about data in regard to a Facebook member rather than anyone else who might access a site. When a Facebook member accesses a site other than one within the Facebook platform at a particular time, some data is created which is connected to them through their Facebook connection. Is that data stored, with the profile of that individual, in perpetuity?

Ms Niamh Sweeney

I think this speaks to Assistant Data Protection Commissioner Neary's description of how these interests or interest-based ads work. The Deputy is right that a person owns all of the data they share with Facebook and they can take it back at any time or delete it. That is something I try to stress every time we meet with stakeholders. There is also the additional data of the interest buckets that would be appended to a person's profile.

A user can download his or her information; in other words, everything he or she has ever posted or shared on Facebook is sitting in one place. The user can download it and, under the GDPR, port it to a different service, which is a new requirement, or can simply look through it and see what is there.

There is also the matter of a person's ads preferences. That now sits within a new settings area on our platform given that, again, one of the criticisms we have faced in the past, and on which we accept we need to make improvements, was that there were controls and settings but they were spread across the platform. We have now brought them all together in one place. In the same place where users download their information, which is everything they have ever shared with us, they will also find their ads preferences. They can also access these from every single ad they ever see on the platform. The preferences show the user the interests we have appended to his or her profile, based on some of his or her activity off-platform if he or she has not turned off third-party data. The user can change those, given that we might get it wrong. We get stuff wrong all the time. If we say that a user is interested in Manchester United and fashion, he or she might say that he or she is really interested in fashion but not at all interested in Manchester United - that is just a guess.

Perhaps both. Taking that chunk of data, in Facebook's opinion, who owns the data, that is, the data Facebook has collected in tracking me rather than the information that I have voluntarily posted? This is tracking data as much as anything else.

Ms Niamh Sweeney

It is what is referred to as observed data. If the user decides that he or she does not want his or her account to exist any more and he or she wants to take it away, that all goes with it and we do not keep that. In a sense, the user still owns it because if he or she decides to shut down his or her account, it is gone.

Facebook provides that where somebody requests it.

Ms Niamh Sweeney

We provide it in the user's ads preferences in the sense that it is not part of the download of his or her information because some of it would be commercially sensitive, let us say, in terms of how we bring that together. However, it is accessible through the ads preferences and the user can see all of the preferences that we have appended to his or her profile.

What about discussing that commercial sensitivity for a minute?

Ms Niamh Sweeney

I am not an engineer so I am not going to speak to something where I can only speak in theory rather than technical-----

That is something Facebook can provide to the committee.

One of my questions was not answered, namely, that relating to what appeared in The Economist this week in respect of the fact that Facebook threatened to sue The Guardian newspaper if it published the article. Can the witnesses confirm that?

Mr. Joel Kaplan

My apologies for not getting to that one. My understanding is that our lawyers objected to a certain characterisation that The Guardian reporters were pursuing and it was with respect to a particular characterisation that they made that threat, not with respect to whether it should publish an article at all.

Mr. Kaplan is saying that Facebook did not want The Guardian to publish this article and it threatened to sue. Can he expand on that?

Mr. Joel Kaplan

Sorry, I did not mean to communicate that. What I meant to communicate is that they objected to a particular characterisation in the article but not the article itself.

Therefore, Facebook had no issue with The Guardian publishing an article which stated that Dr. Kogan had shared data from his app with Cambridge Analytica.

Mr. Joel Kaplan

I think the issue was the characterisation of the data breach as a legal issue and our lawyers disagreed with that. The important thing is, as Mr. Zuckerberg testified and has stated any number of times, this was a breach of trust under any circumstances, and we believe our responsibility is to prevent that type of breach from happening.

I thank Mr. Kaplan.

My second question related to the opt-in piece.

Mr. Joel Kaplan

While the Deputy can correct me, I think the essence of his question will be reflected in the GDPR and our compliance with it. Obtaining the type of consent required for online behavioural advertising and targeting is an important part of the GDPR's requirements and our intent is to comply with that. I am hopeful that practice will now evolve quite quickly in the direction the Deputy laid out.

Senator Tim Lombard took the Chair.

I thank the representatives from Facebook for attending. The questions that arise point to a situation whereby we have, in the past decade, come into contact with a whole new world.

To date, there has not been much regulation in this area. It has been a case of trial and error and there have been abuses. It is a bit like the Wild West without the sheriff. Now, the sheriff has arrived and has to try to get a grip on what is happening and to protect people.

I was shocked when I read that the information of 87 million users had been utilised. In the case of Ireland, just 15 people tapped into that app, yet it affected over 44,000 citizens here. That just goes to show the ripple effect. In reality, the investigation came about because of the media in Britain and not as a result of any internal process that Facebook had in place. I disagree with self-regulatory structures, but there was some onus on Facebook to take responsibility for the fact that it was holding all this data belonging to people. Could the witnesses clarify the position in that regard?

On the commodification and collection of information and its use, Facebook's business model is based on advertising and collecting data in order to be able to match people up to advertising. Is there not always a danger that the temptation is there to abuse this, particularly in the political context? I refer here, for example, to the most recent US presidential election or, a matter of some concern to this island, Brexit. We are facing into a referendum on what is, to put it mildly, a fairly hot topic. There is huge concern regarding the manipulation of information in that regard. The witnesses mentioned that they are about to make some changes. I ask them to make said changes, particularly those relating to advertising, tomorrow. Who pays for advertising? If I pay for an advertisement in the local press or wherever, it is identified straight away. If I put up a political poster on a lamppost, the identity of the director of elections must be listed at the bottom. That is a very simple thing. I implore Facebook to do something similar. People may argue about whether it happened in the context of Brexit or in the US. We all have our own views about it at this stage. There certainly is strong evidence to show that it did happen but the jury is still out. This country is facing into a referendum at a time when 2.5 million citizens on this part of the island alone access Facebook each day. Almost 3.5 million people throughout the Thirty-two Counties access Facebook. The witnesses might clarify matters in this regard.

We need to develop a body of law in respect of social media in order to protect the right to privacy and to protect people in the context of the huge power that corporations possess. In particular, there is a need to protect freedom of expression. In terms of the witnesses' company, how firm is it in its objection to very strict laws being introduced in individual states? What happens on the other side of the Atlantic may be one thing, but what happens on this side may be quite different.

Fake accounts are a real problem. If I seek to insure my car, I have to show that I am the owner of the car. If I go to tax the car, I have to do the same thing. If I open a bank account, I have to show proof of identity. This is all very basic. I am aware of a few fake accounts. Most of us involved in political life have been the victims of fake accounts at one time or another. Social media can be a very good platform for citizens to participate. However, there is huge abuse of it. I am aware of some fake accounts that are putting up information about people who are in public life. There is absolutely no protection. Related to that is the whole question of the stage at which Facebook intervenes and says that what has been put up about X or Y is beyond the pale and that it is going to intervene and take it down. In fairness, people do not mind a bit of robust political exchange but when the latter oversteps that mark, it takes away from the benefits of social media.

I thank the witnesses for their presentation. Facebook is a very profitable global corporation that has a huge say in respect of social media and with regard to how people think and communicate across the globe. The company has a community, as the witnesses call it, of over 2 billion. Its headquarters are here in Dublin, presumably because it is able to make use of transfer pricing and various mechanisms to offset its tax against, for example, research and development. Ireland has been known for helping global corporations to understate their real profits. A company will already have priced away some of its profit, portraying much of it as expenses, and it pays a great deal less tax than would be the case in other countries. Having its main office in Dublin is really important to Facebook in the context of being able to avail of all of this.

Facebook has made massive profits out of people's personal data. While the witnesses have said that they are sorry and that mistakes have been made, there is a big list in front of them regarding all the matters about which they have been asked by the Chair and other members as to why or how particular things happened. It is very difficult for political representatives such as ourselves to trust the witnesses, particularly in the context of both the company's record and the sudden Damascene conversion that has occurred. Our guests have indicated that the company is not going to allow certain things to happen again, that it will ensure that there is full transparency and that data will never be used or abused in the way it has been in the past. It is really difficult for any of us to accept that at face value.

Although it was not in their original submission, I welcome the fact that the witnesses have announced that Facebook will roll out the transparency tools to users in Ireland on 25 April next. The implications of the misuse of people's personal data in the political context of the referendum on the eighth amendment of the Constitution and abortion rights could still be very big indeed. Advertising companies can prey on people's attitudes and their sexuality. There is a great deal of information that people still do not realise Facebook and others can gain about them just because they are advertising stuff. Political advertising in this regard is totally unregulated at present but it will be regulated in the very near future, just in time for the referendum on 25 May. I welcome the fact that Facebook is going to allow transparency tools to be used in Ireland and I think it should do so tomorrow rather than on 25 April. What is happening now indicates that the company recognises that there is a problem with political advertising and the way data is used to manipulate people's political ideas and responses, particularly if they respond to shocking images, are sensitive to certain images, etc. I would like the witnesses to comment on that matter. Do they recognise it as a problem? Why did the company suddenly decide to allow the transparency tools to be rolled out - there is no mention of this in the original submission - a few weeks prior to the referendum? What changed its mind? Has Facebook been coming under pressure? Why has it decided to roll out the tools not just in Canada but also here?

Many of the questions I was going to ask have already been posed. It should be noted that our concern is that a former key employee of Cambridge Analytica is being used by the "No" side of the referendum campaign to do political advertising for them. That has implications and gives rise to concern regarding the independence of that campaign.

At the outset, if the Chair will indulge me for 30 seconds, I want to congratulate the colleague on my right, Deputy Lawless, on his excellent and timely legislation. In the context of this discussion, it is extraordinarily relevant.

I welcome our guests. I note at the outset that during the celebrated hearings in the USA, when the witnesses' CEO Mark Zuckerberg was asked by an Illinois Democrat if he would want the name of the hotel he stayed in the previous night to be made public, he said that he absolutely would not. Reasonably so. That captured the difficulty that people face and how people feel. It is a very serious issue. My questions will be comments as well as questions, but they are nonetheless relevant. None of us should fail to express this important view today. The use of data to manipulate or distort democracy has horrendous consequences. We can see the failure or breakdown of democracy, the shocking things it has led to in recent times in Europe and the absence of democracy in so many parts of the world. This leads to awful horror. To break down or threaten democracy, even by these subtle and sinister methods, is of horrendous seriousness. Our guests have an extraordinary responsibility in that regard, given their access to 2 billion people.

I share the point raised by Deputy Smith. It is very important. I know the witnesses have said that Facebook will put in protections but I would like them to comment on that again. We want an open, transparent and sensitive debate in the upcoming referendum. Everybody aspires to that. However, we need absolute assurance, and the people watching this today need to know, that there will be no distortion there. The witnesses must convincingly say to us that there will not be. I ask them to explain again, in layman's terms for those of us who are less technologically proficient, precisely how that will be achieved and how people will be so assured.

Again, the same applies to future elections. I understand that the commercial premise on which Facebook functions is that it provides names and information, and thereby potential access for advertisers. That is how our witnesses pay their bills. That is a reasonable proposition. At the same time, the fact that this could be distorted or used wrongfully is not acceptable. Can the witnesses tell me how Facebook allows people to ensure, in an accessible and easily understood way, that the use of their data is prevented? I do think that we should have the option to opt out. That point has been raised. The witnesses have said that, in light of the general data protection regulation, GDPR, Facebook will be going down that road. It is very important for people to have that right.

I want to mention fake accounts. I was personally a victim of this during an election campaign, and I know very few colleagues who were not. Thanks be to God I had the strength of mind and the capacity to deal with it and not be affected, but I know some colleagues on whom it had much more adverse effects. This concerns colleagues across the board. It does not relate to any ideology or party. This is about people and our human sensitivities. The use of fake accounts is a horror, and it really needs policing. People watching today need real reassurance on that score.

They also need real reassurance that if any advertising is directed at them they know where it is coming from and why, especially in the political case. This issue was mentioned earlier when the Data Protection Commissioner was here. In the case of posters, there are names and sources displayed, and the director of elections is written on the poster.

In the case of radio and television, political advertising is not permitted under the Broadcasting Acts. It is a shocking difficulty. Nobody would suggest for a moment that Facebook should not do business. We understand that the witnesses should do their job, that they have to pay their bills and that the company has to function. That is not the issue. The issue is how that process works out in practice.

Mr. Joel Kaplan

As each of the last round of questioners at some point touched on elections and fake accounts, maybe I will start there and hopefully will cover all those concerns. The changes that we are making with the "view ads" tool are the first phase of the transparency tools that we have announced. In response to Deputy Bríd Smith, I certainly understand the frustration and the desire that those should be out as quickly as possible - we would make them available tomorrow if we could. These things take time to develop and implement.

We are accelerating the deployment of the "view ads" tool in Ireland as quickly as we can. It is not going to roll out globally until some time later this spring or early summer, probably in mid-June. Because of the results in Canada, and the fact that the tests there had progressed to such a point that we had confidence we could deploy it here a month before the referendum and do it in a way that it would work, we made the decision only in recent days to accelerate and include Ireland as part of the pilot programme. Ireland will be the second and only other country where we roll out the tool before the global deployment.

Overall, we certainly appreciate the concerns about interference in elections, distortion of the democratic process and the role that fake accounts can play in that process. We learned a lot as a result of the 2016 US elections and Brexit. We have devoted tremendous resources, in terms of both personnel and development of technology, to addressing some of the concerns that manifested themselves in those elections. In 2018 we are doubling the number of people dedicated to safety and security from 10,000 to more than 20,000, with a big focus on election integrity. We are investing in artificial intelligence tools, in particular ones that can help us identify and remove fake accounts before they ever appear or have a presence on the site.

There have been a number of big elections since the US election including the French election, the German election and the election in the US state of Alabama. In each of those elections, we put together an internal task force with representatives across the company that was dedicated to doing everything we could to detect interference, in particular foreign interference. We have had good results. In the French election, we were able to detect and remove tens of thousands of politically motivated fake accounts. We had similar good results in Germany. In Alabama, which was a very controversial and hotly contested election in the United States, we were able to deploy the same type of task force, and identified a number of Macedonian fake accounts. We believe they were more financially than politically motivated but we removed those as well.

We stood up a task force like that for the upcoming referendum. We will be deploying the artificial intelligence technologies we have developed to try to detect those fake accounts as quickly as we can. We detect literally millions a day and remove them before they are ever seen on the site. We need to continually improve that capability and make the investments to do so, and that is what we are doing.

Ms Niamh Sweeney

There are two types of fake account problem. There is the one that Mr. Kaplan has just described, the kind of activity we discovered subsequent to the 2016 US Presidential election, which is usually one person or a number of people creating a series of fake accounts that all operate in tandem. The way our technology works in identifying those is that usually, they are created and deployed at the same time and spread the same information at almost exactly the same time. Those kinds of signals and triggers allow us to detect them. Often, using those bots is how people spread misinformation at scale.

Let me stress that not all bots are bad. Other cases were highlighted; Senator O'Reilly mentioned that he had been a victim of a fake account. One must be who one says one is on Facebook. We have an authentic identity policy, which means that one must go by the name that one's friends and family recognise. We rely in large part on people reporting to us when others do not adhere to that. Sometimes it is really obvious and one would know that the name is not real. In other cases it may be that someone has set up an account for trolling purposes. If that is reported to us we can detect it. I stress that we do not always get it right.

There are also impersonating accounts. I know that this can sometimes affect public figures, whereby somebody purports to be that person online. If that is reported to us we can detect it pretty quickly, and we know that Deputy Timmy Dooley is Timmy Dooley and there is only one Deputy Timmy Dooley. Equally, the commissioner, Ms Dixon, referred to facial recognition, which is a technology we do not currently use in Europe. I think most people would be aware that under GDPR, people will be given a choice as to whether they want to opt in, so as Deputy Dooley said, it is very much on an opt-in basis, and it is a very effective security tool for people. If one opts in, any time a photograph in which one is included is uploaded to the platform, one is alerted to that fact. I think that will do a lot for those who decide to opt in. Other people will not be comfortable with the technology used in order to make that service available.

As we are dealing with those two problems in tandem, we rely a great deal on user reports. If one is operating a page, the administrator of that page must be a real account, that is, a real person who is who he or she says he or she is. We are conscious of that as well in the context of the referendum. While Deputy Lawless rightly points out that there is an absence of applicable law when it comes to campaigning in the online space, we are conscious of how sensitive this is for people and have been thinking about it for a long time. We have met personnel from the Transparent Referendum Initiative, TRI, and I wish to acknowledge Ms Liz Carolan, who is present today. We welcome what they are doing. Anything that brings greater transparency to this process is a good thing.
We have not been able to address all of their concerns - I admit that upfront - but what we have done is open up a dedicated channel of communications so that if they encounter anything of concern, they can surface it to us and we will take a wider look at the issues raised. They might come to us with one issue, but we take a look at the page in the round to make sure that its administrators are legitimate actors on the platform.

It is worth spelling out what the "view ads" tool is and what it is not. We understand that a number of issues need to be addressed with our political transparency tools for advertisements. As Mr. Kaplan outlined, "view ads" is the very first step in a series of steps to bring greater transparency to online advertisements. To explain, if one is served an advertisement or if one goes to a page, one will be able to click on the tool and see all the advertisements which that particular advertiser is running at that moment in one's jurisdiction. If a particular campaign group is running an advertisement in Ireland, one will be able to click on it and see all the advertisements it is running at that particular time. What that is supposed to address is the issue of micro-targeting or so-called dark advertisements, whereby I am being targeted because I am a woman in my 30s who is from an urban area while somebody else is being targeted because he is a man in his 60s and is from a rural area. One is trying to target people with different messaging. It is supposed to bring greater transparency to that. We hope it will. Of course, it does not address all of the concerns that people have.

The other things we have announced will take time to build and roll out. I acknowledge that it is not of any great comfort to people when I say it takes time to build them and roll them out but unfortunately, that is where we find ourselves. One such measure is that one will need to say who has paid for an advertisement. We need to build the functionality for people to be able to build that into their political advertisements. Another is that one will need to verify who one is as a political advertiser whereby if one is running political or issues-based advertisements of national significance, one will need to be verified. That is quite tricky to deploy and it will have to be different in every country because there will be different systems and different vendors in place. The other element is slipping my mind at this exact moment in time.

We are rolling out these systems over time. This is the first one, as Mr. Kaplan said, and we are adding Ireland to the pilot on 25 April. Obviously that will be with a four-week run into the referendum but there are other things we still need to do and we are working on them.

Chairman, may I ask a question on the same theme? To clarify, is Ms Sweeney saying we will not be able to see who has paid for these advertisements?

Ms Niamh Sweeney

Not at this point. The Deputy may know that there should be information associated with the page. In a lot of cases that will be accurate information and it will represent the particular group, but at this point, we do not have the functionality where we can say explicitly who has paid for the advertisement.

Is it the case that this has not been rolled out in Canada either?

Ms Niamh Sweeney

No. What we are running in Ireland is exactly what we have tested so far in Canada.

I welcome the move announced for 25 April. That is progress and it is very welcome. It is good to see it. A number of my questions will be about the technical feasibility of meeting the requirements of my legislation, and I suppose the witnesses have answered that in one sense through the things Facebook is doing voluntarily. Obviously it is possible. I had hoped it would go further and, as Deputy Smith has just mentioned, I had anticipated perhaps that it would, because the core of my legislation is the transparency notice. It is great to see what other advertisements are being run by the same page and it is great to be aware of the dark advertisements. I think that may have been available already in some cases, but it is important to be able to see who is responsible for the advertisement. I know Ms Sweeney mentioned the difficulty in identifying an issue-based advertisement as opposed to a candidate-based advertisement. Usually the candidate is front and centre so that one cannot miss it; James Lawless is always smiling at you from Facebook. The issue-based advertisements can be more nuanced.

I have a number of questions. How technically feasible is it to address this? I understand that Facebook is working on it and that it is in pilot, if not in Canada or another jurisdiction, then at least in the laboratory. Will Ms Sweeney confirm if that is so? How far away is Facebook from being able to do that? In essence, my understanding from a previous conversation is that Facebook does not see any real technical difficulties with the provisions in the Bill and that it is simply a case of rolling out the pilot. It would be great if the witnesses could confirm that.

The second question I have is around the dataset. I appreciate and acknowledge the note sounded in the opening statement and indeed the note that Mark Zuckerberg struck last week. I am aware that Facebook is not a charity and that, while it does some good work and, to an extent, some philanthropy, the essence of the organisation is not philanthropic. That is not why Facebook is in existence. Facebook is a company, a corporate entity, and it exists to make money. That is fine. There is nothing intrinsically wrong with that. That is what most businesses do. However, if the general public - the 2 billion or whatever number of users are on it - do not pay to use Facebook, and advertisers do, is data a revenue stream?

Has Facebook ever sold data to third parties? On a second and related theme, how many categories of data does Facebook keep for metadata use? I have heard the figure of 96 categories and I do not know if that is accurate or not. It seems like a large metadata set so will the witnesses respond to those questions?

The third question relates to Cambridge Analytica. Earlier today we heard from the Data Protection Commissioner, Ms Dixon, who set out the Irish context and what is being addressed there. We had a number of questions about that as well because it was the Irish Data Protection Office that responded to the Max Schrems complaint in 2012. It is a credible argument, borne out by a lot of evidence, that if the findings of the Irish Data Protection Commissioner had been followed through in 2012, the Kogan app could not have exploited the loophole and Cambridge Analytica could not have accessed the data. If we extend that to its logical conclusion, Britain might still be in the European Union and Donald Trump might still be in the hotel business. How do the witnesses respond to that scenario? The key question is why Facebook did not act sooner. I appreciate that we have been over that ground already.

Mark Zuckerberg might not know his suite.

Indeed not; we would not be here and we would be doing something else. In an Irish context it is a bit like the horse having bolted, because the Kogan app is being zoomed in on. Much of the commentary stressed that it is the tip of the iceberg; it is the one app we know about, but it is the unknown unknowns and the known unknowns that are the ones that get us. I appreciate that the witnesses supplied information in the opening statement about the Kogan app, but how many other apps have scraped, taken or improperly accessed data from Irish and worldwide users? I understand the figures are pretty high and that it is pretty close to almost the entire data set. I would greatly appreciate an answer to that.

Given that we touched on GDPR coming into force in five weeks' time, my understanding is that Instagram is not yet GDPR compliant and one cannot download one's dataset, but I presume that will be done before 25 May.

The witnesses might just confirm that.
Mention was made in some media reports that Facebook executives used to communicate with each other and then delete or wipe messages from their internal messaging within Facebook Messenger. Is that available publicly? Can it be made available? Why was it only available to executives? What kind of information and messages were being deleted? What was this tool about?
Facebook is a very successful corporation and a very successful application, as evidenced by its take-up, its user base and the pace at which it has grown. Many apps and platforms have been developed for it and it has been embraced by users to the point where it has become a 24/7 feature of most people's lives. Political advertising and issue-based discussions are very much part of that. Controversies, threads and arguments are part of the whole package. It is hard to understand, given the resources, data and knowledge available to it, that Facebook did not know about Russian interference in the US elections, Brexit and, as we saw on Channel 4, other processes around the world. How is that credible? It is hard to believe Facebook had no inkling that any of this was going on. I would like to believe it, but it is hard to credit that Facebook was in complete ignorance of these activities and only found out when Channel 4 did some digging around and came back to tell it what had been happening on its platform for the last number of years. It is hard to take that at face value. I would appreciate some response to that.

Deputy Ryan will realise we are running tight on time and will do his best.

I thank the Acting Chairman. I thank Mr. Kaplan and Ms Sweeney. Mr. Kaplan is very welcome to Europe. This democratic republic is part of Europe. I was thinking about some of the names mentioned earlier. I am a former Minister for Communications, Energy and Natural Resources and I remember sitting on the EU Council with Viviane Reding for four years. She is a formidable and powerful woman. That power in Europe to set standards is very important and rightly in place. It has been one of the proud records of our Union. At Council meetings, there were only 27 people around the table but in private session some were saying they wanted Facebook shut down. That was never the Irish position. We are hugely supportive of companies like Facebook, not just because its large headquarters is based here, but also because we are a modern, forward-looking country which embraces the digital revolution, openness and democratic engagement. Shutting things down is sometimes too much on the attack side. We have always stood up for co-operation between US multinationals and Europe.

My experience in Europe has also involved working with the likes of Jan Philipp Albrecht, a Green colleague who was centrally involved in the European Parliament in the development of the GDPR. He is a brilliant parliamentarian. It was Parliament which drove standards in recent times, more so than the Council or the Commission. There is real strength in that. It is a cross-party institution and it unites Europe left, right, green and liberal on the need for standards. Facebook is facing a Europe which is not divided on this issue but is rather united absolutely in the European Parliament. Having worked on these issues for ten or 15 years, I reflect also on the real strength in the likes of Digital Rights Ireland and Max Schrems, who have done a real service to public policy by asking awkward questions and raising high standards. Some might see it as being difficult but, to my mind, they have done it out of a strong ideal about what the revolution could bring. They have done a real service and our party very much supports and recognises the work they have done in the courts and elsewhere.

Deputy Dooley quoted someone from Facebook earlier. I quote from the boss, as it were, Mr. Zuckerberg, and his address on 16 February 2017:

To our community,

On our journey to connect the world, we often discuss products we're building and updates on our business. Today I want to focus on the most important question of all: are we building the world we all want?

It is the right question. My first answer, however, is that I do not want to live in a world of surveillance. I do not want anyone to have surveillance or control over me, be it a state or private entity. That is what the issue of privacy and the need to protect data rights are about. This revolution is only at the beginning. We need it to evolve so that we embrace the Internet of things and do all the management required to protect our environment, improve our health service and obtain all of the other efficiencies we can gain from digital systems.

We cannot do that if we do not have trust and the fundamental building blocks needed around the use of our data. We cannot convince our people to share data if they think it is being used for exploitation, which is why this issue is important. This is important for every committee of the Houses, including the energy, health and education committees. If we do not get the building blocks and basic rules for data use right, we will not get all of the benefits from the digital revolution that is evolving. It is as significant as that. It is one of the most important public policy issues we face.

As I said to the Data Protection Commissioner earlier, I recognise that time is very short here. I wish we had the five hours Mr. Zuckerberg had before Congress and elsewhere. Recognising that, I will provide 20 questions in writing to which I request a written response. I will ask a couple of them now, however, reflecting on some of what we have heard. As time is tight, a "Yes" or "No" answer will be appropriate in some cases. These are some of the questions people in Europe are now asking about Facebook.

Facebook's opening presentation set out that the company engages with the Data Protection Commissioner on new products and how things are working. In that consultation, are there discussions around the new face recognition services which are about to be offered by Facebook in the EU? I understand the new policy is based on an opt-in to facial recognition to inform Facebook users that their faces have appeared in photos uploaded by other users. Does that mean Facebook will index all facial profiles on any photo uploaded, regardless of any consent by any person depicted? More specifically, will Facebook refrain from analysing any photograph uploaded by any user for biometric data on persons depicted in those photos until it has received an opt-in from every person depicted in those photos? I acknowledge that this is specific, but that is where we are at in terms of what we need to know.

Privacy International created a new Facebook profile recently to test default settings. By default, everyone can see one's friends list and look one up using the phone number one provides to Facebook. This is not what proactive privacy protections look like. How does this use by default work under article 25 of the GDPR, which is about to come into effect?

I suggest the Deputy puts those questions in writing. We are exceptionally tight.

I have other questions but I need to ask these ones.

We have to be out of this room by 5.40 p.m.

If the Acting Chairman will bear with me, I will complete my questions. The answers to the questions I have just asked can be "Yes" or "No". Why does the default setting allow for maximum exposure sharing? Why not make each setting a choice before a profile is functional? Why do privacy settings continue to focus on what friends can and cannot see? I go back to what Ms Sweeney said. When are we going to treat advertising settings as privacy settings so that there is that level of protection across all areas? Will Facebook stop creating shadow profiles of people who have not registered a Facebook account and who have never given their consent to being tracked?

Why did Facebook not contact the Irish Data Protection Commissioner in 2015 when it became aware of the controversy regarding Cambridge Analytica and the breach of data, given that it had been a point of real contention between Facebook and the Commissioner? Section 702 of FISA was recently reauthorised. In its referral to the Court of Justice of the European Union, the High Court stated as a factual finding that mass surveillance by US authorities continues to take place. Is Facebook preparing for a situation where the EU-US privacy shield is declared invalid? Does it agree that a stable solution will have to involve changes in US law?

Facebook has voluntary agreements with Swedish intelligence services to share data. How does it reconcile that with the GDPR? Is Facebook sharing or has it shared user data with any other intelligence service, including UK intelligence services, in relation to the question I asked Ms Dixon earlier?

I am conscious of the huge number of questions to be answered. I ask, therefore, that I may put follow-up questions also.

Specifically, I want to pick up on what specific measures are being put in place in relation to article 9 and the processing of special categories of data. To be clear, we are not simply talking about special categories of data, that is, political views, religious views and sexual orientation, that people might put in their profile, but also that which might be revealed by the actions they take online and in their profile. It is important that we are very clear that it is what one does and says and what might be deduced from that. Is there any danger of any of that kind of data being included, for example, in the observed data that is being discussed? I think Deputy Dooley spoke about that. I refer to the question of that observed data and clarity over who owns it, how it can be used and how it can be targeted. Is it linkable back, or is it linked back, to individual accounts?

I refer to the facial recognition and specifically the question around the opt-in. If persons were to opt in in the way that was suggested, would that photograph still only be used by the user, or can it be accessed or used by third parties by virtue of that opt-in? There is a concern around whether it is proportionate in terms of the purpose of the opt-in.

There is also a concern, which I know was mentioned to the American Senators on the judiciary committee, that they could expect GDPR-type regulation, but as far as I know, automatic privacy settings and an automatic opt-out do not seem to be on offer to them, so maybe there is an inconsistency there. On that question of automatic highest-level privacy settings, can the witness guarantee that that will be the default?

The witness mentioned the question of data breaches. It is a crucial question as to why those whose data was breached were not informed. Is that now going to be a policy for any future data breaches?

This is my final point but it is very important. I, along with Deputy Ryan and others, wrote to ask that these optional tools be rolled out early, although I believe the regulation we need is such as Deputy Lawless has proposed. I was delighted when I heard initially about view ads but I am extremely disappointed to hear that those who purchased the ads will not be there. We are not talking about content; we are talking about commercial content. Payments are being accepted, commercial activity is taking place and ads are being sold now. We know that in the US, of the five million ads examined over a six-week stretch, over half of the advertisers were not registered anywhere or findable online. From initial work by Transparency Referendum and others, we know there are multiple actors which are not registered anywhere, are not findable online and are effectively anonymous, and which are currently putting up ads in relation to the referendum; indeed, there are international advertisers as well. There are situations where individuals' images have been used in advertisements by the opposite side in the referendum, where pictures have been taken from Facebook without their permission, and that does not seem to have been responded to. Thank you very much.

I thank Senator Higgins.

I have just one question because I know we are exceptionally tight on time. I want to ask about hate speech on platforms. The witness initially informed us that it would take an hour before hate speech could be taken down. I believe it has to be flagged before one actually engages with the process itself. How can we create a situation where one does not need to flag hate speech before it is taken down? I will give Mr. Kaplan approximately 13 minutes to respond.

Mr. Joel Kaplan

I will try to go as quickly as I can and I am going to tag on a bunch of replies to Deputy Ryan and others. I will start with Deputy Lawless's questions. The first question had to do with how technically feasible it is to enforce transparency around issue ads relative to electoral ads. I believe that was the question. It is a very astute question because that is exactly the issue we are struggling with and that we are trying to solve. A couple of months ago we started trying to build the transparency tools for electoral ads because that is easier for the artificial intelligence to detect - for instance, discussion of a candidate's name. Discussion of issues and issue ads come in such varied forms and involve familiarity with and understanding of the nuance and the context involved. The state of artificial intelligence and machine learning is still developing. With the combination of artificial intelligence and the significant investment in resources in hiring, literally, thousands more people just to focus on this question, we think we will be able to review and capture most issue ads. The artificial intelligence classifiers may not detect some ads and we will have to rely on our users to report them and then have the manual reviewers address that.

That is why this takes some time. Our engineers are working furiously to build it and deploy it, but that is why it is going to take at least several months before that capability is live on the site.

In terms of the dataset, I believe we were asked if we ever sold data. This is one matter I want to be super-clear on. We do not sell people's data.

Ms Niamh Sweeney

We never have and we never will.

Mr. Joel Kaplan

It is actually core to our business model not to sell people's data because people trust us with their data and we know we have to rebuild that trust. If we were to sell people's data, that trust would be lost and we would ultimately not have a business.

Is the witness concerned about facilitating the sale of data that might be scraped from-----

Mr. Joel Kaplan

How many categories of data does one keep? Deputy Lawless raised, I believe, the 96 or 98 categories. That came up in the congressional testimony last week and the short answer is that I am not sure where that number came from. Once our personnel can figure out exactly where that information originated, we will provide an answer and try to capture and characterise the types of data that we collect.

Ms Niamh Sweeney

I think I know. It came from an article in The Wall Street Journal, which was not about the number of data points appended to any one user's profile but about how advertisers decide how they are going to target people. I think it is worth dwelling on that for just one second. When one is an advertiser advertising on Facebook, one does not know who it is one is targeting. It could be based on location, on gender or on some of the interests that we have created buckets for, but it is all aggregated and it is all anonymised. We know who a person is, and that is why we will never sell a person's data - that is a key point for us - but the advertisers do not. I think that is an important point. I know that it does not address all of the points made, but it is important.

How many data points one has depends, I think, on how active one is on the platform. If one likes a lot of pages, each like would obviously constitute a data point. One would never know my partner was on the platform, because he is a ghost. It really depends on how active a person is. I will hand back to Mr. Kaplan, who was on a roll.

Mr. Joel Kaplan

Yes but get ready as there are a couple of questions coming your way.

The next question was on how many more Cambridge Analyticas are there. As I mentioned at the outset, we are undertaking a rigorous review of all the apps that exist on the site that had access to that same amount of data before the 2014 platform change. I do not know how many will prove to have suspicious activity, but if we see any suspicious activity or if, under the new bug bounty programme for developers, it is reported to us, we will audit them, we will let the regulator know and we will let the people affected know. That is the commitment we have made.

Ms Niamh Sweeney

I think it is important to flag that it could not happen today. If one is an app developer, one gets very limited information now. We made a big change in 2014 to cut out friends' data entirely, so now, when someone downloads an app via a Facebook interface, the developer gets very limited data. If a developer wants to request further data from a person, the developer has to make a business case to us for it first and then, again, request that permission from the user.

What happened with Kogan cannot happen again, but that is of little comfort to anyone who was affected or whose data was used unbeknownst to them. At least going forward, we know that is a known known.

Mr. Joel Kaplan

The next issue to address is Instagram. Instagram will be GDPR compliant. On the question of Facebook executives deleting messages, after the Sony hack of executives' emails a couple of years ago, our security personnel put in place new protocols for our executives, including our CEO. He does not really communicate that much by email but tends to use our Messenger product. One of the protocols put in place was to clear and delete all his Facebook messages. That was a security precaution taken, given the sensitivity we and other companies saw after the Sony hacks.

We are working on that capability for users and I am not sure exactly when that will be available, but my understanding is that this is something that we hope to have in the near future.

On the Russian interference question, we were asked how it was credible that we did not know. Certainly we wish we knew about this type of Russian interference before the US election. Our security personnel were very attuned to the notion of state-actor interference, including Russian interference, but candidly they were focused on more of the traditional types of cyber warfare, namely, hacking, malware, phishing and things like that. It was not until after the 2016 elections that we began to hear reports about Russian information operations in the US during that election and that was what kicked off the internal review that led to the discovery of the pages and ads that we talked about late last year in the fall.

I am going to ask Ms Sweeney to take a number of Deputy Eamon Ryan's questions.

Ms Niamh Sweeney

We know the Deputy's Green Party colleague, Mr. Jan Philipp Albrecht, MEP, very well and he played an important part in bringing the GDPR to life. Digital Rights Ireland is also an important player in this space. We have stepped up our engagement with it recently. I think it is fair to say that we do not always agree, and it makes that very clear to us, but we try to at least engage. One of the issues we have engaged on recently was facial recognition and in recent weeks, we outlined how we will be rolling out the option for people to opt in to that under GDPR. It has made it clear to us that it does not agree with how it operates, but let me make a stab at explaining how it works, so the committee can give us its view on that.

If one does not opt in to facial recognition, we do not create a template of one's face and append that to one's profile. When a photograph is uploaded with three people in it and facial recognition is turned on in respect of two of the people, in order to eliminate the other person, we have to scan the face, as such. "Eliminate" is probably the wrong word but we need to do that in order to eliminate them from the photograph. When we have eliminated them, we do not maintain any information about their facial or their biometric data. There is nothing appended to their profile. It is literally a process of elimination. I know some privacy rights campaigners still take issue with that, and I recognise where they are coming from, but we see benefits, particularly from a security perspective, for people who might be victim to impersonating accounts. It certainly puts paid to that straight away.

We have engaged very closely with the Data Protection Commissioner on that issue. In recent weeks the commissioner's office asked us for a paper setting out exactly how it works, mirroring a lot of the questions the Deputy has asked. Obviously, the commissioner's office would have some additional technical knowledge on this matter. We have set this out in a very detailed manner in writing to the commissioner and we are awaiting feedback on that matter.

There were several other questions. We do not create shadow profiles. That is one of those myths. A shadow profile, for those who may be familiar with it, is the notion that if a person is a non-Facebook user, we collect data about that person and create a shadow profile, waiting for the day when he or she creates an account. That is not the case. It is true that if one goes to a website that has some of our cookies or pixels embedded in it - the same website would have them for Google, LinkedIn or Twitter - it pings back information to us, as Mr. Kaplan described earlier, such as browser information, as in the kind of browser one is using, whether it is Chrome or something else, and one's IP address. It is similar to the way in which Internet profiles work and I am sure Deputy Ryan knows more about this than I do. It is not the case that, when it lands with us and we understand one is not a Facebook user, we hang on to it. It is gone from our system within a matter of days and that is an important point to flag as well.

On the notion of surveillance, that is very important to us as well. We are signatories to the Reform Government Surveillance principles. A lot of the big technology companies have signed up to that. We work closely with law enforcement here and elsewhere on certain matters, particularly around child safety and child exploitation and lots of other pressing issues. We have very particular protocols in place when it comes to requests from law enforcement. The Department of Justice and Equality said it was going to initiate a wider review of our surveillance laws, via the Law Reform Commission and that would be a welcome development. We are very alive to those issues and, as I said, the Reform Government Surveillance principles are something we stand behind.

On the reverse lookup, have we shut that down?

Deputy James Lawless took the Chair.

Mr. Joel Kaplan

We shut that down. The feature Ms Sweeney mentioned was useful, particularly in countries where the spelling of names can be difficult and where many people share a common name, because it was easier to look people up by their phone number. However, it was subject to abuse by people scraping publicly available information. Only information set to public could be scraped, but because of that risk, several weeks ago we shut down the reverse lookup functionality.

We are under the pressure of time, as we must vacate this committee room. Important though this committee is, there are other items of business ongoing.

I have a final question on the legislation. Will Ms Sweeney send her response in writing to the committee secretariat on the questions that she did not get the opportunity to answer? Although we have had a wide-ranging discussion, my final question is on the legislation which is the topic on which we opened our discussion.

I welcome the pilot study from 25 April, which is very positive, but every advertiser who opens an account on Facebook and begins to run campaigns, be they political or otherwise, gives details of who they claim to be and some information about themselves. How hard can it be to put that on a notice upfront? I understand there is an issue with this in respect of issue-based campaigns, because one cannot always detect who they are, but because people watch their opponents, I wonder how far away we are from this kind of self-regulation and transparency in politics. I get that issue-based campaigning can be difficult and there are grey areas, as we discussed, but as there is some data on the advertiser, because they have opened an account and are engaged in commercial advertising, it does not seem that technically difficult to add it to the metadata information pane. How far away are we from that?

Vice Chairman, several questions were not answered but I have one question in particular about the Data Protection Commissioner. Why did the witnesses not contact the Data Protection Commissioner when they found out about the third-party Kogan app scraping friends' data? Given that this had been a contentious issue between Facebook and the Data Protection Commissioner, and given that the witnesses had been given direction in 2011 and 2012 to shut it down and had fought for two years over not doing so, why did they not tell the Data Protection Commissioner that it had happened?

We will get answers to those two questions and then we will wrap up the meeting. I invite Mr. Kaplan to respond.

Mr. Joel Kaplan

On the first question, unfortunately we are several months away. I know that is frustrating. We are working as quickly as we can to develop that capability and put the tools in place, but my understanding from the engineers who are working on it, is that we are several months away from that point still.

Ms Niamh Sweeney

Let me give a practical example. If Deputy James Lawless, a candidate, is running an advertisement, there are probably other people who have access to his account and who might have access to the credit card details, so it does not necessarily follow that the person whose credit card details are set against it represents the entity that is running the advertisements. If it were the Fianna Fáil Party, again, I am sure there would be individuals involved and one could not just expose their information in that way. I see the point the Deputy is driving at, but there are complicating factors.

Mr. Joel Kaplan

I will also respond to Deputy Ryan on the question of why we did not notify the Data Protection Commissioner. That is a very fair question and it is something that we regret we did not do.

What we were focused on when The Guardian ran the article in December 2015 was ensuring that the data was secure. We suspended and banned Kogan's app. We contacted the entities that we understood had gotten the information about the app installers and their friends from Kogan inappropriately and in violation of our terms. We demanded that they delete the data. They confirmed for us within a matter of weeks that they had done so, and that is what our focus was on.

In retrospect, it would have been better had we notified the DPC and it would have been better had we told the people whose data Cambridge Analytica inappropriately got hold of. That is why we are focused, as both Ms Sweeney and I have said, on going back over that period, even where there were no reports of misuse, trying to find misuse where we can and taking appropriate action both to shut it down and to notify the regulators and the people affected.

Ms Niamh Sweeney

Let me add that obviously we are regulated by the Data Protection Commission here in Ireland. Cambridge Analytica is subject to the jurisdiction of the information commissioner in the United Kingdom, given that is where the company is established for data protection purposes.

We are engaging with both, are available to both and have been in close contact with them as they investigate from the perspective of both Cambridge Analytica and Facebook.

On behalf of the committee I thank the witnesses for coming before us. It is much appreciated by the committee.

I thank all the members who contributed intensively and constructively as well. It is proposed the committee will publish the opening statements received on the committee web page. Is that agreed? Agreed.

Ms Niamh Sweeney

Essentially we had the opportunity to have official scrutiny of the Bill. We had some suggestions with respect to amendments. I do not know if we would have an opportunity to put them forward at another time, so with the permission of the committee I will leave them with the Chair.

That would be great. The discussion has been more wide-ranging than the content of the Bill, but the Bill is ultimately the reason we are here. Perhaps Ms Sweeney could submit her suggested amendments in writing to the committee or to me.

Ms Niamh Sweeney

I will give them to the Chairman now, however, there is a possibility that I have written on the back of them.

We will take them before we finish. That will be very helpful. I thank Ms Sweeney for that.

The joint committee adjourned at 5.50 p.m. until 3 p.m. on Tuesday, 24 April 2018.