Session 3: The State of Play in Regulation

This is the third session of the International Grand Committee on Disinformation and 'Fake News'. I welcome to our meeting this afternoon Ms Helen Dixon, Data Protection Commissioner for Ireland; Mr. Paolo Cesarini of the Directorate General for Communications Networks, Content and Technology, DG Connect, of the European Commission; Mr. Marc Rotenberg, president and executive director of the Electronic Privacy Information Centre, Washington DC; and Deputy Richard Bruton, Minister for Communications, Climate Action and Environment in Ireland. I thank them all for coming here today.

I draw the attention of witnesses to the fact that by virtue of section 17(2)(l) of the Defamation Act 2009, witnesses are protected by absolute privilege in respect of their evidence to the committee. However, if they are directed by the committee to cease giving evidence on a particular matter and they continue to so do, they are entitled thereafter only to a qualified privilege in respect of their evidence. They are directed that only evidence connected with the subject matter of these proceedings is to be given and they are asked to respect the parliamentary practice to the effect that, where possible, they should not criticise or make charges against any person, persons or entity by name or in such a way as to make him, her or it identifiable. I also advise our guests that any submission or opening statement they make to the committee will be published on the committee's website after this meeting.

Members are reminded of the long-standing parliamentary practice to the effect that they should not comment on, criticise or make charges against a person outside the House or an official either by name or in such a way as to make him or her identifiable.

The format of the meeting will be the same as previous sessions. All witnesses are asked to give a five-minute opening statement and I will give a ding on my glass after four minutes to let them know they have a minute remaining. I ask Ms Helen Dixon to give her opening statement.

Ms Helen Dixon

I thank the Chair and members of the committee for inviting me to be here today. I am pleased to have the opportunity to share details of the role and position of the Irish Data Protection Commission, DPC, in relation to the regulation of online platforms.

There is no doubt that despite the great benefits and ever-greater access to information the Internet has provided all of us, it also presents significant challenges to our rights and freedoms in how it now operates. The challenge of combating disinformation in many contexts, such as during electoral cycles or in, for example, public health scenarios, is a pressing issue of our time, given the negative consequences for democracies and societal well-being.

Issues of harmful and/or illegal content and disinformation stretch well outside the scope of the data protection legal framework. Data protection is predicated on a fundamental right of individuals to have their personal data protected and, as a result, the remedies individuals may avail of under the general data protection regulation, GDPR, require the personal data of that specific individual to have been processed.

The DPC is very pleased to have an opportunity for interaction during this panel with the Minister for Communications, Climate Action and Environment, as the DPC has this year responded to his consultation on the regulation of harmful online content and the implementation of the audiovisual media services directive. In the context of making a submission to that consultation, the DPC clarified that it supported the Irish Law Reform Commission’s recommendation, in its 2016 report on harmful communications and online safety, that consideration be given to a dedicated office with a statutory responsibility to promote online safety and to oversee and regulate a system of takedown orders for harmful digital communications.

The DPC, however, has responsibility for a number of areas of regulatory activity that relate directly to the theme of the hearing and are relevant to the panel.

First, it is possible that content relating to a specific individual posted online may be harmful to the individual or may contain false information about the individual. In such circumstances, the individual may be able personally to exercise his or her data protection rights, in particular to erasure and rectification. In circumstances where a platform does not comply with a request to exercise data protection rights by an individual, the individual may make a complaint to the DPC, following which we will take the matter up as appropriate on his or her behalf. Equally, however, it is worth recalling that Article 85 of the GDPR requires a reconciliation of the right to freedom of expression with the exercise of data protection rights, and therefore they are not absolute rights. The majority of complaints the DPC receives in respect of platforms are complaints about erasure.

Another issue of significance is the role that personal data play in a social media context through facilitating the so-called micro targeting of individuals with specific content, thereby amplifying any harmful effects of disinformation. In such a scenario, the profile a platform has created of a user and the categorisation of that user as being of a certain lifestyle, passion or habit may allow an undecided voter in an election context to be pushed in one direction or the other. All of this may happen without the user being aware his or her data are being deployed to reinforce the individual’s existing viewpoint rather than the individual being in a position to take an objectively informed stance based on an understanding of both sides of an issue. Given the rates of online users who consume their news exclusively on social media, this is a concern. As a data protection authority, we have a number of ongoing investigations into how the online behavioural advertising system operates and whether in all respects it is in compliance with the GDPR, especially in respect of lawfulness and transparency to users. In this regard, we have investigations open into platforms, data brokers and advertisement exchanges, which will conclude in 2020.

The issue of children and other vulnerable Internet users being subject to disinformation and harmful content somewhat overlaps with the protection of children’s personal data in an online context. To protect children, whether in respect of their personal data or to protect them from harmful content, presupposes that children can be identified as such on the Internet. To date, a systemic solution to age verification online that protects younger users but leaves the Internet open and accessible to all has been elusive. The DPC has watched with interest the attempts in the UK under the Digital Economy Act 2017 to impose age verification measures for access to legal online pornography. That the legal provisions have not proceeded to implementation simply underlines how difficult it is to find a solution that meets all requirements. The DPC has run a consultation over the past year on the protection of children’s data, including a consultation directly with children through their schools and youth centres. The DPC will next month publish a guidance note based on the outcomes of that consultation, proposing ideas and criteria that data controllers need to take into account when implementing mechanisms to identify and protect children online.

Finally, I raise an issue of interest to the committee, namely, that of how to foster international collaboration. As an EU data protection authority, the commission is bound in close legal co-operation with all other European Economic Area data protection authorities. Indeed, we meet in Brussels weekly. In addition, the DPC is a member of the Global Privacy Assembly, has signed memoranda of understanding with a number of global data protection authorities, and this year we have been visited at our Dublin offices by a large number of other commissioners, including the New Zealand, Icelandic, Australian and UK commissioners. The dialogue and the opportunity to discuss regulatory approaches and solutions to issues of common concern are invaluable in shaping better outcomes in our regulatory role.

I look forward to the panel discussion and questions from committee members.

I thank Ms Dixon and invite Mr. Cesarini to give his opening statement.

Mr. Paolo Cesarini

I thank the committee for inviting a representative of the European Commission to this important event. I am head of the unit responsible for media convergence and social media policy at DG Connect. Democracy, fundamental rights and the rule of law are core values of the European Union. They bind us together and underpin the functioning of our institutions. President-elect Ursula von der Leyen has announced that protecting European democracy will be a priority of the new Commission and underlined that we must do more to protect our democratic processes and institutions from external interference. Disinformation poses major challenges to our democracy, as new technologies can be used, notably through social media, to disseminate disinformation on a scale and at a speed that are unprecedented. They can create personalised information spheres and become powerful echo chambers for disinformation campaigns, polarise the public debate and create tensions in society. However, media manipulation and the strategic use of disinformation are not the exclusive prerogative of foreign actors. Domestic actors, too, can exploit digital technology to interfere in electoral processes and, increasingly, to manipulate policy debates in areas such as climate change, migration, public security, health and finance.

While the conduct of free and fair elections is primarily a responsibility of member states, the cross-border dimension of efforts to manipulate democracy, as well as the importance of joined-up efforts to address such threats, make a European approach necessary. What affects one member state affects us all. The Commission, along with other EU institutions, has put in place a robust framework for co-ordinated actions against disinformation, with full respect for European values and fundamental rights. We often mention free speech, freedom of association, press freedom, and pluralism, which are fundamental principles that need to be kept in mind.

Our work on combating disinformation has evolved over three major, interlinked initiatives. Last December, the Commission adopted an action plan against disinformation, which builds on the communication on tackling online disinformation adopted in April 2018. Furthermore, in September 2018, the Commission put forward a comprehensive election package, setting out a variety of measures, with a focus on what were then the upcoming elections to the European Parliament.

Broadly speaking, the work carried out in recent months centred on four strands of action. First, we improved the EU capability to identify and counter disinformation via our strategic communication task forces and the EU hybrid fusion cell, which operates within the European External Action Service. Second, we supported member states by setting up a rapid alert system to facilitate the exchange of information between member states and the EU institutions. The system has become a reference point for national authorities and a mechanism for strengthened co-operation with platforms. It also links up and facilitates co-operation with other international partners, not least the G7 and NATO. Third, in the run-up to the European Parliament elections, we closely monitored the implementation of the code of practice on disinformation, to which the major online platforms signed up in October 2018. The platforms, in particular Facebook, Google and Twitter, were subject to a programme of targeted monitoring that required them to report each month from January to May, inclusive, on the progress made on the implementation of their commitments under the code.

The monitoring was carried out in co-operation with the audiovisual authorities of the member states, the ERGA. It focused on those actions that held particular relevance to the integrity of elections, namely, actions to disrupt the advertising and monetisation incentives for purveyors of disinformation; to ensure the integrity of the platform services against inauthentic behaviour, including fake accounts and malicious bots; and to ensure the transparency of political advertising. In the last respect, searchable political advertising libraries were created and they provided, for the first time insofar as online political advertising is concerned, a better view of the identity of the sponsors, the amounts spent and the basic targeting criteria used in the campaigns.

The fourth strand of actions included a number of initiatives directed at improving societal resilience to disinformation.

One of the aspects on which I wish to focus is the effort to promote media literacy, which is important to enable citizens to evaluate the credibility of information they encounter online and to access alternative points of view when they navigate social networks. In the long run, media literacy initiatives may prepare users of online platforms and social media to better understand the effects of disinformation and the malicious actors with whom they may be confronted.

We are facilitating the creation of a European multidisciplinary community of fact checkers and academic researchers. The programme will launch in 2020. The Commission has supported investments in new technologies for content verification and network analysis through social media. It has also launched a new platform, the Social Observatory for Disinformation and Social Media Analysis, SOMA, to facilitate networking, knowledge exchange and the sharing of best practices among independent fact checkers. The Commission will follow this up by funding a new digital infrastructure entitled the European Digital Media Observatory, which will offer tools and networking possibilities to link fact checkers and academic researchers, to improve their understanding of the phenomenon and to exercise better oversight of the dynamics that disinformation shows online.

Disinformation is a multifaceted phenomenon, which requires a multidimensional response. Our preliminary view is that all the efforts I mentioned have contributed to narrowing the space for malicious activities online. On 29 October, we published a report that takes stock of the self-assessment reports prepared by the signatories of the code of practice. Our initial view is that it is a mixed kind of assessment. After all, the recent elections to the European Parliament were not free from disinformation and malign actors constantly change their strategies. As such, we need to strive to be ahead of them. The evolution of the code is ongoing. When the full evaluation has been carried out, we will see what further actions are necessary.

Mr. Marc Rotenberg

My organisation, the Electronic Privacy Information Centre, EPIC, was established 25 years ago to focus public attention on emerging privacy issues. More than a decade ago, we worked with an organisation entitled Facebook Users Against the New Terms of Service. It was an international campaign joined by more than 150,000 people on the Facebook platform to oppose changes in the company's policies that would diminish personal privacy. As a consequence of the campaign, in 2009 Facebook gave commitments to its users that it would allow them to actively participate and vote on changes in its business practices. At the time, this was viewed as a great success and a demonstration of how Internet governance could promote democratic principles. However, Facebook reneged on its commitments and backed off on its agreement to allow users to vote. Chillingly, it shut down the political organisations, including the Facebook Users Against the New Terms of Service group, and prohibited the use of the company's name in any user group on the platform. I bring this story to the attention of the committee because there has been much reference to Facebook and free expression. I know from ten years ago the company's view on free expression.

Thereafter, EPIC and a group of consumer privacy organisations in the US went to the FTC and laid a charge that the changes in the company's business practices violated US trade law and, specifically, were unfair and deceptive. We spent two years persuading the FTC to act on our complaint. We provided evidence, legal analysis and the blueprint for the remedies that the FTC announced in November 2011. Once again, we thought we had obtained a victory. The then chairman of the FTC pointed to the settlement with Facebook and stated the company would be held to account. When FTC commissioners appeared before the US Congress and in Europe, they pointed to the Facebook settlement as evidence that the US had effective protection for personal data. However, we almost immediately became aware of a problem, namely, that the FTC was unwilling to enforce its legal judgment.

In a related case, Google changed its business practice in violation of a consent order. We sued the FTC and stated that it must exercise its enforcement authority to protect users. The judge was sympathetic to our case but concluded that she did not have the authority to force the commission to take the action it should have taken in 2012.

We have spent many years trying to get the FTC to act against Facebook. During that time, complaints from many other consumer organisations and users have increased and include complaints about the use of personal data, the tracking of people who are not Facebook users and the tracking of Facebook users who are no longer on the platform. A request lodged by EPIC under the US Freedom of Information Act uncovered that 29,000 complaints were pending against the company. The FTC issued a judgment in June of this year against Facebook, accompanied by a historic fine of $5 billion. However, the FTC left Facebook's business practices in place and the users of the service at risk.

My message to the committee is simple: it must act. It must not wait ten years or one year to take action against this company. The terms of the GDPR must be enforced against Facebook and that should be done now. Facebook should be required to divest of WhatsApp not because of a scheme to break up big tech but, rather, because the company violated its commitments to protect the data of WhatsApp users as a condition of the acquisition. Until adequate legal safeguards are established, Facebook must be prohibited from engaging in political advertising. Its recently stated views on political advertising and the US First Amendment, which are not shared by US legal scholars, are reckless and irresponsible. Advertising revenue from political candidates should instead flow to traditional media organisations, which would help to support independent journalism.

Deputy Richard Bruton

I congratulate the committee members and other parliamentarians on this initiative. The background to the regulatory environment in Ireland is very much set in our broad plan on online safety. It involves several Departments, including the Departments of Communications, Climate Action and Environment; Justice and Equality; Health; the Taoiseach; Children and Youth Affairs; and Education and Skills. I am a former Minister of the latter Department, which provides strong online support mechanisms for students and schools. The plan has a broad base. The principles of what we are trying to do are clear and well set out in the committee's documentation. They include transparency, accountability and protecting citizens while also respecting freedom of expression.

The devil is in the detail when it comes to regulation. There are several main regulatory developments under way in Ireland; I could name at least six. The regulation of harmful online content, to which I will return, is being pursued by my Department. The Department of Justice and Equality is facilitating a consultation on the regulation of hate speech and hate crime. Regulation of the transparency of political advertising has been developed in the context of legislation for an electoral commission that is being drafted and that will deal with the regulation of elections, including funding, which is another element. Private Members' legislation supported by the Government has proposed the creation of new criminal offences relating to images displayed without consent.

Stronger protections relating to cybersecurity and cyberattacks are also relevant to my area. The approach we are taking to online safety, for which I am directly responsible, is not dissimilar to that being taken in Australia. We propose to define harmful content, require companies to have a code of practice and put an online safety commissioner in place to oversee the delivery of those codes of practice. That online safety commissioner would receive third party complaints and could, on his or her own initiative, require takedown or notify companies that their code was inadequate. Non-compliance with the directions of an online safety commissioner would be a crime and the commissioner could publish details of non-co-operation. That legislation is in development and we have had consultations on it, as Ms Dixon indicated.

Consultations are also ongoing on the area of hate speech. There are issues around that and the characteristics of certain groups that have been defined. Some argue that the bar is very high under the existing incitement to hatred legislation. The definition of "hatred" in that legislation stipulates that an intention or likelihood to stir up hatred must be demonstrated. We are also considering whether such laws adequately deal with online content. Those issues are being developed.

This week, the Government announced the initiation of legislation to deal with political advertising, which will define political advertising, require its clear identification and labelling, and require the disclosure of any targeting or engagement metrics being used. There is quite a bit of legislative initiative under development. We also need to educate people and equip them with the right tools to help them be discerning in their use of the Internet. That balance is very strong.

There is no doubt that the power of the Internet is accelerating. When I was Minister for Jobs, Enterprise and Innovation, we discussed aspects such as social, mobile, the cloud, and big data. It has now moved on to the Internet of things and AI. Principles of operation will be embedded within those areas, which should be subject to regulation. This is a very tricky area for governments and it is absolutely essential that we work together across countries. Media and politics are at the front line of this invasiveness, but it will start to move into other areas of our lives. We need to decide the principles underpinning the roles of artificial intelligence, big data, micro targeting and so on. This is a challenging area and this initiative is worthwhile in bringing countries together to be a part of this group.

I thank the Minister. I call the representative from Australia.

Ms Carol Brown

My first question is for Mr. Cesarini. Does he have any communication with Facebook as part of his fact-checking projects?

Mr. Paolo Cesarini

The digital service infrastructure that will bear the name, European Digital Media Observatory, must be understood as an independent, academic-driven structure that will exercise oversight of the social media platforms. This initiative is aimed at building up co-operation and constructive relations, particularly as regards access to the data that are necessary for analysing disinformation trends and threats, and better understanding the impact of policies on the phenomenon. As such, the platforms cannot be external to this co-operation.

Ms Carol Brown

Mr. Cesarini mentioned in his presentation that one would be able to click on an ad in this proposed library and see where it is coming from. We heard evidence earlier that when an ad is fact-checked, there is no mechanism for getting back to people who have viewed something that may contain lies or disinformation. The fact check does not go back to the people who have already viewed that ad. Is there an avenue to request or put pressure on Facebook to take that very simple step? It is really extraordinary that it has not done so.

Mr. Paolo Cesarini

I understand Ms Brown's question. This should apply not only to sponsored content or political ads, but to all information that has been fact-checked and found to be false. This is not a question of public authorities - much less private companies - exercising censorship; it is a question of duly informing consumers. This mechanism should raise users' awareness of the type of content to which they have been exposed. Consumer empowerment is part of the fourth pillar of the code, which is now being implemented. Priority was given to other parts of the code during the first months of this year as they related more directly to the integrity of the elections. Those include the fight against bots and fake accounts and other malicious or co-ordinated actions. For the first time, a transparency space will be created for political ads, where the question is not the veracity of the ads but who is paying for them, how much they are paying, and what the targeting criteria are. It will also monitor and track advertising revenues to prevent bad actors from monetising disinformation.

Did Ms Brown want to come back in?

Ms Carol Brown

I wanted to ask Mr. Rotenberg a question. In his presentation, he gave a snapshot of what has happened with the various decisions and rulings that have been handed down, which Facebook seems to have simply ignored and which those in a position to enforce them have not enforced. Based on the discussions I have been having over the past few days, that view seems to be shared by many. Mr. Rotenberg spoke of his own recommendations. Does he think the business model needs to be changed?

Mr. Marc Rotenberg

I certainly think the business model needs to be changed. Facebook should be prohibited from using the platform for political advertising today. That is a concrete action that is supported by plenty of evidence. Other companies are already doing it.

I will make a further point, in partial response to Ms Brown's first question to Mr. Cesarini. A good friend of mine, the former European Data Protection Supervisor, Giovanni Buttarelli, who recently passed away, addressed the issue of election integrity last spring. He made a very important point. He said that, in his view, transparency and content management will not be sufficient to solve this problem. This is about the collection and use of personal data. If our aim is to safeguard the integrity of elections and to limit fake news, we must enforce these privacy obligations. Transparency will not be a substitute for doing so. Mr. Buttarelli was exactly correct on that.

I will bring in Ms Pentus-Rosimannus, who is from Estonia.

Ms Keit Pentus-Rosimannus

I thank the Chairman for all the introductions. The urgent need to forbid Facebook from accepting political advertisements has been mentioned. It has been suggested that there is a need for regulation in this area. I wonder how it would be possible to make such a distinction. If we say that political advertisements are not allowed on Facebook, should we proceed to say the same in respect of television and newspapers? How can this distinction be made? As lawmakers, how can we forbid political advertisements in one case through regulation, as opposed to leaving it to the platforms' own self-regulation?

Mr. Marc Rotenberg

I can speak to our position in the United States. By tradition, we regulated political advertisements. We required people in print and broadcast media to identify the source of their advertising. Facebook quite brazenly claimed it was unnecessary for Internet companies to be bound by the same obligations that apply to print and broadcast media. It removed itself from what traditional journalists and news organisations were required to do. I believe this accelerated the trend towards fake news and disinformation. My view at this point is that the time has long since passed for Facebook to come before this committee to say what type of regulation it thinks it will find acceptable. There is enough reason now to say this is one place where we no longer need political advertising, which is a source of significant revenue. It could help to support independent media, which needs the support.

Ms Keit Pentus-Rosimannus

My understanding is that there is no need to have a new regulation in this area. During a previous session, we heard from representatives of Twitter, which has decided - without any new regulation - that it will no longer allow political advertisements to run.

I would like to put some questions to Mr. Cesarini. Before the European Parliament elections, the EU worked with online platforms on the basis of the code of practice. How can that be evolved? What is the conclusion? Shall we continue with a similar formula? What were the main problems with the code of practice?

Mr. Paolo Cesarini

I will express my personal view.

Ms Keit Pentus-Rosimannus

Sure.

Mr. Paolo Cesarini

I think the code of practice was a necessary step. It remains to be seen whether it is sufficient. We are in the process of evaluation. I cannot anticipate the conclusions, which will be a matter for the next Commission; the current Commission is carrying out caretaking duties and is unable to take political decisions on the next step forward. That moment will come very soon, early next year. My view is that today we are in a better place than we were a year ago. It is clear that much more needs to be done as well. There are several areas we may need to consider. I will not mention them all. As we are focusing in today's debate on political advertising, I remind the committee that the devil lies in the details. I refer, for example, to the definition of "political advertising". What would the ban cover? What about sponsored content that communicates on social, political and economic issues that do not come within electoral contests and, therefore, escape the definition? We are well aware that such content can influence and shape public opinion artificially and in a much more pernicious way in the long term. These complex questions need to be considered in their totality.

Ms Keit Pentus-Rosimannus

I agree with Mr. Cesarini. If I am correct, the rapid alert system that was put in place before the European Parliament elections was never triggered during the campaigning period. Was it not needed because there were no disinformation campaigns? Does Mr. Cesarini envisage that we will continue with a rapid alert system that will alert us if something bad is happening on social media?

Mr. Paolo Cesarini

The rapid alert system was put in place in March of this year, less than two months before the European Parliament elections took place. It had the merit of creating links and connecting dots that had been completely separate and did not communicate with one another at the level of the different member states. Each member state has a different authority that deals with issues concerning electoral integrity and disinformation attacks. While it is true that the rapid alert system has issued just one alert, an increasing flow of information is being exchanged. This demonstrates the usefulness of the tool. It certainly needs to be developed, especially in terms of agreeing common methodologies, discussing the thresholds for triggering an alert and having better ways of co-operating with platforms, particularly when disinformation leads to investigations into groups that co-ordinate their behaviour and, therefore, require the intervention of specialised authorities within the structures of member states.

I will move on to Mr. Packalén, who is from Finland.

Mr. Tom Packalén

I would like to ask Mr. Cesarini about the work of the European Commission. Fact-checking is an important part of tackling the growing problem of false media and fake news on the Internet. It is difficult to determine when something is clearly fake news and where the limit should be drawn on what we should somehow verify. There is no one truth in the world when it comes to difficult and controversial questions such as climate change. It is very difficult to say what is fact and what is not. I ask Mr. Cesarini to speak about what the SOMA project is doing. How are the things it is working on chosen? I would like to open up this question. How is the project seen in the European Commission?

Mr. Paolo Cesarini

I agree fully that fact-checking is not a silver bullet. Nevertheless, there is a need to create more clarity about the trustworthiness of the information space within which we access news on a daily basis. Fact checkers can make an important contribution that has to remain independent from any public interference. The initiative must come from the media sector. The SOMA project is helping fact-checking organisations that have been growing over the past couple of years, having entered this newly emerging market, to work together to avoid duplication, to learn from one another and to develop fact-checking in a proper way. Mr. Packalén's question raises the important issue of where to draw the limits between news and views, or between what is false and what is real. It would be a dangerous move to concentrate on the idea of regulating content. We have to focus much more of our attention on detecting, analysing, preventing and, where necessary, sanctioning online behaviours that are systematically directed towards the amplification of certain stories and narratives and that use the vulnerabilities that exist in the current digital media ecosystem to mislead the users of such media by making them believe that a certain story has popular support when, in fact, it does not. We need to expose the authors and vectors that have been helping this manipulation to happen.

In other words, we should be much more focused when we talk about regulation and much more concerned about conduct than content, although the content fact checkers have flagged and the analysis they have carried out are important in order to provide leads to identify the kinds of conduct that could be reprehensible in a regulatory framework.

Mr. Tom Packalén

I thank Mr. Cesarini. Those are very good answers. How much co-operation does DG CONNECT have with these big companies such as Google and Facebook? Are they trying to solve this problem with DG CONNECT? Is there communication? If so, how much?

Mr. Paolo Cesarini

There is an arm's-length relationship. The code of practice is their own code of practice; the Commission has acted as a facilitator. Without the intervention of the Commission, the code of practice probably would not be there. However, the implementation of these principles remains entirely within the responsibility of the signatories to the code. The Commission has the very precise role of an independent oversighter - I do not know whether that word exists - to exercise oversight over the actions taken and then to take the appropriate steps on its own, using the powers that can be used in this tricky field, in order to ensure that the objectives that underpin the code are actually achieved.

Ms Nino Goguadze

Ms Dixon mentioned in her presentation that there is active co-operation between data protection agencies, that she has a permanent platform of co-operation and that they meet regularly. Could she explain the level of co-operation she has with big social media companies?

Ms Helen Dixon

Regarding the co-operation between data protection authorities, we have excellent fora but there is huge room for improvement. For example, the general data protection regulation provides for something new called joint enforcement operations. We have not really got them off the ground yet in the EU in terms of data protection authorities lending one another resources that can be authorised to work on investigations, so there is lots of room for improvement there. Regarding co-operation with the big platforms, the relationship is one of regulator to regulated entities. The regulated entities are obliged under the GDPR to co-operate with investigations conducted by the data protection authority. To date, the 21 big tech organisations in respect of which we have large-scale investigations open are engaging and co-operating. In equal measure, they are challenging at every turn and seeking constant clarifications on due process. They are obliged under various measures under the GDPR, for example, to conduct data protection impact assessments in certain circumstances and to consult with us as a data protection authority where they identify high risks they can mitigate and so on. Again, that form of co-operation with, or submission to, the regulator is already in effect. What remains to be seen is how the investigations we currently have open will conclude and whether there will ultimately be compliance with the outcomes of those investigations or whether they will be subject to lengthy challenge and so on. The big question of whether we will be able in the near term to drive the kinds of outcomes we want is still open and awaits us as a data protection authority to put down the first final decisions in a number of cases.

Ms Nino Goguadze

Are those big companies flexible and open to co-operating with the Data Protection Commission on preventative measures? Preventative measures are very important; sometimes it is much more important to prevent new harmful actions on social networks than to respond to them afterwards. To what extent are they flexible, and are they willing at all to co-operate actively on effective measures?

Ms Helen Dixon

On practical measures, did Ms Goguadze say?

Ms Nino Goguadze

I mean preventative measures.

Ms Helen Dixon

I think Ms Goguadze is well familiar with the fact that the GDPR is a high-level, technology-neutral, principles-based law. From our point of view, it is a very good platform from which we can regulate. All organisations, including the platforms, despite how large they are and the resources they have, struggle to understand with any degree of certainty how these principles should be applied in specific scenarios. For example, the GDPR for the first time in the EU calls out that children merit specific protections in a data protection context, but it really gives us no clues as to what those specific protections should look like. I mentioned earlier the challenge of even knowing how to identify children on platforms without limiting the rights of adult users or making platforms inaccessible. It is a challenging question. We find when we engage in exercises such as the consultation we have run on protections for children's data that the platforms are willing to come forward, make submissions and engage in ideas to ultimately find solutions that work across the board and drive up the levels of protection.

Ms Nino Goguadze

Is my time up?

Ms Goguadze has 30 seconds.

Ms Nino Goguadze

I will ask one last question. I asked it during our morning session and I will ask Ms Dixon the same question. Some items of information collected by big social platforms are provided by consumers themselves. I believe that in many cases people are not aware when they provide and share information of the possible usage of that information in the future. I am very much interested in Ms Dixon's opinion on this. What does she think? Should social platforms and the big social media companies be responsible for informing people before they share their personal data on a network?

Ms Helen Dixon

Probably the first large-scale investigation we will conclude under the GDPR is one into the principle of transparency involving one of the large platforms. We will shortly make a decision spelling out in detail what compliance with the transparency obligations under Articles 12 to 14, inclusive, of the GDPR should look like in that context, but it is very clear that users are typically unaware. For example, some of the large platforms do have capabilities for users to opt out completely of personalised ad serving, but most users are not aware of this. There are also patterns in operation that nudge users in certain directions. Aside from the hard enforcement cases we will take, we also recently published guidance on, for example, the issue of how users are nudged to make choices that are perhaps more privacy-invasive than the choices they might otherwise make if they were more aware. There is a role for us as a regulatory authority in regulating the platforms as well as driving awareness among users. It is an uphill battle, though, given the scale of what users face.

I thank Ms Dixon. She put very well some of the concerns a number of people have articulated about the business model when she said in her testimony, "All of this may happen without the user being aware his or her own data are being deployed to reinforce the individual's existing viewpoint rather than the individual being in a position to take an objectively formed stance based on an understanding of both sides of an issue." In other words, this is having the consequence of an increasingly self-reinforcing polarisation. Ms Dixon went on to say her office is looking at "whether [the online behavioural advertising system] is in compliance with the GDPR, especially in respect of lawfulness and transparency to users". In coming to conclusions on this, is any decision the commission makes binding on all of Europe? Is the decision the commission has to make an all-Europe one? Ms Dixon made the point that the GDPR is a very good, broad, principles-based regulation. Does it give her the powers to address some of these issues if she feels there is a real lawfulness or transparency issue in the consequences of this characteristic of the platforms?

Ms Helen Dixon

Any decision we will ultimately make is binding only on the data controller, against which we make the decision. However, I did mention in response to the last question that there is a demand from controllers of all types for more certainty as to what the correct objective standard in application of the principles looks like.

We anticipate that any of the decisions we make in these larger scale cross-border investigations will serve as a precedent and will be followed by others, not least because they will wish to avoid enforcement action subsequently being taken against them on similar issues. It is important to be aware that when we are applying the principles to something like online behavioural advertising, we must go back to what Mr. Rotenberg was saying about underlying business models. The GDPR is not set up to tackle business models per se but to apply principles to data processing operations. When we come to look at something like advertising technology or online behavioural advertising, there is, therefore, a complexity in that we have to target multiple actors. I mentioned earlier that for that reason we are looking at publishers at the front end that start the data collection from users. It is when we first click on a website that the tracking technologies such as pixels, cookies and social plug-ins start the data collection that ultimately ends up categorising us for the purposes of sponsored stories or ad serving. We are looking at the advertisement exchanges, the real-time bidding system, the front-end publishers and the advertisement brokers that play an important part in all of this in combining offline and online sources of data. We will rigorously apply the principles against those data processing operations. When we conclude we will then have to see if that adds up to a changing of the underlying business model. The jury is out on that until we conclude.

I have a question for Mr. Cesarini. On that issue, if the GDPR does not sufficiently address some of the underlying principles or approach in the business model, is the European Commission looking at how the e-commerce directive might provide further support if there is a desire within the European Parliament and Council for that sort of approach? This spring the Spanish data protection agency temporarily banned micro targeting in political advertising, although the decision has been withdrawn pending an appeal. From the European Commission's perspective, has any individual data protection regulator taken action to initiate a ban on micro targeting, whether in political advertising or another sector? Is there a fundamental difficulty with doing that?

I ask Mr. Cesarini to keep his answer as short as he possibly can because we are conscious of time.

Mr. Paolo Cesarini

The two issues are separate. What can be done in the perspective of the review of the e-commerce directive points more towards the issue of fairness in a commercial relationship than to the legitimate use of personal data for micro targeting purposes, which is within the realm of the GDPR. One point I would like to stress is that micro targeting is only one of the possible vulnerabilities of the ecosystem. When we talk about disinformation, clickbait, which works by directing traffic to a malicious website, operates in a different way. Micro targeting is typical of sponsored content and that is certainly an issue that needs to be addressed. Then there are other issues that are not necessarily data-driven or are not so linked to the use of personal data. For instance, the manipulation of media for amplification purposes, where bot accounts behave in a co-ordinated manner, and the trade of engagement indices or signals on the black market are not necessarily linked to the manipulation of personal data. Another issue concerning personal data may arise in the area of algorithmic bias: when a recommendation system operates to provide the user with certain recommendations, we again have an issue with the usage of personal data.

I will be as succinct as possible. I have two questions, one for the Minister, Deputy Bruton, on the Irish environment and the new proposals, and one for the panel as a whole on regulatory enforcement. I welcomed the announcements this week and the Minister alluded to a number of other proposed measures. I look forward to reviewing the details of those when they are published. One of the findings of the committee in its discussions today and in some of the preliminary discussions yesterday has been that we should not delay because, as Mr. Rotenberg said, delay only suits the platforms. There is an urgency about all of this. Our Canadian colleagues told us yesterday that we must not let the perfect be the enemy of the good but must plough on and take steps. That point was well made. We may not take the final step but we must take the first steps and every step forward is progress. In that vein, on the proposals the Minister mentioned, two Private Members' Bills that deal with these matters are before the committee. These are the Digital Safety Commissioner Bill 2017 and the Online Advertising and Social Media (Transparency) Bill 2017, which I proposed. Both of these Bills have undergone detailed scrutiny on First and Second Stage in the Dáil. It would be prudent to consider these as vehicles to make progress, even if they are superseded by Government legislation. That would certainly fit the theme we discussed of making rapid progress and putting something in place rather than letting more elections take place in the absence of action. Perhaps the Minister will respond before I put a question to the panel.

In developing Government legislation on an online safety commissioner, we will adopt the principles set out in some of the Private Members' Bills, particularly provisions defining harmful content, which listed issues such as cyberbullying, creating suicidal intent and so on. We will look at how we accommodate those. Nonetheless, getting legislation through the Office of the Parliamentary Counsel requires the Attorney General's sanction for the different elements. A number of elements of these Private Members' Bills run into significant problems. While we can transpose significant blocks of these Bills into our legislation, we also have to change significant blocks of them. The difficulty is that I have to get someone to stamp the legislation. I am exerting maximum pressure to have exactly what the Deputy says done. He is correct that the best is the enemy of the good and there will always be a reason to have another legal assessment done of this or that aspect of legislation. I am trying to avoid that scenario and trying to hit the end of the year deadline I have set.

The Minister mentioned the Digital Safety Commissioner Bill 2017. Does the same logic apply to the Online Advertising and Social Media (Transparency) Bill 2017? Is the Minister referring to both Bills?

The Online Advertising and Social Media (Transparency) Bill 2017 will be handled by the Minister for Housing, Planning and Local Government. It is part of the electoral legislation. I am sure the Minister will seek to adopt the principles set out in the Bill. The Department has articulated clearly what its definition of political purpose is and it has started to list what the informational requirements will be. Those requirements will have to be set out in legislative form so I expect there is a bit of work left in that.

I thank the Minister. I have a brief question for the panel. We are attempting to formulate regulation and legislation and while some countries have had more success than others in that regard, we are all moving in that direction. However, enforcement is the key to legislation and regulatory activity in any sphere. The Data Protection Commissioner has had some success in enforcement and has acquired some additional powers. One of the difficulties, however, and one to which the Minister alluded in his opening remarks, is that this area tends to fall across a number of different Departments and agencies. What is the most appropriate model? Is it a multi-agency response? Is it a dedicated agency being tasked with online regulation? Is it a first on the scene type scenario? What is the best practice or what have the witnesses seen work best?

Mr. Marc Rotenberg

In addition to my work at the Electronic Privacy Information Center, I have also been a professor of law at Georgetown University for 30 years. I have taught privacy law and have written two different case books. All roads lead to the GDPR. I say this for three reasons. First, the GDPR is not a set of principles but a set of rights and responsibilities associated with the collection and use of personal data. When companies choose to collect personal data, they should be held to account. Second, the decision in the Schrems case of 2015 makes clear that while the co-ordinated enforcement anticipated under the GDPR is important, individual data protection authorities, DPAs, have their own authority to enforce the provisions of the Charter, which means individual DPAs do not need to wait for a co-ordinated response to bring an enforcement action.

My final point is a matter of law. The GDPR contains the authority within its text to enforce the other laws of the European Union. This is largely about the misuse in the collection and use of personal data for micro targeting. That problem can be addressed with the GDPR but it will take an urgent response and not a long-term game plan.

Dr. Janil Puthucheary

I will address a few questions to the Minister, Deputy Bruton, if I may. Ireland is no stranger to attempts at foreign interference, as was seen in the May 2018 referendum when some civil society organisations tracked various inauthentic accounts and suggested, on the basis of time zones, that up to 70% of them came from across the Atlantic. How has the experience of foreign interference altered or informed the approach to designing the online safety legislation? Are there concerns that once the legislation is passed, the online safety commissioner is in place and the various processes are working, those processes might be used by foreign actors to interfere with domestic matters?

In the previous session I was quite struck by the assertion from Dr. Bickert from Facebook that even if this proposed Act, the safety commission and the processes provide directives and measures for content to be removed or adjusted, Facebook would test these directions against its internal policies. Does the Minister have a view on what this means for the operation of that legislation? We have had some arguments as to whether the fundamental problem is the underlying business model of the social platforms. Does the Minister have a view on the matter?

There is no doubt there was an attempt to interfere in some of the referenda and in those instances it was a voluntary take-down policy that was adopted by platforms rather than a legal provision. I spelled out earlier that aside from the requirement to have transparency around political advertising, the reporting of targeting and so on, much broader legislation will cover the funding of referenda in particular. We have ceilings on funding for individual candidates and parties but funding from overseas is not regulated. Foreign donations to Irish political organisations are banned, but a lacuna remains that will need to be addressed.

The legislation I am introducing is limited to harmful content such as cyberbullying of individuals rather than the addressing of political fake news or distortion of views. We are not designing it in that way and we are defining harmful content in quite a narrow way. We are working on how such definitions can evolve over time in a way that is legally robust. That will have to come back in some form to the Legislature and an online safety commissioner cannot create that legislation. We must devise a vehicle for that.

It is a fact that any online platform operating in Ireland must respect Irish law and would find itself with enforcement action against it in an Irish court if it failed to do so. There is no doubt that these Acts can be enforced.

Dr. Janil Puthucheary

To follow up, one of our concerns in Singapore and in several other neighbouring countries in Asia is that many of the issues not overtly labelled as political can be easily exploited for political gain. That is why I asked the second question as to whether the Minister had concerns around issues covered in the proposals for this online safety effort. From our perspective, such issues may well be exploited for political gain through foreign interference. Does the Minister have any comment to make?

We have not encountered that, but I am interested to hear about those concerns. We have been working on a narrower approach, creating legislation that would require companies to have codes and an online commissioner to vet those codes. There would be a power to serve notice on platforms where a code is deficient in some respect. An offence would arise where there was a failure to comply with such notices. That is the structure we are seeking to put in place. We have not identified in that work foreign interference as being a feature of this legislation, although it would be a feature of some of the more politically orientated legislation.

Lord Puttnam

My question falls into the category of political priorities and is addressed to Ms Dixon and the Minister. I am looking at this from the perspective of somebody who is worried that the big tech companies are beginning to see themselves as nation states, in effect. That is not an overstatement. One of the very few successes we have had in the United Kingdom is getting adequate resources, staffing and money to the authorities to take on their responsibilities. If the UK Information Commissioner, Ms Elizabeth Denham, were here today I am sure she would confirm that. It was quite a struggle within the British Parliament but I believe she feels she is adequately looked after.

I am concerned that a number of nations are caught in a dangerous tension between going out of their way to attract investment from big tech companies and not looking at the investment required to regulate those companies they have actively encouraged to come to this or that country. It is a real worry. Does Ms Dixon absolutely believe her office has the resources to take on the responsibilities that, in effect, it is taking on for the European Union in regulating the companies already in Dublin?

Ms Helen Dixon

Going back five years we started from a very low base in building towards the GDPR and exploiting all the strong enforcement powers we have been given. We started with approximately 27 staff when I came on board at the end of 2014 and we now have over 140. The composition of the staff has changed significantly in that we have hired top class lawyers, litigators, technology experts and investigators. We are in a radically different position. We have just had the budget for 2020 announced last month. I made a submission on the resources I anticipated I would need in 2020 as part of that budgetary process. Ultimately, we secured significantly less than what we sought for 2020. However, we secured increased resources for next year that will allow me to recruit an additional 30 to 35 experts to the staff. I should say that in that context, even if 140 sounds like a relatively modest number, we are among the top tier of highly resourced data protection authorities both in the EU and globally.

I agree with the statement. We do not have enough and that is why we sought more. We see the scale of the challenge more clearly than anybody else can in terms of what we face. All of these hard enforcement cases we are trying to drive in the investigations we have open are extremely labour-intensive in terms of the process we must follow. I mentioned that we are being challenged and questioned at every turn. We need more resources, and this area of regulation must see continued investment.

Lord Puttnam

I can typify this as a tension between jobs and democracy and my concern is that democracy is in danger all the time of losing out to the jobs argument. I cannot think of a better invitation to populism than that. The Minister referred to draft legislation or the development of such legislation. An election will certainly come next year so is there a chance the legislation will be put in place? Our experience in the United Kingdom is that we badly dropped the ball. We are having an election right now with nothing like the recommendations the British Electoral Commission suggested are needed to have a fair and free election.

My concern is this. If the results of the UK election are contested, that would set up a time bomb in democracies throughout Europe, which would be very difficult to control.

By its nature, regulation always tends to be catching up with developments that have moved ahead of it. As for artificial intelligence, I read a book by Mr. Jamie Susskind, a British writer who has written about the processes embedded in artificial intelligence that will determine things like our right to get insurance and jobs. The political and governance system, in Ireland and internationally, has not yet caught up with this and struck a balance between freedom of expression and regulation. These are complex issues. The problem is not a lack of willingness to find solutions. I have found that even the narrow aspect I am trying to deal with, the definition of harmful content, is quite tricky to regulate in a way that balances freedom of expression with the need to protect citizens, particularly vulnerable citizens, from its abuse. It is not as simple as governments not bringing resources to bear. These are genuinely tricky issues. We need international co-operation in the design of a robust and enforceable response. While it is important to ask whether sufficient resources are being devoted to the enforcement of existing legislation, many of these challenges are far more profound than that. The regulators have not quite caught up and learned how to design these systems.

Lord Puttnam

I understand everything the Minister says and I appreciate it. Does he think there is an understanding within the Government that these are existential issues for democracy? These are not trade issues or even harm issues. They are existential-----

The Government is absolutely aware of that. That is why a lot of work has gone into this and it is being led from the Taoiseach's office. There is no doubt that the importance of this is recognised. Equally, we recognise that what we do is interdependent with what colleague nations are doing. We need to work together in designing systems that will be enforceable, robust and consistent across geographies. As Lord Puttnam rightly said, these are not nation state companies. These are companies for whom we, to a large extent, are the product and they transcend boundaries. That is the greater challenge, not the issue of finding resources to enforce well designed policy instruments.

Mr. David Cicilline

I thank our panellists for this very useful discussion. Mr. Rotenberg has argued that the $5 billion settlement concluded by the Federal Trade Commission, FTC, with Facebook was insufficient, saying it was too little, too late. I have raised similar concerns, saying that this fine was essentially a speeding ticket that will not help consumers and that the remedy in this case will not serve as an effective form of deterrence. Can Mr. Rotenberg describe for us what an effective remedy in that case might have looked like?

Mr. Marc Rotenberg

I thank the Representative. In our statement to the FTC prior to the judgment, we set out several proposals. We said that effective data protection standards should be imposed on the company. The FTC failed to do that. Second, we said that WhatsApp and Instagram should be divested from the company, not because of any grand political philosophy but because Facebook violated its commitments to protect those users' data. We had additional recommendations with regard to civil rights issues, which are widely discussed in the United States. These proposals were not taken on board and, as the Representative says, the settlement is a speeding ticket.

Mr. David Cicilline

I thank the witness. Some scholars argue that Facebook's ability to collect such large swathes of data is largely due to its market power. To what extent has Facebook's market dominance permitted the development of a comprehensive surveillance infrastructure?

Mr. Marc Rotenberg

I would say that Facebook and Google together are probably unparalleled in the amount of information they collect on individuals. A key point, which I believe was made on an earlier panel, is that the vast majority of this information is not collected directly from the user. This is particularly true with Facebook. Users are tracked across the Internet, which makes notice, consent and transparency mechanisms effectively useless.

Mr. David Cicilline

Of great concern to many, including me, is that Facebook has become a dominant communications network while running a behavioural advertisement-based business model. It seems increasingly clear that this combination is lethal for our democracy. There was a reason AT&T was never allowed to surveil the conversations of telephone users to sell them advertisements. What problems does Mr. Rotenberg see with the behavioural advertisement business model? What does he think should be done about these problems? Rather than improving transparency in the online ecosystem or policing the use of data collection, should there be an outright ban on behavioural advertising?

Mr. Marc Rotenberg

Yes, there should be an outright ban on behavioural advertising. People fail to understand that the behavioural advertising model of Facebook has taken advertising revenue away from contextual advertising, the advertising that supports the editorial content of genuine news organisations. This is a zero-sum game and journalists are losing it.

Mr. David Cicilline

Turning to Ms Dixon, as I have mentioned previously, I have deep concerns that the enforcement policies for both consumer protection and antitrust regulations in the United States have not kept pace with the digital economy. The fact that the press has to rely for survival on the Facebook News tab and Mark Zuckerberg's views on the First Amendment shows how dire the situation really is. What is the European Commission doing to ensure that enforcement is effective and timely and that it stays ahead of the curve? What is Ms Dixon's response to concerns that delays in enforcement have entrenched the market power of firms that are dominant online? Does she have any recommendations for us on improving enforcement in the United States?

Ms Helen Dixon

Is the Representative asking me about the Irish Data Protection Commission, rather than the European Commission?

Mr. David Cicilline

Yes.

Ms Helen Dixon

There have not been delays in enforcement under the general data protection regulation, GDPR. This regime is 18 months old. As an individual data protection authority, we have handled 11,000 user complaints, many of which have been in respect of the platforms, where individuals have tried to exercise their rights. We have intervened and ensured this has happened. We have also prosecuted several companies in the last 12 months for e-privacy regulation infringements. We have 29 litigation cases in the Irish courts at the moment. Mr. Rotenberg referred to the Schrems I case earlier. We have brought forward what is referred to as the Schrems II case in the meantime. A hearing on the critical issue of transfers of EU personal data to the US took place this summer, and the judgment of the Court of Justice of the European Union is pending.

It is a mistake to say there has been no enforcement, but I think the Representative is referring to the fact that there has not yet been an outcome to the large-scale investigations into the big tech platforms which are currently under way on lawfulness, transparency, privacy by design and by default, and so on. Eighteen months is not a long time, and not all of the investigations have been open for 18 months. We must follow due process or we will not secure the outcome in the end. As these companies have market power and the resources to litigate forever, we have to make sure we follow due process and allow them a right to be heard. We conclude the legal analysis carefully by applying the principles of the GDPR to the scenarios at issue, and then we can hope to deliver the outcomes that the GDPR promises. That work is under way. We could not be working on it more diligently. The first set of decisions will start rolling out in the very near future.

Mr. David Cicilline

I thank Ms Dixon.

Chairman

Does Senator Higgins have a question? I must confine her to 30 seconds.

Senator Higgins

My question is to the Data Protection Commissioner. Are the special categories of personal data under Article 9 a tool her office could be using? How do they relate to observed data, that is, behavioural activity? How does one make sure that observed data, which relate to the Article 9 categories, are included? Is that a key tool which could be used in the regulation of micro-targeting?

My second question is directed to the whole panel. The business model has been talked about again and again. One of the key things we can do to disrupt a business model is to apply financial penalties. How important are financial penalties under the GDPR to disrupting the business model? Should a portion of fines collected under the GDPR be ring-fenced for digital empowerment or education?

Chairman

I apologise, as we do not have time for the whole panel to respond. Only 30 seconds are available.

Ms Helen Dixon

Article 9 is most certainly in scope in terms of the investigations I referenced earlier. There is no doubt that sensitive categories of personal data come into the online behavioural advertising model. On the question of fines, studies have been done. David Wright and Paul De Hert published a book on enforcing privacy in 2016, which looked at the effects of the new fining regime that applied to the UK Information Commissioner's Office from 2010. It concluded that the fines made no difference. We will be obliged to impose fines where we find infringements, so that is what will happen, but we expect that the corrective powers we apply, such as bans on processing and requirements to bring processing operations into compliance, will have the more significant effects.

Chairman

I thank all the witnesses for appearing before us this afternoon.

Sitting suspended at 2 p.m. and resumed at 2.46 p.m.