Joint Committee on Communications, Climate Action and Environment debate -
Thursday, 7 Nov 2019

Session 2: Industry Perspective

I welcome Dr. Monika Bickert, vice president of content policy at Facebook; Mr. Jim Balsillie, chair of the Centre for International Governance Innovation; Mr. Marco Pancini, director of public policy at YouTube EMEA Google; Ms Karen White, director of public policy at Twitter; and Mr. Ronan Costello, public policy manager at Twitter.

I advise witnesses that by virtue of section 17(2)(l) of the Defamation Act 2009, they are protected by absolute privilege in respect of their evidence to the committee. If they are directed by the committee to cease giving evidence on a particular matter and they continue to do so, they are entitled thereafter only to a qualified privilege in respect of that evidence. They are directed that only evidence connected with the subject matter of these proceedings is to be given and they are asked to respect the parliamentary practice to the effect that, where possible, they should not criticise nor make charges against any person, persons, or entity by name or in such a way as to make him, her or it identifiable. Any submissions or opening statements made to the committee will be published on the committee website after the meeting.

Members are reminded of the long-standing parliamentary practice to the effect that they should not comment on, criticise or make charges against a person outside the House or an official by name or in such a way as to make him or her identifiable.

I will give witnesses an indication when four minutes are up to let them know they have one minute left. We will start with Dr. Bickert.

Dr. Monika Bickert

I will not read my entire submission. I will just hit some high points. I thank the Chairman and the members of the Oireachtas Committee on Communications, Climate Action and Environment for inviting me to speak to it and the other International Grand Committee members today. I am the vice president of content policy at Facebook and based in our Menlo Park headquarters in California. I joined Facebook in 2012 after serving 11 years as a US public prosecutor and as regional legal adviser at the US embassy in Bangkok, Thailand. I now lead Facebook's global content policy team. My team's responsibilities include developing and enforcing the rules for how people can use our services. My remit also now includes work to further our company’s goal of facilitating appropriate regulation of content on social media platforms and the broader Internet.

Facebook welcomes governments and regulators taking a more active role in addressing harmful content online. Protecting the people who use our services is a top priority, to which we continue to dedicate a great deal of time and resources, but we do not believe any company should tackle these issues alone. This is why we work together with governments, civil society, experts and industry peers to develop rules for the Internet that encourage innovation and allow people the freedom to express themselves while protecting society from broader harms.

The aim of the committee’s session today is to advance international collaboration in the regulation of harmful content, hate speech and electoral interference. Facebook shares this goal and I am grateful for the opportunity to share our thoughts on how to meet it.

Freedom of expression is a core value of our company. Facebook exists to give people a way to connect and express themselves. At the same time, we want to make sure that people using our services are safe. That means we must make decisions every day about what is and is not acceptable among our community of 2.8 billion people. Some of these decisions are clear but many are nuanced and involve balancing competing principles like voice, dignity, safety and privacy. We work hard to get these decisions right and we have community standards that define what is acceptable content. Those standards are informed at every turn by relationships with hundreds of civil society groups and experts around the world.

We invest heavily in technical solutions to quickly identify potential violations of our rules. For example, more than 99% of the terror propaganda we remove from the site is content we identify ourselves using technical tools before anybody has reported it to us. We also have more than 10,000 people working around the clock to assess whether content violates our rules and to remove it if it does. We respond to the overwhelming majority of reports of potential violations within 24 hours. We publish transparency reports about how effectively we remove harmful content. These documents, which are publicly available, show how much content we are removing in each category and how successful we have been in trying to identify that content before it is reported to us. Nevertheless, we know that with such a diverse community, the lines we draw will never please everyone and we will never be perfect in enforcing these lines. To address these challenges, Facebook is creating an independent body called the oversight board to which people can appeal our decisions on content they have posted, but we know that this too will not solve all of the challenges we face. We believe that a more standardised approach would be beneficial. Regulation could set baselines for what is prohibited and guide the development of systems to better address harmful content.

In the area of elections, regulation could address important issues such as the definition of political advertising, who is allowed to run political advertisements and what steps those persons must take before doing so. Regulation could also address how political parties can and cannot use data to target political advertising. We believe that Facebook and companies like it are central to solving these complex challenges but it is clear that we cannot and ought not do this alone. In that spirit, we look forward to collaborating further with governments, civil society, industry and all of the people who hold a stake in creating rules for a safe and innovative Internet.

Mr. Jim Balsillie

I thank the committee for the opportunity to present to it today. The committee's leadership on issues related to data governance inspires many well beyond Ireland's borders. I am the retired chairman and co-CEO of Research In Motion, a mobile data services firm we scaled from an idea to $20 billion in sales. My expertise is the commercialisation of technology, specifically for multi-sided platform business model strategies and their network effects.

In the attached appendix, I list the six recommendations I made at the IGC meeting in Ottawa in May. Today, I will explain three foundational elements that underpin all of my recommendations because a stable, long-term solution to the current challenges lies in confronting them systemically.

The current business model is the root cause of the problems the committee is trying to address. Its toxicity is unrelenting. It is not a coding glitch that a legal patch will fix. Data at the micro-personal level gives technology unprecedented power and that is why data is not the new oil. It is the new plutonium - amazingly powerful, dangerous when it spreads, difficult to clean up and with serious consequences when improperly used. A business model that makes manipulation profitable is a foundational threat to markets and democracy. Democracy and markets only work when people can make free choices aligned with their interests, yet companies that monetise personal data are incentivised by and profit from undermining personal autonomy. Whistleblowers inside platform companies told us that "the dynamics of the attention economy are structurally set up to undermine the human will." That is why we need to outlaw the current business model and reintroduce responsible monetisation, such as subscription-based models. Strategic regulations are needed to cut off the head of this snake. Anything less means governments will be perpetually coping with its slithery consequences, turning policymaking into a losing game of regulatory whack-a-mole.

Silicon Valley’s business plans are not political programmes. The contemporary technology sector is an industry that celebrates engineering as an alternative form of governance. It distrusts the political process and disregards the public interest. Its concentration of power is owed to the features of the modern knowledge-based and data-driven economy that tip markets through its steep economies of scale, powerful economies of scope, pervasive information asymmetry and network externalities. Technology is not governance; it must be governed. The choice we face in 2019 is not between Facebook and China, which paradoxically borrow from each other the tools and tactics that encode their grip on power. The option we face is either a social choice mediated by democracy or social outcomes engineered by unbridled, unethical and unaccountable power.

A global digital stability board is needed to institutionalise co-ordinated responses and underwrite a progressive digital future. Cyberspace knows no natural border. The Cambridge Analytica scandal involved Canadian technology on a US platform paid for by Russian and US money to interfere in a British referendum over its future in the European Union. The business model that enabled this, nourished by discredited neoliberal policies, turns customers into products. If left unaddressed, it will render liberal democracy and free markets obsolete. The timing is urgent. In North America, record-setting lobbying expenditures by data-driven platforms resulted in chapter 19 of the United States-Mexico-Canada Agreement, which includes provisions that lock in the current advertising-driven business model and prevent lawmaker oversight of algorithms. The current US Administration is working to entrench these rules globally through the World Trade Organization negotiations on the trade-related aspects of e-commerce.

We have arrived at a new Bretton Woods moment where new or reformed rules of the road at the international level are needed. They should be rules that preserve an open global trading system yet, at the same time, respect a nation’s sovereign ability to regulate the data-driven economy's profound cross-cutting economic, security and social effects. I have proposed the creation of an organisation that would be akin to the Financial Stability Board that was created in the aftermath of the 2008 financial crisis. Industry can be part of the solution by acknowledging the toxicity of the current business model and migrating to responsible revenue generation for services.

Mr. Marco Pancini

I thank the committee for the opportunity to join its deliberations today. I lead YouTube's public policy work in Europe, the Middle East and Africa. I have over a decade of experience in online safety issues and I am centrally involved in our work to keep our users safe every day. This committee's work over the past year has addressed a range of critical topics, including privacy, misinformation, election integrity and more. In our testimony before this committee previously, we outlined Google's commitment to information quality and how continued collaboration improves the ways we all address harmful content online. Today, I will focus on YouTube's efforts and outline improvements we have made to address misinformation on that platform. I will also highlight opportunities for greater collaboration among companies, government and civil society to tackle this challenge.

YouTube is an open platform where anyone can upload a video and share it with the world. The platform's openness has helped to foster economic opportunity, community, learning and much more. Millions of creators around the world have connected with global audiences and many of them have built thriving businesses in the process. At the same time, YouTube has always had strict community guidelines to make clear what is and is not allowed on the platform. We design and update our systems and policies to meet the changing needs of both our users and society. Videos that violate those policies generate a fraction of 1% of total views on YouTube, and we are always working to decrease that number. In fact, over the last 18 months, we have reduced the number of views of videos that are later removed for violating our policies by 80%.

Our approach towards responsibility involves four Rs. We remove content that violates our policy as quickly as possible. We raise up authoritative voices when people are looking for news and information, especially during breaking news moments such as elections. We reduce the spread of borderline content and harmful misinformation. We set a higher bar for what channels can make money on our site by rewarding trusted, eligible creators. Over the past several years, we have used those four approaches to address misinformation.

While we remain vigilant against new threats, we are proud of the progress we have made. We have raised up quality content by, among other things, implementing two cornerstone products, namely, the top news shelf in YouTube search results and the breaking news shelf on the YouTube homepage. These products highlight authoritative sources of content and are now available in 40 countries. We have worked especially hard to raise up authoritative and useful information around elections. For example, earlier this year, we launched information panels in YouTube search results. When users were looking for information about official candidates running for seats in the European Parliament in May, we showed them authoritative information. We have continued our strict enforcement of YouTube’s policies against misleading information and impersonation. From September 2018 through to August 2019, YouTube removed more than 10.8 million channels for violation of our spam, misleading and scam policy, and more than 56,000 channels for violation of our impersonation policy. We have also undertaken a broad range of approaches to combat political influence operations, which we have regularly reported over the course of the past two years. For example, in September, we provided an update about disabling a network of 210 channels on YouTube when we discovered they behaved in a co-ordinated manner while uploading videos related to the ongoing protests in Hong Kong. We have also worked hard to reduce recommendations for content that is close to a policy line but does not violate it, including attempts to spread harmful misinformation. Thanks to changes we have made over the past year, this type of content is viewed over 50% less often as a result of recommendations than before in the United States. YouTube has begun experimenting with this change in other countries, including Ireland and the UK, and we will bring it to other European markets soon.

We know that this work is not done. That is why we continue to work with law enforcement, industry and third-party experts around the world to evolve our efforts.

I will conclude by discussing opportunities for greater collaboration. The EU code of practice on disinformation is an important foundation that we can all build on. Launched just over a year ago, the code was developed in light of work that we and others had been pursuing with experts and publishers around the world to elevate quality information and support news literacy. As part of the process, we have provided regular reports on our efforts to address disinformation and we have highlighted the work that we can do collectively in this regard.

We must continue to support collaborative research efforts. For instance, we have invested in research on the detection of synthetic media, often referred to as deep fakes, and have released data sets to help researchers around the world to improve audio and video detection. We have also made the data about our election advertising efforts available in the transparency report. This information is available to everyone, including governments, industry and experts. We can work together with this data to improve matters. We strongly believe that addressing harmful content online is a shared responsibility, which is why we are so committed to meetings and collaborations like these. We are committed to doing our part and we look forward to answering members' questions.

Ms Karen White

I thank the committee for the invitation to participate in today's session. I am the director of public policy for Twitter in Europe. I am joined by my colleague, Mr. Ronan Costello, public policy manager for Twitter in Europe. We are pleased to be here with the committee today for this session which aims to focus on how industry can be part of the solution. Twitter is committed to improving the collective health, openness and civility of the conversation on our platform. Our success is built and measured by how we help to encourage more healthy debate, conversations and critical thinking. Conversely, abuse, malicious automation and manipulation detract from it.

I will use this opportunity to briefly walk through three specific areas where Twitter has been doing critical work to prioritise online safety and election integrity. These include our investments in proactive technology to better enforce the Twitter rules, our policies on political advertising and synthetic or manipulated media, and our focus on state-backed information operations. I will also share some insights into the structural and operational changes Twitter has made since 2017 to protect conversations on the platform during elections while building partnerships that promote a better understanding of our online environment.

It would be instructive at this point to re-emphasise the public commitment made by our CEO, Jack Dorsey, in May 2018 to prioritise the health of public conversation on Twitter above all else. He recognised that the platform had come to be used in ways that were harmful and unforeseen and he said Twitter would hold itself accountable towards progress. Since then, we have leveraged a combination of policy, people and technology to yield positive results. It is our view that people who do not feel safe on Twitter should not be burdened to report to us, so we have significantly ramped up investment in proactive technology and tools to better tackle issues such as abuse, spam and automation, which detract from people having healthy experiences on our service.

More than 50% of the tweets we remove for abuse are surfaced proactively for human review by technology, rather than relying on reports. This is an increase from 20% last year. While we will strive to improve this further, it is a significant enforcement milestone and a positive indicator that our investment in technology is helping us to tackle abusive behaviour at scale. Figures released just last week in the latest edition of our biannual transparency report further outline the trends and progress we saw in the first half of this year. We have increased by 105% our rate of action on violating content. We took action on 133% more accounts for violation of our hateful conduct policy. We took action on 68% more accounts for violations of our policies on abuse. Taken as a whole, the progress I have summarised reflects Twitter's mission and commitment to enhance the health of the public conversation on our service.

The scale, speed and targeting effects of online political advertising have been widely discussed lately. Last Wednesday, our CEO announced that Twitter had made a decision to stop all political advertising. This policy is global, includes all candidate and issue advertisements and will come into effect in the very near future. We continue to update our rules and policies in response to evolving threats and technological challenges.

We share the public concern regarding the use of disinformation campaigns that rely upon the use of manipulated and synthetic media, commonly referred to as deepfakes.

On Monday, 21 October, we publicly announced that we have been working on a policy to comprehensively address synthetic and manipulated media on Twitter. In the coming weeks, we plan to open a feedback period to get input on this from the public. We want to listen and consider a variety of viewpoints in our policy development process and we want to be transparent about our approach and values.

We appreciate that some of the threats on our platform can be urgent, and our expertise and analyses can be bolstered by partnerships with external researchers, journalists and academics. One area where we have unlocked these valuable partnerships to help provide more transparency on our platform is in the area of state-backed information operations. For more than a year, we have been publicly disclosing comprehensive datasets of tweets and related media information we identify on the platform that we have attributed to malicious state actors. We launched this initiative to empower academic and public understanding of these co-ordinated campaigns around the world and to enable third-party expert analysis of these threats and tactics. Using our archive, these researchers have conducted their own investigations and publicly shared their insights and independent analyses.

Since January 2017, we have launched numerous election-related product and policy changes, expanded our enforcement operations and strengthened our team structure. We further expanded our enforcement capabilities for global elections by creating a dedicated reporting feature to allow users to report content that undermines the process of registering to vote or engaging in the electoral process. This reporting feature was first used this year for the Indian and European Parliament elections.

The challenges we face as an online society are complex, constantly evolving, and often without precedent. Industry and Twitter cannot address these issues alone. Nor is our industry monolithic in its approach to these issues. Each of us has different services, varying business models, and often complementary but distinct principles. This should be recognised as we continue our engagement. Every stakeholder in this conversation has a role to play. We propose a whole-of-society approach to improving the health of online conversation and citizenship. We all need and deserve a thoughtful approach and long-term perspective in this discussion, and Twitter very much welcomes the opportunity to participate.

We will start with Australia and I ask the witnesses to try to keep their answers as short as possible to give the parliamentarians time to question them.

Mr. Milton Dick

My first question is for Dr. Bickert. My colleague, Ms Carol Brown, and I, are Members of the Australian Parliament. In Dr. Bickert's written submission, she states Facebook does not believe:

... that a private company should be determining for the world what is true or false in a politician’s statement. This doesn’t mean that politicians can say whatever they want on Facebook.

I appreciate Dr. Bickert will not know every nation's issues to do with her company and I am happy to speak further offline to her about the matter I am about to raise. In our recent election in May this year, in which I was a successful candidate, a campaign was waged anonymously through Dr. Bickert's platform against the party I represent with regard to a death tax and inheritance tax. This campaign was false and misleading. Advertisements were posted and anonymous statements were made. Despite our nation's complaints to Facebook executives, no action was taken. Why was this?

Dr. Monika Bickert

With respect to the specific advertisements mentioned, I would have to talk to colleagues to understand them. I am happy to follow up. In 2017, we introduced an advertisement library that has brought unprecedented transparency to political advertising. Now when somebody runs a political advertisement on Facebook, people can see who is running the advertisement and we verify identity. We also make public the audience for that advertisement, the dates it ran, and any other advertisements that party is running.

It is not a free-for-all for political advertisements. They must adhere to our advertisement standards, which are a step above our community standards. The community standards include measures such as a prohibition on hate speech and threats. The advertisement standards go higher and they are the standards that apply to any of the political advertisements that are run.

Mr. Milton Dick

In terms of advertising, in her statement Dr. Bickert said Facebook does not believe a private company should be determining for the world what is true or false in a politician's statement. How does that intersect with the standards?

Dr. Monika Bickert

From what Mr. Dick has just described, it sounds like the advertisements mentioned were not run by a politician, so let me explain the difference.

Mr. Milton Dick

They were. They were run and sponsored by elected representatives of the Parliament. Anonymous fake accounts were set up to support them. Videos were collated with the unrelated and irrelevant words "death tax" and "inheritance tax", as said by my colleagues, spliced and put into advertisements that were sponsored and paid for on Dr. Bickert's platform.

Dr. Monika Bickert

Any account run under a false name or inauthentically violates our policies and should come down. We also have a mechanism for sending information that is likely to be false to independent third-party fact-checking organisations certified by the Poynter Institute. They can rate that information themselves. We also have tools and user reporting systems that will send information to them. If those fact-checking organisations rate something as false, we put that information next to the information that is being seen and we do not run the advertisements. If something is directly from a politician and the politician refers to something, such as a link or an image, that has already been debunked by a third-party fact checker, the advertisement will not run. However, if the politician himself or herself is engaging in direct speech, he or she is held to our advertisement standards but we do not police whether the information is true. We do not believe we are the right entity to do this and we do not believe it is something we could do to the satisfaction of all involved.

Mr. Milton Dick

Does Dr. Bickert think more needs to be done in the area of fact checking on Facebook?

Dr. Monika Bickert

This is a primary area where we have actually called for regulation. We think the purpose of this hearing today, to see how we can collaborate further, is entirely appropriate and we are very open to discussing what regulation in that realm could look like.

Mr. Milton Dick

What, in Dr. Bickert's view of the company's position, does she see as appropriate regulation?

Dr. Monika Bickert

Especially in the area of political advertising, definitions would be very helpful, such as defining what a political advertisement is, when it is appropriate to run the advertisements, who are the appropriate parties to run them, and what are the appropriate verifications. I will say the verification process has proved not to be simple. If people want to run a political advertisement, part of the authorisation process involves us mailing something to them to confirm their identity, because we do see the upload of false documents. These are challenging areas, but we are very open to regulation.

Mr. Milton Dick

Will Facebook look at the Twitter model that was announced on 21 October about political advertising? Is Facebook considering this?

Dr. Monika Bickert

We are very open on what the next steps are to improve our political advertisements and how people encounter them on Facebook. We think it is an important part of the political dialogue. It is an important part of the way policy makers often communicate with their constituents and we want to try to preserve that. We also want to make sure we are acting responsibly, and we are very open to that dialogue.

We will now move to Estonia and I invite Ms Keit Pentus-Rosimannus to ask her questions.

Ms Keit Pentus-Rosimannus

I thank the witnesses for their statements and for the work they have already done to increase preparedness for fighting disinformation. My first question is for Facebook. I want to use this opportunity to get a better understanding of the principles behind its policy on political advertisements. I have understood from the previous answer that the current state is still that Facebook does not fact check political advertisements. Is this correct?

Dr. Monika Bickert

Not exactly. We do not send to our fact checkers advertisements containing direct speech from politicians. Other political advertisements may be sent to our fact checkers. I should point out that the fact-checking framework we have is relatively new. We only developed a process for sending advertisements to fact checkers in August 2019; prior to that, no such process existed. We have never sent direct speech from politicians in advertisements to fact checkers. We thought it was important to make this clear, so we recently reiterated it publicly.

Ms Keit Pentus-Rosimannus

Will Dr. Bickert explain why Facebook does this? Why is it that for all other Facebook users there is no right to lie on Facebook and Facebook does not take money from those advertisements, but for politicians it is okay to lie and Facebook accepts money for the advertisements that spread the lies?

Dr. Monika Bickert

Let me be clear. It is not that we think it is okay for people to lie. We think that when people come to Facebook, freedom of expression is one of the best ways the truth comes out. The fact that we do not think Facebook should be the truth police for the entire world, and that we should not determine for citizens what they shall and shall not see in terms of truthfulness from their politicians, does not mean we dismiss the importance of combating disinformation. There are several things we do to combat false statements on Facebook. First, we go after fake accounts, which the data have told us time and again are disproportionately more likely to be sharing disinformation. Second, we disrupt the financial incentives. Most disinformation, and this includes political disinformation, is shared to make a profit. It leads to ad farms. We have gotten much tougher on that. Finally, for content that is around the border and run in somebody's real name, we now have a process for allowing third-party fact checkers to check that information. When it comes to the direct speech of a politician, we do not think it is appropriate for any private company to interfere between that politician's speech and the citizenry by saying this is true and this is false.

Ms Keit Pentus-Rosimannus

Is it still all right to interfere in all the other ads except politicians' ads?

Dr. Monika Bickert

Sorry, could Ms Pentus-Rosimannus repeat that?

Ms Keit Pentus-Rosimannus

In any other ads, Facebook still interferes and does not let the false advertisements run on its service. However, it does not interfere when it comes to politicians and, therefore, it is all right for politicians to spread lies on Facebook.

Dr. Monika Bickert

To be clear, we are not the truth police for the world. We do not remove content simply for being false outside of a couple of small areas such as if there is an immediate threat to safety and a safety partner has confirmed for us that there is an imminent risk of harm or if somebody misrepresents voting times, locations and processes. Generally, when we use those third party fact-checking organisations, we do not remove content they say is false; we mark that as having been rated false by the fact checker and we put the information from the fact checker next to it.

Ms Keit Pentus-Rosimannus

That does not necessarily make it much better. A few people have said the change in Facebook's policy, where it refuses to take down false political ads, has happened because it allows Facebook to accept millions upon millions of dollars from upcoming large and important campaigns and spread information that is knowingly false as political ads. How does Dr. Bickert answer those accusations?

Dr. Monika Bickert

These ads represent a very tiny portion of our ad revenue.

Ms Keit Pentus-Rosimannus

Can Dr. Bickert be specific on that? In 2016, for example, what were Facebook's approximate earnings from political ads versus all other ads?

Dr. Monika Bickert

I believe Mark Zuckerberg clarified this publicly. I do not have the exact statistic on hand. We can follow up with Ms Pentus-Rosimannus on that but suffice it to say financial motivations do not lead us to have this policy. We make very little from these ads. That is because we do not believe that it is appropriate for a private company to decide for the world what is true or false coming from their politicians and we do not think we could do so effectively.

I will move to Finland next and call Mr. Tom Packalén.

Mr. Tom Packalén

I thank all the guests for their presentations. Many of them have done a good job and we are going in a better direction. It is important that companies take on their responsibilities beforehand. We must have tight regulation with respect to what is bad and difficult for everyone.

My question is for Dr. Bickert. With respect to false information, I understand it is very difficult to say what is false. Something is false for one person and means something else to another person, but hate speech is a much easier target to discuss. For example, in Sri Lanka, people were burned alive because of hate speech spread through Facebook. In Myanmar, there was violence against the Rohingya minority. I refer to what Facebook has done to prevent this. It has more than 10,000 moderators, as Dr. Bickert mentioned, but what can it do with them? The only way to address this is with an artificial intelligence, AI, solution. Mark Zuckerberg told the US Senate in April 2018 that it might take five to ten years to have this kind of technology, but that is not true. A Finnish company, Utopia Analytics, offered to build Facebook a model in two weeks to get rid of the hate speech content originating in Sri Lanka, but Facebook was not interested in it. Many companies do AI moderation. Will Dr. Bickert explain a little why Facebook will not go deeper into tackling hate speech and why it believes that, with more than 10,000 moderators, it can handle billions of messages?

Dr. Monika Bickert

I very much agree it is important to tackle hate speech. It is, and has been, a focus for us and, like Mr. Packalén, we think that technology is an important part of the solution. We do not allow hate speech on our service. We have very strict policies against it. Our transparency report indicates that the majority of speech we remove for violating our hate speech policies is flagged by our automated systems before anybody has reported it to us. That number is now in excess of 65%. Interestingly, Myanmar is one of the areas where we have made the most progress. Back when our automated technology found approximately 50% of the hate speech we removed, our detection rate in Myanmar was 63%. We have focused on hiring people who are native language speakers and on building relationships with organisations on the ground that can help us spot trends. Those are very helpful, but we also think technology is very important. We partner with others to improve that technology, but we also invest significant resources ourselves.

I now call Ms Nino Goguadze from Georgia.

Ms Nino Goguadze

I thank the guests for showing their openness to work together to address the challenges we face today. Facebook is the most broadly used social network in Georgia. It is the main source of information, especially among the younger generations. That is why I have a question for Dr. Bickert on Facebook's operational system. In 2008, during Russia's military intervention, Georgia experienced a severe cyberattack, which blocked critical digital infrastructure in the country. Our citizens were denied access to Facebook. In 2009, a similar cyberattack took place. We had no access to Facebook and Twitter, and one can imagine the effect of that when Facebook is the most important source of information for the country. That fact was brought to international attention and even The New York Times pointed out how vital web tools and services are becoming to political discourse and how vulnerable they are to disruption. My question is whether, in 2019, Facebook has a policy to protect its users from political bullying or whether the company has a policy or any deterrent measures against countries which undertake attacks on Facebook users.

Dr. Monika Bickert

There is much to address in this question.

We think, however, that it is very important to maintain open access for the public to Facebook. We have a team devoted to cybersecurity and that includes preventing hacking and cyberattacks, perhaps of the sort mentioned. We also focus on disrupting what we call "co-ordinated inauthentic behaviour". These are networks of accounts, sometimes state-run and sometimes not, trying to abuse our platform to share things such as divisive messaging, political messaging or intimidation. We have a team focused on identifying and removing those instances. It is not something we can do alone, so we partner closely with researchers, academics, security firms and others in the industry.

Sometimes we will get a lead from somebody else, we will do the investigation and then we will remove that network. When we do that, we are transparent about it and in our newsroom, we publish blog posts about the actions we have taken. That has been a major area of investment for us. Going after fake accounts has also been an important part of disrupting bad activity that may, or may not, again come from state actors. Our automated tools have got so much better that in the first quarter of 2019, we removed more than 2 billion fake accounts. Not all of those were designed to share disinformation but we were removing the vast majority of these accounts within moments of their creation. These are definitely areas of interest for us. We believe people should have access to Facebook.

Ms Nino Goguadze

Is this information about the policy available somewhere? Can I find it?

Dr. Monika Bickert

Yes. I will follow up with Ms Goguadze on this issue.

Ms Nino Goguadze

That is very good. Has Dr. Bickert ever been to Georgia?

Dr. Monika Bickert

I have not been to Georgia.

Ms Nino Goguadze

That is fine. I invite Dr. Bickert. That would be a great opportunity not only to explore an amazing country but also to learn what kind of challenges such relatively small countries are facing today. It is important that small countries should be the focus of big social networks. The presence of Dr. Bickert in Georgia and her personal experience of the problems in small countries would be very important for Facebook, as well as for us. It should be a part of the policies of big social networks to take care of the challenges and problems faced by small countries.

Dr. Monika Bickert

I thank Ms Goguadze. Although people think of Facebook sometimes as an American company, more than 87% of the people using Facebook are outside of the United States. Ms Goguadze is correct that getting policies right in smaller countries is important for us and we are committed to building those relationships. We have a public policy team based around the world and my team is based in 11 offices around the world. I look forward to following up with Ms Goguadze.

Ms Nino Goguadze

I thank Dr. Bickert and I look forward to seeing her in Georgia.

I will start with my own questions briefly. In its written statement, Facebook proposes to create an oversight body. Who will be on that oversight body? How independent will it be? Will those members be paid employees of Facebook? Will the body be funded by Facebook? It also appears that this body will be an appeals mechanism for people who have had content removed by Facebook. We are here to ensure that social media platforms are safe environments and this does not seem to be helpful in ensuring we are working in a safe environment. This oversight body is instead going to allow people who have had their content removed to appeal to Facebook to have that content restored. That is the opposite of what we are discussing. I invite Dr. Bickert to talk to us about that oversight body and those questions first.

Dr. Monika Bickert

We are launching the oversight board and while we do not think it is the perfect answer to all of these challenges, we think it is important to give people who have had content removed from Facebook the opportunity to appeal to an independent body. To go quickly through the questions posed by the Chair, we put out a charter in September 2019 that was the result of nearly a year of consultation in many countries with many stakeholders from different backgrounds. The result is that the board will have up to about 40 members. Those members will be chosen through a collaborative process between the co-chairs of the board, who will be initially selected by Facebook. Those co-chairs will then choose the additional members, with input from Facebook. That said, the decisions made by those members will be independent and binding. They will be paid by a trust that Facebook will fund, but those funds will be held and administered independently.

I will clarify the way that cases will get to this board. We have millions of cases every week where we make decisions. This board will be able to choose from among the decisions appealed to it by users. If Facebook finds a case where it is hard to make a decision, we will also be able to proactively send something to the board for it to make a decision.

The board, therefore, will be essentially funded by Facebook and Facebook will essentially be choosing who is on this oversight body. Is that correct?

Dr. Monika Bickert

We are not choosing the individual members of the board. We have chosen the co-chairs in a collaborative process. There is far more information in the charter that we published and I am happy to follow up with the Chair on the details.

There would be questions and concerns regarding the independence of the board. I highlight that from looking at this proposal initially. The perception is that this board would not be independent.

Dr. Monika Bickert

This is one of the challenges we face and one of the reasons that we had such a long consultation. We want the board to exist, which means that we have to pay for it in some form, although we are certainly open to other models of funding. We also, however, want to make it an independent body and that is why we have created the trust, or are in the process of creating a trust, to fund the board.

Facebook, however, is creating the trust.

Dr. Monika Bickert

We fund the trust and the trust will be administered.

That is a concern.

Dr. Monika Bickert

The board members, who will be chosen by the co-chairs, will serve three-year terms and cannot be terminated because of their decisions. They have that independent authority and they are paid by the trust and not Facebook.

My colleague from Estonia asked about political advertising. Twitter has announced that it is not going to allow any political advertising anymore. Mr. Zuckerberg has stated, however, that Facebook has no intention of implementing such a ban. Following on from that, how much revenue did Facebook earn in political advertising during the last presidential election in the United States and how much does it expect to generate in next year's presidential election?

Dr. Monika Bickert

I do not have specific figures to answer that question. I believe that Mark Zuckerberg has talked about electoral advertisements and how they have been and continue to be a very small percentage of our advertising revenue. These decisions are not financially motivated. The decision to allow political advertising is because we think this is an important way for politicians to interact with their constituents.

Would Dr. Bickert be able to provide those figures to us on the revenue?

Dr. Monika Bickert

I will follow up with the Chair. I know statements have been made on this issue and I apologise for not having those figures to hand. I will follow up on them for the Chair, however.

That would be very helpful for the committee. I will come back to my Irish colleagues, but I will start with the representatives from Singapore. I call Dr. Puthucheary.

Dr. Janil Puthucheary

I apologise to the other members of the panel, but I am directing my questions to Dr. Bickert, as have all of my colleagues. I welcome her comments concerning Facebook being committed to addressing the issue of co-ordinated inauthentic behaviour. I would like to clarify that my understanding of how this functions is correct, however, and perhaps we could use the example given by Ms Cadwalladr and some others this morning concerning the upcoming British elections. I refer to a situation where the authorities are potentially concerned about co-ordinated inauthentic behaviour as an attempt to alter the British elections in the near future and Facebook is made aware of this information through the security services, government agencies, think tanks, academics and-or voluntary organisations. Ultimately, however, it would be Dr. Bickert's team that would then decide whether to act on evidence of co-ordinated inauthentic behaviour. The execution and timing of that decision might affect the British election. Is that correct?

Dr. Monika Bickert

Our policy against co-ordinated inauthentic behaviour is laid out publicly and those are the rules that we follow, so if-----

Dr. Janil Puthucheary

I understand. I have a very simple question. Ultimately, does Dr. Bickert's team decide to act upon this evidence?

Dr. Monika Bickert

Yes. We do so often in consultation with security firms.

Dr. Janil Puthucheary

Absolutely, but ultimately the decision is her team's, the timing and execution is her team's.

Dr. Monika Bickert

Yes. These are our policies; we apply them.

Dr. Janil Puthucheary

She can imagine the concern in the execution and timing. That may well have an effect on the election outcome.

Dr. Monika Bickert

Certainly, we will remove abusive content that we find and we are very public when we do that.

Dr. Janil Puthucheary

However, my understanding about the process is correct.

Dr. Monika Bickert

They are our policies and we apply them, absolutely.

Dr. Janil Puthucheary

Can I extend that perhaps to Irish law? I imagine Facebook has employees here and in Singapore. They are subject to Irish law surely.

Dr. Monika Bickert

Yes. I cannot speak to the direct applicability of which country's laws apply to which individuals.

Dr. Janil Puthucheary

Let me give an example. The Minister, Deputy Bruton, has suggested an online safety Act. There has been some public discussion here in Dublin about that. If it comes to pass, he has proposed an online safety commissioner who may, for example, issue directions requiring the removal of content after adjudication, perhaps even enforced through court injunctions. Presumably, these would be served on Facebook employees here in Ireland. Would they need to check with Menlo Park and Dr. Bickert's team before complying with Irish law?

Dr. Monika Bickert

While I cannot speak to the specifics of Irish law, I can tell the committee that we have a process for evaluating and complying.

Dr. Janil Puthucheary

I understand that. The person subject to Irish law would not, therefore, in Facebook's view be required to comply with a direction constituted through Irish law.

Dr. Monika Bickert

No. My answer is that we have a legal team and a process in place for evaluating, when we receive a request from a Government, what is appropriate for us to do in terms of compliance.

Dr. Janil Puthucheary

I understand.

Dr. Monika Bickert

In our data use policy, we explain how we evaluate requests from governments with our processes.

Dr. Janil Puthucheary

I ask Dr. Bickert to explain the employees' letter that was circulated widely. Why are Facebook employees concerned about the company not observing election silence in compliance with local laws?

Dr. Monika Bickert

I cannot speak to the opinions of those individuals. I can say with that policy, as with all of our policies, we of course hear views across the spectrum - indeed we solicit them. When we refine our policies, part of that process involves talking to people across the company, but also civil society groups and experts outside the company. We want that diversity of thought.

Dr. Janil Puthucheary

I understand. This is my final question. Dr. Bickert's comments and assessments do not address any of the issues that are happening in end-to-end encrypted platforms such as WhatsApp and perhaps what is being proposed for other messaging services. They centre on Facebook's newsfeed-based products. In places like India and Brazil, there is good evidence that WhatsApp has been compromised, weaponised and exploited for the purposes of disinformation as well as affecting elections. Have any Facebook employees in its safety team or otherwise expressed serious concern about the weaponisation and use of WhatsApp and other closed platforms for this purpose? Does Facebook have plans to address some of our concerns, as legislators and regulators, that these closed platforms would be used for these purposes?

Dr. Monika Bickert

Absolutely, safety on WhatsApp is a priority. In fact, my team is responsible for safety and security and has on it people who have spent their careers in safety. They did not just get assigned to it at Facebook. These are people who came to Facebook because they cared about safety and security. We cover WhatsApp as well, so we are focused on ensuring that the service is safe. As we think about encryption on WhatsApp or on other services, part of that is figuring out the ways we can offer superior safety in an encrypted world. Some of that means focusing on behaviour and trends, and using artificial intelligence. However, this is absolutely an area of focus.

I now call the US representative, Congressman David Cicilline.

Mr. David Cicilline

Facebook's CEO, Mark Zuckerberg, recently said that Facebook should not fact-check political ads as Dr. Bickert has defined them, because political ads are already subject to public scrutiny and it is not the role of a private company to censor politicians. Does that accurately reflect Facebook's position?

Dr. Monika Bickert

Yes, it does.

Mr. David Cicilline

As we learned from our first session, politicians can use Facebook to micro target specific audiences with tailored ads, such as men between the ages of 55 and 75 who drive a pickup truck and watch Fox News. Micro targeting on Facebook allows an advertiser to limit the distribution of an ad to a very particular group of people. Is that correct?

Dr. Monika Bickert

There are limits on how specific that can be and limits on how one can use targeting, but yes, generally.

Mr. David Cicilline

If I were to pay for a false political advertisement and then seek to target audiences on Facebook who are susceptible to disinformation, would that be possible?

Dr. Monika Bickert

When Mr. Cicilline says audiences that are susceptible to-----

Mr. David Cicilline

I mean audiences I determine I want to micro target that may, in fact, believe the false representation I make in my ad. That is not prohibited by Facebook; in fact, that is its practice.

Dr. Monika Bickert

There are limitations in our policies and in the targeting criteria that we offer.

Mr. David Cicilline

However, within that we are allowed to micro target and we can pick the population that we think is susceptible to the false representation we are going to make in a political ad.

Dr. Monika Bickert

Mr. Cicilline can pick his population and he can target the ad within our policy.

Mr. David Cicilline

Would the person who saw or engaged with that advertisement know they were being targeted by false information?

Dr. Monika Bickert

They would know that they have been targeted by an ad. One can click on the ad and see why one is seeing it. Also, significantly-----

Mr. David Cicilline

No. Would they know they were being targeted by false information?

Dr. Monika Bickert

As Mr. Cicilline knows, our policy is if something is direct speech from a-----

Mr. David Cicilline

I take that as a "No". They would not know they are being targeted with false information.

Dr. Monika Bickert

Our policy is that we do not fact-check direct speech from politicians. However, if somebody receives an ad, they can click on it and see why they are seeing it. Our ads library shows not only the ad, but also the audience, including-----

Mr. David Cicilline

However, my question is in respect of the veracity. They would not know they are being targeted with false information. They would know why they are being targeted as to the demographic, their race or whatever, but not as to the veracity or falseness of the statement.

Dr. Monika Bickert

The reason that is hard to answer is that political speech is so heavily scrutinised. There is a high likelihood that somebody would know if information is false and there is robust conversation around political speech, so people may well-----

Mr. David Cicilline

With all due respect, Mark Zuckerberg's theory that sunlight is the best disinfectant only works if an advertisement is exposed to sunlight. However, as hundreds of Facebook employees made clear in an open letter last week, Facebook's advanced targeting and behavioural tracking tools make it "hard for people in the electorate to participate in the 'public scrutiny' that we're saying comes along with political speech. These ads are often so micro targeted that the conversations on our platforms are much more siloed than on other platforms."

It seems clear that micro targeting prevents the very public scrutiny that would serve as an effective check on false advertisements. Does the entire justification for this policy not completely fall apart given that Facebook allows politicians both to run fake ads and to distribute those fake ads only to people who are most vulnerable to believing them? This is a good theory about sunlight, but in practice Facebook's policies permit someone to make false representations and then to micro target who gets them. This big public scrutiny that serves as justification just does not exist.

Dr. Monika Bickert

Respectfully, I say there is great transparency in how the targeting happens. That is why we have the ads library, which is unprecedented. We literally show the ads. One can look up any ad in the library and see the breakdown of the audience who have seen it. Many of them are not micro targeted at all. In no way does this impair public scrutiny. We saw this recently in the press coverage of political advertisements. There is, in the US and elsewhere, robust conversation about whether statements by politicians are accurate.

Mr. David Cicilline

When rolling out this recent public policy allowing politicians to pay Facebook to spread lies, Mr. Zuckerberg said it was not appropriate for one company to decide what political ads can appear on Facebook and what cannot. If the problem here is that Facebook should not be exercising this kind of power, is the problem not that Facebook has too much power? Should we not think about breaking up that power rather than allow Facebook's decisions to continue to have such enormous consequences for our democracy? Dr. Bickert said that Facebook does fact-checking for a number of other things but does not do fact-checking for political ads. The cruel irony is that her company is invoking the protections of free speech as a cloak to defend its conduct, which is, in fact, undermining and threatening the very institutions of democracy it is cloaking itself in. That is the cruel irony: the idea that it is only generating a small part of its revenue. Its CEO said it is 0.5%, which is €330 million in revenue.

That may seem insignificant to a company of Facebook's size but it is a substantial revenue source.

Does Facebook currently prohibit the payment of political advertisements in foreign currency? Can a politician or someone else in the US use rubles, or any foreign currency, to pay for a political advertisement?

Dr. Monika Bickert

In terms of any political advertisements that are run in the country, we verify the identity of the person-----

Mr. David Cicilline

That is not my question. My question is whether Facebook has a policy in place that prevents it from accepting foreign currency.

Dr. Monika Bickert

I cannot speak to the payments tools. I can tell the Congressman that when we have political advertisements that are offered in the United States, we have a process through the mail by which we verify that the advertisement is coming from an actor within the United States.

Mr. David Cicilline

I am told in public reporting that Facebook accepts foreign currency. It would seem to me that, as a minimal first step, Facebook might want to adopt a policy that it does not accept foreign currency in payment for domestic political advertisements. Since foreign interference in an election is prohibited by law, it might want to consider doing that.

Dr. Monika Bickert

Again, we do ensure that through our authorisation flow where we actually require-----

Mr. David Cicilline

Did Facebook accept rubles in connection with the American presidential election in 2016?

Dr. Monika Bickert

The measures I am speaking about were put in place after the 2016 election in part because of the lessons we learned there. What we do now is-----

Mr. David Cicilline

The lessons are not to prohibit foreign currency. Facebook still takes that.

Dr. Monika Bickert

Respectfully, the concerns the Congressman and others have expressed are about foreigners - people from outside one country - running political advertisements in the country where the election is happening. We get to the heart of that by requiring identity verification from the advertiser himself or herself.

I call Deputy James Lawless.

I will share time with Deputy Eamon Ryan. Staying with Facebook and to follow up on the previous speaker's contribution, was the Libra cryptocurrency an attempt to circumvent the issues we just talked about in terms of currencies from multiple jurisdictions coming in?

Dr. Monika Bickert

I am sorry. Could the Deputy repeat the question?

I refer to the Libra cryptocurrency that Facebook was rolling out. Was that an attempt to in some way address some of the issues around rubles and many other currencies being used in multiple jurisdictions? Was the Libra currency geared towards circumventing some of the traceability and auditing issues Dr. Bickert has just been talking about with the previous speaker?

Dr. Monika Bickert

No. The Libra product is unrelated and is about access to financial services. That is not something I work on. It is a separate project.

If I understand the position correctly, Libra may assist in concealing some of the issues that have just been discussed. I will move on.

Is it the case that, essentially, the only way for political messages to get across on Facebook is by advertising? The algorithms were changed about two years ago because there was a growth of negative news or, to an extent, spam on Facebook. My understanding is that the algorithms were rejigged so that family and friends type content was primary in people's newsfeeds. This means that the organic reach for political advertisers was so small, the only way they could get into newsfeeds again was by paying for it. While it may or may not be a small percentage of Facebook's revenue, it is a constant guaranteed source because the only way to get political messaging into newsfeeds is by advertising. Is that not the case?

Dr. Monika Bickert

It is certainly not the only way to get into feeds. If we look at the major politicians, some of them have very large page followings. The Deputy is right that the friends and family posts have been elevated in the past year or so but it is certainly not the only way that something would get into somebody's newsfeed.

Two allegations have been made against Facebook. One is that it promotes addictive behaviours in order to keep users on the platform. The second is that it has exploited its dominant position in some ways. An example that brings those two behaviours together is Facebook's advocacy for video content in recent years. I am told that the statistics for video content supplied by Facebook internationally, and in various reports, exaggerated and boosted the primacy of video content to the extent that it was inflated well beyond what was actually being seen. Video content was being reported as having such a humongous reach that newsrooms and media organisations began to recalibrate their offering, packages and reporting to follow this trend, but the video statistics, as reported, were a gross exaggeration of the reality being experienced on the platform. Is that the case?

Dr. Monika Bickert

I am sorry. I do not have any information on that but I can have the relevant team follow up on it with the Deputy.

Dr. Bickert might come back to me on that.

I have a final question. What percentage of Facebook's moderation team globally is made up of company employees and what percentage is made up of outside contractors?

Dr. Monika Bickert

I do not have an exact percentage for the Deputy-----

Dr. Monika Bickert

-----but we certainly use both. What I can tell him about our contract force is that they go through the same training and are subject to the same privacy and accuracy audits and controls.

They are not subject to the same conditions. I have met some of them in Ireland and elsewhere, and I have read the stories. They are certainly not subject to the same conditions. There have been many stories about the harshness, the pay inequality, the difficult conditions they work in and the psychological trauma and mental health issues that have arisen afterwards. They may be subject to the same training but they are certainly not subject to the same pay and conditions. What is the balance in that respect? Is it 50-50 or two thirds to one third? Is it primarily contractors that form the moderation team or is it Facebook employees?

Dr. Monika Bickert

The Deputy is right that these jobs can be very challenging. That goes for the full-time employees as well as the contractors. I want to acknowledge that. I was a criminal prosecutor for more than a decade. I worked a lot on child exploitation cases so that focus on making sure that we are providing resources is a major one. I, too, have toured many of our contractor workforce locations, including in Ireland. I have found them to be nice places to work. I know that there are counselling and other wellness resources for them and I have talked to the employees there. They always have challenges in their jobs but, generally, our attrition rates are very low and our family and friend referral rates are very high.

I will bring in Deputy Eamon Ryan.

How would Mr. Pancini answer the point made by Mr. Balsillie? We heard it yesterday at a workshop on this entire issue, at which leading academics and researchers in the area broadly agreed that the basic business model here is the core of the problem we face. I refer to the use of algorithms seeking to attract attention. I see it with YouTube in that one is immediately brought down a tunnel of confirmation of one's particular view. This issue is very commonly discussed, but Mr. Balsillie and others argue that we have to look at the business model. What does Google say in that regard?

Mr. Marco Pancini

I would make two points. First, we believe that the open Internet has created an unprecedented opportunity for everyone - users as well as small and medium enterprises and creators from Ireland and across the world - to have a voice and find an audience for their message, their speech or, on occasion, their economic message. There is value in that and in a business model that makes it possible. That is a key point. There are different business models, including subscription models and advertising. Advertising still represents a way to make these services available to the vast majority of people.

Can I ask another question?

Mr. Marco Pancini

Sure.

Mr. Pancini is Italian. I refer to the atmosphere for public debate. The town hall debate in Italy has changed over the past ten or 20 years. Has it become more or less civil?

Mr. Marco Pancini

I grew up in a country where television was the medium that influenced public opinion on political speech for a long time. In the specific Italian context, the Internet has opened up space for different voices and allowed new parties to find audiences and get out their message.

Most politicians have a sense that the world has become more divisive and polarised. A large part of the reason is that the business models of the social networks are driving divisive communications. Does Mr. Pancini refute that?

Mr. Marco Pancini

I agree with the expert from Graphika when he said it was a much more complex issue, one which also includes the role of individual communities that are sometimes radicalised, or are very strongly convinced of their political message, in spreading that message to the echo chamber. That is part of the problem. Education and digital literacy are also part of the problem. The business model is-----

When I am putting in a selection on YouTube, is what was said earlier true, namely, that in terms of the activities I carry out in a range of other areas, including how I use my credit card, Google knows more about me than I know about myself? Is it true that all that data is used to influence the videos that are put in my feed?

Mr. Marco Pancini

I respectfully disagree with that statement. Content like music is very different, in that the recommendation is based on what the Deputy has watched before and on other people having seen the same music and liked it.

When we look at news, the only element that really matters in making sure we do a good job of providing a recommendation to the user is whether the source is authoritative. That is why, for news and political speech, we want to work with authoritative sources that can help us ensure that 80% of the results the user finds on the platform come from authoritative sources. For news, it is not engagement that counts.

I will put the same question to Dr. Bickert. She has been the focus of all the questioning because Facebook is probably the most toxic political platform. Our Georgian colleague invited her to visit Georgia. I would invite her to visit my Facebook page and see the nature of the commentary that is increasingly prevalent on the platform. It is not a community; it is war. For whatever reason, the algorithmic model, not just in respect of advertising but in the 95% of other organic content, is leading towards political dialogue that is abusive. I have yet to meet a politician who does not think that is what is happening. To return to the question Mr. Balsillie put, it is the business model, not just in respect of political advertising but the whole organic commentary model, that is leading to political discourse that is not community but hate speech and abuse. It tends towards that. That is the experience of every politician I have met. Are we all wrong?

Dr. Monika Bickert

I echo what Mr. Pancini said. I do not think this is related to the business model, which is largely a very good thing. There is abuse but the data suggest that polarisation has been increasing since the 1970s. That is a separate topic. We take abuse of politicians or others very seriously. We do not allow harassment, threats or hate speech. We are not perfect at enforcing that line but we are committed to getting it right. I welcome feedback from the Deputy.

I welcome our guests. I have been a member of the Seanad for 17 years. I particularly welcome Dr. Bickert, as she is the vice president of content policy at Facebook. She is resident in Menlo Park - in Galway, I presume, not California.

Dr. Monika Bickert

I am in Menlo Park, California. I am in the new Menlo Park.

Contrary to what Deputy Eamon Ryan said, my daughter, Councillor Orla Leyden, launched a campaign to try to retain the Cuisle centre of the Irish Wheelchair Association, IWA, in Donamon in County Roscommon. The general media has literally ignored this issue. On Facebook, its service users, people in wheelchairs, and others who have difficulties and so on have responded. There have been 35,000 hits. RTE has not responded to this issue; it has ignored it completely. The people my daughter and I represent are voiceless without Facebook. Whatever about hate speech, I do not have Facebook, I do not read Facebook and I do not want to know what people are saying about me on Facebook; that saves me any concerns. I am, however, concerned about an issue of such vital importance that, last Friday, the chief executive of the IWA told the 45 members of staff that they would be sacked on 29 November. They had no voice, and Facebook is the face and voice of the voiceless.

I am delighted to welcome our esteemed elected members and our esteemed associates from social media. Like most politicians in Ireland, I have seen content put up on social media and all the rest that is not correct. I accept that legislation is coming in Ireland. If people say things on Facebook or some other social media platform and they are incorrect, the law of the land must bring them to account. To expect Facebook, which we have dealt with a lot today, to police itself regarding something an elected representative says is not a good enough standard for us. It has been thrashed out here several times. The Minister for Communications, Climate Action and Environment, Deputy Bruton, is considering legislation. I am well aware that this has been discussed in our own parliamentary party, which I am sure will propose various amendments to it, like several politicians here. We have to be serious as politicians. There are different formats; people pay for advertisements and say things, but if what they are saying is incorrect, the courts are where they deserve to be grilled.

I do not know if there were any particular questions there. I might let witnesses in at the end. I am also chairman of the Joint Committee on Climate Action and I would like to raise the issue of Twitter banning political advertising.

I thought we were going to get a response from the witnesses.

Yes, I will let them back in again. I am just conscious of the time and that Twitter-----

I thought I might get a quick response.

They should come to Menlo in Galway; it is a much nicer place.

I will let them back in again. I just want to put a question.

The Chairman should not lose sight of the fact that I am saying Facebook is the voice of the voiceless. In this regard, let us put the positive spin on it as well as the negative.

Absolutely. The Senator has had his time. I will let the witnesses in.

I would like the response. That is all.

The Senator will get a response. I am conscious of time and I want to get a question in to Twitter which has not had an opportunity to answer.

It would appear that, under Twitter's new advertising rules, environmental groups will not be able to pay for advertisements promoting green policies or pro-climate policy content after 22 November. On the other hand, there appear to be more than ten current Exxon Mobil tweets relating to climate change that Twitter does not classify as political issue advertisements. Will they be classified as political advertising after 22 November?

Ms Karen White

We are very aware that this issue was raised in the past couple of days in the United States. We are still working out the details of that policy with regard to political and issue advertisements that would encompass issues of national importance. We hope to be in a position next week to provide more details of what types of advertisements will and will not be allowed under these new policies and I would be very happy to follow up with the committee then and to share more details. Towards the end of the month, advertisers will have a chance to become familiar with, and educated about, that new policy. It will take effect from later in the year.

I would welcome interaction on this. The concern is that environmental groups working positively to help people and governments take positive action on climate change would be banned, while oil companies are allowed to pay for advertisements that also talk about climate change. There is a contradiction there and a real concern about how Twitter will enforce this ban and who will and will not be allowed to place political advertisements. We will all be watching that space.

Does Dr. Bickert or anyone else on the panel want to reply to Senator Leyden or Senator Davitt?

Dr. Monika Bickert

We do not want abusive actors, but we do want a voice for others, and we can follow up with the Senator on specific concerns.

I am actually praising Facebook. I said Facebook is the voice of the voiceless. The national media have ignored this issue but Facebook has not.

I thank the Senator. We have only six minutes left. Will the next speakers just stick to asking a question?

My next question is to Google and I am going to keep it tight.

Just a question.

Yes. I understand that Google is a search engine and performs as a directory, but I also understand that any content, illegal or otherwise, can be retrieved by the search engine because Google's view is that it is a directory. When I met Google recently, I was perturbed that this applies to disturbing content, freedom of speech, violent and pornographic content and illegal content. All it does is pull things back from the web and show them. Is that acceptable?

Mr. Marco Pancini

If I may correct the statement, the point is that Google as a search engine indexes content and provides links to content that is not hosted on Google. The same content on YouTube can be taken down because we host that content. The content we link to from Google, since it is not hosted on our networks, cannot be taken down by us. What we can take action on is the link; we can make sure that when somebody searches for something on Google, he or she cannot find the link leading to the illegal content. For illegal content that is hosted elsewhere, we have no technical ability to take it down, but of course we can work with law enforcement and the technical community to solve the problem.

Ms Carol Brown

I want to go to Facebook in terms of what Dr. Bickert said about fact checking and that a correction would sit side by side with the original ad. That means that anyone who accesses the ad in the future would see the original statement and the fact-checked correction. What about those who have already seen just the original statement or ad?

Dr. Monika Bickert

If content has been fact checked, we do not allow it to run in an ad. With the organic presentation of it, just on a page, we do put the fact checkers' information by it. For somebody who has, say, already shared that content in the past, we send them a notification that the content has now been fact checked and rated false.

Ms Carol Brown

Why does Facebook not send the correction to all those who have seen that ad? One of the criticisms is around the lag time between the ad being fact checked and all those people who have already seen the original, and the fact that the correction is not sent to those who have already viewed the original ad.

Dr. Monika Bickert

When we send notifications, we do share a link to the fact-checking material, which may render as a thumbnail depending on where the person views it. We agree there is room for thinking about how we can improve this process. I welcome Ms Brown's feedback and thank her.

Mr. David Cicilline

When the Twitter CEO made the decision to ban political advertising, he stated:

A political message earns reach when people decide to follow an account or retweet. Paying for reach removes that decision, forcing highly optimized and targeted political messages on people. We believe this decision should not be compromised by money. [...] This isn't about free expression. This is about paying for reach. And paying to increase the reach of political speech has significant ramifications that today's democratic infrastructure may not be prepared to handle.

Is that an accurate statement of the CEO's position?

Ms Karen White

Yes.

Mr. David Cicilline

This argument that this is just political speech, which we heard Dr. Bickert make, and that it is really not about money but about free expression, has been completely rejected by Ms White's CEO. He spoke of significant ramifications for today's democratic infrastructure. Does Ms White know what he meant by that?

Ms Karen White

I think that, when it comes to political advertisements on Twitter, what he meant was that when advertisers are targeting individuals and placing a targeted message within their feeds, it removes that choice from them. Moving forward, we would much prefer to be in a position where political reach is earned through retweets and various other means.

Mr. David Cicilline

I applaud Twitter for that.

Ms Keit Pentus-Rosimannus

According to Advertising Analytics and Cross Screen Media, companies that analyse advertising markets, the predicted spend on digital video ads will be around €1.6 billion during the 2020 election cycle. It was very good to hear from Dr. Bickert that, despite that amount of money, financial incentives do not play any role in the decision to refuse to take down false political ads. She has repeatedly said that her company, Facebook, is not in a position to decide whether a lie is a lie. If that is the case, and if financial motivation does not play any role, I am still struggling to understand what exactly prevents Facebook from deciding that it will not run paid political ads and will not be a platform that can be used for amplifying lies.

Dr. Monika Bickert

With regard to the revenue number, I think Mark Zuckerberg has put our revenue estimate out there and that is the figure I would use. It is a very small percentage of our revenue and is not-----

Ms Keit Pentus-Rosimannus

If that is the case, why does Facebook not say that, since money does not play any role, it can easily give it up and will not be the platform that amplifies the lies?

Dr. Monika Bickert

We think that ads are an important way for politicians to be able to share their platforms and-----

Ms Keit Pentus-Rosimannus

I am a politician and I share my ideas without paying for it. That does not mean Facebook has to kick out the politicians. The question is whether it will allow itself to amplify lies for money.

Dr. Monika Bickert

We think that there should be ways that politicians can interact with their public, and part of that means sharing their views through ads. I will say that we are here today to discuss collaboration in this area with a thought towards what we should be doing together. Election integrity is an area where we have proactively said that we want regulation. We think it is appropriate. Defining political ads and who should be able to run them when and where are things for which we would like to work on regulation with government.

Ms Keit Pentus-Rosimannus

Yet Twitter has done it without new regulation. Why can Facebook not do it?

Dr. Monika Bickert

We think that it is not appropriate for Facebook to be deciding for the world on what is true or false. We think politicians should have an ability to interact with their audiences so long as they are following our ads policies. We are very open to how, together, we could come up with regulation that could define and tackle these issues.

I thank everyone. We are going to have to suspend our meeting for 15 minutes. I thank all our witnesses for coming before us this afternoon.

Sitting suspended at 12.17 p.m. and resumed at 12.30 p.m.