Session 1: The Evidence

In our first session we will hear from Dr. Karlin Lillington of The Irish Times, Dr. Johnny Ryan from Brave, Mr. Roger McNamee, an investor and author, and Mr. Ben Nimmo from Graphika. It is hoped that Ms Carole Cadwalladr of The Observer will be joining us shortly.

I need to read out some formal notices. I draw the attention of witnesses to the fact that by virtue of section 17(2)(l) of the Defamation Act 2009, witnesses are protected by absolute privilege in respect of their evidence to this committee. However, if they are directed by the Chairman to cease giving evidence on a particular matter and they continue to so do, they are entitled thereafter only to a qualified privilege in respect of their evidence. Witnesses are directed that only evidence connected with the subject matter of these proceedings is to be given and they are asked to respect the parliamentary practice to the effect that, where possible, they should not criticise or make charges against any person, persons or entity by name or in such a way as to make him, her or it identifiable.

I also advise witnesses that their submissions and opening statements to the committee will be published on its website after this meeting.

I remind members of the long-standing parliamentary practice to the effect that members should not comment on, criticise or make charges against a person outside the Houses or an official by name or in such a way as to make him, her or it identifiable.

The format of the meeting is that witnesses will be invited to make an opening statement of no longer than five minutes. I will signal when they have one minute remaining. I will be strict on time with both witnesses and parliamentarians because we are on a tight schedule.

I invite Dr. Lillington to make her opening statement.

Dr. Karlin Lillington

I have been a technology journalist and columnist for more than 20 years, primarily with The Irish Times. I am grateful to the committee for this opportunity to offer my perspective on these issues.

The existing business model of social media and search platforms, which is based on extracting and monetising as much personal data as possible from users while encouraging them to engage addictively with and return to the platforms, is the foundation for the serious problems we are discussing today. It is a vicious but highly lucrative circle in which clickbait material of hate, outrage, conspiracy and tribalism proves the most engaging, while the micro-targeting of ads and content means only a select, receptive audience may ever see material, which then becomes impossible to refute.

Too often, policy discussions focus on the risks posed by social media in established democracies, but the most vulnerable victims are, ironically, those who fight most courageously on behalf of democracy, namely, human rights defenders. For them, online threats can swiftly descend into violence, arrest, torture or death. Activists do not wish to leave the platforms because, despite their serious flaws, they are a major tool of democracy. They allow anonymity, help communicate useful information and spread irrefutable evidence, and offer easy-to-use encrypted messaging. Many of the proposed solutions and interventions to social media problems, such as the banning of online anonymity or the tying of account registrations to formal identity, only exacerbate the problems. If we better understood and more adequately addressed the serious risks and harms to human rights activists, we could better resolve the problems for all of us, because studies indicate human rights activists are the outriders for these dangers.

In 2017, Front Line Defenders, an Irish-based international human rights NGO, analysed data on the murders of 312 activists. In 84% of these cases, the activists had received threats, often made online. As the organisation noted, the world's worst regimes know well that attacking the legitimacy and credibility of human rights defenders softens the ground and lessens the reaction when they are arrested, imprisoned or murdered. Women activists are regularly the recipients of some of the most loathsome and sexually explicit threats. In a survey of eight countries carried out last year, Amnesty International found that more than 40% of women who had been abused online feared for their physical safety and 24% feared for their family's safety because online mobs often issue graphic threats against their children.

Facebook has been particularly implicated in human rights reports, ignoring anti-democratic campaigns on the site and inexplicably viewing despots as opportunities to extend platform reach. For example, in the Philippines, Facebook eagerly helped the Duterte campaign learn more about social media use and considered him to be, in its own words, a king of social media engagement, even though his vigilante drug squads were already well-known for carrying out summary executions. The UN harshly criticised the company for its failure over many years to shut down co-ordinated threats that escalated into violence in Myanmar. One Myanmar legislator concluded that Facebook had been dangerous and harmful to the country's democratic transition. Avaaz, a human rights organisation, recently condemned the company for similarly failing to curtail violent threats to vulnerable minorities in Assam. Similar difficulties arise with Twitter, YouTube and any other platform that can carry a message, photo or video.

Human rights organisations have stated that platforms regularly back down in the face of government requests to remove posts by or accounts of journalists, activists and organisations. Acceding to a request of the Indian Government, Twitter recently took down 1 million tweets relating to Kashmir. Activists have been deplatformed in Vietnam. Algorithms that encourage and promote engagement also enable co-ordinated hate and disinformation campaigns, such as a campaign in Sudan in which fake accounts broadcast propaganda commending the military generals who massacred demonstrators last summer.

The platforms need key reforms that will protect pro-democracy activists and start to remedy abuses elsewhere. They should work more extensively with trusted regional and local NGOs to better understand the context of government requests for content and account takedowns. They need to be more aware of and vigorous in assessing possible future harm and interference caused by their actions and promotional programmes. Governments and regulators should foreground risks to activists as they consider ways to manage online problems. Too many proposals undermine grassroots movements toward democracy and are as much a threat to democracy as the online harm they hope to combat. States and regulators will not address these broad problems unless they terminate the core surveillance capitalism business model of platforms and reduce their operational size to a level at which platforms can begin to manage and fix these problems.

Dr. Johnny Ryan

I thank the Chair and the distinguished members of the international committee. I represent Brave, a private web browser. Our CEO, Brendan Eich, invented JavaScript and co-founded Firefox. Since those early days, as all members are aware, the web has become grimy. Millions of people use Brave to make the web safe, fast and private. The problem of disinformation arises because of what happens every time one loads a web page. As the page loads, a broadcast of information about the user is sent to tens or hundreds of companies. This is intended to allow technology companies that represent advertisers to compete for the opportunity to show the user an ad. That sounds fine, but the broadcast data include an inference of the user's sexual orientation, political views, religion, and any health issues from which he or she may suffer. It generally includes the precise thing the user is watching, listening to or reading at that moment, as well as where he or she is located. These data are bundled with identification codes for the user that are as specific to the user as his or her social security number. All the data I described can be put into a dossier about the user, whatever age he or she may be. This is a description of the multibillion dollar real-time bidding, RTB, industry. The broadcasts occur hundreds of billions of times per day. My written submission contains plenty of footnotes with evidence for this. This supplies the perfect information for micro-targeted disinformation. I should say that I did not set a timer, but I will not filibuster.

It is almost certain that every voter in every election has been profiled based on almost everything they watch, read and listen to online. That is problem number one. The second problem is that the real-time bidding industry, which involves advertising, is a cancer eating at the heart of legitimate media. That works in at least two ways. If a person visits the website of The Irish Times, for which I used to work and for which Dr. Lillington currently works, and reads about a luxury car, and then later in the day that person goes to a less reputable website, it is very likely he or she will be targeted with an ad for a luxury car because the companies that received the information profiling that person as a high-value Irish Times reader interested in cars can now, at a significant discount, show him or her the ad on the poor quality website.

That happens even if the viewer is not human. What about an ad for a bot? Not only does audience arbitrage, as it is referred to in the industry, commoditise the unique audience of a worthy publisher, it also allows a criminal to operate fake people who pretend to view and click on ads. This diverts an estimated $5.8 billion to $42 billion per year away from worthy publishers, out of the wallets of real advertisers and into the pockets of criminals.

There is a glimmer of light, though. I admit that it comes from my company. We have been pioneering a new system of private ads that are opt-in. The engagement rate has been sky high over the past six months. We are proving that privacy can be good business.

We at Brave urge the grand committee's distinguished members to take a single specific action, that being, to pressure with their intergovernmental weight the two entities that control this global system. The first is called the Interactive Advertising Bureau, IAB. Its largest members are Facebook and Google and it controls the rules about how real-time bidding, RTB, works and what can be in a broadcast. The second is Google itself, which has its own proprietary version. We have given detailed evidence to 16 regulators through colleagues across Europe and we have seen investigations triggered under GDPR into both entities. This is good. Twelve months later, though, we are still awaiting enforcement.

If there is one thing we can leave the grand committee with, it is a plea to stop the business model of the bottom of the web and starve the data brokers who enable micro-targeted disinformation. There is an easy way to do that.

I thank Dr. Ryan for concluding on time. I now call Mr. McNamee.

Mr. Roger McNamee

It is a great pleasure to be here and to address the grand committee. I will reinforce everything that Dr. Ryan said. Everyone should think of that as the fundamental backdrop to what I am saying.

In the US and other countries, the institutions of liberal democracy are losing their ability to serve the needs of constituents. Many factors contribute to this. Reduced funding has historically been a major one, but, increasingly, the dominant role of technology in society is undermining the ability of liberal democracies to operate. Internet platforms have exploited the weaknesses of democratic institutions and accelerated their weakening. Platforms have positioned themselves to replace democratic institutions with initiatives such as Alphabet's Sidewalk Labs waterfront project in Toronto, Facebook's Libra cryptocurrency, Amazon's efforts in law enforcement and Microsoft's services for governments. The success of Internet platforms has produced harm to public health, democracy, privacy and competition on a global basis. The driver of that is the algorithmic amplification of hate speech, disinformation and conspiracy theories, as well as micro-targeted advertising based on massive surveillance. Similar to the chemicals industry of the 1950s and 1960s, Internet platform profits are inflated because the industry does not pay the cost of the harm it causes. This is important. As Professor Shoshana Zuboff has noted, Internet platforms are gradually displacing democracy and consumer choice with algorithmic processes.

To protect competition, governments have an existing anti-trust toolkit. To protect public health, democracy and privacy, however, they need new tools. They need to constrain the business model of surveillance capitalism. The best way to do this is to declare that personal data are a human right, not an asset. This would limit business models to first-party intended uses of data. There would be no third-party commerce or use of private data, no predictive models based on personal data, no web tracking and no corporate surveillance. In this, I would like to go beyond the Brave solution.

Internet platforms behave as though governments lack the tools and political support necessary for meaningful regulation. Our challenge is to prove them wrong. We have to be prepared to shut them down for periods of time when they misbehave, given that they are clearly defying democratic governments and will persist in doing so until there is an incentive to change. I have spent 34 years as a technology investor. At one time, I was a mentor to Mark Zuckerberg and Sheryl Sandberg. I cannot be absolutely certain that I am right, but I am very confident that I am not wrong. I thank the grand committee for its time.

I thank Mr. McNamee. I call Mr. Nimmo.

Mr. Ben Nimmo

I thank the grand committee for the opportunity to attend. I will focus my comments on electoral interference and large-scale disinformation operations because these are what I study on a regular basis.

This is a vast and fast-moving problem set. According to the Oxford Internet Institute, 70 countries are now reported to be running organised social media information operations, up from 48 last year. We do not have enough data to prove whether this stems from a rise in operations, a rise in reporting or both. Either way, it indicates a global phenomenon. Most of these operations are aimed at domestic audiences, but we must remember that the Russian operation that targeted the US from 2014 onwards also started out by targeting the domestic opposition.

The evidence suggests that a state that has the capability to run domestic information operations can quickly pivot to external targets if the political need is there. Russia did so in 2014. Saudi Arabia did so after the murder of Jamal Khashoggi. China did so when the Hong Kong protests began. Nor is this limited to state actors. We saw the far right in Britain and America trying to interfere in the French presidential election in 2017. These operations do not solely play out on social media platforms. They also include websites and television stations. They can include on-the-ground events and local activists, some of whom are unaware of the role they are playing.

All of these problems are underpinned by a perception that online information operations are easy, cheap, effective and profitable. Since 2016, the narrative has emerged that Russia managed to tip the balance in the US election by running social media trolls. That narrative is significantly flawed, but it has caught on.

Unscrupulous marketing companies around the world are promising to "change reality" for their political clients through social media campaigns. Fake amplification on social media is very cheap. One can buy a three year old YouTube channel with videos already uploaded for just $1. Domestic actors on both sides in the US have experimented with Russia’s playbook.

However, we also know that the environment in 2019 is much less permissive than it was in 2016. The platforms, law enforcement and open source researchers are all actively hunting influence operations online. The rate of takedowns has accelerated dramatically since early 2018. Over the past 12 months, we have seen more than 50 takedowns just from Facebook, covering operations from some two dozen countries. That has forced interference operations to sacrifice engagement to stay concealed.

In this environment, I bring four urgent needs to the committee's attention. These are not the only four, but they are the areas where parliamentary work can have most immediate impact. First and of most direct relevance to elections, parliaments and political campaigns must urgently improve their own cybersecurity to prevent the sort of hack-and-leak operations that Russia used to such devastating effect in 2016. This is not a duty that can be passed on to the social media platforms. Every parliament and every campaign should ensure that all its staff have cyber training and contingency plans in place. This is expensive and many campaigns will argue that the money would be better spent on ads, but it is much less costly than seeing their emails leaked to the press a week before the election.

Second, we do not yet have a deterrence system in place. We have seen individual nations react to interference attempts, but we do not have a credible system for imposing unacceptable costs on foreign actors who attempt to interfere in elections.

Third, we need legislation that imposes systematic costs on the commercial operators who sell fake accounts or hire out their interference campaigns. Two weeks ago, we saw the first case of the Federal Trade Commission fining a US company for selling fake engagement. Social media companies can, and do, ban such operators from their platforms, but they cannot impose a direct financial cost. The black market in fake accounts is the easiest place for hostile actors to buy their assets, as China demonstrated over the Hong Kong protests.

Fourth, parliaments should lead discussions on how to reduce polarisation online, both through regulation and through education. This is a long-term challenge, but we should always remember that if we did not have domestic trolls, the foreign trolls would not have anyone to pretend to be. Such discussions will require technical analyses of how the platforms' algorithms suggest friends and content for users, but they will also require social analysis of how users select their online identities and tribes and how they can be encouraged to broaden them. Every human has a complex pyramid of identities in real life, but the dynamics of social media often reduce that to one-dimensional tribalism. If our parliaments can work across party lines and lead the debate on how to reverse the spiral of ever narrower partisan groups online, that would mark a step towards reducing the scope for online polarisation and online interference.

I thank Mr. Nimmo. Our final witness is Ms Cadwalladr. She has five minutes, and I will indicate after four to let her know she has one remaining.

Ms Carole Cadwalladr

I thank the grand committee for inviting me to attend and for the focus it is bringing to these important issues. When I accepted the invitation, I thought that its British MP members would be in attendance.

The fact they are not here today because Parliament was suspended two days ago in preparation for a general election is profoundly disquieting on many levels. The UK Digital, Culture, Media and Sport Committee, DCMS, has painstakingly detailed the risks to our elections, and yet the Government called this election having done absolutely nothing to address them. It has failed to undertake any of the committee's recommendations, which is a gross dereliction of its duty to protect the British public.

Nobody can be in any doubt about the risks to our democracy that are posed by these Silicon Valley tech platforms. I have spent three years investigating these risks and we now know many facts. We know the Brexit vote was fraudulently and illegitimately conducted. We know the tech platforms, particularly Facebook, facilitated multiple campaigns to break the law. We know the authorities have entirely failed to hold these perpetrators to account. We know that Britain is set to leave the European Union on the basis of this fraudulent and illegitimate vote. The failure to reckon with these crimes has led to a situation where the man who led one of these campaigns, Vote Leave, which carried out the single biggest electoral fraud in my country for more than a century, is now the Prime Minister, and the man who masterminded this scheme is his chief adviser. This is not a situation that should exist in any well-functioning democracy. Britain is now a warning to the rest of the world because what happened in our country could happen in this country too. The vast unchecked power of the Silicon Valley technology companies represents a truly global risk, and it is not even clear that democracy will survive them.

It is absurd that I am here today but another invited witness, Mark Zuckerberg, is not. The contempt he has shown to the nations represented here is extraordinary. We are in a truly staggering situation where a single company plays an absolutely central and pivotal role in elections in almost every single country across the world, yet it is answerable to none of them. This cannot and should not go on. I urge Congressman Cicilline to invite the committee to Congress for its next hearing. Mark Zuckerberg needs to be subpoenaed by this committee. He needs to answer proper, rigorous and informed questions. I believe Mark Zuckerberg has deliberately misled Congress. Two weeks ago, Representative Alexandria Ocasio-Cortez asked Mark Zuckerberg when he and Sheryl Sandberg learned about Cambridge Analytica. He said he could not remember but he thought it was in March last year. This simply does not bear scrutiny.

The entire story that Facebook has told about Cambridge Analytica, including who knew what and when, is crumbling. Throughout 2017, I and other journalists were writing about the company and we now know, thanks to the SEC report, that Facebook had been lying to reporters, including me. That, essentially, means it was lying to shareholders. We know that Facebook employees knew about Cambridge Analytica abusing Facebook data before even the very first report by Harry Davies in The Guardian, again thanks to the SEC report. Mark Zuckerberg claims to have known nothing until March last year. This defies belief. Yesterday, we learned that the California Attorney General had tried to subpoena emails between Mark Zuckerberg and Sheryl Sandberg that would clarify all of this, but Facebook has repeatedly refused to hand them over. One has to ask why and what exactly it is that Facebook is trying to hide.

Facebook cannot be trusted to run the world's elections. No company can. I hope the company moves towards a total ban on micro-targeted political advertisements and that it seeks to obtain the forensic evidence of what actually happened on Facebook's platform in 2016 in the US election and the EU referendum. This information cannot remain the private property of a private company.

I thank Ms Cadwalladr. I acknowledge Lord David Puttnam, who will be representing the UK on this committee today and we are delighted to have him here. I will go through the list in alphabetical order. Every country has five minutes and I will indicate after four so speakers know they have one minute remaining. That includes the answers, so if witnesses keep their answers short and to the point, it will allow other countries to come in with their questioning. I call Ms Carol Brown of Australia.

Ms Carol Brown

Dr. Ryan talked about stopping the business model and said there is an easy way to do it. Would he like to elaborate?

Dr. Johnny Ryan

I would. This is part of the business model. When we are talking about hundreds of billions of broadcasts leaking data, one would imagine there are thousands of companies involved and one would imagine correctly, but there are only two standards that decide what data are being leaked around the place. One can act against the two entities that control those standards.

Ms Carol Brown

To follow up, I want to talk about obstacles but, given the testimony today, and Ms Cadwalladr highlighted her belief around the British Prime Minister and his chief adviser-----

Ms Carole Cadwalladr

It is not my belief. That has been documented by the Electoral Commission.

Ms Carol Brown

Is Ms Cadwalladr saying there is no political will in Britain to change the way the system is currently working?

Ms Carole Cadwalladr

In terms of changing our electoral laws, that depends on the Government. The recommendations have come from the Electoral Commission, from the DCMS committee and from the Information Commissioner's Office, ICO. There is a whole suite of recommendations that say our electoral laws do not work, are completely inadequate, are not fit for purpose, and are placing our elections at risk, and the Government has failed to act upon any of them. That is why calling an election in these circumstances is so particularly troubling.

Ms Carol Brown

What would the panel like to see come out of this hearing?

Ms Carole Cadwalladr

The committee is in this leadership position because it has collective power in terms of countries acting together. There have been these very strong recommendations from bodies, such as the Institute of Practitioners in Advertising, IPA, on a total moratorium on micro-targeted political advertisements. At the moment, we know these elections are not safe, and that has been documented. Until the platforms can prove they are safe and prove they will stay within the confines of the law, I think it is perfectly reasonable to have a pause, for which the Information Commissioner in Britain has asked, or a moratorium to give a chance to assess and to gather in the expert evidence, and see if the current situation is beneficial to our respective democracies.

Mr. Milton Dick

I am interested in Dr. Lillington's evidence about the impact of social media platforms on democracies and elections. I am glad everyone in this room is committed to that, but it is really the people outside of this room with whom we are having a conversation. In regard to the anti-democratic, pro-regime nations where social media is used for freedom of speech, if we were to have a global set of rules or guidelines on standard procedures, how would that impact struggling democracies?

Dr. Karlin Lillington

For electoral processes?

Mr. Milton Dick

Whether in regard to truth in advertising, standards or media panels.

Dr. Karlin Lillington

The note of caution from the Myanmar legislator is one to keep in mind, in that we do not want platforms destabilising elections.

There needs to be a recognition that many of these countries are in a transitional period and are fighting for greater democracy. In regard to decision-making around processes, the NGOs and the activists are best placed to define by country the greater specifics of what might need to happen. We need to keep in mind that the remedies we try to take in established democracies for many of these problems, or to bolster elections, can work against the human rights defenders and emerging democracies.

Mr. Milton Dick

We do not want a situation where an election panel set up to observe elections in a struggling democracy proves counterproductive, for example in a nation that already has an electoral commission or an independent body overseeing that process. It would be counterproductive to have such a panel imposed on pro-democratic reform or reformers.

Dr. Karlin Lillington

Yes. My message would be that the people to talk to would be the NGOs and the human rights defenders who would know this area in much greater detail than I would in terms of specific remedies.

I thank Dr. Lillington and Mr. Dick and I now call Ms Keit Pentus-Rosimannus.

Ms Keit Pentus-Rosimannus

I thank all of the witnesses for an excellent start to the meeting. My first question is to Mr. Ben Nimmo. I admire his dedication and his previous work in the Atlantic Council's DFRLab. I echo Mr. Nimmo's comment that disinformation operations are not some sort of innocent, so-called meddling. Rather, they are blatant attacks against the core of democratic states. In Russia, as we know, they are part of the military doctrine. They are used as weaponry tools, and should be treated as such. Hence, a proper co-ordinated international response is definitely needed. It is something we have been missing thus far.

There have been recent examples of how Russia-backed influence networks and manipulation techniques were used in Africa. They did not use fake profiles created in the St. Petersburg troll factory on that occasion: they hired locals. They basically bought the locals' profiles. I ask Mr. Nimmo to comment on whether this is something we can expect in upcoming major elections and to elaborate on what we should expect next.

Mr. Ben Nimmo

I thank Ms Pentus-Rosimannus for the question. I would nuance the African finding slightly. There was a combination of fake profiles and genuine profiles, and they were using some genuine people on the ground. It also looked like they had bought accounts from people. One of the problem sets here is the black market in recycled accounts. Something that we have seen consistently over the last three or four years is information operations taking more steps to try to hide because they are being hunted more. The latest takedown of probable Internet Research Agency accounts was announced approximately one week before the African announcement. They were going to extraordinary lengths to try to hide their traces. Mostly, they were copy-pasting genuine American comments. This means that one has to be looking for signals that are not content based because the content is all coming from somewhere else. It also means that they are getting much less engagement because if all they are doing is parroting somebody else's words then they do not have their own voice.

I think what we will see is more attempts to masquerade as other people. We have already seen many attempts to co-opt local activist groups. Part of the challenge is going to be communicating with activist groups ahead of the elections and teaching the broader electorate the type of cautionary steps they need to take before engaging with somebody else's Facebook group that is asking them to take action on the ground. We need to teach them how to verify who is behind it. There are basic steps that one takes in real life that people seem to forget online. Much of this is about teaching normal users the same wariness online that they would have in real life.

Ms Keit Pentus-Rosimannus

I thank Mr. Nimmo. On the deepfakes and synthetic reality, one can find the source when it comes to ordinary fake news, but it is so much more difficult when it is about the deepfakes. What developments are happening in that regard?

Mr. Ben Nimmo

In the long term, deepfakes have the potential to be a problem, but they will be the greatest problem in communities which are predisposed to believe them anyway. Those communities will be predisposed to believe cheap fakes as well. The simplest way of faking is to repurpose a video from some other time and place and say that this is happening right now. That kind of content still goes viral. In the long term, deepfakes are a challenge, but they are secondary to the problem of emotional engagement and the virality which is driven by outrage. If we start looking at outrage-driven virality, then we will have a good chance of dealing with the deepfake problem at the same time.

Ms Keit Pentus-Rosimannus

I have a similar question for Dr. Johnny Ryan in regard to deepfakes because he also highlighted micro targeting, which is definitely a problem. Lies tend to spread very fast even without targeting. How does Dr. Ryan see the deepfake developments?

Dr. Johnny Ryan

My focus is on the bottom of the web, that is, on the websites that are operated perhaps by a crank conspiracist. Once upon a time, he or she would have done this in a shed at the bottom of his or her garden but now we have a business model that allows this to become a viable business. Instead of it being a single crank, there is an incentive to host and operate whatever content, including deepfakes. My suggestion, in addition to everything we have already heard, is a focus on undermining the business model, the source of data and the source of cash.

I thank Ms Keit Pentus-Rosimannus, Mr. Nimmo and Dr. Ryan and I now invite Mr. Tom Packalén to put his questions.

Mr. Tom Packalén

I thank the witnesses for their excellent presentations. It is interesting to be here and to listen to all of them. I thank Ms Cadwalladr and Mr. McNamee for their great job of work in regard to Facebook and the Cambridge Analytica issue in particular, which was very bad for artificial intelligence, AI. It showed both sides of what AI can do. We had thought AI could be used only for good but it can be used as a weapon to do bad things.

There are many problems that need to be addressed by way of regulation, including hate speech, fake news, identity bubbles, election influence and interference. There are also many algorithmic problems. What is the biggest problem? Where should we start to solve this problem?

To whom is Mr. Packalén's question directed?

Mr. Tom Packalén

It is for all of the witnesses.

I ask Mr. McNamee to respond first.

Mr. Roger McNamee

The danger on a global basis is that each country experiences a different manifestation of the problem and so the tendency is to attack the symptoms. The problem could be with young people, teen suicide or bullying or, perhaps, elections and democracy or privacy. All of those problems derive from the underlying business model. It is a business model based on capturing and maintaining attention. Please do not be distracted by what in the United States we call First Amendment issues or free speech issues. Algorithmic amplification is a business choice. It is completely independent of speech. One can put all voices onto social networks without harm. Where the harm comes in is that the amplification looks to the most basic elements of evolutionary psychology, specifically fight or flight. That is why outrage is so effective. The problem with outrage is that it is politically asymmetric. One does not see outrage being equally effective across the entire political spectrum and so it has resulted in a uniform shift to the far right globally. My focus is on how to stop this from happening.

It is very important to go after both algorithmic amplification and micro targeting because they together are what create the problem. The great, unique opportunity of the committee is that it is the only place where we convene people and bring together all the different experiences. It gives the opportunity for the political will to migrate to the root causes. I truly hope the committee will flourish and that we can do everything in our power to support it.

Mr. Tom Packalén

How to regulate poses a great challenge because it is so complicated and the technology is in the hands of the companies alone. Mr. McNamee knows about Facebook. Ms Cadwalladr stated Mr. Zuckerberg knew about Cambridge Analytica last year. It is deep inside what such companies do. It is a matter of algorithms.

Ms Carole Cadwalladr

Mr. Packalén is correct. There are multiple very complex issues but there are also some simple, straightforward issues. A foreign company played a central role in all of the representatives' elections and is unaccountable to any of their parliaments. It is wrong to consider it in any way a political issue. In the context of the national security of countries, this foreign company is playing an absolutely pivotal role. Politicians have no idea what role it plays in their elections and it is beyond the reach of any of their laws. That is completely unacceptable and profoundly worrying.

Politicians have the power. All of us are helpless and powerless in the face of such massive companies but politicians have come together, as a collective body of 12 parliaments, and have the ability to make a stand and say it is unacceptable, which I hope they do.

I thank Ms Cadwalladr and call Ms Goguadze.

Ms Nino Goguadze

I thank the witnesses for their presentations. My first question is for Mr. McNamee. Most online information is provided by consumers. Are people aware of the web data collection process? Do customers have a clear understanding of how data provided by them could be used in the future?

Mr. Roger McNamee

There is a grave misunderstanding of the data sets involved. My estimate, which is by no means precise, is that less than 1% of the data that Google and Facebook have are the data contributed voluntarily by consumers. The vast majority are acquired in the third party marketplace, either by tracking people online or by acquiring bank statements, credit cards, location from mobile vendors or purchase history from various companies and products. The core issue is that the surveillance takes place largely outside the awareness of people and without their direct participation. In the case of credit card processing companies, for example, which sell their customers' information, no consumer has a direct relationship with any of the companies or has any control over them.

One of the challenges is making people aware that the data being given up are not used to improve their experience, except to a small degree. Most of the data are used in a way that has social impact. In Myanmar, one would not have needed to be on Facebook to be dead but just to be a Rohingya, while in Christchurch, one did not need to be on Facebook or YouTube to be dead but just to be in the mosque. That is the problem we face. It is no longer an interpersonal, one-to-one relationship. Data are being used against whole populations, which is why the committee is so important. They are being used against populations globally.

Ms Nino Goguadze

Which entity could be responsible for informing people properly?

Mr. Roger McNamee

I do not know. I do my best every day, as does everyone on the panel. I know many of the people in attendance. Representative Cicilline, who is a friend of mine, has done an amazing job of making people aware. I attended a meeting two nights ago for 2,300 people in Florida, and I am sure 95% of them did not know what I have just stated. Making people aware remains the challenge. This is an opportunity for governments to educate citizens and in countries such as Georgia or Estonia, an effort has been made to do that because there is a clear threat to their viability.

Ms Nino Goguadze

I thank Mr. McNamee. My second question is for Dr. Ryan. In Georgia, any possible electoral interference is considered a managed process that could have been orchestrated by, or linked to, forces outside the country. Russia is often the suspected actor, not only in Georgia. To address the problem properly, do we need to reveal who stands behind the orchestrated process and how they operate, and to get access to data collected on social networks?

Mr. Ben Nimmo

In general, I am in favour of transparency in respect of information operations as an educational tool. If people can be shown how it worked on the previous occasion, it will inoculate them against how it will happen the next time, and put pressure on the information operator to change its ways because it will have to find a new tactic. There have been at least four generations of takedown of content from the Russian Internet Research Agency, RIRA, and every time it has had to change tactics precisely because the previous attempt has been caught.

The more transparency, the better. The challenge is the attribution. We do not want people to say an instance must be the Russians because those behind the content behave the way the RIRA did on the previous occasion. The attribution very much comes down to how much the platforms can attribute. Transparency is important and educational but there is a limit beyond which attribution will probably not be able to go. We are in the space where one tells what one can but one has to accept that, sometimes, if the threat actors are clever enough, a hard attribution will not be possible. We want to avoid hysterical attributions to the effect that it is one bot online and that, therefore, it must be a Russian operation. Sadly, there has been much of that in this space recently. Part of what needs to happen is a realistic assessment of what we see, rather than an overexcited one.

I thank our guests. We might agree that political micro targeting can be dealt with by national legislatures but that is only one aspect of a very complex problem. If we look more deeply at the business model, given that Dr. Ryan, Dr. Lillington and Mr. McNamee all advocated going to the source of the problem, the main question, if we can get agreement today, is where the governance is and where the collaboration is in the international regulation to make it work. It is hard to do it as a nation state. Is it at the European Union, the UN Internet Governance Forum, some version of ICANN or the IAB, which Dr. Ryan mentioned? What has the best structure to make the business model change?

Dr. Johnny Ryan

In the case of real-time bidding, strangely, this city is the crucible, along with Brussels. The IAB is regulated, under the general data protection regulation, GDPR, from Belgium, and Google is regulated, under the GDPR, from Dublin. The two regulators have appropriate law and massive legal authority. The Belgian data protection authority is investigating the IAB, while the Irish data protection authority is investigating Google. The Irish regulator received my complaint in respect of the issue well over a year ago. I understand she is hard pressed and it is difficult to process a large amount of work. Nevertheless, elections are approaching in various parts of the world and these are essential, life-and-death matters.

I suggest that the forum is, in fact, in this House, in the Chamber.

That is because our regulator can be scrutinised by no one other than Members of the Oireachtas. I can sue the Data Protection Commissioner if she does not act effectively on the complaint and evidence I have given her. That is something we have in reserve. A far better way of forcing action is to provide resources to regulators in both jurisdictions and to make sure there is appropriate scrutiny of the regulator for these critical things. I realise I am going over time. The whole idea of this regulation was that it was risk-based. We would focus on the key and important players and leave the greengrocer, for example, alone. In that case, the biggest data breach we have ever had in digital history should be the number one item, and it does not seem to be.

Mr. Nimmo would like to contribute. I ask him to be brief.

Mr. Ben Nimmo

I do not have an answer but I will make a nuanced point here. So far, we have been talking about the Silicon Valley tech giants and the social media platforms. We are already in a world where those are not the only major platforms around. We are also seeing major platforms from China, for example. Whatever we build in we will have to do so with a sense that we will not just be regulating for companies based in democratic countries but also for large social media platforms located in non-democratic countries. How we address that in the regulation will be a big question.

I thank the witnesses for their contributions and attendance. I have two questions which I will tie together and try to keep as brief as possible. The first relates to an incident from social media yesterday. As we know, the British general election is under way and we miss the presence of our British colleagues, with the exception of Lord Puttnam who is with us. The official Twitter account of the Conservative Party sent out a tweet with a mock-up of the Opposition Brexit spokesperson, Keir Starmer, poised to answer a question he was not asked. This is possibly the first time the headquarters of a mainstream political party has run fake news. We have seen this from many fringe actors but this is probably the first time I have seen a mainstream party officially engaging in fake news. It is probably no surprise because the main protagonist in that party is on the Vote Leave side.

Have we reached peak social media difficulty? The flipside of that question is that, as legislators, we have to anticipate the counter-arguments and play devil's advocate to some extent. With every new technology, from Gutenberg's printing press to the Pony Express when riders crossed the United States on horseback delivering telegrams, come those who will exploit these technologies. We were discussing this issue last night. Regulators and legislators have always tried to clamp down and sometimes it takes a while to get on board. Sometimes the public get on board faster and apply their own filter by taking fake news or, previously, a bogus telegram with a grain of salt. I fervently believe legislation and regulation are needed. To what extent have we reached a peak? Are people moving off the platforms already or will they remain an ongoing threat for some time?

Dr. Karlin Lillington

I would like to make a quick point that we need to recognise that democracy, as a global aspiration, will be undermined when the political leaders of the countries that are supposed to be the central democracies are themselves using social media to lie, spread disinformation and question and undermine elections. That emboldens and gives comfort to the dictators and autocrats in the most repressive nations and undermines activists. We are leaching our own democracies of any moral authority. I know this is a broader social issue that goes beyond concern about the platforms. However, these problems are tied together and this is something to think about.

If we have time at the end, I will come back to witnesses as I know there is more than one speaker from some countries.

Mr. Ben Nimmo

On the foreign threats, we have not yet hit peak activity because the idea is out there that this is effective but we are already past peak vulnerability. In 2016, nobody was looking for this and now people are.

I will bring in our guests from Singapore next. We have Mr. Amrin Amin and Dr. Janil Puthucheary.

Mr. Amrin Amin

I will touch on the point of business models that was mentioned by many members of the panel. I direct my question to Mr. Roger McNamee. The underlying issue is the business model. Mr. McNamee spoke about surveillance capitalism. If we dig deeper, the issue is that the more engagement, the better sensational news sells. Mr. McNamee spoke about stuff that triggers a fight or flight reaction as well as hate speech, conspiracies and disinformation. Given the various technological solutions at hand, there is little or no incentive for social media companies such as Facebook to prevent these harms as it would affect their bottom line. Is that correct?

Mr. Roger McNamee

It is worse than that. If one thinks about it from the perspective of the social media companies, they would like to appear to be co-operative without going after the business model. When they put in place measures like moderation to try to capture things after the fact, the thing to understand is there is latency, that is, a lag between when content goes up and when moderators can take it down. Almost all the harm from this stuff happens very quickly. It is not just that they have no incentive to do it. They have no technological way to solve these problems. The scale is so great that moderation cannot work. The business model has to be changed.

To respond to the previous question because these matters are related, we must remember that these companies are not sitting still. The business model is migrating from advertising-based businesses to taking over the functions of liberal democracy.

Mr. Amrin Amin

The key issue is they have community standards to appear to be helpful but in fact there is inaction. This is evident in various incidents we have seen around the world, for example, in what happened in Sri Lanka where there was a refusal to take down a post that could clearly qualify as hate speech and a refusal to fact-check political advertisements. There is clearly a profit motive. They like to have community standards to give a veneer of co-operation and certain code words that appear to be helpful, but in fact the underlying purpose is the insatiable thirst for profits. Is that right?

Mr. Roger McNamee

That is correct. I would like to reiterate what Mr. Nimmo said a moment ago, that is, that we have passed peak ignorance on this issue. The problem is the technologies are evolving and the surveillance is becoming so much deeper with technologies such as Alexa, Google Home and the ubiquitous facial recognition that is going around the world. We should keep in mind that because these companies control information flow - people get their information by searching Google and Facebook - they have the ability to use the data they have, the voodoo dolls they have created of each person which are a digital representation, to control search results and, therefore, control the choices available to people. That business model is the reason they can migrate from advertising into replacing government functions.

Mr. Amrin Amin

Because of this, Mr. McNamee would advocate for governments to have powers and effective levers to intervene swiftly to prevent harms, including requiring corrections, curbing the spread of disinformation by taking out bots and protecting people against foreign interference. Would that be correct?

Mr. Roger McNamee

I would answer that question by saying that, right now, there are no disincentives for bad behaviour so we need to scale up the punishments by perhaps two orders of magnitude. The punishments need to be measured in trillions not millions of dollars. The companies need to understand that undermining democracy should be punishable by extermination of the company. There is no reason any company should have the right to be an enabler of the destruction of the society in which it lives.

Mr. Amrin Amin

In short, there have to be laws that ensure social media giants do not just focus on profits and are accountable to governments and the people.

Mr. Roger McNamee

If I may, I would like the focus to be on their business models to prevent the problem from happening in the first place.

Mr. Amrin Amin

Correct.

Mr. Roger McNamee

If we try to do it in the way we are doing it now, by attacking symptoms, we will always be playing a loser's game because they will migrate away from this business model to new ones that are even more harmful, using the same underlying framework.

Mr. Amrin Amin

The regulations have to target the underlying business models to require them to adjust, change and adapt to ensure they meet-----

Mr. Roger McNamee

If personal data is made into a human right and is, therefore, not an asset that can be traded, that will take care of this problem.

We will move to the United Kingdom and Lord David Puttnam.

Lord Puttnam

I should explain I am here in a personal capacity because our parliament was dissolved yesterday. I bring apologies from Damian Collins, MP, for whom I was supposed to act as sidekick. Suddenly, I have become the main part of the show. One point I would like to get across is that my select committee would very much like to take evidence from the witnesses. We have a problem because unless we get that evidence, either in writing or orally, I cannot use it in our select committee report. The witnesses will definitely receive invitations and we will find a way of circumventing those obstacles. There is a wonderful irony here. I am an unelected politician, yet the one security I have is that I know our select committee will be reformed in January, whereas if I was in the House of Commons, I would have no such certainty.

It is a rather bizarre situation in which to find oneself. As I see it, we are on the horns of a major dilemma. The bad actors - Russia, China and whoever else - have no real interest in achieving specific outcomes, which is very unusual. They do not, for example, really care who wins the British election. What they care about is seeding confusion, disharmony and disrespect for democracy. The very worst that could happen - I suspect it might happen on 12 December - is that there would be a contested election. That would be a massive win for the people who wish to deal in disinformation. I have no idea what the result will be. Likely as not, it will end up in the Supreme Court. There is irony in all of this. The more one can create confusion, the more one justifies the sovereignty of the social media companies because, in their competence, they begin to look as if they are almost a safe haven, as opposed to the incompetence of plural democracy. That is a very serious problem and the problem is amplified by the fact that within our own nation states, going by all of my experience of 20 years of legislation and as I am sure Mr. Cicilline will confirm, if a government wants to defeat something that is coming at it, it will try to split ranks and sow confusion. What is happening is that each nation state is looking at the benefits of employment, the tax take and so on versus its own security and the security of democracy. That becomes an internal discussion within each country. It is certainly true in the United Kingdom, Ireland and other countries. It further confuses the entire issue.

We are looking at various possible solutions. I totally support everything Ms Cadwalladr said. What she did not mention that is bang up to date is the suppression of the Grieve report. We have a report that has been right the way through all of our institutions, but the Prime Minister personally has refused to release it. We had a question in the House of Lords on Tuesday when the Front Bench spokesman, the Minister, confirmed that it was a personal decision of the Prime Minister. That is outrageous. Our entire democracy relies on what is termed the "good chap" principle, that the Prime Minister of the day, man or woman, behaves like a good chap. I am sorry about the gender inflection, but that is the way it is. We are going through a process, whereby all norms are being defied. That is playing to the benefit of the major social media companies, which is very worrying.

We have had one suggestion made to us as a committee by which I am intrigued, namely, the possibility of having a Euro SWAT team of high competence that could be moved around in situations where there was palpable vulnerability to democracy on the grounds that the problem was so enormous no one country would be able to put together the permanent competencies to deal with it. Having a European SWAT team that would be constantly looking for it and that could be moved around might be one at least temporary answer to the problem we are discussing. Does anyone have a view on it?

Mr. Ben Nimmo

On having a SWAT team, absolutely, what we need is far more investigators in this space who would do this work all the time. There are probably more countries conducting information operations than there are people studying them; therefore, there is a massive problem. I would not, however, want to see a Euro SWAT team run by governments because giving governments the right to decide what goes up and what comes down is fundamentally something to which I would object. One nuance on information operations is that quite a lot of the time they do have an interest in achieving a specific outcome. We could see, for example, that the Russian operation in 2016 was very definitely about stopping Hillary Clinton from being elected. There are outcome-specific cases. A lot of the information operations we see are about increasing polarisation. That is where we need to have a conversation, not so much about the operations but about the underlying social dynamic of why polarisation happens in the first place.

We will move to Congressman David N. Cicilline from the United States of America.

Mr. David Cicilline

In the wake of the Cambridge Analytica scandal, the committee has undertaken really important work to advance our understanding of data privacy, threats to democracy around the world in the digital age and the role of anti-trust and competition in the digital economy. These problems are not confined within geopolitical borders. They are real and affecting our democracies in fundamental and lasting ways. It is essential that we work together to find solutions to restore the Internet's promise. The committee was established to work out some solutions to these problems, as Chairman Bob Zimmer noted during the last meeting, and I strongly support that goal. To quote one of our panellists, Ms Cadwalladr, democracy is not guaranteed and not inevitable. We have to fight and we have to win as we cannot let the tech companies have this unchecked power. I want to begin with a question to Ms Cadwalladr. I thank her for the incredible work she did in her extensive investigation and reporting on the impact of Facebook on elections and our democratic institutions. Her work reinforces why a free and vibrant press is so critical to our democracy and holding those in power to account. One of the challenges we face in the United States is having people understand the impact of this one-on-one micro targeting of political and issue advertisements and how they work. I wonder if Ms Cadwalladr could speak to this for one moment, particularly about micro targeting in the political context.

Ms Carole Cadwalladr

What is most troubling about it for me is that individuals have no idea why they are being targeted or on the basis of what data they are seeing an advertisement. Somebody who lives in the same house might be targeted with a different advertisement and people would not be aware of it. This happens in darkness; therefore, it is influence being brought to bear on individuals based on unknown information that is being collected on them across the Internet. It is a really different idea about politics and elections from those we have had at any other time. It seems to be completely incompatible. We previously had a situation where parties and politicians would come up with ideas and place them in the public domain. We would have had a discussion about them and then voted on them. Now they are finding out information on you and in what you are interested, crafting a particular policy that somehow might speak to it and showing it to you in darkness, as it were. That is just not known to the rest of society at all and it is profoundly anti-democratic.

Mr. David Cicilline

Dr. Lillington has made some public criticisms of Facebook's decision to allow false political ads on its platform, as have I. There was an open letter published by hundreds of Facebook employees who have criticised this policy. Can Dr. Lillington see any defence for it?

Dr. Karlin Lillington

No. That is the very brief answer. It connects directly to what Ms Cadwalladr is speaking to also. Facebook's defence is that it is a free speech issue, that politicians and campaigns should be allowed to say what they want and the voters can decide. But when one can micro target people with advertisements, and then let politicians or campaigns state anything they like about issues in secretive advertisements that the wider public never gets to see, that defence falls apart. If I am highly targeted with a single advertisement that appeals to what they understand about me, I can be persuaded owing to micro targeting, and how does anybody else dispute it? How do people have a discussion at the dinner table with their family if they have not all seen the advertisement in the newspaper or on television? There is no defence.

Mr. David Cicilline

I thank Mr. McNamee for his brilliant work and leadership on so many of these issues. I wonder if he would spend a moment speaking about the implications for the proposal he has suggested to recognise data privacy as a human right. Would it solve the problem in a way that some of the things like purpose specification and a number of other approaches might not?

Mr. Roger McNamee

At the end of the day, when we think about the current model and the end point, in the United States there are 220 million registered voters. In principle, there could be 220 million campaigns, each completely isolated from the next. The metaphor I use is imagine going to the doctor with a broken arm. Imagine if the doctor repaired the arm and then said, "I own your arm now." That is what happens with data. Any corporation that touches a piece of data claims ownership and the ability to do whatever it wants with it. Because there is a free market, they can create what Tristan Harris calls a data voodoo doll, a complete representation of a person's life in all respects. It is effectively part of one's body.

The notion that they are allowed to unilaterally exploit it to predict and manipulate a person's behaviour strikes me as being profoundly inappropriate. We can attack this by going after symptoms or we can attack the problem. In my mind I would like to reset and go all the way back to where we were 25 years ago and say data cannot be used in any way. There would be no models or anything else. We could then have a reasonable conversation about what uses of data were appropriate and which were not. That is instead of starting with the assumption that all things are good and trying to subtract from it. Such a process has brought us to a bad place. The implication is very simple. In the short run both Google and Facebook would lose profoundly in their earnings, but they would both have massive opportunities to rebuild them. To my mind, their profits should not be the first consideration of society. As somebody who has been involved with that world my whole life, I look at this issue while knowing the creative forces around the world will replace those functions instantaneously. It would be only a matter of weeks before one would have a global standard for alternatives based on good business models if they were willing to do it.

We have some more time if others wish to comment.

Mr. Tom Packalén

Mr. McNamee has mentioned that moderation is not possible before damage may be done. It is possible: to artificial intelligence problems, the solutions are artificial intelligence ones. It is impossible to handle those amounts of data manually, as Facebook is doing, but it does not really want to do otherwise. If there was a willingness to pursue these solutions, it would be possible. With respect to business models, there are identity bubbles; therefore, why does it not do "pre-moderation" with artificial intelligence?

Mr. Roger McNamee

Honestly, I do not know the answer to that question. Speaking philosophically, Mr. Mark Zuckerberg and the founders of Google believe efficiency is the most important goal in the universe; therefore, they are trying to eliminate friction. Everything needs to be automated and scalable. As many things as possible would be ignored. The notion of regulations and criticism is a form of friction with which they would not like to deal. If Mr. Zuckerberg puts in place an AI-based moderation system that can find 98% of child pornography, the problem is we would be dealing with billions of transactions and being only 98% right still means there would be tens of millions or hundreds of millions of failures. In the context of democracy, that is completely unacceptable. As much as I believe artificial intelligence will evolve eventually to having very powerful capability, today it is far more limited than the people who are pushing it are willing to admit.

One of the weaknesses whereby the platforms get around existing obligations, and for which we need to legislate new obligations, is the old platform-versus-publisher chestnut. They say they are just a dumb terminal displaying binary code which throws up content and that, in that way, they are just a carrier. To my mind, the companies are curating content and selling advertisements based on personalised content. They are moderating content and making decisions about what can be taken down or left up. They have many of the attributes of a publisher. What do the delegates think?

Mr. Roger McNamee

Algorithmic amplification involves a business decision that should be treated completely separately from the publisher versus platform decision. It is a business choice and they should be liable for the consequences of predictable failure.

Dr. Karlin Lillington

We are well past the time when these platforms should perhaps be termed some kind of hybrid between publisher and platform, with existing laws applying to them; that would make much of this work in a simpler way.

Lord Puttnam

To whom should a European SWAT team be accountable, if not governments? I agree on the point, but we must create an accountability framework.

Mr. Ben Nimmo

I am sorry, but I may have been unclear. I distinguish between accountability in terms of who is paying for it, who is organising it and how it is organised. I envisage having something almost like a police force which would be subject to and accountable to the law but not directly run by the political parties.

Lord Puttnam

Would it be like a specialist Interpol?

Mr. Ben Nimmo

It could be a specialist Interpol and there are many hypothetical models that could be used. We would want to have a body that would have to report to the government and law enforcement agencies but which would not be controlled by any political grouping or party. A government could not tell it what to do.

Ms Carole Cadwalladr

Lord Puttnam spoke about Mr. Boris Johnson personally blocking the report on Russia. I was wondering if he had any idea why Mr. Johnson would be personally invested in blocking a report on Russian interference in British politics. I could offer a couple of suggestions as to why that might be the case, but perhaps Lord Puttnam might like to comment first.

Lord Puttnam

The Hansard debate is very interesting. My belief is that the report is being blocked because it raises real questions about the legitimacy of the original referendum. The last thing in the world Mr. Boris Johnson wants is a debate about the legitimacy of that result.

Ms Carole Cadwalladr

Mr. Johnson was British Foreign Secretary after the referendum in 2016 during, for example, the period in which we know that Mr. Arron Banks and Mr. Andy Wigmore were continuing to visit the Russian embassy in London, while they were also campaigning for Mr. Donald Trump in America. That is the Russian ambassador who was named in the Mueller report as being a conduit between-----

We need to be careful about naming people at this committee.

Ms Carole Cadwalladr

It is the Russian ambassador to London who was named by Mr. Robert Mueller as an intermediary between the Trump campaign and the Kremlin.

Dr. Janil Puthucheary

One of the problems for the committee is that this industry is not adequately regulated. In no other industry domain have we tried to establish a global set of standards before the industry has adhered to national laws or national sovereignty in regulation in the first instance. The tension arises because there is an issue about the accountability of platforms and the varying measures of accountability different countries and jurisdictions apply, depending on the strength of the democratic institutions brought to bear on the problem. I just want to get a sense from the delegates, in the context of the work of the committee, of how we can balance the tension between arriving at some transboundary set of standards and norms and solving the very practical problem of getting tech companies to adhere to local laws in the first instance.

Dr. Karlin Lillington

I suggest the GDPR as a model in the European Union. I have been covering technology issues for 20 years and there was always a very dismissive attitude taken towards privacy issues, that the European Union would never do anything and that the United States could do whatever it wanted. The GDPR forced everybody to rise to the level of privacy offered there.

Dr. Janil Puthucheary

I appreciate that, but the GDPR only covers the European Union.

Dr. Karlin Lillington

Yes, but it has forced countries outside it to rise to a higher level of oversight. It can form a basis to bring more countries to that level.

Dr. Janil Puthucheary

Yes.

We will shortly suspend the sitting as we are approaching the end of the session. Perhaps Senator Higgins might be brief.

I will go straight to the business model and some of the key intersecting issues. Are there issues with data points and special categories of personal data being allowed to be used in micro-targeting or algorithmic construction? To what extent do we economically and algorithmically reward the spreading of misinformation? What is the intersection between Google Ads rewarding misinformation sites and advertisements appearing on trusted sites? Is there space to be exploited by publishers in trusted national spaces such as national newspapers?

Dr. Johnny Ryan

The answer is "Yes". The GDPR makes personal data privacy a fundamental right. If we consider all of the jurisdictions with the GDPR or GDPR clones on the way, they account for over 51% of global GDP.

There is a question about special category data, which is very sensitive information about people. If one were to ask Facebook what it thinks about the fact that I might attend a Catholic church and read about Catholic issues, the response would be that that is an interesting collection of interests, but it will only accept the word "Catholic" as special category data if it is written in the box next to religion. We have to be careful with these industries, even when there are tight legal definitions. Where the enforcers are asleep in the middle of systematic infringement of this new regulatory framework, which is emerging as a de facto global standard, then this place, the home of that framework, will show that the framework is not sustainable.

I am not sure if that data is key in that regard. What they call "observed data" can be personal data. In terms of the ring-fencing-----

I apologise for interrupting, but I have to cut Senator Higgins off. We are on a really tight schedule. I thank the witnesses for being here. We will suspend for 15 minutes to allow our parliamentarians to take a break before we commence the next session.

Sitting suspended at 10.31 a.m. and resumed at 10.45 a.m.