Data Collection by Digital Assistants: Discussion (Resumed)

We are now in public session and I would like to welcome to the meeting from Apple, Mr. Gary Davis, director of privacy Europe and worldwide director of privacy compliance, and Mr. Sam Sharps, head of government affairs, Ireland and the UK. From Google we have Mr. Ryan Meade, government affairs and public policy manager, and Mr. William Malcolm, privacy legal director.

Before we begin I wish to draw your attention to the fact that by virtue of section 17(2)(l) of the Defamation Act 2009, witnesses are protected by absolute privilege in respect of their evidence to this committee. However, if you are directed by the Chairman to cease giving evidence in relation to a particular matter and you continue to so do, you are entitled thereafter only to a qualified privilege in respect of your evidence. You are directed that only evidence connected with the subject matter of these proceedings is to be given and you are asked to respect the parliamentary practice to the effect that, where possible, you should not criticise or make charges against any person, persons or entity by name or in such a way as to make him, her or it identifiable.

I also wish to advise you that any submissions or opening statements you make to the committee will be published on the committee website after the meeting.

Members are reminded of the long-standing parliamentary practice to the effect that they should not comment on, criticise or make charges against a person outside the House or an official either by name or in such a way as to make him or her identifiable.

I remind members and witnesses to turn off their mobile phones or to switch them to flight mode as they will interfere with the television and broadcasting coverage.

Would it be possible to take the opening statements as read, because we have a vote coming up and we have to be in the Dáil Chamber for 2 p.m.?

They are only five minutes, are they not?

Could we have a very quick synopsis?

People are always told to keep them to five minutes, so I will keep it to the five minutes if that is okay. I will start with Mr. Davis.

Mr. Gary Davis

I will try to be quick. I am the director of privacy for Apple in Europe, worldwide director of privacy compliance and the appointed data protection officer in Europe under the GDPR. I thank the Chair and the members of the committee for the opportunity to speak to them today on the important issue of voice assistants and the use of data.

Apple has been operating in Cork since 1980 and we are proud of the many contributions we make to the economy and job creation. We employ 6,000 people in Ireland. Over the last four years, we have spent more than €1 billion with local companies, and our investment and innovation supports more than 27,000 jobs up and down the country.

We believe privacy is a fundamental human right. Our approach to privacy is different from that of other tech companies. We design our products from the ground up to minimise the amount of data Apple collects and we work vigilantly to protect our customers' personal data and give them control over their information.

Privacy is an issue larger than one company or one country, which is why we strongly support the GDPR and advocate for other countries to adopt similar approaches that deliver a highly effective and necessary framework. During the committee’s session last week, the discussion emphasised Article 25 of the GDPR, concerning privacy by design. This is a core value and fundamental principle that Apple has embodied from the beginning. Privacy is at the heart of every product and service we create, and we continually develop innovative technologies and techniques designed to minimise how much customer data we, or anyone else, can access while delivering world-class services to our customers.

Privacy by design applies to all Apple’s services. That includes Siri, Apple's intelligent assistant. Like all our services, our goal with Siri is to create the best user experience while vigilantly protecting user privacy.

We introduced Siri in 2011 as an integral part of our products, helping users get things done faster and easier. This includes tasks like making calls, sending messages, setting alarms, getting directions, finding photos and playing music and television shows, to name a few. Like other Apple services, we minimise the amount of data Siri collects and use that data only to improve Siri. We do not use Siri data to build a marketing profile, and we never sell it to anyone. Our product is the technology we create, not the customer.

We have built privacy protections into Siri according to some core principles that demonstrate our comprehensive approach to protecting user data. First, Siri uses as little data as possible to deliver an accurate result. When a person asks a question about a football match, for example, Siri uses his or her general location to provide suitable results but if a person asks for the nearest supermarket, more specific location data is used.

Second, we design Siri to operate with the most sensitive data on a person's device instead of having to send everything through Apple servers. For example, if a person asks Siri to read his or her unread messages, the content of the message never leaves his or her device and is not transmitted to Siri's servers because that is not necessary to fulfil his or her request. This means that messages are not available to Apple or any other third party.

Third, requests made to Siri are not linked to a person's Apple ID, phone number or email address. Instead, they are associated with a random identifier - a long string of letters and numbers associated with a single device to keep track of data while it is being processed. This is a feature we believe is unique among the digital assistants in use today.
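The de-identification scheme described above, associating each request with a random per-device identifier rather than an Apple ID, phone number, or email, can be sketched roughly as follows. This is an illustrative model only; the class and field names are assumptions, not Apple's implementation:

```python
import secrets

class Device:
    """Illustrative model of a device that tags assistant requests
    with a random identifier instead of any account-linked identity."""

    def __init__(self):
        # A long random string generated on the device; the server
        # sees this identifier, not who the user is.
        self.random_id = secrets.token_hex(32)

    def build_request(self, utterance: str) -> dict:
        # Note what is absent: no account ID, phone number, or email.
        return {"id": self.random_id, "audio": utterance}

device = Device()
req = device.build_request("What's the weather?")
```

The design point is that the server can group requests from one device (to track data while it is processed) without being able to map the identifier back to a named account.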

As the committee heard last week, to improve voice assistants such as Siri, there is a need for human review of a very small sample of audio interactions. This helps to ensure that Siri understands users' questions and provides the right answer. For example, Siri has to recognise my Irish accent and all the members' accents, and members will understand the complexity if that is multiplied across the different languages, dialects, and styles of speech in the countries where we do business.

However, I want to be clear that human review of audio samples has always been conducted on a very small subset of audio samples from Siri requests. The people reviewing the audio samples are not shown an Apple ID, phone number, or email. As I mentioned earlier, all Siri requests are associated with a random identifier.

This August, customer concerns arose in regard to human review of Siri audio samples. In response, we immediately suspended human review of Siri audio requests, reviewed our practices and policies, and released the following improvements to Siri’s privacy protections. First, by default, we no longer retain audio recordings of Siri interactions. Second, users now have the choice to help Siri improve by learning from audio samples of their requests. For users who choose to share their audio it is still only associated with the random identifier I mentioned earlier. Third, if customers choose to help improve Siri, only Apple employees - not contractors - will be allowed to listen to audio samples of Siri interactions for the limited review purposes described earlier. Additionally, our team will work to delete any recording which is determined to be the result of an inadvertent trigger of Siri. Finally, there is now a "delete Siri and dictation history" option in settings that makes it easy for users to delete Siri requests that have been retained for six months or less.
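The retention policy described above (no audio kept by default, opt-in sharing, and a user-facing delete option) can be modelled in a few lines. A minimal sketch under stated assumptions; the names are hypothetical and this is not Apple's code:

```python
from dataclasses import dataclass, field

@dataclass
class PrivacySettings:
    # Default per the stated policy: audio samples are not retained.
    share_audio_samples: bool = False

@dataclass
class AudioStore:
    settings: PrivacySettings
    recordings: list = field(default_factory=list)

    def handle_request(self, audio: bytes) -> None:
        # Audio is retained only if the user has opted in.
        if self.settings.share_audio_samples:
            self.recordings.append(audio)

    def delete_history(self) -> None:
        # Models the "delete Siri and dictation history" setting.
        self.recordings.clear()

store = AudioStore(PrivacySettings())
store.handle_request(b"set an alarm for 7")
```

With the default settings nothing is stored; flipping the opt-in flag starts retention, and `delete_history` clears whatever was kept.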

Our mission is to make products and services that enrich the lives of our customers. Unlike others, we do not view our customers and their data as the product. That is why we build privacy protections into everything we do. It is why we intentionally limit our own access to customer data. Our products and features include innovative privacy technologies and techniques designed to minimise how much of the customer's data we or anyone else can access. We believe that users should control their data and they should understand how their data is used, stored, and protected. That is the approach we bring to Siri and all the services we offer. We do not sign in with the customer's Apple ID to use Siri and the customer's device processes as much information as possible without sending it to Apple’s servers, and powerful security features help prevent anyone, except the customer, from being able to access his or her information. We are constantly working on new ways to keep the customer's personal information safe. We look forward to continuing our partnership to build on the GDPR's progress and strive for the highest standard of user privacy protection.

I look forward to answering members' questions. I thank the committee.

Thank you, Mr. Davis. I now call Mr. Meade.

Mr. Ryan Meade

Thank you, Chairman, for the opportunity to contribute to the committee's deliberations on voice-assisted technologies. I work with Google in Ireland as government affairs and public policy manager. I am joined by my colleague, William Malcolm, who is a director in Google's privacy legal team. We are very pleased to contribute to the meeting today and share some information on the Google Assistant product. Launched in 2016, Google Assistant is designed to be a conversational experience across devices that helps users get things done. Users talk to it to get results. They can ask for information, schedule events and alarms, make reservations, or issue commands to control a device such as a smart light or music player. The assistant responds with the most suitable answer or action. The ability to interact with technology through a simple conversation is helping more users enjoy the benefits of online access. Users speak in their natural language when talking to the assistant. By the end of this year, the assistant will be available in 32 languages in 83 countries, including the ability to handle a wide array of accents.

Advances in speech recognition are benefiting many users for whom typing is a challenge: users with dexterity issues, including people with physical disabilities and older adults, but also users who struggle with literacy or who speak languages where alphabetisation is an issue. In this sense, the Google Assistant represents the next stage in realising Google’s mission to make the world's information accessible and useful for everyone. For 20 years, Google has made many products that are advertising-supported and thus free for everyone. It has done this while setting a high bar for transparency, control, and security. Ad-supported business models have delivered tremendous benefits for users, including an array of world-class productivity tools, levels of security that were previously only available to large multinational corporations, and vast amounts of free publisher content.

When it comes to privacy and data, Google has long been focused on three core goals: transparency, control, and security. We want people to understand their data, make the choices that are right for them, and be assured their data is safe. We have introduced a variety of new products and tools to help users understand how their data is used, to be transparent, and to provide users with greater security and control. This includes Google Account, the central destination to review and control privacy settings, and the privacy and security check-up.

Our policies continue to evolve. Last year, we refreshed our global privacy policy to better describe the information we collect, why we collect it, and how we use it. We also explain how users can update their settings and manage their accounts. The policy now includes illustrations and animated videos to make the experience more engaging and understandable. Over 500 million users visit this site every year. Privacy is not one-size-fits-all, and we are focused on building tools that enable people to make the privacy choices that are right for them. Different users want to make different choices about how much information they share and how it is used. Tools like privacy check-up allow users to understand how their data is being shared, and through Google Account they can change how their data is used at any time.

Google Account is used extensively. In 2018, 2.5 billion unique users visited Google Account. Nearly 20 million people visit every day, demonstrating not only that users are aware of the tools, but they are using them regularly to make informed choices.

Despite these advances in privacy, our work in this area is never done. We know the question of human transcription of audio recordings is of interest to this committee, and we want to share how Google approaches it. As part of our work to develop and improve speech recognition technology for more languages, we use language experts around the world who understand the nuances and accents of a specific language. The role of these language experts is to review and transcribe a small set of queries to help us better understand those languages to further improve our speech recognition technology. In recent months, we have heard concerns about our use of language experts in this context.

When we learned about these concerns, we took steps to suspend this process of human transcription globally in order to investigate, and to conduct a full review of our systems and controls. We have since communicated more information about how audio recordings work and some changes Google is making which include the following. By default, we do not retain audio recordings. This is already the case and will remain unchanged. We do not store users’ audio data unless they choose to opt in. Opting in helps the assistant better recognise a user’s voice over time, and also helps improve the assistant for everyone by allowing us to use audio to understand more languages and accents. Users can view their past interactions with the assistant on their Google Account page or My Activity page, and delete any of these interactions at any time.

We are making the involvement of human review more explicit to increase transparency. We will roll out a change that gives existing assistant users the option to review their Voice and Audio Activity setting and confirm their preference on human language review before any such review of assistant user audio resumes. We will not include new audio in the human review process unless users have confirmed the new setting, and of course users are free to decline.

We are enhancing privacy protections for our transcription process. We already take a number of precautions to protect data during human review. For example, audio snippets are not associated with user accounts during the transcription process, and language experts listen to only a small set of queries - around 0.2% of all user audio snippets - and only from users who opt in.
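The review pipeline described above, a roughly 0.2% sample drawn only from opted-in users and stripped of account linkage before a reviewer sees it, could be sketched as follows. This is an illustrative sketch, not Google's implementation; the function and field names are assumptions:

```python
import random

def sample_for_review(snippets, opted_in_ids, rate=0.002, seed=None):
    """Illustrative: select roughly `rate` of audio snippets, drawn only
    from users who opted in, and de-identify them before human review."""
    rng = random.Random(seed)
    # Only opted-in users' audio is eligible at all.
    eligible = [s for s in snippets if s["user_id"] in opted_in_ids]
    chosen = [s for s in eligible if rng.random() < rate]
    # De-identify: reviewers receive the audio alone, not the account.
    return [{"audio": s["audio"]} for s in chosen]
```

The two properties worth noting are that non-opted-in audio never enters the pool, and that the account identifier is dropped before the sample leaves the system.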

We are automatically deleting more audio data. One of the principles we strive toward is minimising the amount of data we store, and we are applying this to the Google Assistant as well. We are also updating our policy to vastly reduce the amount of audio data we store.

We thank the committee for providing us with the opportunity to contribute to its deliberations on this important topic. We believe in putting users in control of their data, and we always work to keep it safe. We are committed to being transparent about how our settings work so that users can decide what works best for them. We look forward to discussing our approach and welcome any questions committee members might have today.

Thank you very much. We will start with Deputies Dooley and Cullinane.

I thank the witnesses for coming in. This is helpful. Our concerns were probably aroused by the information that came about as a result of the situation Mr. Davis talked about at Apple. It was a surprise to many people that data was captured in this way, that it was stored, that it was later used and that it was reviewed by humans, albeit to enhance the user experience.

The biggest difficulty people have raised with me is that they were not aware that data was being captured at times other than when they used a particular word to initiate the capture of data. In some cases, it was activated by the use of some other word. We were not made aware that there was something that triggered an immediate closing down of the capture of the data. Could the witnesses tell us what happens in an interaction where there is an inadvertent capture of data, where the wrong word is used and where it becomes clear, although maybe it does not, that the request is not valid? What have the witnesses succeeded in doing there?

Maybe I misunderstood him but Mr. Davis talked about data being stored for a considerable period of time. He said it was possible for people to have it removed within six months. For how long is this data stored? After it has been reviewed, is it automatically deleted or destroyed? We are all familiar with data that gets deleted, but is it ever deleted? The same questions apply to Mr. Meade.

There is almost an overload in terms of users' capacity to opt in and opt out, to the extent that people get absolutely frustrated and in order to proceed, they press "Yes", "I do", "I will" or "Yes, please".

From Mr. Davis's perspective, legally, he is covered because everything has been opted into, accepted or whatever. Does he not accept that with something as critical as the potential for capturing audio information that was not intended to be captured, it is incumbent on him to make the user more fully aware of the potential for that rather than having it as another tick box, another click or whatever?

I am giving everyone five minutes with three minutes for replies. I might give a bit more time if anyone needs to go into more detail.

Mr. Gary Davis

I thank the Deputy for the question. As I mentioned, Siri has been available to our users since 2011. It is a service that a user needs to enable. It is not automatically on. Since the very beginning, Siri has been a privacy by design feature. It has always been the case that when one enables Siri, it is associated only with a random identifier generated by one's device, an identifier that Apple cannot link to one's identity. The identifier is such that the things one says to Siri are not linked to one's name, Apple ID, telephone number, email address, or anything of that nature. We worked extremely hard to ensure that Siri should just work in the way that the Deputy talked about, because we agree with him. The user should get the very best privacy without having to make a choice or change a setting.

In response to the public concerns in relation to the audio grading-----

Apple identified a unique random identifier. Is Mr. Davis saying that under no circumstances, whether through a law enforcement request or otherwise, could Apple find some key to trace that back, at a minimum, to the device, recognising that the device will present itself elsewhere using other codes that are connected with an email and a telephone number?

Mr. Gary Davis

Of course, and that is the very thing we watch very closely as we are making improvements to Siri, and as we developed it in the first place. We adopt the exact same approach to every update to Siri. We work to ensure that this association cannot be made, so I can certainly say to the Deputy that we have never been in a position to provide Siri data to any third party. We cannot take a person's name or identity and feed it into our servers. We cannot take an extract of a person's voice and feed it into our servers and produce a result. We cannot do that because we set out to make sure we could not do that, and that is what we consider to be privacy by design.

Mr. Davis has set out a standard that Apple applies. Does Google follow the same standard?

Mr. Ryan Meade

To address the Deputy's earlier questions, in terms of misactivations of the assistant, our product is designed to detect a trigger, either a keyword or a physical trigger. That is processed on the device, so there is no transmission to our servers unless the device understands that the user wants to interact with it.
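The on-device gating described here, where nothing is transmitted until a trigger is detected locally, can be sketched as follows. This is a toy model: a real assistant matches the hotword with an acoustic model running on audio, not on text, and the names here are assumptions:

```python
# Illustrative trigger phrases; detection happens entirely on the device.
HOTWORDS = ("ok google", "hey google")

def detect_trigger(utterance: str) -> bool:
    # Stand-in for the on-device acoustic hotword detector.
    return utterance.lower().startswith(HOTWORDS)

def maybe_transmit(utterance: str, sent_to_server: list) -> None:
    # Audio leaves the device only after the local trigger fires.
    if detect_trigger(utterance):
        sent_to_server.append(utterance)

sent = []
maybe_transmit("turn off the lights", sent)             # background speech
maybe_transmit("OK Google, turn off the lights", sent)  # triggered request
```

The point of the structure is that the conditional sits on the device: background speech never reaches the `sent_to_server` path, which is the guarantee being described, while a misheard near-hotword would still pass the gate, which is the misactivation problem under discussion.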

Or misunderstands, because that is the issue.

Mr. Ryan Meade

Yes, that is the point I was going to get to. Misactivations or false accepts can happen, particularly where there is a lot of background noise, other noisy environments and so on, but importantly, in all the devices, we have tried to provide feedback to the users so they are aware that the device is recording. We have also made it a lot easier for users to delete that recording, so one can simply say to one's assistant to delete what one just said. It will be deleted and will no longer be there.

The other question the Deputy had was about deletion of data. As I said, we have taken steps to automatically delete more data more quickly, and to make it easier for users to delete. The third question the Deputy had was about users getting frustrated with the level of notices and so on. That is a valid point and is one of the reasons we have put so much investment into developing the Google Account site where people can see in an understandable way what the settings are and adjust them - they can use that to opt out of a whole host of settings, including personalised advertising - and the privacy policy which we refreshed in the last year, again to move away from this idea of a wall of legal text into a more understandable scenario.

The Deputy's other question was-----

It was on the issue of privacy by design.

Mr. Ryan Meade

Does Mr. Malcolm want to address that one?

Mr. William Malcolm

We fundamentally believe in the principle of privacy by design. We have a robust set of requirements internally that implement GDPR requirements, including producing, very early, technical design documents that look at the privacy features of devices. A key principle with the assistant, right from the beginning, was making sure that individuals knew what voice data was recorded, and that they had access to it, that they could play that back, that they could delete it all, and that they could delete a single item. We had opt-in consent to the collection of that voice and audio data right from the start. It was really important to us that that transparency, that user choice, and that ability to delete the data existed from day one and that was a fundamental principle of the assistant when we launched it. We very much look to put privacy by design at the heart of the assistant.

On notice, transparency and choice, I am very proud of the work the Google teams have done. Our dashboards and tools are industry-leading, but we always accept we can do more and that is why we are looking at this issue of misactivations. Is there more we can do to make users aware of how that occurs in our notices? Also, is there more we can do to give users control over the sensitivity of the settings? It is difficult in environments where there is a lot of noise, and some microphones can mishear. We acknowledge that but we are doing work to give users more control over the sensitivity of the settings, and to always put them in control of how that information is then retained and deleted. Our teams have taken that very seriously with the assistant from the beginning because they recognise that users of these devices need to place trust in them and in Google.

I call Deputy Cullinane.

My questions are to Mr. Davis. There was a report in The Guardian on 26 July of this year. The headline read: "Apple contractors 'regularly hear confidential details' on Siri recordings". I am sure Mr. Davis is aware of that article. It had to do with a whistleblower.

Mr. Gary Davis

I am, yes.

I will quote one of the things that the whistleblower said: "There have been countless instances of recordings featuring private discussions between doctors and patients, business deals, seemingly criminal dealings, sexual encounters and so on." Is that accurate?

Mr. Gary Davis

As we have covered in the session, there can be false activations of Siri. In fact, our grading programme is designed to identify those so that we can minimise them, but maybe I will emphasise the point that a person's device is not listening to him or her; it is listening for the words "Hey Siri". That detection happens entirely on the person's device, and only at that point will it contact our servers.

In terms of the precise things the whistleblower heard, I do not know precisely those elements but I go back to the point that that grader was not able to link those things to an identifiable individual. Those are the protections that we have put in place.

I want to go on to some of the quotes from the whistleblower. He said that a number of different words picked up wrongly by the iPhone would trigger Siri, and that created a situation where there were inadvertent recordings. He said that they were only encouraged to report these accidental activations as technical problems and that there were no specific procedures to deal with sensitive recordings, so we are talking here about conversations between doctors and patients, criminal dealings, and sexual encounters between individuals. People want to know if these things happened. If they happened, what precautions and what actions has Mr. Davis's organisation taken since then to improve on that? He said there was no obligation on the staff to deal with them, other than having to report them as technical problems. People would have concerns about that, so can Mr. Davis address those concerns for us?

Mr. Gary Davis

Of course. As part of our statement that we issued at the end of August, we indicated that we would be putting in place a programme to identify more inadvertent triggerings of Siri and delete them where we hear them. Where a grader would hear something of that nature now - they are all Apple-badged employees - they are under instructions to delete them when they hear them.

Okay. He said there is not much vetting of who works there - that is, the company that would be contracted to do it - and he said the amount of data that they are free to look through seems quite broad. He said also that it would not be difficult to identify the person that they are listening to, especially with accidental triggers, because it could include addresses and names. That is quite a stark thing for a person who works in that company to say - this whistleblower - and that would concern people.

Mr. Gary Davis

Of course, and we took them seriously, which is the precise reason we immediately suspended the grading programme and took the important steps of bringing it all in-house, so that it is only undertaken by Apple-badged employees in Cork. We brought them in, and they have very clear instructions now in relation to the deletion of such inadvertent triggers, and we are working to ensure that we minimise them.

Okay. I did it very quickly because I have an iPhone. When a person asks Siri if it is always listening, the response is that it respects a person's privacy and that it is only listening when the person is talking to it. That is not quite correct, because if there are inadvertent recordings or listening, it is not as clear cut as that.

Mr. Gary Davis

Yes, the key thing we want is----

No, but that is what it said. I did it myself before I came in. I asked Siri if it was always listening, and it came back and said that it respects a person's privacy and that it is only listening when a person is talking to it. That is not quite correct.

Mr. Gary Davis

Siri is listening-----

Yes, inadvertently.

Mr. Gary Davis

-----when it hears the words "Hey Siri". I just do not-----

No, it is not. That is the whole point. There are words other than "Siri" that it picks up by accident or whatever. It records instances where somebody clearly does not say "Siri", which is not the person's fault if, for whatever reason, the machine or the device itself picks it up wrong. It is not factually correct to say that it is only listening when a person triggers the Siri system, because it is triggered as we know inadvertently.

I will let Mr. Davis speak.

Mr. Gary Davis

I thank the Chairman, and I think the question the Deputy is asking is the correct one. Our aim is to ensure that Siri is only activated when it hears the words "Hey Siri", or if a person has activated it on his or her watch, when it is raised in a certain way, or if a person wishes, he or she can press a sequence of buttons. A person can also disable "Hey Siri" if he or she wishes on his or her device. That is a choice that we make available, but in the way of all things, as I mentioned earlier, we are trying to ensure that we pick the words "Hey Siri" up in all dialects and languages. It could be that when they are pronounced by a person in a certain way, they may actually not be the words "Hey Siri", and it may be something quite close. That is what we are working on as part of our grading programme to fix.

I call Deputy Ryan.

I would like to address both Apple and Google, because the presentations we heard last week from UCD and the Data Protection Commissioner's office were very interesting on this subject. Would the witnesses agree with all the content in their submissions? The Data Protection Commissioner's office referred to the capability of the witnesses' companies now to translate audio clips into text clips, effectively, and that was a central part to all the machine devices, be it Siri, Google Assistant or mobile phones. Was the Data Protection Commissioner's office correct in the way it depicted the capability of the witnesses' companies to translate our audio into text?

Mr. Gary Davis

When a person speaks to Siri and asks it a question, the question is transcribed into computer-generated text on our servers and it is that text which is used to give the person back the response. If I ask Siri who won the football game last night, and I send that to our servers, those words are transcribed into text, and then the answer that is given back is who won the game last night. That is the only transcription of text that we are undertaking.

Mr. Ryan Meade

Similarly for Google, when the system is activated through the hotword or trigger, the audio that one speaks to the assistant is transmitted to the server for automatic transcription, and the text that is transcribed is used to provide one with the result of the query.

I was interested because I got a fair number of queries from constituents who were coming to me saying that they thought their machine was listening to them and that it was affecting what they were getting. I can give a personal example. I was in the Dáil the week before last and I had a question for the Minister for Education and Skills during Question Time. I was briefly in my office beforehand and we discussed questions for the Department of Education and Skills. I went to the Chamber and got out my phone because I wanted to check something. I tapped in D-E-P-T, and the second thing my search came up with was Department of Education. The first thing was department store. I can show it to the witnesses later. The Department of Education and Skills is not the Department I usually work with, it is usually the Department for Communications, Climate Action and Environment. Why did my Google search show Department of Education? I presumed it was because I had just spoken about the Department of Education and Skills and it heard that and translated it into my search.

Mr. Ryan Meade

I can assure the Deputy that was not the case. The device is not always listening for whatever a person says. It only listens if a person activates the assistant and there is an actual interaction with the assistant. A person gets feedback from his or her device to show that he or she is actually interacting with the assistant. There is no background listening going on or audio being transmitted to servers for that use. It would be impossible for me to say why the Deputy got that particular result. If I am right, the Deputy may have opted out of personalised advertising a few years ago, when we spoke about this before.

I did but the problem is that with this new device I do not have a clue, to be honest, what my security settings are. I should, but the vast majority of people do not have a clue. Google is taking for consent what is not consent. When I open my Google search, what comes up in my Google feed is the "Ask Google Assistant" prompt. I could easily flick that on. I must have done so on that day because there is no other explanation as to why the Department of Education came up on my feed. It must have been recording, or hearing, what I said.

Mr. Ryan Meade

No, that is not the case. There are a number of reasons it could have come up. I mean-----

It says "Okay Google". I have just said "Okay Google". Am I being recorded now?

Mr. Ryan Meade

Is the screen displaying the assistant?

In the Google search thing, it has a little microphone thing.

Mr. Ryan Meade

I cannot see it so it is hard for me to say. It should provide some sort of visual feedback-----

Now it has the little microphone logo on. Does that mean I am being recorded?

Mr. Ryan Meade

Is the text appearing as the Deputy is speaking?

In that case, the Deputy is not being recorded.

Many people have a real concern here that the business model is driving this, not just for use in my search history but in my advertising, and it is the volume of data these companies have. They now know more about me than I know about myself. What the Data Protection Commissioner's office and the UCD academics said last week raised concerns rather than allayed them. There needs to be a fundamental review of what consent is because it seems we are implying consent when there is not real consent in many of the ways in which these companies manage and use information.

Mr. Ryan Meade

I would go back to what I said at the start about the Google Account site which is where we give users control of all those details. All the activity that the Deputy considers may have been logged against his account will be displayed there and he can control not only what activity is captured but how it is used. He can opt out of any number of settings within that. We have tried to bring that into one place so it does not matter whether he changes devices. If he gets a new device but using the same Google account, his settings will carry over and he can continue to look at them there.

We also have the privacy and security checkup again which gives people an opportunity every so often to go in and review their settings. We are trying to make that situation easier for users so that if they do feel that they are not 100% sure about what they have opted in to there is one place they can go where they can see and control all of this.

I will bring in Mr. Davis on this point.

Mr. Gary Davis

I wish to reiterate something that I said in my opening remarks. If the Deputy has an Apple device, the things he says to Siri are not used for any other purpose. They are only used to provide the answer to the question that he asks, and to make Siri better.

Even if a person opts in to sharing with us, we cannot associate the things that he or she says to Siri with anything else, and that was a point I was making to Deputy Dooley-----

Just to make one slight tongue-in-cheek comment, we have been chatting here, and every time Mr. Davis has said "Hey Siri"-----

Mr. Gary Davis

Of course, and that is intended. That is the way in which it works, and those are the trigger words I want when I activate my device.

I call Deputy Chambers.

Mr. Davis said "Hey Siri" for himself but did he say it for everybody else? I suppose that is another question. To break this down, both Apple and Google claim that they have removed all identifying data. Is that correct?

Mr. Gary Davis

In our case, it is not a question of removing it. It has always been the case that Siri has been developed not to have personally identifiable information.

Mr. Ryan Meade

To clarify, we allow users control over what is recorded. We have to associate the recording with their account so they can go in and see what is recorded.

The issue here is that personal information is being obtained because the system cannot process the recording or differentiate. Mr. Davis said that when questioned about a football match, for example, Siri uses one's general location to provide suitable results, but if one asks for the nearest supermarket more specific data is used. What information does Siri pull in if it cannot process or understand that request? For example, if one believes one is making a request about a nearest supermarket but in reality one is asking about a football match, would it not be the case that Siri accesses more information than is necessary to process that particular request?

Mr. Gary Davis

I believe the Deputy is talking about whether it accesses a precise location, or-----

If it accesses a location because it has interpreted the request differently from the actual request-----

Mr. Gary Davis

Of course, and I gave that example as a way of demonstrating how we try to be thoughtful about how we collect data in different circumstances. Even if I want to find the nearest supermarket, restaurant or whatever else, and at that point we collect a more precise location, it is still only associated with the random identifier, the long string of numbers and letters, and it is not in any way associated with my identity, or in any way that we can work back to that. Even though we have put those privacy protections in place for Siri, we still challenge ourselves to not collect data if we do not need it.

Obviously, there was third party processing of data and Mr. Davis said that Apple has now brought that in-house. What oversight was in place to ensure that data was protected? What oversight was there to ensure it was not further processed? Is the move to bring it in-house an admission that there were issues with the external third party processing of data?

Mr. Gary Davis

My view of the situation is that we had all the correct arrangements in place with third parties. We use third parties for various situations. We always ensure that we have appropriate legal arrangements in place and that we undertake appropriate oversight and due diligence in relation to all their use of personal information. In relation to bringing it in-house, that was in response to concerns from our customers who were concerned about things that they say to Siri being heard by human graders who were not Apple-badged employees, and so we took the extra step in order to attain the very high standards that we set for ourselves to bring them all in-house. We took the decision that that was the correct step in the circumstances.

Was Mr. Davis fully satisfied that the arrangements Apple had in place met GDPR standards?

Mr. Gary Davis

It absolutely did. It was something which-----

That is okay. We are tight for time so I will move to Google. In the case of some Dutch recordings that had been made available through leaks, where did the recordings originate? Was it from within Google?

Mr. William Malcolm

Back in early July, around 10 July, we became aware through press in the Netherlands and Belgium that recordings of users had been made available to the press. We conducted an internal investigation and we determined that those recordings were from Google users and that there had been an unauthorised disclosure by one of our vendors. We took immediate steps to terminate the vendor's access. We also took the steps, as has been covered, of pausing audio transcriptions globally while we investigated.

Mr. Malcolm said that personal information has been stripped from these files forever. However, the audio itself may identify some individuals.

For example, VRT, the Belgian national broadcaster, was able to provide some individuals with their own recordings based on this data. How does Mr. Malcolm ensure that information of that nature is removed before it is processed?

Mr. William Malcolm

To be absolutely clear, it is not linked to the account, so we are removing-----

But individuals could be identified.

Mr. William Malcolm

We are removing the account details so that the reviewer does not have access to the account information. In that case, the journalist concerned took a variety of steps to identify the individuals involved.

Would that be of concern to Mr. Malcolm, that despite the protections that people's privacy was breached in that regard?

Mr. William Malcolm

In this instance, the disclosure was unauthorised. We took immediate steps to limit the access of that vendor and shut that down and we also paused transcription globally. Clearly it is of concern and the steps that we took immediately following that event illustrate the seriousness with which the company took that issue.

Would that have been a breach of the GDPR in Mr. Malcolm's view?

Mr. William Malcolm

Google had appropriate organisational and technological safeguards in place, and clearly we are conducting an investigation to fully determine what more we can do to protect security. We notified the Irish Data Protection Commission, as we are required to do under the GDPR in that instance, and provided full details to it as our lead supervisory authority.

Similarly to Apple, Google was using third parties to process this data. What oversight was in place to ensure that the data was protected? What oversight was there to ensure it was not further processed, because obviously with the example we have given, there are concerns among individuals that there is an infringement on their privacy, based on the GDPR?

Mr. William Malcolm

We have a range of policies and internal processes in place to ensure that we are taking appropriate organisational and technological safeguards. We have a specific policy around access where there has to be a clear business case for access to data generally. We also have a separate policy covering access to voice and audio recording data, so there are safeguards to ensure that we have got proper oversight over the use of the data and we have a rigorous vendor-contracting process in place to ensure that we have appropriate oversight over vendors and contractual protection in place. Clearly in this instance an individual made an unauthorised disclosure and we took immediate steps to shut down that individual's access.

Last week we heard from the Data Protection Commission about ongoing investigations which include the witnesses' companies and breaches of the GDPR relating to digital assistants. Does Mr. Malcolm think Google will be found to have breached GDPR guidelines through its present use of data in digital assistants?

Mr. William Malcolm

We will co-operate fully with any investigation and I noted last week that the deputy commissioner, Dale Sunderland, talked about how the European Data Protection Board is looking at how it can frame standards and guidelines around this area and that is something we would also be interested in actively participating in.

I call Mr. Davis.

Mr. Gary Davis

I believe the deputy commissioner, Dale Sunderland, confirmed last week that there is no investigation in relation to Apple's use of digital assistants. We have had exchanges, but not an investigation.

I thank the witnesses for their engagement.

I asked this question last week when the Data Protection Commission was before us. Do Google and Apple believe that users have a right to know the data that both companies hold on them and how they are profiled? They talked about going on to Google and about the millions of users who do this. As the witnesses heard from Deputy Ryan, many people are not aware, and certainly I would not be, of the profiles that the companies hold in relation to the data on users. I mention an easily accessible one-stop shop. Many users would be surprised to know the profiles that companies like Google and Apple have on them and how they use them to help them, whether to know who won the football last night or what shops are nearby, or for marketing and targeted ads.

My question is for Google and Apple. Do they believe that there needs to be an easier way for users to know exactly how they are profiled and what data the companies hold on them?

Does Mr. Davis want to start?

Mr. Gary Davis

Thank you Chairman-----

It is not easy at the moment to see.

Mr. Gary Davis

The first step is to not collect data in the first place, if one possibly can, and that is something that we do with a lot of Apple services. There is a lot of data processing that takes place right there on a person's device without ever coming to our servers. In circumstances where we do need to collect it, we try to collect it not associated with users. Siri was an example of this; if a person uses our maps service, that is not associated with the user either. Where we need to collect it - for instance, if a person is storing something in iCloud, or if he or she is downloading an app or playing a song as part of Music - well then, yes, there has to be data on our servers, and we do have a central place where we make that available to our users, and we get great feedback from our users in relation to it. One needs to step through the various progressions in relation to how companies should work through this, but we certainly believe the first question a person should ask is why we even need to collect that data.

When users go to that site, will they be able to see everything that Apple knows about them, that is, all data? Can they see that it is completely transparent? If I am using Apple devices, I can go onto this site and I will know all the data-----

-----and that it is completely transparent in relation to how Google is profiling me and what data it has on me.

Mr. Gary Davis

We are not profiling users. We are not creating a marketing profile in relation to users.

I will see all the data that Google holds in relation to me. I will be able to see it all - everything that it knows.

Mr. Gary Davis

To be clear, in our case, the data that we are collecting is to provide the service that a person is asking for - he or she wants data stored on iCloud or wants a response to a question he or she has asked of Siri. If we have the data associated with a person's identity, which in many cases we do not, we will provide that data there.

A person will know exactly what data Google will have.

Mr. Gary Davis

Absolutely, but a person should be able to understand it even before then. When a user launches any Apple service, we have a screen that appears with a data and privacy icon that explains to the user, in what we believe are simple terms, why his or her data is being collected in those instances where it is collected, and then we have some further support information behind that. A person gets great transparency at that site, but he or she gets it as the service launches as well.

I ask the same question of Google.

Mr. Ryan Meade

I would very much agree with your premise that users should have a central place they can go to control the data and make choices that suit them, and that is why we created Google Account and have invested considerably to make it as comprehensive yet as user-friendly as possible. We obviously also strive for data minimisation, and are increasingly using technology to do more with less, to generate relevant results and other useful and helpful products for users without as much data collection. I would encourage users to review Google Account. We have made it available from more of our products now, so we have a central button on each of the products. If a person is in maps or in search, he or she can go straight to his or her Google Account and can see exactly what data is stored, and crucially the user can control it, so he or she can delete whatever is there or opt out of things like ads personalisation and so on. That is available in a very user-friendly form and it has taken a big investment to make sure that it is presented in such a way that people can make use of it, and I think that is borne out by our usage figures. With 2.5 billion users, it is one of our most popular products.

When a person goes on to that, can he or she see everything? Is it fully transparent, because there is a belief out there that there is a lot of profiling and targeting happening, which can be very useful, do not get me wrong. In terms of the services being provided, we are talking about people who may not be able to type and who can use voice and so on. We all get the positives but it is about transparency around what Google is doing with people's personal data, maybe their children's data or maybe other people in the home who may not be using their sites but who are being potentially impacted by this. When a person goes on to Google Account, can he or she see absolutely everything - all the profiles in relation to him or her and the data that Google has on him or her? Can I see everything there? Is there full transparency around it?

Mr. Ryan Meade

A person can see all the interest categories that are used, for example, to target advertising or to serve him or her relevant ads.

However, there is a further step which is a subject access request form. I might ask Mr. Malcolm to talk about subject access requests.

Mr. William Malcolm

The My Account tools are pretty comprehensive, and we have tried to make them really usable, so if a person uses an Android device he or she can simply swipe away a piece of data and it will be deleted. If a person wants to take the further step of ensuring he or she gets everything he or she is entitled to under the GDPR, we also have a privacy troubleshooter tool where he or she can fill out a subject access request form. When we get those requests in from users, we direct them to Google Account. Most are quite satisfied and most say thank you; they did not know that was there, it was what they were looking for, they can now see the activity we are holding, and they can make choices around whether they want to receive personalised advertising or not, in that they can opt out completely-----

Would Mr. Malcolm have figures on the number of people who submit subject access request forms? If I want to get the full details on what Google has on me - the profile, the data that it has around me-----

Mr. William Malcolm

I do not know the exact figures. I would be happy to follow up, but it is in the hundreds. It is certainly not in the thousands.

Is that worldwide - the hundreds?

Mr. William Malcolm

We are talking here about requests in Europe under the GDPR mechanism-----

In Europe, okay.

Mr. William Malcolm

What one has to bear in mind is that 2.5 billion users are interacting with the service and are able to see in real time what data we hold on them. When I joined Google, that was one of the questions I was asked most commonly, "What does Google know about you?". Through a lot of investment and work, we have the suite of tools that is there today. It took a huge amount of engineering effort to get to that point, but it answers that question. It shows what Google knows, and it gives one real active control over-----

People have to take a further step to really find out. Mr. Malcolm is saying that people have Google Account, but if they want to find further information on what Google knows, they have to fill in a separate form, a subject access request form. It is not that easy. People's concerns are around the data and how it is used, and that is the reason Google and Apple are here before us today. The concerns are about people's privacy and safety online, and how their data is being used. If there is an understanding about it, some people may be very happy to give the information but when they have to start going through hoops to find out exactly what Google knows about them, that raises concerns for them. I will go back to my original question. Does Mr. Malcolm believe I have a right to access and know how I am being profiled and targeted in a very easily accessible way, without having to jump through a number of hoops?

Mr. William Malcolm

The short answer is "Yes", we believe that. I think that the account controls and the level of access we provide to the data that we hold are industry-leading. That is not to say there is not more we can always do to improve the tools, but there is absolute transparency around the data we hold, and My Activity gives people that. They can also exercise all the privacy choices that Mr. Meade set out. We are absolutely committed to making those account tools a central repository where people can get the information. The GDPR has very particular requirements for how one needs to respond, and the form is really just there for those users who want to exercise their statutory rights in that way. We respect users' right to do so, which is why we have made it really easy and intuitive via the troubleshooter. The evidence is that most users find the experience through My Account easy and preferable.

I do not know whether I agree. There is a lot of alarm among the general public. I do not want to repeat myself but I think if there was a very easily accessible route to find out how one was being profiled by what one was Googling, which can be very effective, and how one was being targeted, most people would be happy with that.

These companies have got the height of expertise to do this, and to make it more user friendly. They would not then be pulled into Oireachtas committees. We raised this with the Data Protection Commission last week. We asked whether it has to carry out all these investigations and whether it comes to this in order to get change within the industry. I am coming from the perspective of protecting the user, and at the same time acknowledging the great benefit that the Internet has. We need to be able to balance the rights and ensure that there is confidence in relation to people's safety online.

Does Mr. Davis want to come in?

Mr. Gary Davis

Again, Chairman, I had hoped to make a distinction in relation to Apple. We pride ourselves on having simple, easy-to-use interfaces, and so when we launch any feature or product which collects personal information, our best and brightest human interface teams are part of that process. They are part of the process in order to make sure that our users can understand what is being accessed in those limited cases where we need to access their data.

Do you want to come in Mr. Meade?

Mr. Ryan Meade

Just to make the offer, we would be very interested in your specific feedback on Google Account. As you say, that is a huge investment we have made in making it easy for users to both see and control the data. We are always in the market for feedback, so we would be very happy to take that on board.

I thank you all for coming before the committee this afternoon.

The joint committee adjourned at 2.16 p.m. until 3 p.m. on Tuesday, 10 December 2019.