
Joint Committee on Enterprise, Trade and Employment debate -
Wednesday, 21 Jun 2023

Artificial Intelligence in the Workplace: Discussion

Today we will look at artificial intelligence, AI, in the workplace. Advances in technology have brought many opportunities for positive change in the workplace. However, developments in technology have also brought new risks and challenges that require appropriate scrutiny to ensure the rights of businesses and staff are sufficiently protected through robust legislation and policy. Emerging technology and artificial intelligence will have a particularly profound impact on several professions, such as law, journalism and the creative industries. Today I am pleased that we have an opportunity to consider this and other related matters with our witnesses.

From the O'Reilly Institute in Trinity College Dublin I welcome Professor Gregory O'Hare, professor of artificial intelligence and head of the School of Computer Science and Statistics. From the Bar Council of Ireland I welcome Mr. Ronan Lupton SC. From the Irish Congress of Trade Unions I welcome Dr. Laura Bambrick, social policy officer, and Mr. David Joyce, equality officer, development officer, global solidarity officer and policy officer.

Before we begin, I will explain some limitations to parliamentary privilege and the practice of the Houses as regards references witnesses may make to another person in their evidence. The evidence of witnesses physically present or who give evidence from within the parliamentary precincts is protected pursuant to both the Constitution and statute by absolute privilege. Witnesses are reminded of the long-standing parliamentary practice that they should not criticise or make charges against any person or entity by name or in such a way as to make him, her or it identifiable, or otherwise engage in speech that might be regarded as damaging to the good name of the person or entity. Therefore, if their statements are potentially defamatory in relation to an identifiable person or entity, they will be directed to discontinue their remarks. It is imperative that they comply with any such direction.

The three opening statements have been circulated to members. To commence our consideration of this matter, I invite Professor O'Hare to make his opening remarks on behalf of Trinity College Dublin.

Professor Gregory O'Hare

I thank the Cathaoirleach. While artificial intelligence has garnered heightened interest in recent months, it is a technology that has existed for some considerable time. Its origins can be traced back to none other than Alan Turing. In 1950, in a seminal paper entitled Computing Machinery and Intelligence, he postulated how to construct an intelligent machine and, significantly, how one might attest to the existence of such intelligence. The latter became known as the Turing test. The Dartmouth Summer Research Project on Artificial Intelligence conference, hosted by John McCarthy and Marvin Minsky six years later in 1956, was to become a key milestone. At this conference Allen Newell, Cliff Shaw and Herbert Simon presented their Logic Theorist computer programme, which sought to mimic human problem-solving. It is considered to be the first artificial intelligence programme.

Interestingly, the subsequent journey for artificial intelligence has witnessed twists and turns, with many false dawns and unrealised promises counterbalanced by many landmark moments. I will provide exemplars of the latter. On 11 May 1997, IBM's Deep Blue defeated the reigning world chess champion, Garry Kasparov. On 8 October 2005, a Stanford-designed vehicle won the Defense Advanced Research Projects Agency's grand challenge of autonomously driving 211 kilometres across the desert. On 25 May 2017, Google's DeepMind AlphaGo defeated the world No. 1 ranked Go player, Ke Jie. On 25 October 2017, David Hanson's humanoid robot, Sophia, was granted citizenship of Saudi Arabia. On 30 November 2022, OpenAI released the generative AI tool ChatGPT to the public, where GPT stands for generative pre-trained transformer. On 27 and 28 March 2023, deep fakes of Donald Trump being arrested and Pope Francis in a white puffer jacket went viral. On 17 April 2023, Boris Eldagsen's AI-generated work entitled "Pseudomnesia: The Electrician" won a Sony World Photography Award. On 18 April 2023, "Heart on My Sleeve", a generative AI track purported to be a collaboration between Canadian music superstars Drake and The Weeknd, was released online and went viral. On 1 May 2023, Geoffrey Hinton, the 2018 Turing Award winner and father of deep learning, resigned from Google to enable him to speak out freely about the risks of AI.

AI is a profoundly disruptive technology. History is strewn with examples of technological anxiety which accompanied key advances such as the wheel, the loom, the printing press, the combustion engine, the mobile phone, robotics, gene editing and now artificial intelligence. The latest generation of AI, generative AI as typified by ChatGPT, is underpinned by large language models, LLM, built and subsequently refined using both supervised learning and reinforcement learning with human feedback. While such models do not understand their inputs, they are nevertheless able to establish statistical patterns and learn correlations from data sets of unimaginable scale that enable them to generate content that exhibits contextual relevance and appropriateness.
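
To make concrete the idea of learning statistical patterns without understanding, consider a toy sketch. The following Python fragment is illustrative only and bears no relation to the scale or training regime of a real large language model; the corpus and names are invented.

```python
# A toy bigram model: it "learns" which word tends to follow which,
# then generates plausible-looking text with no understanding at all.
import random
from collections import defaultdict

corpus = ("the committee heard evidence on AI and "
          "the committee will report on AI in the workplace").split()

# Record every observed successor of every word.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=8):
    word, output = start, [start]
    for _ in range(length):
        if word not in follows:              # dead end: no observed successor
            break
        word = random.choice(follows[word])  # sample in proportion to observed frequency
        output.append(word)
    return " ".join(output)

print(generate("the"))
```

Scaled up from a toy corpus to data sets of unimaginable size, and from bigram counts to billions of learned parameters, this is the sense in which such systems establish correlations rather than understanding.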

ChatGPT is the fastest-growing technology in history, having amassed more than 100 million users in two months. By way of comparison, the time taken to reach 100 million users was 16 years for mobile phones, six and a half years for iTunes, five years for Twitter, four and a half years for Facebook and nine months for TikTok. Generative AI differentiates itself from previous AI offerings in that it originates content. Previous AI technologies typically inferred correlations, identified heuristics, provided recommendations, detected faults or performed diagnosis. These technologies resulted in automation and robotic deployments, typically addressing physical and assembly-line tasks and predominantly displacing blue-collar workers, while assistive and advisory technologies typically complemented white-collar workers, enabling their performance of tasks to be faster and/or more accurate.

By contrast, generative AI is giving birth to new content and will have far-reaching effects on knowledge and white-collar workers. Professions such as journalism, media, the law, academia, marketing, architecture, engineering and the creative industries will all be profoundly affected. A Goldman Sachs report of March 2023 concluded that two thirds of US occupations will be impacted to some degree by AI-empowered automation and that generative AI could replace one quarter of all work-related tasks. Specifically, it predicts that 44% of legal tasks could be automated.

New businesses are already emerging. Examples include companies such as Anthropic, Deeper Insights, Stable Diffusion, Cohere and Stability AI. Such companies offer services by which to train generative AI on proprietary data sets, generating new, rich mixed-media content. A 2023 Littler report, which rather interestingly credits ChatGPT as a co-author, points to the emergence of new roles, such as prompt engineers with the skills to craft queries that will induce highly relevant and accurate responses from generative AI platforms.
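
By way of a hedged illustration of the prompt engineer's craft, the sketch below structures a query so that a generative model's answer is constrained and checkable. The build_prompt helper and the commented-out send_to_model call are hypothetical, not any vendor's actual API.

```python
# A sketch of prompt engineering: the skill lies in the structure of the
# query, not in the code. All names here are invented for illustration.
def build_prompt(role: str, task: str, constraints: list[str], source_text: str) -> str:
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Constraints: {'; '.join(constraints)}\n"
        f"Input:\n{source_text}"
    )

prompt = build_prompt(
    role="an employment-law researcher",
    task="Summarise the obligations this clause places on the employer.",
    constraints=["plain English", "under 100 words", "quote the clause number"],
    source_text="Clause 4.2: The employer shall consult workers before deploying monitoring software.",
)
# response = send_to_model(prompt)  # hypothetical call to a generative AI platform
print(prompt)
```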

AI in the workplace can manifest itself in a myriad of ways, including application screening, analysis and monitoring of facial expressions, eye contact, voice tone and cadence in video-recorded interviews, automation of tasks, monitoring engagement and biometric identification and classification. According to a 2023 OECD report, 49% of workers in finance and 39% in manufacturing said their company's application of AI collected data on them as individuals or how they perform their work.

Legislation is required. When passed, the EU's AI Act will be the world's first comprehensive legislative framework for AI. The Act is framed around input from the EU's high-level expert group and its ethics guidelines for trustworthy AI.

It adopts several ethical principles, including respect for human autonomy, prevention of harm, fairness and explainability, the last of which demands system transparency, system auditability and system traceability. This will enable individuals to contest decisions of particular AI systems and seek redress as a result of such decisions.

The velocity of AI technology is, alas, fast exceeding the rate at which the law around AI can be framed.

Mr. Ronan Lupton

I am a senior counsel, based at the Law Library in Dublin. I have been nominated by the chair and CEO of the Bar of Ireland to attend today's joint committee hearing to assist the committee's deliberations on this fascinating area.

None of the remarks made by Professor O'Hare in his introduction came as a surprise to me, given my background. My career has spanned approximately 25 years and has included a lot of input and work in the telecoms and Internet space. I listened with interest to the professor's timeline, going back to the 1950s. AI has been around for some time but, at the moment, we are moving into a new sphere and environment at extreme pace. That is a key challenge.

To pick up from where Professor O'Hare left off, the key challenge is to keep pace with the technology and where it is going. My practice focuses on the areas of commercial, competition, chancery, media and regulatory law. I am a member of the recently founded Media, Internet and Data Protection Bar Association. I have taught criminal and constitutional law at a professional level and currently do so part time at UCD. With a second hat on, I chair the Association of Licensed Telecommunications Operators, ALTO, as some of the committee members will know. I have had questions from Deputy Bruton at previous committee meetings over the years.

The Bar of Ireland engaged me and a number of colleagues to write the draft submission to the Department on the AI liability directive consultation late last year. As a result, I was nominated to address the committee today. While I have not been invited to make a written submission to the joint committee on any particular legislation or framework at this time, this is a timely period, with the European Parliament's vote on the AI Act, as previously mentioned, on 14 June. In the US, the Biden-Harris Administration is now looking at AI-related issues. I understand the Administration is consulting on the topic at the moment.

I intend to address three areas in my evidence and contributions to the committee today. Those areas are the issues of AI and the legal and employment rights arising in the workplace; strategic concerns, insofar as the committee may have questions on those related issues, considering AI technology and the future; and any observations on employment and employment rights evident in the AI Act as passed by the European Parliament. I will make some observations in that regard. The Act is set in a fashion which seeks to foster employment and protect the rights of workers and so forth. That is an important part of the amendments and the existing drafting within the AI Act. The general data protection regulation, GDPR, and compliance with GDPR norms feature throughout the Act. A fairly significant contributor to the AI debate in Ireland said that the AI Act is like the GDPR on steroids. I would not necessarily agree. GDPR rights are found under Article 8 of the Charter of Fundamental Rights of the European Union and within the GDPR itself. In Ireland, those rights are found under the Data Protection Act, as passed. Those rights are rights of citizens in any event, and what is overlaid in relation to technology does not change those rights at all. In other words, when we are dealing with legislating for the future in terms of employment rights, and employer rights and obligations, there are no changes to what the GDPR says and does. One of the most interesting features of the Act, as passed, relates to the prohibition on biometric technology, specifically real-time biometric screening and scanning. The committee may have questions in that regard, which I will take.

Given that there has not been a particular request for written submissions and we are not looking at pre-legislative matters at the moment, I cannot attribute much of my evidence to the Bar of Ireland, but I will make clear when I am answering on my own behalf.

I appreciate being asked to come before the committee and hope I will be able to address any questions the committee has.

Dr. Laura Bambrick

On behalf of the Irish Congress of Trade Unions, I thank the members of the committee for the invitation to input into its discussions on artificial intelligence in the workplace. I am accompanied by my colleague, Mr. David Joyce.

Trade unions acknowledge that AI systems offer immense opportunities for improving work and workplaces. For example, AI tools can improve worker safety and productivity and free workers up to do more rewarding work. At the same time, however, without appropriate regulation, the increased usage of these largely invisible technologies poses potential risks to workers, which is why we strongly endorse the European Trade Union Confederation's call for a dedicated EU directive on AI in the workplace.

In the same way that EU legislation sets minimum standards for occupational health and safety, new rules are needed to set European minimum standards for the design and use of AI in our workplaces and to guarantee that no worker is subject to the will of a machine. We need to equip the workforce with the skills required to keep pace with AI technologies. We also need to prepare for technological unemployment. We will need a just transition approach whereby policies are put in place to ensure that, where parts of jobs, whole jobs or whole industries become redundant, workers' living standards are protected through pay-related and proactive income supports, including a genuine short-time work scheme for vulnerable but viable employment, and through retraining opportunities. We must also ensure other quality jobs are created for workers to move into.

The shift to remote working brought the intrusive use of AI to monitor and supervise workers centre stage and, as has been mentioned, in the past six months, the launch of the content generating AI platform, ChatGPT, has opened up public interest in the potential for AI to transform jobs and displace large swathes of the workforce along the way. However, the widespread adoption of AI-driven technology in the workplace predates the pandemic. One in five Irish workers are now using AI tools in their jobs, according to the latest Microsoft Ireland work trends index, which was published this month and surveyed 700 workers across Irish organisations.

Previous digitalisation was mainly characterised by technological innovations such as computerisation, automation and robotics. This was based on processes automated through explicit rules and manually written computer programmes. AI is different, as we have heard. It is highly disruptive and self-learning and can independently derive connections and make decisions. While logical "if-then" programme steps were, in principle, comprehensible until now, AI can induce decision processes that, after some time, can no longer be explained by the programmers themselves nor anticipated by developers. The danger of dehumanisation of decision-making processes, especially when used in human resources tools, for example, to recruit workers, monitor their work, analyse their behaviour and even terminate their employment, is already a bitter reality.
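
That contrast can be made concrete with a short, invented sketch: the "if-then" screener below can be audited line by line, while the learned screener's behaviour lives entirely in fitted numbers that no programmer wrote.

```python
# Invented example contrasting explicit rules with a learned model.

def rule_based_screen(years_experience: float, has_degree: bool) -> str:
    # Every step is explicit and comprehensible to a human auditor.
    if years_experience >= 3 and has_degree:
        return "shortlist"
    return "reject"

# These weights stand in for parameters fitted from historical hiring
# data. Why 0.73 and not 0.7? No human decided that, so the decision
# boundary cannot simply be read off the code.
learned_weights = [0.73, -1.18, 0.05]

def learned_screen(features: list[float]) -> str:
    score = sum(w * x for w, x in zip(learned_weights, features))
    return "shortlist" if score > 0 else "reject"

print(rule_based_screen(4, True))       # auditable: shortlist
print(learned_screen([4.0, 1.0, 2.5]))  # opaque: depends entirely on fitted weights
```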

At a European level, trade unions have been advocating from the beginning for regulation that promotes the positive effects of AI while shielding workers from potential harms that could arise, especially to their rights. The EU's AI Act is not suitable for regulating the use of AI in the workplace, preserving the dignity of workers and counteracting dehumanisation at work. Although the legislative process has not yet been completed, the proposal submitted by the European Commission was more than disappointing from the workers' point of view. It only requires software providers to self-classify their own technology as low risk or high risk before putting it on the market, and it did not include any rules on the use of AI in the workplace.

The amendments agreed by committee and approved by the European Parliament last week, on 14 June, are mostly welcome, including, first, requiring consultation with workers and their unions before introducing AI to the workplace. A recent OECD survey of workers on the impact of AI in the workplace in seven countries' manufacturing and financial sectors found that, where consultation took place, workers were more likely to report AI had a positive impact on their performance and working conditions. That survey is from March of this year. Second, the amendments require an impact assessment on fundamental rights before AI is introduced to a workplace. Third, they allow a member state to restrict the use of an AI system in the workplace if it is done to protect workers' rights.

The Parliament now has to defend these amendments in negotiations with the Council on the final text. Whatever the outcome, major weaknesses remain. Although the EU Commission has defined AI systems used for hiring, promotion or dismissal as high risk, the use of AI applications in the workplace will only be restricted if it poses a significant risk to workers' safety or fundamental rights. It is not clear when a risk is considered high enough to be significant or how to determine the risk ex ante. Software providers can be expected to self-classify their own applications as non-significant. The procedure provided for in the legislation is not capable of preventing this and will only lead to forum shopping for the weakest supervisory regime.

Trade unions are not looking to hold back the tide of progress. We acknowledge the potential of AI for improving work and workplaces when used in the right way. We demand robust regulation. Workers' rights and protections must be fit for purpose to keep pace with these powerful technological developments. AI in the workplace must deliver for workers as much as for business.

I thank members for their attention and I am happy to take any questions.

I thank our guests for the information they have given thus far. This is one of the most important hearings we have had. It is an area where people tend to roll their eyes and say this is happening and there is little we can do about it. I do not think that is the case. I welcome the words from Dr. Bambrick and it is important we say again that workers are not looking to turn back the tide. People are not Luddites. We understand this technology is here but it should not be allowed to steamroll workers' rights. If we do not recognise the potential dangers, we will not be able to mitigate them.

My first question is for Mr. Lupton but I am happy for anyone to come in on it. It relates to the Government strategy, AI - Here for Good. I do not know how the title is intended; more in hope than anticipation, perhaps. It states:

The public ... [service] has already embedded AI into the provision of certain public services, and is also piloting AI applications in a range of areas including agriculture, revenue and health. This has made the delivery of those services more efficient and has provided useful analytic data...

We will be the judge of that. The strategy says the GovTech board will be responsible for regulating the safe use of AI in the public sector. That board is made up of representatives of the enterprise advisory forum, which comprises big IT companies. There is no one that I am aware of representing workers or human rights, or putting a different perspective on this. It seems there is no balance. Is there anything we can do at this stage? Should we look to ensure the GovTech board is stronger or more representative? Is the structure fit for purpose? The public service has high levels of union density, conditions are good and it tends to lead in good practice, though not always. If it is to set the standard, should we ensure the GovTech board is more representative? Is there a different forum where workers' rights, human rights and all of those things can be represented?

Mr. Ronan Lupton

To start, I have to go back to the existing legal framework, which is the data protection situation. Picking up from Dr. Bambrick's input, there are issues but the existing frameworks should be maintained. We have the vast majority of tech companies in the world sitting in our back yard. The regulatory regime, while criticised, is quite good. Some of the investigations and findings take a long time to produce results, in particular on the data protection side of the house, but when the decisions come out, they are fairly robust. We also have the central European function of the European Data Protection Board and the European Data Protection Supervisor looking at them. That is the starting point for the vindication of rights.

Have we been good legislatively at putting in place protections for data sharing among Departments and semi-State bodies? I think the answer is "No". We have had to catch up with data-sharing agreements and so forth. The Deputy's observation is correct that aspects of AI are used to great benefit in the Passport Office, Revenue online and other services with back-end AI features and functionalities.

Taking the Deputy's question head on in relation to the board and its functionality, there will always be stakeholders across the spectrum. If the observation is that the spectrum is not covered properly, then the answer to the question is that it needs to be looked at by the Minister again. It might be a recommendation from this set of deliberations that that occur.

As to the forthcoming AI Act and what the State needs to do in relation to the strategy, it will have to change anyway based on what went on before the Parliament last week. The headline item was police body cameras. I use that as an example rather than getting into that debate, which may come later. That is coming. Is facial recognition technology coming? No. That is based on what went on last week, really. There are other issues too. The answer to the question is: it needs to be looked at.

It does. I looked at the make-up of the GovTech board and its responsibility for what will be the model of best practice for employers. If that falls short, it is a serious problem. Would either of our other guests like to comment on that?

Mr. David Joyce

We believe the involvement of unions at the earliest stage possible in developing initiatives around this is key to addressing the concerns Dr. Bambrick outlined in our statement. The challenges can only be properly approached if the right stakeholders are around the table. Because of the challenges, it is key that workers and their representatives have a voice in any forum that is making decisions that will fundamentally impact the world of work. As Dr. Bambrick said in closing our statement, this has to be seen in a broader context. We have aims and objectives for the world of work outlined in international instruments, the sustainable development goals, etc. How will this development impact those? How do we harness it in such a way that it coalesces with them rather than working against them? It is key to have unions and other stakeholders around the table.

I agree. The problem is this has been established and they are not. ICTU's submission referred to requiring consultation with workers and unions before the introduction of AI but it is here and, by all accounts, there was no consultation. There is an element of a lot of people running along behind this.

That is not to say we will not catch up. Of course we will. Perhaps this has gone on under the radar.

I have another question that relates to workers and jobs. It is about understanding how AI and machine decision-making works. When we talk about transparency, that is important, but to be brutally frank, if the witnesses were to show me an algorithm, it would not mean anything to me. If they told me the algorithm was the reason I, as a Deliveroo rider, did not get any shifts last week and do not have any money to pay my rent, I would find that hard to understand. When we call for transparency, it is not just about publicising or publishing the algorithm. There has to be a deeper understanding for workers to be able to get to grips with it. How can we do that? The obvious one is that the unions are in from the very start and that any systems put in place are open and transparent, but bear in mind these algorithms are setting work for people at the moment. What can be done? We refer to transparency about how decisions are made. What does that mean, practically? Practically, how can we make sure workers who may not have a high level of education or English as their first language have some control over their work? I ask Professor O'Hare to address it first if he does not mind.

Professor Gregory O'Hare

I thank Deputy O'Reilly. This is a really profoundly difficult question to answer. By way of trying to illustrate the colossal challenge it presents to science, there is typically not an algorithm, nor a set of algorithmic steps, that one could scrutinise with a trained eye; AI and, in particular, deep AI does not have an algorithmic basis in that sense. Even if I, a professor of artificial intelligence, looked at a particular AI application that was using deep learning, I would have great difficulty in establishing on the surface how it arrived at its deduction, recommendation or conclusion. Whenever we talk about transparency, we really mean things like auditability and one of the current watchwords, explainable AI. It is incumbent on the legislation to mandate that such systems are able to support explainable AI. In other words, there may well be explanations in spoken or written language. These explanations should explain that, based on a high statistical correlation between concept X and concept Y, an inference resulted and, in turn, when this was combined with another strong correlation between some other phenomena, this resulted in the particular recommendation. That is the kind of explainability we are moving towards. Some of these systems are huge in their extent and their complexity is enormous.
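
One way to read that description in practice is the surrogate-model family of explainable-AI techniques: probe the opaque system with sampled inputs, fit a simple model to its answers and report the strongest correlations in plain language. The sketch below is a minimal illustration of that idea; the black_box function and the feature names are invented stand-ins for a real deep system.

```python
# Minimal surrogate-explanation sketch (LIME-style in spirit): fit a
# linear model to a black box's outputs and narrate the fitted weights.
import numpy as np

def black_box(X):
    # Stands in for an opaque deep model we cannot inspect directly.
    return 2.0 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * X[:, 2]

features = ["hours logged", "email response time", "training completed"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))          # probe the model with sampled inputs
y = black_box(X)

weights, *_ = np.linalg.lstsq(X, y, rcond=None)  # fit the linear surrogate

for name, w in sorted(zip(features, weights), key=lambda p: -abs(p[1])):
    verb = "raises" if w > 0 else "lowers"
    print(f"'{name}' {verb} the score (weight {w:+.2f})")
```

The narrated output, such as "'hours logged' raises the score", is exactly the kind of spoken-language explanation described above, even though the black box itself is never opened.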

I would counsel people with regard to earlier questions that, of course, it is crucial that we have appropriate, considered engagement of all the stakeholders involved. I would counsel them that often such engagement ought not to be rushed and necessarily takes considerable time. The velocity at which the uptake and deployment of AI systems is occurring does not afford us that level of time.

That is the problem.

Mr. Ronan Lupton

Stepping back for a moment to the constitutional rights part of our society, employees have a constitutional right to associate and dissociate. Springboarding from that to the new economy of Deliveroo riders, drivers, cyclists or whatever they may be, some of the issues that have come before the Labour Court and the courts over time relate to the rights of these individuals who are not unionised, for example. Picking up on Professor O'Hare's comments, we have to struggle with what can be made available to these individuals in their employment contracts, such as they are, some being zero-hours contracts, for example. It is a different dynamic when dealing with a union that has teeth and power. Ultimately, it comes down to collective bargaining and so on. There are new aspects of the economy which we need to consider to a greater degree and which just were not there before.

We have heard of all these issues in the news and media but, going back to what Professor O'Hare said, we do not have time to get into the weeds and detail on these issues. Going back to the data protection sphere, the information that is being processed about individuals, including sole trader contractors, taxi drivers, or whoever they happen to be, is information that is going through systems. They may be algorithms, they may be, going back to what the professor said, deep AI-----

Mr. Ronan Lupton

If it is affecting their ability to earn a livelihood, which is another constitutional right, over and above the right to associate and dissociate, then there needs to be some level of transparency. Of course, with my other hat on, which is the IP and commercial sphere, they are not going to disclose their algorithms, because that would blow a hole in their business model. There is that dynamic. The Deputy's first question was about the stakeholders. If an advisory body is all big business and high tech, that will not necessarily give the full picture. Again, it goes back to what Professor O'Hare said. One needs all the stakeholders in the room, and quickly. I would even go so far as to say the individuals who are subject to these things should be there to say where we are going. Those are my own comments.

I would not at all think that was a radical suggestion. By all means, have the people who are impacted there. Do Dr. Bambrick or Mr. Joyce want to speak?

Mr. David Joyce

Regarding some things that we would like to see which would address what the Deputy is talking about, obviously all systems at work need to be transparent and explainable. We would like people to have the right to receive information about new developments being introduced and receive it in plain, understandable language. The two gentlemen to our left are very versed in all of this. We are perhaps not so, and how would we be?

We are in the majority. It has to be said.

Mr. David Joyce

Yes. It is also important that we and workers have the right to engage external expertise if that is required. The final thing I would say is that some sort of fundamental rights and equality impact assessment needs to be carried out along with workers and their representatives in this area, because there is obviously huge scope for all the unconscious bias that we all carry around with us being fed into this and coming out the other end, reproducing discrimination and-----

That is the thing that people are concerned about. I know my time is up and I will finish on this. The unfortunate thing is that often when one confronts this, the answer is that it is huge, complicated and moving faster than the human brain. At some point, we have to be able to say, "Stop". If it takes time, it takes time, and if it needs to be done slowly, I think we as legislators have a responsibility, not to put the brakes on it and be Luddites sending us all back to buying our own paper, but to ensure that those checks and balances, transparency, human rights compliance and so on are stacked into the law to protect people who may suffer an adverse consequence as a result of it.

I thank our guests for coming in before us today. Deputy O'Reilly used the word Luddite more than once. I am beginning to feel like one when looking at some of this stuff. A number of things strike me. The last line of Professor O'Hare's submission refers to the velocity of AI technology fast exceeding the rate at which the law regarding AI can be framed. That is quite scary. My sense, from keeping an eye on this, is that it is moving extremely fast. There are things happening that even Professor O'Hare admitted he cannot keep up with, and he is an actual expert on this.

When I think of this, it almost goes back to Data and "Star Trek". If one looks at science fiction, it is becoming science fact a lot faster than we can imagine. It is probably even unknown to us, which is quite scary. I would contend a lot of people did not know about AI until quite recently, and that it was a thing happening out there. I would also contend a lot of people, including us, do not really understand it, or know the implications of it. It also strikes me that we would want an Oireachtas committee dedicated to this topic almost exclusively. We have established a number of Oireachtas committees on specific topics. This topic is so important, is moving so quickly and is so complex, that we almost need a specific committee to deal with this.

Would that be the first recommendation?

It might be a recommendation. I would like to hear what our guests think of that.

As always, Deputy O'Reilly, being out first, has covered a lot of the issues we might have raised, but there you go. However, I have a few issues. I read the OECD report, which Dr. Bambrick quoted, when it came out in March 2023. It talks about threats to fundamental rights and democracy. We had this discussion previously at this committee and at other committees with respect to fake news - I know this is the business committee - and how something can be put out there. Donald Trump being arrested, the Pope wearing a puffer jacket and so on were mentioned. What is real and what is not real? This technology can be used to put out what has been termed "fake news" in more ways than one. The threat that can pose to democracy and freedom is fundamental here. Will Mr. Lupton comment on that side of it?

There is also a risk of misuse of data by AI systems via nefarious piracy and so on. What can we do to safeguard against that and to ensure what is out there is the truth and is not mathwashed, which is a new term to me? Some of the findings from the OECD report indicate there is a certain amount of negativity towards health and safety. Will Dr. Bambrick talk about health and safety with respect to AI? There has been a change in the nature of work. Repetitive work could be taken up by AI. Financial institutions across the world seem to be using AI more than others, which is quite interesting.

It has been mentioned that people with disabilities can be positively impacted by the use of AI, and we have had some hearings on that. Will somebody comment on that? I know I am asking a lot of questions.

Employers use AI to reduce staff costs and improve workers' performance. It was also stated in the report that male workers use AI more. We know the gender pay gap is there and why it is there, but AI seems to be preferred by male workers, maybe because they hold positions of responsibility in greater numbers than female workers. We have had that debate on the gender pay gap and so on.

I refer to the impact on wages. It has been said that AI could decrease wages. I am sure Dr. Bambrick will be quite concerned about that. It is something I picked up from the report also. Those are some general comments and questions to start off with.

Mr. Ronan Lupton

AI is very well known down the country. It is usually followed by the words "Don't drop the straw" in farming environments. It is certainly not the technological AI. I will pick up on something Deputy Stanton asked.

I come from a farming background, so I know what Mr. Lupton is talking about.

This has suddenly gone in a different direction.

Mr. Ronan Lupton

Exactly. It is a joke.

I would contend that it could probably be used in that context as well, with genetics and so on.

Mr. Ronan Lupton

This is it. Robotics in milking and birthing are part of the AI equation. There are no two ways about that.

I want to pick up on an issue the Deputy raised which was that of disabilities. I will personalise this, if that is okay. My third child is fairly profoundly disabled. He has a condition called alpha thalassemia X, ATR-X, syndrome. He is unlikely to acquire speech, so I look forward to facilities that would be developed using AI technologies. They may be iPads or speech development technology. He may develop some form of brain function, which will get him to five or six years of age and he will require long-term care. From my point of view, I look to the future hopefully in that regard. Professor O'Hare mentioned gene editing technology. It is too late for my child because gene editing technology based on developments that AI would use to look at massive gene clusters and to correct gene deficiencies will be something that is handleable very shortly. Unfortunately, however, it will not help my kid.

Looking at the issues the Deputy raised relating to my area of practice, which is media, technology, disinformation, misinformation and so on, when one considers what goes out in the mainstream media - for example, in newspapers - there are procedures and processes within editorial newsrooms that have journalists filing copy, whether it be from the Houses of the Oireachtas or wherever it is around the country, in live feeds. They literally send them from their iPads or their personal computers, PCs. They go into a news review process and sometimes they are subject to privilege, if they are court proceedings. There may be accidents or issues happening live. Usually, they are sent to lawyers to review them and to make sure nobody is being defamed, there is no privacy breach, and there is no contempt of court or whatever it happens to be. Nowadays newsrooms are using artificial intelligence technology. There is an Irish company called CaliberAI which the committee may or may not have heard from. That is certainly an interesting development.

What technology in that sphere does not catch sometimes is the fake news and the disinformation part because the story can be written in such a way that it looks bona fide. Going back to the suntan issue that occurred in The Irish Times a while back, one can see how that occurred. Again, would a human have caught it? The answer is probably "No" because the story looked like it came from a contributor and looked bona fide.

I refer to generative technologies, such as ChatGPT and so on. Some of the examples are anecdotal and some are not. People have used generative technology to make legal submissions that have gone badly wrong for them and so forth. One interesting issue, and Professor O'Hare mentioned the Pope, Donald Trump and deep fakes, is that a number of Irish celebrities have been before the courts regarding fakes that have been put online saying that they are selling Bitcoin, Rolexes or whatever it happens to be. This is criminal activity; there are no two ways about that. It remains criminal activity whether it has been generated by an AI technology programme or by criminal gangs based wherever. They can be based in Ireland or anywhere in the world. That is a matter for law enforcement to deal with. I know we are at a meeting of the Joint Committee on Enterprise, Trade and Employment but there is a law enforcement question there and a civil liability question.

Effectively, the defamation laws here are good insofar as one can sue on foot of damage to reputation in that regard. It is a massive issue; there are no two ways about that.

In the Bar of Ireland's submission relating to civil liability issues, which is very much on point regarding the Deputy's question and deals with media, privacy and data protection, it said that there should not be any whiplash or neck-break changes to the legal norms here. In some other European countries there is strict liability, for example, when it comes to AI and, in fact, the manufacturers of the technology can be brought before the courts. In Ireland, if we can track who it is or was on the balance of probabilities, one can succeed in a claim. The Bar of Ireland's position was not to go so far as to change the law overnight, which again conflicts with what was said about the speed and pace of the technology. However, what we are talking about in terms of speed and pace is keeping up with how the technology develops.

I want to follow up on Deputy O'Reilly's earlier question, and it will probably be something the three contributors can agree with. The issue of regulatory impact assessments is something we have become used to in Ireland over the past 15 to 20 years, possibly because of the prevalence of European law coming in and more regulation in these spaces. Regulatory impact assessments, when it comes to AI technology being dropped into media and literally any line of society and work, are of critical import. I am not talking about just a facade, with someone saying, "We have done a regulatory impact assessment. Thanks very much." I am talking about employees' rights being set out and about saying what the algorithm does or does not do, insofar as that can be disclosed, for example.

I will go back to the point on journalism and media.

Journalism will not become automatic overnight. It just will not, because, ultimately, feet are needed on the ground in the Oireachtas or wherever it is. There may be a transcript of today's proceedings but it will miss my joke about artificial insemination on a farm because the transcript will state it is "AI". That is a prime example of how it can go wrong. I will stop because I have taken up enough time but I am happy to come back to that, if any other contributors have questions along those lines.

Professor Gregory O'Hare

I will try to pick up on a few of the Deputy's comments. I concur with what was just said that the pace things are moving at is phenomenal, such that it has motivated Geoffrey Hinton, a very seasoned and established thought leader within Google, to resign. Many voices that members will have heard of, such as Elon Musk, Bill Gates and Geoffrey Hinton, are all calling for a global pause in AI. While Deputy O'Reilly referred to the legislative possibilities of effecting change within Ireland, with the utmost respect to Ireland or any individual country, we are talking about something that knows no boundaries. It knows no political, geographic or socioeconomic boundaries. This is something that demands, potentially, a global position. Ireland needs to find a way and a voice into that global discussion. That voice is a potentially significant one, given Ireland's fortunate position of playing host to significant tech companies. That is the first thing. Many people are advocating for a pause and these people are informed.

On some of the other points made, particularly the reference in the OECD report to the proportion of males adopting and routinely using AI seeming to be somewhat higher than that for females, I noted that too when I trudged through the many pages of the report. However, I do not believe that to be a consequence of AI per se but, rather, an artefact of the unfortunate gender imbalance among IT workers globally, which is very manifest within Ireland and around the world. It is just a natural consequence of that imbalance.

On wages, there is certainly an opportunity for employers to reduce salaries. Whether that opportunity is ever exercised remains to be seen, but certainly many of the skill sets - I am looking to my colleague in the legal profession, Mr. Lupton - and many routine legal actions could be automated and significantly supported. White-collar professions that were often the bastion of never being impacted by technological roll-outs are now finding themselves in the front line. In some sense, some people might regard that as a refreshing change because all the previous incarnations of technological evolutions have impacted blue-collar workers. There is at least the opportunity to reduce salaries. However, new generations of jobs will manifest themselves. If you look at any of the fundamental technological revolutions that have occurred in our history, previously unanticipated roles have emerged, which have often been highly skilled and incredibly well paid. It is a bit of a balance in that regard.

On the point around disability and the possibility of this technology assisting those who are less fortunate, without question, that is a possibility. It is very important we do not throw the baby out with the bathwater. This technology has profound opportunities. One of the issues I draw members' attention to is that the recent ChatGPT and OpenAI offerings have been provided out into the wild. No intellectual property, IP, is being protected. They have been offered up. In one sense, one could derive solace and reassurance from that because they are not being protected by one international IT company that would have a preferential position and could potentially hold the world to ransom. This technology is being put out into the wild. It is openly available and you and I can use it. In fact, I recently toyed with ChatGPT and asked it to give me a short discourse on how I would recognise content produced by ChatGPT. I got a very interesting response. I had rather mischievously thought of producing my opening statement through ChatGPT but I thought that was too predictable and somewhat crass.

I actually double-checked that to see-----

(Interruptions).

Professor Gregory O'Hare

I would have expected nothing less.

It is using AI to check for AI.

I found out a load of information this week for myself, just through doing research or whatever. I put all three opening statements into this technology and pretended to ask it to do something. I was not expecting it to do it. It was just amazing stuff.

Professor Gregory O'Hare

By virtue of the fact that this technology is out in the wild, I have to counsel the committee about a potential sinister implication, in that this technology can be used by all and sundry. The question is whether it will always be used for a good purpose or whether there is a significant chance it will be used for Machiavellian purposes.

When the need for a global pause is talked about, it is worrying that 40 years on from being alerted to climate change we are still struggling to get countries on board. It is a worry. The EU has governance such that a fair degree of confidence can be had that it will protect citizens first. Not all governments would see the advent of such a powerful technology in that light. There is a risk of this being weaponised, similar to an arms race, between those who seek to regulate and those who do not. I am interested in hearing comments on that issue.

To go back to the principles the EU says should be applied in trying to regulate this, such as respect for human autonomy, prevention of harm, fairness and explainability, it surprises me there is not more attention to control, governance and the inputs used. Those are all outcomes a judge would have to evaluate after the fact to ascertain whether something was fair. If we are trying to regulate, it has to be auditable and explainable ab initio, as opposed to establishing after the fact whether something turned out to be fair. In practical terms, how ought governments - it will probably not be governments but global institutions - think about regulating this sector? Should we be applying rules around control, scale, monopoly situations and governance requirements?

An allied question strikes me regarding liability. How do you prove someone intentionally did something or other, if that person created a Frankenstein monster that goes and does its own thing? What is the enforceability of standards such as fairness? Presumably, whoever is in the dock would just throw their hands up in the air and say, "I did not expect this to happen". I am interested in those sorts of principles. Should we approach this much more around insisting on reasonable care, dispersed ownership and not allowing certain inputs to go into the framework used to generate these decisions? That is the area that worries me.

Do we have the tools in the regulatory armoury to address something of this nature even if we move quickly?

Professor Gregory O'Hare

I will attempt to respond to Deputy Bruton’s questions, which are difficult to answer. We have already seen some of the difficulties, nuances and challenges around data and the appropriate curation of data. We are not simply talking about data here. We are talking about content that is being originated, that may not be accurate and that may well have fundamental financial or political impact and may lead to wars or boundary incursions. I do not think the legislative framework we have at the moment is in a position to be able to respond with the speed that we need. Things are starting to present themselves that have not previously been considered because they did not need to be considered.

I will just pick one little point on that topological landscape that is incredibly fast-moving. Thinking about ownership of content that is originated, who might own it? Going back to Deputy Bruton’s point on liability, generative AI can seed new content into the information and knowledge space. The question is whether there is some sort of admissibility process or control to verify the accuracy and appropriateness of the content, and I think we all know the answer to that. With the sheer speed at which content is being generated, that is not possible. If we take an avalanche of content that is being generated, and that content is being picked up by generative AI processes to generate new content, you get this incremental shifting of the information and content landscape. To whom would one attribute liability in terms of an adverse effect to content in that space? I would bow to the legal mind beside me as to how that can be achieved.
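
A toy simulation, with invented data, of the feedback loop just described: if each generation of content is produced only by sampling from the previous generation's output, the stock of distinct information in circulation shrinks measurably each round.

```python
# Invented illustration of generative content feeding on generated content.
import random

pool = [f"fact-{i}" for i in range(1000)]   # the original information space
for generation in range(1, 6):
    # Each generation "trains" on the previous pool and can only
    # regenerate items it has seen.
    pool = [random.choice(pool) for _ in range(1000)]
    print(f"generation {generation}: {len(set(pool))} distinct items remain")
```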

Mr. Ronan Lupton

I do not think there is any bowing. Take something generated on ChatGPT on instruction and then published by an individual. For example, if I said, "Give me a rundown or a précis of Deputy Richard Bruton", 90% of it was false and I then ran that in my newspaper, it would be defamatory. What would the Deputy do? He would sue for defamation. I would say it was generated by ChatGPT and he should sue ChatGPT. The Deputy might just do that too. In other words, the rules and liability position now allow the Deputy to sue me as the publisher, because I would have published it, but he could also sue ChatGPT as well. It is a slightly interesting dynamic. If you throw everything on a whiteboard and ask what legal changes are required from the point of view of litigating matters, rather than from the point of view of where the technology is going - whether we should change the rules of engagement, duty of care, damage, tort, if you want to put it that way, and it is probably the best way to put it - the answer is "No. Not right now," because, ultimately, you will be able to find the tortfeasor. Someone may say it is a machine. The answer is the owner of the machine, because we will go after them. However, it should not be a strict liability situation because, ultimately, Ronan Lupton might have asked for a précis of Deputy Bruton framed in a particular way, about, for example, cooking online, what his particular constituency is or whatever it happens to be. Deputy Bruton would have recourse to the courts under the pre-existing mechanisms by which people vindicate their rights.

I wish to pick up on an interesting observation on the market power of the future or current technology platforms. I will do it by analogy to the Digital Markets Act. I hope members of this committee have heard about the gatekeeper provisions for the very large platforms and how they behave. They are the same platforms that, in many instances, will have developed AI and AI technology. It may be that the Commission can extend or even use those frameworks at this stage to break the monopolistic behaviour on the markets - I guess market power is the same thing - or they may need to be developed further. It has to be at European level, as far as I can see, because of the size of those organisations.

Another observation is that if we look back 20 years, Facebook, Google and other big names did not feature on our landscape. Back to Professor O'Hare's observation, new entities and emerging technologies will come forward and the question is how to regulate them, because they will be below the radar. Deputy O'Reilly picked up on this and Dr. Bambrick did too. These are going to be new developers, innovators and people placing new items onto the market that fundamentally are self-certified. Again, it is back to this issue of regulatory governance and regulatory impact assessment. Human autonomy, prevention of harm and explainability are among the principles Deputy Bruton highlighted. They are all there, but how do they intermingle with the other rights? Professor O'Hare was right about this. Ultimately, we have the general data protection regulation framework, which governs the processing of personal data. That should work hand in glove with the AI Act, insofar as people can vindicate their rights and deployers, developers and controllers of the data who are deploying AI systems must comply with GDPR norms. How to cope with all of this is a massive question for the data protection regulators. We, as a nation but also as a European nation, will see a massive increase in complaints in this area.

I made this point in my opening statement. The US is moving on this now. It did not move on data protection; it has a patchwork of data protection laws. The legislation in certain US states is not even the same as the European legislation and does not go anywhere near vindicating the data rights of US citizens. With AI, by contrast, the US is saying, "Well, hang on now. We have to sit up here because this is important."

Professor Gregory O'Hare

I wish to add something to that. It is important that we understand the fundamental difference between personal data, with its associated legislation, and the kind of things we are witnessing today. For example, I originally come from Newry. There may well be some generative AI programme that makes some inference that the good people of Newry are more disposed to illegal activity because of some historical background - of course, this is just tongue in cheek - and, therefore, there would be inferences made about me because there is data stored that says I was born in Newry. The inference about people from Newry is not stored as personal data associated with me. I can seek an appropriate disclosure of the data held on me and there is nothing therein that would potentially protect or alert me. These are things that go way beyond personal data. All of the legislative framework that pertains to personal data makes various assumptions that the data is stored, protected, not shared and all the rest of it. However, we are talking about data relating to categories and classes of people, people from particular areas and people who previously did X, Y and Z. This is a fundamentally different form of content and it is not sufficiently provided for within the legislative framework.
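
That Newry example can be sketched in a few lines, with invented figures: the decision is driven by a group-level inference that never appears among the individual's personal data, so a subject access request would reveal nothing about it.

```python
# Invented illustration: group-level inference versus personal data.
group_risk = {"Newry": 0.8, "Dublin": 0.2}   # an inferred statistic about classes of people

personal_record = {"name": "G. O'Hare", "place_of_birth": "Newry"}

# A subject access request discloses personal_record only; group_risk is
# not personal data about anyone, yet it drives the decision below.
score = group_risk[personal_record["place_of_birth"]]
print(f"Flagged for review: {score > 0.5}")
```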

I call Deputy Mick Barry, who has seven minutes.

I am not sure if I will use all of my seven minutes. I came mainly with the intention of listening and learning rather than using my mouth. However, one question has crossed my mind in the course of the discussion so far.

By the way, I apologise because I will have to leave afterwards. I have to be somewhere at 11 a.m. This has been informative and interesting.

A certain amount of discussion has revolved around the issue of regulation. We all know that in other spheres, there is competition for investment based on a lowering of regulatory standards.

In many states, there will be lower standards in environmental regulations and there can be an attraction for capital to flow in that direction. It is often said that data centres are proliferating in the State because the climate suits them. It is relatively cool, not too hot, etc. That is a huge factor. The tax regime is a factor but the interpretation and implementation of general data protection regulation, GDPR, legislation is a factor as well. There have been plenty of examples in the recent history of the State where light-touch regulation has been linked to the idea of an attractive location for foreign capital to invest, etc. Do the witnesses see any issues and dangers in the approach of the State in terms of investment from those who are generating and developing artificial intelligence, AI, but also those who are using AI in their business? Are we to have a level playing pitch?

As for the European legislation that is on the way, I agree with the points raised by the ICTU representatives that it falls far short in protecting the rights of workers and citizens, but the message is that this is for a level playing pitch. We can have European legislation and it can then be interpreted in different ways in different states. It does not always work out that way. I would be interested in hearing the comments of the witnesses on that.

Mr. David Joyce

The Deputy has identified a really important issue. Something that may improve in the next year or so is collective bargaining rights here. If workers are to be involved, it must be noted that our framework for collective bargaining up to now falls way behind the rest of Europe despite all the EU directives, etc. Between the EU minimum wages directive and the negotiations we had in the high-level group with IBEC and the Department of Enterprise, Trade and Employment, there is potential for a big improvement in collective bargaining rights. Obviously, that is important across the range of workplace issues. In terms of the challenges posed in this context, collective bargaining can play an important role.

Mr. Ronan Lupton

I will pick up on two points. Those investing in AI are doing so to make efficiencies in their businesses. That is why it is being done. When they do that, they will be subject to pre-existing frameworks under the GDPR, which is a personal data framework. We look slightly deficient in other aspects of data governance and regulation in that regard. There are approximately 26 legislative instruments coming centrally from Europe at the moment - let us call them "new technology" legislation - which all involve aspects of law enforcement data and fundamentally come into the mix when discussing this issue. That is one side of the coin: an employer, an organisation or a government investing in AI to make efficiencies.

On the flip side of that coin is generating AI, which is the investment and development story that Ireland has traditionally been good at in attracting foreign direct investment and companies to the State. There are complaints about data centres and climate change and all those issues, which we will park for the moment, but, ultimately, we are still an attractive economy because of the education and capability of our workforce, which makes us cutting edge to some degree.

The Deputy is correct in the sense of where Ireland was. We are not there anymore. We were seen as a soft-touch or light regulatory approach economy, but that has changed in the past ten years. It has been forced to change on the data protection side of the house but also on issues relating to competition and sectoral regulation, which have cleaned up many areas. With the Internet, for example, there was no regulation or there was so-called "self-regulation". We now have the Online Safety and Media Regulation Act 2022, the Digital Services Act coming on stream and the audiovisual media services directive 2, AVMSD2, and the Government has put great care, attention and detail into how that regulator will work. It was only founded in the past nine months but, ultimately, it is a step in the right direction. We are no longer seen as a soft touch in the global sphere, but we are still attractive, and that is an important message. I am not making that statement on behalf of The Bar of Ireland. It is simply my own observation.

Are we seen as a soft touch on the AI front at present?

Mr. Ronan Lupton

No, I do not think so. What we are doing, to a degree, is waiting and seeing what is going on. We are participating in the central generation of the AI Act. We are engaging insofar as we can, at the speed we can, but - Professor O'Hare and, indeed, Dr. Bambrick have made this point - it is probably not enough. Going back to Deputy O'Reilly's initial question as to whether the right stakeholders are engaged in this now, the answer is, "Probably not", and we have to bring that together. The question is: how quickly do you work, do you come up with some form of assembly to try to deal with these issues, and does that deliver the right answers?

Professor Gregory O'Hare

I remind the Deputy that the EU AI Act has been in gestation for quite some time. My understanding is that there have been delays associated with it. That is hardly surprising given the circumstances of recent months, which have bombarded the framing and construction of a very complex Act with an ever faster moving landscape. That will continue. The velocity, as I referred to earlier, is remarkable, and my understanding is that the crafting of any law that is to be worth its weight is a time-consuming process. What we will probably see is a legislative framework that attempts to anticipate future developments, and that may result in a legislative framework so generic that it might be difficult to enforce.

Deputy Bruton asked earlier how we might legislate for this or how we can control it. I will give the committee one example of something that happened in and around the deep fake space. For those members of the committee who are less aware, this is the artificial construction and morphing of imagery and the seeding of it onto the Internet - for example, images that are patently not true, such as the Pope in a white puffer jacket. There has been a movement to try to address that, whereby the leading tech companies have come together to develop some certification of images that are seeded onto the Internet. Companies such as Adobe and Microsoft have shared their intellectual and technical power bases. We are starting to move to a situation whereby images that are verifiable, accurate and bona fide will have a little icon in the top right-hand corner, and someone can click on this icon and establish the provenance of that particular image. We need to move towards something that has that kind of certification and provenance opportunity, not merely for imagery but for every kind of content that is seeded onto the Internet. How we get there is a considerable challenge.
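Purely as an illustrative sketch of how such certification might work, and not a description of any particular company's system: an image is hashed and signed at publication, and a viewer clicking the icon re-verifies the hash against the signed manifest. The key handling here is deliberately simplified; real provenance schemes use public-key certificate chains rather than the shared secret assumed below.

```python
import hashlib
import hmac

# Hypothetical publisher key. Real provenance systems use public-key
# certificates tied to the publisher, not a shared secret like this.
SIGNING_KEY = b"publisher-demo-key"

def publish(image_bytes: bytes, publisher: str) -> dict:
    """Create a provenance manifest for an image at publication time."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    signature = hmac.new(SIGNING_KEY, digest.encode(), "sha256").hexdigest()
    return {"publisher": publisher, "sha256": digest, "signature": signature}

def verify(image_bytes: bytes, manifest: dict) -> bool:
    """Re-hash the image and check it against the signed manifest."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    expected = hmac.new(SIGNING_KEY, digest.encode(), "sha256").hexdigest()
    return digest == manifest["sha256"] and hmac.compare_digest(
        expected, manifest["signature"]
    )

image = b"...raw image bytes..."
manifest = publish(image, "Example News")
print(verify(image, manifest))                # True: provenance intact
print(verify(image + b"tampered", manifest))  # False: image was altered
```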

Mr. Ronan Lupton

I had the privilege of working for Dr. Vinton Cerf. Some of the committee members will know him. In the early 1970s, Dr. Cerf wrote transmission control protocol/Internet protocol, TCP/IP, which is the underlying protocol by which the Internet performs. It is the addressing system underneath the World Wide Web, for which Professor Timothy Berners-Lee was responsible.

A couple of years ago, Dr. Cerf came to Dublin. I met him in the Shelbourne Hotel for breakfast and we discussed illegal downloads online, image sharing, copyright breaches, etc. Dr. Cerf's idea - it may have developed since then, and I am conscious of the transcript - was like a land registry for content shared on the Internet.

It is probably ten years since I met him, but the suggestion of having a flagging system reflects where we are now. Ireland cannot lead the way on this, however, because if it did, it would cause a big problem in that everything would be moved to Diego Garcia or some such island where it would not be regulated. There has to be groupthink at EU institutional level – I do not mean groupthink in a negative way but in a collective way – to reach the standard. Once we see companies such as Adobe, Microsoft, Google and Meta doing what I am referring to, they will ultimately be doing so for the correct reasons, which are to stop child sexual abuse material or illegal content related to children online and terrorist content. Again, these are all parts of the legislative equation I mentioned, namely the 26 or 27 legislative instruments coming from the EU, which ultimately have to be dealt with by video-sharing platforms, technology companies and much smaller organisations that will be regulated by the likes of Coimisiún na Meán. I hope that is useful.

I apologise as I have to leave straight after this contribution but I will read the transcript afterwards.

With regard to Luddites, I have to be very careful because my husband has something of an alternative view. He says the Luddites were simply heroes who understood technology. They get bad press very often.

With regard to the FSU report, with which our colleagues from ICTU will be familiar, the FSU carried out a study to determine how technology is now monitoring workers at work. The surveillance of workers is not new, but technology that can establish, to the level of detail in question, whether they are happy, sad or stressed is very new. Does ICTU believe the EU AI Act will deliver the necessary protection from excessive surveillance of workers at work?

There is currently no forum or oversight body for AI in this State. We have gone through what GovTech will do and, strictly speaking, this is not the role of the digital advisory forum either. A very helpful and useful suggestion was made by Deputy Stanton, namely that it would be very useful to have a joint committee to examine this issue specifically, because it is incredibly broad. With regard to oversight, what else can we do in the short term, and what would the witnesses like to see at this stage, conscious that things are moving fast and that it sometimes takes a while to establish a committee? My first question relates to excessive technological surveillance.

Dr. Laura Bambrick

With regard to the AI Act, as we said in our opening statement, the proposal put forward by the Commission was very disappointing from a worker's perspective. The amendments, though not yet agreed by the Council, offer some comfort and are welcome, but we still do not believe they go far enough. The European Parliament's text, which has yet to be agreed, does state the AI Act should, in any case, not prevent the Commission from proposing specific legislation on the rights and freedoms of workers affected by AI systems. The European Trade Union Confederation is proposing a dedicated directive on the use of AI in the workplace. We can see from the conversation we have had today that this is a vast topic. Often, it goes beyond the workplace because it has to be considered in the round before drilling down to the workplace. In viewing the AI Act, we look first at the macro level and then at the micro level. One of the major concerns over the AI Act is that it allows the software providers to self-assess the level of risk, even where that risk is significantly high. As was asked today, how is that done before the software is put on the market? We really have concerns about the shortcomings of the legislation. The AI Act is important and a first step, but it will not deliver for workers.

What about oversight?

Mr. David Joyce

Oversight? I am sorry but I was thinking about surveillance.

A joint Oireachtas committee on AI would be really good. Even this morning, in discussing the topic before us, which is AI and the workplace, we have gone a lot further than envisaged. A workplace stream would be really important.

The nature of these discussions is such that they cannot be confined. AI will not be confinable.

Could the Bar Council of Ireland comment on excessive surveillance and the mechanism for oversight?

Mr. Ronan Lupton

I will pick up on one aspect of the Act, Article 29, and an amendment that was submitted, No. 5a. It states: "Prior to putting into service or use a high-risk AI system at the workplace, deployers shall consult workers representatives with a view to reaching an agreement in accordance with Directive 2002/14/EC and inform the affected employees that they will be subject to the system." The directive establishes a general framework setting out the minimum requirements for the right to information and the consultation of employees in undertakings or establishments within the Community. The Commission's work programme from 2021 refers to the recitals as well.

On the issue of processing employee information, let me revert to the data protection, or GDPR, side of things. The AI Act treats employee information as high-risk, as it does biometric data that is processed. With modern technology, biometric data are routinely processed; photographs, for example, are biometric data and, to take the very simplest of examples, an employee badge is a form of biometric data that is processed. An employer, data controller or deployer - whatever the person putting an AI system into an employment situation is called - will have to comply with data protection impact assessments under the regime. Records of what is processed must be maintained and employees will have to be told what the position is. Bearing in mind the union and non-union positions, the Act does, in fact, set out employee information as high-risk data.

I take the point that Dr. Bambrick made; she is right about self-certification, but, again, it is a question of what flushes that out. That is where there is a problem. It might be that the trilogue needs to examine it further. I am concerned that there could be a delay in the trilogue. Members will have seen this with the EU e-privacy regulation. It has taken years to get it to where it needs to be; we are still waiting for it. Ultimately, modernisation of that area is a problem.

Workers' rights case law in this area is well developed. It usually concerns CCTV cameras. A case that springs to mind is Copland in the UK, a European Court of Human Rights case in which somebody was placed under surveillance but not told about it. Cases like that have arisen from time to time. The Attorney General recently said the right to privacy is an under-ventilated right in the courts. That is all fine, but we have privacy and data protection, which are two separate rights under the Charter and under the Irish constitutional order. If we are taking our data protection rights seriously, the job of the Data Protection Commission suddenly becomes extremely large. It is a matter of whether it has enough resources or funding – I guess these are the same – to do what is required in an AI environment, be it this year or in five years.

What would the forum look like? There are many well-known academic AI experts in Ireland. These individuals should be at the table, in addition to legal, regulatory, governmental and tech industry experts, employees and union representatives. The work of the Internet Advisory Board, done years ago, was all fine, but if, in dealing with these issues, there is no will to determine where we should go as a State, and if we cannot help Ireland Inc. to succeed or be more profitable and competitive while protecting the citizen and employee, we are at nothing.

There is a tie-down. Even though the Act may not be as fit for purpose as we would like in regard to the categorisation of employee data as high risk and what happens on those systems, biometric and employee data go hand in glove insofar as they can. Throughout the AI Act, there are plenty of cross-references to the GDPR. It is something I am less concerned about, but we will need to square the circle of what will flush out the issues when things go wrong and how we get to that point.

Professor Gregory O'Hare

On the suggestion made several times during the meeting about the need for perhaps an Oireachtas joint committee, I respectfully disagree with some of my colleagues. It should relate not to AI in the workplace but to AI in society. After all, some people are not sufficiently fortunate to have employment and I do not think we should disadvantage them. AI permeates every aspect of life, not simply the workplace.

On the surveillance point, while I do not like to state the obvious, many aspects of our everyday lives are currently monitored, recorded and stored. Every time you go through a toll gate, your car registration number is stored and the ownership of that car is known. Likewise, every time you pass a number plate recognition camera, that information is stored. What is really significant, however, is not so much the surveillance but the purpose of the data being accrued as a result of the surveillance.

I might give one or two illustrative examples. Various applications in the automotive industry allow persons to avail of a preferential driving insurance rate but, to do so, they have to adhere to certain driving conditions. They might not be able to drive on a motorway, perhaps, or they might be allowed to drive only below a certain speed, and onboard technology that already exists in all cars can monitor their compliance with those terms and conditions. As a result, they will, one hopes, drive more safely and, therefore, benefit from the dividend of a reduced car insurance premium. Sometimes surveillance can have a very positive result.
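Purely as a hypothetical sketch of that kind of usage-based pricing, the following computes a discount from onboard trip data against policy conditions; the speed threshold, motorway rule and discount rate are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Trip:
    max_speed_kmh: float
    used_motorway: bool

def premium_discount(trips: list[Trip], speed_cap_kmh: float = 100.0) -> float:
    """Return a discount fraction based on compliance with hypothetical
    policy terms: stay under the speed cap and keep off motorways."""
    if not trips:
        return 0.0
    compliant = sum(
        1 for t in trips
        if t.max_speed_kmh <= speed_cap_kmh and not t.used_motorway
    )
    return 0.2 * compliant / len(trips)  # full compliance earns 20% off

# Two compliant trips out of three earn roughly a 13% discount.
trips = [Trip(95, False), Trip(88, False), Trip(120, True)]
print(f"{premium_discount(trips):.1%}")
```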

To take another example, some time ago in research, we looked at the collection of data from citizens. I will intentionally omit the names of the companies for all the obvious reasons. The committee is probably aware that in the US, private health insurance is almost mandatory but, in addition to that, a relatively limited number of health insurance companies compete in that space. One of these large companies was collecting data to try to give preferential health insurance rates to some of its customers. Clearly, these were people who were younger and more active and who allowed themselves to present in health-monitoring booths in large corporate employers on a six-monthly basis. Their data were collected and those data on their general wellness were used to calculate their insurance risk and the rate of their insurance payments, but it did not stop there. This company engaged in a corporate relationship with a large supermarket chain, and every time individual citizens went to the supermarket, all the data harvested at the cash register, pertaining to their eating habits and those of their family, were captured. That was then conflated with other health-recorded data to build up a picture of the risk associated with that individual and his or her family. Of course, if they were perceived to be low risk and ate only vegetables, for example, and no processed foods or whatever others might buy every week, they would receive a benefit, but if they happened to be less aware of what was going on, their health insurance would increase incrementally.
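Purely as an illustration of the conflation Professor O'Hare describes, and not a reconstruction of any real insurer's system, the sketch below joins supermarket purchase records to wellness data on a customer identifier and adjusts a premium factor accordingly; every field, category and weight is invented. Note that the score only ever sees purchasing records, which is exactly the limitation raised in the exchange that follows.

```python
# Hypothetical conflation of two data sources into one risk score.
# All identifiers, categories and weights are invented for illustration.
wellness = {"cust42": {"age": 34, "active": True}}
purchases = {"cust42": ["vegetables", "processed_snacks", "vegetables"]}

RISKY_ITEMS = {"processed_snacks", "sugary_drinks"}

def adjusted_premium_factor(cust_id: str) -> float:
    """Join wellness and purchase data to scale a base premium of 1.0."""
    profile = wellness.get(cust_id, {})
    basket = purchases.get(cust_id, [])
    risky_share = (
        sum(1 for item in basket if item in RISKY_ITEMS) / len(basket)
        if basket else 0.0
    )
    factor = 1.0 + 0.5 * risky_share       # diet proxy raises the premium
    if profile.get("active"):
        factor -= 0.1                      # wellness booth data lowers it
    return max(factor, 0.5)

print(round(adjusted_premium_factor("cust42"), 3))  # 1.0 + 0.5/3 - 0.1 ≈ 1.067
```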

In summary, what we need to be exercised by is the surveillance of people, the collation of data from disparate, varied sources, the additional information that can result from combining those data sources and how those data are used.

The issue is that where those data are collected, they are machine read. People cannot tell a machine if they are shopping for their elderly neighbour who does not work in that place or require health insurance-----

Professor Gregory O'Hare

Absolutely.

-----so the difficulty is not with the collection of data but with the storage, who can see the information and what they can do with it, as well as with the fact that the machine that looks at it does not take account of the person, just the top-level data. A person can buy all the vegetables they like, but a machine cannot tell whether they eat them. They are only the person’s purchasing habits, not their eating habits.

Professor Gregory O'Hare

Yes.

I could spend a lot on sportswear, and while that does not mean I run every day, I might get cheaper health insurance as a result. There are serious issues we need to look at relating to machines examining people's behaviour and making decisions about them. That relates to purchasing habits but, at the end of the day, we are talking about issues that could have people sacked and that is why we need to be very cautious about this, without being Luddites, for want of a better word. We cannot move slowly because this will not move slowly, but that does not mean we cannot move at all.

I thank the witnesses for attending. I think this has been one of the most important meetings we have had in a while, with profound questions confronting us as legislators and policymakers in the Oireachtas. I take from this conversation that there is a clear call for regulatory oversight. Judging by the conversation to date, there is enormous potential for good from AI but also for bad, for both workers and wider society. The basic question in my head concerns whom and what we are regulating. I am interested in the views of Professor O'Hare and Mr. Lupton regarding the market for AI. I assume we are not talking about a small number of major companies. In light of that, is there an issue in that software such as ChatGPT is open source? Open source was seen as a force for good in the past, but is there now potential, not least because this technology is open to being abused, distorted or changed, for open source to be problematic? If we are trying to regulate something we do not fully understand - somebody somewhere might understand the inputs but perhaps not the extent of the outputs - do we need to start thinking about the regulation of AI in the same space as how we regulate, say, biological hazards? Mr. Lupton spoke about the concept of a land registry-type regulation system. Is that where we need to go with regard to how we regulate AI, whereby it is strictly regulated in terms of its generation and use because of its potential for harm in a range of spheres, not least the workplace?

Professor Gregory O'Hare

Open source has, of course, always been heralded for sharing, harmonisation, the provision of equality of opportunity and all the other phrases we have heard, and I have no reason to believe those benefits will not continue.

What we are seeing now is the potential use and intent associated with software that is openly provided. Had it not been provided openly, a clamour of questions would have been asked about the advantage accruing to one large organisation and the uneven playing field that would produce. Open source is still a good thing, notwithstanding some of the difficulties that have manifested themselves of late.

On the Senator's second question about how we can control this, I remind members that the history of AI is strewn with attempts to achieve a definition of artificial intelligence. Interim definitions have been proposed and largely agreed upon, but they have probably been superseded. Striving for a definition of AI is fundamentally difficult. It is almost akin to beauty: I cannot really define it, but if I walk into a room and my eyes behold it, I will instantly recognise it. It is difficult to legislate for something that is so difficult even to define. We need to move rapidly. Even the speed at which we are moving at a European level is not sufficient. The boundaries around this technology are not governed by political or geographic barriers.

Mr. Ronan Lupton

I support and associate myself with Professor O'Hare's final remarks. There are certainly issues. The EU AI Act looks at providers, deployers, distributors, product manufacturers, authorised representatives and affected persons under its scheme; there are six or seven categories. The question was who and what is being regulated. With respect to who is being regulated, four or five from that list ultimately suggest the regulation should be looking at distributors, deployers, employers and controllers. That is the sort of language being used. As to what is being regulated, it is the standards. If we put them in the workplace - to take the topic of this forum as an example - the standards are complex. We got into that a little earlier with Deputies Louise O'Reilly, Stanton and Bruton. We are considering issues that may not be straightforward. Who will disclose their algorithm for the payment of deliveries of takeaway fast food? They will immediately say it is a competitive issue and would breach their IP rights or whatever else, so they will not do so. That does not stop employees, for example, trying to vindicate a right to all the data about them on the employer's system, or to state the data are wrong and ask for them to be corrected under the data protection frameworks, if they are able to interact with the AI. There is again a question mark about that. I hope that at least answers the question of who and what is being regulated, but there may be separate lines of legislation for the "what" and for the "who" in respect of the two strands of the Senator's question. In other words, we might decide to bring in a regulation for employers that deal with employees and technology; there might be three or four Bills under that banner. On the "who" side, there might be manufacturers, distributors and deployers of the technology. If we wanted a wholesale-retail kind of model, people in the wholesale space are doing the development work and people in the retail space are using it and actively have it functioning in their businesses and societies.

The next thing the Senator mentioned was the land registry-type system. I will clarify, if I may. I picked up from Professor O'Hare's correct submission that developments are ongoing with Adobe and Microsoft - I think two companies were mentioned, but there are more - whereby they are using a flagging mechanism to watermark, to use that expression, images to confirm their bona fides. That is really what I was talking about, rather than having a land registry for all AI activity, which would be oppressive. However, for example, when there is a concern about the genesis of a particular image, if someone says it comes from RTÉ news and it has been watermarked in the right-hand corner, people can have some confidence - unless that has been faked - that the image is correct. This is the challenge: ultimately, there are systems faking things that appear to be correct. Again, to use my example of celebrities appearing to sell Bitcoin and Rolex watches and so forth, we have seen that it is occurring. However, that is more criminal activity than automated computer-generated activity, although I might be wrong about that. To clarify the point, I was not suggesting that everything should go into a big land registry-type database. However, the European institutions might decide that certain grades, such as the high-end, bigger, monopolistic players in AI manufacturing and deployment technology, might have some form of registry set up to be able to say they record in a certain way and behave ethically as a result. However, if we look at the AI Act recitals, we see that the EU will be slow to regulate oppressively in that space. It will say it would be a limitation on innovation to go too hard on regulation there. There is a middle ground.

I will go back to Professor O'Hare's point about whether we should be doing things nationally. The answer is "Yes". However, if we go too far, it will make us unattractive, and if we do not do enough, we will be a laughing stock for not doing anything. As I said to the Chair at the beginning of the meeting, today's topic is interesting. Usually, people come to these committees with a written submission based on draft legislation. Today is more of a general discussion pre-legislation, and that is interesting because there is such a wide debate on a wide spectrum of issues, some of which are beneficial. There is no doubt about the benefits AI can bring to certain sectors, but the problem is the downside for society. In the workplace, to some extent, if a union is operating, the employees and members of the union will be protected because they have collective bargaining power. One useful and interesting example was the recent change of ownership of Twitter. At the weekend, media were discussing changes to employment legislation as a result of what was seen as behaviour whereby the company did not consult the State properly. These are issues on which we were able to move quickly when we wanted to. We must focus on where the environment is going and try to get there before things change too dramatically.

Ultimately, all legislation can do is to regulate the relationships. In some ways, legislation will always be behind the curve of innovation in the AI space.

I want to pick up on the points made by ICTU about the EU AI Act and the disappointing, serious gaps that remain in that legislation. On the question of the assessment of risk, what seems serious to one person may seem moderate or low risk to someone else. My question is for Dr. Bambrick and Mr. Joyce. Having collective bargaining, trade unions and the workers' voice in the room is vital, but is it enough, especially if there is a lack of understanding of how the software was generated to start with? Does software need to go through an initial point or framework of regulation before it can be deployed in the workplace? Once it is in the workplace, should employers and employees be able to mediate on how it is deployed? I am not sure whether my question is clear. Ultimately, my concern is that it is not good enough to say that if we have collective bargaining, all will be fine. We need an earlier process to ensure whatever is being deployed in the workplace is appropriate and suitable.

Dr. Laura Bambrick

That is a valid point, especially because the focus is on regulation and that is right. However, if we are going to discuss the topic of AI in the workplace, another area we must look at is the potential for technological unemployment. That will also mean preparing today's workforce to be able to use that technology in order to prevent unemployment where we can. That is not trade unions suggesting that AI will lead us to a jobless future. As Professor O'Hare mentioned, it will be like previous industrial revolutions, in that unknown jobs and industries will be created but there will be winners and losers.

Collective bargaining will not be enough to prevent that. We will need to look at our skilling, reskilling and upskilling opportunities and we will have to look at the right of workers to have paid leave.

At the moment, we are reliant on having a proactive employer to walk their employees through the changes and future-proof them. On the other hand, I mention individual workers in media, law, academia or research. None of the witnesses here today is future-proofed in the jobs we do. How do we become proactive? We need our employers to do it because we have no rights to income protection and skills training leave; Ireland is an anomaly in that. We should look at regulation of AI in general and in the workplace, but we also have to future-proof our workforce. We have to prepare those workers whose jobs will be changed in part so that they can move with their jobs and, where jobs are displaced, we must look at how we move people into new jobs and ensure the jobs created are good jobs for workers to move into. There is a whole body of work in looking at AI and the workplace and, unfortunately, we have not had the opportunity to discuss that today.

We had the Financial Services Union, FSU, before the committee a number of weeks ago, looking specifically at its own sector. It is important to say we had Mr. Larry Broderick before the committee, before my time here, and he is being buried tomorrow. It is important to note that, and I extend my condolences, on behalf of the Labour Party, to the FSU and to Larry's family. There are other sectors that will be seriously affected. I do not know whether congress has started looking at other sectors yet or whether other unions have looked at the impact of digitalisation and AI on other sectors. Has that process commenced or are there plans in that regard?

Dr. Laura Bambrick

It is being done at a European level. Overnight, we have seen a popular German tabloid announce that 200 jobs are to go as it moves to leverage AI. That could be a cover for something else, but that is the official line going out: it wants to embrace the AI possibilities and use them within the media, be more forward-facing, put digital first and do things like that. As was mentioned in the opening statement, where previous movements in technology have impacted on blue-collar workers, the latest iteration of AI is looking at white-collar jobs. That is probably driving the public interest in it, because we are the people who have access to the airwaves and committees to make it centre stage as an argument.

There will be few jobs that AI will not be part of, because one of the leading uses of it is in HR. Getting a job, being monitored in it and decisions around retirement or redundancies will all involve AI. Even if you are not using AI in your day-to-day work, what you produce in your job will be overseen in part by AI. Few professions will not have AI used in them. The expert group on future skills needs, of which I am a member, produced a recent report which found that one in three Irish jobs is at risk from digitalisation, meaning there is a 70% chance of the job being disrupted. That does not mean those jobs will be displaced; the disruption could affect only part of the job, but that is the level of disruption AI can bring. As I said, that will mean future-proofing our workforce and preparing those who will not come out winners, because there are winners and losers in every technology change.

I heard the UN Secretary General equally put out an urgent appeal that AI is a great threat. No one has answered the question of the chances of global regulation in this sector, and there is a fear that, in an environment which seems to be polarised, these technologies are almost being weaponised routinely to commit cybercrime and so on. Is anyone broaching that arena? Open source technology might see this casually get into the hands of people with ill intent, and there is a fear that some will see this as a great opportunity to undermine a lot of global stability. I would be interested in where that debate is. How do we put in context the call by the UN Secretary General for something to be done urgently? He is head of the organisation to which you would think we would be looking to deliver change in this arena.

Professor Gregory O'Hare

It is without doubt a technology that can present a great threat. Because it is out there in the wild, as it were, companies or organisations that may not previously have had the opportunity to access this kind of technology and capability now do. You could potentially have relatively small and ad hoc organisations, businesses or countries that now have the opportunity to be quite disruptive. How do we have that global discussion? I would have thought it has to come through forums like the United Nations, but the Deputy made reference to the difficulties that have previously been witnessed in getting countries around the globe to agree on almost anything. Even global warming has had its difficulties in certain global constituencies. I would imagine that, while the vast majority will probably see that we need to move rapidly and logically to try to address this potential existential threat globally, there will no doubt be certain countries that will not be of that mind.

Is there any move to a Paris Agreement-type approach where-----

Professor Gregory O'Hare

There has not been any such move thus far that I am aware of. Watch this space. This is fast moving and evolving. Things are starting to emerge, and everything, including public opinion and legislative frameworks, is playing catch-up with the developments we have witnessed in recent months.

Mr. David Joyce

On the world of work, internationally we would look to the International Labour Organization, ILO. While the 2019 ILO declaration for the future of work referenced challenges posed by digitalisation, etc., it is a declaration so there is nothing legally binding from it. The challenge internationally is to get agreement right across the board from countries, and in the ILO between workers, employers and governments, on introducing a standard in this area. It takes so long to get things on the agenda for that discussion and we have heard today the speed at which this is all developing. It is quite a challenge to get something going that would be effective and binding on the international stage beyond governments. Our Government plays an important role internationally in trying to promote such an approach but the challenges are huge.

I noted what Professor O'Hare said earlier concerning the statement made by Elon Musk, Steve Wozniak and others that "AI systems with human-competitive intelligence can pose profound risks to society and humanity". I also note that he mentioned the Google scientist, Blake Lemoine, who left Google after he more or less said AI systems were heading towards becoming sentient, which would mean they could make decisions on their own. If we are talking, therefore, about a potential global supercomputer, all interconnected, gathering vast amounts of data and intelligence and then making decisions on its own based on that data, and having the power to do things, as Deputy Bruton said, including perhaps weaponising things, then we are talking about a whole different level altogether. As he also said, this process is moving very fast now. We know how slow it is to put any kind of legislation through Houses like this, never mind at a European level.

The advantage of a regulator is that he or she can move much faster to keep pace with developments. Thinking about the setting up of a gambling regulator, on which I did a great deal of work and which is on the way to being established here at last, such an officeholder can sit down with representatives of the industry and, as the industry adapts, the regulator can adapt, monitor and regulate. I note the UK has ruled out having a dedicated regulator for AI systems. I have two questions. To Professor O'Hare, first, where is this development going? Was the Google scientist who left right? Should we be this concerned? To Mr. Lupton, are we heading for a European-type regulator or perhaps national regulators? As was said earlier, we have so many of these high-tech companies based here that we will probably have a major responsibility in this space.

It seems to me that in this committee we are talking about work, industry and business, and this all overlaps, but this development has almost gone way beyond that context in many ways. I note as well that we talked about breakthroughs in breast cancer treatment recently which were, fundamentally, underpinned by AI. It can do super-calculations on a massive scale and come up with breakthroughs such as these. As we discussed earlier, this is only beginning to happen in science and medicine.

I have also read that AI is key to the EU's digital transformation and yet the Union is behind in this area. We have these two conflicting things happening. We must, first, progress the digital transformation and not be left behind globally and, second, we must ensure AI systems do not take over, literally. I am almost back into the realm of science fiction now but science fiction is rapidly becoming science fact. These are the three issues I would like the witnesses to address.

Professor Gregory O'Hare

As for the proclamations or concerns expressed by globally well-informed individuals, it is difficult to anticipate exactly where our destination will be in this regard. The things about which these people are counselling us are possibilities. There is nothing illogical about the considered opinion they are sharing with us. Will it come to fruition? Who knows? This is a technology that is pervasive and is manifesting itself in all different spheres of life, many for good, but we must also be cognisant of the possibility of more sinister uses of this kind of technology. I guess what the Deputy is referring to is a sort of sentient, almost autonomous reasoner that performs adjudications and reasoning far beyond any of the individual originators of these AI systems. That is a possibility in future.

Mr. Ronan Lupton

On the second question, regarding where we are heading from an EU and national regulatory perspective, the proposed Act does set out a national supervisory authority and a board structure in respect of notifications, approvals and all of that. Again, therefore, much turns on how quickly the proposed Act gets through the trilogue process. I hope this will happen quickly. It might not be perfect in respect of where we would like it to be, unfortunately, but ultimately it will start the process nationally. Would it be right for us to go ahead and install a regulator here with a significant budget and start the process based on what we see in the draft? Probably not, but then again the draft is not that far away from where the final text will land. I think that addresses the question in that regard. Undoubtedly, we will have flavours of national interest that member states will seek to protect. There are no two ways about that. The question is what latitude we will have under the proposed Act to make those choices.

Returning to the issue of the debate, I cannot help but reinject the point concerning those not represented by unions, for example, including those who are sole traders. I am a sole trader who practises in the Law Library in Dublin, and I can tell the committee that if the courts were made more efficient by the use of AI technology, we would embrace that because more cases would get through the system. Replacing the advocate in this context, however, is obviously something I would object strenuously to.

Mr. Ronan Lupton

For various reasons. We can see, though, where the efficiencies would come about just in that regard and on that side of things. I have read with interest aspects of the proposed AI Act. We have seen experiential evidence from the GDPR of how the European supervisory mechanisms do not necessarily work the way we would like them to. The national regulator may make a decision on a fine, for example, or a correction mechanism that other EU member states do not agree with, and the Commission does not have a role in that aspect of things. This changed in the digital services regime. Under the regulations to do with e-commerce that are coming forward, the Commission now has a role, which I think is better. The GDPR, therefore, may have to shift. Equally, this position is there in the proposed AI Act and there is input from the institutions. This is an important development.

Again, it is hard to see what will happen in this regard. I cannot help but associate myself with Professor O'Hare's comments. In respect of the first question, there are potential risks we just do not know about. It is concerning, however, when we hear people saying these computers are now sentient and that things are going to happen which should not. One suggestion would be just to unplug it, but that is very facile and possibly a little bit Luddite.

It might not let you unplug it.

Mr. Ronan Lupton

That is true.

Mr. David Joyce

On the proposed AI Act, it will not be perfect, of course, but it will also be a minimum standard. There will be nothing, therefore, to prevent Ireland from improving on it in a national context, as we have in the cases of some other directives.

That concludes our consideration of this issue for today. I thank all the witnesses who came in to assist the committee in respect of its consideration of this important matter. As was said, the committee, and probably other committees, will be discussing this matter further as soon as possible. This concludes the committee's business in public session for today. I propose that the committee now go into private session to consider other business. Is that agreed? Agreed.

The joint committee went into private session at 11.48 a.m. and adjourned at 12 noon until 9.30 a.m. on Wednesday, 5 July 2023.