Joint Committee on Enterprise, Trade and Employment debate -
Wednesday, 29 May 2024

Impact of Artificial Intelligence on Businesses: Discussion (Resumed)

Members participating in the meeting remotely are required to do so from within the Leinster House complex only. Apologies have been received from Deputy Quinlivan.

In October 2023 the committee reported on artificial intelligence in the workplace.

One of the key observations of the committee was that further discussions would be needed to explore the wide-ranging impacts that AI may have. The issue of how Ireland can best position itself so as to be ready to face the challenges posed by AI and to seize the opportunities that AI presents is likely to be a major priority for all stakeholders in this area. The committee is pleased to have the opportunity to consider these matters further with the following representatives from Amazon, Microsoft and Google. I am pleased to welcome from Amazon, Dr. Sasha Rubel, head of AI policy, Amazon Web Services, AWS, and Mr. Ed Brophy, EU strategy and head of public policy, Amazon Ireland; from Microsoft, Mr. Kieran McCorry, national technology officer, Microsoft Ireland, Mr. Jeremy Rollison, head of EU policy, Microsoft Brussels, and Mr. Ciarán Conlon, deputy director of public policy, Microsoft Ireland; and from Google, Mr. Ryan Meade, government affairs and public policy manager, Google Ireland.

Before we start, I wish to explain some limitations to parliamentary privilege and the practice of the House with regard to references witnesses may make to another person in their evidence. The evidence provided by witnesses physically present or who give evidence from within the parliamentary precincts is protected, pursuant to both the Constitution and statute, by absolute privilege. Witnesses are reminded of the long-standing parliamentary practice that they should not criticise or make charges against any person or entity in such a way as to make him, her or it identifiable or otherwise engage in speech that might be regarded as damaging to the good name of the person or entity. Therefore, if their statements are potentially defamatory in relation to an identifiable person or entity, they will be directed to discontinue their remarks. It is imperative that they comply with any such direction.

The opening statements have been circulated to members. To commence our consideration of this matter, I invite Dr. Rubel to make opening remarks on behalf of Amazon.

Dr. Sasha Rubel

I thank the committee for the invitation to share Amazon and Amazon Web Services' views on AI and its impact on business. My name is Sasha Rubel and I am representing Amazon Web Services today. My colleague, Mr. Ed Brophy, is representing Amazon Ireland. I will provide the committee with a short overview of Amazon and AWS's presence in Ireland and the material benefits we are already seeing for business resulting from AI adoption, and share some of our key findings regarding AI adoption blockers that we see as crucial to address in order to ensure AI benefits both Irish businesses and Irish citizens.

This year Amazon will celebrate its 20th anniversary in Ireland, having first invested here in 2004. We look forward to many more years of investing in the country. Amazon employs around 6,500 people in Dublin, Cork and Drogheda, as well as in additional regional locations, in diverse roles and disciplines. Over the last three years, Amazon opened its first fulfilment centre in Dublin, which created 500 new jobs, and only this month it announced that it will launch a new dedicated store in Ireland in 2025 to enhance the retail experience for Irish customers and small businesses.

We already have more than 1,200 Irish small and medium enterprises, SMEs, selling on Amazon, creating over 3,500 jobs. Irish SMEs on Amazon recorded €150 million in export sales in 2022, an increase of 25% from 2021.

Amazon Web Services, AWS, a provider of both cloud computing and AI and machine learning services, directly employs more than 4,200 people in Ireland. This direct AWS employment has grown at an average rate of 38% per year over the last decade, and Indecon Economic Consultants found that AWS increased economic output in Ireland by €11.4 billion since 2012. The positive impact of AWS on job creation extends far beyond roles created within the company itself. More than 3,000 other jobs have been generated in the firm's suppliers and contractors, all of which are directly supported by contracts with AWS.

Some of those familiar suppliers that have grown with AWS and are a core part of the Irish cloud eco-system are the STS group from Waterford, H and MV engineering from Limerick and Hanley Energy from Meath. In fact, Irish contractors who work with AWS now export services to more than 28 countries across the globe. I am particularly excited about what Irish customers of AWS are doing with AI. Ryanair, for example, is using AI across its organisation to improve operations; Wicklow-based Coolplanet is using AI to support the decarbonisation efforts of companies around the world; and Cork-based Cergenx is making a medical device that leverages AI to identify newborn infants most at risk of brain injury.

Europe stands on the brink of an unprecedented opportunity to grow the economy and tackle key social issues through AI, as some of the Irish examples I mentioned show. Uptake of AI by European businesses has increased by a third, 32%, over the last year and the majority of these businesses, more than 70%, reported increased productivity, innovation or revenue as a result. If last year's appetite for AI can be maintained, Europe could unlock an additional €600 billion in gross value added. This is a figure equivalent in value to the entirety of the European construction industry. It would bring the total economic impact of tech adoption in the region to €3.4 trillion by 2030, up from the 2022 forecast of €2.8 trillion.

In a recent survey, we found that most organisations expect to use AI and benefit from it. Some 93% of employers expect to use generative AI over the next five years and 49% of employers and employees believe AI could boost overall productivity. Among workers, 88% expect to use AI in their daily work by 2028 to facilitate ideas and creativity, to automate repetitive tasks, and to help them make data-driven decisions that provide better services for citizens. We also note that 2024 is going to be key to empowering businesses to capitalise on these positive changes.

Public appetite for AI is strong. The majority of European citizens believe AI will positively transform key public services, including education and healthcare, in the next five years. Businesses are now ready, so in addition to the 32% increase in AI adoption over the past year, we have also seen in parallel a 30% growth rate in cloud adoption, which is a foundational technology for broader digital adoption and for reducing the carbon footprint of innovation. There is strong belief in the transformative power of AI, with 50% of European citizens stating they are confident that AI will create more opportunities than risks regarding job security and the future of work.

However, there are blockers and opportunities for AI adoption that countries must address so they do not leave this powerful technology and what it can unlock on the table. The first is that the regulatory environment must provide certainty. In a survey we commissioned from Strand Partners, we found that 31% of businesses say that regulatory uncertainty is a blocker to AI adoption and that they intend to invest up to 48% less over a three-year period if this regulatory uncertainty is sustained, and I look forward to talking more to the committee about this. The second is that digital skills must be addressed. There is an ongoing digital skills gap, where 67% of businesses predict that in five years' time, a candidate's digital skills will be more important than their university degree. In parallel, 61% of businesses in the EU say that not being able to find staff with the right digital skills is blocking them from adopting AI. When we talked to employees, they said they would love to learn these new skills, but they lack the time and the cost of this training is prohibitive. This is one of the reasons we launched our AI Ready commitment last year, which makes available more than 80 self-paced classes that teach what AI and machine learning, AI/ML, are and how to develop and deploy them responsibly. The third is that we need to close the gap in the rate of adoption, where bigger businesses are adopting AI at a faster rate than start-ups. We all know, and I say this as a proud European, that start-ups are the backbone of the Irish and European economies. We need to be providing start-ups with the necessary skills to facilitate AI adoption.

How do we address those blockers? First, we need to democratise access to AI and close the compute divide. There is no AI without the cloud. Second, we need to ensure that resources are provided to start-ups to support innovation, which is why we launched our generative AI accelerator to provide the support necessary to start-ups across the EU and facilitate adoption by smaller companies.

Third, we have to ensure access to the cloud. In light of a shortage of tech skills, the cloud is a strategic advantage. Lastly, we must invest in understanding responsible and trustworthy AI. We are convinced that responsibility drives trust, trust drives adoption and adoption drives innovation. Responsibility and innovation go hand in hand.

Mr. Kieran McCorry

I thank the Leas-Chathaoirleach and members of the joint committee for inviting us to speak today. I am the national technology officer at Microsoft Ireland. I am joined by my Brussels colleague, Jeremy Rollison, who is the head of EU policy. Microsoft established operations in Ireland almost 40 years ago, in 1985. We are an all-island entity with regional and global teams in both Dublin and Belfast, totalling some 4,000 employees. Our roots go deep in Ireland and we are passionate about supporting Ireland's ambitions at an enterprise and social level.

Technology change, like any change, creates both opportunity and challenge. With artificial intelligence becoming more prevalent, it brings numerous opportunities for Irish businesses and society while raising legitimate questions about its use. We know that over the last decade or so, Irish businesses and workers have responded to change with courage and creativity. This provides a source of confidence, allowing us to answer the following question positively. Can Ireland, including our businesses, workers and Government, respond with agility and a shared purpose so that it leads responsibly on the use of AI and enjoys the many positives available while minimising any negatives? Microsoft is convinced that Ireland can answer "Yes" to this question. If we can help along the way, we are ready and willing to do so.

Mr. Jeremy Rollison

I thank the committee for the opportunity to be here today. I want to talk about Microsoft's approach to AI. We believe generative AI will democratise access to information, enable businesses to be more productive and more efficient, empower individuals to work more effectively, and create new jobs and industries of which we cannot yet conceive. Technology companies need to build confidence among users in the workplace and in society, addressing the very reasonable and legitimate public and political concerns about these new AI technologies. Microsoft is committed to developing and deploying AI safely and responsibly in partnership with society and we have taken very deliberate steps going back several years. These include developing our own responsible AI programme in 2017, followed by the adoption in 2018 of our ethical AI principles. The creation of our responsible AI office followed a year later. We published a responsible AI standard, moving these principles into practice, sharing those learnings with others and updating it continuously as this technology evolves. Just last year, we published our five-point blueprint for public governance of AI. We are engaging in conversations around the world about how to most effectively regulate the risks around this technology.

We support the European Union's AI act with its risk-based approach and we acknowledge the work of the Department here in developing the national strategy, AI - Here for Good, appointing an AI ambassador, and more recently, the AI advisory council. These measures, including these committee meetings, are a sign of intent and engagement, seeking to navigate these new waters with care and purpose.

It is with this careful and mindful approach and continuous engagement with governments and key stakeholders that Microsoft brings new AI tools to market with an obligation to do so responsibly and safely. We believe Ireland's businesses and workers are as well positioned as any to harness the opportunities of AI. We believe Ireland has an unmatched technology cluster, a vibrant research and education sector, the best educated workers in the EU, and an enterprise base and workforce that has shown repeatedly that it can adapt, grow and thrive when presented with a challenge.

Mr. Kieran McCorry

Globally, McKinsey estimates that productivity gains from generative AI could add the equivalent of up to $4.4 trillion annually. These are big numbers. We calculate that it could add €18 to €20 billion to the value of the Irish economy each year. Last November, with Trinity College, we conducted research on generative AI adoption in the workplace. Some 49% of respondents were already using generative AI and a similar number expected productivity gains. Notably, multinationals here are more likely to be adopters than indigenous businesses. Perhaps this is a point worthy of further discussion.

On jobs impact, a 2023 OECD employment outlook report found that the net impact on employment was ambiguous with some labour displacement but also additions driven by productivity gains. Highly skilled, non-routine roles are the most exposed to generative AI progress and there is little evidence of significant negative employment effects due to AI. As we look ahead, the top three areas we believe are key to Ireland harnessing the positives and minimising the challenges are political alignment and support, SME capacity building, and skilling programmes. We believe the first requirement is well in hand with EU and domestic adoption of legislation and regulation. Still, it requires Government Departments, agencies like Enterprise Ireland, SOLAS and the local enterprise offices, and bodies like the enterprise digital advisory forum to work together.

We support the committee's October 2023 recommendation for this Parliament to explore the establishment of a joint committee focused on the use and deployment of AI, open to all voices and all communities. Second, for SMEs, there are issues of scale, capacity and resources. AI offers an opportunity for them to reduce admin and focus on why they got into business in the first place. They may need support to get started and assistance at appropriate times as their adoption progresses. There are many instances of the Government taking on similar challenges over the last ten or 12 years.

The final and possibly the most crucial response is that of skilling. The World Economic Forum's Future of Jobs Report 2023 revealed that a quarter of all jobs worldwide will change by 2028 and 60% of all employees will need reskilling by 2027. These findings are often a source of concern for workers. Microsoft understands this and has made significant commitments and introduced programmes to help to support the Government and communities through training when and where people need it. Our Skill Up Ireland initiative makes AI skills training available to everyone: through our Dream Space STEM programme, through partnerships with higher education institutions, via NGOs like Fastrack into IT, and via the Connected Hubs network around the country. We would be happy to discuss how the Government and industry can develop partnerships to add more bandwidth and agility to provide learning opportunities for everyone.

At the recent Digital Ireland event in Dublin Castle, our visiting vice president, Mary Snapp, in response to a question about who should be driving regulation, mentioned that nobody elected Microsoft. This goes to the heart of our approach. Elected legislators and appointed regulators should set out the guardrails and as an industry we must respond substantively and be part of that conversation. We also look forward to discussing how some Irish organisations are already advanced in their AI adoption. In education, the Limerick-based start-up, Nurture, uses AI to support school teaching. In health, UCD’s AI_PREMie is developing better diagnostics for pre-eclampsia. In the legal profession, Irish company TrialView is embedding AI in its case management products. An Post is developing new business opportunities using generative AI. Many Government Departments are embracing AI to develop new systems to make accessing information simpler. We welcome members' questions and the discussion to follow and hope that the conversation continues beyond today as we work together to ensure Ireland leads in the AI space and offers benefits for everyone in our society.

Mr. Ryan Meade

I thank the Leas-Chathaoirleach and members of the joint committee for inviting me to speak today on the topic of artificial intelligence and its impact on businesses. Consideration of this topic is extremely timely, given the advances in generative AI technology in recent months, and I welcome the committee's initiative to hear stakeholder perspectives.

Google's mission is to organise the world's information and make it universally accessible and useful. The AI era is no exception. We are committed to developing AI technologies that benefit society, uphold ethical standards, and foster trust and transparency in the use of AI. Since 2016, we have been an AI-first company and in 2018 we were one of the first companies to establish a set of AI principles that describe our commitment to developing AI technology responsibly and our work to establish specific application areas we will not pursue. AI already powers many of the products people use every day, from up-to-date travel information and eco-friendly routing on Google Maps, which gives the most fuel-efficient route, to scanning for spam messages in Gmail.

While AI has been a part of Google's global innovation story over the past decade, we stand at a pivotal moment in its development. The pace of progress is accelerating. Millions of people are now using generative AI across our products to do things they could not do even a year ago, from finding answers to more complex questions to using new tools to collaborate and create. At the same time, developers are using our models and infrastructure to build new generative AI applications, and start-ups and enterprises around the world are growing with our AI tools. At Google we are approaching this work boldly and responsibly. This means being ambitious in our research and pursuing the capabilities that will bring enormous benefits to people and society, while building in safeguards and working collaboratively with governments and experts to address and mitigate potential risks while working to realise the potential we want to achieve.

Ireland was among the first EU member states to adopt a national AI strategy - AI - Here for Good - setting out how Ireland can be an international leader and seize the opportunity to drive productivity across our economy. The strategy in particular stresses the enormous benefits of widespread adoption of AI technologies across all types and sizes of businesses, including start-ups and SMEs. A total of 75% of larger businesses with more than 250 employees told us that they expected generative AI to significantly improve the productivity of their business in the next five years. By contrast, smaller businesses were less aware of the potential benefits of generative AI. For example, businesses with fewer than 50 employees were only half as likely to say that they were already using generative AI for help with writing and drafting documents as those with more than 250 employees. Looking at the next five years, just 31% of small businesses thought they would be likely to use it. As we all know, SMEs are the backbone of the economy, employing well over 1 million people in Ireland and 100 million across Europe. In our previous research, SMEs named knowledge and skills as one of the main barriers to starting, or continuing, their digital journey.

With every digital transition, we have seen how skills are vital to unlocking new opportunities for workers and businesses and to helping them innovate and grow. According to research undertaken by Public First, generative AI can save the average worker more than two weeks' worth of work a year. AI can help workers accomplish more with their resources and focus on the more rewarding aspects of their work. To ensure these opportunities are truly available to everyone and AI's benefits are widely shared, we need to take a collaborative approach and deploy a comprehensive, thoughtful workforce strategy that considers a wide range of perspectives.

We must continue to invest in people and in education and training programmes that help workers and small businesses of all backgrounds learn to use AI effectively. At Google we are building on our long-established digital literacy and skills training programmes to help ensure the opportunities presented by AI can be open to all. Since 2022, we have been partnering with Enterprise Ireland and the local enterprise offices on You're the Business, a digital upskilling initiative for Irish SMEs, with training that is free of charge and open to everyone. Yesterday we added a new educational pillar to our website, Get ahead with AI. There, SMEs can access AI tools and training through on-demand videos. The courses cover the basics all the way up to showing how to overcome challenges a business may currently face, such as assisting customers through chatbots, streamlining operations or boosting marketing abilities.

We have also collaborated with Coursera to offer Google AI Essentials, a 15-hour self-paced online course taught by AI experts at Google to help people across roles and industries get essential AI skills to boost their productivity, with zero experience required. With the Insight SFI Research Centre we have established a €1.5 million scholarship fund to support AI education in Irish third-level institutes for students from under-represented communities.

We are currently seeking applications for a €15 million AI Opportunity Fund for Europe from Google.org. The fund is an AI skilling initiative designed to equip workers, especially those who need more support to upskill themselves, with the foundational AI knowledge and tools needed for long-term positive professional outcomes. We have issued an open call, and organisations which reach those workers are invited to apply for support. Successful applicants will receive funding and comprehensive training based on Google and external AI courses. This call is open until 28 June, and I am very grateful for the opportunity to plug it here today.

Enabling businesses to capture the benefits of AI will require partnership between governments, industry and civil society to invest in AI infrastructure and innovation, support training, develop sensible regulatory frameworks and promote widespread adoption and accessibility. We look forward to continuing this collaboration.

I invite members to discuss the issues with the representatives. I remind those participating remotely to use the raise hand feature and, importantly, to cancel it when they have spoken.

I have a couple of questions for everybody and my first question is for Mr. Meade. Concern has been expressed in the US about AI Overviews. Much of what I have read shows that it gives results that can be seen as misleading, such as adding glue to a pizza to keep the cheese on it. More seriously, it returns results which state that serial killers and nuclear war have benefits. This is misleading and deeply unhelpful. Does Google intend to launch this product here? Are there any concerns about this misinformation?

Mr. Ryan Meade

As Deputy O'Reilly says, an AI Overviews product was recently rolled out in the US. It has not been launched here in Europe. As I understand it, this feature provides an AI-generated overview of the search results for a particular query. It has been in testing for a while. One of the reasons it is being rolled out is that it is very helpful to users and they are more likely to click on the links for further context that are provided by AI Overviews.

Since it has been rolled out, there have been a few stories about specific queries that have been producing misleading results. To describe how the system works, it is a large language model that produces a piece of text based on the results returned for a particular query. These cases are not typical of the experience of most users with the product. In most cases they are searches for uncommon pieces of information with a limited number of search results. We continue to refine these models and continue to test. The product is part of our Google Labs experiments. Each of these instances is fed back into the processes that review and refine the system and make it safer. It has not been launched here.

My question is whether Google intends to launch it here. I am aware it has not been launched here.

Mr. Ryan Meade

I do not have any information on that. Any plan to launch would be in conjunction with discussions with regulators and relevant-----

I am sure they would have plenty to say about it. Certainly, what I have read in the media would not fill me with confidence. I appreciate the point on it being a discrete and small number of results but some of them, which I will not ventilate here because they do not deserve an airing, are incredibly disturbing and would be potentially disruptive in the political realm as well as in other ways.

Mr. Ryan Meade

Mixed into the genuine cases are some mock-ups of results that were never returned by the search engine.

When will it be considered safe to launch?

Mr. Ryan Meade

I cannot give an answer on that. This is a discussion that is continuing.

I presume all of what we read about in the media will be dealt with in advance of a European launch.

Mr. Ryan Meade

We absolutely want to launch a product that is as safe as possible. There is no question about that.

Of course Google does. Mr. Meade would never be hanged for that answer. Everyone who comes in here would say they want to launch as safe a product as possible. This does not necessarily mean that the product is safe.

I take the point, however.

I have some general questions and I am conscious of the time limits. I will ask our guests about the mindsets in respect of AI, which is a subject in which this committee has taken a particular interest. We have seen there can be incredible benefits but we also hear from people who are very concerned that AI will take their jobs or ruin their companies. On the other side, people are saying that AI will not be able to do their jobs or they do not have time to deal with it. Neither of those mindsets is particularly helpful. We need to bring balance and an open-minded approach to the consideration of the potential for productivity. How can we win the debate on AI? With respect to the previous question, that is possibly not the best way to win any debate. There is a role for nearly all of these things in business, politics, the media and education. How can we bring all of this together to ensure we are having a well-informed debate and that we are in a position to win people over, as it were? Our guests may contribute in any order.

Mr. Jeremy Rollison

I am happy to take the first stab at it. This is a question we hear a lot. We are hearing exactly the same formulation. There are reasons to be optimistic, and we are seeing some of that, but there are many risks and concerns. In their interventions at the outset of the meeting, my colleagues placed a big emphasis on skilling. I do not want to put words into the mouths of others but I think we are aligned that it is not just about some of the AI skills but familiarity with the technology and its risks. I often reference the fact that even my mother asks me how to get AI. We have a lot of education to do in that regard.

There are different types of skills involved and it is not just about training the prompt engineers and data scientists of the future. We must talk about the risks, the disinformation context and the education that needs to go on across all of these different sectors. We have a big job to do. It is part of the dialogue. One entity alone cannot win that debate. We need a variety of conversations. I would like to lean into that skilling. Perhaps we need to come up with a better word because it is about awareness and training, and goes beyond some of the computational skills. It is about how AI works in a day-to-day environment as opposed to in a professional environment in a tech company. We need to consider how it works in automotive and manufacturing contexts, and how it impacts hairdressers and plumbers. Where will we see any benefits from that side of things? There is a big task to do.

There is, and my question is how we do it. Industry will have to do the heavy lifting but it will need partners in education and politics, etc.

Mr. Ryan Meade

All of our opening statements mentioned skills, and we have gone into that. I would add adoption to that. One of the programmes we offer online is Google AI Essentials. It is a short course but by the end of it, anyone using it has experienced what it is like to use a generative AI tool, and it takes away a lot of the mystery that might otherwise apply.

The other point is around AI literacy. As we have heard, these technologies work a bit differently from previous waves, such as the World Wide Web, the mobile age and so on. This is a new era and there are things people need to know about how it works. They need to know, for example, how a large language model works and the nature of the results it might produce, how their data might be used when they are interacting with it and so on. We are looking at how we can play a role in that regard. Many of the organisations the Deputy mentioned, including SOLAS and others, definitely have a role. An opportunity exists now. It is easy for people to take their first steps, even by experiencing AI and having a go with it. That might not have been the case a year or two ago.

The Government's AI strategy anticipated this issue in 2021. If we are going to drive the best potential of AI adoption, we need to deal with trust and take everyone along together. There are some good actions in the strategy in that respect.

Dr. Sasha Rubel

I will complement what my colleagues have said. We think it is essential to address the skills gap. The responsibility lies in increased co-operation between the public and private sectors. We need all hands on deck for this. That is one of the commitments we made in the engagement we had as part of the donation of free digital technology kits to primary schools across Ireland. This is part of the enabling digital technology in primary schools programme. We see the impact of collaboration in addressing the issue.

As human beings, we are afraid of what we do not understand. It is essential to encourage understanding of AI and machine learning, AI/ML, technology, but we do not need everybody to be an AI/ML coder. We need people to understand what the technology is, how to address its risks and how to develop and deploy it responsibly. That is one of the reasons we have placed a particular emphasis not only on the aspect of digital literacy but also on literacy related to responsibility and what that looks like. As I mentioned in my opening statement, there are incredible opportunities to use this technology to address social challenges in Ireland and beyond.

To complement what my colleagues have said, one of the things we are particularly committed to is educating citizens about how AI, as a technology, can also be used to address some of the challenges presented by AI. I will speak concretely. At the top of the minds of the policymakers here today will be the question of generative AI and misinformation and disinformation. That is one of the reasons we adhered to the Munich tech accord, as did the organisations represented by my colleagues here today. The accord looks at operationalising methods to harness AI to mitigate those risks by, for example, watermarking content to combat deepfakes. We are developing AI technologies to automatically detect and remove child sexual abuse material online. We can use AI to automatically detect and remove biased language in online job advertisements that discourages women from applying. In the same way as there are risks related to this technology, there are incredible opportunities to use it to address those risks. That is why we see digital skills as absolutely essential in empowering people to innovate and develop these solutions so we can mitigate those risks responsibly and harness the opportunities.

I apologise for being late. I was in the Dáil Chamber speaking on road safety, which I understand AI can also play a part in by looking at traffic flows and so forth. I thank our guests for coming in, for their presentations and for the work they are doing. This has been described as the fourth industrial revolution. It is moving so fast, which is an issue with which I intend to grapple. So much is going on.

I was at Google headquarters yesterday for the launch of the Generative AI in Ireland economic opportunities study, which was fascinating. As part of the presentation, it was noted that, similar to other northern European countries, Ireland lags behind globally on AI innovation drivers such as talent, research, development and commercialisation. The gap suggests that Ireland is at risk of losing its front-runner position and needs to focus on strengthening its strategic efforts on AI and AI-related innovation drivers. We cannot afford to fall behind at all because every other jurisdiction is moving so quickly. The whole area is moving quickly.

We spoke last week about universal translators, which are amazing. You can talk to someone in your language and they hear you in their language. It is like something from "Star Trek". We have gone beyond "Star Trek" in some of the stuff that is going on. Science fiction has become science fact. I would like to hear our guests' thoughts on innovation and innovation drivers, and what we need to do to ensure we do not lag behind.

Mr. Ryan Meade

I thank the Deputy for attending our launch yesterday-----

It was fascinating stuff.

Mr. Ryan Meade

-----and for reading the research. The research we launched yesterday came from Implement Consulting Group, which has looked at the potential economic opportunity for Ireland of generative AI. It paints a very positive picture overall. In terms of benchmarking Ireland against peer countries across northern Europe, it shows that Ireland has some strong foundations in the operating environment for businesses, the Government strategy and infrastructure. Where there is a slight warning signal is around the innovation drivers, as the Deputy said, which are things such as talent, research and development. That is not to say we are way behind any of these peer countries but it is about looking to the period ahead when people will be adopting AI and reaping the benefits. Will Ireland maintain its strong leadership position? That is the question.

The talent in our workforce and the research and development in our universities are issues that the Government has focused on over the years.

The picture now is that this is a new wave and we need to refocus and re-energise those efforts. What we have tried to do with our funding for the Insight research centre, which is across all Irish third-level institutions, is to provide a scholarship fund that will allow students from under-represented communities to take advantage of courses that are related to this area. That is a really important thing to drive the pipeline of future talent in this country.

Again, with talent we always have to think about the great advantage Ireland has in terms of remaining open to the rest of the European Union and the rest of the world with respect to bringing the best people here and so on. The Deputy is right to highlight that and I think it is something that should be given very serious consideration. However, I think some of those things can be addressed by simple adoption of AI technology. The more businesses are using the technology, the more talent is available to be drawn into research and development and so on.

I thank Mr. Meade. I want to address the next question to Mr. McCorry. I thank him for his presentation and the work he is doing here. Mr. McCorry pointed out in his presentation that a quarter of all jobs worldwide will change by 2028, and 60% of employees will need reskilling by 2027, which is not that far away. Mr. McCorry might let us know what kind of reskilling he has in mind there.

My other question is on Mr. McCorry's reference to the requirement for Departments and agencies like EI, SOLAS, the LEOs and so on, and the enterprise digital advisory forum to work together. Is there an implication that they are not working together and that they should work together? Are there other agencies that should be getting involved in this collaboration?

Mr. Kieran McCorry

I thank Deputy Stanton. First, I might respond a little to the innovation point the Deputy made earlier. He is quite right in talking about the pace of change. It is tremendous. However, I would agree with Ryan from Google that there is a great opportunity here for us all to do more in this space. Consider the opportunities to do more in education and skilling, especially around the changes we are seeing in the technology: over the past 12 months we have talked about large language models, whereas we are now seeing a shift in the application of the technology to so-called small language models. There is headroom there for the overall skilling programme and what we are doing from an education perspective to focus on this and on innovation in that space. I also note that Ryan talked about scholarships that are being operated with various educational institutions. There is an opportunity to do more in that space.

That ties into the overall reskilling question the Deputy asked. The requirements for skilling are very broad. Almost everyone is going to be affected by this technology in some way, and we are not talking about niche skilling in terms of people being able to understand how to program in AI or machine learning, as Sasha mentioned earlier. In fact, the way we have seen the technology evolve now is that people who have no background or expertise in that area are able to avail of so-called no-code programming. The technology is evolving to allow people to do quite advanced things with technology they do not need to be expert in, but we need to open up opportunities for these people to see it. I do not think we have all of the ducks lined up for that particular element of it.

With respect to the Deputy's second question on whether the entities I mentioned are not working together very well, I think they are, but this is an opportunity for them to do more. We welcome, as Jeremy mentioned in the opening statement, the introduction of the AI advisory council. I sit on the enterprise digital advisory forum as well. I think the Department of Enterprise, Trade and Employment has done a great job in defining the strategy but I still think there is more work to do on how we make it real for businesses, and how we do outreach and communicate more to both broader society and businesses on how they can do more with regard to the uptake of this technology. I think that is a gap, and there is an opportunity to do something there.

I thank Mr. McCorry. I will address my final question to the representatives from Amazon. They said that businesses of all sizes must have access, and currently AI skews towards the larger companies, 51% versus 31%. What can we do about that? How can we address it?

Dr. Sasha Rubel

I thank the Deputy for the question. We are particularly focused on empowering start-ups to have access to the technology. It is one of the reasons we launched our service called Amazon Bedrock, which makes it very easy for smaller companies to adopt the technology and to innovate with AI. We have three commitments in our DNA. One is customer choice, another is confidentiality linked to data security and privacy, and the third is control of costs. One of the things we hear from start-ups is, "Is innovating with AI not expensive?" What we have done through Amazon Bedrock is to make available different foundation models so that start-ups can access the model that is best suited to their use case and pay for exactly what they consume when running inference on or fine-tuning the plethora of models we make available through Bedrock.

In addition to this commitment to customer choice, the other element we have is a series of programmes that are particularly focused on supporting start-ups in scaling. We have what we call our AWS generative AI incubator, which particularly targets start-ups that are leveraging AI for social impact. We are about to launch our AI for Good accelerator, which is dedicated specifically to looking at companies and organisations, but also non-profits, that are harnessing AI for impact.

In parallel to these innovation incubators, we also offer a series of skilling initiatives dedicated to start-ups that would like to innovate with AI but do not necessarily know how. I would like to underline that these programmes are especially dedicated to getting under-represented groups more involved in AI, especially women. We have a series of initiatives, for example, our AWS re/Start programme and our AWS GetIT programme, which help start-ups identify the issue they would like to address and how AI can help, and support them in transforming these socially impactful ideas into action and viable businesses.

I thank Dr. Rubel very much for that. There is a huge amount going on here. It is very hard to take it all in from everybody; it really is. We on this committee are particularly interested in assisting people from disadvantaged backgrounds who cannot get jobs. We did some work on people with disabilities getting into the workforce. My sense is that AI could assist people with disabilities in a huge way in overcoming the disabilities we put in their way. This is also the case for people from disadvantaged backgrounds and others. I see there is a fund here to help with that and to get people in there.

I am involved in the Open Doors Initiative, which I think Amazon is involved with as well. It focuses especially on getting jobs for people who find it hard to get onto the employment ladder, and also helping companies to identify people and employ them because we are beyond full employment, we are told.

On the issue of people with disabilities and AI helping people to work and overcome the disabilities that, very often, society puts in their way rather than anything inherent to themselves, can we please get a comment from each of the witnesses on that in the final minute or two that is left?

Dr. Sasha Rubel

I would answer that question on two levels. One is that we are very committed to looking at how AI can increase accessibility, especially for people who are differently abled. As an example, our AI-powered Alexa looks particularly at how we can leverage AI to make sure people who are differently abled have access to the information they need in a format they can understand. In parallel to AI services dedicated to ensuring inclusivity, we also have programmes dedicated to making sure individuals who are differently abled can enter the workforce and work in technology. As the Deputy mentioned, we have several programmes dedicated specifically to this.

Mr. Ryan Meade

I might answer the question with an invitation, because we recently launched in Dublin something called the Accessibility Discovery Centre, which is a showcase for a lot of the AI tools the Deputy is talking about and other technological tools that can help people with disabilities to engage in their daily and professional lives. I can follow up with some further information. Anyone is welcome to drop in and have a tour. The Deputy is quite right; there are a host of technological AI tools that will have an impact here, from making online meetings more accessible in this world of hybrid work, including AI-generated screen readers, subtitles and so on, to a whole host of other things. We have three Accessibility Discovery Centres in Europe now: one in Dublin, one in Zürich and the other in London. I can follow up with more information and would be very happy to have the Deputy down.

Mr. Kieran McCorry

On the accessibility element, we have an accessibility programme as well as part of our overall skills training programmes globally and locally. We are very focused on that. There is one thing I would say - and we are already showcasing some examples of this - which is the benefits that derive from using generative AI technology for people with accessibility issues. In particular, there is one example we are showcasing at the moment in relation to people with autism and tools like Copilot in Microsoft Teams, where people who may have attention difficulties can use Copilot to get a summary of what has happened if they have become distracted by people joining or leaving those meetings. We are seeing real-life examples of how that is useful.

I will make one other point, which Dr. Rubel raised earlier, about diversity and inclusion overall. Some recent LinkedIn data shows that women make up only 27% of those working in AI in the workplace. There is a real discrepancy there that we need to address in order to redress that imbalance.

I thank the witnesses. My time is up.

I thank Deputy Stanton; your timing is perfect. I now call on Deputy Shanahan.

I thank all the guests for coming in this morning. It is great to be having these conversations. I say a special thanks to the lads from Amazon who brought some committee members out to see a factory. That was a real eye-opener. Maybe committee members could get an invitation from Google and Microsoft to go out and meet those guys as well.

As speakers already highlighted, there is so much to this and so many different areas to talk about. It is the next industrial revolution and it is already under way. The Dublin Tech Summit is happening in the RDS today and tomorrow. I certainly hope to get there. People will be blown away when they go there. I have already seen some of the stuff that is going on.

I will make just a couple of points to the general group first. The discussion has highlighted political alignment, SME capacity building, skills promotion and all of that. I have a question on the State's ability to hang onto this tiger by the tail. We may have it by the tail at this moment in time but can we hold onto it? The challenges are immense. The witnesses have highlighted the issues around adoption of the technology at the moment: the multinationals are fluid enough to move, the return is investment based, and they are adopting the technology and allowing people to train up, as opposed to the indigenous business sector. I would also put the State in that space as well. We have people in the State still using eight-year-old laptops running old Microsoft systems, as well we know. How is the State going to manage this process of change given that business is moving at light speed and the State is coming behind on a donkey and cart? Will the witnesses please address that?

Mr. Kieran McCorry

I thank the Deputy for mentioning the Dublin Tech Summit. I actually need to depart this meeting at 11.30 a.m. because I am speaking at the summit at 12.10 p.m., and Dr. Rubel is speaking at it this afternoon. The Deputy has highlighted a really important issue. We are seeing - almost bizarrely - that the adoption of generative AI in the public sector is happening at an impressive enough rate compared to what we have seen with other technologies in the past. There certainly is a job of work to be done there to make sure the State does enough to support businesses of all shapes and sizes.

In my opening statement I referred to the adoption of generative AI in multinational companies and it is considerably higher than the adoption in indigenous industries. This is really because of the capabilities and the resources that companies such as ours are able to provide. There is an opportunity for the State to do more to support those businesses. It also relates back to the skills issue as well. There is an awareness that needs to be driven, so that businesses are clear on the extent of what generative AI can do for their businesses of all shapes and sizes. Communications is part of this alongside other skills but this really will be relevant for businesses of all shapes and sizes. Just last week I was on Inishbofin working with small businesses to talk about how generative AI could help them improve their businesses. It was a welcome exercise for all involved. We are not even talking about very sophisticated or very expensive technology here. Even some of the free tools that are available already have the capability to make a substantial difference to businesses. Their problem was that they were not aware of this and they were not aware of what they could do. There is both a skills and communications requirement.

Private business is good at looking at opportunities, so that is less of a worry. The worry is how the State will continue to be able to support this space. If a forklift driver in their 50s is told, "We need you to reskill because we are going to put in robots here to deal with logistics and plant", what reskilling will such people do that will potentially keep them in a job? The issue of displacement will be a significant problem. The witnesses' organisations must be looking at this across all the world's economies and not just Ireland, as this is a problem everywhere. Where do the witnesses see that going? Displacement is going to be a fact of life. There will be a significant number of people who are not going to be able to keep up with this or who will not be able to create a skill set such that they can replace the income they currently get. What should we do to approach this and to mitigate and plan for it?

Mr. Kieran McCorry

In the first instance, I would say that these changes are not going to come into effect overnight. There will be a tail associated with them and there will be a continuum. I mentioned that Microsoft supports the initiative in the committee's report from October 2023 for a specific committee to look into this. It would be a great objective of that committee to look at the specifics of what can be done in those areas.

Multinational companies are quite good at looking at the impact generative AI technology will have on office workers or white-collar workers - I believe Deputy O'Reilly mentioned this on 15 May - but are less good at trying to determine exactly what impact it will have on those working in more labour-intensive tasks. I have no simple answer to the question but it is something we need to look at in much more detail.

Mr. Ed Brophy

Deputy Shanahan was at our fulfilment centre last year and it was great to welcome the Deputy and other committee members. The members would have seen concrete examples of how advanced technology, including AI, can enable the work of the forklift driver and that kind of worker the Deputy just mentioned. For example, using AI has cut down on the walking that workers have to do in our fulfilment centres. The members may be aware that our fulfilment centres and warehouses are the size of nine football fields. They are very big places, so cutting down on the amount of walking workers must do has made their activity more efficient and has reduced their exposure to safety issues in the types of activity they do. It has improved the ergonomics of their work in what they are picking, what they are packing and what they are storing. This is all using advanced technology.

I do not have an answer for how the forklift driver, for example, can necessarily be assisted with AI in terms of reskilling, but the technology provides an opportunity to make their job more fulfilling while freeing them up to do more skilful tasks.

I apologise that I could not be here earlier. I warmly welcome all our witnesses. These hearings are very important because, as has been said a number of times, there is a lot that we still need to understand given we are still very much learning about generative AI. I am very excited about the prospect of what it can deliver in the public and private sectors but I am also cognisant of the risks.

A lot of the discussion today is about the uses and the take-up within businesses. From a regulatory perspective, the EU's Artificial Intelligence Act has just been passed. I am looking at the documents here and I believe the Microsoft representative is the only one who has explicitly referenced the legislation. I want to hear from all three organisations about what changes they have put in place to prepare for the implementation of the Artificial Intelligence Act. I believe there is a concern that for some organisations it would be business as usual. Obviously the intent behind the legislation is that there would be significant change, particularly in respect of the more harmful generative AI practices. I would like to hear in detail from the three organisations what changes have been put in place to prepare for the implementation of the AI Act.

Mr. Jeremy Rollison

I thank the Senator for the question. We are examining the matter every single day, given where the legislation is and the legislation happening around the world that is inspired by various aspects of it. The simplest answer to the Senator's question is that I believe there will be a lot of change. It is a good example of the challenge that exists in regulating the space most effectively, because it is not a product in a box in the way other things might be regarded. There is an ecosystem here and different layers. Right now, we are trying to understand where the line is drawn in the value chain for the sets of obligations the Act sets out for deployers versus developers and providers. Increasingly, the way our customers use the technology is such that the lines can get blurred. We are going to have a tremendous responsibility, directly from the Act's obligations and in support of our customers, whom we are going to have to help in that regard. Most of the focus is on the risk-based approach and having a better understanding of where the high-risk boundaries are. That is where the clarity will come, along with further guidance and standards that emerge. There are various definitions or interpretations, depending on whom you speak to. Given the scrutiny in the space and our activity at every layer of the technology stack, we are at a stage at which we are going to be taking the most cautious interpretations possible. There are things that have to be taken into consideration regarding certain aspects. When we think of examples of workforce uses being classed as high risk, where do we draw the line? Is it a case of using a Word document as opposed to making hiring decisions? With regard to prohibited practices, for which enforcement is fastest and for which we have the earliest deadlines, we are focusing right now on where the deadlines are, what has been prohibited by the Act, and things we should all agree should be prohibited, such as social scoring and biometric identification. Once again, some of the lines can get blurred in practice with things that are on the market. It is a question of interpretation, and we are having dialogues with regulators to understand it better.

The EU deserves much credit for having moved first on this. Amidst the advances in the technology, there is more regulatory attention around the world. We are optimistic about much of the consensus on where the focus should be at the level of highest risk when some of the more powerful models emerge, but this is not going to be the last time AI will be regulated - far from it. There is a set of horizontal principles. We welcome the risk-based approach therein. It is a case of making sure we all have a similar understanding of where the highest risks are and our responsibilities directly regarding our customers and users of the technology. There is a lot of work at the moment.

Mr. Ryan Meade

One thing I did not mention in our opening statement is that our CEO said several years ago that he considers AI too important not to regulate. We have, therefore, been very supportive of the effort to regulate in this area. We broadly welcome where the AI Act has landed regarding the risk-based approach. As you look through the Act, you will see there is a lot still to do, including in respect of many self-regulatory and co-regulatory codes of practice, guidelines and so on. We are very much in an evolving space now, so it would be very hard to give a specific answer on this change or that change, but I can say that the AI Act will clearly guide our approach. Our approach has been principally guided by our AI principles, which we adopted in 2018. We report on progress against those principles every year, so one can see the decisions we have taken on different products, how to approach them, areas we will not pursue and so on. The current process involves refining that approach in line with the existing regulatory environment, engaging with regulators and others to determine where the risk factors lie, and coming to a position that is in line with the regulation.

The framework, which entails applying different obligations to different risk profiles, is sensible, but there may be some discussion on what involves a high risk and what does not. There are already several practices that we have not pursued and will not pursue because we believe they should not be pursued, but some practices that could end up ruled out by the AI Act may have socially beneficial uses. I am thinking in particular of the use of AI in the detection of illegal content, including child sexual abuse material. There are, therefore, some tricky areas that still need to be worked out, but we are on a very positive trajectory to align what companies have been doing based on their own principles with a more universally shared regulatory framework.

Dr. Sasha Rubel

I thank the Senator for the question. We have been very vocal in our support for risk-based approaches to AI regulation that are use-case specific and work backwards from the risk but that are also interoperable and build on initiatives like international standards, the OECD principles or the G7 code of conduct under discussion to ensure that innovators - in Ireland, for example - have a clear pathway to introduce their innovations at scale, not only in places like France and Germany but also in Canada. There is quite a bit of remaining work on secondary legislation. There is a need to engage various stakeholders to ensure not only that compliance obligations are technically feasible but also that they are interoperable so EU innovators can remain competitive in a very fast-moving environment.

There is a common misconception that companies are waiting to be regulated in order to do the right thing. All the companies here today have decided that just because something is technologically feasible does not mean it should be built. We are proactively deciding what we should build based on our responsible AI principles but we are also putting in place, even before regulation comes into effect, initiatives that adhere to various principles, such as transparency obligations. One of the key transversal obligations in the EU AI Act concerns transparency. Let me give an example in this regard. Our AI model service card, which is responsible AI documentation, provides customers with information on what acceptable use is, the kind of data used to train the models and the design choices that have been made concerning how a model has been fine-tuned or should be used. There are many initiatives we have taken as a company that pre-empt compliance obligations that will come into effect, because we know this is the right thing to do.

Mr. Ed Brophy

Let me bring it down to the national level. The AI Act was adopted on 21 May and will enter into force in June. It is therefore pretty early days, but we are very glad to see that the Irish Government, through the Department of Enterprise, Trade and Employment, has already launched a consultation on national implementation. One of the key questions concerns which body will be the national competent authority in Ireland for the AI Act. That is a big issue that needs to be discussed and debated. There are a few options. You could have a centralised model, with a single national competent authority, or a vertical, sector-specific model, whereby the key regulators in key sectors, such as financial services, consumer protection and the environment, regulate AI in their own sectors. This entails a really big question, and Ireland is one of the first member states to get ahead of it. We really welcome the debate.

Mr. Kieran McCorry

Might I add to that very briefly? I realise the Senator's question was very much directed towards the representatives of multinationals in the room, but while ambiguity about what constitutes high risk is difficult enough for big companies like Microsoft Ireland to deal with, it is similarly difficult for smaller companies. That is going to be an opportunity, but also a challenge. Some assistance or supports to allow smaller companies to determine how they can be compliant with the Act will be paramount. Let me highlight the point I made in my opening statement. We see more generative AI adopted in multinationals than in indigenous companies because of resources. The exact same thing is true of determining compliance. It was reassuring to see that, on 7 January, the Government published its interim guidelines for the responsible use of AI in the public sector, but it would be great to see something similar done for industry more broadly, now that the Act has been adopted.

I have a question on that. The risk-based assessment model is essentially an iterative one, with a to-and-fro. Eventually, standards, codes and approaches will evolve but, given the speed at which the technology is evolving, does the model have much chance of keeping pace? I worry about what might be called the democratic deliberative process. The horse has largely already bolted in terms of the undermining of the political process, and indeed of journalism, by some elements of social media driven by AI. These things are moving so fast that I worry, certainly regarding one field I know a little about. Politics has changed very rapidly in a short time, having become much more feral, much less interested in evidence about anything, much more box-ticking and more inclined to move to the most outrageous statement, and I suspect that, from the point of view of those running the platforms, AI is supporting this for good commercial reasons.

I worry about how this sort of responsibility test can really get down into the system. I just do not see it happening in my own arena. I do not know about others.

Moving to my second question, it was said that there can be no AI without the cloud. I am also on the climate committee and I see a real political tension in Ireland. There are those who say we need net-zero data centres today because 18% of Ireland's power is already being used by data centres compared with an average of 2% across Europe, that we are exposed and that the system is in difficulty. On the other hand, we have the moonshot of offshore renewable energy. I would have thought the witnesses' sector, the ICT sector, would be the great productive user of such power, should we get it. How are we going to get through these short-term pressures? The issue has become quite politicised. Many people point the finger. If they are not pointing to farmers, they are pointing to the witnesses' sector as the villains of the piece in our efforts to meet our climate targets.

I will ask my last question. Europe has no platform on the scale of the three companies represented here. The worry is that we will see innovation concentrated in the companies that can afford high computing speeds, cloud infrastructure and so on. Even the innovative spheres the companies are trying to create are obviously going to be branded spheres of innovation. They will be the Amazon, Google or Microsoft clubs of innovation. I get the impression that Europe is quite worried about that. We have to work with enterprise wherever it comes from, but I would be interested in the witnesses' comments on that issue because it is a genuine issue that will influence thinking across the European Union in the coming years as regards regulation and how public policy should be balanced. There are countries that very much believe in the need to develop national champions and that this is the route to go. We are now in a geopolitical race in which 70% of the world is controlled by autocrats, and some countries believe that Europe needs to travel in the direction of strategic autonomy. I would like to hear the witnesses' views on that.

Mr. Jeremy Rollison

The understanding the Deputy just presented is very comprehensive. These are some of the risks we hear about every day. I captured three specific questions: one on keeping pace with these risks, one on the sustainability impacts and energy concerns in this space, and one on the risk of concentration. Even that list is not exhaustive; there are more risks than that when we come back to some of the high-risk cases.

On the first point, there is no doubt that the speed of this technology's development presents challenges to policymaking. That is one of the reasons the best attempt we can make to keep pace is to look at those core principles and focus on the highest risks. Perhaps we should even simplify the way in which some of these rules are focused, asking where there has been negligence. We have a big responsibility in this space. With the power we see in this technology, we have even more responsibility to get this right now than we had in the past. This is a competitive space and we have seen mistakes made. Things have sometimes been released too quickly and we have learned about risks that we should and could have mitigated better. It is going to be a challenge to keep pace with it, but focusing on the highest sets of risks and the biggest sets of concerns remains the best approach.

That is not to say it is easy. On the sustainability challenges, we are very proud of the ambitions we have announced in the sustainability space, but we have had to recognise that it is going to be harder to reach those ambitions than we expected because of AI's current energy needs. There are reasons to be optimistic that AI can contribute to resolving some of those sustainability challenges, but we have had to be very honest that we also see some of the challenge being exacerbated by the high energy demands of the compute involved.

We hear about the risk of concentration a lot. We have tried to lean in and recognise our responsibility. For this to be successful, the keyword will have to be "partnerships". Few players, if any, occupy every layer of the stack here. To be successful with one layer, we need success at the other layers. For Microsoft to be successful in Europe, we have to make sure our partners are successful. This could refer to energy providers or application layers at the end of the stack. It is a very significant task and we have a big responsibility there. We want to remain optimistic but the Deputy captured a lot of the risks really well. It is a tough space to navigate.

Mr. Ryan Meade

I might briefly address the three important points the Deputy has made. The first point, which was on the impact on political discourse, goes a bit broader than AI but, to focus specifically on the AI aspect, we always say that we do not wait for regulation. It is important for our industry not to wait for regulation because, as the Deputy has said, the technology moves on. In many cases, the response required is not just a regulatory response, but a technological response. I will give an example. There is obviously concern around the use of synthetic media in elections, that is, deepfakes or whatever else. Google DeepMind, our AI research unit, has been developing a watermarking system for AI-generated content called SynthID. That is something we have been bringing to the market in advance of any regulation. It is really important for industry players to continue in that mode of not waiting for regulation and ensuring that, where a technological solution can help, it is brought to bear.

The Deputy's second point, which was on infrastructure, is really important. We have a pledge to run all of our offices and data centres on carbon-free energy by 2030. With regard to the data centres powering AI, we already have five data centres that run on more than 90% carbon-free energy, and we are continuing to engage in power purchase agreements. We now have one in Ireland that will bring our data centre here to more than 60% carbon-free energy next year. A lot of progress is being made in that area.

On the question of concentration, it is true that some of our companies have built a lot of infrastructure. We have data centres across the region. The important thing is that, through our Google Cloud products and services, this infrastructure is open for European players to engage with. There are already a lot of European challenger players using Google Cloud or other companies' infrastructure to innovate and to take advantage of new generative AI tools. The Deputy is absolutely right to say that it is a concern expressed by policymakers, but there is a good way forward and we can all work together to ensure that European players are successful.

Dr. Sasha Rubel

On the Deputy's question on national champions, at AWS we are convinced that there will not be one foundation model to rule them all. One of the main areas in which we co-operate with public sector counterparts is the development of large language models in local languages. We hope to develop a model in Gaelic as well. These make public services available to citizens in the language best adapted to needs on the ground. At an infrastructure level, we also recently launched our European sovereign cloud, which allows counterparts to decide where their data is stored and how, and by whom, it may be accessed with respect to encryption.

On the question of data centres and sustainability, I will flag that, for AWS, transitioning to carbon-free energy sources is one of the most impactful ways to lower carbon emissions. Migrating to the cloud actually reduces the carbon footprint of innovation. We are also working on sustainability at the level of our chips. We are developing AI and machine learning, AI/ML, chips, for example, AWS Trainium, that are much more energy efficient in training and running inference for large language models. I am conscious that there have been debates in the public sphere on this specific question, so I would just like to say that, for us, it is not a question of the environment or the economy. We see it as the environment and the economy. To give a very quick example, the Tallaght district heating scheme is the first initiative in the country to use excess heat from an AWS data centre to heat nearby public sector, residential, academic and commercial premises. We also recently announced a strategic collaboration with Bord na Móna that will see us become the anchor tenant of a new eco-energy park in the midlands. We see the possibility of economy and environment going hand in hand.

That concludes the first round of questions. We will move on to the second. I have three indicating already. The first is Deputy O'Reilly, who has seven minutes.

My first question is about the way in which the move from large language models to small language models can make generative AI more accessible for SMEs and smaller organisations with more limited resources, including charities. At a high level, large language models have created opportunities around productivity and creativity. However, availing of them requires access to significant computing power. I am aware that Microsoft has launched a product in the past number of weeks. Given that small language models are designed specifically to perform well on simpler tasks and are, in many instances, more accessible and easier to use, what role do the witnesses see them playing for SMEs and smaller organisations such as charities into the future?

Mr. Kieran McCorry

I will respond first because I mentioned this earlier. Small language models will have a dramatic role in making this technology accessible to others. The whole point of using small language models rather than large language models is that they can be instantiated on smaller devices. Instead of having to use the cloud to access ChatGPT or Azure OpenAI, for example, users can access the same kind of technology without doing so. Perhaps it is not at exactly the same scale and sophistication, but it is more focused on niche or discrete tasks on a mobile device. At the very least, over the next number of years, months or maybe even weeks at this rate of development, we expect it will be a good deal easier and simpler for individuals and businesses of all sizes to access bespoke systems and models that can run on a laptop or other mobile device. There are at least two benefits here that fit with the sustainability perspective: these models are much more efficient in their computing and energy requirements, both to train and to operate. There will be many benefits from that.

Mr. Ryan Meade

The fact that large language models are large does not mean they are inaccessible to small businesses. Google Gemini is a very large language model that is available to use now. Small businesses can get a lot of value out of it. When it comes to more bespoke applications, such as in the case of a small business that wants to train a model on its own data and so on, there are products coming to market now that will enable it to do that in a very accessible way, without having to take on board any new technology. We need to watch this space. There is quite a lot coming through in this area, as well as there being quite a lot already available right now. The diversity of the types of tools that are available will only increase in the coming year.

Dr. Sasha Rubel

To complement what my colleagues said, we see small language models as one of the key levers for sustainability in, through and on the cloud. Running small language models actually requires less compute because of the size of the training data sets, which leads to a smaller carbon footprint. We see the rise of small language models as a promising development, not only in terms of the twin digital and green transitions but also in terms of what they can mean for SMEs, which will be able to fine-tune and develop models based on their own data for specific use cases. This is especially promising in areas like health and education, where questions of sensitive data protection and privacy are particularly key. We see it as a growing trend to innovate more efficiently and in a more fine-tuned way, targeting actual impact based on the training data.

I thank the witnesses. I have a question on the Digital Economy and Society Index, DESI, which was published in 2023 and which drew on data from 2020. It shows that 22.7% of businesses in Ireland with more than ten employees are using big data, 47% are using the cloud and only 7.9% are using AI. How do we ensure the adoption of generative AI, classical AI, machine learning and automation across the economy to improve business processes and competitiveness and, ultimately, productivity? As legislators and policymakers, what can we do to assist in this regard? Even more importantly, what are the companies represented today prepared to do? I ask the witnesses to respond in reverse order from that in which they responded to my first question, if that is all right.

Dr. Sasha Rubel

This feeds back to my first comment in terms of skills. There is a dearth of understanding of what AI can and cannot do. It goes back to educating businesses, particularly small-scale businesses, and individuals to understand what AI can do for their organisations and also for them as individuals.

I will reflect very quickly on something that was said earlier. In a report we recently commissioned, which was released in February, we found that one of the demographics most interested in learning new AI skills is people close to retirement. They said they would absolutely like to learn these skills because they are conscious that this technology will completely transform the ways in which they access information. My 94-year-old father is using a generative AI chatbot to understand how to access his social security benefits. Previously, YouTube was a miracle for him. He understands how to use that generative AI chatbot. My 18-year-old daughter wants to be a prompt engineer. Without telling the committee how old I am, prompt engineering did not exist as a career when I was in college.

We really need to address the understanding of what this technology is and what it can do for people across the board, from my 18-year-old daughter to my 94-year-old father. These are the innovators who will create the solutions that have an impact on access to public services, on start-ups for social impact and on all those areas. Skills are a key aspect. In parallel, we need initiatives that help start-ups to scale up and to understand that blockers such as regulatory uncertainty and access to compute can be addressed through public-private co-operation.

Mr. Ryan Meade

One of the recommendations in the national AI strategy is mentorship between multinationals and SMEs. That is something in which we have always tried to engage. AI presents even bigger opportunities for that type of interaction. In the past, there may have been a perceived gulf between the activities of multinationals and those of SMEs. In the space we are in now, we are all learning AI. A couple of years ago, a large part of our workforce would not have been using generative AI; now they are using it. In many cases, we are in the same process of learning as SMEs are. That opens up great opportunities. In our work with the local enterprise offices and Enterprise Ireland, we are trying to make that engagement real. The website for our campaign, You're the Business, already includes training videos.

Mr. Jeremy Rollison

There are three buckets into which to put the issues in terms of how policymakers can encourage the use of generative AI. It is not only about focusing once again on the benefits, of which there are many; part of it is also education around the risks. The first bucket is really about driving awareness and making people understand what is out there and available. In that, I would include the risks versus benefits dynamic we talked about earlier. The skills literacy commitments we discussed will go a huge way and are probably linked to that first bucket.

A point I wanted to make earlier was on the question about promoting innovation. When we get questions from our customers across Europe, particularly SME owners who may be excited about aspects of this technology, what sometimes arises is the need for clarity on the rules. They want to understand. They know there are a lot of rules emerging in this space and they have heard about the risks. It is hard for them to digest all the information. They do not have large legal resources and, as these rules become more sophisticated, it is understandably hard for them to navigate the space. The key points are the skills literacy we talked about, driving awareness on both ends and then clarifying the rules, insofar as possible, so that SMEs feel comfortable engaging with the technology.

I thank the witnesses.

The Spanish AI supervisory authority was set up six months ago. Each of the organisations represented today has a presence there. I raised this issue at the previous meeting on AI. What engagement has each of the organisations had with the Spanish regulatory authority? Did they initiate contact with it? What has that engagement been like and what do they expect to come out of it into the future?

Mr. Jeremy Rollison

The authority initiated contact, and we have had dialogues since the outset. Even prior to its establishment, there was engagement on the scope of the AI Act and its requirements for member states to set up national competent bodies. Member states have a degree of discretion in how they do that: they can leverage existing bodies or set up new ones. Spain moved pretty quickly in this regard. We look at that positively because it is one group that can help to navigate the other groups and bring them together. It would be impossible to say precisely who initiated what; much of how the engagement evolved was through a conversation of which we have certainly been a part.

Most recently, the Spanish authority asked us some of the same questions I have heard here today. How can we do more to clarify some of the rules? What are the opportunities for SMEs, in particular, in Spain? How can generative AI help to navigate some of the bureaucratic hurdles?

Is there something positive we can say here in terms of generative AI filling in forms and the like? We are also trying to get a sense from the market of that high-risk discussion we talked about earlier. What is out there right now? How is it being used? Where should we categorise some of these things? It is still early days to a degree.

Is it a policy discussion or a product safety interaction? That effectively is the focus of the authority. That is what I want to understand. You develop a generative AI product and it is for them to assess the various ins and outs of it. Has that happened with regard to Microsoft?

Mr. Jeremy Rollison

To a degree. Policy and product safety come up all the time. The AI Act starts from a product safety background.

We are talking about conformity assessments and what they look like in this space. Given that we, and the authorities, are still digesting the aspects of the Act that build on that background, it is really both. I do not know where to draw the line between policy and product safety. I would almost put them in the same bucket because the same questions arise, such as how we are supposed to apply certifications here. We have questions for them too, about what our role is in that. There is a dialogue, which is encouraging. It is still early days. Spain was one of the fastest movers in that space and is paving the way. It is very much the conversation the Senator described and one that is not going to stop any time soon.

Mr. Ryan Meade

I apologise, I do not have any specific information about our interaction with that office. If I had looked back at the transcript, maybe I would have noticed the Senator was going to ask the question but I am afraid I am not prepared for it. We do obviously interact with regulators in every country in which we operate, so I would be surprised if we had not been in touch. I can follow up with information from my colleagues. We have a Google safety engineering centre in Malaga which focuses on cybersecurity. That is a very relevant area on which I am sure there has been interaction. I can follow up with the Senator.

Okay, thank you.

Dr. Sasha Rubel

Similarly, I am happy to follow up regarding our engagement at the local level in Spain. I will say that we systematically engage with regulators and competent authorities dealing with AI. That ranges from data protection authorities to national AI committees or commissions to counterparts thinking through updates to their AI strategies. Spain was also one of the first movers in thinking through what AI regulatory sandboxes would look like, and that is an area in which we engaged very proactively. We are a big supporter of regulatory sandboxes and what they represent in terms of multi-stakeholder co-operation. We are working backwards from, I believe, a mid-June deadline for nominations to the AI board, and we expect to have more clarity with regard to competent authorities as time goes on.

Mr. Ed Brophy

If I can come in very quickly in respect of Ireland, Senator Sherlock's question is very pertinent because, in the next 12 to 24 months, we will see an increasing focus on AI policy and regulation at member state level. The focus has been on Brussels with the AI Act. It is interesting that Spain has been out in front, but Ireland has actually been quite close behind. The Department is really looking at this now in terms of the designation of a national competent authority. It is important to say what that national competent authority will do. It will implement the legislation at national level, but it will also perform market surveillance activities and nominate someone to the European artificial intelligence board. It will play a key role, and it is important that the Government is thinking about that now. It will address some of the issues the Senator raised in respect of product and policy at a national level. It is super important that Ireland is doing that. Spain has taken the lead, but we will increasingly see this develop over the next 12 to 24 months within the various member states.

Thank you. Running through all the submissions and contributions today is the need for responsible AI. That requires regulation, but also the tech companies' own internal practices. We have AWS and Amazon here today, and we are all aware that Amazon has a very poor reputation in respect of many of its fulfilment centres. I am delighted to hear Mr. Brophy talking about some of the positive aspects of AI with regard to ergonomics and health and safety. They are certainly very welcome. The other side of it is whether we can rely on Amazon within Ireland to respect workers' rights. When the website was being announced last month, Darragh Kelly said that it is open to every worker in the fulfilment centres to join a union. The critical question is whether Amazon will recognise trade unions here in Ireland.

Mr. Ed Brophy

Specifically on the question, we have never prevented our employees from joining unions. They are always welcome to join a union.

That is welcome.

Mr. Ed Brophy

We are entirely compliant with any laws, regulations or policies relating to labour relations in any EU member state in which we operate, including Ireland. In terms of the future recognition of unions, I guess the question there is what happens with law and policy in Ireland around this. Ireland, as the Senator knows, is one of the member states that does not provide for that currently.

There is nothing to block Amazon from recognising trade unions. I have heard what Mr. Brophy had to say about the positive use of AI in helping Amazon workers. Surely it would send a very positive signal if Ireland were not one of those countries where Amazon has a poor record on workers' rights. A key way to do that would be to recognise trade unions, so that it is not just left to legislation to enforce workers' rights but that Amazon actually has a healthy working environment within its fulfilment centres, one that can ensure workers' rights and national laws are actually abided by.

Mr. Ed Brophy

Some members of the committee have been out to our fulfilment centre. We would love to see other members coming out, including Senator Sherlock. We are really proud of the environment and work standards that we have there. I think if she came and saw it, she would be convinced that we operate an extremely compliant and positive work environment for our employees.

Do I take that as a "no", that Amazon is not willing to recognise the trade union whose members work within the fulfilment centres in this country?

Mr. Ed Brophy

We have had no such request.

Okay, grand. When the request is made, I am sure the response will be positive.

Mr. Ed Brophy

We have had no such request so I cannot speculate on what the answer would be. As I speak now and just to be really clear, we have had no such request.

Okay, thank you.

I think Deputy Bruton said earlier that the genie is out of the bottle now. The genie in mythological times in the Arab world was an extremely powerful being that granted your wishes when you asked it for something. I get the impression that AI is bordering on that in some ways; it is extraordinarily powerful. We talked about the AI Act in Europe, which covers Europe. There is also regulation in other OECD states, and they are moving on that. However, a lot of this can be accessed through a smartphone, tablet or, indeed, a PC. The three companies in front of us were founded not so long ago by individuals, Larry Page, Jeff Bezos and Bill Gates, from a standing start, basically, in each case. What is to stop somebody in a country that does not have regulation from using AI for negative purposes, such as developing pathogens, manipulating genetics or whatever? The genie is out of the bottle and it is available everywhere. We have our Act here, but in other jurisdictions no such controls apply. That concerns me. I wonder if I could get our guests' reactions to that, please.

Mr. Jeremy Rollison

The Deputy has captured one of our biggest priorities and concerns in that space right now. This is a tool that can be misused by bad actors and, let us face it, there will likely be bad actors around the world. The good news is that, where we see the best advances in the technology at the moment, there is consensus among like-minded governments about where we want to draw the line on prohibited practices and how we want to keep technology that might currently be only in the hands of select jurisdictions from entering the hands of bad actors. It is all the more reason that international alignment on what the appropriate regulatory authorities and approaches should be is part of the story we are trying to pitch as much as we can. The EU AI Act is one element of that. International alignment on these definitions of risk, and on how we protect this technology from falling into the wrong hands, is one of the biggest priorities we have been advocating, whether in the OECD context or the Hiroshima process. The good news is that we are seeing many leaders around the world recognising exactly that threat. The bad news is that the threat exists. There are a lot of things we can do, and it is a good conversation to continue to have, especially as we see some of this technology advance. I share the Deputy's concern and the company shares that concern. It is stimulating the types of conversations around the globe that, increasingly, politicians are looking at, even from the standpoint of security implications. I can only say that it is recognised. We have a lot more work to do. It is a very real concern.

You do not need a government or a state. An individual can access this, in a back garage possibly, and do all kinds of things. I will move on because my time is limited. The issue of super-intelligence, moving beyond human intelligence and even consciousness, has been raised on some occasions because this technology is moving so quickly. I know some people disagree about that. The sentient level has been mentioned by some. I know we are moving into the realm of science fiction. If someone had told me two years ago that I would meet someone on a doorstep and use Google Translate to talk to him in his language while he talked to me in mine, or five years ago that I could look at and talk to my son in Australia or Vancouver, I would not have believed it, but now it is happening. This issue of super-intelligence is moving so quickly. It is a concern to some. It is in the literature as well. The witnesses might comment on what way they see that going.

Mr. Ryan Meade

This is relevant to the Deputy's previous question too. There is a body called the Frontier Model Forum, a collaborative process in which a number of our companies are involved and through which we engage with governments to address the safety risks that arise as frontier models develop, become more capable and potentially create new use cases that were not anticipated or could potentially be unsafe. It is a live conversation. It is quite collaborative at the moment, which is good. Different industry players are coming forward with their own principles, frameworks and so on, and engaging in that conversation. The first point the Deputy made was about whether regulation in a particular region is sufficient. He is absolutely right that global co-operation on these topics will be important. There are a couple of competing visions for that. Some people see it as a geopolitical competition. Others see it as a more collaborative approach. We would encourage collaboration as much as possible because that is ultimately what is required.

Dr. Sasha Rubel

Building on Mr. Meade's emphasis on the Frontier Model Forum, which we are also happy to be part of, we are convinced that AI safety challenges are global challenges and need global, multilateral and multi-stakeholder solutions. That is why we support, for example, what happened at the Bletchley Park summit in the UK. Most recently, last week, there was a summit in Seoul. In 2025, there will be a summit in France. We see the need not only to have like-minded countries, industry counterparts and civil society around the table, but also to make sure these conversations are global. That is why initiatives such as the UN advisory body on AI and the conversations happening there are very important, alongside initiatives like the global partnership on AI, which bring countries beyond the OECD into that conversation. It is also why, at the Bletchley Park summit, our CEO mentioned the possibility of a multilateral agreement with regard to notifications of the training of models. This is one avenue to make sure that not only big industry players but also individuals have the necessary frameworks to mitigate the risks, particularly regarding national security, which is front of mind for many countries, and the deployment of these foundation models.

Returning to what I said at the beginning about using AI to mitigate the risks that are present, it is possible to harness AI to make sure that models will not and cannot do certain things, and we are already doing this. That is an area of technology we are exploring and researching extensively.

Mr. Kieran McCorry

Microsoft too is a member of the Frontier Model Forum. Our president, Brad Smith, has called for a safety brake on AI technology, especially as it approaches what the Deputy called super-intelligence, also known as artificial general intelligence. I think we are some time away from that level of capability. Indeed, I hope so. I do not think it is an immediate concern, but we are all alert to it.

I have a note here that the new global arms race appears to involve generative AI. The witnesses already highlighted some of the concerns. We have infrastructural concerns in Ireland, particularly given that we are exposed with transatlantic cables and so on. Will AI prevent hacking in the future or will it ensure it happens far more frequently? There is the issue of how AI might be used to protect data.

All three witnesses referred to the democratisation of generative AI. I would say to them that while that might be an aspiration, I do not see it happening. Having the witnesses here is emblematic of where the technology sector is. The big players will dominate. I have a phone with probably 150 apps on it. I use about eight of them and that is enough. Therefore, all my time is spent on those platforms.

Amazon mentioned STS Group, a company in Waterford. I know well that companies use Amazon Web Services. That goes back to my point that companies which build their businesses in these frameworks will basically be tied into them in the future. It goes back to this idea of the democratisation of generative AI. Those are more comments than points. Would the witnesses like to address some of them?

Dr. Sasha Rubel

We see AI and cybersecurity as one of the major conversations happening in policy spaces, and we see many promising uses of AI to identify and stop cyberattacks. One of the most common use cases of generative AI is combating cyberattacks; another is combating fraud in financial services. These are two of the earliest areas of adoption of generative AI that we have witnessed across industry verticals. I am happy to share a couple of use cases. For example, NASDAQ is using our services to identify and mitigate the risk of market manipulation. We see that as a promising use case.

On the protection of data and AI, privacy-enhancing technology is another area in which we have invested a great deal. One of the biggest concerns we hear from customers is about what is happening to their data: where it is stored, whether it is secure, and whether it is being used to train and improve these different models. This is at the top of many customers' minds, in addition to responsibility. We worked, for example, with the University of Padua to develop a privacy-enhancing technology that allows it to use generative AI for personalised learning. We see this as the future of the next iterations of generative AI and foundation models. It embeds privacy by design in the ways in which these models are developed and deployed.

The last question was about democratising AI. This has been in our DNA since the establishment of Amazon Web Services. We are convinced that, through this approach of customer choice, confidentiality and control over costs, more and more small and medium enterprises will have access to this technology. On the idea of being tied in, I am happy to revert to the Deputy with more information, particularly in light of the European Data Act, about how we ensure the interoperability of our services so that customers preserve that choice.

Mr. Ryan Meade

I will address the point about the use of AI to prevent hacking or cybersecurity issues. AI is central to our efforts in our trust and safety operations to ensure the prevention not just of fraud but also of the spread of harmful content online. I talked in previous committee sessions about how we train machine learning models to flag potential threats, whether harmful content, illegal content, fraud or spam. Year on year, they are becoming more effective at that. The Deputy can be sure that AI is being brought to bear to prevent any risks that AI may present which have been talked about today. It is important to allow that to develop.

The Deputy made a good point about the democratisation of generative AI. We have all mentioned statistics which already show that larger organisations are moving ahead with generative AI at a faster rate and to a greater extent than smaller businesses. I do not think that is the end of the story. The proliferation of new large language models and opportunities to engage with them has really taken off in the last couple of years.

There is a great opportunity right now for players of all sizes to get involved. It is probably fair to say that some of the most exciting applications of generative artificial intelligence, GenAI, which we will be using in the next couple of years, are already being worked on by smaller developers, leveraging large language models brought to market by Google, Microsoft and whoever else.

Mr. Jeremy Rollison

On Deputy Shanahan's first question about the infrastructure and the defence thereof, we sometimes talk about the expanding attack vector: there is more to protect than there was before and more misuse to mitigate, and we have several responsibilities there.

One is to ensure that our own technologies and tools are designed in a way that mitigates as much of that as possible and makes it hard, if not impossible, for anyone to misuse them. Insofar as bad actors are using AI for nefarious purposes, we do not want them using our tools for that. That is one. We then have an obligation to ensure that our customers are better at using this and are bound not only by those technological provisions but by what we can do contractually or legally. Then, at Microsoft at least, we have a tremendous responsibility regarding where we are using AI to help mitigate the misuse of AI. There are good examples of where AI is making it faster to identify patterns in such attacks and the like. It is sometimes a bit of a cat-and-mouse game; there is no doubt about it.

On the democratising effect, looked at broadly, this was one of the ways in which I wanted to answer the earlier question about accessibility and persons with disabilities. There is a democratising aspect to natural language possibilities, which are changing entry into certain job categories compared with before. This is happening fast. When one thinks about a year or two ago, we were talking about coding, coding, coding. Look at how quickly that has changed with the natural language possibilities around coding. We will be optimistic there, but I want to be crystal clear that there is a great deal of work to do because we are getting many questions from customers, citizens and consumers. There are a few areas of optimism and we will keep leaning in, but I can appreciate some of the more sceptical ways of looking at it.

I thank Mr. Rollison. I call Deputy Bruton who has seven minutes.

Briefly, everybody recognises the great importance of the responsibility with which companies like Mr. Rollison's conduct their business. I would be interested to know how the company manages that internally. At Government level, we have got used to things like whistleblowing legislation, risk-proofing of every new Act for certain features, and board and Cabinet-level reports, or whatever. To what extent is that sort of infrastructure developed within our guests' companies, so that this is not a kind of add-on to what is going on generally in the commercial world, where one has to tick boxes, but is genuinely embedded in the work everyone does? It is also about the feeling that one can put one's hand up if things are not as they should be.

My second question is about the need to reskill. I would be interested to know how artificial intelligence will change how we educate. In Ireland, we have described the leaving certificate as the Holy Grail. Now, people say it is not fit for purpose, and some commentator, I believe, described this sort of memory-based examination as being for second-class robots. How is this impacting on the education of the next generation? I would be interested to hear our guests' views on that.

Mr. Jeremy Rollison

I will take a stab at the Deputy's first question. Very concretely, we have changed the way we are organised internally to recognise this. I will mention three things. One goes back to the responsible AI standard we built. It is now in version 2, and we will be updating it shortly as new rules come into place. It is a set of guidance, templates and instructions for engineering teams across the company, regardless of which project they are working on, setting out the questions to ask and the checks to build in at each stage. We are constantly updating it.

It is published in our RIA, as we would call it, which is the regulatory impact assessment-----

Mr. Jeremy Rollison

Exactly.

It shows that. That moves up the chain.

Mr. Jeremy Rollison

Yes, 100%. It includes exactly those impact assessment templates and the obligations for any engineering team building in that space.

Another goes back to the establishment of our responsible AI committee in Microsoft. It was established precisely for the escalations that can arise when questions are asked about a customer's request for us to design this or build that. Escalations can go all the way to the senior leadership level, where we can say, "No, we are not going to do that. That is not one we are comfortable with, and here is how we are going to mitigate it."

Finally, and it has come up in the context of regulatory obligations, amidst that constellation we have taken a red-teaming approach, whereby we have colleagues, even third-party colleagues, dedicated to essentially attacking our systems to find vulnerabilities. Again, it is never perfect, but there is more to it than just the three-pronged approach I have mentioned.

On the education side, I defer to some of the remarks we made earlier. We hear a great deal from educators about the younger generation, who are sometimes more native to these applications, but that does not replace any of the need for the old-school way of learning, for lack of a better way of describing it. Those who will be best at using this technology are those who can harness it, but if one is just relying on the AI to spit out an output, without checking its veracity or being aware of the inaccuracies that will come from it, I do not think one will be successful at all. Insofar as curricula need to be updated, it is more about how these tools can be used, while not encouraging the idea that they are the only way one can do things. This comes back to the earlier set of questions on education and what policymakers can do. There are some updates to curricula that are part of that, but I do not think it in any way replaces the same sets of foundational skills we have talked about for decades.

Mr. Ryan Meade

Talking about our responsibilities, as I said earlier, this all starts with our AI principles, which we set out in 2018. Operationalising those involves a whole system of internal processes, including our responsibility and safety council, which ultimately adjudicates on a number of the tricky areas where risks are identified.

That is an internal process but, importantly, we also think about transparency. That is why we report periodically on the AI principles, so that we can show very concretely what positions we have taken against those principles and whether we have decided to pursue a particular technology or product.

On Deputy Bruton's point about education, I am not an expert in this area, but we broadly see that generative AI itself will probably have a role in the classroom going forward. At the moment, there has been some discussion as to whether it will enable cheating under the current model of producing answers to questions, and so on. I expect that will evolve over time, with students using the technology creatively in class, but educators will have to figure out what that looks like to ensure there is fairness and continuity in assessment, and so on.

More broadly, while the technology will change things, some of the fundamental skills we hope to teach our young people will definitely still apply, particularly critical thinking and the ability to evaluate information, to understand where it is coming from, and so on.

It is also about creativity and innovation, because we see that generative AI can be a tool to assist creativity and, if we get to a good outcome in this area, one will see students using generative AI to spark creativity that leads to innovation.

Dr. Sasha Rubel

On our approach to responsible AI, we have a four-pronged approach, noting that we also have a responsible and acceptable use policy. Our understanding is that there needs to be a life-cycle approach, because responsibility should be everybody's responsibility. The first prong is dedicated to transforming responsible AI from theory to practice. If one talks about our principles, such as transparency, veracity and robustness, there is general consensus that these are important, but what do they mean to an engineer? We have placed a great deal of focus on translating these principles into operational guidelines for engineers, not only inside our company but also for our customers, so that they can translate them into practice in the design, development and deployment of their solutions.

The second prong is integrating responsibility into the entirety of the life-cycle, as I mentioned. The third is nurturing education and diverse teams. We see diversity as a key lever of responsibility, so we place a real emphasis on ensuring that teams are diverse, not only in background but also in training, so that we get the best AI solutions possible. We are convinced that the AI solutions being developed, and those developing them, need to be as diverse as the communities and end-users the solutions are intended to help.

Fourth, we are very much focused on advancing the science behind responsible AI. Generative AI raises many new questions around intellectual property and copyright, fidelity and hallucinations, and data protection and privacy. One of our commitments is to work with academics, civil society and policymakers to think through these questions.

Lastly, and very quickly, on the Deputy's question on education, we see generative AI as completely transforming the education sector at multiple levels, the first of which is improving student outcomes.

In a lot of countries, teachers are completely overwhelmed and overworked. We see generative AI as an opportunity to help teachers ensure they are delivering the best possible student outcomes. It can also help to increase the reach of online learning. I was home-schooled when I was young. If I had had generative AI, my learning outcomes, path and trajectory would have been extremely different, because it would have allowed me to access the information I needed in real time in order to progress.

Generative AI is also accelerating research and discovery in research institutions. We see amazing breakthroughs whereby generative AI can digest, in six minutes, an enormous amount of data that would previously have taken six months to garner insights and wisdom from. We really see this technology transforming the scale of research, including in areas like clinical trials and medicine.

We see this happening across the board, from primary to higher education. Part of our commitment to responsible AI is the necessity of having a human in the loop. We see AI complementing teachers rather than replacing them, in keeping with our commitment to human oversight and responsibility.

That concludes the second round. There is one person indicating in the third round. Deputy O'Reilly has seven minutes.

I thank the Leas-Chathaoirleach. I am not sure I will need all seven minutes. To follow up on a point made previously, will our guests from Amazon confirm that the company has not received a request to recognise a trade union? Is that right?

Mr. Ed Brophy

I am not aware of such a request.

That is very interesting because I was under the very clear impression it had. That is okay. Is Mr. Brophy aware that there are trade union members working in Amazon's warehouse or fulfilment centre?

Mr. Ed Brophy

I believe there are, yes.

There are but Amazon does not recognise that union.

Mr. Ed Brophy

I am not aware that we have received any formal request for recognition from the union in question, the Communications Workers' Union.

If I am incorrect on that, I will investigate and come back to the Deputy directly. As she will know, there is a long-standing invitation for her and her Sinn Féin colleagues to come out to our fulfilment centre in Baldonnel to observe what we have going on.

I believe the Chairperson has already visited, so at least one of my colleagues has been there. I want to ask about the reference that was made to productivity improvements and automation. Although he is not here now, Mr. McCorry also made reference to protecting workers from the potential negative use of AI. What is each company doing, or prepared to do, to make sure that workers are protected and that these technologies improve the world of work for the benefit of both workers and businesses? I ask because, while technology has many benefits and it is absolutely fantastic that you can talk to your colleagues at 11.30 p.m. if you need to, it is also incredibly intrusive. Technology has facilitated a lot of communication, but it has also facilitated an always-on, 24-7 culture, which is clearly bad for our health and which, I would argue, is also not good for business. In the absence of a legal right to disconnect, what are the companies' views on what can be done to protect people? Some workers will now have their work allocated via AI and algorithms. People will potentially be hired and fired without human involvement. That is concerning to me. I do not believe it is something that cannot be mitigated, but it is nonetheless a concern. Through the adoption of generative AI, we can drive competitiveness and productivity, but how can we do that while protecting workers and their rights and ensuring that technology makes the worker's job more enjoyable and productive?

Mr. Jeremy Rollison

I am happy to try to answer that. A lot of this, especially regarding AI and its use in the workplace, is recognised in some of the new regulation that has emerged; it has been captured by a lot of policymakers who are looking at exactly that concern. We learned a lot from our own experience during the pandemic, when so many were connected. We launched some features and products that drew negative reactions, and we changed them. On one of the examples the Deputy mentioned, I can attest to the fact that I certainly do not enjoy being bothered at 11.30 p.m. by colleagues in the US who are nine hours behind. It is a very small thing but it is an example of where technology can go. We have built in ways for employees using our tools to set the hours during which they may be contacted. It can even be done the other way around. Some may work better at 11.30 p.m., but it is not nice to send those types of messages to others at that time, so you can set messages to be sent only during the recipient's working hours. There are small feature changes and technological improvements we can make to mitigate some of that issue.

On the more serious side of AI and its use in the workforce, this goes back to the high-risk definitions we talked about before. There are going to be much heavier obligations regarding the use of AI in that context. I emphasise that we hope it will be humans using AI to inform a decision rather than decisions being entirely farmed out to AI. I believe prohibitions will increasingly emerge in that space. Those are important policy decisions, and some of that has been captured by the AI Act. It is a concern. We are engaged with works councils across Europe on this issue. I have even been part of discussions in which Microsoft's own employees have been talking about practices and codes for the internal use of our technologies. It is an ongoing conversation and we are having it with labour unions and works councils across the world, both internally and externally.
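The quiet-hours contact windows Mr. Rollison describes can be illustrated with a minimal sketch. Everything below is an assumption for illustration only, not Microsoft's actual implementation: a message composed outside the recipient's working hours is simply held until the next working window opens.

    from datetime import datetime, time, timedelta

    def next_delivery_time(now, work_start=time(9, 0), work_end=time(17, 30)):
        """Hypothetical quiet-hours rule: return when a message composed at
        `now` should be delivered so that it lands within working hours.
        Weekends and per-recipient time zones are ignored for simplicity."""
        if work_start <= now.time() < work_end:
            return now  # inside working hours: deliver immediately
        if now.time() < work_start:
            # before hours: hold until this morning's window opens
            return datetime.combine(now.date(), work_start)
        # after hours: hold until tomorrow's window opens
        return datetime.combine(now.date() + timedelta(days=1), work_start)

    # Example: a message composed at 11.30 p.m. is held until 9 a.m. next day.
    held = next_delivery_time(datetime(2024, 5, 29, 23, 30))

A real system would layer recipient preferences, calendars and time zones on top of a rule like this; the point of the sketch is only that a small scheduling check is enough to stop late-night messages arriving late at night.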

Mr. Ciarán Conlon

I might come in very quickly. This is something the National Competitiveness and Productivity Council has been looking at. In its most recent competitiveness outlook report, published in 2023, it cited the OECD's employment outlook report from the same year, and we have cited some of that report's findings. The OECD report states that social dialogue can help achieve a fair distribution of the productivity gains from AI adoption, and that consultation with workers leads to better outcomes in respect of AI, performance and working conditions. That goes right to the heart of what we are doing here today and what Microsoft is advocating for. It is a constant challenge and a constant dynamic. It is something the NCPC is very interested in and it needs to become embedded in this conversation.

Mr. Ryan Meade

The Deputy is quite right to say technology can be intrusive. We have all experienced that in our roles, and we have brought some technological solutions to bear on it. Again, during the pandemic, people's patterns of work shifted. We now have features like scheduled send in Gmail, so you do not have to wait for the other person to wake up before sending an email; you can send it and it will only be received during his or her working hours. These are little technological solutions that are helpful, but they do not get around the overall issue, which is that leadership is required. Embedding those practices in an organisation comes down to the business leadership empowering people to use them and to speak up when their working hours are not respected. It is about a combination of helpful technological tools and, more importantly, leadership.
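Mr. Meade's scheduled-send example is the sender-side view of the same idea, with the added wrinkle of time zones. The sketch below is again purely illustrative; the helper name, the fixed 9 a.m. start and the Dublin default are all assumptions, not Gmail's actual API.

    from datetime import datetime, time, timedelta
    from zoneinfo import ZoneInfo  # Python 3.9+

    def scheduled_send_time(composed_at_utc, recipient_tz="Europe/Dublin",
                            work_start=time(9, 0)):
        """Hypothetical helper: deliver at `work_start` local time on the
        day after composition, in the recipient's time zone."""
        local = composed_at_utc.astimezone(ZoneInfo(recipient_tz))
        next_day = local.date() + timedelta(days=1)
        return datetime.combine(next_day, work_start,
                                tzinfo=ZoneInfo(recipient_tz))

    # A mail composed at 6.30 p.m. UTC on 29 May is scheduled for
    # 9 a.m. Dublin time the following morning.
    when = scheduled_send_time(datetime(2024, 5, 29, 18, 30,
                                        tzinfo=ZoneInfo("UTC")))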

Dr. Sasha Rubel

I will complement what my colleagues have said and highlight a use case that is representative of where we anticipate the world going with regard to the uptake of generative AI and what it represents for the future of work. In February 2024, the National Bureau of Economic Research in the United States published a paper highlighting that, in the early 1900s, telephone operation was among the most common jobs for American women and telephone operators were ubiquitous. Between 1920 and 1940, AT&T undertook one of the largest automation investments in modern history, replacing operators with mechanical switching technology in over half of the US telephone network. Although this eliminated most of those jobs, which were held by women, it did not reduce future cohorts' overall employment; the decline in operators was counteracted by employment growth in other areas. We see that pattern throughout the history of technological innovation: jobs disappear and new jobs are created. As one of my colleagues mentioned earlier, a lot of the jobs that will exist in 2030 do not exist yet.

In general, we see a lot of enthusiasm about AI. Earlier this year, we published a report in co-operation with Access Partnership which highlighted that 86% of employers expect to use AI-related tools by 2028 and the same proportion anticipate that their organisations will be driven by AI. In parallel, 80% of employees plan to use generative AI tools within the next five years and are excited about what that means for reducing repetitive tasks. Most notably, the report anticipates that non-tech talent will use AI to complete up to 30% of their daily work, increasing productivity. Employers also report being willing to pay more to employees with digital skills.

Again, going back to my opening statement, we see digital skills as one of the three key pillars for investment, alongside regulatory certainty and support for the start-up ecosystem.

That concludes our consideration of this matter today. I thank the witnesses for assisting the committee in its consideration of this very important matter. I now propose that the committee goes into private session to consider other business. Is that agreed? Agreed.

The joint committee went into private session at 11.50 a.m. and adjourned at 12 noon sine die.