I thank the Cathaoirleach for the opportunity to contribute to the committee's deliberations on the topic of online harassment, harmful communications, and related offences. I work with Google in Ireland as government affairs and public policy manager and am based in our EU headquarters here in Dublin. Google supports all efforts by legislators and governments to engage with stakeholders in considering appropriate protections, remedies, and forms of redress for individuals who are the victims of online harm. A range of governments, technology platforms and civil society groups are currently focused on how best to deal with illegal and problematic online content.
There is broad agreement on enabling people to create, communicate and find information online while preventing the misuse of content-sharing platforms such as social networks and video-sharing sites. We recognise that there can be a troubling side to open platforms and that bad actors have exploited this openness. We take the safety of our users very seriously and are committed to ensuring that inappropriate content appearing on our platforms is dealt with as quickly as possible.
Now 21 years old, Google has grown from a small start-up into a global company with legal obligations in each of the countries in which it operates. We work hard to protect our platforms from abuse and have been working on this challenge for years, using both automated tools and human reviewers to identify and stop a range of online abuse, from get-rich-quick schemes to disinformation to the utterly abhorrent, including child sexual abuse material. We respond promptly to valid notices of specific illegal content, and we prohibit other types of content on our various services. A mix of people and technology helps us to identify inappropriate content and enforce our policies, and we continue to develop and invest in smart technology to detect problematic content hosted on our platforms.
As well as making significant investments in technology and human resources, we have engaged with policymakers in Ireland and around the world on the question of appropriate oversight for online content-sharing platforms. Google is supportive of carefully crafted and appropriately tailored regulation that addresses the challenges of problematic content online. We are keen to work constructively with legislators to build on the existing legal framework and to strengthen trust and confidence in the systems and procedures that ensure online safety.
Having considered the committee's issues paper, our comments today are directed towards those aspects that concern the role of Internet service providers in preventing online harassment and certain harmful communications. We have submitted a longer written statement which outlines all of these points in greater detail. In that statement, we have also provided some comments on the approaches taken to this issue in other jurisdictions where Google operates and which were mentioned in the committee's issues paper.
In the statement, we suggest a number of central principles that should be considered for approaching oversight of content-sharing platforms and problematic content online. These include clarity, suitability, transparency, flexibility, overall quality and co-operation. I have set them out in more detail in the longer statement and can refer to them later if the committee wishes.
In framing any measures in this area, it is important for legislators to have regard to and build on the existing legal framework. We operate in an environment where extensive regulation of online content and actions already exists and is being enforced. Many laws, covering everything from consumer protection to defamation to privacy, already govern online content. From consumer rights legislation to the new EU Audiovisual Media Services Directive, online behaviours come under the scope of a diverse and evolving set of legislation, multi-stakeholder initiatives and regulators.
As regards the regulation of Internet service providers as a means of combating online abuse, the distinctive role played by such providers is reflected in the EU legislation that underpins the regulation of electronic commerce in Europe. The eCommerce directive provides strong incentives for service providers to establish and operate efficient notice and take-down procedures. A service provider that does not operate such procedures is exposed to potential legal liability for unlawful content hosted on its platform. Many service providers, including Google, have developed extensive infrastructures which provide efficient tools for the reporting and removal of illegal content.
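By way of illustration only, a notice and take-down procedure of the kind described above might be sketched as follows. Every class name, field and status in this sketch is hypothetical and is not drawn from Google's actual reporting infrastructure; it simply shows the general shape of such a workflow, in which a notice is received, validated, reviewed and then either actioned or rejected.

    # Illustrative sketch only: a simplified notice and take-down workflow.
    # All names, fields and statuses are hypothetical.
    from dataclasses import dataclass, field
    from datetime import datetime
    from enum import Enum

    class NoticeStatus(Enum):
        RECEIVED = "received"
        UNDER_REVIEW = "under_review"
        CONTENT_REMOVED = "content_removed"
        NOTICE_REJECTED = "notice_rejected"  # e.g. invalid or incomplete notice

    @dataclass
    class TakedownNotice:
        content_url: str
        reporter: str
        legal_basis: str  # e.g. "defamation", "copyright"
        received_at: datetime = field(default_factory=datetime.utcnow)
        status: NoticeStatus = NoticeStatus.RECEIVED

    def process_notice(notice: TakedownNotice,
                       is_valid: bool,
                       is_unlawful: bool) -> TakedownNotice:
        """Route a notice through validation and review.

        Only a valid notice identifying specific unlawful content leads
        to removal; anything else is rejected but kept on record.
        """
        notice.status = NoticeStatus.UNDER_REVIEW
        if is_valid and is_unlawful:
            notice.status = NoticeStatus.CONTENT_REMOVED
        else:
            notice.status = NoticeStatus.NOTICE_REJECTED
        return notice

In practice, the validity and unlawfulness checks are where the substantive legal assessment takes place; the sketch only shows the routing around that assessment.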
The eCommerce directive has the advantage of setting out different requirements for different types of Internet intermediaries, rather than being aimed at a particular business activity. It has led to the growth of a wide variety of services and business models, and is flexible enough to cover the multiplicity of activities and content types online. For example, an online news site can contain content authored by the news organisation, material licensed from third parties and user-generated comments. The news site is directly responsible for the editorial content it publishes but has different legal responsibilities in respect of the user comments it hosts as an intermediary. This online intermediary liability regime has fostered the huge economic and cultural benefits of the Internet while ensuring that platforms take appropriate and speedy action to remove unlawful content online.
In addition to legal regulations, Google has over the years developed extensive community guidelines and content policies that set out clear rules on what is not allowed on its platforms. These often go above and beyond the law, and we employ thousands of staff around the world, working 24 hours a day, to ensure violations are acted upon. Companies have also worked together to address these challenges, for example through the Global Internet Forum to Counter Terrorism, a coalition in which companies share information to curb terrorist content online.
We continue to improve our processes and technology for enforcing these rules. We continually review and update our policies in response to new trends and invest in machine learning, ML, technology to scale the efforts of our human moderators.
ML is helping us detect potentially violative content and surface it for human review. For example, YouTube removed 9 million videos during the second quarter of 2019, 7.9 million of which were first flagged through our automated flagging system.
Of those videos, 81.5% had no views at the time they were taken down.
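To give a general sense of how automated flagging and human review fit together, the following sketch shows one possible triage step. The scoring function, threshold and queue here are entirely hypothetical and do not describe YouTube's actual enforcement pipeline; the point is simply that content flagged by a classifier is routed to human reviewers rather than being actioned blindly.

    # Illustrative sketch only: automated flagging feeding a human
    # review queue. The scorer, threshold and queue are hypothetical.
    from collections import deque
    from typing import Callable

    REVIEW_THRESHOLD = 0.8  # hypothetical score above which content is flagged

    def triage(videos: list,
               score_fn: Callable[[str], float],
               review_queue: deque) -> None:
        """Send potentially violative videos to human reviewers.

        score_fn stands in for a trained classifier returning the
        probability that a video violates policy; flagged items are
        queued for a human decision.
        """
        for video_id in videos:
            if score_fn(video_id) >= REVIEW_THRESHOLD:
                review_queue.append(video_id)

    # Usage: a dummy scorer flags two of three videos for review.
    queue = deque()
    triage(["v1", "v2", "v3"], lambda v: 0.9 if v != "v2" else 0.1, queue)
    assert list(queue) == ["v1", "v3"]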
We thank the committee for providing us with an opportunity to contribute to its deliberations on the topic of online harassment, harmful communications and related offences. Addressing problematic content is a shared responsibility across society, in which companies, Governments, civil society, and users all have a role to play, and it is appropriate that this committee is hearing from a variety of voices on this topic. We hope the committee will give our suggestions for approaching oversight of content-sharing platforms due consideration and look forward to further discussion.