I thank the committee for the invitation to participate in today's session. I am the director of public policy for Twitter in Europe. I am joined by my colleague, Mr. Ronan Costello, public policy manager for Twitter in Europe. We are pleased to be here with the committee today for this session, which aims to focus on how industry can be part of the solution. Twitter is committed to improving the collective health, openness and civility of the conversation on our platform. Our success is built and measured by how we help to encourage healthier debate, conversations and critical thinking. Conversely, abuse, malicious automation and manipulation detract from it.
I will use this opportunity to briefly walk through three specific areas where Twitter has been doing critical work to prioritise online safety and election integrity. These include our investments in proactive technology to better enforce the Twitter rules, our policies on political advertising and synthetic or manipulated media, and our focus on state-backed information operations. I will also share some insights into the structural and operational changes Twitter has made since 2017 to protect conversations on the platform during elections while building partnerships that promote a better understanding of our online environment.
It would be instructive at this point to re-emphasise the public commitment made by our CEO, Jack Dorsey, in May 2018 to prioritise the health of public conversation on Twitter above all else. He recognised that the platform had come to be used in ways that were harmful and unforeseen, and he said Twitter would hold itself accountable for progress. Since then, we have leveraged a combination of policy, people and technology to yield positive results. It is our view that people who do not feel safe on Twitter should not bear the burden of reporting abuse to us, so we have significantly ramped up investment in proactive technology and tools to better tackle issues such as abuse, spam and automation, which detract from people having healthy experiences on our service.
More than 50% of the tweets we remove for abuse are now surfaced proactively by technology for human review, rather than relying on user reports. This is an increase from 20% last year. While we will strive to improve this further, it is a significant enforcement milestone and a positive indicator that our investment in technology is helping us to tackle abusive behaviour at scale. Figures released just last week in the latest edition of our biannual transparency report further outline the trends and progress we saw in the first half of this year. We increased our rate of action on violating content by 105%. We took action on 133% more accounts for violations of our hateful conduct policy. We took action on 68% more accounts for violations of our policies on abuse. Taken as a whole, the progress I have summarised reflects Twitter's mission and commitment to enhance the health of the public conversation on our service.
The scale, speed and targeting effects of online political advertising have been widely discussed lately. Last Wednesday, our CEO announced that Twitter had made a decision to stop all political advertising. This policy is global, includes all candidate and issue advertisements and will come into effect in the very near future. We continue to update our rules and policies in response to evolving threats and technological challenges.
We share the public concern regarding the use of disinformation campaigns that rely upon the use of manipulated and synthetic media, commonly referred to as deepfakes.
On Monday, 21 October, we publicly announced that we have been working on a policy to comprehensively address synthetic and manipulated media on Twitter. In the coming weeks, we plan to open a public feedback period to get input on this from the public. We want to listen to and consider a variety of viewpoints in our policy development process, and we want to be transparent about our approach and values.
We appreciate that some of the threats on our platform can be urgent, and our expertise and analyses can be bolstered by partnerships with external researchers, journalists and academics. One area where we have unlocked these valuable partnerships to help provide more transparency on our platform is in the area of state-backed information operations. For more than a year, we have been publicly disclosing comprehensive datasets of tweets and related media information we identify on the platform that we have attributed to malicious state actors. We launched this initiative to empower academic and public understanding of these co-ordinated campaigns around the world and to enable third-party expert analysis of these threats and tactics. Using our archive, these researchers have conducted their own investigations and publicly shared their insights and independent analyses.
Since January 2017, we have launched numerous election-related product and policy changes, expanded our enforcement operations and strengthened our team structure. We further expanded our enforcement capabilities for global elections by creating a dedicated reporting feature that allows users to report content that undermines the process of registering to vote or engaging in the electoral process. This reporting feature was first used this year for the Indian and European Parliament elections.
The challenges we face as an online society are complex, constantly evolving and often without precedent. Industry, and Twitter specifically, cannot address these issues alone. Nor is our industry monolithic in its approach to these issues. Each of us has different services, varying business models, and often complementary but distinct principles. This should be recognised as we continue our engagement. Every stakeholder in this conversation has a role to play. We propose a whole-of-society approach to improving the health of online conversation and citizenship. We all need and deserve a thoughtful approach and a long-term perspective in this discussion, and Twitter very much welcomes the opportunity to participate.