This year, significant elections are scheduled to take place around the world. From June 6-9, 2024, voters across the 27 Member States of the European Union will go to the polls to elect Members of the European Parliament (MEPs). We are committed to supporting this democratic process by surfacing reliable information for voters, safeguarding our platforms from abuse, and equipping campaigns with strong security tools and training. Throughout these efforts, we will place a particular emphasis on the role of artificial intelligence (AI): both addressing AI-generated misinformation and using AI models to strengthen our defenses against abuse.
Supplying Voters with High-Quality Information
Ahead of the elections, people need useful, relevant, and timely information to help them navigate the electoral process. Here are some of the ways we make that information easy to find:
- Voting details on Google Search: In the months ahead, users searching for topics like “how to vote” will find information about voting procedures, such as ID requirements, registration, voting deadlines, and guidance for different methods of voting, including in-person and mail-in. We are collaborating with the European Parliament to aggregate this information from the Electoral Commissions and authorities of the 27 EU Member States.
- Authoritative information on YouTube: For election-related topics, our systems prominently surface content from authoritative sources on the YouTube homepage, in search results, and in the “Up Next” panel. YouTube also displays information panels above search results and below videos to provide additional context from credible sources; for instance, viewers may see election information panels about candidates, parties, or how to vote.
- Transparency on Election Ads: Advertisers intending to run election ads within the EU on our platforms are required to undergo a verification process and include in-ad disclosures that clearly indicate the source of the ad. These ads are published in our Political Ads Transparency Report, allowing public access to details such as expenditure and placement. We also impose restrictions on how advertisers can target election ads.
Safeguarding our Platforms and Combating Misinformation
To enhance the security of our products and prevent misuse, we are continuously improving our enforcement systems and investing in Trust & Safety operations. This includes our Google Safety Engineering Center (GSEC) for Content Responsibility in Dublin, dedicated to online safety in Europe and globally. We also partner with the broader ecosystem to counter misinformation.
- Enforcing Policies and Leveraging AI Models: Our well-established policies inform our approach to areas such as manipulated media, hate speech, harassment, and incitement to violence — as well as policies concerning demonstrably false claims that could undermine democratic processes, outlined in YouTube’s Community Guidelines and our political content policies for advertisers. Our AI models play a crucial role in reinforcing our enforcement efforts. With advancements in our Large Language Models (LLMs), we are building more adaptable and efficient enforcement systems to swiftly address emerging threats.
- Collaboration to Counter Misinformation: Following our initial contribution of €25 million to launch the European Media & Information Fund, aimed at strengthening media literacy and combating misinformation across Europe, funding has been allocated to 70 projects across 24 countries. These projects cover areas such as fact-checking during elections and critical events, and improving media literacy among hard-to-reach populations. We also support the Global Fact Check Fund, as well as civil society, research, and media literacy efforts from partners including Google.org grantees such as TechSoup Europe, the Civic Resilience Initiative, the Baltic Centre for Media Excellence, CEDMO, and more.
Helping People Navigate AI-Generated Content
As with any emerging technology, AI brings new opportunities and challenges. While generative AI simplifies content creation, it also raises concerns about the trustworthiness of information, as seen with “deepfakes.” We have implemented policies across our products and services to address misinformation and disinformation related to AI-generated content. Here are some ways we aid users in navigating AI-generated content:
- Ads Disclosures: We have expanded our political content policies to mandate advertisers to disclose if their election ads feature synthetic content that inauthentically portrays real or realistic-looking people or events. Additionally, our ads policies already prohibit the use of manipulated media, such as deepfakes or doctored content, to mislead audiences.
- Content Labels on YouTube: YouTube’s misinformation policies prohibit technically manipulated content that misleads users and poses a significant risk of harm. In the coming months, YouTube will require creators to disclose when they have created realistic altered or synthetic content, and will display a label that indicates this to viewers.
- Responsible Approach to Generative AI Products: In line with our principled and responsible approach to Generative AI products like Gemini, we have prioritized testing across a range of safety risks, including cybersecurity vulnerabilities, misinformation, and fairness. As a precaution, we will soon restrict the types of election-related queries for which Gemini returns responses.
- Providing Additional Context to Users:
- The “About this image” feature in Search assists users in evaluating the credibility and context of images found online.
- Our double-check feature in Gemini, which lets users check for web content that corroborates Gemini’s responses, is rolling out in EU countries.
- Digital Watermarking and Enhanced Transparency:
- SynthID, a tool from Google DeepMind, embeds a digital watermark into AI-generated images and audio.
- We have recently joined the C2PA coalition and standard, an industry-wide initiative aimed at providing more transparency and context for AI-generated content.
Equipping Campaigns and Candidates with Superior Security Features and Training
Given the heightened cybersecurity risks around elections, we are working to help high-risk users such as campaigns and election officials strengthen their security against existing and emerging threats, while also training them on how to use our products and services.
- Security Tools for Campaigns and Elections: We offer complimentary services including our Advanced Protection Program — a robust set of cybersecurity protections — and Project Shield, which provides unlimited protection against Distributed Denial of Service (DDoS) attacks. Moreover, we collaborate with organizations such as PUBLIC, The International Foundation for Electoral Systems (IFES), and Deutschland sicher im Netz (DSIN) to expand security training and offer security tools including Titan Security Keys, which safeguard against phishing attacks and unauthorized access to Google Accounts.
- Addressing Coordinated Influence Operations: Our Threat Analysis Group (TAG) and the team at Mandiant Intelligence help identify, monitor, and combat emerging threats, spanning from coordinated influence operations to cyber espionage campaigns targeting high-risk entities. We provide regular reports on actions taken through our quarterly TAG bulletin and engage with government officials and industry peers to share threat information and suspected election interference. Mandiant also assists organizations in developing comprehensive election security programs and fortifying their defenses with proactive exposure management, intelligence-driven threat hunts, cyber crisis communication services, and threat intelligence tracking of information operations.
- Useful Resources at euelections.withgoogle: We are launching a dedicated EU-specific hub at euelections.withgoogle to provide resources and upcoming training sessions to assist campaigns in connecting with voters and managing their security and digital presence. In preparation for the European Parliamentary elections in 2019, we conducted in-person and online security training for over 2,500 campaign and election officials, and in 2024, we aim to expand on these efforts.
These initiatives build on our work in elections across other countries and regions. We are committed to collaborating with governments, industries, and civil society to uphold the integrity of elections in the European Union, building upon our commitments outlined in the EU Code of Practice on Disinformation. In the forthcoming months, we will share more about our efforts to inform voters, support campaign activities, and safeguard our platforms against evolving threats, including during our Fighting Misinformation Online event in Brussels on March 21.