For 60 years, the Munich Security Conference has been a platform for global leaders, businesses, experts, and civil society to engage in candid discussions on fortifying and safeguarding democracies and the global order. Amid mounting geopolitical tensions, consequential elections, and increasingly sophisticated cyber threats, these conversations are more critical than ever. AI's emerging role in both offense and defense adds a significant new dimension to them.
Earlier this week, Google’s Threat Analysis Group (TAG), Mandiant, and Trust & Safety teams released a new report revealing that Iranian-backed groups are using information warfare to manipulate public perceptions of the Israel-Hamas conflict. The report also provided updates on the cyber dimensions of Russia’s war in Ukraine and highlighted the proliferation of commercial spyware used by governments and malicious actors to target journalists, human rights advocates, dissidents, and opposition politicians. Meanwhile, threat actors continue to exploit vulnerabilities in outdated systems to compromise the security of governments and private enterprises.
Amid these escalating threats, there is a historic opportunity to use AI to bolster the cyber defenses of democracies worldwide, giving businesses, governments, and organizations defensive tools on a scale previously accessible only to larger entities. At Munich this week, discussions will focus on using new investments, commitments, and partnerships to address AI's risks and leverage its potential. Democracies cannot thrive in a world where attackers innovate with AI while defenders cannot.
Leveraging AI to enhance cyber defenses
Cyber threats have long been a challenge for security professionals, governments, businesses, and civil society. AI has the potential to tip the scales and give defenders a decisive advantage over attackers. However, like any technology, AI can also be exploited by bad actors and become a conduit for vulnerabilities if not developed and deployed securely.
In response, a newly launched AI Cyber Defense Initiative aims to harness AI’s security potential through a proposed policy and technology agenda designed to secure, empower, and advance our collective digital future. The initiative builds on the Secure AI Framework (SAIF), which is intended to assist organizations in creating AI tools and products that are secure by default.
As part of the AI Cyber Defense Initiative, a new “AI for Cybersecurity” startup cohort is being introduced to strengthen the transatlantic cybersecurity ecosystem, along with an expansion of the $15 million commitment to cybersecurity skilling across Europe. Additionally, $2 million will be allocated to bolster cybersecurity research, and Magika, the Google AI-powered file type identification system, will be open-sourced. Investment in the secure, AI-ready network of global data centers will also continue, with more than $5 billion planned for European data centers by the end of 2024, supporting secure and reliable access to a range of digital services, including broader generative AI capabilities such as the Vertex AI platform.
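For readers who want to experiment once the code is published, here is a minimal sketch of identifying file types with Magika's Python bindings. The package name, class, and result fields below follow the initial open-source release and may differ in later versions.

```python
# pip install magika  -- Python bindings from the open-source release
from magika import Magika

# Magika uses a small deep-learning model to infer a file's content
# type from its raw bytes, rather than relying on extensions or
# magic numbers alone.
m = Magika()

# Identify a content type directly from in-memory bytes.
result = m.identify_bytes(b"function greet(name) { console.log(name); }")

# Result fields as of the initial release (later versions may rename them).
print(result.output.ct_label)   # e.g. "javascript"
print(result.output.mime_type)  # e.g. "text/javascript"
print(result.output.score)      # model confidence between 0.0 and 1.0
```

The same instance also exposes a path-based helper for files on disk, which is how a scanning pipeline would typically feed content through for inspection.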
Protecting democratic elections
This year, elections are set to take place across Europe, the United States, India, and numerous other countries. Google has a long-standing commitment to supporting the integrity of democratic elections, exemplified by the recent announcement of an EU prebunking campaign ahead of parliamentary elections. Through short video ads on social media platforms in France, Germany, Italy, Belgium, and Poland, this educational campaign aims to teach audiences how to identify common manipulation techniques before they encounter them. Efforts to prevent abuse on platforms, surface high-quality information to voters, and provide information about AI-generated content to aid informed decision-making will be sustained.
While there are understandable concerns about the potential misuse of AI to create deepfakes and mislead voters, AI also offers a unique opportunity to prevent abuse on a large scale. Google’s Trust & Safety teams are leveraging AI to enhance abuse-fighting efforts, enforce policies at scale, and adapt swiftly to new situations or claims.
Collaboration with industry peers continues, with joint efforts to share research and counter threats and abuse, including the risk of deceptive AI content. In a recent development, Google joined the Coalition for Content Provenance and Authenticity (C2PA), which is focused on establishing a content credential that provides transparency into how AI-generated content is created and edited over time. These collaborations build on existing cross-industry initiatives for responsible AI, such as the Frontier Model Forum and the Partnership on AI.
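As a rough illustration of what a content credential makes possible, the sketch below reads the C2PA manifest embedded in a signed media file using the community's open-source c2pa-python bindings. The read_file helper and the JSON layout shown reflect early versions of those bindings and are assumptions that may not match the version you install; the file name is a placeholder.

```python
# pip install c2pa-python  -- community bindings for the C2PA standard
import json
import c2pa

# Read the manifest store embedded in a signed asset. In early versions
# of the bindings, read_file() returns the store as a JSON string; the
# second argument names a directory for extracted resources (thumbnails).
manifest_json = c2pa.read_file("signed_image.jpg", "extracted_resources")
store = json.loads(manifest_json)

# The active manifest describes the most recent signed edit of the asset.
active = store["manifests"][store["active_manifest"]]

# claim_generator records which tool produced or edited the asset --
# the hook that lets platforms disclose AI-generated content to users.
print(active["claim_generator"])
```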
Collaborative efforts to defend the rules-based international order
The Munich Security Conference has proven to be a resilient forum for addressing and overcoming challenges to democracy. Over the last 60 years, democracies have collectively navigated historic shifts, and the rise of AI presents another. Once again, there is an opportunity for governments, businesses, academics, and civil society to come together to establish new partnerships, harness the potential of AI for positive impact, and reinforce the rules-based global order.