Microsoft Could Help Protect You from Abusive AI: A Call for Legislative Action
This week, Microsoft unveiled a comprehensive playbook aimed at tackling the misuse of AI technology, particularly focusing on the threats posed by deepfakes and other AI-generated content. The document not only highlights Microsoft’s proactive steps but also outlines crucial legislative requests for lawmakers to consider.
Microsoft’s Vice Chair and President, Brad Smith, personally delivered these requests in Washington. Speaking at a Bipartisan Policy Center event, Smith emphasized the human nature behind AI-related problems. He warned of the dangers posed by AI’s capacity to create believable deepfakes, which can spread false information about public figures, impersonate personal or business contacts, or generate inappropriate images, including those involving children. Smith stressed the urgent need for collective action, stating, “We need to come together as a group of companies. But we need laws as well.”
Key Legislative Proposals
The 52-page whitepaper, titled “Protecting the Public from Abusive AI-Generated Content,” proposes three key legislative measures:
- Deepfake Fraud Statute: This would empower federal and state prosecutors to take legal action against AI-generated frauds and scams.
- Federal Labeling Law: Modeled after the Caller ID statute, this law would require AI providers to label synthetic content using advanced provenance tools.
- Revised CSAM and NCII Laws: Existing laws against child sexual abuse material (CSAM) and non-consensual intimate imagery (NCII) would be updated to include AI-generated versions.
Bipartisan Support and Industry Responsibility
At the event, a diverse group of speakers voiced support for these legislative efforts. Senator Amy Klobuchar (D-MN) emphasized the need for immediate action, stating, “We just need to get some guardrails in place. We can always add fancy things later.” Representative Laurel Lee (R-FL) highlighted the broad political support for protecting children from AI threats, referencing recent legislative successes like the REPORT Act.
Michelle DeLaune, President and CEO of the National Center for Missing and Exploited Children, underscored the escalating issue of AI-generated CSAM, noting that AI technology is being misused to create harmful content tutorials. Alexandra Reeve Givens, CEO of the Center for Democracy & Technology, supported the DEFIANCE Act, which targets non-consensual and faked images.
Challenges and Next Steps
While there is strong bipartisan support for measures to protect children and curb AI misuse, implementing effective laws remains challenging. Smith pointed to Microsoft’s efforts in promoting the C2PA (Coalition for Content Provenance and Authenticity) watermark standard, which has gained traction among major tech players like OpenAI and TikTok. Additionally, Microsoft’s educational initiatives, such as the Minecraft game “The Investigators” and the “Real or Not?” image quiz, aim to enhance public AI literacy.
However, questions remain about whether other companies will commit to similar efforts. Recent incidents, such as the use of AI to create misleading political content, underscore the urgency and the need for decisive action.
In conclusion, Microsoft’s call for regulatory action against abusive AI practices marks a significant step toward safeguarding society from the potential dangers of AI. As legislative discussions progress, it will be crucial for both the tech industry and lawmakers to work together to implement these necessary protections.