Back in July, the White House secured commitments from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI to help manage the risks that artificial intelligence potentially poses. More recently, eight more companies—Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability—also pledged to maintain “the development of safe, secure, and trustworthy AI,” as a White House brief reported.
Let’s explore why this is so important, especially as AI continues to develop.
The Plan: AI-Generated Content Will Be Watermarked
As beneficial as artificial intelligence has proven to be, it has also become an effective tool for cybercriminals and other threat actors. From deepfaked images to replicated voices used to scam people out of thousands of dollars, there are countless ways that legitimate AI tools can be weaponized.
This is why the Biden White House is pushing these companies to develop the technology needed to watermark AI-generated content in a way that identifies the platform used to create it. In theory, these watermarks would help prove whether an AI platform was involved in creating a given piece of content, making potential threats easier to spot and encouraging the platforms themselves to detect misuse more effectively.
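To make the idea concrete, here is a minimal sketch of one possible approach: attaching a signed provenance tag to generated content so the originating platform can later confirm (or disavow) that it produced it. All names here (the platform ID, the key, the tag format) are hypothetical illustrations, not any company's actual method; real-world schemes typically embed the mark invisibly in the content itself rather than appending it.

```python
# A hypothetical sketch of content watermarking via a signed provenance
# tag. The platform keeps SECRET_KEY private; anyone can ask the platform
# to verify whether a given piece of content carries its valid signature.
import hmac
import hashlib

PLATFORM_ID = "example-ai-platform"   # hypothetical platform name
SECRET_KEY = b"platform-signing-key"  # held privately by the platform

def watermark(content: str) -> str:
    """Append a provenance tag: the platform ID plus an HMAC over the content."""
    tag = hmac.new(SECRET_KEY, f"{PLATFORM_ID}:{content}".encode(),
                   hashlib.sha256).hexdigest()
    return f"{content}\n[generated-by: {PLATFORM_ID}; sig: {tag}]"

def verify(content: str, platform_id: str, sig: str) -> bool:
    """Lets the platform confirm whether it generated this exact content."""
    expected = hmac.new(SECRET_KEY, f"{platform_id}:{content}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

Because the signature covers the content itself, any tampering with the text invalidates the tag, which is what makes this kind of scheme useful for tracing content back to its source.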
In addition to the watermark, other safeguards have been agreed to by the technology firms:
- Investments will be made into cybersecurity to protect the essential data that powers AI models
- Independent experts will be charged with testing AI models before they’re released to ensure that major risks associated with AI are accounted for in their security
- Research into the risks AI poses to society at large, such as bias and inappropriate use, will be conducted and any identified issues will be flagged
- Third parties will be more able to discover vulnerabilities and report them so they can be resolved
- These firms will share AI risk management data with one another, as well as with academia and society at large
- They have also committed to disclosing their security risks and the risks their products pose to society, including bias
- They have further committed to creating AI that tackles some of society’s largest, most pressing issues
Granted, these standards and practices aren’t enforceable by the government, but they serve as an invaluable first step towards more secure artificial intelligence.
We Can Help Secure Your Business Against Today’s Threats
We’ve long been committed to fulfilling business IT needs, particularly in regard to cybersecurity. Give us a call at 734-927-6666 / 800-GET-XFER to find out what we can do for you and your operations.