Announcing the Google Secure AI Framework

Google today announced the release of a conceptual framework to help collaboratively secure AI technology. Google says SAIF is inspired by security best practices, such as supply chain auditing, testing, and control, that the company has applied to software development, incorporating its understanding of major security trends and risks specific to AI systems.

“The potential of artificial intelligence, especially generative AI, is enormous. However, in the pursuit of progress within these new frontiers of innovation, there must be clear industry security standards for building and deploying this technology responsibly. That is why we are excited today to introduce the Secure AI Framework (SAIF), a conceptual framework for secure AI systems.

A framework across the public and private sectors is essential to ensure that responsible actors safeguard the technology that supports AI advancements, so that AI models are secure by default when they are implemented. Today marks an important first step.”

Google Secure AI Framework

“The latest AI innovations can improve the scale and speed of security incident response efforts. Adversaries are likely to use AI to expand their sphere of influence, so it is important to use AI and its existing and emerging capabilities to remain agile and cost-effective in protecting against them. Consistency across control frameworks can support the mitigation of AI risks and scale protections across different platforms and tools, ensuring that the best protections are available to all AI applications in a scalable and cost-efficient manner.

At Google, this includes extending secure-by-default protections to AI platforms such as Vertex AI and Security AI Workbench, and building controls and protections into the software development lifecycle. Capabilities that address general use cases, such as the Perspective API, can help the entire organization take advantage of the latest protections.”
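As a concrete illustration of how such a general-purpose protection can be consumed, the sketch below calls the Perspective API to score a piece of text for toxicity before it reaches a downstream AI system. This is a minimal sketch, not an official SAIF artifact: the API key placeholder and the 0.8 toxicity threshold are assumptions for illustration, not values Google prescribes.

```python
# Minimal sketch: screening user input with the Perspective API before it
# reaches an AI system. The API key placeholder and the 0.8 threshold are
# illustrative assumptions, not values prescribed by SAIF.
import requests

PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = "YOUR_API_KEY"  # placeholder; obtain a real key via Google Cloud

def toxicity_score(text: str) -> float:
    """Return the Perspective API TOXICITY summary score (0.0 to 1.0) for text."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(PERSPECTIVE_URL, params={"key": API_KEY}, json=payload)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    score = toxicity_score("You are all wonderful people.")
    if score > 0.8:  # illustrative threshold, tune per application
        print(f"Blocked: toxicity {score:.2f}")
    else:
        print(f"Allowed: toxicity {score:.2f}")
```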

“Continuous testing of applications through continuous learning can ensure that detection and protection capabilities address a changing threat environment. This includes techniques such as reinforcement learning based on incidents and user feedback, and involves steps such as updating training datasets, fine-tuning models to respond strategically to attacks, and allowing the software used to build the models to embed further security in context (such as detecting anomalous behavior). Organizations can also conduct regular red team exercises to better ensure the safety of AI-powered products and capabilities.”
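To make the quoted idea of detecting anomalous behavior more concrete, here is a minimal sketch of a rolling z-score detector that flags unusual spikes in per-minute prompt volume to an AI service. The window size, threshold, and synthetic traffic are illustrative assumptions, not part of SAIF itself.

```python
# Minimal sketch of anomalous-behavior detection: flag minutes whose prompt
# volume deviates sharply from a rolling baseline. Window size and z-score
# threshold are illustrative assumptions, not SAIF-prescribed values.
import random
from collections import deque
from statistics import mean, stdev

def detect_anomalies(counts, window=30, threshold=3.0):
    """Yield (index, count, z) for counts more than `threshold` standard
    deviations above the rolling baseline of the previous `window` values."""
    history = deque(maxlen=window)
    for i, count in enumerate(counts):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0:
                z = (count - mu) / sigma
                if z > threshold:
                    yield i, count, z
        history.append(count)

if __name__ == "__main__":
    random.seed(0)
    # 120 minutes of normal traffic, then a burst that could indicate flooding.
    traffic = [random.randint(95, 110) for _ in range(120)] + [900, 950, 1000]
    for minute, count, z in detect_anomalies(traffic):
        print(f"minute {minute}: {count} prompts (z={z:.1f}) -- investigate")
```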

For more information about the new Google Secure AI Framework, visit the company’s official blog via the source link below.

Source: Google

Filed Under: Technology News, Top News
