
Endor Labs Highlights AI Transparency and the Risks of Open-Washing

Experts at Endor Labs, an open-source security firm, are focusing on the concept of “openness” in AI systems. Andrew Stiefel, Senior Product Marketing Manager at Endor Labs, highlights the importance of applying lessons learned from software security to AI.

He points out that the U.S. government’s 2021 Executive Order on Improving the Nation’s Cybersecurity requires organizations to produce a software bill of materials (SBOM) for products sold to federal agencies. An SBOM acts as an inventory of a product’s open-source components, aiding in the detection of vulnerabilities, and Stiefel argues that similar principles should be applied to AI models.
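To make the idea concrete, here is a minimal, hypothetical sketch of what an SBOM-style inventory might look like when extended to an AI model. The field names, component types, and the `find_license_risks` check are illustrative assumptions, not a formal standard or an Endor Labs product feature:

```python
import json

# Hypothetical "AI bill of materials": an SBOM-like inventory that lists a
# model's components (base model, training data, runtime libraries) along
# with their licenses. All names below are illustrative placeholders.
aibom = {
    "model": "example-model",
    "version": "1.0.0",
    "components": [
        {"type": "base-model", "name": "example-base", "license": "Apache-2.0"},
        {"type": "training-data", "name": "example-corpus", "license": "CC-BY-4.0"},
        {"type": "library", "name": "example-runtime", "license": "MIT"},
    ],
}

def find_license_risks(bom, disallowed=("Proprietary", "Unknown")):
    """Return components whose license is missing or on a disallowed list."""
    return [
        c for c in bom["components"]
        if c.get("license", "Unknown") in disallowed
    ]

print(json.dumps(aibom, indent=2))
print("Flagged components:", find_license_risks(aibom))
```

Just as a software SBOM lets security teams search an inventory when a new vulnerability is disclosed, a machine-readable model inventory like this could be queried when a risky dataset or license issue surfaces.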

Julien Sobrier, Senior Product Manager at Endor Labs, adds depth to the discussion by explaining the complexities involved in defining what it means for an AI model to be “open.” He emphasizes that for a model to be considered truly open, all of its components, including the training data and weights, must be made available. Sobrier warns against “open-washing,” where organizations claim openness while imposing restrictions on how a model can be used.

Amid these discussions, DeepSeek is emerging as a notable player by releasing parts of its models and code as open-source, which has been positively received for promoting transparency and security. Stiefel notes that this approach allows for community auditing of their systems for security risks and offers a roadmap for managing AI infrastructure.

The trend towards open-source AI is gaining momentum, with a report indicating that 60% of organizations prefer open-source models for generative AI. Endor Labs’ research shows that companies typically use between seven and twenty-one open-source models per application, highlighting the need for security teams to understand the potential risks. 

As the adoption of open-source AI accelerates, managing risks becomes crucial. Stiefel outlines a systematic approach involving discovery, evaluation, and response to ensure safe model adoption. Both Stiefel and Sobrier stress the need for the community to develop best practices for building and adopting AI models responsibly.
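The discovery, evaluation, and response steps Stiefel describes can be sketched as a simple pipeline. This is an illustrative assumption of how such a workflow might be wired together, not an Endor Labs implementation; the check names and decision rules are hypothetical:

```python
# Sketch of a discovery -> evaluation -> response workflow for AI models.
# All model names, fields, and thresholds are hypothetical placeholders.

def discover(inventory):
    """Discovery: enumerate every model each application depends on."""
    return [model for app_models in inventory.values() for model in app_models]

def evaluate(model):
    """Evaluation: run simple illustrative checks against a model's metadata."""
    return {
        "has_license": model.get("license") is not None,
        "weights_available": model.get("weights_public", False),
        "known_source": model.get("source") in {"huggingface", "internal"},
    }

def respond(checks):
    """Response: approve, flag for review, or block based on check results."""
    if all(checks.values()):
        return "approve"
    if checks["has_license"]:
        return "review"
    return "block"

inventory = {
    "app-a": [
        {"name": "model-x", "license": "MIT", "weights_public": True,
         "source": "huggingface"},
        {"name": "model-y", "license": None, "weights_public": False,
         "source": "unknown"},
    ],
}

for model in discover(inventory):
    print(model["name"], "->", respond(evaluate(model)))
```

With multiple open-source models per application, as the Endor Labs research suggests, automating even coarse checks like these helps security teams focus manual review on the models that actually need it.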

