Open-Source GenAI Models Achieve Enterprise-Grade Security with Guardrails, Study Shows
A new evaluation led by LatticeFlow AI in collaboration with SambaNova has demonstrated that open-source generative AI models can achieve enterprise-grade security when equipped with targeted guardrails—outperforming many closed models in real-world testing scenarios. The study marks the first quantifiable proof that open-source GenAI models, once properly secured, are viable for deployment in highly regulated industries such as finance, healthcare, and government. The research assessed the top five open foundation models, measuring their security performance in two configurations: the base model as released, and the same model enhanced with a dedicated input filtering layer designed to block adversarial prompts and manipulative inputs. Before guardrails were applied, security scores for the open models ranged as low as 1.8%. After implementation, scores surged to 99.6%, while maintaining over 98% quality of service—demonstrating that robust security does not come at the cost of usability. Dr. Petar Tsankov, CEO and Co-Founder of LatticeFlow AI, emphasized the importance of technical rigor in AI governance. “At LatticeFlow AI, we provide the deepest technical controls to evaluate GenAI security and performance,” he said. “These findings give AI, risk, and compliance leaders the clarity they’ve been missing—enabling them to move forward with open-source GenAI safely and confidently.” The evaluation focused on cybersecurity risks, simulating enterprise-relevant attack scenarios such as prompt injection, data leakage, and model manipulation. The results show that with the right safeguards, open-source models can meet or exceed the security standards required for sensitive applications. Harry Ault, Chief Revenue Officer at SambaNova, highlighted the growing demand from enterprises. “Our customers—ranging from leading financial institutions to government agencies—are rapidly adopting open-source models and accelerated inference to power next-generation agentic applications,” he said. “This evaluation confirms that, with the right safeguards, open-source models are enterprise-ready, offering transformative benefits in cost efficiency, customization, and responsible AI governance.” For financial institutions and other regulated sectors, the implications are significant. As GenAI transitions from experimentation to production, regulators, boards, and internal risk teams are demanding greater transparency, auditability, and control. This study provides the evidence needed to prove that open-source models can meet those standards when properly secured. The findings also address a long-standing barrier to open-source adoption: the lack of clear, measurable data on model risk. By delivering transparent, quantifiable insights, LatticeFlow AI is helping organizations make informed decisions about their AI strategies. LatticeFlow AI, a pioneer in AI governance, developed COMPL-AI—the world’s first EU AI Act-compliant framework for generative AI—developed in partnership with ETH Zurich and INSAIT. The company combines Swiss precision with scientific rigor to build trust in AI through evidence-based governance. This evaluation not only validates the security potential of open-source GenAI but also redefines how enterprises should approach AI adoption—with a focus on control, transparency, and measurable risk mitigation.