As businesses rush to deploy AI chatbots and agentic AI systems, a quiet but growing risk threatens their operations. Security vulnerabilities that traditional testing methods can’t keep up with. Recognizing this critical gap, New York-based startup SplxAI is stepping in with a dynamic and automated approach to protect conversational AI at scale. With a fresh $7 million seed round. The company is now racing to safeguard the next generation of AI applications before attackers can exploit them.
SplxAI’s funding round was led by LAUNCHub Ventures, with backing from Rain Capital, Runtime Ventures, DNV Ventures, Inovo, and South Central Ventures. This capital injection will accelerate the growth of SplxAI’s security platform. Helping businesses protect both internal AI agents and customer-facing AI chatbots. Their platform does this through automated penetration testing, real-time monitoring, and dynamic threat remediation. All tailored for the complex nature of large language model (LLM)-powered systems.
Co-founded in 2023 by Kristian Kamber and Ante Gojsalić, SplxAI was born out of a simple but urgent realization. Traditional security tools are outdated in the age of AI. Kamber, who previously worked in software and cybersecurity sales at Zscaler, teamed up with Gojsalić, an AI consultant, to build a solution from the ground up. The team also includes elite AI red teamers with experience at major security events like Wiz and Black Hat.
Since launching its platform in August 2024, SplxAI has experienced 127% quarter-over-quarter growth. With clients like KPMG, Glean, Infobip, and Brand Engagement Network already relying on their tools to secure AI operations. Their most recent innovation, Agentic Radar, is an open-source software tool that scans agentic workflows for security gaps through static code analysis.
SplxAI is positioning itself as the only security company focused solely on the fast-changing world of agentic AI systems. According to Kamber, the future of AI security must be proactive and scalable — and that’s exactly where SplxAI is headed. “Manual testing just doesn’t cut it anymore,” he explained. “We’ve built the only scalable platform capable of protecting AI agents from day one.”
The startup has also achieved SOC2 Type I compliance. A crucial milestone that underscores its commitment to protecting customer data and maintaining high security standards. Beyond technology, SplxAI is investing in leadership. Sandy Dunn, formerly the CISO at Brand Engagement Network, has joined as the company’s Chief Information Security Officer. Where she’ll lead the Governance, Risk, and Compliance (GRC) vertical.
Adding to its momentum, SplxAI has partnered with Hackrate to combine ethical hacking with automated AI red teaming. This hybrid approach enhances the company’s ability to simulate advanced adversarial scenarios. Such as prompt injection attacks, hallucinations, and off-topic LLM behavior, which are notoriously difficult to detect with static rules.
With projections showing that over 33% of enterprise software will integrate agentic AI by 2028. SplxAI is addressing a challenge most companies are only just beginning to understand. As AI systems evolve from simple assistants into complex, autonomous agents, new risks emerge — ones that could undermine trust, data integrity, and even legal compliance.
Stan Sirakov of LAUNCHub Ventures, who now joins the SplxAI board, summed up the urgency: “This is the only team building scalable risk management for agentic AI. We’re proud to support their mission.”
As AI adoption continues to surge, companies will need more than just optimism and innovation — they’ll need real-time, adaptive AI security. And with this funding, SplxAI is well-positioned to lead the charge.