Join the event trusted by corporate leaders for almost two decades. VB Transforma connects people building AI Real Enterprise. Learn more
Ciso knows exactly where their nightmare AI is coming the fastest. This is inference, a sensitive stage in which models live in real data, leaving companies exposed to quick injection, data leaks and jailbreak.
Databicks Ventures AND Noma secrety They are confronted with these threats at the application stage. Supported by the recent round of series A in the amount of $ 32 million led by Ballistic Ventures and GlileT Capital, with mighty support with Databicks Ventures, the partnership aims to solve critical safety gaps that hindered the implementation of AI Enterprise AI.
“The reason why enterprises will hesitate to implement artificial intelligence is security,” said Niv Braun, general director of Noma Security, in an exclusive interview for Venturebeat. “Thanks to Databicks, we set up real-time threat analyzes, advanced protection of the application layer and proactive AI Red connected directly in the flow of the company’s work. Our joint approach enables organizations to organize safely and imagining their ambitions AI,” said Braun.
AI application protection requires real -time analysis and defense of the executive environment, the Gartner finds
Time-honored cyber security priority treats peripheral defense, leaving threats to AI inference. Andrew Ferguson, vice president of Databicks Ventures, emphasized this critical lock in security in an exclusive interview with Venturebeat, emphasizing the urgency of clients in the safety of the application layer. “Our clients clearly indicated that the security of AI in real time is crucial, and NOMA exceptionally provides this ability,” said Ferguson. “Noma directly refers to the vulnerability in the field of application safety with continuous monitoring and precise control elements of the executive environment.”
Braun expanded this critical need. “We have built our protection of the executive environment especially for increasingly complex AI interactions,” explained Braun. “Real time analytics at the application stage provides enterprises, maintain a solid defense of the executive environment, minimizing the unauthorized exposure to data and manipulation of the opponent model.”
Gartner’s last analysis confirms that the company’s demand for advanced AI Trust, risk and security management (TRISM) The possibilities are growing. Gartner predicts that by 2026 80% Unauthorized AI incidents will result from the internal improper apply, not external threats, strengthening the urgency of integrated management and security in real time.
Gartner’s AI Tryzom frames are illustrated by comprehensive safety layers necessary for effective risk management AI Enterprise. Source: Gartner
Proactive red noma teams aim to ensure AI integrity from the very beginning
Braun said that a proactive approach to the red team is strategically crucial for identifying gaps in security for long before the production of AI models, said Venturebeat. By simulating sophisticated attacks opposite during pre -production testing, NOMA reveals and solves the early risk, significantly increasing the solidity of the environmental environment.
During the interview with VentureBeat, Braun developed a strategic value of a proactive red team: “Red teaming is necessary. We are proactively discovering the pre -production of gaps in security, ensuring the integrity of AI from the first day.”
(Louis will run a round table with the Red team in VB Transformation 24 and 25 June, Register today.)
“Shortening time for production without prejudice to safety requires the avoidance of excessive engineering. We design testing methodologies that directly inform about the protection of the executive environment, helping enterprises safely and efficiently go from testing to implementation,” advised Braun.
Braun continued to develop the complexity of contemporary AI interactions and the depth required in proactive red methods of the team. He emphasized that this process must evolve along with the increasingly sophisticated AI models, especially the generative type: “Our protection of the executive environment has been specially built to operate more and more complex AI interactions,” explained Braun. “Every detector we use integrates many layers of security, including advanced NLP models and language modeling, ensuring that we provide comprehensive security at every stage of inference.”
The red team practices not only confirms models, but also strengthens the trust of enterprises to safely implement advanced AI systems, directly in accordance with the expectations of leading directors for company information security (CISO).
Like Batabicks and Noma, they block critical threats to AI inference
Securing the AI inference from emerging threats has become the highest priority for CISO, because enterprises scale their pipelines of the AI model. “The reason why enterprises hesitate to fully implement artificial intelligence on a large scale,” emphasized Braun. Ferguson repeated this urgent need, noting: “Our clients clearly indicated that the security of AI in real time is critical, and Noma exceptionally meets this need.”
Together, Databicks and Noma offer integrated real -time protection against sophisticated threats, including rapid injection, data leaks and Jailbreak model, at the same time strictly consistent with standards, such as DASF 2.0 DASF 2.0 and OWASP guidelines.
The table below summarizes the key threats on the AI inference and the way Databicks-Ane is soothed by:
Threat vector | Description | Potential influence | Noma-Databicks soothing |
Rapid injection | Malicious input data is the superior instructions of the model. | Unauthorized data exposure and harmful content generation. | Quick scanning with multi -layer detectors (NOMA); Input validation via DASF 2.0 (Databicks). |
Confidential data leakage | Accidental exposure of confidential data. | Violation of compliance, loss of intellectual property. | Sensitive data in real time and masking (noma); Unity directory management and encryption (Databicks). |
Jailbreaking model | Avoiding built -in safety mechanisms in AI models. | Generating inadequate or malicious output. | Detection and enforcement of Jailbreak during performance (NOMA); Management of the MLFLOW model (Databicks). |
Apply of an agent tool | Incorrect apply of integrated functionality of AI agents. | Unauthorized access to the system and escalation of privileges. | Monitoring of agent interactions (noma) in real time; Controlled implementation environments (Databicks). |
Agent’s memory poisoning | Injection of false data into the agent’s lasting memory. | Damaged decision making, disinformation. | AI-SPM integrity controls and memory safety (NOMA); Delta Lake Data Versioning (Databicks). |
Intermediate quick injection | Embedding malicious instructions in trusted expenditure. | Agent kidnapping, unauthorized performance of tasks. | Input scanning in real time for malicious patterns (noma); Secure pipelines for receiving data (Databicks). |
Like Lakehouse Lake architecture, architecture supports management and security
The architecture of Lakehouse Databicks combines established management possibilities of established data warehouse with the scale of the data lakes, centralization of analysis, machine learning and AI loads in one managed environment.
By forcing management directly in the data cycle, Lakehouse Architecture refers to conformity and security threats, especially at the stages of inference and the executive environment. It adapts strictly with industry frames, such as OWASP and Miter Atlas.
During our interview, Braun emphasized the adaptation of the platform to the strict regulatory requirements, which he sees in sales cycles and existing customers. “We automatically map our security controls on widely accepted frames, such as OWASP and Mitr Atlas. This allows our clients to have some compliance with critical regulations, such as the EU AI Ace and ISO 42001. Management is not just about check -in fields. It’s about embedding transparency and compliance directly into operative flows.”

Databicks Lakehouse integrates management and analysis to safely manage AI loads. Source: Gartner
Like Databicks and Noma plan to secure AI Enterprise on a scale
Adoption AI Enterprise accelerates, but with the expansion of implementation, as well as the risk of security, especially at the stage of inferences in the model.
The partnership between Databicks and Noma’s security directly concerns this, ensuring integrated management and detection of real -time threats, focusing on securing AI’s flows from development through production.
Ferguson clearly explained the justification of this combined approach: “Enterprise AI requires comprehensive security at every stage, especially during the performance. Our partnership with NOMA integrates proactive threat analysis directly with AI operations, giving enterprises protection that AI must convince their implementation.”