DataCentreNews India - Specialist news for cloud & data centre decision-makers
Modern gpu data center with runtime protection shield layers

Check Point targets securing AI factories at runtime

Wed, 21st Jan 2026

Check Point Software has set out a security case for what it calls "AI factories", with a focus on protecting AI systems at runtime as organisations scale up data centre infrastructure for model training and inference.

The company said the expansion of AI workloads in enterprise environments has exposed gaps in existing security approaches. It pointed to threats such as prompt manipulation, model poisoning and attacks on underlying infrastructure.

AI Factories

Check Point Software described an "AI Factory" as a purpose-built, large-scale data centre environment that runs the lifecycle of AI work. It said this includes data pipelines, compute, software tooling, and operational processes used to build, train, scale and serve models.

The company framed the shift as a move from pilot projects to operational deployments. It said this created new operational dependencies between infrastructure teams and security teams.

Check Point Software said many organisations now focus on GPU procurement and utilisation. It argued that security risks sit alongside capacity planning and performance management.

Runtime Risks

The company said AI runtime security represents a blind spot for many enterprises. It placed emphasis on attacks that target live inference and model-serving environments, rather than development and training alone.

It cited Gartner findings that it said showed 32% of organisations had already experienced AI attacks involving prompt manipulation. It also cited Gartner findings that it said showed 29% had faced direct attacks on their GenAI infrastructure in the last year.

Check Point Software also cited a figure that nearly 70% of cyber security leaders said emerging GenAI risks require significant changes to existing cyber security approaches.

It argued that traditional security controls do not reliably recognise AI-specific traffic and workflows. It pointed to Model Context Protocols and Retrieval-Augmented Generation patterns as areas where existing tools can lack visibility.

It also cited a survey by Lakera that it said found 49% of organisations reported high levels of concern about their current vulnerabilities.

Nvidia Collaboration

Check Point Software referred to a recent collaboration with Nvidia. It described the work as an example of security integration alongside AI infrastructure rather than bolted on after deployment.

It argued that security integrations often introduce a "performance tax". It said security leaders and infrastructure leaders now face pressure to limit latency and preserve compute capacity in production environments.

Offload Approach

Check Point Software set out an approach that it said relies on offloading certain security processing to dedicated infrastructure components.

It said deep packet inspection and runtime monitoring can consume CPU and GPU resources. It said an alternative approach uses Nvidia DOCA Argus telemetry and moves security processing to the BlueField DPU.

The company said this model allows for monitoring and workload isolation while preserving GPU capacity for training and inference workloads.

Agentic Layer

Check Point Software also pointed to changes in application design as organisations move beyond chat interfaces. It said "Agentic AI" that interacts with enterprise systems increases the attack surface.

It described a model that uses insights from an AI red-team platform and applies them through a web application firewall for runtime protection of large language model inputs and outputs.

The company listed prompt injection, jailbreaking and LLM poisoning among the threats it said security teams aim to block at the source.

User Governance

Check Point Software also described a user layer focused on employee adoption of GenAI tools and associated compliance requirements. It said organisations face pressure to maintain audit trails and manage regulatory exposure.

It also pointed to data leakage risks. It said real-time data loss prevention aims to reduce the likelihood that proprietary code or customer data appears in public model training sets.

Leadership Focus

Check Point Software said the separation between infrastructure purchasing and later-stage security coverage no longer fits the scale of AI deployments. It called for joint planning between CIO-led infrastructure functions and security leadership.

It said leadership should focus on three pillars: visibility, isolation and performance. It described telemetry as central to visibility, including identification of shadow GenAI usage and unauthorised servers. It described micro-segmentation as central to isolation and limiting lateral movement between sensitive workloads. It described DPU integration as central to performance, with security scaling in line with AI traffic without introducing latency.