
AI Infrastructure Security

The invisible attack surface
of your AI infrastructure.

Most AI security incidents don't happen in the model - they happen in the surrounding infrastructure. Open MLflow instances. Unsecured model registries. Unverified base models. Unprotected training pipelines. We check everything.

MITRE ATLAS · NIST AI RMF · ISO 42001 · AI SBOM
ML PIPELINE - ATTACK PATHS
Data Ingestion VULNERABLE
Training Pipeline EXPOSED
Experiment Tracking OPEN
Model Registry UNSECURED
Serving Endpoint UNMONITORED
Supply Chain UNVERIFIED

MITRE ATLAS · AML.T0010 · AML.T0020 · real configuration

Fixed-price quote
from EUR 12,000
Quote turnaround
48h (business days)
MLOps platforms tested
6
Subcontractors
0

The Problem

Most AI attacks don't target the model

AI security is often viewed as a purely algorithmic problem - adversarial attacks, prompt injection, jailbreaking. But in practice, the infrastructure surrounding the model is the weakest link. Unsecured MLOps environments open up a broad, real attack surface to adversaries - beyond any ML-specific technique.

Open Jupyter notebooks and MLflow UIs

Many ML teams run experiment tracking interfaces without authentication on the internal network - accessible to any attacker with network access. A single compromised workstation is sufficient.

Model registry without signing and audit trail

Who can promote which model? Has the production model been swapped for a tampered one? Without cryptographic signing and immutable logs, these questions cannot be answered.
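The signing and verification gate described above can be sketched in a few lines. This is an illustrative minimum, not a production scheme: it uses HMAC-SHA256 as a stand-in for real PKI-based artifact signing, and all function names are our own.

```python
import hashlib
import hmac

def artifact_digest(path: str) -> str:
    """SHA-256 digest of a model artifact file, streamed in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def sign_digest(digest: str, key: bytes) -> str:
    """HMAC-SHA256 over the digest (stand-in for asymmetric signing)."""
    return hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()

def verify_before_promote(digest: str, signature: str, key: bytes) -> bool:
    """Gate registry promotion on a valid signature; constant-time compare."""
    return hmac.compare_digest(sign_digest(digest, key), signature)
```

In practice the signature would be created by a release key the training pipeline never holds, and the verification result written to an immutable audit log.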

Unverified base models from public sources

New model checkpoints are uploaded to Hugging Face daily - without central security review. Compromised models with backdoors are often only discovered after fine-tuning and production deployment.

REAL ATTACK SCENARIO - ML PIPELINE COMPROMISE

RECON › Nmap scan: port 5000 open - MLflow Tracking Server
ACCESS › No auth token required - direct UI access
ENUM › 124 experiments · 18 model artefacts · S3 bucket URL
UPLOAD › Tampered model uploaded to registry - promoted
IMPACT › Backdoor model deployed - MITRE ATLAS AML.T0020
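The RECON and ACCESS steps above boil down to one check: does the tracking server answer API requests without credentials? A hypothetical triage helper (the endpoint path and response heuristics are assumptions, not a scanner) might look like this:

```python
# Interpret the response of an unauthenticated GET against an MLflow-style
# REST API, e.g. http://host:5000/api/2.0/mlflow/experiments/search
def classify_tracking_server(status_code: int, body: str) -> str:
    """Rough triage of a tracking-server probe (illustrative heuristics)."""
    if status_code in (401, 403):
        return "auth-enforced"
    if status_code == 200 and '"experiments"' in body:
        return "OPEN: experiment list readable without credentials"
    return "inconclusive"
```

An "OPEN" verdict corresponds to the ENUM step: experiments, artifacts, and backing-store URLs become readable to anyone with network access.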

MITRE ATLAS - RELEVANT TECHNIQUES

AML.T0010 ML Supply Chain Compromise
AML.T0019 Publish Poisoned Datasets
AML.T0020 Poison Training Data
AML.T0044 Full ML Model Access
AML.T0047 AI-Enabled Product or Service

What we test

Six layers of AI infrastructure security

From the data pipeline to the serving endpoint - we review the entire ML lifecycle for vulnerabilities.

01

MLOps Pipeline Security

Security review of the CI/CD infrastructure for ML: training pipelines (Kubeflow, Airflow, GitHub Actions), experiment tracking (MLflow, Weights & Biases), authentication, authorisation, secrets management and network segmentation. Review for pipeline injection possibilities and unauthorised model manipulation.

Kubeflow · MLflow · Airflow
02

Model Registry Security

Access control (RBAC for read, write, promote), cryptographic signing of model artefacts, immutable audit logs, network access restriction and integrity checks of stored models. Review for privilege escalation paths and unauthorised model substitution.

RBAC · Artifact Signing · Audit Log
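The RBAC dimension above (separate read, write, promote rights) can be sketched as a deny-by-default permission matrix. Roles and the matrix itself are illustrative assumptions:

```python
# Minimal deny-by-default RBAC sketch for registry operations.
REGISTRY_PERMISSIONS = {
    "viewer":   {"read"},
    "engineer": {"read", "write"},
    "release":  {"read", "write", "promote"},
}

def is_allowed(role: str, operation: str) -> bool:
    """Unknown roles or operations get no access by default."""
    return operation in REGISTRY_PERMISSIONS.get(role, set())
```

Privilege escalation paths typically appear where a single role combines "write" and "promote": whoever can upload an artifact can also push it to production.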
03

AI API Endpoint Security

Security review of inference APIs: authentication and authorisation, rate limiting against model extraction and DoS, input validation, output filtering, query pattern monitoring (extraction detection) and transport security (TLS, mTLS). Review for enumeration, reconnaissance and side-channel attacks.

Rate Limiting · Auth · TLS
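Rate limiting against model extraction is usually a token-bucket policy in front of the inference API. A minimal sketch with an injectable clock (rate and burst values are assumptions, tuned per assessment):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter for an inference endpoint (illustrative)."""
    def __init__(self, rate: float, capacity: float, now=time.monotonic):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.now = now
        self.last = now()

    def allow(self) -> bool:
        """Refill proportionally to elapsed time, then spend one token."""
        t = self.now()
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

During a test we measure whether such limits actually cap sustained query throughput, since extraction attacks are patient and spread queries over time.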
04

Data Pipeline Integrity

Data lineage tracking, integrity checks on training data before processing, validation of data transformations and feature engineering steps. Review for unauthorised modification of training data (data poisoning vectors) in the pipeline.

Data Lineage · Integrity Check · Poisoning
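A basic lineage control is to fingerprint the training data and refuse to train when the hash no longer matches the recorded value. A sketch under the assumption that records are available as raw bytes:

```python
import hashlib

def dataset_fingerprint(records: list[bytes]) -> str:
    """Order-sensitive SHA-256 fingerprint over raw training records."""
    h = hashlib.sha256()
    for rec in records:
        h.update(hashlib.sha256(rec).digest())
    return h.hexdigest()

def verify_lineage(records: list[bytes], expected: str) -> bool:
    """Gate the training job: fail if data drifted from the recorded hash."""
    return dataset_fingerprint(records) == expected
```

This closes one common poisoning vector: silent modification of training data between ingestion and the training job.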
05

AI Supply Chain Audit

Inventory and verification of all external AI components: pre-trained models (Hugging Face, TF Hub), training frameworks and dependencies (PyTorch, Transformers, LangChain), public datasets and data labelling providers. AI SBOM creation. Backdoor review of selected base models.

AI SBOM · Provenance · Backdoor Check
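An AI SBOM deliverable can be a machine-readable document in the CycloneDX style. The sketch below is schematic: the field names follow CycloneDX conventions (which since version 1.5 include a `machine-learning-model` component type), but consult the specification for the full required schema.

```python
import json

def minimal_ai_sbom(model_name: str, model_sha256: str, deps: list[str]) -> str:
    """Schematic CycloneDX-style ML-BOM; not a complete, validated document."""
    bom = {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "components": [
            {
                "type": "machine-learning-model",
                "name": model_name,
                "hashes": [{"alg": "SHA-256", "content": model_sha256}],
            }
        ] + [{"type": "library", "name": d} for d in deps],
    }
    return json.dumps(bom, indent=2)
```

When a backdoored checkpoint surfaces on a public hub, such an inventory lets you answer in minutes whether any of your systems consumed it.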
06

Container & Cloud Security for AI

Security review of container infrastructure for AI workloads: Docker images for known vulnerabilities, Kubernetes RBAC for ML namespaces, GPU resource isolation, cloud IAM permissions (S3, GCS, Azure Blob for model artefacts) and network policies for training and inference clusters.

Kubernetes · Cloud IAM · Container
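A recurring cloud IAM finding is an artifact bucket whose policy grants read access to everyone. A simplified check over an S3-style policy document (the statement shape is reduced to the fields that matter here) might look like:

```python
def _actions(stmt: dict) -> list[str]:
    """Normalise the Action field, which may be a string or a list."""
    a = stmt.get("Action", [])
    return [a] if isinstance(a, str) else list(a)

def bucket_publicly_readable(policy: dict) -> bool:
    """Flag policies that allow s3:Get* to an anonymous principal."""
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        if stmt.get("Principal") in ("*", {"AWS": "*"}):
            if any(a.startswith("s3:Get") for a in _actions(stmt)):
                return True
    return False
```

For model-weight buckets this is the difference between a contained incident and free download of your intellectual property.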

MLOps Platforms

All platforms - one assessment

We test all mainstream MLOps platforms and custom pipelines. The test approach is tailored individually to your infrastructure.

Kubeflow

Kubernetes-native
  • Pipelines endpoint auth
  • KFServing inference auth
  • Katib RBAC
  • Network segmentation
  • Istio configuration

MLflow

Open Source
  • Tracking server auth
  • Artifact store ACL
  • Model registry RBAC
  • Artifact signing
  • API endpoint exposure

Amazon SageMaker

AWS Cloud
  • IAM role permissions
  • S3 bucket policies
  • Endpoint configuration
  • Feature store access
  • Studio domain isolation

Google Vertex AI

GCP Cloud
  • Service account rights
  • GCS bucket permissions
  • Endpoint IAM policies
  • Matching Engine access
  • Workbench isolation

Azure Machine Learning

Azure Cloud
  • Managed identity
  • Datastore access rights
  • Compute network segmentation
  • Endpoint auth
  • Registry RBAC

Custom Pipelines

Airflow · Prefect · ZenML
  • DAG access control
  • Secrets management
  • Artifact integrity
  • Pipeline injection vectors
  • Monitoring blind spots

Methodology

How an AI Infrastructure Assessment works

Classical penetration testing craft combined with AI-specific review techniques - per MITRE ATLAS and NIST AI RMF.

01

2-3 days

Scoping & Infrastructure Inventory

Complete inventory of AI infrastructure: MLOps platforms, training pipelines, serving endpoints, data stores and external dependencies. Threat modelling per MITRE ATLAS - which attacker profiles and tactics are realistic? Creation of the initial AI SBOM: which pre-trained models and datasets are in use?

MITRE ATLAS · AI SBOM
02

2-3 days

Network & Access Reconnaissance

Network scanning of ML infrastructure: which ports and services are reachable internally/externally? Service banner analysis for MLflow, Jupyter, TensorBoard, KFServing. Cloud IAM enumeration: which roles and permissions are assigned? Identification of overprivileged service accounts and roles.

Nmap · Cloud IAM Review · Service Enum
03

4-6 days

MLOps Platform Penetration Test

Active penetration test of all identified MLOps components: authentication bypass tests, privilege escalation attempts, pipeline injection tests (can unauthorised training jobs be triggered?), artefact manipulation tests (can models be replaced without authorisation?) and lateral movement analysis within the ML cluster.

Custom Exploits · Pipeline Injection · RBAC Bypass
04

2-4 days

API Endpoint & Inference Security

Security review of all inference APIs: authentication mechanisms, authorisation granularity, rate limiting effectiveness (simulation of model extraction queries), input validation, output filtering, TLS configuration and monitoring coverage. Estimation of extraction costs: how many queries for an accurate surrogate model?

API Testing · Extraction Sim · TLS Audit
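The extraction-cost estimation in this step is back-of-envelope arithmetic: how much money and time would an attacker need to collect enough labelled queries for a surrogate model? All inputs below are assessment-specific assumptions, not fixed thresholds:

```python
def extraction_effort(queries_needed: int, price_per_1k_queries: float,
                      rate_limit_per_min: int) -> tuple[float, float]:
    """Return (cost, hours) an attacker needs to gather a surrogate
    training set. Inputs are estimates from the concrete assessment."""
    cost = queries_needed / 1000 * price_per_1k_queries
    hours = queries_needed / rate_limit_per_min / 60
    return cost, hours
```

For example, 600,000 queries at EUR 0.50 per thousand behind a limit of 100 queries per minute would cost roughly EUR 300 and 100 hours - a useful sanity check on whether your rate limits actually deter extraction.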
05

2-4 days

Supply Chain & Artefact Audit

Detailed review of all external AI components: backdoor detection in used base models (Neural Cleanse, STRIP, ABS methods), CVE scan of all ML framework dependencies, trustworthiness assessment of training data sources and review of the model signing chain. Full AI SBOM finalisation.

Neural Cleanse · CVE Scanning · Provenance Audit
06

2-3 days

Reporting & Remediation Roadmap

Technical report with CVSS scoring, MITRE ATLAS mapping and prioritised remediation roadmap. Compliance mapping: NIST AI RMF (Govern, Map, Measure, Manage), ISO 42001 operational controls and EU AI Act Art. 15 (robustness). AI SBOM deliverable. Management summary and optional closing presentation with the ML team.

NIST AI RMF · ISO 42001 · AI SBOM

Typical total duration: 15-23 days - depending on the number of MLOps platforms and infrastructure complexity.
You receive a binding fixed-price quote within 48 business hours from EUR 12,000.

Frameworks & Standards

One assessment - all compliance evidence

Every finding is mapped to relevant standards and regulations. Your report is ready for audits and certifications.

MITRE ATLAS

The AI-specific ATT&CK framework documents real attack TTPs against AI systems. We structure our attack scenarios along the ATLAS matrix - from reconnaissance to impact.

Supply Chain · Pipeline · Evasion

NIST AI RMF

The NIST AI Risk Management Framework defines operational AI security controls. Our assessment provides evidence for all four core functions: Govern, Map, Measure, Manage.

Incl. Adversarial ML Profile

ISO/IEC 42001

The operational security of AI infrastructure is a core component of ISO 42001 controls. Our assessment delivers audit-ready evidence for certification processes.

38 controls · 9 objectives

EU AI Act - Art. 15

High-risk AI systems must be robust against attacks on infrastructure. Our report maps all findings to Art. 15 and delivers auditable compliance evidence.

High-risk AI · GPAI since Aug. 2025

NIS-2 & NIST

AI infrastructure in critical sectors is subject to NIS-2 security requirements. The assessment report is designed as evidence for supervisory authorities.

Critical infrastructure · Financial sector

AI SBOM Deliverable

Every assessment concludes with a complete AI Software Bill of Materials: inventory of all AI components, provenance data and risk classification of supply chain elements.

Machine-readable · CycloneDX-compatible

Why AWARE7

What sets us apart from other providers

Pure awareness platforms don't test systems. Pure consulting groups are too far removed. AWARE7 combines both: we hack your infrastructure and train your staff - suited to mid-sized companies, personal, without enterprise overhead.

Research and teaching as our foundation

Around 20% of our revenue comes from research projects for the BSI and BMBF. Our studies analyse millions of websites and tens of thousands of phishing emails - published at ACM and Springer conferences. Three of our executives are also professors at German universities.

Digital sovereignty - no compromises

All data is stored and processed exclusively in Germany - without US cloud providers. No freelancers, no subcontractors in the value chain. All staff are directly employed with full social insurance and uniformly bound by the same legal obligations. VS-NfD-compliant on request.

Fixed price within 24 hours - predictable project timelines

You receive a binding fixed-price quote within 24 hours - no hourly-rate risk, no supplementary claims, no surprises. A well-coordinated team and standardised processes give you a clear schedule with a defined start and end date.

Your dedicated contact - reachable at any time

A personal project lead accompanies you from the first conversation to the re-test. You book appointments directly with your contact - no ticket systems, no call centre, no handovers between changing consultants. Continuity builds trust.

Who are we the right partner for?

Mid-sized companies with 50-2,000 employees

Companies that need real security - without paying for a DAX-corporation service provider. Fixed price, clear scope, one point of contact.

IT managers & CISOs

Who need to make a convincing case internally - and need a report written in boardroom language, not just technical findings.

Regulated industries

KRITIS, healthcare, financial services: NIS-2, ISO 27001, DORA - we know the requirements and deliver evidence that auditors accept.

Contributions to industry standards

LLM

OWASP · 2023

OWASP Top 10 for Large Language Models

Prof. Dr. Matteo Große-Kampmann as a contributor in the core team of the internationally recognised OWASP LLM security standard.

BSI

BSI · Allianz für Cyber-Sicherheit

Management von Cyber-Risiken

Prof. Dr. Matteo Große-Kampmann as a contributor to the official BSI handbook for company management (German edition).

Frequently asked questions about AI Infrastructure Security

Everything you need to know about MLOps security, AI supply chain and model registry hardening.

What is MLOps Security?

MLOps Security refers to the protection of the entire operational infrastructure behind AI and ML systems: training pipelines, experiment tracking, model registry, serving infrastructure, monitoring and data pipelines. While many AI security discussions focus on the model itself, real incidents show that most attacks target the infrastructure around it - unsecured MLflow instances, open Jupyter notebooks, unprotected S3 buckets containing model weights or compromised CI/CD pipelines for ML training. MLOps Security ensures that the entire ML lifecycle is secured against attacks.

Why are training pipelines a worthwhile target for attackers?

An ML training pipeline is fundamentally an automated software development process - with the same attack surfaces as classical CI/CD pipelines, plus additional AI-specific risks. Compromised pipelines enable: data poisoning through manipulation of training data before training, model substitution (replacing the produced model with a tampered one), backdoor injection into the fine-tuning step and exfiltration of training intellectual property. Many ML teams run Jupyter notebooks, MLflow or Kubeflow without consistent authentication and network segmentation - an open gateway for attackers with network access.

What is an AI SBOM?

An AI SBOM (AI Software Bill of Materials, also ML-BOM or model card with provenance data) is a machine-readable inventory of all components that went into an AI system: pre-trained base models, training datasets, frameworks used (PyTorch, TensorFlow, Hugging Face Transformers), data processing libraries and external APIs. Analogous to the SBOM in classical software development (required by the US Executive Order on Cybersecurity), an AI SBOM enables rapid identification of affected components when new vulnerabilities emerge (e.g. a backdoor discovered in a popular base model on Hugging Face). For the EU AI Act and ISO 42001, documentation of the technical supply chain is a mandatory component.

What does AI Supply Chain Security cover?

AI Supply Chain Security protects all external components that flow into your AI workflow: pre-trained models from public repositories (Hugging Face, TensorFlow Hub, PyPI packages such as transformers or langchain), public training datasets (Common Crawl, ImageNet, Wikipedia dumps), data labelling providers (crowdsourcing platforms) and cloud ML services (SageMaker, Vertex AI). Attackers who compromise a widely used pre-trained checkpoint can infect thousands of downstream models with a single backdoor - enormous leverage. MITRE ATLAS documents real supply chain attacks on AI systems under AML.T0010 (ML Supply Chain Compromise).

Which MLOps platforms do you test?

We test all mainstream MLOps platforms and custom setups: Kubeflow (Kubernetes-native ML pipelines, including Katib hyperparameter tuning and KFServing endpoints), MLflow (experiment tracking, model registry, MLflow Projects and MLflow Serving), Amazon SageMaker (Studio, Pipelines, Model Registry, Endpoints, Feature Store), Google Vertex AI (Pipelines, Model Registry, Endpoints, Matching Engine), Azure Machine Learning (Pipelines, Model Registry, Managed Endpoints, Datastores) and fully custom pipelines based on Airflow, Prefect, ZenML or custom Python scripts. The test approach is tailored individually to your platform and configuration.

What does an AI Infrastructure Assessment cost?

An AI Infrastructure Assessment starts from EUR 12,000 for a focused review of a single MLOps platform (e.g. MLflow plus associated API endpoints only). A comprehensive assessment of the entire AI infrastructure - pipeline, registry, APIs, supply chain, container security - is in the range of EUR 20,000 to EUR 35,000. The exact costs depend on the number of MLOps platforms, the complexity of the data pipelines and the desired test scope. You receive a binding fixed-price quote within 48 business hours - no hourly rates, no additional charges.

How do you secure a model registry?

A model registry is the central artefact repository for your ML models - and often the least secured element of the entire MLOps infrastructure. Critical protective measures: role-based access control (RBAC) for all registry operations (read, write, promote, delete), immutable audit logs of all changes, cryptographic signing of model artefacts (analogous to package signing in software registries), network segmentation (no public internet access to the registry endpoint), secrets management for credentials and API keys and regular integrity checks of all stored artefacts. In the assessment we review all these dimensions and identify privilege escalation paths.

How secure is your AI infrastructure really?

Our experts review your MLOps pipelines, model registry, inference APIs and AI supply chain - with fixed-price commitment and AI SBOM deliverable.

Free · 30 minutes · No obligation