AI Infrastructure Security
The invisible attack surface
of your AI infrastructure.
Most AI security incidents don't happen in the model - they happen in the surrounding infrastructure. Open MLflow instances. Unsecured model registries. Unverified base models. Unprotected training pipelines. We check everything.
MITRE ATLAS · AML.T0010 · AML.T0020 · real configuration
- Fixed-price quote: from EUR 12,000
- Quote turnaround: 48h (business days)
- MLOps platforms tested: 6
- Subcontractors: 0
The Problem
Most AI attacks don't target the model
AI security is often viewed as a purely algorithmic problem - adversarial attacks, prompt injection, jailbreaking. But in practice, the infrastructure surrounding the model is the weakest link. Unsecured MLOps environments open up a broad, real attack surface to adversaries - beyond any ML-specific technique.
Open Jupyter notebooks and MLflow UIs
Many ML teams run experiment tracking interfaces without authentication on the internal network - accessible to any attacker with network access. A single compromised workstation is sufficient.
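A minimal sketch of what such an exposure check can look like: an unauthenticated request against the MLflow REST API, using only the Python standard library. The host name is a placeholder and the exact API path varies between MLflow versions, so treat both as assumptions, not a fixed recipe.

```python
from urllib import request, error

def classify_status(status: int) -> str:
    """Map the HTTP status of an unauthenticated probe to a verdict."""
    if status == 200:
        return "EXPOSED: API reachable without credentials"
    if status in (401, 403):
        return "protected: authentication required"
    return f"inconclusive (HTTP {status})"

def probe_mlflow(base_url: str, timeout: float = 3.0) -> str:
    """Probe an MLflow tracking server without credentials (host is hypothetical)."""
    url = f"{base_url}/api/2.0/mlflow/experiments/search?max_results=1"
    try:
        with request.urlopen(url, timeout=timeout) as resp:
            return classify_status(resp.status)
    except error.HTTPError as e:
        return classify_status(e.code)
    except OSError:
        return "unreachable"

# Example (target host is an assumption):
# print(probe_mlflow("http://mlflow.internal:5000"))
```

A 200 response to this call means any workstation on the network can read experiments, parameters and artefact locations.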
Model registry without signing and audit trail
Who can promote which model? Has the production model been replaced by a prepared model? Without cryptographic signing and immutable logs this cannot be determined.
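The missing control can be sketched in a few lines: sign the artefact digest at promotion time and verify it before serving. HMAC stands in for a real asymmetric signature scheme (in production, something like Sigstore/cosign) so the sketch runs with the standard library alone; the key and payloads are illustrative.

```python
import hashlib
import hmac

def sign_model(artifact: bytes, key: bytes) -> dict:
    """Record the artefact digest and a signature over it at promotion time."""
    digest = hashlib.sha256(artifact).hexdigest()
    sig = hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "signature": sig}

def verify_model(artifact: bytes, record: dict, key: bytes) -> bool:
    """Verify both the digest (artefact unchanged) and the signature (record untampered)."""
    digest = hashlib.sha256(artifact).hexdigest()
    if digest != record["sha256"]:
        return False  # artefact was replaced or corrupted
    expected = hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

key = b"registry-signing-key"   # illustrative only
model = b"model-weights-bytes"
record = sign_model(model, key)
assert verify_model(model, record, key)
assert not verify_model(b"tampered weights", record, key)
```

Without such a record, a swapped production model is indistinguishable from the original.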
Unverified base models from public sources
New model checkpoints are uploaded to Hugging Face daily - without central security review. Compromised models with backdoors are often only discovered after fine-tuning and production deployment.
REAL ATTACK SCENARIO - ML PIPELINE COMPROMISE
MITRE ATLAS - RELEVANT TECHNIQUES
What we test
Six layers of AI infrastructure security
From the data pipeline to the serving endpoint - we review the entire ML lifecycle for vulnerabilities.
MLOps Pipeline Security
Security review of the CI/CD infrastructure for ML: training pipelines (Kubeflow, Airflow, GitHub Actions), experiment tracking (MLflow, Weights & Biases), authentication, authorisation, secrets management and network segmentation. Review for pipeline injection possibilities and unauthorised model manipulation.
Model Registry Security
Access control (RBAC for read, write, promote), cryptographic signing of model artefacts, immutable audit logs, network access restriction and integrity checks of stored models. Review for privilege escalation paths and unauthorised model substitution.
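The core of such an RBAC review is that uploading a model must not imply the right to ship it. A minimal sketch of the separation, with hypothetical role names:

```python
# Separate read / write / promote so that no single role can both
# introduce an artefact and push it to production.
ROLE_PERMS = {
    "viewer":   {"read"},
    "engineer": {"read", "write"},
    "releaser": {"read", "promote"},
}

def can(role: str, action: str) -> bool:
    """Check whether a role holds a given registry permission."""
    return action in ROLE_PERMS.get(role, set())

assert can("engineer", "write") and not can("engineer", "promote")
assert can("releaser", "promote") and not can("releaser", "write")
```

Privilege escalation paths typically appear where one role quietly holds both `write` and `promote`.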
AI API Endpoint Security
Security review of inference APIs: authentication and authorisation, rate limiting against model extraction and DoS, input validation, output filtering, query pattern monitoring (extraction detection) and transport security (TLS, mTLS). Review for enumeration, reconnaissance and side-channel attacks.
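The rate-limiting part of this review checks whether bulk querying is actually throttled. A token bucket is one common way to do that; the sketch below uses illustrative parameters and is not tied to any particular gateway product.

```python
import time

class TokenBucket:
    """Token bucket: refills `rate` tokens per second, bursts up to `capacity`."""
    def __init__(self, rate: float, capacity: int):
        self.rate, self.capacity = rate, capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# A burst of 100 back-to-back requests: only roughly `capacity` get through.
bucket = TokenBucket(rate=10.0, capacity=5)
allowed = sum(bucket.allow() for _ in range(100))
assert 5 <= allowed <= 10
```

An endpoint without this kind of throttle lets an attacker run extraction queries at full speed.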
Data Pipeline Integrity
Data lineage tracking, integrity checks on training data before processing, validation of data transformations and feature engineering steps. Review for unauthorised modification of training data (data poisoning vectors) in the pipeline.
AI Supply Chain Audit
Inventory and verification of all external AI components: pre-trained models (Hugging Face, TF Hub), training frameworks and dependencies (PyTorch, Transformers, LangChain), public datasets and data labelling providers. AI SBOM creation. Backdoor review of selected base models.
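To show the shape of the deliverable: a fragment of an AI SBOM with a hypothetical inventory. The structure loosely follows the CycloneDX 1.5 JSON format (which has component types for models and data), but it is a sketch, not the full schema, and a real SBOM is generated from the environment.

```python
import json

# Hypothetical component inventory - names, versions and the digest
# placeholder are illustrative, not real findings.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {"type": "machine-learning-model", "name": "bert-base-uncased",
         "hashes": [{"alg": "SHA-256", "content": "<pinned digest>"}]},
        {"type": "library", "name": "torch", "version": "2.3.1"},
        {"type": "data", "name": "internal-training-set-v4"},
    ],
}
print(json.dumps(sbom, indent=2))
```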
Container & Cloud Security for AI
Security review of container infrastructure for AI workloads: Docker images for known vulnerabilities, Kubernetes RBAC for ML namespaces, GPU resource isolation, cloud IAM permissions (S3, GCS, Azure Blob for model artefacts) and network policies for training and inference clusters.
MLOps Platforms
All platforms - one assessment
We test all mainstream MLOps platforms and custom pipelines. The test approach is tailored individually to your infrastructure.
Kubeflow
Kubernetes-native
- Pipelines endpoint auth
- KFServing inference auth
- Katib RBAC
- Network segmentation
- Istio configuration
MLflow
Open Source
- Tracking server auth
- Artifact store ACL
- Model registry RBAC
- Artifact signing
- API endpoint exposure
Amazon SageMaker
AWS Cloud
- IAM role permissions
- S3 bucket policies
- Endpoint configuration
- Feature store access
- Studio domain isolation
Google Vertex AI
GCP Cloud
- Service account rights
- GCS bucket permissions
- Endpoint IAM policies
- Matching Engine access
- Workbench isolation
Azure Machine Learning
Azure Cloud
- Managed identity
- Datastore access rights
- Compute network segmentation
- Endpoint auth
- Registry RBAC
Custom Pipelines
Airflow · Prefect · ZenML
- DAG access control
- Secrets management
- Artifact integrity
- Pipeline injection vectors
- Monitoring blind spots
Methodology
How an AI Infrastructure Assessment works
Classical penetration testing craft combined with AI-specific review techniques - per MITRE ATLAS and NIST AI RMF.
2-3 days
Scoping & Infrastructure Inventory
Complete inventory of AI infrastructure: MLOps platforms, training pipelines, serving endpoints, data stores and external dependencies. Threat modelling per MITRE ATLAS - which attacker profiles and tactics are realistic? Creation of the initial AI SBOM: which pre-trained models and datasets are in use?
2-3 days
Network & Access Reconnaissance
Network scanning of ML infrastructure: which ports and services are reachable internally/externally? Service banner analysis for MLflow, Jupyter, TensorBoard, KFServing. Cloud IAM enumeration: which roles and permissions are assigned? Identification of overprivileged service accounts and roles.
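Service banner analysis in this phase can be as simple as matching response fragments against known markers. The markers below are illustrative heuristics, not authoritative signatures:

```python
# Heuristic fingerprints for common ML tooling found on internal networks.
MARKERS = {
    "MLflow": ["<title>mlflow</title>", "mlflow"],
    "Jupyter": ["jupyter"],
    "TensorBoard": ["tensorboard"],
}

def fingerprint(body: str) -> list[str]:
    """Return the services whose markers appear in an HTTP response body."""
    low = body.lower()
    return [svc for svc, marks in MARKERS.items() if any(m in low for m in marks)]

assert fingerprint("<html><title>MLflow</title></html>") == ["MLflow"]
assert fingerprint("Welcome to TensorBoard") == ["TensorBoard"]
assert fingerprint("nginx default page") == []
```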
4-6 days
MLOps Platform Penetration Test
Active penetration test of all identified MLOps components: authentication bypass tests, privilege escalation attempts, pipeline injection tests (can unauthorised training jobs be triggered?), artefact manipulation tests (can models be replaced without authorisation?) and lateral movement analysis within the ML cluster.
2-4 days
API Endpoint & Inference Security
Security review of all inference APIs: authentication mechanisms, authorisation granularity, rate limiting effectiveness (simulation of model extraction queries), input validation, output filtering, TLS configuration and monitoring coverage. Estimation of extraction costs: how many queries for an accurate surrogate model?
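The extraction-cost estimate is back-of-envelope arithmetic: how much would an attacker pay, at your published API prices, to gather enough query/response pairs for a surrogate model? The numbers below are purely illustrative.

```python
def extraction_cost(queries_needed: int, price_per_1k: float) -> float:
    """Rough cost an attacker pays to clone a model through its API."""
    return queries_needed / 1000 * price_per_1k

# Illustrative: 2 million queries at EUR 0.50 per 1,000 calls.
cost = extraction_cost(2_000_000, 0.50)
assert cost == 1000.0
```

If that figure is small relative to the model's development cost, rate limiting and query monitoring carry real economic weight.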
2-4 days
Supply Chain & Artefact Audit
Detailed review of all external AI components: backdoor detection in used base models (Neural Cleanse, STRIP, ABS methods), CVE scan of all ML framework dependencies, trustworthiness assessment of training data sources and review of the model signing chain. Full AI SBOM finalisation.
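Part of the artefact review is mechanical: every downloaded checkpoint should be compared against a digest pinned at review time. A minimal sketch with illustrative bytes:

```python
import hashlib

def verify_download(data: bytes, pinned_sha256: str) -> bool:
    """Compare a downloaded checkpoint against a digest pinned at review time."""
    return hashlib.sha256(data).hexdigest() == pinned_sha256

blob = b"example checkpoint bytes"
pin = hashlib.sha256(blob).hexdigest()
assert verify_download(blob, pin)
assert not verify_download(blob + b"!", pin)
```

A model pulled by mutable tag or branch name, with no pinned digest, is exactly the gap this audit flags.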
2-3 days
Reporting & Remediation Roadmap
Technical report with CVSS scoring, MITRE ATLAS mapping and prioritised remediation roadmap. Compliance mapping: NIST AI RMF (Govern, Map, Measure, Manage), ISO 42001 operational controls and EU AI Act Art. 15 (robustness). AI SBOM deliverable. Management summary and optional closing presentation with the ML team.
Typical total duration: 15-23 days - depending on the number of MLOps platforms and infrastructure complexity.
You receive a binding fixed-price quote within 48 hours (business days), starting at EUR 12,000.
Frameworks & Standards
One assessment - all compliance evidence
Every finding is mapped to relevant standards and regulations. Your report is ready for audits and certifications.
MITRE ATLAS
The AI-specific ATT&CK framework documents real attack TTPs against AI systems. We structure our attack scenarios along the ATLAS matrix - from reconnaissance to impact.
Supply Chain · Pipeline · Evasion
NIST AI RMF
The NIST AI Risk Management Framework defines operational AI security controls. Our assessment provides evidence for all four core functions: Govern, Map, Measure, Manage.
Incl. Adversarial ML Profile
ISO/IEC 42001
The operational security of AI infrastructure is a core component of ISO 42001 controls. Our assessment delivers audit-ready evidence for certification processes.
38 controls · 9 objectives
EU AI Act - Art. 15
High-risk AI systems must be robust against attacks on infrastructure. Our report maps all findings to Art. 15 and delivers auditable compliance evidence.
High-risk AI · GPAI since Aug. 2025
NIS-2 & NIST
AI infrastructure in critical sectors is subject to NIS-2 security requirements. The assessment report is designed as evidence for supervisory authorities.
Critical infrastructure · Financial sector
AI SBOM Deliverable
Every assessment concludes with a complete AI Software Bill of Materials: inventory of all AI components, provenance data and risk classification of supply chain elements.
Machine-readable · CycloneDX-compatible
Why AWARE7
What sets us apart from other providers
Pure awareness platforms don't test systems. Pure consulting groups are too far removed. AWARE7 combines both: we hack your infrastructure and train your staff - geared to mid-sized businesses, personal, without enterprise overhead.
Research and teaching as our foundation
Around 20% of our revenue comes from research projects for the BSI and BMBF. Our studies analyse millions of websites and tens of thousands of phishing emails - published at ACM and Springer conferences. Three of our executives also hold professorships at German universities.
Digital sovereignty - no compromises
All data is stored and processed exclusively in Germany - without US cloud providers. No freelancers, no subcontractors in the value chain. All staff are permanently employed and bound by uniform legal obligations. VS-NfD-compliant on request.
Fixed price within 48h - predictable project timelines
Within 48 hours (business days) you receive a binding fixed-price quote - no hourly-rate risk, no follow-up charges, no surprises. A well-rehearsed team and standardised processes give you a clear schedule with a defined start and end date.
Your dedicated contact - available at all times
A personal project lead accompanies you from the first call through to the re-test. You book appointments directly with your contact - no ticket systems, no call centre, no rotation through changing consultants. Continuity builds trust.
Who are we the right partner for?
Mid-sized companies with 50-2,000 employees
Companies that need real security - without paying DAX-corporation consultancy rates. Fixed price, clear scope, one point of contact.
IT leaders & CISOs
Who need to make a convincing internal case - and need a report written in boardroom language, not just technical findings.
Regulated industries
KRITIS, healthcare, financial services: NIS-2, ISO 27001, DORA - we know the requirements and deliver evidence that auditors accept.
Contributions to industry standards
OWASP · 2023
OWASP Top 10 for Large Language Models
Prof. Dr. Matteo Große-Kampmann is a core-team contributor to the internationally recognised OWASP LLM security standard.
BSI · Allianz für Cyber-Sicherheit
Management von Cyber-Risiken
Prof. Dr. Matteo Große-Kampmann contributed to the official BSI handbook for company leadership (German edition).
Frequently asked questions about AI Infrastructure Security
Everything you need to know about MLOps security, AI supply chain and model registry hardening.
What is MLOps Security?
Why is the ML pipeline an attack vector?
What is an AI SBOM and why do I need it?
What is AI Supply Chain Security?
Which MLOps platforms do you test?
What does an AI Infrastructure Assessment cost?
How do I secure my model registry?
How secure is your AI infrastructure really?
Our experts review your MLOps pipelines, model registry, inference APIs and AI supply chain - with fixed-price commitment and AI SBOM deliverable.
Free · 30 minutes · No obligation