VAARHAFT
Technology

Image Trust
Layer

Proprietary deep learning models that detect fully AI-generated images and AI-manipulated photographs — at over 95% accuracy, in under 500 ms.

>95% Detection Accuracy
<500 ms Inference Time
2 Modes: Gen + Manipulation
Hero Visual — Neural Activation Map or 3D Model Rendering
Capabilities

Two Detection Modes.
One Unified Model.

The Image Trust Layer addresses both fundamental forms of visual deception — full AI synthesis and post-capture manipulation — within a single inference call.

Mode 01 · Active

AI Generation Detection

Identifies images that were entirely synthesized by generative models — GANs, diffusion models, and neural renderers. No original photograph exists.

  • GAN fingerprint analysis
  • Diffusion model artifact detection
  • Statistical frequency distribution analysis
  • Semantic inconsistency checks
Example Output — AI Generation Probability Visualization
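The statistical frequency analysis mentioned above can be illustrated in a few lines. The sketch below is not VAARHAFT's model — it is a toy feature, with illustrative function names and an arbitrary cutoff, showing the underlying idea: generated images often carry atypical high-frequency spectral statistics (e.g. upsampling artifacts) that separate them from natural sensor output.

```python
import numpy as np

def high_freq_energy_ratio(image: np.ndarray, cutoff: float = 0.25) -> float:
    """Share of spectral energy above a radial frequency cutoff.

    A crude single-number feature: smooth synthetic content concentrates
    energy at low frequencies, natural noise spreads it out.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Radial distance from the spectrum centre, normalised by image size
    r = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    return float(spectrum[r > cutoff].sum() / spectrum.sum())

# Synthetic example: a smooth gradient vs. a noisy image
rng = np.random.default_rng(0)
smooth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
noisy = rng.random((64, 64))
print(high_freq_energy_ratio(smooth) < high_freq_energy_ratio(noisy))  # True
```

A production detector would feed many such spectral features (alongside learned CNN features) into a classifier rather than thresholding one ratio.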
Mode 02 · Active

AI Manipulation Detection

Detects post-capture edits applied to authentic photographs — face swaps, object insertion, region inpainting, and AI-assisted retouching.

  • Splicing boundary localization
  • Copy-move forgery detection
  • Error Level Analysis (ELA)
  • Compression artifact inconsistency
Example Output — Manipulation Heatmap Overlay
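Copy-move forgery detection, listed above, rests on a simple observation: a cloned region produces identical (or near-identical) patches at two positions in the same image. The sketch below is an illustration only — it matches exact duplicate blocks by hash, whereas real detectors match near-duplicates robustly across compression and resampling.

```python
import hashlib

def copy_move_candidates(pixels, block=4):
    """Find pairs of identical non-overlapping blocks in a grayscale image.

    Duplicated blocks at different positions are the classic signature of
    copy-move forgery (a region cloned to hide or duplicate content).
    """
    h, w = len(pixels), len(pixels[0])
    seen = {}
    matches = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = tuple(pixels[y + dy][x + dx]
                          for dy in range(block) for dx in range(block))
            digest = hashlib.sha256(repr(patch).encode()).hexdigest()
            if digest in seen:
                matches.append((seen[digest], (y, x)))
            else:
                seen[digest] = (y, x)
    return matches

# Forge an 8x8 image by cloning the top-left 4x4 block to the bottom-right
img = [[(y * 8 + x) % 251 for x in range(8)] for y in range(8)]
for dy in range(4):
    for dx in range(4):
        img[4 + dy][4 + dx] = img[dy][dx]
print(copy_move_candidates(img))  # [((0, 0), (4, 4))]
```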
Deep Technology

Proprietary CNN
Architecture

No off-the-shelf foundation models. Our entire detection pipeline is built from scratch on fully custom convolutional architectures, trained exclusively on proprietary forensic datasets.

Architecture: Custom ResNet variants
Attention: Spatial self-attention
Training Set: >10M labeled images
Inference: Optimized ONNX runtime
Frequency: FFT artifact detection
ELA Engine: Error level analysis
Neural Architecture — v3.2 · Live
Input (RGB + EXIF) → Conv (Feature Maps) → Deep CNN (Residuals) → Attention (Spatial) → Output (Decision: AI Generated / Manipulated / Authentic)

The highlighted Deep CNN layer contains 42 custom residual blocks trained on manipulation-specific frequency artifacts invisible to the human eye.

Proprietary Dataset Composition — Category breakdown visualization
Data Advantage

Our Data Is
Our Moat

Models trained on public datasets fail against real-world fraud tactics. VAARHAFT assembles, curates, and labels proprietary training data that reflects exactly the manipulation techniques our customers encounter in production.

>10M labeled training images
50+ manipulation techniques
100+ camera models covered
Weekly retraining cadence
Performance Benchmarks
95.4% Overall Detection Accuracy

Validated on an independent hold-out test set of 100,000+ images spanning 50+ manipulation types and 30+ generative model architectures.

AI Generation Detection: 97.1%
Manipulation Detection: 93.8%
False Positive Rate: 3.2%
Processing Speed: 240 req/s
Model Evaluation Chart — Confusion Matrix, ROC Curve, or Precision-Recall
Analysis Pipeline

From Upload
to Result in
< 500 ms

Five tightly integrated processing stages run in parallel where possible, delivering forensic-grade analysis in roughly the time it takes to blink.

01

Image Ingestion & Normalization

JPEG, PNG, TIFF, HEIC, and PDF inputs are decoded and normalized to a canonical tensor; EXIF metadata is extracted alongside.

02

Multi-Scale Feature Extraction

A custom CNN backbone extracts hierarchical feature maps across 4 resolution scales, capturing both macro and micro-level patterns.
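The multi-scale idea can be sketched with an image pyramid. In the real backbone, coarser scales come from strided convolutions inside the network; the 2×2 average pooling below is a stand-in that shows how each level trades resolution for a more global view.

```python
def downsample(img):
    """Halve resolution by 2x2 average pooling (one pyramid level)."""
    h, w = len(img) // 2 * 2, len(img[0]) // 2 * 2
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4.0
             for x in range(0, w, 2)] for y in range(0, h, 2)]

def pyramid(img, levels=4):
    """Multi-scale stack: micro-level texture at full resolution,
    macro-level structure at the coarsest level."""
    scales = [img]
    for _ in range(levels - 1):
        scales.append(downsample(scales[-1]))
    return scales

# A 16x16 toy image yields the 4 resolution scales mentioned above
levels = pyramid([[float(x) for x in range(16)] for _ in range(16)], levels=4)
print([len(level) for level in levels])  # [16, 8, 4, 2]
```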

03

Frequency Domain Analysis

Fast Fourier Transform (FFT) and Error Level Analysis (ELA) surface invisible compression artifacts and generation fingerprints.

04

Ensemble Classification

Specialized sub-models score AI generation probability and manipulation probability independently; their votes are then combined via a calibrated ensemble.
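One common way to combine calibrated votes is weighted averaging in log-odds space. The sketch below uses made-up weights and probabilities; in practice the weights and bias would be fit on a held-out calibration set (e.g. via Platt scaling).

```python
import math

def logit(p):
    """Map a probability to log-odds."""
    return math.log(p / (1.0 - p))

def sigmoid(z):
    """Map log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def calibrated_vote(probs, weights, bias=0.0):
    """Combine sub-model probabilities by weighted log-odds averaging."""
    z = sum(w * logit(p) for p, w in zip(probs, weights)) + bias
    return sigmoid(z)

# Hypothetical sub-model votes for the two detection modes
gen_p = calibrated_vote([0.92, 0.88, 0.97], [0.4, 0.3, 0.3])
manip_p = calibrated_vote([0.10, 0.22, 0.05], [0.5, 0.3, 0.2])
print(round(gen_p, 2), round(manip_p, 2))
```

Averaging in log-odds rather than raw probabilities keeps a single near-certain sub-model from being diluted by hesitant ones.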

05

Explainability & Heatmap Generation

Grad-CAM and custom activation mapping produce spatial heatmaps that localize suspicious regions with pixel-level precision.
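The arithmetic at the heart of Grad-CAM is compact: weight each convolutional feature map by its average gradient, sum, and keep only positive evidence. The sketch below uses random arrays as stand-ins for a real network's activations and gradients; shapes and names are illustrative.

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Grad-CAM heatmap from a conv layer's (channels, H, W) tensors.

    Each channel is weighted by its spatially averaged gradient, the
    weighted maps are summed, ReLU'd, and normalised to [0, 1].
    """
    weights = gradients.mean(axis=(1, 2))             # one weight per channel
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum of maps
    cam = np.maximum(cam, 0.0)                        # ReLU: positive evidence only
    if cam.max() > 0:
        cam /= cam.max()                              # normalise for display
    return cam

# Stand-in tensors for an 8-channel 14x14 conv layer
rng = np.random.default_rng(1)
heatmap = grad_cam(rng.random((8, 14, 14)), rng.standard_normal((8, 14, 14)))
print(heatmap.shape)  # (14, 14)
```

The resulting low-resolution map is upsampled to image size and overlaid to localize suspicious regions.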

Why VAARHAFT

Technology Leadership
Through Specialization

We don't adapt general-purpose AI to forensics. We build forensic-specific architectures from the ground up — with proprietary data, purpose-built pipelines, and complete control over every layer.

Proprietary Training Data

Over 10 million images assembled and labeled in-house, covering 50+ manipulation types across diverse capture conditions. Generic public datasets cannot match this specificity.

Continuous Model Evolution

When a new diffusion model or GAN architecture ships publicly, we detect its output within weeks. Weekly retraining cycles ensure we stay ahead of the generative AI curve.

German Engineering & Compliance

Developed and hosted entirely in Germany. Full EU AI Act compliance, GDPR by design, zero third-party data sharing. Audit-ready architecture at every layer.

Explainable AI (XAI)

Every detection result includes a spatial heatmap, confidence score, and breakdown by detection mode. Not a black box — full audit trails for compliance and human review.

In Production

Powering the
Fraud Scanner

The Image Trust Layer is the core detection engine inside VAARHAFT's Fraud Scanner — available as a fully managed API. No infrastructure to manage, no model maintenance. Integrate once, benefit permanently.

  • Single REST API call — results in under 500 ms
  • Fraud probability score (0–1) + spatial heatmap
  • Batch processing for high-volume enterprise workflows
  • Automated PDF audit reports for compliance
  • User interface available — no coding required
POST /v2/analyze/image
200 OK · 312ms

// Request
{
  "image_url": "https://example.com/claim.jpg",
  "mode": "full_analysis"
}

─────────────────────

// Response
{
  "fraud_probability": 0.97,
  "detection_mode": "ai_generated",
  "confidence": 0.95,
  "heatmap_url": "https://cdn.vaarhaft.com/...",
  "processing_ms": 312
}
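A minimal client for the call above might look like the following. This is a sketch, not official SDK code: the full API base URL and the Bearer-token auth header are assumptions — only the `/v2/analyze/image` path and the request/response fields come from the example above.

```python
import json
import urllib.request

API_URL = "https://api.vaarhaft.com/v2/analyze/image"  # base URL is an assumption

def build_request(image_url: str, api_key: str,
                  mode: str = "full_analysis") -> urllib.request.Request:
    """Build the POST request shown above.

    The Authorization header name/scheme is assumed; check the actual
    API documentation for how your key should be sent.
    """
    body = json.dumps({"image_url": image_url, "mode": mode}).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
        method="POST",
    )

req = build_request("https://example.com/claim.jpg", "YOUR_API_KEY")
# result = json.load(urllib.request.urlopen(req))
# result["fraud_probability"], result["heatmap_url"], ...
```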

Get started

Talk to our experts

Get detailed insight into how you can protect yourself from image‑based fraud

Book a Demo

Free · No obligation