Proprietary deep learning models that detect fully AI-generated images and AI-manipulated photographs — at over 95% accuracy, in under 500 ms.
The Image Trust Layer addresses both fundamental forms of visual deception — full AI synthesis and post-capture manipulation — within a single inference call.
Identifies images that were entirely synthesized by generative models — GANs, diffusion models, and neural renderers. No original photograph exists.
Detects post-capture edits applied to authentic photographs — face swaps, object insertion, region inpainting, and AI-assisted retouching.
No off-the-shelf foundation models. Our entire detection pipeline is built from scratch on fully custom convolutional architectures, trained exclusively on proprietary forensic datasets.
The Deep CNN stage contains 42 custom residual blocks trained on manipulation-specific frequency artifacts that are invisible to the human eye.
Models trained on public datasets fail against real-world fraud tactics. VAARHAFT assembles, curates, and labels proprietary training data that reflects exactly the manipulation techniques our customers encounter in production.
Validated on an independent hold-out test set of 100,000+ images spanning 50+ manipulation types and 30+ generative model architectures.
Five tightly integrated processing stages run in parallel where possible, delivering forensic-grade analysis faster than a human blink.
JPEG, PNG, TIFF, HEIC, and PDF inputs are decoded and normalized to a canonical tensor, and EXIF metadata is extracted.
A custom CNN backbone extracts hierarchical feature maps across 4 resolution scales, capturing both macro and micro-level patterns.
Fast Fourier Transform (FFT) and Error Level Analysis (ELA) surface invisible compression artifacts and generation fingerprints.
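As a rough illustration of this frequency-analysis stage, the sketch below computes a radially averaged FFT log-power spectrum, a common forensic feature for exposing generative-model fingerprints. The function name and binning scheme are our own illustrative choices, not VAARHAFT's proprietary implementation.

```python
import numpy as np

def radial_power_spectrum(gray: np.ndarray, bins: int = 32) -> np.ndarray:
    """Radially averaged log-power spectrum of a grayscale image.

    Synthesized or upsampled images often show characteristic peaks or
    roll-off in this 1-D profile that natural photographs lack.
    """
    f = np.fft.fftshift(np.fft.fft2(gray))          # center the DC term
    power = np.log1p(np.abs(f) ** 2)                # log scale for stability
    h, w = gray.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2, xx - w / 2)            # radius from spectrum center
    idx = np.minimum((r / r.max() * bins).astype(int), bins - 1)
    profile = np.bincount(idx.ravel(), weights=power.ravel(), minlength=bins)
    counts = np.bincount(idx.ravel(), minlength=bins)
    return profile / np.maximum(counts, 1)          # mean power per radial bin
```

A downstream classifier would consume this 1-D profile alongside other forensic features such as ELA residuals.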
Specialized sub-models vote on AI generation probability and manipulation probability independently, then combine via calibrated ensemble.
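One simple way to realize such a calibrated two-head ensemble is to fuse the heads' votes in log-odds space and map the result back through a sigmoid. This is an illustrative sketch only; the weights, temperature, and combination rule here are our assumptions, not VAARHAFT's actual parameters.

```python
import math

def combine_heads(p_generated: float, p_manipulated: float,
                  w_gen: float = 0.5, w_man: float = 0.5,
                  temperature: float = 1.0) -> float:
    """Fuse two independent detector probabilities into one fraud score.

    Each head votes in log-odds space; a temperature > 1 softens
    over-confident heads (Platt-style calibration).
    """
    def logit(p: float) -> float:
        p = min(max(p, 1e-6), 1 - 1e-6)     # clamp to avoid log(0)
        return math.log(p / (1 - p))

    z = (w_gen * logit(p_generated) + w_man * logit(p_manipulated)) / temperature
    return 1 / (1 + math.exp(-z))
```

With equal weights, a confident "AI-generated" head and a neutral "manipulated" head still push the combined score well above 0.5, while two opposed heads cancel out.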
Grad-CAM and custom activation mapping produce spatial heatmaps that localize suspicious regions with pixel-level precision.
We don't adapt general-purpose AI to forensics. We build forensic-specific architectures from the ground up — with proprietary data, purpose-built pipelines, and complete control over every layer.
Over 10 million images assembled and labeled in-house, covering 50+ manipulation types across diverse capture conditions. Generic public datasets cannot match this specificity.
When Stable Diffusion 4 or a new GAN architecture ships publicly, we detect it within weeks. Weekly retraining cycles ensure we stay ahead of the generative AI curve.
Developed and hosted entirely in Germany. Full EU AI Act compliance, GDPR by design, zero third-party data sharing. Audit-ready architecture at every layer.
Every detection result includes a spatial heatmap, confidence score, and breakdown by detection mode. Not a black box — full audit trails for compliance and human review.
The Image Trust Layer is the core detection engine inside VAARHAFT's Fraud Scanner — available as a fully managed API. No infrastructure to manage, no model maintenance. Integrate once, benefit permanently.
// Request
{
  "image_url": "https://example.com/claim.jpg",
  "mode": "full_analysis"
}
─────────────────────
// Response
{
  "fraud_probability": 0.97,
  "detection_mode": "ai_generated",
  "confidence": 0.95,
  "heatmap_url": "https://cdn.vaarhaft.com/...",
  "processing_ms": 312
}
Get detailed insight into how you can protect yourself from image-based fraud
Free of charge · No obligation