Wodan AI – Zero Trust AI
https://wodan.ai/
Empowering innovation with Zero Trust AI, where your data remains yours

Sovereign AI in Europe: what changes when systems go live
https://wodan.ai/2026/01/07/sovereign-ai-in-europe-what-changes-when-systems-go-live/
Wed, 07 Jan 2026

“Sovereign AI” is moving from conferences into budgets. At its simplest, sovereign AI means a nation can develop and run AI using its own infrastructure and data, under its own governance. The term got mainstream attention in early 2024 as leaders and vendors started framing AI as national capability, not just software.

In Europe, the conversation is now tied to concrete programs. AI Factories are being rolled out through the EuroHPC ecosystem, and the EU is also pushing toward much larger AI gigafactories through InvestAI.

The common starting point is infrastructure: where compute sits, which jurisdiction applies, who operates the stack. That starting point matters. In production, it is rarely the deciding factor.

Alongside the AI Factories, InvestAI was launched with the stated aim of mobilising €200 billion for AI investment, including a €20 billion fund for AI gigafactories. In mid-2025, Reuters reported strong market interest in the gigafactory push, with dozens of bids.

By December 2025, the European Commission published a Memorandum of Understanding on AI Gigafactories, and the EIB described its role in providing financing structures and advisory support.

This is why “sovereign AI” is no longer just language. It is becoming architecture, funding, and vendor selection.

Sovereignty is tested when teams monitor production, debug failures, investigate incidents, integrate third-party services, and move fast under pressure. That is when sensitive data is most likely to appear in plaintext, even if residency rules are followed.

This view is shaped by our work at Wodan AI, where we focus on keeping sensitive data protected during computation, because that’s where governance usually gets tested.

This is not a moral argument about good or bad practices. It is about operational reality.

If a system needs plaintext to compute, plaintext will spread. Not because teams are careless, but because modern stacks include many tools that implicitly assume visibility.

This is the gap many sovereign AI programs still under-specify: what happens to sensitive data during processing.

AI makes that gap unavoidable because the value is created when data is used. Usage expands the number of systems involved, the number of integrations, and the number of people who can affect exposure.

For business leaders, the symptoms are familiar. Compliance and legal reviews get slower because boundaries are hard to explain end-to-end. Vendor risk becomes harder to manage because the real system includes tooling outside the core platform. Production rollouts stall because exceptions multiply.

This is the point where sovereignty moves from policy to operating model.

Confidential computing is commonly described as protecting data during processing, typically using hardware-based trusted execution environments.

Fully homomorphic encryption (FHE) is another path, allowing computation over encrypted data without decrypting it first.

These are not interchangeable approaches, and a business audience does not need a deep technical comparison to understand the key point: both aim to reduce how often sensitive data must be exposed in plaintext to make systems work.

That reduction has direct executive value. It shrinks the trust boundary. It reduces the number of tools and roles that need raw access. It makes governance more durable when teams are under operational pressure.

In a sovereign AI context, that is not a nice-to-have. It is the difference between “sovereign on paper” and “sovereign in production.”

The next step is to treat runtime data protection as a first-class requirement, not a technical footnote.

If plaintext remains the default, “sovereign” becomes harder to defend the moment systems go live.

Sources:

European Commission Digital Strategy, “Seven consortia selected to establish AI Factories…” (Dec 10, 2024)

EuroHPC JU, “The EuroHPC JU Selects Additional AI Factories…” (Mar 12, 2025)

European Commission Digital Strategy, “Second wave of AI Factories set to drive EU-wide innovation” (Mar 12, 2025)

European Investment Bank, “EIB Group and European Commission join forces to finance AI gigafactories” (Dec 4, 2025)

World Economic Forum, “Sovereign AI: What it is, and 6 strategic pillars for achieving it” (Apr 25, 2024)

NIST, “Fully-Homomorphic Encryption (FHE)” (Privacy-Enhancing Cryptography project page)

Ready to see encrypted-in-use AI in action? Book a demo of the Wodan AI solution today.


 

Podcast: AI Security Hype vs Reality
https://wodan.ai/2025/11/10/ai-security-hype-vs-reality-richard-stiennon/
Mon, 10 Nov 2025

AI Security Hype vs Reality: The Year Attackers Got Smarter

For years, AI in cybersecurity has been more theater than substance. Dashboards that promised “self-learning defense.” Vendors who sold Bayesian math as “machine intelligence.” Slide decks louder than their code.

Then came November 30, 2022. The day language models learned to think in sentences—and the security world quietly crossed a line it still hasn’t processed.

On the latest episode of the Secure AI Podcast, Richard Stiennon, founder of IT Harvest and one of cybersecurity’s longest-serving analysts, sat down with Bob Dubois, CEO of Wodan AI, to talk about what changed, who’s losing ground, and how fast the next phase is moving.

 

The End of “Good Enough” Security

For Stiennon, the biggest story isn’t that AI entered cybersecurity. It’s that AI is now good enough to make “good enough” dangerous.

“Before large language models, vendors were faking it,” he says. “They hired statisticians, not data scientists. Most of what they called AI was glorified anomaly detection.”

That pretense collapsed once models could read machine language, interpret logs, and summarize incidents faster than any analyst. Suddenly, triage—the unglamorous backbone of every Security Operations Center—could be automated end to end.

The result: systems that don’t just analyze alerts but act on them, isolating infected hosts or resetting credentials in seconds.

And that, Stiennon warns, changes the calculus.
“You can look at every single log now. You can do something about it. For a human, that’s burnout in 45 minutes. For an AI agent, it’s continuous.”

 

Attackers Didn’t Wait

The defenders weren’t the only ones watching.

AI has also become the great equalizer for attackers. Open-source models, stripped of ethical filters, can now build and execute full attack chains—from reading CVE feeds to generating live exploits.

“You can already drop a CVE into an LLM and get an exploit in minutes,” says Stiennon. “That collapses the time from disclosure to weaponization. If the exploit is the fuse, the bomb is already built.”

The implication is brutal: while companies still debate compliance checklists, attackers are automating reconnaissance, lateral movement, and data exfiltration. Entire intrusions can now unfold faster than a SOC shift change.

“Mean time to breach used to be months,” he says. “Now, it’s minutes.”

 

SOC Automation or Extinction

Stiennon’s advice for CISOs is blunt.

“Engage one of the 39 SOC automation platforms right now. Because next year, you’ll replace your SOC entirely with automation.”

By mid-2026, he predicts, manual SOCs will be legacy infrastructure.
The future isn’t just about alert reduction—it’s about removing the human bottleneck entirely. Teams that adapt will reassign their analysts to higher-order tasks like vulnerability management or DLP redesign. Those that don’t will drown in false positives until something slips through.

 

The Analyst’s View: Innovation vs Credibility

The conversation turns reflective when Dubois asks how large enterprises should navigate the explosion of AI-labeled vendors.

“Big analyst firms cover about 3% of the industry,” Stiennon notes. “I track over 4,300 vendors. Gartner lists 134. So the innovation pipeline is invisible to most buyers.”

That blind spot, he argues, isn’t neutral—it’s dangerous. Enterprise procurement rewards size over novelty, forcing CISOs to buy “safe” rather than “smart.” And the $30,000 pay-to-play gatekeepers keep younger, better technologies off the radar.

“Every CISO says they want innovation,” he says, “but their RFPs are written to exclude it.”

 

Data in Use: The Blind Spot No One Talks About

If AI is rewriting defense, encryption remains the part still written in invisible ink.

“Companies think SSL saves them,” Stiennon says. “They see the lock icon and assume their data is safe. But SSL ends at the server. After that, it’s naked.”

In other words, most organizations encrypt data in motion and at rest—but not in use, the very moment it’s most vulnerable.

For Wodan AI, that’s where the industry must move next: keeping data encrypted while it’s being analyzed or computed on.
“Zero Trust means you hold the keys,” Stiennon adds. “No one touches your data unless you say so. That’s the real definition.”

 

Guardrails, DLP, and the New Arms Race

The market’s response to AI risk is already visible.
What began as “AI guardrails” has evolved into AI-powered DLP, designed to stop sensitive data from slipping into models or training pipelines.

It’s also becoming lucrative. Stiennon’s tracking shows 173 AI security startups and 12 acquisitions in the past year—representing $2.8 billion in returns on $2.5 billion invested.

The math tells its own story: security may be late to AI, but capital is catching up.

 

The Final Metric: Speed

Asked which metric every CISO should add to their dashboard tomorrow, Stiennon answers instantly.

“Mean Time to Detect. And most don’t even know it.”

He believes that as automation compresses attack timelines, detection speed becomes the defining indicator of survival.

“Attackers have minutes. You need seconds,” he says. “That’s where we’re headed.”

 

Beyond the Hype

For all the talk of hype, Stiennon ends with optimism. The tools are finally good enough to change the equation—if leaders move now.

The era of good enough is over. The new question isn’t whether AI belongs in security. It’s whether security can exist without it.

 

Listen to the full episode:
AI Security Hype vs Reality | Wodan AI Podcast

About the Guest:
Richard Stiennon is the Founder and Chief Research Analyst at IT Harvest and author of Security Yearbook.

About the Host:
Bob Dubois is the CEO of Wodan AI, enabling privacy-preserving computation through encrypted-in-use technology for data-driven industries.

Homomorphic Encryption vs Trusted Execution Environments: What Recent Attacks Reveal
https://wodan.ai/2025/10/10/blog-homomorphic-encryption-vs-trusted-execution-environments/
Fri, 10 Oct 2025

Trusted Execution Environments (TEEs) were once the crown jewel of confidential computing. Intel SGX, AMD SEV, and ARM TrustZone promised to protect data during processing by isolating it inside secure hardware enclaves. For years, this model has powered everything from cloud-based machine learning to fintech data analytics.

But recent months have delivered a wake-up call. Two major academic attacks – WireTap (Georgia Tech & Purdue) and Battering RAM (KU Leuven/COSIC & University of Birmingham) – show that when hardware trust breaks, the entire TEE model collapses.

 

When “trusted” hardware turns fragile

In WireTap, researchers used a €50 memory-bus interposer to recover Intel SGX’s ECDSA attestation key. Once stolen, that key allows attackers to impersonate genuine hardware, bypass attestation checks, and silently read enclave data.

Battering RAM, a separate but complementary attack, exploited how data flows between CPU and memory, undermining integrity guarantees in both Intel SGX and AMD SEV.

Both reveal a shared truth: when secrets are decrypted inside the enclave, they become vulnerable to physical tampering, speculative execution flaws, and supply-chain exploits. Confidentiality depends entirely on hardware that can now be cloned, probed, or replayed.

 

Why homomorphic encryption changes the equation

Homomorphic encryption (HE) takes a different stance. Instead of relying on hardware isolation, HE keeps data encrypted at all times, even while computations are performed.

  • Data never leaves ciphertext form, eliminating the “cleartext window” TEEs expose.
  • Security rests on mathematical hardness, not on vendor firmware or attestation certificates.
  • Even if an attacker gains full physical access to the machine, all they see is noise.

In short, when enclaves can be breached, algebraic privacy prevails.

Homomorphic encryption trades trust in silicon for trust in math. That shift matters now more than ever.
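To make “all they see is noise” concrete, here is a minimal sketch using the open-source python-paillier library (an additively homomorphic scheme; full FHE extends the same principle to arbitrary computation). Encryption is randomized, so even identical plaintexts leave no recognizable pattern for a physical attacker to probe:

    # pip install phe  (python-paillier, an additively homomorphic scheme)
    from phe import paillier

    public_key, private_key = paillier.generate_paillier_keypair()

    # Encrypting the same value twice yields unrelated-looking ciphertexts:
    # a memory dump or bus probe sees only randomized noise.
    c1 = public_key.encrypt(42)
    c2 = public_key.encrypt(42)
    print(c1.ciphertext() == c2.ciphertext())  # False

    # Yet computation still works on the "noise" the attacker sees.
    c3 = c1 + 8
    print(private_key.decrypt(c3))  # 50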

 

The performance myth is fading

Critics often cite HE’s computational overhead as a barrier. That was true a decade ago. But recent progress – from lattice-based optimizations to GPU- and FPGA-based acceleration – is changing the landscape.
For workloads such as statistical inference, data aggregation, or credit risk scoring, modern HE can now achieve practical latency while retaining cryptographic confidentiality.

As attacks erode confidence in hardware isolation, the cost gap between TEE and HE becomes less about performance and more about risk appetite. What is the price of a broken attestation chain compared to a few extra milliseconds of compute?

 

Toward hybrid confidential computing

The future is likely hybrid: selective use of TEEs for orchestration or hardware acceleration, wrapped by HE for the sensitive core of computation.

Research projects such as Integrating Homomorphic Encryption and Trusted Execution (arXiv, 2023) and TEE-FHE propose models where enclaves handle performance-critical steps but never see decrypted data. This layered design treats the TEE as a convenience, not as the root of trust.

At Wodan AI, we see this principle in action. Our encrypted-in-use architectures apply homomorphic encryption for active data protection, minimizing reliance on external trust anchors. If the hardware fails, the data still holds.

 

The takeaway: trust less, encrypt more

The recent breaches make one thing clear: hardware trust is not absolute trust.

When your security depends on a single component – firmware integrity, attestation chain, or bus isolation – one breach can undermine everything.
Homomorphic encryption shifts that dependency away from opaque silicon and into transparent, verifiable cryptography.

For teams building systems in finance, healthcare, or defense, this is no longer a theoretical choice. It’s an operational one.

If you handle sensitive data, ask yourself:

Would your confidentiality survive if the hardware itself were compromised?

If not, it’s time to explore encrypted-in-use computing.

 


Ready to see encrypted-in-use AI in action? Book a demo of the Wodan AI solution today.

 

 

Wodan AI Partners with Tunnel ID to Deliver Privacy-Preserving Identity Verification
https://wodan.ai/2025/08/20/wodan-ai-tunnel-id-privacy-preserving-identity-verification/
Wed, 20 Aug 2025

We’re excited to announce our partnership with Tunnel ID, bringing together two breakthrough technologies to create the most secure, private, and user-friendly identity verification solution.

The Challenge We’re Solving Together

Identity fraud has reached crisis levels globally, with cybercrime costs hitting $9.5 trillion worldwide in 2024. Deepfake fraud attempts have exploded by 2,137% over the last three years. Traditional identity systems are failing under pressure from AI-powered attacks.

Organizations face an impossible choice: security, user experience, or privacy. You can optimize for two, but the third always suffers. This partnership changes that.

What Tunnel ID Brings

Tunnel ID revolutionizes identity verification by asking “are you actually there?” instead of “do you have the right credentials?”

Their technology verifies real human presence without storing biometric data. Users simply show their face for instant verification. No passwords, no codes, no vulnerable databases, making Tunnel ID a full identity engine, not just a verification layer.

Key capabilities:

  • Face-based authentication in under 2 seconds
  • Account recovery without email or SMS
  • KYC compliance without data storage
  • Protection against deepfakes and AI attacks

What Wodan AI Brings

Wodan AI has developed encrypted AI processing that analyzes sensitive data while it remains mathematically protected. Our technology enables AI to process biometric patterns and risk indicators without ever seeing raw data.

This solves the hidden vulnerability: even when biometric data isn’t stored, it’s typically processed unencrypted, creating exposure windows for attackers.

Our capabilities:

  • Real-time fraud detection on encrypted data
  • Risk scoring through protected computation (see the sketch after this list)
  • Cross-border processing while maintaining privacy
  • Future-proof protection against quantum threats
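As a hedged illustration of encrypted risk scoring, the sketch below uses the open-source python-paillier library; the feature names and model weights are invented for the example, not Wodan AI’s production scheme. Paillier supports adding ciphertexts and multiplying a ciphertext by a plaintext scalar, which is enough for a linear score:

    # pip install phe  (python-paillier). Illustrative features and weights only.
    from phe import paillier

    public_key, private_key = paillier.generate_paillier_keypair()

    # Client side: encrypt behavioural signals before they leave the device.
    features = {"login_velocity": 7, "geo_distance_km": 420, "device_age_days": 3}
    enc_features = {k: public_key.encrypt(v) for k, v in features.items()}

    # Scoring service: holds plaintext weights, never sees raw features.
    weights = {"login_velocity": 2, "geo_distance_km": 1, "device_age_days": -5}
    terms = [enc_features[k] * weights[k] for k in features]
    enc_score = terms[0]
    for term in terms[1:]:
        enc_score = enc_score + term

    # Only the key holder can read the final risk score.
    print(private_key.decrypt(enc_score))  # 2*7 + 1*420 + (-5)*3 = 419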

The Integrated Solution

Together, we create complete privacy-preserving identity verification. Tunnel ID establishes real presence while Wodan AI processes all signals through encrypted AI.

The user experience stays simple: show your face, get access. But the security transforms completely. Data gets encrypted immediately, AI analyzes encrypted patterns, and verification happens through mathematical proofs.

Industry Applications

Financial Services: Instant digital banking onboarding with encrypted fraud detection and GDPR compliance without data storage.

Healthcare: Patient authentication that maintains data protection while enabling seamless care access.

Enterprise: Secure authentication for employees and AI agents through the same encrypted framework.

E-commerce: Identity verification without collecting personal data, reducing liability while improving experience.

Market Impact

This partnership solves the “security triangle” for the first time. Organizations can now optimize security, privacy, and user experience simultaneously.

The global identity verification market is growing at 14.4% annually, constrained by these tradeoffs. Our solution removes those constraints entirely.

Ready to Transform Your Identity Stack?

Whether you’re in financial services, healthcare, or enterprise security, we’d love to show you how this partnership can transform your approach to digital identity.

Schedule Demo today.

The revolution in privacy-preserving identity verification starts with a conversation.

About Tunnel ID: A presence-based identity verification technology, enabling secure authentication with zero biometric footprint.

Privacy-First AI for Health: How Ciphertext Keeps Saving Lives
https://wodan.ai/2025/07/31/ai-and-cybersecurity-in-healthcare/
Thu, 31 Jul 2025

A pager goes off in the cardiac unit at 01:37. A nurse has to send a CT scan to the triage model before the surgeon wheels a patient into theatre. In most hospitals that file leaves its secure folder, lands on a server, runs through the model in plain text, then returns with a prediction. At that exact moment it is readable by anyone who has slipped past the perimeter. That gap – sometimes just a few milliseconds – costs the healthcare industry an average of USD 9.77 million every time an attacker makes it through.

Clinicians cannot wait for perfect privacy. They need results in seconds, regulators need proof that personal data stays safe, and researchers need ways to share insights without shipping raw genomes across borders. Fully homomorphic encryption, or FHE, offers a path that meets all three demands.

What makes FHE different

Traditional “encryption at rest” works like a bank vault that must be opened for every withdrawal. FHE is a safe-deposit box that lets the bank count the money without opening the lid. Algorithms compute directly on ciphertext, so scans, lab values, and DNA strings never revert to plain text during processing. That single shift closes the last open window in the security stack, the moment when today’s AI workloads are most exposed.

The Wodan AI approach in practice

Wodan keeps both the input data and the machine-learning model in ciphertext from ingest to result. Each inference is cryptographically signed, proving that only approved code touched the record. Hospitals plug the runtime into existing containers without rewriting models, and an immutable log rolls every event into an audit file that satisfies HIPAA and GDPR evidence requests. The architecture follows a Zero Trust posture – every user, device, and workload must prove identity before access.

Four stories from the ward and the lab

  • Encrypted diagnostics. A radiology team runs a stroke-detection model on CT images while the files stay encrypted. The model flags intracranial bleeds in under a second, and the record never appears in plain text. The hospital meets the HIPAA technical safeguard for “encryption during transmission and processing” and still shaves minutes off door-to-scan time.
  • Genomic discovery across borders. A pharma consortium wants to scan population-scale DNA pools for rare mutation signatures. With FHE they push the algorithm to each site, run the computation on ciphertext, and pool only the encrypted results. No raw sequence leaves the originating country, so the study clears GDPR transfer rules.
  • Collaboration without disclosure. Two clinics need to refine a sepsis-prediction model but cannot share patient records. Each runs training rounds locally on encrypted data, exchanges only encrypted gradients, and converges on a stronger model while every record stays on-prem.
  • Remote monitoring that respects privacy. Wearable devices stream vitals to a predictive engine hosted in the cloud. FHE keeps the data encrypted in transit and at the point of inference, so the provider can alert clinicians to deterioration trends without handling readable telemetry.

Bringing it to your stack

Start by encrypting the dataset at the edge so no record leaves your network unprotected. Containerise the model with Wodan’s runtime and benchmark latency against your clinical-workflow target – most inference tasks complete well under a second on common GPU nodes. Map the immutable log to HIPAA sections 164.308 and 164.312, then point your GDPR data-protection impact assessment to the same evidence. Scale on a pay-per-use plan that grows from proof of concept to nationwide roll-out without capital hardware spend.
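For the benchmarking step, a minimal harness is enough to start. This is a sketch, not a Wodan API: the `infer` callable stands in for your containerised endpoint, and the one-second target mirrors the workflow figure above.

    # A minimal latency harness for the benchmarking step; swap in your
    # own containerised model endpoint and clinical-workflow budget.
    import statistics
    import time

    def benchmark(infer, payloads, target_ms=1000.0):
        latencies_ms = []
        for payload in payloads:
            start = time.perf_counter()
            infer(payload)  # e.g. an encrypted-inference call
            latencies_ms.append((time.perf_counter() - start) * 1000)
        median = statistics.median(latencies_ms)
        p95 = statistics.quantiles(latencies_ms, n=20)[18]  # 95th percentile
        print(f"median={median:.1f} ms  p95={p95:.1f} ms  target={target_ms} ms")
        return p95 <= target_ms

    # Example with a dummy model standing in for the real runtime:
    if __name__ == "__main__":
        benchmark(lambda scan: sum(scan), [list(range(10_000))] * 50)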

The takeaway

FHE changes the default from “decrypt to innovate” to “keep it encrypted and go faster.” It protects patients in the moment, satisfies auditors on demand, and frees researchers to collaborate without fear of exposure. If you want to see a stroke-triage model run on ciphertext – or test another workload – book a short demo. We will run the inference inside your environment, and no personal data will ever leave your premises.

 

Ready to see encrypted-in-use AI in action? Book a demo of the Wodan AI solution today.

 

 

Privacy paradox in AML regulation: Share data while not exposing PII
https://wodan.ai/2025/05/29/privacy-paradox-in-aml-regulation/
Thu, 29 May 2025

When EU legislators signed off on the Anti-Money-Laundering Regulation (AMLR) and Directive 6 (AMLD6) last year, the headline was clear:

“tear down the silos and let financial-crime data flow.”

The fine print—Article 75—goes even further, allowing (and sometimes obliging) banks, PSPs, crypto venues, casinos, and even luxury goods dealers to swap customer-level intelligence in private-to-private partnerships. The package is already in force (as of 10 July 2024) and will become fully applicable from 10 July 2027, according to Finnius Advocaten.

Great news for investigators. A nightmare for privacy officers.

Below, we unpack the new rule set, the GDPR paradox it creates, and how Wodan AI’s encrypted-in-use platform, Dropnir, enables you to comply with both without ever decrypting your data.

 

What changed?

  • Legal form: AMLR is a Regulation (direct effect); AMLD6 is a Directive (to be transposed).
  • Key date: both entered into force on 10 July 2024 and apply from 10 July 2027.
  • Headlines: AMLR delivers the single EU rule-book, the €10k cash cap, and Article 75 information-sharing partnerships; AMLD6 harmonises offences and penalties.

 

A political deal on the package was struck on January 18, 2024, according to Finnius Advocaten.

 

Article 75 in one paragraph

“Members of partnerships for information sharing may share information where strictly necessary to meet their AML/CFT duties.” (Better Regulation)

  • What’s shareable? Customer identifiers, transaction metadata, risk scores, and alert reasons.
  • With whom? Any obligated entity, including national FIUs, across borders.
  • Guard-rails? DPIA, supervisory notification, civil liability safe harbour.

 

The Privacy Paradox

 

  • AMLR wants broad datasets and five-year retention; GDPR insists on data minimisation and “erase when no longer necessary”.
  • AMLR wants no customer consent (tipping-off risk); GDPR insists on a valid lawful basis and transparency.
  • AMLR wants cross-border pooling; GDPR insists on purpose limitation and transfer safeguards.

 

Practitioners are already referring to this as the GDPR-AML dilemma: two EU flagships pulling in opposite directions (Mondaq).

 

Why PETs beat “trust me” NDAs

 

Stopping money-laundering networks means correlating patterns across institutions—but nobody wants another central data lake. Privacy-Enhancing Technologies (PETs)—federated queries, fully homomorphic encryption (FHE), secure enclaves—let firms compute on each other’s data without copying or decrypting it. Regulators from Singapore’s COSMIC to the US Patriot Act utilities have endorsed the approach; Article 75 now gives the EU a legal footing to do the same, according to William Fry.

 

Where Wodan AI fits

 

Dropnir: encrypted-in-use by design

Our containerised API layer keeps both the request and the response encrypted during processing. Peers only ever see ciphertext; Wodan AI never sees anything.

 

Getting ready for 2027: a four-step playbook

 

  1. Stand up a sandbox
    Spin up Dropnir and load hashed customer keys plus minimal features to pass the “strict necessity” test (see the sketch after this list).
  2. Run a joint DPIA
    Map Article 75 controls line-by-line to GDPR Art 35 before you share a single byte.
  3. Federate, don’t replicate
    Keep computations where the data already lives; pay only for the queries you run.
  4. Log everything
    If you can’t prove why, when, and what you shared, expect fines.
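A hedged sketch of the “hashed customer keys” idea in step 1, using only the Python standard library. The consortium key and identifier format are illustrative assumptions; a real deployment would agree and rotate the key under the joint DPIA:

    import hashlib
    import hmac

    # Illustrative only: agree and rotate this key under the joint DPIA.
    CONSORTIUM_KEY = b"replace-with-a-jointly-managed-secret"

    def pseudonymise(customer_id: str) -> str:
        # A keyed hash (HMAC-SHA256) rather than a bare SHA-256: without the
        # key, the small identifier space cannot be brute-forced back to PII.
        return hmac.new(CONSORTIUM_KEY, customer_id.encode(), hashlib.sha256).hexdigest()

    # Two institutions hashing the same customer derive the same token,
    # so matches surface without exchanging raw identifiers.
    print(pseudonymise("NL91ABNA0417164300"))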

 

Key take-aways

 

  • Timeline: Rules are live now; mandatory from July 10, 2027.
  • Opportunity: Private-private sharing to unmask mule networks.
  • Risk: GDPR conflict on minimisation, consent, and retention.
  • Fix: End-to-end encrypted federated analytics with Wodan AI Dropnir.

 

Ready to pilot a secure Article 75 partnership?

Book a 30-minute demo and discover how Dropnir keeps your AML models effective and your customer data protected.

Any questions? Contact us

Secure AI podcast
https://wodan.ai/2025/01/29/secure-ai-podcast/
Wed, 29 Jan 2025

In this edition of the Secure AI podcast, Wodan AI’s CEO, Bob Dubois, and ethical hacker Robbe Van Roey explore AI security and the emerging threats it faces.


Robbe gained notable recognition for his attacks on AWS and NVIDIA AI systems, making him an ideal guest to discuss the vulnerabilities and challenges in AI security.

 

 

AI security is becoming increasingly important as systems become more powerful and integrated into our daily lives.

In this episode, Bob and Robbe discuss real-world AI security breaches and what they reveal about current vulnerabilities. Robbe shares his firsthand experience testing AI defenses and identifying weaknesses in high-profile systems.

From adversarial attacks to data poisoning, this episode highlights the biggest threats AI faces today.

The conversation also delves into the ethical dilemmas surrounding hacking AI systems for security research.

What can organizations do to strengthen their AI security? Bob and Robbe offer expert insights on this crucial topic.

Understanding AI vulnerabilities is essential for building safer, more resilient systems.

Whether you are an AI developer, a cybersecurity professional, or simply curious about AI threats, this episode is for you.

Follow us to stay updated on key insights and expert perspectives from Secure AI.

 

Why FHE in Federated Learning?
https://wodan.ai/2025/01/23/why-fhe-in-federated-learning/
Thu, 23 Jan 2025
FHE in Federated Learning

1. Enhanced Privacy:

  • FL already ensures data stays on the client side, but transmitting gradients or model updates to a central server can still leak sensitive information through inference attacks.
  • FHE adds an extra layer of security by ensuring that even the server cannot access the raw gradients or updates—it only processes encrypted data.

2. Secure Aggregation:

  • The central server can perform operations like summing or averaging encrypted updates without ever decrypting them. This is particularly useful for use cases like healthcare, finance, and sensitive IoT applications (see the sketch after this list).

3. Compliance:

  • FHE helps meet stringent privacy regulations like GDPR, HIPAA, or CCPA by preventing any unauthorized access to sensitive data.

4. Trust Minimization:

  • FHE reduces reliance on the server’s trustworthiness. Even if the server is compromised, sensitive data remains secure.
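A minimal sketch of points 1 and 2, using the open-source python-paillier library (additively homomorphic, which is sufficient for aggregation; full FHE extends this to arbitrary computation). The client update values are illustrative:

    # pip install phe  (python-paillier)
    from phe import paillier

    public_key, private_key = paillier.generate_paillier_keypair()

    # Each client encrypts its local model update before upload.
    client_updates = [0.12, -0.07, 0.03]
    encrypted_updates = [public_key.encrypt(u) for u in client_updates]

    # The server aggregates ciphertexts it cannot read: sum, then scale
    # by 1/n (ciphertext-times-plaintext-scalar is supported).
    enc_sum = encrypted_updates[0]
    for enc_update in encrypted_updates[1:]:
        enc_sum = enc_sum + enc_update
    enc_avg = enc_sum * (1 / len(encrypted_updates))

    # Only the key holder (in practice, the clients or a threshold of them)
    # can decrypt the aggregate.
    print(round(private_key.decrypt(enc_avg), 6))  # 0.08 / 3 ≈ 0.026667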

Advantages of FHE:

  • Absolute Privacy: Data and computations remain fully confidential.
  • No Trusted Aggregator: FHE eliminates the need for a trusted third-party aggregator in FL.
  • Robust Against Attacks: Protects against both external threats and malicious insiders.

When Does FHE + FL Make the Most Sense?

  • High Privacy Demand: When clients handle extremely sensitive data (e.g., medical records, financial data).
  • Untrusted Server: When the central server cannot be fully trusted.
  • Collaborative Contexts: Industries like healthcare, insurance, or cross-border collaborations where data sharing is highly sensitive.

Alternatives/Complements to FHE

If the computational cost of FHE is too high, consider:

  • Secure Multi-Party Computation (SMPC): Distributes computations across multiple parties without revealing data.
  • Differential Privacy (DP): Adds noise to updates, protecting individual data points.
  • Hybrid Approaches: Use FHE for the most sensitive operations and other techniques for less critical computations.

In summary, FHE in FL is a compelling combination, especially for high-stakes privacy applications. The main barrier is computational and communication overhead, so its feasibility depends on the use case and available resources.

Any questions? Contact us!

How does Fully homomorphic encryption (FHE) differ from Partial homomorphic encryption (PHE)?
https://wodan.ai/2025/01/15/differences-fhe-and-phe/
Wed, 15 Jan 2025
Differences FHE and PHE

Table of Contents

Summary
Types of Homomorphic Encryption

  • Fully Homomorphic Encryption (FHE)
  • Partially Homomorphic Encryption (PHE)
  • Somewhat Homomorphic Encryption (SHE)

Key Differences Between FHE and PHE

  • Definition and Functionality
  • Complexity and Performance
  • Security Trust Models
  • Application Suitability

Technical Foundations

  • Historical Background
  • Computational Complexity
  • Key Techniques
    • Use of Bootstrapping
  • Standardization Efforts

Current Research and Developments

  • Algorithm Acceleration Schemes
  • Hardware Acceleration Schemes
  • Future Directions

Challenges and Limitations

Summary

Homomorphic encryption is a cutting-edge cryptographic technique that allows computations on encrypted data without decryption, ensuring the underlying information’s confidentiality. This encryption method is classified into three main categories: Partial Homomorphic Encryption (PHE), Somewhat Homomorphic Encryption (SHE), and Fully Homomorphic Encryption (FHE).

Among these, FHE is particularly notable for its ability to perform unlimited operations of both addition and multiplication on encrypted data, offering a comprehensive solution for secure data processing across various applications, including finance and healthcare.[1][2]

The distinction between Fully Homomorphic Encryption and Partial Homomorphic Encryption lies in their operational capabilities and complexity. While PHE supports only one type of operation—either addition or multiplication—FHE allows for arbitrary computations, making it Turing complete and thus more versatile in its application.[3]

However, this flexibility comes at a cost; FHE is more computationally intensive and slower than PHE, which may restrict its practical deployment in resource-constrained environments.[4][5] This complexity also introduces challenges in noise management and operational efficiency that researchers continue to address.

FHE has gained prominence due to its potential to enhance data privacy and security, particularly in sectors that handle sensitive information. Nonetheless, its real-world application remains hindered by substantial computational overhead and storage requirements, which are critical considerations for industries prioritizing efficiency and security.[6][7]

In contrast, PHE, while limited in scope, is often favored for applications where performance and speed are paramount, as it requires less computational power and is easier to implement.[8] The ongoing evolution and research in the field of homomorphic encryption aim to bridge these gaps and expand the usability of FHE in practical scenarios, paving the way for more robust data protection methodologies in the digital age.[9][10]

Types of Homomorphic Encryption

Homomorphic encryption is a cryptographic technique that allows for computations on encrypted data without revealing the underlying plaintext. This approach provides various forms of encryption based on the types of operations they permit. The main categories of homomorphic encryption include partially homomorphic encryption (PHE), somewhat homomorphic encryption (SHE), and fully homomorphic encryption (FHE).

Fully Homomorphic Encryption (FHE)

Fully homomorphic encryption represents the most advanced and versatile form of homomorphic encryption, allowing for an unlimited number of both addition and multiplication operations on encrypted data. This capability enables complex computations without ever needing to decrypt the information, thereby maintaining data security throughout the process.

  • Gentry-BGV Scheme: Based on lattice-based cryptography, this scheme facilitates arbitrary computations but is computationally intensive.
  • Dijk-Gentry-Halevi-Vaikuntanathan (DGHV) Scheme: Another prominent FHE scheme known for its applicability in various secure computation scenarios.

Although FHE provides expansive encryption capabilities, practical implementations may encounter operational constraints, such as noise management and computational resource requirements, which can complicate usage in real-world applications.[1][2][3]
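For a concrete feel, the sketch below uses the open-source TenSEAL library (a wrapper around Microsoft SEAL’s CKKS scheme, chosen here as one accessible FHE implementation; the parameters are standard tutorial values, not a vetted production configuration). It shows the defining FHE property: both addition and multiplication on ciphertexts.

    # pip install tenseal
    import tenseal as ts

    # CKKS: approximate arithmetic over real-number vectors.
    context = ts.context(
        ts.SCHEME_TYPE.CKKS,
        poly_modulus_degree=8192,
        coeff_mod_bit_sizes=[60, 40, 40, 60],
    )
    context.global_scale = 2 ** 40

    enc_a = ts.ckks_vector(context, [1.0, 2.0, 3.0])
    enc_b = ts.ckks_vector(context, [4.0, 5.0, 6.0])

    enc_sum = enc_a + enc_b    # homomorphic addition
    enc_prod = enc_a * enc_b   # homomorphic multiplication
    print(enc_sum.decrypt())   # ≈ [5.0, 7.0, 9.0]
    print(enc_prod.decrypt())  # ≈ [4.0, 10.0, 18.0]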

Partially Homomorphic Encryption (PHE)

Partially homomorphic encryption enables operations of only one type on encrypted data, either addition or multiplication, but not both. This type of encryption can be useful in specific applications where such operations suffice.

  • RSA Encryption: Primarily supports multiplicative homomorphism, allowing for unlimited multiplications of encrypted values without decryption (demonstrated below).
  • ElGamal Encryption: Also facilitates multiplicative operations on ciphertexts.

While PHE schemes are straightforward to implement and computationally less intensive, they are limited in versatility due to their inability to support both addition and multiplication simultaneously.[4][5]
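RSA’s multiplicative homomorphism can be shown in a few lines of plain Python. This is textbook RSA with toy parameters, insecure by design and for illustration only:

    # Textbook RSA is multiplicatively homomorphic: E(a) * E(b) mod n = E(a * b).
    p, q = 61, 53
    n = p * q                   # modulus (3233) -- far too small for real use
    phi = (p - 1) * (q - 1)
    e = 17                      # public exponent, coprime with phi
    d = pow(e, -1, phi)         # private exponent (Python 3.8+)

    def encrypt(m: int) -> int:
        return pow(m, e, n)

    def decrypt(c: int) -> int:
        return pow(c, d, n)

    a, b = 7, 6
    product_ct = (encrypt(a) * encrypt(b)) % n  # multiply ciphertexts only
    assert decrypt(product_ct) == a * b         # decrypts to 42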

Somewhat Homomorphic Encryption (SHE)

Somewhat homomorphic encryption allows a limited number of operations, combining both addition and multiplication, though with restrictions on the total number of computations that can be performed before the security is compromised. This approach strikes a balance between security and performance, making it suitable for certain applications.

  • Paillier Cryptosystem: Supports additive homomorphism, allowing for the addition of encrypted values (strictly speaking, Paillier supports only addition, so it is often classed as PHE rather than SHE).
  • DGHV Scheme: Offers support for both addition and multiplication but is limited in the depth of operations that can be executed.[4][6]

SHE schemes are adequate for many applications where FHE’s computational complexity is unnecessary, but they cannot handle arbitrary operations indefinitely.

Key Differences Between FHE and PHE

Fully Homomorphic Encryption (FHE) and Partial Homomorphic Encryption (PHE) serve different purposes and have distinct characteristics in the realm of data security and cryptography.

Definition and Functionality

FHE allows for arbitrary computations on encrypted data without needing to decrypt it first, thereby enabling the evaluation of any function over the ciphertexts.[7] In contrast, PHE supports only specific operations (either addition or multiplication, but not both) on the encrypted data. This limitation means that while PHE can be useful for certain tasks, its utility is restricted compared to FHE, which is Turing complete and can compute any computable function when combined with basic operations like addition and multiplication.[8]

Complexity and Performance

One of the primary distinctions between FHE and PHE is the complexity involved in their respective encryption and decryption processes. FHE tends to be significantly more complex due to the necessity of adding noise to encrypted data to enhance security, which can lead to performance trade-offs. As a result, FHE operations can be slower than performing equivalent operations without encryption.[9][10] On the other hand, PHE typically offers faster performance due to its simpler operational scope.

Security Trust Models

FHE is designed to require trust only in the underlying mathematics rather than in the system, administrators, or software used.[9] This can provide a greater sense of security for applications handling sensitive data. In contrast, PHE may require a higher level of trust in the host environment due to its limited operational framework, which can potentially expose data to more risks during processing.

Application Suitability

While FHE is not yet fully optimized for high-scale, general-purpose applications, it has begun to find its place in various industries that prioritize data privacy and security. For instance, FHE is advantageous for processing sensitive data in fields like healthcare and finance where data sharing poses privacy concerns.[11][12] PHE, however, remains relevant for specific scenarios where limited operations suffice and where speed is a more critical factor than flexibility or completeness.

Technical Foundations

Fully Homomorphic Encryption (FHE) is a significant advancement in the field of cryptography, allowing computation on encrypted data without needing to decrypt it first. This section discusses the technical foundations that differentiate FHE from Partial Homomorphic Encryption (PHE) and highlights the complexities involved in its implementation.

Historical Background

The conceptual groundwork for FHE can be traced back to 1978 when Rivest, Adleman, and Dertouzos initially proposed the idea. However, it wasn’t until 2009 that Craig Gentry introduced a practical scheme utilizing a lattice-based approach and a technique called “bootstrapping,” marking the transition of FHE from theory to potential real-world applications[13]. Gentry’s innovation has spurred ongoing advancements aimed at enhancing the practicality and efficiency of FHE, particularly as the demand for robust data privacy solutions has escalated in the digital age[13].

Computational Complexity

One of the primary challenges of FHE lies in its computational complexity. Operations performed using FHE are slower than those performed on unencrypted data, often by several orders of magnitude[14]. This performance issue is attributed to the intricate data representations and the additional processing required by CPUs and GPUs to handle FHE computations. The data expansion associated with homomorphically encrypted data further complicates matters, as it generally requires more storage space compared to its unencrypted counterparts[14].

Key Techniques

FHE employs various mathematical constructs, including polynomials and the residue number system, to facilitate encrypted computations. For example, schemes such as the Brakerski-Gentry-Vaikuntanathan (BGV) and Brakerski-Fan-Vercauteren (BFV) utilize polynomial arithmetic to manage encrypted data operations[9]. These methods allow for operations like addition and multiplication to be performed directly on ciphertexts, although they introduce additional layers of complexity regarding coefficient moduli and basis changes during multiplication[15].

Use of Bootstrapping

Bootstrapping is a critical technique in FHE that enables the refreshment of encrypted data, allowing for more complex computations without a significant loss of security. This process mitigates the “noise” that accumulates during encryption operations, ensuring that the data remains valid and operable within the confines of the encryption scheme[13]. The ability to perform bootstrapping is what sets FHE apart from PHE, where only specific operations (addition or multiplication, but not both) can be performed on the encrypted data.
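The mechanics can be pictured with a toy model (illustrative numbers only; real schemes track noise inside lattice ciphertexts, not a simple counter, and bootstrapping itself runs homomorphically without any decryption):

    # A toy model of FHE noise budgets: each homomorphic multiplication
    # consumes budget; bootstrapping refreshes the ciphertext so the
    # computation can continue to arbitrary depth.
    class ToyCiphertext:
        def __init__(self, noise_budget=60):
            self.noise_budget = noise_budget

        def multiply(self):
            self.noise_budget -= 20  # multiplications are expensive
            if self.noise_budget <= 0:
                raise ValueError("noise swamped the plaintext; decryption would fail")

        def bootstrap(self):
            self.noise_budget = 60   # homomorphically re-encrypt, resetting noise

    ct = ToyCiphertext()
    for _ in range(10):              # deeper than the raw budget (60/20) allows
        if ct.noise_budget <= 20:
            ct.bootstrap()           # refresh before the budget runs out
        ct.multiply()
    print("reached depth 10 thanks to bootstrapping")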

Standardization Efforts

Recognizing the complexities involved in FHE, there is a concerted effort to establish global standards and best practices for its implementation. Initiated by Intel and ongoing within the ISO/IEC framework, ISO/IEC 28033 aims to create a multipart standard covering the definitions, foundational techniques, and application standards for FHE, which will facilitate broader deployment and usability across various domains[9].

Current Research and Developments

Research on fully homomorphic encryption (FHE) has significantly evolved in recent years, particularly focusing on improving the efficiency and practicality of FHE schemes. A comprehensive classification of existing acceleration methods for FHE reveals two primary categories: algorithmic acceleration and hardware acceleration[16].

Algorithm Acceleration Schemes

Recent studies emphasize algorithmic-based methods that aim to optimize the number of operations needed for encryption, decryption, and homomorphic operations. For instance, researchers have explored optimizations such as the use of the Number Theoretic Transform (NTT) and Barrett reduction techniques to enhance performance[16]. While some advancements have been made, many algorithmic acceleration schemes face limitations in achieving substantial speedups, with a significant focus on NTT and bootstrapping operations[16]. This indicates a critical need for theoretical breakthroughs in algorithm design, particularly for bootstrapping, to meet practical application requirements effectively.

Hardware Acceleration Schemes

On the hardware side, specialized designs such as Application-Specific Integrated Circuits (ASICs) have demonstrated substantial acceleration effects. These hardware solutions provide enhanced customization flexibility, allowing for optimized data storage and processing capabilities tailored for FHE operations[16]. Studies have shown that ASIC-based approaches are increasingly being integrated into deep neural networks (DNN) to meet practical needs, further demonstrating their utility in accelerating FHE implementations[16]. Furthermore, novel hardware architectures like Processing-in-Memory (PiM) have been proposed, offering a different paradigm by allowing calculations to occur directly in memory, thereby reducing data transmission time and enhancing performance. However, the application of PiM remains largely theoretical and faces challenges in practical implementation[16].

Future Directions

The current landscape of FHE research indicates various potential future directions, including the exploration of new FHE algorithms, the design of hybrid acceleration schemes that combine both algorithmic and hardware-based methods, and advancements in novel hardware architectures[16]. By highlighting these avenues for further investigation, researchers aim to push the boundaries of FHE technology, fostering its application across diverse fields where privacy preservation is crucial[16]. As the research community continues to address these challenges, the efficiency and applicability of fully homomorphic encryption are expected to improve, making it a more viable option for practical use.

Challenges and Limitations

Fully homomorphic encryption (FHE) presents unique challenges and limitations that hinder its widespread implementation, particularly in sensitive fields such as healthcare. One of the most significant obstacles is the substantial computational overhead associated with FHE operations, which can result in slower data processing and analysis compared to traditional methods[17][18]. This overhead arises from the complexity of FHE algorithms, necessitating ongoing research and development to optimize these algorithms for practical use in real-world applications[16][17].

In addition to computational demands, there are challenges related to the storage costs of encrypted data. FHE often requires more storage space than unencrypted data due to the overhead involved in maintaining the encryption scheme, which can limit its feasibility for large datasets typically encountered in healthcare settings[17][18]. Furthermore, the performance of FHE can be affected by factors such as noise growth, which is an unavoidable side effect of operations on encrypted data. Noise accumulation can compromise the accuracy and reliability of results, presenting further barriers to the practical deployment of FHE solutions[19].

To address these challenges, researchers have proposed various acceleration schemes aimed at enhancing the efficiency of FHE operations. These schemes focus on algorithmic and hardware improvements to reduce the processing burden and make FHE more competitive with conventional data processing techniques[16]. Incremental adoption strategies are also recommended, allowing organizations to gradually integrate FHE into their workflows while assessing its impact and making necessary adjustments[18]. Moreover, fostering collaboration between academia and industry can yield fresh insights and promote effective practices for the application of FHE[18].

By Manuel Pérez Yllan

References
[1]: Understanding Homomorphic Encryption: Enabling Secure Data Processing …
[2]: Homomorphic Encryption: Ensuring Data Privacy in Cloud Computing
[3]: Fully Homomorphic Encryption (FHE) | by Arnav Panjla
[4]: Homomorphic Encryption.
[5]: Advantages of Homomorphic Encryption – IEEE Digital Privacy
[6]: Homomorphic encryption – Wikipedia
[7]: Fully Homomorphic Encryption: Introduction and Use-Cases
[8]: The Rise of Fully Homomorphic Encryption
[9]: Intel Continues to Lead Efforts to Establish FHE Standards for …
[10]: Deep dive on fully homomorphic encryption
[11]: A 5-minute guide to Fully Homomorphic Encryption (FHE)
[12]: Fully Homomorphic Encryption: A Case Study
[13]: The Past, Present, and Future of Fully Homomorphic Encryption
[14]: Mathematical Certainty in Security: The Rise of Fully Homomorphic …
[15]: A High-Level Technical Overview of Fully Homomorphic Encryption
[16]: Practical solutions in fully homomorphic encryption: a survey analyzing …
[17]: Fully homomorphic encryption revolutionizes healthcare data privacy and …
[18]: Navigating Fully Homomorphic Encryption For Data Protection
[19]: Fully Homomorphic Encryption performance – MathLock

Best Practices for Developing Secure AI Systems
https://wodan.ai/2025/01/08/secure-ai-systems/
Wed, 08 Jan 2025
Artificial intelligence (AI) is transforming various industries and enhancing the way we live and work. However, its rapid evolution brings the responsibility to ensure that AI systems are developed securely and ethically. Below, we outline essential guidelines for providers creating AI systems, whether developing them from scratch or utilizing pre-existing tools and services.


Understanding the Foundation of Secure AI Development

Building a secure AI system starts with a strong foundation. Whether leveraging third-party tools or crafting an AI model from the ground up, providers must ensure the following:

1. Transparency: Clearly communicate the purpose, functionality, and limitations of the AI system.
2. Compliance: Adhere to local and international laws governing AI use, such as GDPR and AI Act regulations.
3. Data Security: Protect user data through encryption, secure storage, and regular audits to prevent breaches (see the sketch after this list).
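As one concrete, hedged illustration of point 3, here is symmetric encryption at rest with the widely used `cryptography` package; key management is out of scope in this sketch and would sit in a KMS or HSM in production:

    # pip install cryptography
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # store in a secrets manager, never in code
    fernet = Fernet(key)

    record = b'{"user_id": 17, "prompt": "confidential"}'
    token = fernet.encrypt(record)          # safe to persist or ship to storage
    assert fernet.decrypt(token) == record  # round-trips for authorized readers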

Mitigating Bias and Ethical Concerns

AI models are only as unbiased as the data used to train them.
To promote fairness and inclusivity, it is essential to:

1. Evaluate training datasets for diversity and eliminate discriminatory patterns.
2. Continuously monitor outputs to identify and address any unintended biases.
3. Engage in ethical reviews throughout the system’s lifecycle.

Implementing Robust Testing and Monitoring

Secure AI systems require ongoing evaluation. To ensure their safety, providers should:

1. Conduct thorough testing for vulnerabilities before deployment.
2. Implement real-time monitoring to detect anomalies or malicious activity.
3. Regularly update models and algorithms to address newly discovered risks or threats.

Whether you are building an AI system from scratch or using existing tools, following secure development guidelines is crucial for maintaining trust and minimizing risks. By prioritizing transparency, ethics, and robust monitoring, providers can ensure their AI systems serve society responsibly and securely.

How are you incorporating these guidelines into your AI projects?
Any questions? Contact us!
