Cybersecurity Archives - Wodan AI - Zero Trust AI
https://wodan.ai/category/cybersecurity/
Empowering innovation with Zero Trust AI, where your data remains yours

Sovereign AI in Europe: what changes when systems go live
https://wodan.ai/2026/01/07/sovereign-ai-in-europe-what-changes-when-systems-go-live/ (Wed, 07 Jan 2026)

“Sovereign AI” is moving from conferences into budgets. At its simplest, sovereign AI means a nation can develop and run AI using its own infrastructure and data, under its own governance. The term got mainstream attention in early 2024 as leaders and vendors started framing AI as a national capability, not just software.

In Europe, the conversation is now tied to concrete programs. AI Factories are being rolled out through the EuroHPC ecosystem, and the EU is also pushing toward much larger AI gigafactories through InvestAI.

The common starting point is infrastructure: where compute sits, which jurisdiction applies, who operates the stack. That starting point matters. In production, it is rarely the deciding factor.

Alongside that, InvestAI was launched with the stated aim of mobilising €200 billion for AI investment, including a €20 billion fund for AI gigafactories. In mid-2025, Reuters reported strong market interest in the gigafactory push, with dozens of bids.

In December 2025, the European Commission published a Memorandum of Understanding on AI Gigafactories, and the EIB described its role in providing financing structures and advisory support.

This is why “sovereign AI” is no longer just language. It is becoming architecture, funding, and vendor selection.

But sovereignty is tested in practice when teams monitor production, debug failures, investigate incidents, integrate third-party services, and move fast under pressure. That is when sensitive data is most likely to appear in plaintext, even if residency rules are followed.

This view is shaped by our work at Wodan AI, where we focus on keeping sensitive data protected during computation, because that’s where governance usually gets tested.

This is not a moral argument about good or bad practices. It is about operational reality.

If a system needs plaintext to compute, plaintext will spread. Not because teams are careless, but because modern stacks include many tools that implicitly assume visibility.

This is the gap many sovereign AI programs still under-specify: what happens to sensitive data during processing.

AI makes the distinction between storing data and using it unavoidable, because the value is created when data is used. Usage expands the number of systems involved, the number of integrations, and the number of people who can affect exposure.

For business leaders, the symptoms are familiar. Compliance and legal reviews get slower because boundaries are hard to explain end-to-end. Vendor risk becomes harder to manage because the real system includes tooling outside the core platform. Production rollouts stall because exceptions multiply.

This is the point where sovereignty moves from policy to operating model.

Confidential computing is commonly described as protecting data during processing, typically using hardware-based trusted execution environments.

Fully homomorphic encryption (FHE) is another path, allowing computation over encrypted data without decrypting it first.

These are not interchangeable approaches, and a business audience does not need a deep technical comparison to understand the key point: both aim to reduce how often sensitive data must be exposed in plaintext to make systems work.
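
For teams that do want a concrete picture of what “computation over encrypted data” means, the short sketch below uses TenSEAL, an open-source homomorphic encryption library for Python. The library choice and the toy payroll figures are our own illustration, not something the programmes above prescribe.

```python
# Minimal sketch of computing on encrypted data with the CKKS scheme (via TenSEAL).
# Illustrative only: key management, serialization, and model integration are omitted.
import tenseal as ts

# The data owner creates the encryption context and keeps the secret key.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40

# Sensitive values are encrypted before they leave the owner's boundary.
salaries = ts.ckks_vector(context, [52000.0, 61500.0, 48250.0])

# An untrusted processing service can compute on the ciphertext without seeing plaintext:
# here, a 3% uplift plus a fixed 500 bonus, applied element-wise.
adjusted = salaries * [1.03, 1.03, 1.03] + [500.0, 500.0, 500.0]

# Only the holder of the secret key can read the result (CKKS results are approximate).
print(adjusted.decrypt())  # ~[54060.0, 63845.0, 50197.5]
```

The shape of the workflow is the point: encrypt before data leaves your boundary, compute on ciphertext, and decrypt only where the key legitimately lives.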

That reduction has direct executive value. It shrinks the trust boundary. It reduces the number of tools and roles that need raw access. It makes governance more durable when teams are under operational pressure.

In a sovereign AI context, that is not a nice-to-have. It is the difference between “sovereign on paper” and “sovereign in production.”

The next step is to treat runtime data protection as a first-class requirement, not a technical footnote.

If plaintext remains the default, “sovereign” becomes harder to defend the moment systems go live.

Sources:

European Commission Digital Strategy, “Seven consortia selected to establish AI Factories…” (Dec 10, 2024)

EuroHPC JU, “The EuroHPC JU Selects Additional AI Factories…” (Mar 12, 2025)

European Commission Digital Strategy, “Second wave of AI Factories set to drive EU-wide innovation” (Mar 12, 2025)

European Investment Bank, “EIB Group and European Commission join forces to finance AI gigafactories” (Dec 4, 2025)

World Economic Forum, “Sovereign AI: What it is, and 6 strategic pillars for achieving it” (Apr 25, 2024)

NIST, “Fully-Homomorphic Encryption (FHE)” (Privacy-Enhancing Cryptography project page)

Ready to see encrypted-in-use AI in action? Book a demo of the Wodan AI solution today.


 

Podcast: AI Security Hype vs Reality
https://wodan.ai/2025/11/10/ai-security-hype-vs-reality-richard-stiennon/ (Mon, 10 Nov 2025)

AI Security Hype vs Reality: The Year Attackers Got Smarter

For years, AI in cybersecurity has been more theater than substance. Dashboards that promised “self-learning defense.” Vendors who sold Bayesian math as “machine intelligence.” Slide decks louder than their code.

Then came November 30, 2022. The day language models learned to think in sentences—and the security world quietly crossed a line it still hasn’t processed.

On the latest episode of the Secure AI Podcast, Richard Stiennon, founder of IT Harvest and one of cybersecurity’s longest-serving analysts, sat down with Bob Dubois, CEO of Wodan AI, to talk about what changed, who’s losing ground, and how fast the next phase is moving.

 

The End of “Good Enough” Security

For Stiennon, the biggest story isn’t that AI entered cybersecurity. It’s that AI is now good enough to make “good enough” dangerous.

“Before large language models, vendors were faking it,” he says. “They hired statisticians, not data scientists. Most of what they called AI was glorified anomaly detection.”

That pretense collapsed once models could read machine language, interpret logs, and summarize incidents faster than any analyst. Suddenly, triage—the unglamorous backbone of every Security Operations Center—could be automated end to end.

The result: systems that don’t just analyze alerts but act on them, isolating infected hosts or resetting credentials in seconds.

And that, Stiennon warns, changes the calculus.
“You can look at every single log now. You can do something about it. For a human, that’s burnout in 45 minutes. For an AI agent, it’s continuous.”

 

Attackers Didn’t Wait

The defenders weren’t the only ones watching.

AI has also become the great equalizer for attackers. Open-source models, stripped of ethical filters, can now build and execute full attack chains—from reading CVE feeds to generating live exploits.

“You can already drop a CVE into an LLM and get an exploit in minutes,” says Stiennon. “That collapses the time from disclosure to weaponization. If the exploit is the fuse, the bomb is already built.”

The implication is brutal: while companies still debate compliance checklists, attackers are automating reconnaissance, lateral movement, and data exfiltration. Entire intrusions can now unfold faster than a SOC shift change.

“Mean time to breach used to be months,” he says. “Now, it’s minutes.”

 

SOC Automation or Extinction

Stiennon’s advice for CISOs is blunt.

“Engage one of the 39 SOC automation platforms right now. Because next year, you’ll replace your SOC entirely with automation.”

By mid-2026, he predicts, manual SOCs will be legacy infrastructure.
The future isn’t just about alert reduction—it’s about removing the human bottleneck entirely. Teams that adapt will reassign their analysts to higher-order tasks like vulnerability management or DLP (data loss prevention) redesign. Those that don’t will drown in false positives until something slips through.

 

The Analyst’s View: Innovation vs Credibility

The conversation turns reflective when Dubois asks how large enterprises should navigate the explosion of AI-labeled vendors.

“Big analyst firms cover about 3% of the industry,” Stiennon notes. “I track over 4,300 vendors. Gartner lists 134. So the innovation pipeline is invisible to most buyers.”

That blind spot, he argues, isn’t neutral—it’s dangerous. Enterprise procurement rewards size over novelty, forcing CISOs to buy “safe” rather than “smart.” And the $30,000 pay-to-play gatekeepers keep younger, better technologies off the radar.

“Every CISO says they want innovation,” he says, “but their RFPs are written to exclude it.”

 

Data in Use: The Blind Spot No One Talks About

If AI is rewriting defense, encryption remains the part still written in invisible ink.

“Companies think SSL saves them,” Stiennon says. “They see the lock icon and assume their data is safe. But SSL ends at the server. After that, it’s naked.”

In other words, most organizations encrypt data in motion and at rest—but not in use, the very moment it’s most vulnerable.

For Wodan AI, that’s where the industry must move next: keeping data encrypted while it’s being analyzed or computed on.
“Zero Trust means you hold the keys,” Stiennon adds. “No one touches your data unless you say so. That’s the real definition.”

 

Guardrails, DLP, and the New Arms Race

The market’s response to AI risk is already visible.
What began as “AI guardrails” has evolved into AI-powered DLP, designed to stop sensitive data from slipping into models or training pipelines.

It’s also becoming lucrative. Stiennon’s tracking shows 173 AI security startups and 12 acquisitions in the past year—representing $2.8 billion in returns on $2.5 billion invested.

The math tells its own story: security may be late to AI, but capital is catching up.

 

The Final Metric: Speed

Asked which metric every CISO should add to their dashboard tomorrow, Stiennon answers instantly.

“Mean Time to Detect. And most don’t even know it.”

He believes that as automation compresses attack timelines, detection speed becomes the defining indicator of survival.

“Attackers have minutes. You need seconds,” he says. “That’s where we’re headed.”

 

Beyond the Hype

For all the talk of hype, Stiennon ends with optimism. The tools are finally good enough to change the equation—if leaders move now.

The era of good enough is over. The new question isn’t whether AI belongs in security. It’s whether security can exist without it.

 

Listen to the full episode:
AI Security Hype vs Reality | Wodan AI Podcast

About the Guest:
Richard Stiennon is the Founder and Chief Research Analyst at IT Harvest and author of Security Yearbook.

About the Host:
Bob Dubois is the CEO of Wodan AI, enabling privacy-preserving computation through encrypted-in-use technology for data-driven industries.

Homomorphic Encryption vs Trusted Execution Environments: What Recent Attacks Reveal
https://wodan.ai/2025/10/10/blog-homomorphic-encryption-vs-trusted-execution-environments/ (Fri, 10 Oct 2025)

Trusted Execution Environments (TEEs) were once the crown jewel of confidential computing. Intel SGX, AMD SEV, and ARM TrustZone promised to protect data during processing by isolating it inside secure hardware enclaves. For years, this model has powered everything from cloud-based machine learning to fintech data analytics.

But recent months have delivered a wake-up call. Two major academic attacks – WireTap (Georgia Tech & Purdue) and Battering RAM (KU Leuven/COSIC & University of Birmingham) – show that when hardware trust breaks, the entire TEE model collapses.

 

When “trusted” hardware turns fragile

In WireTap, researchers used a €50 memory-bus interposer to recover Intel SGX’s ECDSA attestation key. Once stolen, that key allows attackers to impersonate genuine hardware, bypass attestation checks, and silently read enclave data.

Battering RAM, a separate but complementary attack, exploited how data flows between CPU and memory, undermining integrity guarantees in both Intel SGX and AMD SEV.

Both reveal a shared truth: when secrets are decrypted inside the enclave, they become vulnerable to physical tampering, speculative execution flaws, and supply-chain exploits. Confidentiality depends entirely on hardware that can now be cloned, probed, or replayed.

 

Why homomorphic encryption changes the equation

Homomorphic encryption (HE) takes a different stance. Instead of relying on hardware isolation, HE keeps data encrypted at all times, even while computations are performed.

  • Data never leaves ciphertext form, eliminating the “cleartext window” TEEs expose.
  • Security rests on mathematical hardness, not on vendor firmware or attestation certificates.
  • Even if an attacker gains full physical access to the machine, all they see is noise.

In short, when enclaves can be breached, algebraic privacy prevails.

Homomorphic encryption trades trust in silicon for trust in math. That shift matters now more than ever.

 

The performance myth is fading

Critics often cite HE’s computational overhead as a barrier. That was true a decade ago. But recent progress – from lattice-based optimizations to GPU or FPGA-based acceleration – is changing the landscape.
For workloads such as statistical inference, data aggregation, or credit risk scoring, modern HE can now achieve practical latency while retaining cryptographic confidentiality.

As attacks erode confidence in hardware isolation, the cost gap between TEE and HE becomes less about performance and more about risk appetite. What is the price of a broken attestation chain compared to a few extra milliseconds of compute?
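
To make that concrete, here is a hedged sketch of one such workload: a small linear risk score evaluated entirely on ciphertext. The TenSEAL library, the toy weights, and the feature values are our own assumptions for illustration; the attacks and papers cited above do not prescribe a particular toolkit.

```python
# Sketch of a linear risk score computed on ciphertext (CKKS via TenSEAL).
# Illustrative only: real deployments need key management, input validation,
# and encryption parameters sized to the model's multiplicative depth.
import tenseal as ts

context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()  # needed for the rotations behind the dot product

# Client side: encrypt the applicant's features (e.g. income ratio, utilisation, age band).
features = ts.ckks_vector(context, [0.42, 0.77, 0.31])

# Model owner: the weights can themselves stay encrypted, so neither side exposes plaintext.
weights = ts.ckks_vector(context, [1.5, -2.0, 0.8])

# Untrusted compute: encrypted dot product plus a bias term, never decrypted here.
score = features.dot(weights) + [0.25]

# Only the secret-key holder can read the final risk score.
print(score.decrypt())  # ~[-0.412]
```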

 

Toward hybrid confidential computing

The future is likely hybrid: selective use of TEEs for orchestration or hardware acceleration, wrapped by HE for the sensitive core of computation.

Research projects such as Integrating Homomorphic Encryption and Trusted Execution (arXiv, 2023) and TEE FHE propose models where enclaves handle performance-critical steps but never see decrypted data. This layered design treats the TEE as a convenience, not as the root of trust.

At Wodan AI, we see this principle in action. Our encrypted-in-use architectures apply homomorphic encryption for active data protection, minimizing reliance on external trust anchors. If the hardware fails, the data still holds.

 

The takeaway: trust less, encrypt more

The recent breaches make one thing clear: hardware trust is not absolute trust.

When your security depends on a single component – firmware integrity, attestation chain, or bus isolation – one breach can undermine everything.
Homomorphic encryption shifts that dependency away from opaque silicon and into transparent, verifiable cryptography.

For teams building systems in finance, healthcare, or defense, this is no longer a theoretical choice. It’s an operational one.

If you handle sensitive data, ask yourself:

Would your confidentiality survive if the hardware itself were compromised?

If not, it’s time to explore encrypted-in-use computing.

 


Ready to see encrypted-in-use AI in action? Book a demo of the Wodan AI solution today.

 

 

Wodan AI Partners with Tunnel ID to Deliver Privacy-Preserving Identity Verification
https://wodan.ai/2025/08/20/wodan-ai-tunnel-id-privacy-preserving-identity-verification/ (Wed, 20 Aug 2025)

We’re excited to announce our partnership with Tunnel ID, bringing together two breakthrough technologies to create the most secure, private, and user-friendly identity verification solution.

The Challenge We’re Solving Together

Identity fraud has reached crisis levels globally, with cybercrime costs hitting $9.5 trillion worldwide in 2024. Deepfake fraud attempts have exploded by 2,137% over the last three years. Traditional identity systems are failing under pressure from AI-powered attacks.

Organizations face an impossible choice: security, user experience, or privacy. You can optimize for two, but the third always suffers. This partnership changes that.

What Tunnel ID Brings

Tunnel ID revolutionizes identity verification by asking “are you actually there?” instead of “do you have the right credentials?”

Their technology verifies real human presence without storing biometric data. Users simply show their face for instant verification. No passwords, no codes, no vulnerable databases, making Tunnel ID a full identity engine, not just a verification layer.

Key capabilities:

  • Face-based authentication in under 2 seconds
  • Account recovery without email or SMS
  • KYC compliance without data storage
  • Protection against deepfakes and AI attacks

What Wodan AI Brings

Wodan AI has developed encrypted AI processing that analyzes sensitive data while it remains mathematically protected. Our technology enables AI to process biometric patterns and risk indicators without ever seeing raw data.

This solves the hidden vulnerability: even when biometric data isn’t stored, it’s typically processed unencrypted, creating exposure windows for attackers.

Our capabilities:

  • Real-time fraud detection on encrypted data
  • Risk scoring through protected computation
  • Cross-border processing while maintaining privacy
  • Future-proof protection against quantum threats

The Integrated Solution

Together, we create complete privacy-preserving identity verification. Tunnel ID establishes real presence while Wodan AI processes all signals through encrypted AI.

The user experience stays simple: show your face, get access. But the security transforms completely. Data gets encrypted immediately, AI analyzes encrypted patterns, and verification happens through mathematical proofs.

Industry Applications

Financial Services: Instant digital banking onboarding with encrypted fraud detection and GDPR compliance without data storage.

Healthcare: Patient authentication that maintains data protection while enabling seamless care access.

Enterprise: Secure authentication for employees and AI agents through the same encrypted framework.

E-commerce: Identity verification without collecting personal data, reducing liability while improving experience.

Market Impact

This partnership solves the “security triangle” for the first time. Organizations can now optimize security, privacy, and user experience simultaneously.

The global identity verification market is growing at 14.4% annually, constrained by these tradeoffs. Our solution removes those constraints entirely.

Ready to Transform Your Identity Stack?

Whether you’re in financial services, healthcare, or enterprise security, we’d love to show you how this partnership can transform your approach to digital identity.

Schedule Demo today.

The revolution in privacy-preserving identity verification starts with a conversation.

About Tunnel ID: A presence-based identity verification technology, enabling secure authentication with zero biometric footprint.

Privacy-First AI for Health: How Ciphertext Keeps Saving Lives
https://wodan.ai/2025/07/31/ai-and-cybersecurity-in-healthcare/ (Thu, 31 Jul 2025)

A pager goes off in the cardiac unit at 01:37. A nurse has to send a CT scan to the triage model before the surgeon wheels a patient into theatre. In most hospitals that file leaves its secure folder, lands on a server, runs through the model in plain text, then returns with a prediction. At that exact moment it is readable by anyone who has slipped past the perimeter. That gap – sometimes just a few milliseconds – costs the healthcare industry an average of USD 9.77 million every time an attacker makes it through.

Clinicians cannot wait for perfect privacy. They need results in seconds, regulators need proof that personal data stays safe, and researchers need ways to share insights without shipping raw genomes across borders. Fully homomorphic encryption, or FHE, offers a path that meets all three demands.

What makes FHE different

Traditional “encryption at rest” works like a bank vault that must be opened for every withdrawal. FHE is a safe-deposit box that lets the bank count the money without opening the lid. Algorithms compute directly on ciphertext, so scans, lab values, and DNA strings never revert to plain text during processing. That single shift closes the last open window in the security stack, the moment when today’s AI workloads are most exposed.

The Wodan AI approach in practice

Wodan keeps both the input data and the machine-learning model in ciphertext from ingest to result. Each inference is cryptographically signed, proving that only approved code touched the record. Hospitals plug the runtime into existing containers without rewriting models, and an immutable log rolls every event into an audit file that satisfies HIPAA and GDPR evidence requests. The architecture follows a Zero Trust posture – every user, device, and workload must prove identity before access.

Four stories from the ward and the lab

  • Encrypted diagnostics. A radiology team runs a stroke-detection model on CT images while the files stay encrypted. The model flags intracranial bleeds in under a second, and the record never appears in plain text. The hospital meets the HIPAA technical safeguard for “encryption during transmission and processing” and still shaves minutes off door-to-scan time.
  • Genomic discovery across borders. A pharma consortium wants to scan population-scale DNA pools for rare mutation signatures. With FHE they push the algorithm to each site, run the computation on ciphertext, and pool only the encrypted results. No raw sequence leaves the originating country, so the study clears GDPR transfer rules.
  • Collaboration without disclosure. Two clinics need to refine a sepsis-prediction model but cannot share patient records. Each runs training rounds locally on encrypted data, exchanges only encrypted gradients, and converges on a stronger model while every record stays on-prem (see the sketch after this list).
  • Remote monitoring that respects privacy. Wearable devices stream vitals to a predictive engine hosted in the cloud. FHE keeps the data encrypted in transit and at the point of inference, so the provider can alert clinicians to deterioration trends without handling readable telemetry.
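
To make the collaboration scenario a little more concrete, the sketch below shows encrypted gradient aggregation using the open-source TenSEAL library. It is an illustration under assumptions, not Wodan’s runtime: in practice the secret key never sits with the aggregator, and key distribution, orchestration, and model-specific details need real engineering.

```python
# Sketch: two clinics exchange only encrypted gradients; the aggregator sums ciphertexts
# and never holds the secret key. Illustrative only - splitting the context between the
# clinics, the aggregator, and the decrypting party is the hard part and is omitted here.
import tenseal as ts

# In practice the clinics (or a key authority) hold the secret key and the aggregator
# receives only a public copy of the context.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40

# Each clinic computes a local gradient on its own patients and encrypts it.
grad_clinic_a = ts.ckks_vector(context, [0.10, -0.02, 0.05])
grad_clinic_b = ts.ckks_vector(context, [0.08, 0.01, -0.03])

# The aggregator averages the updates without ever seeing a plaintext gradient.
averaged = (grad_clinic_a + grad_clinic_b) * [0.5, 0.5, 0.5]

# Only the key holder decrypts the averaged update and applies it to the shared model.
print(averaged.decrypt())  # ~[0.09, -0.005, 0.01]
```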

Bringing it to your stack

Start by encrypting the dataset at the edge so no record leaves your network unprotected. Containerise the model with Wodan’s runtime and benchmark latency against your clinical-workflow target – most inference tasks complete well under a second on common GPU nodes. Map the immutable log to HIPAA sections 164.308 and 164.312, then point your GDPR data-protection impact assessment to the same evidence. Scale on a pay-per-use plan that grows from proof of concept to nationwide roll-out without capital hardware spend.

The takeaway

FHE changes the default from “decrypt to innovate” to “keep it encrypted and go faster.” It protects patients in the moment, satisfies auditors on demand, and frees researchers to collaborate without fear of exposure. If you want to see a stroke-triage model run on ciphertext – or test another workload – book a short demo. We will run the inference inside your environment, and no personal data will ever leave your premises.

 

Ready to see encrypted-in-use AI in action? Book a demo of the Wodan AI solution today.

 

 

Privacy paradox in AML regulation: Share data while not exposing PII
https://wodan.ai/2025/05/29/privacy-paradox-in-aml-regulation/ (Thu, 29 May 2025)

When EU legislators signed off on the Anti-Money-Laundering Regulation (AMLR) and Directive 6 (AMLD6) last year, the headline was clear:

“tear down the silos and let financial-crime data flow.”

The fine print—Article 75—goes even further, allowing (and sometimes obliging) banks, PSPs, crypto venues, casinos, and even luxury goods dealers to swap customer-level intelligence in private-to-private partnerships. The package is already in force (as of 10 July 2024) and will become fully applicable from 10 July 2027, according to Finnius Advocaten.

Great news for investigators. A nightmare for privacy officers.

Below, we unpack the new rule set, the GDPR paradox it creates, and how Wodan AI’s encrypted-in-use platform, Dropnir, enables you to comply with both without ever decrypting your data.

 

What changed?

AMLR
  • Legal form: Regulation (direct effect)
  • Key dates: in force 10 July 2024; fully applicable from 10 July 2027
  • Headlines: single EU rule-book, €10k cash cap, Article 75 information-sharing partnerships

AMLD6
  • Legal form: Directive (to be transposed into national law)
  • Key dates: same as the AMLR
  • Headlines: harmonised offences & penalties

 

A political deal on the package was struck on January 18, 2024 (Finnius Advocaten).

 

Article 75 in one paragraph

“Members of partnerships for information sharing may share information where strictly necessary to meet their AML/CFT duties.” (Better Regulation)

  • What’s shareable? Customer identifiers, transaction metadata, risk scores, and alert reasons.
  • With whom? Any obligated entity, including national FIUs, across borders.
  • Guard-rails? DPIA, supervisory notification, civil liability safe harbour.

 

The Privacy Paradox

 

  • AMLR wants broad datasets and five-year retention; GDPR insists on data minimisation and “erase when no longer necessary”.
  • AMLR wants no customer consent (tipping-off risk); GDPR insists on a valid lawful basis and transparency.
  • AMLR wants cross-border pooling; GDPR insists on purpose limitation and transfer safeguards.

 

Practitioners are already referring to this as the GDPR-AML dilemma: two EU flagships pulling in opposite directions (Mondaq).

 

Why PETs beat “trust me” NDAs

 

Stopping money-laundering networks means correlating patterns across institutions—but nobody wants another central data lake. Privacy-Enhancing Technologies (PETs)—federated queries, fully homomorphic encryption (FHE), secure enclaves—let firms compute on each other’s data without copying or decrypting it. Regulators have endorsed the approach, from Singapore’s COSMIC platform to the information-sharing utilities operating under the US Patriot Act; Article 75 now gives the EU a legal footing to do the same, according to William Fry.

 

Where Wodan AI fits

 

Dropnir: encrypted-in-use by design

Our containerised API layer keeps both the request and the response encrypted during processing. Peers only ever see ciphertext; Wodan AI never sees anything.

 

Getting ready for 2027: a four-step playbook

 

  1. Stand up a sandbox
    Spin up Dropnir and load hashed customer keys + minimal features to pass the “strict necessity” test (see the pseudonymization sketch after this playbook).
  2. Run a joint DPIA
    Map Article 75 controls line-by-line to GDPR Art 35 before you share a single byte.
  3. Federate, don’t replicate
    Keep computations where the data already lives; pay only for the queries you run.
  4. Log everything: If you can’t prove why, when, and what you shared, expect fines.
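
Step 1’s “hashed customer keys” can start as simply as keyed pseudonymization agreed between the partners. The sketch below uses only Python’s standard library; the shared secret, the fields, and the normalization rules are assumptions for illustration, not something Article 75 or Dropnir prescribes.

```python
# Sketch: derive stable, keyed pseudonyms for customer identifiers before anything is shared,
# so partners can match records without exchanging raw PII. Illustrative only: the shared
# secret, its rotation, and the normalization rules must be agreed by the partnership and
# documented in the joint DPIA.
import hashlib
import hmac
import unicodedata

PARTNERSHIP_SECRET = b"rotate-me-and-keep-me-in-a-vault"  # placeholder, never hard-code in production


def normalize(value: str) -> str:
    """Canonicalise identifiers so every partner hashes exactly the same form."""
    return unicodedata.normalize("NFKC", value).strip().lower()


def pseudonymize(customer_id: str, date_of_birth: str) -> str:
    """Return a keyed pseudonym; without the secret it cannot be reversed or re-derived."""
    message = f"{normalize(customer_id)}|{normalize(date_of_birth)}".encode("utf-8")
    return hmac.new(PARTNERSHIP_SECRET, message, hashlib.sha256).hexdigest()


# Both institutions run the same function locally and share only the pseudonyms
# plus the minimal risk features needed for the query.
print(pseudonymize("NL12ABCD0123456789", "1984-07-10"))
```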

 

Key take-aways

 

  • Timeline: Rules are live now; mandatory from July 10, 2027.
  • Opportunity: Private-private sharing to unmask mule networks.
  • Risk: GDPR conflict on minimisation, consent, and retention.
  • Fix: End-to-end encrypted federated analytics with Wodan AI Dropnir.

 

Ready to pilot a secure Article 75 partnership?

Book a 30-minute demo and discover how Dropnir keeps your AML models effective and your customer data secure and protected.

Any questions? Contact us

Managing Cybersecurity and Privacy Risks in the Age of Artificial Intelligence: Launching a New Program at NIST
https://wodan.ai/2024/12/31/managing-cybersecurity-and-privacy-risks-in-the-age-of-artificial-intelligence-launching-a-new-program-at-nist/ (Tue, 31 Dec 2024)

The rapid advancement of technology has made artificial intelligence (AI) a transformative force across various industries worldwide.

While AI presents numerous opportunities, it also brings significant challenges related to cybersecurity and privacy. Recognizing the need to address these issues, the National Institute of Standards and Technology (NIST) has launched a new program aimed at managing the cybersecurity and privacy risks associated with AI.

New Program at NIST

The Growing Need for AI-Specific Cybersecurity Measures

As AI continues to spread across sectors, it is essential that cybersecurity practices evolve accordingly. The complexity and adaptability of AI systems create unique vulnerabilities that traditional cybersecurity frameworks may not adequately address. Cybercriminals can exploit these vulnerabilities, leading to data breaches, manipulation, and violations of privacy.

NIST’s new program focuses on developing robust standards and guidelines tailored specifically for AI technologies. This initiative will help organizations understand how to safeguard their AI systems, ensuring they remain secure, ethical, and compliant with privacy best practices.

Key Focus Areas of NIST’s New Program

The new program at NIST will concentrate on several key areas:

1. Risk Management Framework: NIST will provide a comprehensive risk management framework to help organizations assess and mitigate cybersecurity and privacy risks specific to AI.

2. AI Governance and Ethics: The program will emphasize the importance of governance, transparency, and ethical considerations when deploying AI systems.

3. Collaborative Research and Development: NIST will promote collaboration between the public and private sectors to drive innovation in AI security.

By focusing on these critical areas, NIST’s initiative aims to create a safer environment for the development and deployment of AI technologies, while prioritizing privacy protections.

Wodan leverages its expertise in AI governance and ethics to support NIST goals by developing robust frameworks that ensure responsible AI deployment aligned with ethical principles and regulatory compliance. By collaborating on research and development initiatives, Wodan brings interdisciplinary insights to help enhance NIST’s efforts in creating standards that promote transparency, fairness, and accountability in AI systems. This partnership fosters innovation while addressing societal and ethical challenges, paving the way for trustworthy and equitable AI technologies.

Any questions? Contact us!

The Role of Cybersecurity in AI
https://wodan.ai/2024/12/17/the-role-of-cybersecurity-in-ai/ (Tue, 17 Dec 2024)

Protecting AI Systems from Cyber Threats

Artificial Intelligence (AI) has revolutionized various industries by enabling automation, predictive analytics, and improved decision-making.

However, as AI systems become more complex and widespread, they also become attractive targets for cyberattacks.
Cybersecurity is essential for protecting AI systems from vulnerabilities that could compromise their performance and integrity.
Hackers may exploit weaknesses in AI algorithms, manipulate data, or target the infrastructures that support AI models.
Implementing robust cybersecurity measures is crucial to ensure the confidentiality, integrity, and availability of AI-driven systems.

Addressing Data Privacy and Integrity

AI systems depend heavily on data for training and decision-making.
Cybersecurity measures protect this data from unauthorized access, breaches, and corruption. If hackers manipulate training datasets, the AI model might generate biased or harmful outcomes—a phenomenon known as data poisoning.
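
One simple and widely used control against silent corruption of training data is an integrity manifest: record a cryptographic hash of each approved dataset file, and verify those hashes before every training run. The sketch below is a minimal illustration using Python’s standard library; the file paths and manifest format are our own assumptions, not a prescribed mechanism.

```python
# Sketch: detect tampering with training data by checking files against a signed-off
# manifest of SHA-256 hashes before training starts. Illustrative only.
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Stream a file and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_dataset(manifest_path: Path) -> bool:
    """Return True only if every file in the manifest still matches its recorded hash."""
    manifest = json.loads(manifest_path.read_text())
    intact = True
    for relative_path, expected in manifest.items():
        if sha256_of(manifest_path.parent / relative_path) != expected:
            print(f"Integrity check failed for {relative_path}")
            intact = False
    return intact


# Example: refuse to train if the data has drifted from the approved snapshot.
if not verify_dataset(Path("data/manifest.json")):
    raise SystemExit("Training aborted: dataset no longer matches the approved manifest.")
```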

Privacy concerns also arise when sensitive information used in AI processes is exposed. Effective encryption techniques, access controls, and secure data transmission protocols are vital for ensuring data privacy and preventing unauthorized interference.

Preventing AI Model Theft

Developing AI models requires significant resources, time, and expertise.

Cybercriminals may attempt to steal these models to gain unauthorized access to proprietary technology. Cybersecurity tools like watermarking and secure APIs help protect intellectual property by restricting unauthorized use or reproduction of AI algorithms.

Ensuring Trust in AI

To maintain trust in AI systems, users must be confident that these systems operate securely and reliably.

Cybersecurity frameworks ensure that AI systems cannot be manipulated to produce misleading results or perform malicious tasks.

By integrating security measures into the development and deployment of AI, organizations can mitigate risks while fostering trust among stakeholders. In conclusion, cybersecurity is not optional but a necessity for the safe and reliable advancement of AI technology.

By addressing emerging threats, it ensures that AI can thrive securely in our digital world.

Any questions? Contact us
