Update: We are no longer hosting Western DeepSeek due to costs. This page now serves as a guide for using DeepSeek safely.

Western DeepSeek: A Case Study in Rapid AI Security Response

In January 2025, DeepSeek R1 was released and immediately became the #1 app on the U.S. App Store. Within hours, we observed frontier AI researchers from top labs testing it with sensitive prompts, with all data flowing through China-hosted infrastructure.

What We Did

Within 24 hours of R1's release, we deployed westerndeepseek.com, a US-hosted alternative providing the same open-source model without the security risks. It was a proof of concept: rapid response is possible.

We work on frontier AI alignment research and saw the risk immediately. When we tested DeepSeek, asking about sensitive topics like the Tiananmen Square massacre returned propaganda responses: denials and claims of Western misinformation. This was not a neutral AI system.

The Problem

Every query to DeepSeek created an intelligence windfall for the CCP. Query patterns, research priorities, and safety concerns from frontier researchers were all exposed. This wasn't just about privacy; it was about national security and technological competitiveness.

Current Status

Our emergency hosting has been retired due to cost. It proved what's possible with rapid response, but also highlighted the gap between threat emergence and institutional response.

Where to Use DeepSeek Safely Now

Other platforms now host DeepSeek. If you want to use DeepSeek, these are safer alternatives to the China-hosted original.

Note: Even Western-hosted platforms using DeepSeek should be used with caution. See security concerns below.
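If you prefer not to route queries through any third party at all, another option is to self-host DeepSeek's open-weights model on your own hardware. A minimal sketch using Ollama (assuming it is installed and your machine has enough memory; the model tag and prompt here are illustrative):

```shell
# Install Ollama (macOS/Linux; see ollama.com for Windows)
curl -fsSL https://ollama.com/install.sh | sh

# Pull a distilled DeepSeek-R1 variant small enough for a workstation
# (larger variants exist; pick one that fits your RAM/VRAM)
ollama pull deepseek-r1:7b

# Chat entirely on local hardware -- no query data leaves your machine
ollama run deepseek-r1:7b "Summarize the key ideas behind RLHF."
```

Running locally eliminates server-side logging entirely, though the training-time censorship and biases described in the security concerns below still apply to the model weights themselves.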

Recommended Western AI Models

For sensitive work, we recommend using AI models from Western companies with strong security practices.

Important Security Concerns

Even Western-hosted AI models may be compromised:

1. Sleeper Agents

Research shows that AI models can contain hidden "backdoors" that persist even after safety training. These sleeper agents can behave normally most of the time but activate under specific conditions.

2. CCP Infiltration

Western AI labs have been compromised by CCP spies. Former engineers at major labs have been indicted for stealing AI secrets to aid Chinese companies.

3. Censorship in DeepSeek

Analysis shows DeepSeek censors responses to over 1,156 political prompts, revealing systematic CCP content filtering and bias.

Recommendation: For sensitive research, assume all AI systems, including Western ones, may be compromised. Use appropriate operational security.

What Should Have Happened

Government agencies, defense institutions, or coordinated industry groups should have:

  • Identified the security risk within hours of DeepSeek's release
  • Issued guidance to research communities about data security
  • Deployed vetted alternatives or mitigation strategies
  • Coordinated with tech platforms to manage distribution risk
  • Established clear protocols for future incidents

But that didn't happen. There was no rapid-response infrastructure. No clear authority. No playbook.

Why This Matters

DeepSeek won't be the last incident of this kind. We're entering an era where:

  • AI capabilities advance rapidly and unpredictably
  • Nation-state actors actively seek technological and intelligence advantages
  • The research community needs tools but lacks security infrastructure
  • Response timelines measured in days or weeks are too slow
  • Proactive defense is increasingly necessary, not just reactive mitigation

The Vision: Same Day Skunkworks

Western DeepSeek proved what's possible with rapid response. The real goal is permanent infrastructure for AI security threats: a "Same Day Skunkworks" capability that can:

  • Monitor emerging AI capabilities for security implications
  • Coordinate rapid response across government, defense, and industry
  • Deploy mitigation measures in hours, not days or weeks
  • Establish best practices for secure AI usage in sensitive contexts
  • Eventually: develop offensive capabilities, not just defensive ones

This requires institutional commitment, cross-sector coordination, and recognition that AI security is national security. The gap between threat emergence and institutional response is a vulnerability.

Timeline: 24-Hour Response

HOUR 0 - THREAT EMERGES

DeepSeek R1 Released

Becomes #1 app overnight. Frontier AI researchers and engineers from top labs immediately begin testing with sensitive prompts. All query data flows through China-hosted infrastructure, creating massive intelligence exposure.

~12 HOURS - RISK IDENTIFIED

Security Vulnerability Recognized

We identified the massive security risk: cutting-edge AI research and frontier model insights being shared directly with a CCP-controlled system. Testing revealed propaganda responses, confirming this was not a neutral AI.

24 HOURS - RAPID RESPONSE

Western DeepSeek Deployed

Within 24 hours, we deployed westerndeepseek.com with US-hosted infrastructure. Same model, zero CCP data exposure. Demonstrated what's possible with rapid-response capability.

TODAY - MISSION COMPLETE

Secure Alternatives Available

Emergency hosting retired due to cost. Multiple secure, vetted alternatives now exist. This site serves as a case study and resource directory. Lessons learned, infrastructure gaps identified.

Continuing the Research

AE Studio's alignment research team systematically studies AI alignment failures like these to understand what goes wrong and how to build more robust systems.

We work on detecting, measuring, and addressing harmful biases in AI. See our Wall Street Journal article and corresponding Systemic Misalignment website discussing these issues.