# GenAI Security: Defending Against Deepfakes and Automated Social Engineering

## Episode Overview  

**QCon AI New York 2025 Chair Wes Reisz** speaks with **Reken CEO** and Google Trust & Safety founder **Shuman Ghosemajumder** about the **erosion of digital trust**.  
Topics covered include:  
- How deepfakes and automated social engineering are scaling cybercrime.  
- Why defenders must move beyond default trust.  
- Using **behavioral telemetry** and **game theory** to counter AI-driven attacks that mimic human behavior.  

---

## 🎯 Key Takeaways  

- **Cybercrime is Evolving Exponentially**  
  Attacks have shifted from traditional, physical threats to **high-scale, AI-powered digital incidents**. Generative AI enables simultaneous, human-like attacks targeting millions.

- **Generative AI Solves the “Last Mile” for Fraudsters**  
  Automated, high-quality social engineering in **voice** and **video** lowers operational costs and bypasses defenses designed for manual human effort.

- **Beware the “Gell-Mann Amnesia” Effect**  
  Users tend to trust **confident output** from AI in fields they don't know, overlooking falsehoods they would catch in their own domain, which leaves them susceptible to sophisticated disinformation.

- **Defense Requires a Zero Trust Model**  
  Treat **every interaction** as potentially hostile; use **behavioral telemetry** to detect anomalies.

- **Security Budgets Should Apply Game Theory**  
  Focus on defending against the threats that carry the **highest business risk** rather than spreading resources thinly over every possible attack.

---

## Context  

Generative AI accelerates both opportunity and risk; concepts like **Zero Trust** and **behavioral analytics** are key to navigating the downside.

---

## Transcript Highlights  

### Setting the Stage  

> **Wes Reisz:** Today’s InfoQ Podcast examines **trust** — deepfakes, disinformation, and digital credibility in the **GenAI era**.  

Wes introduces Shuman, noting his career in **Google’s Trust & Safety**, **Shape Security**, and now **Reken**, where he focuses on protecting online integrity from AI threats.  

---

### The Evolution of Cyber Threats  

**Shuman:**  
- At Google, early work on AdSense revealed how powerful **online platforms influence society**.  
- Founded **Trust & Safety** to tackle advertising fraud, privacy, and policy.  
- By the 2010s, cybersecurity was central to business strategy.  
- Shape Security built ML models to detect criminals who mimic human behavior to commit fraud (fake clicks, stolen-credential logins).  

**Key Insight:**  
Physical crime → cybercrime at **gigantic scale**. AI enables attacks on **billions of people simultaneously**, a scale with no physical-world analogue.

---

### Understanding Digital Scale  

- In cyberspace, an attacker can reach millions instantly.  
- Defenders need **machine learning & automation** to match attacker capacity.  
- Scope difference: protecting a house vs. protecting **a billion homes**.

---

### Inverting the "One Mistake" Paradigm  

- Traditional view: attackers need one success; defenders must be flawless.  
- In large-scale fraud, the paradigm inverts: detecting **one anomaly** can expose millions of linked fraudulent events (see the sketch below).
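
To make the inversion concrete, here is a minimal Python sketch; the event records, field values, and the `signature` heuristic are all hypothetical. The idea is that one manually confirmed fraudulent event exposes every other event stamped with the same automation fingerprint.

```python
from collections import defaultdict

# Hypothetical event records; in practice these would come from logs.
events = [
    {"id": 1, "user_agent": "bot/1.0", "ip_prefix": "203.0.113", "click_interval_ms": 102},
    {"id": 2, "user_agent": "bot/1.0", "ip_prefix": "203.0.113", "click_interval_ms": 118},
    {"id": 3, "user_agent": "Mozilla/5.0", "ip_prefix": "198.51.100", "click_interval_ms": 870},
]

def signature(event):
    """Reduce an event to the traits an automated attack tends to repeat."""
    return (event["user_agent"], event["ip_prefix"], event["click_interval_ms"] // 50)

# Index every event by its signature.
by_signature = defaultdict(list)
for e in events:
    by_signature[signature(e)].append(e["id"])

# One manually confirmed fraudulent event...
confirmed_fraud = events[0]

# ...exposes every event that shares its signature.
campaign = by_signature[signature(confirmed_fraud)]
print(f"Events linked to the confirmed fraud: {campaign}")  # [1, 2]
```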

---

### Deepfakes & AI-Generated Content Proliferation  

**Shuman:**  
- Engagement incentives drive **fake content** creation.  
- MIT study: falsehoods spread **6x faster** than truth online.  
- AI tools make high-quality deepfake video possible in **under an hour**.  
- In feeds (TikTok, YouTube Shorts), **10–30%** may already be AI-generated.

**Practical Example:**  
- Deepfake clips made in minutes via available platforms.  
- No training needed; single images can be transformed into videos with realistic voices.

---

### The Illusion of AGI & Gell-Mann Amnesia  

- LLMs answer confidently, regardless of accuracy.  
- **Hallucinations** are hard to detect without domain expertise.  
- Like Gell-Mann spotting errors in an article about physics, then turning the page and trusting articles on other topics, users distrust AI only where they have expertise and trust it everywhere else.

---

### AI-Generated Code Risks  

- Autonomous AI coding increases code volume faster than humans can review it, shrinking the share of code that gets real scrutiny.  
- Guardrails and supervised workflows are vital; one possible gate is sketched below.  
- Prompting LLMs requires iterative refinement and careful human collaboration.
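
As one illustration of a supervised workflow, here is a minimal Python sketch of a merge gate: an AI-generated patch proceeds only if every automated check passes, and any failure is routed to a human. The specific tools shown (pytest, ruff, bandit) are common choices assumed for the example, not a prescribed stack.

```python
import subprocess

# Automated checks to run against the working tree before an AI-generated
# patch is allowed to proceed. Tool selection is illustrative.
CHECKS = [
    ["pytest", "--quiet"],       # regression tests still pass
    ["ruff", "check", "."],      # style and static analysis
    ["bandit", "-q", "-r", "."], # scan for common security issues
]

def gate_ai_patch() -> bool:
    """Return True only if every automated check passes.

    Failing patches are never merged silently; they go back to a human."""
    for cmd in CHECKS:
        if subprocess.run(cmd).returncode != 0:
            print(f"FAILED: {' '.join(cmd)} -- escalate to human review")
            return False
    return True

if __name__ == "__main__":
    print("merge allowed" if gate_ai_patch() else "merge blocked")
```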

---

### Public Expectations vs. Technical Reality  

- Pop culture shaped unrealistic **AGI expectations**.  
- Broad “AI” branding blurs distinctions between **current tech capabilities** and AGI.  
- Result: public perceives tools like ChatGPT as true AGI.

---

### GenAI for Fraud — Solving the “Last Mile”  

- Fraud content is inherently “hallucinated”; factual accuracy is irrelevant to the attacker.
- GenAI automates human-in-the-loop fraud steps (social engineering, multilingual deepfakes).  
- Makes traditional call-center attacks scalable and more convincing.

---

### Applying Game Theory to Defense  

- Crime won’t disappear — but targeted interventions (like car immobilizers reducing theft) shift attacker economics.  
- Focus defense spending on **high-impact, financially motivated threats**.  
- Avoid spreading budget too thinly over theoretically possible but low-probability risks; the expected-loss sketch below shows the arithmetic.
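
A toy Python sketch of that arithmetic, with entirely invented probabilities and dollar impacts: ranking threats by expected annual loss naturally pushes the low-probability, headline-grabbing scenario to the bottom of the budget.

```python
# Hypothetical threat catalogue; likelihoods and impacts are made up
# to illustrate the calculation, not drawn from real actuarial data.
threats = {
    "deepfake CEO voice fraud":  {"annual_probability": 0.30, "impact_usd": 2_000_000},
    "credential stuffing":       {"annual_probability": 0.80, "impact_usd": 500_000},
    "nation-state zero-day":     {"annual_probability": 0.01, "impact_usd": 10_000_000},
    "phishing of finance staff": {"annual_probability": 0.60, "impact_usd": 750_000},
}

# Expected annual loss = probability * impact; spend where it is highest.
ranked = sorted(
    threats.items(),
    key=lambda kv: kv[1]["annual_probability"] * kv[1]["impact_usd"],
    reverse=True,
)

for name, t in ranked:
    eal = t["annual_probability"] * t["impact_usd"]
    print(f"{name:28s} expected annual loss ~ ${eal:,.0f}")
```

With these invented numbers, the deepfake fraud ($600k expected loss) outranks the catastrophic but unlikely nation-state zero-day ($100k), which is exactly the prioritization the game-theoretic argument calls for.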

---

### Zero Trust & Behavioral Telemetry  

- Historically: authenticated users = trusted.  
- Zero Trust assumes **any entity may be malicious**, enforcing **continuous monitoring**.  
- Fraud prevention principles: monitor **anomalies** in behavior (location, language, time patterns).  
- Automate detection and escalate unusual cases for human review (a scoring sketch follows).  
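
A minimal Python sketch of behavioral anomaly scoring, using a single invented signal (login hour) and a z-score threshold. Production systems combine many such signals (location, device, typing cadence), but the escalate-on-outlier shape is the same.

```python
from statistics import mean, stdev

# Hypothetical per-user telemetry: login hours observed over past weeks.
baseline_login_hours = [9, 10, 9, 11, 10, 9, 10, 12, 9, 10]

def anomaly_score(observed_hour: float, history: list[float]) -> float:
    """How many standard deviations the new observation sits from the
    user's own baseline."""
    mu, sigma = mean(history), stdev(history)
    return abs(observed_hour - mu) / sigma

score = anomaly_score(3, baseline_login_hours)  # a 3 a.m. login
if score > 3.0:
    print(f"z = {score:.1f}: escalate for step-up auth / human review")
else:
    print(f"z = {score:.1f}: within this user's normal behavior")
```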

---

### Observational Skills as Defense  

Drawing parallels to Sherlock Holmes (mapped onto a toy pipeline after the list):  
1. **Observation** → Telemetry capture.  
2. **Deduction** → Rule/model analysis of telemetry.  
3. **Knowledge** → Understanding normal vs. criminal patterns.  
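
A toy mapping of the three steps onto Python, with invented telemetry fields and thresholds: observation is the captured session, knowledge is the rule set, and deduction is applying one to the other.

```python
# 1. Observation: raw telemetry captured for one session (hypothetical fields).
session = {"country": "US", "usual_country": "DE",
           "requests_per_min": 400, "session_age_s": 12}

# 3. Knowledge: what normal vs. criminal behavior looks like, encoded as rules.
RULES = [
    ("geo mismatch",  lambda s: s["country"] != s["usual_country"]),
    ("inhuman speed", lambda s: s["requests_per_min"] > 120),
    ("instant burst", lambda s: s["session_age_s"] < 30 and s["requests_per_min"] > 60),
]

# 2. Deduction: apply the knowledge to the observations.
findings = [name for name, rule in RULES if rule(session)]
print(findings or "nothing anomalous")  # here, all three rules fire
```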

---

### Zero-Day / Negative-Day Attacks  

- A “zero-day” exploit is one defenders first learn about when it is used, with no patch yet available.  
- A “negative-day” attack is only hypothesized and has not yet been observed in the wild; when it arrives, it still lands as a zero-day.  
- GenAI increases the volume and speed of zero-days, so remediation must be automated (a containment sketch follows).
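
A minimal sketch of what “remediation must be automated” can look like, with hypothetical signature IDs and stubbed-out response actions: containment starts at machine speed, and humans are paged afterwards.

```python
# Signatures seen before; anything outside this set is a potential zero-day.
known_signatures = {"sig-001", "sig-002"}

def quarantine(host: str):
    # Stub: a real system would isolate the host at the network layer.
    print(f"[auto] isolating {host} from the network")

def open_incident(signature: str, host: str):
    # Stub: a real system would page on-call and open a ticket.
    print(f"[auto] paging on-call: new signature {signature} on {host}")

def handle_alert(signature: str, host: str):
    if signature not in known_signatures:
        # Zero-day path: no human has to recognize the exploit for the
        # response to begin. Contain at machine speed, review by humans.
        quarantine(host)
        open_incident(signature, host)
        known_signatures.add(signature)

handle_alert("sig-042", "web-07")
```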

---

### Threat Modeling & Aligning With Business  

- Start security planning from **your business model**.  
- Identify which GenAI threats are most relevant.  
- Social engineering often impacts all businesses; for some, it’s existential.  
- Prioritize threats, apply finite budget strategically, anticipate future issues via **war gaming**.

---

## Related Resources  

- 📅 [QCon AI New York 2025 Conference](https://ai.qconferences.com/)  
- 📖 [*The Black Swan* — Nassim Nicholas Taleb](https://en.wikipedia.org/wiki/The_Black_Swan:_The_Impact_of_the_Highly_Improbable)  
- 📄 ["The Spread of True and False News Online"](https://www.science.org/doi/10.1126/science.aap9559) — *Science* journal  

---

## Listen & Subscribe  

Available on:  
- [Apple Podcasts](https://itunes.apple.com/gb/podcast/the-infoq-podcast/id1106971805?mt=2)  
- [YouTube](https://youtube.com/playlist?list=PLndbWGuLoHeZLVC9vl0LzLvMWHzpzIpir&si=Kvb9UpSdGzObuWgg)  
- [SoundCloud](https://soundcloud.com/infoq-channel)  
- [Spotify](https://open.spotify.com/show/4NhWaYYpPWgWRDAOqeRQbj)  
- [Overcast](https://overcast.fm/itunes1106971805/the-infoq-podcast)  
- [Podcast Feed](http://www.infoq.com/podcasts/defending-against-deepfakes-automated-engineering/)  
