# Truth Filter Templates


Tools to Spot Fake News in the Iran Conflict (and Any Crisis)


These are copy-paste templates designed specifically for X. They are short, visual, and let anyone plug in real numbers from official sources (defense.gov, state.gov, idf.il, the NCRI at ncr-iran.org, or CENTCOM videos).


Post them as threads or images (screenshot the formatted text). Each one includes a quick scoring system so people can decide “real or fake” in under 30 seconds.


How to Use Any Template


  • Copy the template text.

  • Replace the [BRACKETS] with today’s numbers (from official sources only — never regime media).

  • Post on X with #IranTruthFilter or #TruthFilter.

  • Tag friends and add: “Run the numbers daily — propaganda dies when you verify.”

  • Encourage replies: “What score did you get today?”


Template 1: Bayesian Quick-Check (“Is This Regime Claim Fake?”)


🚨 IRAN TRUTH FILTER #1 – Bayesian Quick-Check

Start with 50/50 guess on any regime claim.

Regime claim today: “[paste the claim, e.g. ‘We struck Tel Aviv successfully’]”

Evidence added (official only):

• CENTCOM (centcom.mil) or IDF (idf.il) video or statement? +40 points

• NCRI (https://www.ncr-iran.org/en/) or verified diaspora video? +30 points

• Satellite confirmation? +20 points

• Zero official match? –50 points

FINAL SCORE: [your number] / 100

If score < 20 → Almost certainly FAKE (posterior <0.10)

Share this & run it daily! #TruthFilter

How to use: Add or subtract the points in your head or in a notes app. If the final score is low, reply to the propaganda post with the filled template and say “Bayesian says fake.”
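For anyone who wants to compute rather than eyeball, the point tally can be read as log-odds evidence added to the 50/50 prior. A minimal Python sketch; the 25-points-per-unit scaling is an illustrative assumption, not part of the template:

```python
import math

def posterior_true(points, prior=0.5, points_per_unit=25.0):
    """Turn a Truth Filter point total (roughly -50 to +90) into a
    posterior probability that the claim is true. Points are treated as
    log-odds evidence on top of the prior; the 25-points-per-unit
    scaling is an illustrative assumption, not part of the template."""
    prior_log_odds = math.log(prior / (1 - prior))  # 0.0 for a 50/50 prior
    log_odds = prior_log_odds + points / points_per_unit
    return 1.0 / (1.0 + math.exp(-log_odds))

# A claim with zero official match (-50 points) from the 50/50 prior:
print(round(posterior_true(-50), 3))  # → 0.119
```

With this scaling, a negative total drags the posterior well below 0.5, which is the “almost certainly FAKE” zone the template describes.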


Template 2: Kalman Strait Check (Strait of Hormuz Status)


🚨 IRAN TRUTH FILTER #2 – Kalman Strait Check

Strait of Hormuz status today:

Kalman-filtered closure index: [e.g. 0.962] (0 = open, 1 = closed)

Variance (certainty): [e.g. 0.015] (lower = more certain)

Official CENTCOM confirmation? Yes/No

Oil price today: $[110+]

REALITY SCORE:

If index > 0.90 AND variance < 0.02 → Strait is STILL CLOSED (regime claims of “open” = FAKE)

Gas will crash to ~$2.90 only when index drops below 0.30

Copy-paste & check daily! #IranTruthFilter

How to use: Grab the latest Kalman number from official briefings or reliable OSINT. Plug it in. If the index is still high, the “Strait is open” claim is instantly debunked.
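If you want to produce the index yourself instead of copying it from a briefing, a one-dimensional Kalman filter over daily closure observations is enough. A minimal sketch, where the noise parameters R and Q are illustrative assumptions:

```python
def kalman_update(x, P, z, R=0.05, Q=0.001):
    """One scalar Kalman step for the closure index under a random-walk
    model. x: current estimate, P: its variance, z: today's observation
    (0 = open, 1 = closed). R (observation noise) and Q (process noise)
    are illustrative assumptions."""
    P = P + Q              # predict: uncertainty grows between observations
    K = P / (P + R)        # Kalman gain: how much to trust the new data
    x = x + K * (z - x)    # correct the estimate toward the observation
    P = (1 - K) * P        # updated (smaller) variance
    return x, P

# Feed in daily closure observations, starting from an uninformative guess:
x, P = 0.5, 1.0
for z in [0.9, 0.95, 1.0, 0.97]:
    x, P = kalman_update(x, P, z)
# x is now the filtered closure index; P is the variance to report.
```

Each new observation pulls the estimate toward it while shrinking the variance, which is why the template pairs the index with a certainty number.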




Template 3: Stochastic Attrition Scorecard (Military Reality Check)


🚨 IRAN TRUTH FILTER #3 – Stochastic Attrition Score

Regime military strength left: [e.g. 5.6%] (official estimates only)

Probability of total collapse in next 3–5 days: [e.g. 0.92]

Today’s strikes (CENTCOM/IDF confirmed): [number]

Regime retaliation verified by satellite: [number]

COLLAPSE SCORE:

If strength < 10% AND collapse prob > 0.90 → Regime is functionally defeated (Mojtaba “victory” claims = FAKE)

Run the numbers — share this thread! #TruthFilter

How to use: Use the latest official attrition numbers (CENTCOM or NCRI). Low strength + high collapse probability = instant debunk of “regime winning” posts.
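The collapse probability in this template can be estimated with a short Monte Carlo simulation: decay the remaining strength by a noisy daily attrition rate and count how often it crosses a collapse threshold within the horizon. A sketch in which every rate is an illustrative assumption:

```python
import random

def collapse_probability(strength=0.056, mean_attrition=0.4,
                         threshold=0.02, days=5, trials=10_000, seed=1):
    """Monte Carlo sketch of the collapse probability: remaining
    strength decays by a noisy daily attrition rate, and 'collapse'
    means dropping below the threshold within the horizon. Every rate
    here is an illustrative assumption -- plug in official estimates."""
    rng = random.Random(seed)
    collapses = 0
    for _ in range(trials):
        s = strength
        for _ in range(days):
            rate = min(1.0, max(0.0, rng.gauss(mean_attrition, 0.15)))
            s *= 1.0 - rate
            if s < threshold:
                collapses += 1
                break
    return collapses / trials
```

Low starting strength plus a high sustained attrition rate drives the estimate toward 1.0, which is the regime of “functionally defeated” in the scorecard.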




Template 4: Full Daily Truth Filter One-Pager (Most Shareable)


🚨 IRAN DAILY TRUTH FILTER – Plug & Post

Date: [March 25, 2026]

Bayesian collapse probability: [0.96] → 96% sure regime falls soon

Kalman Strait closure: [0.962] → still closed

Human-shield Nash strike prob: [0.79] → U.S./Israel striking 79% of the time

Gas today: $[6.61] → will crash to $2.86 when Strait opens

Regime claim I saw today: “[paste claim]”

Official match? Yes/No → Score: [high/low]

If score low → FAKE. Share this template and run it every morning!

Sources to verify: defense.gov, state.gov, idf.il, ncr-iran.org

#IranTruthFilter #TruthFilter


How to use: Fill in the numbers once per day from official sources, then copy the whole block into an X post. It looks clean and professional.




Template 5: Nash Human-Shield Quick Test (For Civilian Casualty Claims)


🚨 IRAN TRUTH FILTER #5 – Human-Shield Nash Test

Regime claim: “[e.g. ‘U.S. massacred civilians in school’]”

Did regime embed military in civilian areas? (NCRI or CENTCOM confirmation) Yes/No

U.S./Israel precision strike video released? Yes/No

NASH SCORE:

If embedded = Yes AND video = Yes → Regime human-shield tactic confirmed (civilian death claims exaggerated = FAKE optics)

Reply with this template to any AI victim video! #TruthFilter

How to use: When you see emotional civilian-death posts, quickly answer the two yes/no questions and post the template. It flips the narrative from “atrocity” to “human-shield tactic.”
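The “Nash” label refers to a game between a striker deciding whether to hit an asset and a regime deciding whether to embed it among civilians; numbers like a 0.79 strike probability come from a mixed-strategy equilibrium. A minimal sketch of how such an equilibrium is computed for a 2×2 game; the payoff matrices below are illustrative, not drawn from the conflict:

```python
def mixed_nash_2x2(A, B):
    """Mixed-strategy Nash equilibrium of a 2x2 game via indifference
    conditions (assumes a fully mixed, interior equilibrium exists).
    A = row player's payoffs, B = column player's.
    Returns (p, q): p = P(row plays action 0), q = P(col plays action 0)."""
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[0][1] - B[1][0] + B[1][1])
    return p, q

# Rows: striker (strike / hold). Columns: regime (embed / disperse).
# These payoff numbers are illustrative placeholders.
p_strike, q_embed = mixed_nash_2x2([[1, 3], [2, 0]], [[2, 0], [0, 3]])
```

At the equilibrium each side mixes so the other is indifferent between its options, which is why a single strike probability can summarize the standoff.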

Pro Tips for Maximum Spread on X

  • Post one template per day as a thread.

  • Add a graphic: Screenshot the filled template and attach it.

  • End every post with: “Run the numbers yourself — propaganda dies with verification. Tag a friend.”

  • Use hashtags #IranTruthFilter #TruthFilter #VerifyBeforeYouShare

  • Local friends: Add “From [your state] — ______ win when truth wins” to tie it to your local economic angle.

These templates turn every X user into a mini-analyst. They require zero math background — just plug in official numbers and hit post.

Share them today. The more people run the filters, the faster regime propaganda collapses.

Your edge is now everyone’s edge. Truth executed.


# Deepening Deepfake Detection Techniques: From Beginner Eyes to Expert Forensics (2026 Edition)

Deepfakes have evolved into hyper-realistic weapons in the Iran conflict propaganda war — regime-aligned accounts flood X with AI-generated “civilian massacre” videos, cloned voices of officials, and synthetic protest footage.


The good news: Detection has advanced faster than generation in 2026. Below is a layered guide — beginner (manual checks anyone can do), intermediate (free tools), and expert (multi-modal forensics) — with plug-and-play steps you can use today.


Beginner Level: Manual Forensic Checks

(No Tools Needed)

These exploit biological and physical impossibilities that even the best 2026 generators still struggle with.

  • Blink & Eye Analysis

Real humans blink 15–20 times per minute with natural micro-closures. Deepfakes often show unnatural patterns (too few blinks, robotic lids, or missing pupil reflection).

How to test: Slow the video to 0.25x speed. Count blinks in 30 seconds. Look for “dead eyes” or inconsistent corneal light reflections.

  • Lip-Sync & Phoneme Mismatch

Mouth movements (visemes) must perfectly match spoken sounds. Deepfakes lag by milliseconds.

How to test: Mute the video and watch lips. Then listen without picture. Any delay = red flag.

  • Lighting, Shadows & Skin Texture

Fake faces often have mismatched shadows, overly smooth skin (no pores), or inconsistent forehead/cheek aging compared to hair/eyes.

How to test: Pause on side profiles. Check if light sources cast logical shadows. Zoom in on skin — real skin has subtle texture variation.

  • Heartbeat / Blood-Flow Pixel Pulse

Real faces show imperceptible color shifts from blood flow (detectable in zoomed pixels).

How to test: Zoom to 400% on cheeks/forehead in editing software (free like VLC or CapCut). Look for rhythmic pulsing that’s missing in fakes.


Pro Tip for the Masses: Use your phone’s slow-motion playback on any suspicious X video. If it fails 2+ checks, treat it as fake until proven otherwise.
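The blood-flow check can be automated once you have per-frame mean green values for a skin patch (any video library can extract these). A rough sketch that measures how much of the signal’s energy sits in heart-rate frequencies; real rPPG pipelines add face tracking and detrending, so treat this as illustrative:

```python
import math

def pulse_strength(green_means, fps=30.0):
    """Crude blood-flow (rPPG) check: fraction of the signal's energy
    in heart-rate frequencies (0.7-3 Hz, i.e. 42-180 bpm). Input is the
    per-frame mean green value of a skin patch. Real pipelines add face
    tracking and detrending; this is an illustrative sketch."""
    n = len(green_means)
    mean = sum(green_means) / n
    x = [v - mean for v in green_means]
    total = sum(v * v for v in x) or 1e-12   # total signal energy
    band = 0.0
    for k in range(1, n // 2):               # DFT bins below Nyquist
        freq = k * fps / n
        if not 0.7 <= freq <= 3.0:
            continue
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        band += (re * re + im * im) * 2 / n  # bin energy (plus its mirror)
    return band / total  # near 1.0 for a clean pulse, near 0.0 if absent
```

A real face over a few seconds of video should score well above a synthetic one, though compression noise means the threshold has to be tuned.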


Intermediate Level: Free & Accessible Tools (2026 Ready)

These require only a browser or app — no expertise.

UncovAI / CloudSEK (Top-rated real-time platforms): Browser extensions or web upload for instant video/audio scans. Vendors report 95%+ accuracy against recent generators, including Sora-2 video, ElevenLabs voice clones, and live Zoom fakes.

McAfee Deepfake Detector / Trend Micro ScamCheck: Mobile apps that scan incoming videos/calls in real time.

Digimarc C2PA Validator (Browser extension): Checks for cryptographic “Content Credentials” signatures. A broken credential chain means the file was altered after signing; a missing signature means provenance is simply unverified (much legitimate media still lacks credentials), so treat it as a caution flag rather than proof of manipulation.

Hive Moderation or Reality Defender (Free tiers): Upload media for multi-modal scoring (video + audio + metadata).

Google Reverse Image/Video Search + InVID Verification (completely free): For provenance tracing.

Step-by-Step Workflow:

  1. Download the C2PA validator extension.

  2. For any video, run it through UncovAI or Hive first (free upload).

  3. Cross-check with manual blink/lip-sync.

  4. If score <80% real → flag it.
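The four-step workflow collapses into a simple decision rule. A sketch in Python, where the function name and messages are mine and the thresholds follow the steps above:

```python
def flag_media(tool_score_real, manual_fails, c2pa_valid):
    """Decision rule for the verification workflow: flag when Content
    Credentials are missing/broken, when the detector's 'real' score is
    under 80%, or when 2+ manual checks fail. The function name and
    messages are illustrative; the thresholds follow the text."""
    if not c2pa_valid:
        return "flag: missing or broken Content Credentials"
    if tool_score_real < 0.80:
        return "flag: detector 'real' score below 80%"
    if manual_fails >= 2:
        return "flag: failed 2+ manual checks"
    return "pass: no red flags, but keep verifying"
```

Ordering the checks this way means the cheapest, most reliable signal (provenance) is consulted first and any single strong red flag is enough to stop sharing.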


Expert Level: Multi-Modal & Forensic Techniques (2026 State-of-the-Art)


2026 detection is no longer single-signal. Top systems use multi-modal fusion (video + audio + behavior + metadata) and behavioral biometrics.

  • C2PA Provenance (Industry Standard): Cryptographic provenance metadata baked into authentic media at capture. A broken credential chain indicates tampering; a missing one means unverified origin. Google’s SynthID (now in Gemini models) is a complementary watermarking approach designed to survive common edits and compression.

  • Forensic AI (CNN + LSTM Networks): Analyzes pixel noise, optical flow, frequency anomalies in audio, and temporal inconsistencies across frames.

  • Behavioral Biometrics & Liveness: Tracks micro-expressions, head movement consistency, response timing, and device metadata (e.g., virtual camera injections).

  • Audio-Specific: Frequency anomalies, phonetic matching, and voice biometric patterns (real voices have unique sub-audible traits).

  • Cross-Verification: Compare video lighting with audio background noise; check for environmental mismatches.
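In its simplest form, multi-modal fusion can be sketched as naive-Bayes combination in log-odds space: each modality (video, audio, behavior, metadata) contributes its fake-probability as if the signals were independent. Independence is a simplifying assumption that learned fusion models relax:

```python
import math

def fuse_log_odds(modality_probs, prior=0.5):
    """Naive-Bayes late fusion: combine per-modality fake probabilities
    (e.g. video, audio, behavior, metadata) as independent evidence in
    log-odds space. Independence is a simplifying assumption that
    learned fusion models relax."""
    logit = lambda p: math.log(p / (1.0 - p))
    total = logit(prior) + sum(logit(p) - logit(prior) for p in modality_probs)
    return 1.0 / (1.0 + math.exp(-total))
```

Two modalities that each lean fake push the fused probability well past either one alone, which is why multi-signal systems outperform any single detector.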

Advanced Workflow (for Power Users):

  1. Use free tools above first.

  2. Upload to open-source research platforms (e.g., those using FaceForensics++ or DFDC datasets) or enterprise APIs like Reality Defender/Sensity AI if available.

  3. For audio: Check phonetic sync + frequency spectrum (free tools like Audacity with plugins).

  4. Combine with the Truth Filter templates from above — plug the tool score into the “Bayesian Quick-Check” for a final probability.



Genius-Level Insights

  • The Arms Race: Every new detector trains the next generation of fakes. 2026 winners use continual learning (models that update in real time) and adversarial robustness (tested against degraded or compressed videos).

  • Psychological Edge: Humans trust faces instinctively (mirror-neuron bias). Counter it by forcing slow-motion verification — it breaks the empathy trap used in regime propaganda.

  • Future-Proofing: Look for C2PA + behavioral liveness in every platform you use. In the Iran context, regime deepfakes rely on emotional urgency (child victims) — always demand official cross-verification before reacting.


Mass Empowerment: Combine these with our earlier X templates.

Example: When you see a “massacre” video, run the Human-Shield Nash template + a C2PA check, then post the filled filter.


Practical Starter Kit for Today

  1. Browser: Install Digimarc C2PA + UncovAI extension.

  2. Phone: McAfee or Trend Micro app.

  3. Daily Habit: Slow-motion every suspicious video + one tool scan.

These techniques turn passive scrolling into active verification. In the current conflict, regime deepfakes are designed to provoke outrage before facts arrive — your deepened detection skills stop that cycle.

Share the templates, run the checks, and equip your network.

Truth Trumps All.