Deepfakes are no longer just a Silicon Valley problem—they’re a global crisis. Just spoke with Brian Kudi, a solo founder in Kenya building @deeptrck, a six-month-old startup fighting deepfake misinformation across finance, media, and politics. His tech is detecting AI-generated videos, fake documents, even synthetic passports—with a 92% accuracy rate. And get this: In Kenya, deepfakes are being used for political propaganda and fake college degrees. One politician even deepfaked his own education. Brian’s team is building plugins, APIs, and tools anyone can use—even if they don’t know AI. He’s trying to protect human trust in a world increasingly built by machines. If you’re investing in the future of AI safety, look beyond SF. There’s global talent solving real-world problems—with far fewer resources. Support founders like Brian. The fight for truth is everywhere. 🔗 https://t.co/fWElGg7ZZx 🧠 @lord_bryane on X
How People Are Using AI For Fraud Detection
In 2025, risk and payments teams are shifting fraud screening to transformer-based sequence models; Stripe reports a foundation model trained on tens of billions of transactions that raised card-testing detection for its largest users from 59% to 97% overnight. Teams are extending these methods to high-volume commerce and to new attack surfaces, including synthetic IDs and deepfake documents (Deeptrck) and agent-initiated payments with Visa tokens, while researchers note that such gains occur where signals and automation are already in place.
🤖 ai summary based on 7 tweets
TL;DR: We built a transformer-based payments foundation model. It works.

For years, Stripe has been using machine learning models trained on discrete features (BIN, zip, payment method, etc.) to improve our products for users. And these feature-by-feature efforts have worked well: +15% conversion, -30% fraud. But these models have limitations. We have to select (and therefore constrain) the features considered by the model. And each model requires task-specific training: for authorization, for fraud, for disputes, and so on.

Given the learning power of generalized transformer architectures, we wondered whether an LLM-style approach could work here. It wasn't obvious that it would: payments is like language in some ways (structural patterns similar to syntax and semantics, temporally sequential) and extremely unlike language in others (fewer distinct "tokens", contextual sparsity, fewer organizing principles akin to grammatical rules).

So we built a payments foundation model: a self-supervised network that learns dense, general-purpose vectors for every transaction, much like a language model embeds words. Trained on tens of billions of transactions, it distills each charge's key signals into a single, versatile embedding.

You can think of the result as a vast distribution of payments in a high-dimensional vector space. The location of each embedding captures rich data, including how different elements relate to each other. Payments that share similarities naturally cluster together: transactions from the same card issuer are positioned closer together, those from the same bank even closer, and those sharing the same email address are nearly identical.

These rich embeddings make it significantly easier to spot nuanced, adversarial patterns of transactions, and to build more accurate classifiers based on both the features of an individual payment and its relationship to other payments in the sequence. Take card-testing.
Over the past couple of years, traditional ML approaches (engineering new features, labeling emerging attack patterns, rapidly retraining our models) have reduced card testing for users on Stripe by 80%. But the most sophisticated card testers hide novel attack patterns in the volumes of the largest companies, so they're hard to spot with these methods.

So we built a classifier that ingests sequences of embeddings from the foundation model and predicts whether a traffic slice is under attack. It leverages the transformer architecture to detect subtle patterns across transaction sequences, and it does all of this in real time, so we can block attacks before they hit businesses. This approach improved our detection rate for card-testing attacks on large users from 59% to 97% overnight.

This has an instant impact for our large users. But the real power of the foundation model is that these same embeddings can be applied across other tasks, like disputes or authorizations. Perhaps even more fundamentally, it suggests that payments have semantic meaning: just like words in a sentence, transactions possess complex sequential dependencies and latent feature interactions that simply can't be captured by manual feature engineering.

Turns out attention was all payments needed!
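The clustering behavior described in the thread (same issuer close, same email nearly identical) can be illustrated with a toy stand-in. Stripe's actual model is a trained self-supervised network; the sketch below instead fakes an embedding as the normalized sum of deterministic pseudo-random vectors, one per (field, value) pair, so transactions sharing more field values land closer together under cosine similarity. Every name here (`feature_vector`, `embed`, the field names, `DIM`) is hypothetical.

```python
import hashlib

import numpy as np

DIM = 256  # toy embedding width; Stripe does not disclose theirs


def feature_vector(field: str, value: str) -> np.ndarray:
    """Deterministic pseudo-random unit vector for one (field, value) pair."""
    digest = hashlib.md5(f"{field}={value}".encode()).digest()
    rng = np.random.default_rng(int.from_bytes(digest[:8], "big"))
    v = rng.standard_normal(DIM)
    return v / np.linalg.norm(v)


def embed(txn: dict) -> np.ndarray:
    """Toy 'embedding': normalized sum of per-feature vectors.

    Transactions that share more field values share more summands,
    so they end up closer together in the vector space."""
    total = sum(feature_vector(field, str(value)) for field, value in txn.items())
    return total / np.linalg.norm(total)


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b)


t1 = {"issuer": "acme_bank", "bin": "424242", "email": "x@example.com", "amount": "1"}
t2 = {"issuer": "acme_bank", "bin": "424242", "email": "x@example.com", "amount": "2"}
t3 = {"issuer": "acme_bank", "bin": "515151", "email": "y@example.com", "amount": "90"}
t4 = {"issuer": "other_bank", "bin": "601100", "email": "z@example.com", "amount": "45"}

e1, e2, e3, e4 = map(embed, (t1, t2, t3, t4))
# t2 shares a card and email with t1, t3 only an issuer, t4 nothing
print(cosine(e1, e2), cosine(e1, e3), cosine(e1, e4))
```

With hashed random features the geometry is crude (similarity is roughly the fraction of shared fields), but it makes the thread's point concrete: proximity in the embedding space encodes relationships between payments without any hand-picked feature list.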
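The slice classifier's data flow, a transformer over sequences of transaction embeddings pooled into one attack score, can be sketched in a few lines of numpy. This is a minimal single-head self-attention layer with random, untrained weights, so the score it prints is meaningless; it only shows the shape of the computation, not Stripe's trained model. `AttackScorer`, `DIM`, and `SEQ` are all assumed names.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM = 64  # embedding width from the (hypothetical) foundation model
SEQ = 32  # number of transactions in one traffic slice


def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)


class AttackScorer:
    """One self-attention layer + mean pool + logistic head (untrained)."""

    def __init__(self, dim: int):
        s = 1.0 / np.sqrt(dim)
        self.wq = rng.normal(0.0, s, (dim, dim))
        self.wk = rng.normal(0.0, s, (dim, dim))
        self.wv = rng.normal(0.0, s, (dim, dim))
        self.w_out = rng.normal(0.0, s, dim)

    def __call__(self, seq: np.ndarray) -> float:
        q, k, v = seq @ self.wq, seq @ self.wk, seq @ self.wv
        # scaled dot-product attention: every transaction attends to the
        # whole slice, which is where cross-transaction patterns surface
        attn = softmax(q @ k.T / np.sqrt(seq.shape[-1]))
        mixed = attn @ v
        pooled = mixed.mean(axis=0)  # summarize the slice as one vector
        logit = pooled @ self.w_out
        return float(1.0 / (1.0 + np.exp(-logit)))  # P(slice under attack)


scorer = AttackScorer(DIM)
slice_embeddings = rng.standard_normal((SEQ, DIM))  # stand-in for real embeddings
score = scorer(slice_embeddings)
print(f"attack probability: {score:.3f}")
```

The design point the thread is making lives in the attention matrix: because each position mixes information from every other transaction in the slice, the classifier can react to sequence-level patterns (bursts of near-identical charges, say) that no per-transaction feature set captures.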