Unpopular opinion: the intersection of AI and fintech isn't where most people think it is.

It's not:
- LLMs doing customer service
- Chatbots handling complaints

It's:
- Real-time fraud detection
- Automated underwriting
- Risk scoring at scale

Follow the money. Banks are spending hundreds of millions in these categories, at companies you haven't been following. The valuable stuff touches the money.
How People Are Using AI For Fraud Detection
In 2025, risk and payments teams are shifting fraud screening to transformer-based sequence models, with Stripe reporting a foundation model trained on tens of billions of transactions that raised card-testing detection for large users from 59% to 97% overnight. Teams are extending these methods across high-volume commerce and new attack surfaces, such as synthetic IDs and deepfake documents (Deeptrck) and agent-initiated payments with Visa tokens, while researchers note that such gains occur where signals and automation are already in place.
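Stripe's actual model is a transformer trained on transaction sequences; as a much simpler illustration of the pattern it targets, card testing typically shows up as a burst of tiny authorizations and declines on a single card. The sketch below is a toy velocity heuristic, not Stripe's method; the window size and thresholds are invented for illustration.

```python
from collections import deque
from datetime import datetime, timedelta

def card_testing_score(events, window=timedelta(minutes=5),
                       small_amount=2.00):
    """Toy heuristic: flag bursts of small authorizations on one card.

    `events` is a list of (timestamp, amount, approved) tuples for a
    single card, oldest first. Returns a score in [0, 1]. A trained
    sequence model learns this pattern (and far subtler ones) from
    data instead of hand-set thresholds.
    """
    recent = deque()
    best = 0.0
    for ts, amount, approved in events:
        recent.append((ts, amount, approved))
        # Drop events that fell out of the sliding time window.
        while recent and ts - recent[0][0] > window:
            recent.popleft()
        small = sum(1 for _, a, _ in recent if a <= small_amount)
        declines = sum(1 for _, _, ok in recent if not ok)
        # Many tiny charges plus declines inside one window is the
        # classic card-testing signature.
        best = max(best, min(1.0, (small + declines) / 10))
    return best
```

A burst of eight $1.00 charges in 80 seconds, half of them declined, scores at the maximum, while a single normal purchase scores zero.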
🤖 ai summary based on 7 tweets
Insights from builders, researchers, investors, and domain experts. Opinions are the authors' own.
Even my AI agent thinks it's overqualified for half the jobs on Wall Street. AI agents in finance are now executing incredibly complex workflows, and 57% of firms are already using them. Here are the facts:

🧠 GPT-4, Claude, LLaMA, and others power agents that forecast budgets, reconcile accounts, and conduct KYC
💸 Global FinTech AI usage hit $10–15B in 2024 revenue and is projected to reach $300B by 2033
🏗 Frameworks like LangChain, AutoGen, and DSPy enable integration with APIs and financial software for autonomous action

AI agent deployment is shifting from experimentation to infrastructure. 60% of finance firms subscribe to AI services, with Anthropic's ARR jumping from $1B to $4B in a year. Agents handle tasks like cash-flow forecasting, fraud detection, and compliance, work typically performed by analysts and back-office teams. With AI outperforming human-managed portfolios in 93% of test cases, the cost of manual execution is becoming harder to justify.

The trend connects directly to AI-driven automation in financial operations, where AI virtual assistants and compliance bots are replacing legacy functions. FinTech's AI market is already scaling from $10B in 2024 toward a $300B projection by 2033. The biggest risk lies in model fragility at scale: unintended outcomes in autonomous workflows can lead to operational or compliance failures, especially without proper guardrails.

We dive deeper into the latest data in this week's Fintech Blueprint; check it out for more!
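The guardrail point can be made concrete. Nothing below is a real framework API (LangChain, AutoGen, and DSPy each have their own mechanisms); it is a minimal sketch, assuming a callable agent action and an invented spend limit, of gating an autonomous step so that large or risky actions escalate to a human instead of executing.

```python
def guarded_execute(action, amount, limit=10_000.00, dry_run=True,
                    audit_log=None):
    """Hypothetical guardrail around an autonomous agent action.

    `action` is any callable the agent wants to run (e.g. issue a
    payment). Actions over `limit` are routed to a human; everything
    is recorded in `audit_log` so the workflow stays auditable.
    """
    entry = {"action": getattr(action, "__name__", "unknown"),
             "amount": amount}
    if audit_log is not None:
        audit_log.append(entry)
    if amount > limit:
        entry["status"] = "escalated_to_human"
        return entry
    if dry_run:
        # Default to a no-op so new agents prove themselves safely.
        entry["status"] = "dry_run"
        return entry
    entry["status"] = "executed"
    entry["result"] = action()
    return entry
```

Defaulting to `dry_run=True` is deliberate: an agent graduates to live execution only after its dry-run trail has been reviewed.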
A post by Stripe engineer @thegautam on building a successful payments foundation model for fraud detection recently went viral. I want to talk about how unusual this particular use case is, which helps explain why such "instant wins" from deploying advanced AI are so rare. As background, Stripe reported that the model cut the amount of missed credit card fraud by up to 5x in some cases.

1. Fraud detection is not a prediction problem. We use the term "prediction" loosely in machine learning to include things like "predicting" fraud. But actually predicting the future (such as loan repayment) is totally different. There are intrinsic limits to predictability because the future is simply not known yet (as @sayashk and I discuss at length in our book AI Snake Oil). But in fraud detection, as long as you have the right signals, in theory it should be possible to achieve very high accuracy.

2. Stripe is already operating in a signal-rich environment. In contrast, when you identify a new ML use case you often have to start by doing the data work, which might take years (not just technical work: convincing partners to share data, complying with privacy regulations, etc.).

3. Most importantly, fraud detection is already automated, and the work involved merely upgrading from traditional ML to a foundation model, making it more or less a drop-in replacement. Imagine, instead, a company trying to introduce fraud detection to a world that simply hasn't heard of the concept. In that case, diffusion of the technology would be bottlenecked by extremely slow-moving processes, such as customer acceptance of the inconvenience of occasional credit card freezes due to the inevitable false positives. (The prevalence of such bottlenecks is one of the main points of the essay AI as Normal Technology.)

All of this helps explain why there is a long list of barriers that must be overcome to actually capture business value from AI capability improvements.
Overnight transformations are extremely rare.
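The "drop-in replacement" point is largely an interface property: if the old and new models expose the same scoring call, the downstream blocking logic never changes. A minimal sketch, with invented stand-in scorers (a real deployment would call a trained gradient-boosted model and a trained transformer, not constants):

```python
from typing import Protocol

class FraudScorer(Protocol):
    def score(self, transaction: dict) -> float: ...

class RuleBasedScorer:
    """Stand-in for a traditional feature-engineered model."""
    def score(self, transaction: dict) -> float:
        return 0.9 if transaction["amount"] > 5_000 else 0.1

class FoundationModelScorer:
    """Stand-in for an embedding-based sequence model; in reality
    this would run inference on a trained transformer."""
    def score(self, transaction: dict) -> float:
        return 0.5  # placeholder score

def should_block(scorer: FraudScorer, transaction: dict,
                 threshold: float = 0.8) -> bool:
    # Downstream logic is unchanged when the scorer is swapped:
    # upgrading the model is a one-line change at the call site.
    return scorer.score(transaction) >= threshold
```

Contrast this with a greenfield ML use case, where this interface, the data behind it, and the organizational plumbing around it all have to be built first.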
Visa wants to give AI Agents "tokens" so they can pay without you ever seeing a checkout page. Visa's CEO told investors this is their #1 priority. Here's how it will work 👇 https://t.co/AnZw3uGwJV
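Visa has not published the token format, so the following is purely hypothetical: one plausible shape is a scoped credential carrying spend constraints that the network checks at authorization time, so an agent can pay without exposing the underlying card. All field names and limits below are invented for illustration.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AgentPaymentToken:
    """Hypothetical scoped credential for an AI agent. Field names
    are illustrative only, not Visa's actual token format."""
    token_id: str
    max_amount: float       # per-transaction ceiling
    merchant_category: str  # MCC the token is restricted to
    expires: datetime

def authorize(token, amount, category, now):
    """Network-side checks a constrained token could enable."""
    if now >= token.expires:
        return False
    if amount > token.max_amount:
        return False
    if category != token.merchant_category:
        return False
    return True
```

The appeal of constraints like these is that the blast radius of a misbehaving agent is bounded by the token, not by the cardholder's full credit line.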
[#MachineLearning System Design Tech Case Study] Handling Billions of Transactions Daily — How Amazon Efficiently Prevents Fraudulent Transactions and How it Actually Works: https://t.co/9uKSJZ7j00 by @NainaChaturved8 https://t.co/QeeY6wXSU9