Since 2023, every fintech pitch deck has been stuffed with the word "AI". Some of that reflects real technology; most of it is marketing. Here's how to tell the difference when you evaluate a product.
Real AI use cases
Fraud detection is the oldest and most legitimate. Card networks and banks have used neural networks since the 1990s to flag unusual transaction patterns in real time. Credit underwriting is another: lenders now use gradient-boosted models on hundreds of alternative data points to score thin-file borrowers (applicants with little formal credit history). Insurance claims triage, KYC document verification, and transaction categorization are all genuine machine-learning applications.
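The core idea behind real-time transaction flagging can be shown in a few lines. This is a toy sketch, not what banks actually run: production systems are neural networks over hundreds of features, while this stand-in scores only how far one amount sits from a cardholder's spending history.

```python
from statistics import mean, stdev

def flag_unusual(history, amount, threshold=3.0):
    """Flag a transaction whose amount sits far outside the
    cardholder's history. Toy z-score stand-in for the
    multi-feature neural scorers banks actually deploy."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold

history = [42.0, 18.5, 60.0, 35.0, 27.5, 49.0]
print(flag_unusual(history, 41.0))    # typical spend -> False
print(flag_unusual(history, 950.0))   # far outside pattern -> True
```

The real systems differ in scale, not in kind: they still reduce to "score this event against a learned model of normal behavior, and flag outliers".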
Fake AI use cases
Any product that offers "AI-powered investment recommendations" based on five multiple-choice questions isn't using AI in any meaningful sense — it's a decision tree. Most "AI chatbots" in banking are rules-based NLP systems that fail on anything off-script. The word is used to signal innovation, not to describe actual technology.
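What a questionnaire-driven "AI robo-advisor" often is underneath can be written out in full. The questions, buckets, and portfolio names below are hypothetical, for illustration only; the point is that the entire "model" is hand-written branching.

```python
def recommend_portfolio(age_band, horizon_years, loss_tolerance):
    """A five-question 'AI-powered' recommender, unmasked:
    a hand-written decision tree. All inputs and buckets are
    hypothetical illustrations, not any real product's logic."""
    if horizon_years < 3 or loss_tolerance == "low":
        return "conservative"   # mostly bonds and cash
    if age_band == "under_35" and loss_tolerance == "high":
        return "aggressive"     # equity-heavy
    return "balanced"

print(recommend_portfolio("under_35", 20, "high"))   # aggressive
print(recommend_portfolio("over_55", 2, "medium"))   # conservative
```

Nothing here is learned from data, which is the test that matters: if every path through the product was typed in by a human, calling it AI is branding.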
Where GenAI fits
Large language models are genuinely useful for three things in personal finance: summarizing long documents (policy terms, fund factsheets), answering complex support questions in natural language, and parsing unstructured data like bank statement PDFs. All three are largely back-office wins that don't need to be user-facing to create value.
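To make the parsing use case concrete, here is the structured output such a pipeline aims for, sketched with a fixed regex over already-extracted statement text. The line format is hypothetical; the reason to reach for an LLM is precisely the long tail of statement layouts a fixed pattern like this can't handle.

```python
import re

# Hypothetical statement line format: "DD/MM/YYYY DESCRIPTION AMOUNT".
# Real statements vary wildly, which is why a model beats a regex here.
LINE = re.compile(r"(\d{2}/\d{2}/\d{4})\s+(.+?)\s+(-?\d+\.\d{2})$")

def parse_statement(text):
    """Pull (date, description, amount) rows out of raw statement text,
    skipping lines that don't look like transactions."""
    rows = []
    for line in text.splitlines():
        m = LINE.match(line.strip())
        if m:
            date, desc, amount = m.groups()
            rows.append({"date": date,
                         "description": desc,
                         "amount": float(amount)})
    return rows

raw = """03/01/2025 COFFEE SHOP 4.50
03/02/2025 PAYROLL ACME CORP 2100.00
page footer, not a transaction"""
print(parse_statement(raw))
```

Whether the extractor is a regex or a model, the deliverable is the same: clean rows a ledger or categorizer can consume downstream.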
The risks
AI systems can hallucinate financial advice, and regulatory frameworks for AI-driven wealth advisory are still evolving. Always verify AI-generated claims against primary sources before acting on them. If an app tells you "this fund will return 18% next year", that's not AI: that's just wrong.