In September last year, we promised a client we’d send 110 reminder messages.
We sent 296.
Not because of a rogue decision. Because of a throttling bug — a system that was supposed to limit volume didn’t stop when it should have. And one of our SIM cards got temporarily blocked as a result.
The client didn’t find out from us.
What I should have done immediately is obvious in retrospect. What I actually did was deal with the technical problem first and explain second. Wrong order.
The better lesson isn’t about the bug. It’s about what it revealed. We had built a system that could run without adequate human review. We had an automated reminder process where nobody was checking whether the output matched the agreement. We were moving fast enough that the checks weren’t keeping up.
This is a common failure mode in early-stage AI deployments. You automate the action. You forget to automate the verification. Or worse — you assume that because it worked yesterday, it’ll work today.
After that incident, we added a peer-review step before the daily send. Not because we didn’t trust the engineers. Because trust without verification is just hope.
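The post doesn't show how that gate works, but the core idea of "verify the output matches the agreement before sending" can be sketched in a few lines. This is a minimal illustration under assumed names (`verify_batch`, `agreed_limit` are hypothetical, not the author's actual system): the daily batch is checked against the client-agreed volume, and the send is refused outright if it exceeds that cap.

```python
# Hypothetical sketch of a pre-send verification gate.
# Idea from the post: check the output against the agreement
# *before* the action runs, instead of trusting yesterday's behavior.

def verify_batch(queued_messages, agreed_limit):
    """Return the batch only if it fits the agreed volume; otherwise refuse."""
    if len(queued_messages) > agreed_limit:
        raise ValueError(
            f"Refusing to send: {len(queued_messages)} messages queued, "
            f"but only {agreed_limit} agreed with the client."
        )
    return queued_messages

# The incident in the post: 296 queued against 110 agreed.
try:
    verify_batch(["reminder"] * 296, agreed_limit=110)
except ValueError as err:
    print(err)  # the gate blocks the batch instead of sending it
```

The point isn't the five lines of code; it's that the check runs as part of the pipeline, so a human reviewer sees a refusal instead of discovering an overage after the fact.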
I’m not sure “move fast and break things” was ever the right advice. I know it’s definitely not the right advice when you’re moving other people’s money.
What’s the production incident that taught you the most about verification culture?
#AI #DebtCollection #AccountsReceivable #Fintech