When intelligence does its job, uncertainty gives way to well-defined risk and people make better decisions. When intelligence fails, the results can be catastrophic. My next few blogs will focus on specific actions that can make the difference between the dizzying highs and the terrifying lows we all experience in security…so let’s start with a low:
In a former role I was responsible for a threat intelligence team focused on detecting web-based precursors to money laundering and fraud. That's a fancy way of saying we analyzed the traffic hitting a customer's site for indicators of bad things happening right then, or of someone building toward doing something bad in the future. One day I got an agitated call from a client: more than 5,000 of his customers' accounts had been liquidated. Fraudsters had taken over the accounts and used them to steal a needle-moving amount of money. Since I was responsible for getting in front of stuff like this, he had some tough questions for me.
I called him back an hour later with a painful answer: we'd detected all of the takeovers and alerted them 30+ days earlier. What's more, we'd clustered the takeovers together because they all showed signs of account grooming (the fraudsters were changing shipping addresses and contact information on the hijacked accounts, getting them ready before actually stealing anything). It was there in black and white: we'd told them about this a month earlier, and it happened anyway. A subsequent post-mortem showed where things went bad:
Their Security Operations Center (SOC) received our alerts, and because they were related to suspicious activity on customer accounts, forwarded them on to the fraud team. At the time the fraud team was purely transaction-centric, meaning their investigations were triggered by shady-looking purchases or money movements. Recall that our alerts were for grooming activity that took place long before any money or goods changed hands, and you can see where this is going: the fraud team closed every one of the alerts as "no trouble found," and 30 days later the money walked out the door.
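The grooming pattern above (sensitive profile changes followed by a quiet period with no transactions) is exactly the kind of precursor a transaction-triggered workflow will miss. As a rough illustration of the idea only, here's a minimal sketch in Python; the event schema, field names, and thresholds are my own assumptions for the example, not the system we actually ran:

```python
from collections import defaultdict
from datetime import timedelta

# Profile fields whose modification is a common grooming signal
# (illustrative set, not exhaustive)
GROOMING_FIELDS = {"shipping_address", "email", "phone"}

def flag_grooming(events, min_changes=2, quiet_days=7):
    """Flag accounts that changed sensitive profile fields but made no
    purchase in the following quiet_days: a possible sign the account is
    being prepared for later fraud rather than being actively used.

    events: list of dicts with keys
      account_id, type ('profile_change' | 'purchase'),
      field (for profile changes), ts (a datetime)
    """
    changes = defaultdict(list)    # account -> grooming-field edit times
    purchases = defaultdict(list)  # account -> purchase times
    for e in events:
        if e["type"] == "profile_change" and e.get("field") in GROOMING_FIELDS:
            changes[e["account_id"]].append(e["ts"])
        elif e["type"] == "purchase":
            purchases[e["account_id"]].append(e["ts"])

    flagged = set()
    window = timedelta(days=quiet_days)
    for acct, ts_list in changes.items():
        if len(ts_list) < min_changes:
            continue
        last_change = max(ts_list)
        # The tell is the silence: changes were made, then nothing was bought
        if not any(last_change <= p <= last_change + window
                   for p in purchases[acct]):
            flagged.add(acct)
    return flagged
```

Notice that a transaction-centric team would only ever look at the purchase events; the whole signal here lives in the profile changes and the silence that follows them.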
I’d seen process breakdowns before, but this was my first full-blown intelligence failure. The gut reaction was to blame the fraud team, but it was more systemic than that. Intelligence exists to inform decisions, and the fraud team’s transaction-based decision process wasn’t able to contextualize and act upon what we gave them. Blaming them for mismatched intel would be like giving them a mug of gasoline, then saying it was their fault they couldn’t consume it.
The experience was a painful lesson, but it taught me the importance of understanding the full intelligence cycle before collection ever begins. For those who haven't used intelligence cycles formally: the FBI has a good one. The US Navy's is nearly identical, but if you're looking for straightforward, plain language it's tough to beat the CIA's Kids' Zone. Whatever framework you use, they all begin with a planning phase. In my cycles, the planning phase is built around answering five questions:
- What goals are our decision makers trying to achieve?
- What decisions are being made in pursuit of those goals, and by whom?
- What intelligence do they need to make those decisions effectively?
- How will we measure the impact of our intelligence on those decisions and their outcomes?
- How will feedback be used for continuous improvement?
They seem basic, but the exercise of discussing and answering these questions early on pays off: flawed assumptions (and the awkward conversations that follow) are uncovered every time. It's not the sexiest part of intelligence work, but I'll take whiteboard frustrations and disagreements over live-fire failures any day of the week.
I'm out of time for this post. In the next one I'll look at how intelligence can turn into a game of telephone, and what we can do to keep it on the rails.