12 July 2017

Fintech Hacks

Synack

“You get what you inspect,
not what you expect.”

By Synack Community Outreach Squad:
Ellie McCardwell and Jenn Yonemitsu

Sometimes at Synack, we ponder how we can hack things. After attending this year's FS-ISAC conference with Patrick Wardle and Christopher Hudel, we wanted a chance to home in on fintech security… As an experienced Synack Red Team member and financial sector professional, Christopher Hudel was the person we went back to with our most burning questions. Here's our conversation with Christopher as we got to look at the problem through his eyes:

“…what sets the financial industry apart from others is their immediate liquidity — security risks are most directly and irrevocably connected to the loss of real, untraceable … monies”

– Christopher Hudel, Synack Red Team

Q1: Is there anything that sets the financial services industry apart in terms of the threat landscape and a financial company’s approach to security? Do you think financial institutions should take a unique stance in their approach to security practices?

In my mind, what sets the financial industry apart from others is their immediate liquidity — security risks are most directly and irrevocably connected to the loss of real, untraceable, fungible monies. Whereas many organizations struggle to properly identify their most important information assets or "crown jewels" and the proper methods to adequately protect them, fintech employees and their adversaries alike enjoy a clarity of purpose here.

Every company takes a unique approach to their implementation of security controls owing to their personal understanding of the threat landscape, an evaluation of their specific security posture, and prioritization of efforts that they feel make the most sense for both customers and shareholders. In this, financial institutions are no different.

Q2: How does the trend toward banking done on mobile devices impact security and risk? Do you think mobile banking can be done safely?

Banking on mobile devices doesn't impact security and risk more than any other technology change does — which could be a lot! The idea of mobile banking reminds me of changes in the ATM space. Each of the transitions in the technology stack (OS/2 to Windows XP, IPX/SPX to TCP/IP, thick-client backend communication protocols to web services) has brought about both operational gains and new security challenges.

Where issues have occurred, the cause seems to have been an echo of Churchill's frequently quoted line: "Those who fail to learn from history are doomed to repeat it." Most security vulnerabilities are a common refrain (it's why there is an OWASP Top 10 list, after all), sung anew by new developers, systems integrators, and those embracing new technologies.

Fortunately, the security ideals that are intended to protect new technologies (such as embedding security into the SDLC, network segmentation, defense-in-depth technologies, attack and penetration testing) remain well understood; they only need proper implementation to more specific use-cases.

Most financial institutions "get this" — mobile banking can be, and is, done safely in millions of transactions each day.


Q3: Do you think we should enforce more rigorous security requirements on financial institutions or should the government incentivize companies who effectively mitigate their security risk beyond compliance? Is there a better way to make organizations more secure?

I am not in favor of enforcing more rigorous security requirements. Increased regulation often does not account for the threat landscape unique to each company and industry — for example, mandating a universal adherence to certain anti-malware technologies and implementations (which, I’m sure many security vendors would want) might actually stifle creativity in approaches that mitigate those threats.

Similarly, it is hard for the government to incentivize companies that effectively mitigate their security risk, because how do you actually measure that they have mitigated it? Over what time frame? It's possible that they were just lucky, or simply not an attractive target to attackers during the evaluation period.

What I do favor is increased transparency and information sharing. Providing shareholders and customers current and accurate information regarding cyber incidents and sharing this information within an industry network (such as FS-ISAC) is what will maximize the availability of informed folks that can best protect themselves.

Q4: Does a compliance-based approach to security help or hurt?

Compliance is largely an inspection exercise – are you doing what you said you would do? You will get a result from an inspection; it will be the result you inspected, but it might not be what you expected. Inspections represent a point in time; there is no continuous compliance endeavor, and I think that this is a drawback of a compliance-only focus.

If you look at PCI, that is a compliance exercise. When you look at major organizations that were PCI compliant and have also had data breaches, they (or their QSAs) will generally fall back on an answer that sounds something like this: "Well, it was a point-in-time evaluation, and times change."

Obviously, in highly regulated environments like financial services, you don't get the choice: you have to be compliant AND secure. If you have a hundred dollars to spend, how much do you spend on compliance, and how much do you spend on security? Neither answer is going to be satisfactory to the board. If there is a significant deficiency in compliance, that's not going to be good for the C-suite, which is not going to be good for the team. And if there is a significant deficiency in security that leads to a breach, even while compliant, that is also not a good day, right? So, how do you balance the two?

“You get what you inspect, not what you expect.”

It’s interesting that compliance is generally run by audit, while information security is run unto itself, or run under risk or IT. These organizations approach the problem from different perspectives.


Q5: What are the factors that make a target attractive to SRT members? Is it their specialty, skill, a combination of factors?

What’s great about an invite-only, crowdsourced SRT community is the diverse skillsets and interests that motivate them in target selection. For example, some SRT members are absolute geniuses at a particular exploit method (such as XSS, or XXE attacks) and I imagine they enjoy exploring new and creative ways to refine their exploits and toolsets against increasingly strong defenses.

Other researchers may be motivated by "longer tail" exploits — those corner-case but significant weaknesses that an open-ended target scope affords researchers beyond a traditional "two-week pentest." Researchers can choose to maximize both their earned income and their skill set development with every target.

Q6: Why is human creativity important for offensive cybersecurity?

“Semantic vulnerabilities go to the core of a creative hacker, and this is my personal passion.”

Principally, because our adversaries are human. Bad people can be ingenious and creative in their approach to steal secrets, steal money, or do harm. It takes a similarly minded (but oppositely intentioned, i.e., ethical) individual to think creatively and offensively in an effort to help protect good people and organizations from the ill-intended folk.

I generally place exploitable weaknesses into two camps: the syntactic and the semantic. Syntactic vulnerabilities are those that can generally be tested for (XSS, SQLi, CSRF, etc.) by automated toolsets. That's not to say their exploitation won't be creative, but it is already well understood and accounted for.
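As a rough illustration of what "testable by automated toolsets" can mean, here is a minimal sketch of a reflected-XSS probe. The target URL and parameter name are hypothetical placeholders, and a real scanner does far more (encoding variations, injection contexts, DOM analysis); this only flags verbatim reflection for manual follow-up.

```python
# Minimal sketch: an automated probe for a "syntactic" class issue (reflected XSS).
# The target URL and parameter name below are hypothetical placeholders.
import requests

PROBE = "<synack-xss-probe-7f3a>"  # unique, inert marker string

def reflects_unescaped(base_url: str, param: str) -> bool:
    """Return True if the probe string comes back unescaped in the response body."""
    resp = requests.get(base_url, params={param: PROBE}, timeout=10)
    return PROBE in resp.text  # verbatim reflection suggests missing output encoding

if __name__ == "__main__":
    # Only ever run against targets you are authorized to test.
    if reflects_unescaped("https://example.test/search", "q"):
        print("Parameter reflects input unescaped -- candidate for manual XSS review")
```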

Semantic vulnerabilities (those involving logic flaws and unanticipated application flow) go to the core of a creative hacker, and this is my personal passion. It's harder to programmatically discern privilege escalation and unauthorized access; I believe this is where creativity is best served.
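For contrast, a semantic flaw such as broken access control has to be expressed as a question about the application's logic rather than a string pattern. The sketch below, using hypothetical endpoints and tokens, asks the simplest version of that question: can user B read a resource that belongs to user A?

```python
# Minimal sketch of a semantic (logic-flaw) check: does user B get access to a
# resource that belongs to user A? The endpoint path and tokens are hypothetical.
import requests

def can_access(resource_url: str, token: str) -> bool:
    """True if the API serves the resource to the holder of this token."""
    resp = requests.get(
        resource_url,
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    return resp.status_code == 200

if __name__ == "__main__":
    resource = "https://example.test/api/accounts/1001/statements"  # owned by user A
    token_a = "TOKEN_FOR_USER_A"  # placeholders; obtained via normal login flows
    token_b = "TOKEN_FOR_USER_B"
    assert can_access(resource, token_a), "Sanity check: the owner should have access"
    if can_access(resource, token_b):
        print("Possible broken access control: user B can read user A's statements")
```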

Q7: In what situations do you think “teaming up” with other hackers/researchers is most effective?

Teaming is really effective when the problem set is challenging, or the scope is large enough that one skillset isn’t likely to cover it. (This is an advantage of the Synack platform for clients, although researchers don’t use it to collaborate directly with one another).

I most enjoy "teaming" during code reviews. First, I end up learning a lot in the exercise. Second, understanding code is complex enough that two people and a whiteboard can usually suss out a security deficiency more quickly than one person, or than an automated code scanner (for semantic defects, anyway).

Of course, teaming is also an excellent mentoring style and can quickly bring a team up to a common, higher caliber.

Q8: How do we realistically defend against cyber attacks?

Opportunity, reward, and effort are three variables that enter into the calculus of how successful and likely an adversary's efforts will be. When the calculus tips in the adversary's favor (low effort required to reap maximum reward amid lots of opportunity), a breach [attempt] is almost guaranteed.

“Good hacking is… going after targets that require the least amount of effort and results in the greatest reward and opportunity.”

Most crimes are ones of opportunity; more people are robbed in an alley at night than in bright daylight outside a police station. And no burglar enters through a second-story window when the front door is wide open! Penetration testers, playing the role of an attacker, will be no different in the information security realm.
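To make that calculus concrete, here is a toy scoring of the trade-off the answer describes; the one-to-ten scales and the formula are assumptions for illustration only, not a real risk model.

```python
# Illustrative only: a toy score for the opportunity/reward/effort trade-off.
def attacker_appeal(opportunity: float, reward: float, effort: float) -> float:
    """Higher score = more attractive target (1-10 scales; effort must be >= 1)."""
    return (opportunity * reward) / effort

targets = {
    "open front door": attacker_appeal(opportunity=9, reward=6, effort=1),
    "second-story window": attacker_appeal(opportunity=3, reward=6, effort=7),
}
# Rank targets from most to least appealing to an opportunistic attacker.
print(sorted(targets.items(), key=lambda kv: kv[1], reverse=True))
```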

This can and should be used to inform an organization's good defense. For example, I would encourage organizations to first get a handle on restricting local administrative rights and enforcing unique per-workstation admin passwords before implementing a robust "next gen" endpoint detection and response (EDR) platform. The former, while less expensive, requires robust process and support — it can be tempting to try to solve all problems with technology, but that only ever works to a point.
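A minimal sketch of the "unique per-workstation admin password" idea follows; conceptually this is what tools such as Microsoft LAPS automate. The hostnames and the escrow file are hypothetical, and in practice each password would be set on its host and escrowed in a protected vault rather than a local JSON file.

```python
# Minimal sketch: generate a distinct local-admin password for every workstation.
import json
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def new_password(length: int = 24) -> str:
    """Generate a random local-admin password unique to one workstation."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def rotate(hostnames: list[str]) -> dict[str, str]:
    """Map each workstation to its own freshly generated password."""
    return {host: new_password() for host in hostnames}

if __name__ == "__main__":
    vault = rotate(["WKS-0001", "WKS-0002", "WKS-0003"])  # hypothetical hostnames
    with open("admin-password-escrow.json", "w") as fh:
        json.dump(vault, fh, indent=2)  # stand-in for a real secured escrow store
```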


As a researcher with experience on multiple platforms, Christopher said that Synack affords a unique combination of high-profile clients (i.e., new startups, mature legacies, government agencies), emerging technologies, and a low-risk environment. He said, "There isn't any question that the research is legitimate — the tester originates from a known IP space and adheres to scope constraints, and the recipient never has to feel like they are being 'shaken down' for a bug bounty."

In the wrap-up of our interview, he added that he likes working on the Synack platform "because it's easy to engage on my schedule; there is rigor to assessing the quality of submissions, and the payouts are both fair and make sense."


Synack provides initiatives to help foster the researcher community and engage top talent; technology to optimize researcher efficiency and accelerate vulnerability discovery; opportunities to work on unique targets; personalized support; and skills development. We do this through fun competitions, interactive gamification elements and levels, mentorship, and specialized projects.

Apply to join the Synack Red Team. Become one of the few and fully experience our platform – it’s designed by hackers for hackers. If you’re up for the challenge, apply today, and use code “SRTBLOGS” in your application.