Aloha it’s Patrick, Director of R&D at Synack. In my free time, I also run a small OS X security website objective-see.com, where I share my personal OS X security tools and blog about OS X security and coding topics. Below is one such post originally published on my site, which discusses the anti-analysis techniques utilized by a recently published fake Adobe Flash update. Read & enjoy!
Recently, SANS posted a short blog post titled “Fake Adobe Flash Update OS X Malware”. While the blog covers the initial infection vector, and subsequent articles provide a decent overview of the attack, here, let’s briefly discuss some of the anti-analysis techniques utilized by the malware installer.
Note: if you want to follow along at home, download the malware installer (password: infect3d).
In the SANS post, the author (Johannes Ullrich) described a Facebook click-bait scheme that led naive users to a “fake Flash update”:
If an unsuspecting user is tricked into installing the ‘update’ they will unfortunately become infected with OS X adware.
While this social-engineering approach is the most common OS X infection strategy, the fact that the binary installer is signed is somewhat interesting. SANS noted this, specifically stating, “The installer is signed with a valid Apple developer certificate issued to a Maksim Noskov.”
Malware Installer: InstallCore
As previously mentioned, the malware installer is signed by Maksim Noskov. Though signing malware affords a simple Gatekeeper ‘bypass’, it may provide attribution and also presents a simple way to stop the malware in its tracks. How? Apple can (and did) easily revoke the signing certificate, thus preventing the malware from ever executing. This is illustrated in the following image where, as of February 7th, OS X blocks the malware with CSSMERR_TP_CERT_REVOKED (-2147409652):
Ok, back to Maksim Noskov. Hopping on VirusTotal, we find about 1000 files signed with this Apple Developer ID:
InstallCore is a known component of OS X adware that’s been discussed before. Here however, let’s briefly discuss some of its anti-analysis features.
One of the main questions an analyst often asks when analyzing a malicious installer or downloader is what the installer persistently installs, and from where. Capturing some network traffic shows the malware installer connecting to various HTTP servers (presumably to download other malicious components), as well as downloading a legitimate Adobe Flash installer:
However, running ‘strings’ on the installer binary does not reveal these URLs (e.g. appsstatic2fd4se5em.s3.amazonaws.com):
Clearly some string obfuscation is being performed. In reality, this isn’t too surprising; most malware (especially on Windows platforms) will obfuscate sensitive strings.
Loading the malware installer’s binary (tirocinium) into a disassembler such as IDA makes it easy to spot the string obfuscation (a simple add loop):
Though one could write an IDAPython script to decode such strings, I’m lazy and would rather let the malware simply deobfuscate the strings itself, under the watchful eye of a debugger. However, before jumping into a debugger, let’s note another ‘anti-analysis’ feature: junk code.
As the following images illustrate, there are many code blocks and calls in the disassembly to functions such as lround, NSUserName, rint, sysconf, and more. All these calls are ‘useless’ – they’re simply present to change the signature (hash) of identical samples and/or perhaps thwart simple AV emulators and/or hinder analysis. However, in terms of manual analysis, while somewhat annoying, in reality they don’t stop us.
As previously mentioned, many of the malware’s more interesting strings are obfuscated. It was surmised that the easiest way to uncover such strings was to simply let the malware run under a debugger, allowing it to un-obfuscate the strings for us. Unfortunately, attempting to debug the malware initially failed:
I suspected some anti-debugging logic in the malware. While such anti-debugging is very common in Windows malware, it is somewhat rarer in OS X malware.
Skimming the disassembly, nothing appears immediately obvious that would indicate debugger detection. However, as large portions of the malware (such as strings) were obfuscated, this isn’t too surprising.
Starting a debug session and setting a breakpoint on the malware installer’s initial code logic (-[ICAppController applicationDidFinishLaunching:]) did work. This indicated some code was (later) manually detecting the debugger and then terminating itself. Stepping over chunks of code quickly identified such a function (sub_100035C5B):
sub_100035C5B starts by de-obfuscating some string (see the loop at 0000000100035C88). Stepping over the deobfuscation logic in a debugger, reveals the string value: “ptrace”
Once the “ptrace” string is deobfuscated, the function then calls dlsym to resolve the address of ptrace.
Finally, at address 0x0000000100035CBE the function invokes ptrace with 0x1F for the request parameter. Seasoned OS X reversers should recognize 0x1F as PT_DENY_ATTACH, a common OS X-specific anti-debugging trick. Specifically, on OS X, if a process calls ptrace(PT_DENY_ATTACH, 0, 0, 0) while being actively debugged (traced), Apple notes it “will exit with the exit status of ENOTSUP”. Also, if a process has called ptrace with PT_DENY_ATTACH, attempting to attach to it with a debugger will likewise fail.
Of course this anti-debugging mechanism is trivial to bypass. For example, one can skip over the call by modifying the program counter register (RIP):
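Besides skipping the call in the debugger, one could also neutralize it programmatically, for instance by inserting a library that overrides ptrace before launching the installer. The sketch below assumes the malware resolves ptrace via dlsym (as shown above); whether the inserted copy is actually found first depends on dyld’s namespace behavior, and the library/file names here are hypothetical.

```c
// noptrace.c -- hypothetical build/run steps:
//   clang -dynamiclib -o libnoptrace.dylib noptrace.c
//   DYLD_INSERT_LIBRARIES=./libnoptrace.dylib ./tirocinium
#include <stdio.h>

#define PT_DENY_ATTACH 0x1F

// Override ptrace so the malware's PT_DENY_ATTACH call becomes a no-op.
int ptrace(int request, int pid, void *addr, int data) {
    if (request == PT_DENY_ATTACH) {
        fprintf(stderr, "[bypass] ignoring ptrace(PT_DENY_ATTACH)\n");
        return 0;  // pretend the call succeeded
    }
    return 0;      // sketch only: other requests are not forwarded
}
```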
The malware discussed in this blog post depends on an amateur infection mechanism, and is not particularly novel or intriguing. However, it makes use of some obfuscation and anti-debugging techniques. While such anti-analysis logic is quite common in Windows malware, it remains somewhat rare on OS X…for now!