stoQ was originally designed to operate at the enterprise level. When used in conjunction with a tool that provides file extraction capabilities, such as Suricata or Bro, the agility and power of stoQ quickly become apparent.
For example, assume you would like to use Suricata to extract every exe, swf, and jar file downloaded from the internet into your environment, and then use stoQ to analyze each of them. In this example, you are interested in analyzing each file with Exiftool, yara, and ClamAV, then storing the results from each of these tools in your preferred datastore. In this case, ElasticSearch will be leveraged; however, stoQ is very flexible, so you can just as easily switch out Suricata for Bro, or ElasticSearch for MongoDB.
For the sake of simplicity, let's outline some assumptions:
- You've installed stoQ using the installation script
- Suricata is installed, running, and configured for file extraction
- All files are extracted into /var/log/suricata/files
- An ElasticSearch server is operational
- Everything is running on the same server (localhost)
Depending on the size of your network, Suricata may extract files faster than you can scan them. To keep up with the throughput, let's leverage the queuing platform RabbitMQ. If you installed stoQ using the installation script, both the RabbitMQ plugin and the server are already installed and configured.
Make sure you are in the stoQ virtualenv before starting any of your processes:
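The path below is an assumption; the installation script may place the virtualenv elsewhere, so adjust it to match your install:

```bash
# Activate the stoQ virtualenv (path assumed; use the one your install created)
source /usr/local/stoq/.stoq-pyenv/bin/activate
```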
The first thing that needs to happen to get stoQ processing these files is setting up a directory monitor, so stoQ knows when Suricata has extracted a file off the network. All you have to do is run:
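Here is a sketch of the command, assuming stoQ v1's `stoq-cli.py` interface; the exact plugin names and flags may differ by version, so check the publisher's help output first:

```bash
# Illustrative invocation: start the publisher worker, load the dirmon
# source plugin against Suricata's extraction directory, and publish a
# message for each new file to the exif, yara, and clamav queues.
# The -I/-F/-w flags are assumptions; verify with: ./stoq-cli.py publisher -h
./stoq-cli.py publisher -I dirmon -F /var/log/suricata/files -w exif -w yara -w clamav
```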
This starts the publisher worker plugin, which is used to send messages to our queuing platform of choice, RabbitMQ. It then loads the Suricata directory monitor plugin to watch /var/log/suricata/files and, finally, publishes a message to the RabbitMQ queues for exif, yara, and clamav.
In short, once Suricata extracts a file off the network, it is saved to /var/log/suricata/files, at which point stoQ detects the new file and publishes a message to each worker queue so the workers can process it.
Now that files are being extracted and sent to stoQ, you will need to start the workers (exif, yara, and clamav) that process those payloads:
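Again, a sketch assuming the `stoq-cli.py` interface, with flags that may vary by version:

```bash
# Illustrative: run the exif worker, consuming payloads from its RabbitMQ
# queue and sending results to ElasticSearch.
# The -I/-C flags are assumptions; verify with: ./stoq-cli.py exif -h
./stoq-cli.py exif -I rabbitmq -C elasticsearch
```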
This tells stoQ to load the exif worker plugin, then load the RabbitMQ plugin and monitor its queue; when a message is found, the worker processes the referenced payload and sends any results to ElasticSearch. Now, open another terminal and do the same for yara:
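Using the same assumed invocation pattern as the exif worker above:

```bash
# Illustrative: the yara worker, consuming from RabbitMQ, writing to ElasticSearch
./stoq-cli.py yara -I rabbitmq -C elasticsearch
```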
And now, start the ClamAV plugin in yet another terminal:
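Same assumed pattern once more:

```bash
# Illustrative: the clamav worker, consuming from RabbitMQ, writing to ElasticSearch
./stoq-cli.py clamav -I rabbitmq -C elasticsearch
```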
Normally these processes would be running in the background, but for this example, you will want to see the output from the plugins should there be any.
At this point, all results from stoQ should be inserted into your ElasticSearch instance. By default, each worker plugin inserts into an index named after itself. For example, yara inserts into the yara index, exif into the exif index, and so on. You can retrieve the results from each of the worker indexes from ElasticSearch via curl:
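For example, assuming ElasticSearch is listening on its default port (9200) on localhost:

```bash
# Query each worker's index; the _search endpoint and ?pretty flag are
# standard ElasticSearch API. Index names match the worker plugin names.
curl -XGET 'http://localhost:9200/yara/_search?pretty'
curl -XGET 'http://localhost:9200/exif/_search?pretty'
curl -XGET 'http://localhost:9200/clamav/_search?pretty'
```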
Another, more user-friendly option is to simply use Kibana for searching and interacting with the results. As an example, we created a very simple dashboard using output from stoQ's yara, TRiD, and peinfo worker plugins.
You now have the ability to scan every file extracted off your network with yara, exif, and clamav. The results from each have also been inserted into ElasticSearch for easy searching. This can easily be expanded to scan each file with OPSWAT, VirusTotal, or anything else for which a plugin exists. If one doesn't exist yet, a little Python experience is all you need to write it yourself.