Unknowingly Joining the UIA
Everyone can picture the iconic image of a village mob grabbing their pitchforks and torches to fight off some threat to their community. This is their fight, a good cause in their minds, and they are willing to do whatever it takes to keep themselves and their village safe. But what if pitchforks were forced into their hands? What if a member of the mob fought under the name of someone who decided not to fight? What if the mob loses, retreats, and the threat retaliates? That retaliation could target villagers who didn’t want to fight or, worse, never fought at all. This is the situation many companies are currently facing, whether they know it or not.
On February 26, 2022, Mykhailo Fedorov, the Vice Prime Minister of Ukraine, tweeted:
“We are creating an IT army. We need digital talents.…There will be tasks for everyone. We continue to fight on the cyber front. The first task is on the channel for cyber specialists.”
The tweet links to a Telegram channel through which the Ukrainian IT Army (UIA) was born. The UIA has slowly grown since then and, as a group of vigilantes, has been very effective in the cyberwar against Russia. They have taken down several websites related to Russia’s state and private sectors, most commonly through distributed denial-of-service (DDoS) attacks.
The resources used in these DDoS attacks are typically owned and provided by the cyber professionals (or amateurs) willingly participating in the UIA’s efforts. Now, whether you believe this is right or wrong, the morality of people joining a fight they believe in is not in question. Forcing people to join the fight without their knowledge, however, is undoubtedly wrong.
On May 4, 2022, CrowdStrike published a blog post explaining how it had found two Docker images, hosted on Docker Hub, running on compromised Docker Engines. These Docker images were created with the sole purpose of launching DoS attacks against Russian websites on the UIA’s list of targets. The Docker Engines were running as legitimate honeypots but were misconfigured; after being compromised, they began running one of the two containers targeting Russian websites.
This is where an ethical boundary is clearly crossed: the operators of these honeypots were not intending to participate in a cyberwar against a nation-state.
This isn’t to say misconfigured systems aren’t hijacked all the time for a myriad of nefarious reasons ― they are. In this specific situation, though, the dynamics change quite a lot because it’s not just local or federal law that the owners of the hardware need to worry about. There is a direct risk of nation-state retaliation, which could take the form of lost revenue, a disruption of service, or a negative propaganda attack. And if there was one misconfiguration, there are most likely more throughout the environment. If such an environment were targeted by a power such as the GRU (Russia’s military intelligence agency), the odds are that those misconfigurations would be exploited, which could lead to an even more disastrous outcome.
Have I terrified you yet?
Are you thinking about all the potential places things might be misconfigured?
Have you updated everything?
Have you thought through all the various parts of your infrastructure?
How about your software?
It’s practically impossible to be absolutely sure your entire infrastructure and/or software stack is configured correctly to stop these forms of attack. To CrowdStrike’s credit, the blog post explains how its endpoint detection and response (EDR) solution, Falcon, would prevent this: the requests from the Docker containers attacking Russian sites would be detected and blocked.
All of this can be prevented with simple, “low-hanging fruit” solutions that are easy and inexpensive (even free) to implement.
If you can see it, you can do something about it.
That’s a core mantra of any security program I implement. Looking at the infrastructure from the outside in, I begin to examine every layer in which I can gather data that will tell me what is happening.
Those versed in security tooling would immediately start thinking about security information and event management (SIEM). While, yes, a SIEM solution is the fully mature answer to the problem, there is plenty that can be done until that point.
Cloud-based solutions offer many forms of data and monitoring right out of the box, and these capabilities commonly go underutilized. Let’s look at AWS’s EC2 monitoring capabilities as an example.
When a user instantiates a new server, a monitoring tab in the AWS console presents great data about network activity in and out of the system.
These metrics are provided by AWS as part of the instance, at no additional cost. This information is vital yet constantly overlooked. In fact, most people don’t realize their expensive SIEM is probably pulling this very data as part of its aggregation.
AWS even allows you to create alerts based on this data. If an EC2 instance running containers gets compromised and begins sending a slew of outbound requests, AWS will alert you. Now you see it, and you can do something about it.
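As an illustration, such an alert can be wired up with a CloudWatch alarm on the instance’s NetworkOut metric. This is only a sketch: the instance ID, the 1 GB/5 min threshold, and the SNS topic ARN below are placeholder values, and a sensible threshold should come from your own baseline, not from this example.

```shell
# Sketch: alarm when a (hypothetical) EC2 instance sends more than
# ~1 GB of outbound traffic in a 5-minute window. The instance ID,
# threshold, and SNS topic ARN are placeholders for your own values.
aws cloudwatch put-metric-alarm \
  --alarm-name ec2-networkout-spike \
  --namespace AWS/EC2 \
  --metric-name NetworkOut \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Sum \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 1000000000 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts
```

The alarm action here is an SNS topic, which can fan out to email or a chat channel ― the point is simply that the data AWS already collects can page you without any extra tooling.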
This is the importance of keeping monitoring top of mind at every layer of the stack. AWS is purely an example. As an engineer, it’s your responsibility to fully understand the products you’re using, and most of them have similar monitoring and alerting capabilities.
It can be a daunting task to sit down and look at your entire stack ― what’s valuable, what’s not, what to look for, and so on.
I’ll close out with a strategy that I have always used in environments I’ve worked in:
Start simply and slowly. Begin with the outer layers of your infrastructure and look at what data is available to you. Dig into the user interfaces (UIs) of the products and services to see what you have enabled, what you can enable, and what your options are. The more you understand about the products you’re using, the more options you’ll have for understanding your entire stack.
You can’t turn on alerts without knowing what to alert on. Establish a baseline. Keep an eye on the data you have for a while; I tend to aim for at least a couple of weeks. If I have the option, I’ll email myself a daily report of the data for those two weeks so I can dig in and understand the trends. This step is the real meat of any machine learning (ML) tech within a SIEM ― that’s the convenience they’re offering and what you’re really paying for. But if you’re not at that place yet, you can still do it manually; you just need to understand your baseline numbers.
Based on your baseline, you can now configure alerts at thresholds that make sense for you. If there is enough of a departure from the trends you’ve seen, you should know about it. This gives you the opportunity to investigate why that departure occurred and discover any potential issues ― for instance, a rogue container attacking a foreign nation-state.
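The baseline-and-threshold idea can be sketched in a few lines of code. Everything here is an assumption for illustration ― the sample numbers are invented daily outbound-request counts, and the three-standard-deviations rule is one common choice, not a prescription:

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Summarize a couple of weeks of daily readings as (mean, std dev)."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, sigmas=3.0):
    """Flag a reading more than `sigmas` standard deviations above the mean."""
    avg, sd = baseline
    return value > avg + sigmas * sd

# Two weeks of hypothetical daily outbound-request counts.
history = [10_400, 9_800, 11_200, 10_050, 10_900, 9_600, 10_700,
           10_300, 11_000, 9_900, 10_600, 10_150, 10_800, 10_500]

baseline = build_baseline(history)

print(is_anomalous(10_700, baseline))  # False -- an ordinary day
print(is_anomalous(55_000, baseline))  # True -- a rogue container blasting requests
```

A SIEM’s ML features are doing a fancier version of exactly this; until you have one, a daily report and a simple threshold like the one above will still tell you when something departs sharply from your normal.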
Monitoring can be huge and expensive, but it can absolutely be small and bootstrapped. Don’t wait until you have the budget for the SIEM. Get it in place from the start.