It will inevitably happen at the worst possible time. Before a big release or an important client deployment, when your head of engineering has the flu, or when you’re on vacation. Ours did. It turns out that’s no accident. On top of Murphy’s law, DDoS attackers deliberately time their attacks for when you’re most likely to be unprepared, such as on holidays.
For those who don’t know, a DDoS (distributed denial of service) attack is an attempt to make an online service unavailable by overwhelming it with traffic from multiple sources. When successful, it slows the service down and often takes it offline entirely.
On New Year’s Day I returned home from an emergency trip to the doctor with my infant son. That same day we were expecting family in the afternoon and friends in the evening. I was looking forward to a quick nap. Instead, I found a weird contact request in my inbox:
It had already been two hours since that first message. I frantically checked our websites (we have several domains). They were all down. Then I noticed another email.
That’s how our nightmare began. We emerged victorious a day later and several hundred dollars poorer. In short, since our IP address had been exposed, we moved everything to a new address and put Cloudflare in front of all our domains. Without getting too deep into the technical details, what services like Cloudflare do, among other things, is hide your origin IP and use their vast network to absorb attack traffic. Adding a service like this to your infrastructure is pretty simple and, depending on the complexity of your system, should take anywhere from 30 minutes to a day or two.
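As a rough sketch of what “hiding your IP” means in practice: once a domain is proxied, public DNS should no longer return your origin server’s address. The hostname and IP below are hypothetical placeholders, not our real infrastructure.

```python
# Sketch: verify that a domain no longer resolves to your old origin IP
# after moving behind a proxy such as Cloudflare. The hostname and the
# old IP here are hypothetical placeholders.
import socket

OLD_ORIGIN_IP = "203.0.113.10"  # placeholder from the documentation IP range

def resolved_ips(hostname):
    """Return the set of IPv4 addresses the hostname currently resolves to."""
    return {info[4][0] for info in socket.getaddrinfo(hostname, 80, socket.AF_INET)}

def origin_exposed(hostname, origin_ip=OLD_ORIGIN_IP):
    """True if public DNS still points the hostname at the old origin address."""
    return origin_ip in resolved_ips(hostname)
```

Running `origin_exposed("yourdomain.com")` against your own domain after the switch is a quick sanity check that attackers querying DNS can no longer find the server they were flooding.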
Here’s what we learned from our DDoS attack. My hope is that these lessons can help you prevent yours (or at least mitigate the damage).
Have an emergency protocol. It was 2am where my engineers are when I noticed the attack. Luckily, one of them was up, but we also have a “website down” emergency protocol, and you should too. You should know what happens when critical infrastructure goes down, especially during off hours. When a startup is small, it’s easy: wake up the CTO or the VP of Engineering. As your company grows, your plan might look different.
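The first half of any such protocol is knowing you’re down at all. A minimal sketch of an automated “website down” check might look like this; the domains are placeholders, and in production the alert would page the on-call engineer rather than print.

```python
# Minimal "website down" check: poll each domain and report any that
# fail to answer. The domains below are hypothetical placeholders.
from urllib.request import urlopen
from urllib.error import URLError

DOMAINS = ["https://example.com", "https://example.org"]  # your real domains

def is_up(url, timeout=10):
    """Return True if the URL answers with a 2xx/3xx status within the timeout."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (URLError, OSError):
        return False

def downed(domains):
    """Return the subset of domains that appear to be down."""
    return [d for d in domains if not is_up(d)]
```

Run on a schedule (cron, a monitoring service, etc.), anything returned by `downed(DOMAINS)` triggers the wake-someone-up step of the protocol.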
Have a backup emergency protocol. Make sure more than one person understands your critical infrastructure. When the attack hit, our CTO was supposed to have the next day off. He was planning a 24-hour, zero-connectivity hiking trip in the desert. He never made the trip, but had the attack hit a few hours later, we would have had no way to reach him at a crucial moment.
Redundancy is key. The first order of business when you’re attacked is to ensure crucial services stay up for your customers. We had a big customer holding a big event at the time of the attack (an NFL team with a nationally televised game). While customers can be understanding about unexpected downtime, you’d rather not be in the position of explaining it to them. If you have redundancy built in, your customers don’t even have to notice something is wrong. How much to invest in this depends on how mission critical your service is to your customers. As the texting conduit between our customers and their customers, our service is definitely critical, but your situation might be different.
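One simple form redundancy can take is client-side failover: try the primary endpoint first and quietly fall back to a standby, so a single host being down is invisible to customers. This is only a sketch, and the endpoint URLs are hypothetical placeholders.

```python
# Sketch of client-side failover across redundant endpoints.
# The endpoint URLs are hypothetical placeholders.
from urllib.request import urlopen
from urllib.error import URLError

ENDPOINTS = [
    "https://api.example.com",         # primary
    "https://api-backup.example.com",  # hot standby on a separate provider
]

def fetch_with_failover(path, endpoints=ENDPOINTS, timeout=5):
    """Return the response body from the first endpoint that answers."""
    last_error = None
    for base in endpoints:
        try:
            with urlopen(base + path, timeout=timeout) as resp:
                return resp.read()
        except (URLError, OSError) as exc:
            last_error = exc  # this endpoint failed; try the next one
    raise RuntimeError(f"all endpoints failed: {last_error}")
```

The key design choice is that the standby lives on a different provider and IP range, so an attack (or a provider taking you offline) doesn’t hit both at once.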
Ask for help. As a non-technical CEO, there wasn’t much I could help my team with on the engineering side. However, I was able to find others who could. I posted a request for help on Facebook and on the StartX founders Q&A board (OwnerListens is a StartX company). Within minutes I had over ten offers of help, including intros to the co-founders of Cloudflare, to the person in charge of preventing DDoS attacks at Google, and to someone at the FBI cyber crime unit. I’m blessed to live in the heart of Silicon Valley and enjoy the access it provides to amazing people like that. You may not, but you’d be surprised at how quickly you can reach people who can help with a simple Facebook or Twitter post.
Know the policy of your server provider. One of the first steps we took to address the attack was to reach out to our server provider. They confirmed that we were under attack and that they had taken us offline. Since we use shared servers, the attack on us was affecting others in their system. Their default reaction is to take the targeted website offline to protect everyone else (also known as killing the hostage). That’s how the initial one-hour attack ended up taking us down for almost a day. One of the things we wasted time on was convincing our server provider that we were now protected so they would lift the ban. Not all providers have this policy. Know your provider’s policy, and spring for dedicated servers if needed.
Don’t pay the ransom. No good will come of paying the attackers. There is nothing stopping them from attacking again and asking for more money. And definitely don’t believe their threats that there’s nothing you can do or that protective services like Cloudflare don’t work.
Prevention is better than treatment, even if it isn’t cheap. Use protection services like AWS Shield, Cloudflare, CloudFront, etc. Many of these services have a lower tier or even a free plan that should fit any budget. It’s like paying for insurance, but worse: with insurance, if something bad happens, you file a claim and get some money back. Here, if your protection is working properly, you won’t even know an attack is happening. As a founder with a frugal mentality, it’s a hard check to sign, but it’s one worth signing.
Even protection services have weaknesses. Your IP can be compromised in other ways. For example, a rogue or disgruntled employee could leak it.
Report it. There’s an FBI cyber crimes unit in the US and an equivalent in most countries. These crimes are being taken more and more seriously. Without your cooperation, the criminals can’t be stopped.
Notify those who need to know. Do you have customers who might be affected? Is the downtime severe enough that your board needs to be informed? Consider who needs to know and what channel to use to notify them, but there is no need to broadcast it and perhaps invite other attackers or competitors to take advantage.
Don’t think that because you’re small, it won’t happen to you. We’re a small team that has stayed relatively under the radar, and still a hacker found us. There is no reason it couldn’t happen to you.