A few months back now, I had the misfortune of overshooting my Internet quota, and winding up with an AU$380 bill for the month (and that was capped… in truth it was more like AU$3000). In fact, it happened a couple of times until I finally nailed down the cause.
Part of it was NTP traffic (it seems lots of cowboys now write SNTP clients and point them at pool.ntp.org), and some was the #Hackaday.io Spambot Hunter Project and related activity. In short, I invested some money into upping the quota, and some time into better monitoring.
I wanted to do the monitoring anyway to keep an eye on operations, as well as things like the solar panel voltages, etc. Since getting it in place, I've been alerted to problems much sooner than the 120% quota-usage alarm that Internode sends would allow.
I’m glad I did, too: last night I left a few tabs open on the Hackaday.io site. This evening I noticed they were still trying to load something and got suspicious… then I saw this:
Double-checking, sure enough, something on one of those pages had made Chromium get its knickers in a twist and chew through all that data.
It took me a bit of tinkering to get the right query to extract the above chart. Essentially there was a sustained 1.5MB/sec download for over 21 hours, which would account for the 113.1GB that Internode recorded.
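As a quick sanity check, the sustained rate does roughly match the recorded usage (a back-of-envelope sketch, using decimal gigabytes as billing systems typically do):

```python
# Rough check: does 1.5 MB/sec over ~21 hours account for ~113 GB?
rate_mb_per_sec = 1.5
hours = 21

total_gb = rate_mb_per_sec * 3600 * hours / 1000  # decimal GB
print(f"{total_gb:.1f} GB")  # 113.4 GB, close to the 113.1 GB recorded
```

The small gap is easily explained by the "over 21 hours" being approximate and the rate not being perfectly constant.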
It’s a little too coincidental that the usage dropped the moment I restarted Chromium. I’m not sure why it was continually re-loading pages, but never mind.
The above data is collected using a combination of collectd and InfluxDB, with Grafana doing the dashboards and alerting, and a small Perl script pulling the usage data off Internode’s API.
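For the curious, the write side of that pipeline is simple: InfluxDB 1.x accepts metrics over HTTP in its line protocol. My actual script is in Perl and talks to Internode's API, but a minimal Python sketch of pushing one usage sample might look like this (the measurement name, field name, and database name here are made up for illustration, not my real schema):

```python
import time
import urllib.request

def usage_line(megabytes, when=None):
    """Build an InfluxDB line-protocol record for one usage sample.
    Measurement and field names are illustrative only."""
    # InfluxDB timestamps default to nanosecond precision
    ts = int((when if when is not None else time.time()) * 1e9)
    return f"internode_usage used_mb={megabytes} {ts}"

def push(line, host="localhost", db="metrics"):
    """POST a line-protocol record to an InfluxDB 1.x /write endpoint."""
    req = urllib.request.Request(
        f"http://{host}:8086/write?db={db}",
        data=line.encode(),
        method="POST",
    )
    urllib.request.urlopen(req)
```

collectd feeds the interface counters in separately; the Perl script only has to poll the quota figure periodically and hand it over in this form, and Grafana takes care of the rest.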