For reasons I no longer remember, I was checking out pg_netstat, a Postgres extension for monitoring network traffic. Reading about it, I couldn't help but think this was almost exactly the use case for IP accounting in systemd. Admittedly, it is still a cool project! I thought I might try out IP accounting while I tried a few other things, documented here.
Rather than trying to recreate pg_netstat I thought it would be more instructive to emulate the behavior with tools I use more frequently. I have been using NATS (a messaging server) lately and thought it could be interesting to measure network traffic while configuring it in a way that I was happy with (security™). This isn't too hard because the way to deploy NATS is delightfully easy: it is a single executable that is runnable out of the box!
I won't bother writing much about how to download and unzip a file to set up NATS. Here instead is the service file I wrote; worth noting are the service properties:
[Unit]
Description=demo NATS and IP accounting
Requires=network.target
After=network.target

[Service]
ExecStart=/usr/local/bin/nats-server -a 127.0.0.1
DynamicUser=yes
PrivateNetwork=yes
IPAccounting=yes

[Install]
WantedBy=multi-user.target
DynamicUser is nearly my default for new services: a permissionless user plus so many safe defaults that it is easy to do the right thing by using it.
PrivateNetwork is perhaps overkill here but I wanted to see how it did or did not affect IP accounting. With it set, the service has no networking capability outside of localhost (which is why the server is launched with 127.0.0.1; by default nats-server listens on 0.0.0.0).
IPAccounting enables tracking network traffic. It turns out IP accounting just works on the private network, which shouldn't be too surprising, but I like the consistency.
With that done it is possible to start the server:
# systemctl start nats.service
Generating traffic is an interesting case: because I have given the server a private network it is not immediately reachable, and instead it is necessary to launch a shell in the same network namespace:
# systemd-run -p PrivateNetwork=yes -p JoinsNamespaceOf=nats.service -S
From there I used the nats CLI tool to run a few benchmark tests in order to generate traffic:
# nats bench benchsubject --pub 1 --sub 10
23:22:03 Starting Core NATS pub/sub benchmark [subject=benchsubject, multisubject=false, multisubjectmax=0, msgs=100,000, msgsize=128 B, pubs=1, subs=10, pubsleep=0s, subsleep=0s]

NATS Pub/Sub stats: 728,714 msgs/sec ~ 88.95 MB/sec
 Pub stats: 72,428 msgs/sec ~ 8.84 MB/sec
 Sub stats: 665,969 msgs/sec ~ 81.30 MB/sec
  72,425 msgs/sec ~ 8.84 MB/sec (100000 msgs)
  72,351 msgs/sec ~ 8.83 MB/sec (100000 msgs)
  71,554 msgs/sec ~ 8.73 MB/sec (100000 msgs)
  71,755 msgs/sec ~ 8.76 MB/sec (100000 msgs)
  69,488 msgs/sec ~ 8.48 MB/sec (100000 msgs)
  69,140 msgs/sec ~ 8.44 MB/sec (100000 msgs)
  68,505 msgs/sec ~ 8.36 MB/sec (100000 msgs)
  67,393 msgs/sec ~ 8.23 MB/sec (100000 msgs)
  67,227 msgs/sec ~ 8.21 MB/sec (100000 msgs)
  66,623 msgs/sec ~ 8.13 MB/sec (100000 msgs)
  min 66,623 | avg 69,646 | max 72,425 | stddev 2,116 msgs
With the service having experienced some traffic, it is time to try plucking my data from systemd. The first case is nice and easy:
$ systemctl show nats.service -p IPIngressBytes -p IPEgressBytes
IPIngressBytes=47192772
IPEgressBytes=471106099
Of course, the systemd developers say the above isn't exactly intended for machine consumption; programs should probably use dbus rather than parse the text (which, admittedly, is itself sourced via dbus). I don't have a lot of experience with dbus, so here are two different attempts:
import dbus

NATS = 'nats.service'

sb = dbus.SystemBus()
systemd1 = sb.get_object('org.freedesktop.systemd1', '/org/freedesktop/systemd1')
manager = dbus.Interface(systemd1, 'org.freedesktop.systemd1.Manager')
service = sb.get_object('org.freedesktop.systemd1', object_path=manager.GetUnit(NATS))
interface = dbus.Interface(service, dbus_interface=dbus.PROPERTIES_IFACE)
print(interface.Get('org.freedesktop.systemd1.Service', 'IPEgressBytes'))
I have to admit, I did not love writing this. I don't yet feel confident enough in the design of dbus to explain why I need a manager to get the service to get an interface to get the property I care about, but at least it works.
While I am not a huge fan of pulling in more dependencies, I think it is worth mentioning how drastic an improvement the pystemd library is for interfacing with systemd from Python. Compare the following equivalent example to the last:
from pystemd.systemd1 import Unit

with Unit(b'nats.service') as u:
    print(u.Service.IPIngressBytes)
While I mentioned I'm not really interested in recreating pg_netstat, I did notice how similar the results end up being, more as a consequence of the foundational pieces than anything. pg_netstat exposes counts of packets and bytes, in and out, along with "speed" versions of each. The first four map to the properties available via IP accounting: IPIngressPackets, IPEgressPackets, IPIngressBytes, and IPEgressBytes. The "speed" metrics are derived from the above and the interval over which they were collected; pg_netstat polls at a given interval before writing to a database table.
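That derivation is simple enough to sketch. Given two samples of a monotonic byte counter and the times they were taken, the speed is just the delta over the interval (the function name and sample numbers here are my own invention):

```python
def speed(prev_bytes: int, prev_t: float, curr_bytes: int, curr_t: float) -> float:
    """Bytes per second between two samples of a monotonic counter."""
    return (curr_bytes - prev_bytes) / (curr_t - prev_t)

# Two hypothetical samples of IPIngressBytes taken 60 seconds apart:
print(speed(47_192_772, 0.0, 47_792_772, 60.0))  # 10000.0 bytes/sec
```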
Of course, having realized that, I can't help but think of the sorts of easy hacks you could do to replicate such a setup. Maybe a systemd timer triggering a "scrape" into an on-disk buffer? Doing it right might be tough, but I am imagining something like:
[Unit]
Description=record IP accounting data every minute

[Timer]
OnActiveSec=1m
OnUnitActiveSec=1m
Unit=record-ip-accounting.service

[Install]
WantedBy=timers.target
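The record-ip-accounting.service that the timer triggers is left as an exercise; a minimal sketch might be a oneshot unit running whatever script does the scraping (the script path here is my own invention):

```
[Unit]
Description=scrape IP accounting counters for nats.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/record-ip-accounting nats.service
```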
There's a minor caveat in how timers use AccuracySec (it defaults to a minute) that could jitter the time the unit is executed, but it would be best to capture the time the data is pulled anyhow (that way you could do smarter queries for windows and aggregates). In terms of writing it someplace, I might continue my horrible fascination with SQLite and try emulating a kind of bounded on-disk buffer like:
create table buffer(
  id integer primary key autoincrement,
  IPIngressBytes,
  IPEgressBytes,
  IPIngressPackets,
  IPEgressPackets
);

create trigger delete_tail after insert on buffer
begin
  delete from buffer where id < new.id - 30240;
end;
Where (obviously) 30240 is three weeks of IP metrics sampled each minute. Now, I know better than to actually do this, so this is all hypothetical.
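Hypothetical as it is, the buffer is easy to try out with simulated counters. This sketch (all names and numbers mine) builds the table and trigger in an in-memory database and shows that the trigger keeps the row count bounded:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
create table buffer(
  id integer primary key autoincrement,
  IPIngressBytes, IPEgressBytes, IPIngressPackets, IPEgressPackets
);
create trigger delete_tail after insert on buffer
begin
  delete from buffer where id < new.id - 30240;
end;
""")

# Simulate a scrape every "minute" for a bit more than three weeks of samples.
for minute in range(30250):
    con.execute(
        "insert into buffer(IPIngressBytes, IPEgressBytes,"
        " IPIngressPackets, IPEgressPackets) values (?, ?, ?, ?)",
        (minute * 1000, minute * 2000, minute * 10, minute * 20),
    )

# The trigger trims ids below new.id - 30240, so at most 30241 rows survive.
count, = con.execute("select count(*) from buffer").fetchone()
print(count)  # 30241
```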
Almost without meaning to, I started recreating my old system monitoring setup, which was itself a kind of Munin replacement. As I get more experience I find some things are easier but others never seem to change; ah well. I was pleasantly surprised once again by how simple this was to accomplish. The real benefit, I think, is how general the approach is: any service can be monitored without a custom solution per database or message queue. The level of detail is pretty rough, but for the sorts of problems I have and the kind of debugging I perform, I think it would work just fine.