@candlerb · Last active November 7, 2025 10:34

Hooking up pmacct / nfacctd to prometheus

This is a quick hack to get prometheus to read pmacct flow data. Rather than writing a proper exporter, I used exporter_exporter to spawn a python script for each scrape.

NOTE: each pmacct aggregate will end up as a distinct timeseries, so you need to make sure you don't have a cardinality explosion.
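As a rough sanity check, the series count is the product of local hosts, remote networks, directions, and metric names. A sketch with illustrative numbers (none of these counts come from the config below; they are assumptions for the arithmetic):

```python
# Back-of-envelope estimate of the timeseries this setup creates.
local_hosts = 254   # assumption: a fully populated 192.0.2.0/24
remote_nets = 5     # assumption: entries in local.net
directions = 2      # inbound + outbound
metrics = 2         # bytes + packets
series = local_hosts * remote_nets * directions * metrics
print(series)  # 5080
```

Adding more entries to local.net, or aggregating on both endpoints at host granularity, multiplies this number quickly.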

In the example below, the "local" networks are 192.0.2.0/24 and 2001:db8:abcd::/48. nfacctd is configured so that traffic to and from each local host is aggregated separately, but the other side of each flow is "the rest of the Internet" (i.e. 0.0.0.0/0 or ::/0).

Unfortunately, this doesn't look at the translated source and destination addresses, so isn't much use if your router does NAT.

On pmacct server

apt-get install pmacct

wget https://github.com/QubitProducts/exporter_exporter/releases/download/v0.5.0/exporter_exporter-0.5.0.linux-amd64.tar.gz
tar -C /opt -xvzf exporter_exporter-0.5.0.linux-amd64.tar.gz
chown -R 0:0 /opt/exporter_exporter-0.5.0.linux-amd64
ln -s exporter_exporter-0.5.0.linux-amd64 /opt/exporter_exporter

/etc/pmacct/nfacctd.conf

nfacctd_port: 2055
plugins: memory[inbound], memory[outbound]
networks_file: /etc/pmacct/local.net
#networks_file_filter: true
nfacctd_net: longest
nfacctd_as: longest

imt_path[inbound]: /tmp/inbound.pipe
aggregate_filter[inbound]: dst net 10.0.0.0/8 or dst net 192.0.2.0/24 or dst net 2001:db8:abcd::/48
aggregate[inbound]: dst_host, src_net
#aggregate[inbound]: dst_host, src_as
imt_mem_pools_number[inbound]: 64
imt_mem_pools_size[inbound]: 65536
#nfacctd_net[inbound] = file

imt_path[outbound]: /tmp/outbound.pipe
aggregate_filter[outbound]: src net 10.0.0.0/8 or src net 192.0.2.0/24 or src net 2001:db8:abcd::/48
aggregate[outbound]: src_host, dst_net
#aggregate[outbound]: src_host, dst_as
imt_mem_pools_number[outbound]: 64
imt_mem_pools_size[outbound]: 65536
#nfacctd_net[outbound] = file
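With nfacctd running and flows arriving, the in-memory tables can be inspected directly with the pmacct client; this is the same command the scrape script further down runs:

```shell
# Human-readable dump of the inbound table
pmacct -s -p /tmp/inbound.pipe

# Same data as one JSON object per line, as the scrape script consumes it
pmacct -s -p /tmp/outbound.pipe -O json
```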

/etc/pmacct/local.net

! Inside
10.0.0.0/8
192.0.2.0/24
2001:db8:abcd::/48
! NSRC
128.223.157.0/25
2607:8400:2880:4::/64
! Rest of Internet
0.0.0.0/0
::/0

This allows the remote networks to be more granular than just "the rest of the Internet".

It's also possible to prefix each line with an AS number and a comma; then you can aggregate by src_as / dst_as instead.
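For instance, an AS-prefixed networks_file might look like the following (the AS numbers here are illustrative assumptions, not real assignments for these prefixes):

```
! asn,network
64512,10.0.0.0/8
64512,192.0.2.0/24
64513,128.223.157.0/25
```

This pairs with the commented-out aggregate[inbound]: dst_host, src_as and aggregate[outbound]: src_host, dst_as lines in nfacctd.conf above.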

/etc/systemd/system/nfacctd.service

(Note that /usr/lib/systemd/system/nfacctd.service also exists, but it is broken as it wrongly has "Type=Forking")

[Unit]
Description=nfacctd
Documentation=https://github.com/pmacct/pmacct/wiki
After=network-online.target

[Service]
User=nobody
Group=nogroup
EnvironmentFile=/etc/default/nfacctd
ExecStart=/usr/sbin/nfacctd -f ${NFACCTD_CONF} $DAEMON_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

/etc/default/nfacctd

# Defaults for nfacct initscript and systemd service

# Location of the configuration file
NFACCTD_CONF=/etc/pmacct/nfacctd.conf

# Additional options that are passed to nfacctd
DAEMON_OPTS=""

/etc/prometheus/expexp.yaml

modules:
  pmacct:
    method: exec
    timeout: 5s
    exec:
      command: /usr/local/bin/pmacct.py

/etc/systemd/system/exporter_exporter.service

[Unit]
Description=Prometheus exporter proxy
Documentation=https://github.com/QubitProducts/exporter_exporter
After=network-online.target

[Service]
User=nobody
Group=nogroup
ExecStart=/opt/exporter_exporter/exporter_exporter -config.file /etc/prometheus/expexp.yaml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

/usr/local/bin/pmacct.py

#!/usr/bin/python3

import json
import subprocess

LABELS = {}   # add any static labels here, e.g. hostname

def export(metric, labels, value):
    # Emit one sample in the Prometheus text exposition format
    lstr = ",".join('%s="%s"' % (k, v) for k, v in labels.items())
    print("%s{%s} %d" % (metric, lstr, value))

for aggregate in ["inbound", "outbound"]:
    # Dump the in-memory table for this direction, one JSON object per line
    res = subprocess.run(["pmacct", "-s", "-p", "/tmp/%s.pipe" % aggregate, "-O", "json"],
                         stdout=subprocess.PIPE, text=True)
    if res.returncode:
        print(res.stdout)
        res.check_returncode()
    for line in res.stdout.splitlines():
        data = json.loads(line)
        b = data.pop("bytes")
        p = data.pop("packets")
        data.update(LABELS)                # merge in any static labels
        data["aggregate"] = aggregate      # distinguish inbound from outbound
        export("pmacct_flow_bytes_total", data, b)
        export("pmacct_flow_packets_total", data, p)
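For reference, the export() helper above renders the Prometheus text exposition format; a standalone sketch of the same formatting, with hypothetical label values:

```python
def export_line(metric, labels, value):
    # Render one sample: name{k="v",...} value
    lstr = ",".join('%s="%s"' % (k, v) for k, v in labels.items())
    return "%s{%s} %d" % (metric, lstr, value)

line = export_line("pmacct_flow_bytes_total",
                   {"ip_dst": "192.0.2.10", "net_src": "0.0.0.0/0", "aggregate": "inbound"},
                   12345)
print(line)
# pmacct_flow_bytes_total{ip_dst="192.0.2.10",net_src="0.0.0.0/0",aggregate="inbound"} 12345
```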

Start

systemctl daemon-reload
systemctl enable --now exporter_exporter
systemctl enable --now nfacctd

Test

chmod +x /usr/local/bin/pmacct.py
/usr/local/bin/pmacct.py

curl 'localhost:9999/proxy?module=pmacct'

On prometheus server

  - job_name: pmacct
    scrape_interval: 1m
    metrics_path: /proxy
    params:
      module: [pmacct]
    static_configs:
      - targets:
        - pmacct.example.net:9999
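Once prometheus is scraping, per-host rates can be derived with PromQL; a hedged example against the Prometheus HTTP API (prometheus.example.net is a placeholder for your server):

```shell
# Inbound bits per second per local host, averaged over 5 minutes
curl -sG http://prometheus.example.net:9090/api/v1/query \
  --data-urlencode 'query=rate(pmacct_flow_bytes_total{aggregate="inbound"}[5m]) * 8'
```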