It's been ages since I monkeyed with something like this. I'm 95% there but I'm hung up on the math at the end. Here are the basics.
I'm starting here:
grep eth0 /proc/net/dev | awk '{print $2, $10}'
That gives me this for rx and tx on the eth0 device:
10636188093 7027677683
What I'm after is an aggregated number in total Mbps for both sides of the interface, as a per-second meter for an experiment: "How much total traffic is pushing in/out of this host?"
I'm basically doing this:
- Take the rx and tx byte value for the interface
- Add them together in a bash script (for Reasons I need to do this in bash)
- Log the values every second during the test (later this will be changed to a database tool)
- Compare the latest combined result against the preceding one, subtracting to get the difference
- As they're logged every second, that gives me the incremented byte value for the entire NIC per second
That final integrated value, if converted right, should give me a relatively legitimate Mbps value. What's the right formula there?
This is the part I'm stuck on; the rest works perfectly:
awk '{ foo = $1 / 1024 / 1024; print foo " Mbps" }'
That final processed "last second byte value" is the $1 that lands in foo.

I was throwing $1 / 1024 / 1024 at it to generate the Mbps value, but upon further review online I'm seeing conflicting arguments and standards on this, and my memory is probably quite outdated.
I know the numbers will be semi-hinky as I'm metering both sides of the NIC, so the computed value can exceed the NIC's actual possible line speed (e.g., a 1 Gb card could show up to 2.0 Gbps, a 10 Gb card up to 20.0 Gbps, etc.).
What should I be dividing that processed next-to-final byte value by for the most accurate Mbps counter? $value/1024/1024? $value/1024/1000? $value/1000000?
I think I need 1024/1024 but I wanted a sanity check.
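For a concrete look at how far apart those candidates actually are, here's each divisor applied to a fabricated one-second delta of 1048576 bytes (just the division, not a claim about which is correct):

```shell
# Apply each candidate divisor to a fabricated one-second byte delta,
# purely to see how much the three results differ.
delta=1048576
awk -v d="$delta" 'BEGIN {
  printf "d/1024/1024 = %.6f\n", d / 1024 / 1024
  printf "d/1024/1000 = %.6f\n", d / 1024 / 1000
  printf "d/1000000   = %.6f\n", d / 1000000
}'
```

The spread between the divisors is under 5%. One thing worth flagging for anyone sanity-checking alongside me: "Mbps" is megabits per second, and everything above operates on bytes with no ×8 anywhere, which may be where some of the conflicting advice I found comes from.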