So I have a few servers that handle a lot of concurrent IPTV streams over HTTP, and because of the way TCP works this generates a lot of microbursts on my switches.
The servers are all connected via 10GbE interfaces.
To reduce the microbursts I am using Linux 'tc', which helps a lot, but even after a few days of reading the man page I am not sure I am doing everything correctly.
What does the following command actually mean for a server on a 10GbE interface?
sudo tc qdisc add dev eth0 root tbf rate 3000mbit \
    burst 30m latency 50ms peakrate 3500mbit minburst 100000
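For reference, here is how I currently understand each parameter from tc-tbf(8); these annotations are my own reading, so please correct me if I have any of it wrong:

# rate 3000mbit      - the long-term average rate; tokens refill at this speed
# burst 30m          - size of the main bucket: up to 30 MB of tokens can pile up
#                      while the link is idle and then be spent all at once
# latency 50ms       - the longest a packet may wait in the queue before being
#                      dropped (an indirect way of capping the queue length)
# peakrate 3500mbit  - a second bucket that limits how fast the main bucket may
#                      be drained, i.e. the short-term rate ceiling
# minburst 100000    - size of the peakrate bucket (alias for mtu); it has to be
#                      at least the interface MTU, and the larger it is, the
#                      coarser the peakrate enforcement becomes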
What I am trying to achieve (my best-guess command follows the list):
Server will never output more than 3500mbit/s when measured at the millisecond level.
Server will never output more than 3000mbit/s on average.
Packets are not prioritized; plain FIFO is fine.
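If my reading of the man page is right, something like this should express those three constraints. The burst and minburst values are guesses on my part (1540 is just my MTU plus a little headroom), which is part of what I'm asking about:

# Untested sketch: 3000mbit average, 3500mbit short-term ceiling, FIFO inside
sudo tc qdisc replace dev eth0 root tbf rate 3000mbit \
    peakrate 3500mbit minburst 1540 burst 1m latency 50ms

I used 'replace' so it swaps out whatever root qdisc is already installed, and I shrank burst from 30m because a 30 MB bucket seems to defeat the purpose (see my math below).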
How would I achieve that using tc, and is my sketch above anywhere close? Also, what do "minburst" (100000) and "burst" (30m) actually mean? Do I even need to specify them in order to achieve what I want?
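Part of my confusion is the burst sizing arithmetic (my numbers, so sanity checks welcome):

# 3000mbit/s of token refill is 375 MB/s
# burst 30m is a 30 MB bucket, which refills from empty in 30 / 375 = 0.08 s
# so after only ~80 ms of idle time the qdisc would happily let 30 MB out at
# the 3500mbit peakrate, which sounds like exactly the kind of burst I am
# trying to prevent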