Slowing down a server with traffic shaping


























I would like to study the behaviour of an application streaming from a slow server. I'm trying to take advantage of tc-netem to introduce a network delay.



Before setting up a more complicated scenario, I decided to try this on a virtual machine, which is supposed to emulate the slow server. I access the VM via ssh, so I decided to create a virtual ethernet device to be delayed, while keeping the real ethernet device for management.



First I created the fake interface with ip link add link eth0 type macvtap and assigned it an IP address.



Then I added a 40 millisecond delay with tc qdisc add dev macvtap0 root netem delay 40000 (tc parses a bare number as microseconds, so 40000 is 40 ms). This effectively dropped throughput from ~250 MiB/s to ~6 MiB/s.
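For completeness, the whole setup was roughly the following (the address here is made up):

    # Create a macvtap interface on top of eth0 (auto-named macvtap0)
    # and give it an address; 192.168.1.50/24 is only an example.
    ip link add link eth0 type macvtap
    ip addr add 192.168.1.50/24 dev macvtap0
    ip link set macvtap0 up

    # Add the 40 ms delay; "40ms" and a bare "40000" (microseconds)
    # are equivalent.
    tc qdisc add dev macvtap0 root netem delay 40ms

    # Verify the qdisc is in place.
    tc qdisc show dev macvtap0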



At this point I started to play with my setup, and I realized that the delay was affecting not just the macvtap0 device but also the eth0 device I was connecting through (in short, my ssh session started to lag).



I think that my netem delay affected the actual NIC. Is this because I'm working within a VM? Should I have used something different than macvtap? Or could it be because I applied my changes to the root qdisc?



EDIT - This is my first time entering the apparently huge world of traffic differentiation. Perhaps there's a better approach to this? E.g. can I set up a queue to slow down a selected process? I decided to rename this question to reflect my actual purpose.










      networking tc macvlan







      edited Dec 15 at 18:03

























      asked Dec 15 at 17:00









      Dacav























          1 Answer














          I'm not sure about the problem you have, so I'm skipping that.



          If you want to fiddle with traffic shaping per process you will need to use a classful queueing discipline. HTB or HFSC are probably your best bets. Using one of those you can create a tree of queueing disciplines (netem can be attached to one of the leaves) and distribute traffic across them with tc filter.
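          For instance, a minimal HTB tree with a delayed leaf could look like this (device name, handles and rates are placeholders, not part of the original setup):

              # Root HTB qdisc; unclassified traffic falls into class 1:10.
              tc qdisc add dev eth0 root handle 1: htb default 10

              # 1:10 carries normal traffic, 1:20 is the "slow" class.
              tc class add dev eth0 parent 1: classid 1:10 htb rate 1gbit
              tc class add dev eth0 parent 1: classid 1:20 htb rate 1gbit

              # Attach netem as the leaf qdisc of the slow class.
              tc qdisc add dev eth0 parent 1:20 handle 20: netem delay 40ms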



          Filtering is quite flexible because of the fw filter method, which can match on an iptables mark, which in turn means that you can select the traffic using iptables. You can also match the traffic directly with tc filters, though.
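          Continuing the sketch above, the marking and steering could be done like this (the uid and mark value are placeholders; matching by owner uid is merely one rough way to single out a process's traffic):

              # Mark locally generated packets owned by uid 1000
              # (-m owner only works in the OUTPUT chain, i.e. for
              # traffic the local machine generates itself).
              iptables -t mangle -A OUTPUT -m owner --uid-owner 1000 \
                       -j MARK --set-mark 20

              # Steer packets carrying mark 20 into the delayed class 1:20.
              tc filter add dev eth0 parent 1: protocol ip handle 20 fw flowid 1:20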



          Having said that, please note that qdiscs are only effective on outgoing traffic. You can have an ingress qdisc but that's very limited and probably won't behave as you'd expect.



          For testing purposes, a good bet would be to create a real VM with two interfaces and somehow force-route your traffic through it. Some trickery may be required (e.g. a couple of levels of NATing). In the VM you can then attach whatever qdisc you like to the two interfaces, controlling both directions of the traffic.
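          On such a router VM the shaping part itself would be something like the following (interface names and delays are placeholders):

              # Let the VM forward packets between its two interfaces.
              sysctl -w net.ipv4.ip_forward=1

              # An egress qdisc on each interface covers both directions
              # of a flow that is routed through the VM.
              tc qdisc add dev eth0 root netem delay 20ms
              tc qdisc add dev eth1 root netem delay 20ms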






          answered Dec 15 at 18:53









          V13













          • For ingress there's ifb, which inserts an internal egress pseudo-interface. E.g. my answer to Simulation of packet loss on bridged interface using netem
            – A.B
            Dec 20 at 22:25
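          In outline, the ifb approach from this comment looks like the following (a sketch, assuming the ifb module is available and the kernel is recent enough for the matchall filter):

              # Create and bring up the ifb device.
              ip link add ifb0 type ifb
              ip link set ifb0 up

              # Redirect everything arriving on eth0 to ifb0's egress path.
              tc qdisc add dev eth0 handle ffff: ingress
              tc filter add dev eth0 parent ffff: protocol all matchall \
                 action mirred egress redirect dev ifb0

              # A normal egress qdisc on ifb0 now shapes the ingress traffic.
              tc qdisc add dev ifb0 root netem delay 40ms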



















