Visualizing µTP

We’ve spent a lot of time in recent posts talking about the benefits of µTP.  We’ve even talked a little about how it works here, and much more so in the community’s technical forums.  But sometimes a picture is worth 2^^10 words, and I think the graph below says it best: µTP appears to be up to the task of reducing congestion.

[Figure 1: Visualizing µTP – histogram of latency samples]

These results come from the QA regression tests we run on each new version of the client that ships with µTP.  The test is simple: we start a client seeding over a DSL line here in the office, then measure the latency seen by other applications (VoIP, online games, web browsing) that we run concurrently over the same link.  The graph above is a histogram of those latency samples.
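
The sampling side of such a test is easy to picture: collect per-probe latency samples while the seed runs, then bucket them. A minimal sketch (a hypothetical helper, not our actual QA harness):

```python
from collections import Counter

def latency_histogram(samples_ms, bucket_ms=10):
    """Bucket latency samples (milliseconds) into a histogram keyed
    by each bucket's lower bound, e.g. a 37 ms sample lands in 30."""
    hist = Counter()
    for s in samples_ms:
        hist[int(s // bucket_ms) * bucket_ms] += 1
    return dict(sorted(hist.items()))

# Three probes land in the 30-40 ms bucket, one in the 100-110 ms bucket.
print(latency_histogram([31.2, 35.9, 38.0, 104.5]))  # → {30: 3, 100: 1}
```

In the real test the samples come from concurrent VoIP, gaming and web traffic rather than a fixed list.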

The green samples were taken with a client seeding over TCP, and the red samples with a client seeding over µTP.  (You can tell these are engineering graphs rather than marketing ones by the fact that GREEN = bad and RED = good, but you get the picture…)  In reading the graph, remember that queuing delay (latency) is a side effect of congestion: more latency in this test means more congestion.

With the target latency set at 100 ms, µTP does a good job of keeping the latency felt by the other applications near the target.  TCP clearly does not; it more than congests the uplink, and in the process it ruins the network for all of the adjacent applications.
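
The mechanism that makes this possible is delay-based congestion control: every acknowledgement carries a delay measurement, and the sender grows or shrinks its window in proportion to how far the measured queuing delay sits from the target. A much-simplified sketch of that idea (the constants are illustrative, not the shipping client’s values):

```python
TARGET_MS = 100.0   # target queuing delay, as in the test above
GAIN = 1.0          # how aggressively the window reacts (illustrative)
MSS = 1400          # bytes per packet (illustrative)

class DelayBasedWindow:
    """Toy delay-based controller: scale the congestion window in
    proportion to how far measured queuing delay is from the target."""

    def __init__(self):
        self.cwnd = 10 * MSS  # congestion window, in bytes

    def on_ack(self, queuing_delay_ms, bytes_acked):
        # Positive when under the target (speed up), negative when over.
        off_target = (TARGET_MS - queuing_delay_ms) / TARGET_MS
        self.cwnd += GAIN * off_target * bytes_acked * MSS / self.cwnd
        self.cwnd = max(self.cwnd, MSS)  # never below one packet

w = DelayBasedWindow()
before = w.cwnd
w.on_ack(queuing_delay_ms=20, bytes_acked=MSS)   # below target: grow
assert w.cwnd > before
w.on_ack(queuing_delay_ms=180, bytes_acked=MSS)  # above target: shrink
```

Because the controller backs off before the queue fills, a competing TCP flow (which only backs off on loss) will always win the link; that is exactly the yielding behavior the graph shows.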

[Figure 2: Visualizing µTP]

While much work remains ahead of us (like picking the right target latency), µTP demonstrates clear potential to alleviate network congestion wherever the network bottleneck happens to reside.  This has obvious benefits for users, who will no longer congest themselves; for publishers, who want to use BitTorrent but also want to protect their brand when users seed content on their behalf; and for ISPs, who should see far fewer support issues with BitTorrent causing congestion and impacting other users on the network.

A win win win.

 Comments:
  • john

    Looks superb :)

  • http://www.digitalsociety.org George Ou

    You should label your chart as “upstream usage” because the latency on downstream is much greater. Moreover, 70 ms is unbearable for online gaming and it even causes my Lingo VoIP service to drop voice packets (timeout). It would be possible to force upstream induced jitter down to 15 milliseconds or less (time it takes to clear one packet from transmit queue).

    Based on my tests from now and before, the amount of latency being induced in early 2008 is not much different from now with uTorrent 2.0. Those downstream spikes are horrendous, and I have suggested (http://www.formortals.com/?p=57) that BitTorrent should attempt to space the individual upstream packets at equidistant intervals so that upstream jitter (and maybe to some extent downstream jitter) can be vastly reduced.

    I will vehemently disagree with your resistance to “excessive intelligence” in the network, because I feel it will not only result in better performance for other applications, but will also improve BitTorrent (or any P2P) application. That’s the most direct way to manage per-user, per-subscriber, and per-application fairness, and DiffServ on the broadband modem or the cable/DSL head-end is the most direct way to eliminate upstream and downstream jitter respectively. By fighting these efforts, and perhaps aligning yourselves with Net Neutrality advocates who hypocritically (http://blog.pff.org/archives/2009/06/free_press_hypocrisy_over_metering_internet_price.html) suggested that metered pricing was a better way to go than intelligent networks (http://blogs.zdnet.com/Ou/?p=914), you’re dooming your own application and protocol to poorer performance and more limited usage caps.

    And as I have repeatedly said, bandwidth throttling is bad but deprioritization of P2P file transfer applications is good for both the applications that have to share a network with P2P and good for the P2P application as well.

    • Simon Morris

      George,
      Thanks for your remarks. We took your comments about pacing seriously and did indeed spend some time investigating it (around the time you made your original suggestions, I think). The problem we found is that this is certainly not a simple fix. We encountered a range of challenges, including the very large minimum timer delays you can (easily) achieve in Windows, a tendency for packets to get aggregated into chunks, and large amounts of timing noise – we found that just force-fitting ideal pacing has a disastrous effect on transfer speeds. We do have some limited pacing implemented in some scenarios, and we understand the theoretical benefits, but it is not low-hanging fruit. For now it has not been our focus, but we may well take another look at it in due course.
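
The timer-granularity problem is easy to demonstrate for yourself: ask the OS to sleep a fraction of a millisecond between sends and measure what you actually get. A small sketch (results vary by OS and load):

```python
import time

def measure_pacing(interval_s, n=50):
    """Attempt to pace n dummy 'sends' interval_s apart; return the
    average inter-send interval actually achieved."""
    start = time.perf_counter()
    for _ in range(n):
        time.sleep(interval_s)  # stand-in for sending one packet
    return (time.perf_counter() - start) / n

requested = 0.0005  # ask for 0.5 ms between packets
achieved = measure_pacing(requested)
# sleep() guarantees *at least* the requested delay; with a coarse OS
# timer the achieved interval can be many times larger than requested.
assert achieved >= requested
```

On systems with a coarse scheduler tick, each 0.5 ms request can cost several milliseconds, which is exactly why naive per-packet pacing collapses transfer speeds.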
      Regarding our respective positions on net neutrality, as I said in my original email to you, we do not want to be zealots – we stand for constructive engagement between the people who provide the internet and the people who build popular apps ON the internet. uTP is an important (though not the only) part of that engagement. We think there are compelling reasons for network providers to remain as agnostic as possible to applications. We realize there will always be network management needs and exceptions, but we also believe that some broadly agreed principles of net neutrality should be the general rule. As for your claim that special network intelligence will result in better protocol performance and fewer bandwidth caps – it’s really an argument about whether these good intentions will get us to a place we both want to go, or whether they are the first steps along the road to hell… Like you, we can’t know for sure, but compelled to make a choice, we have chosen the way that we believe has the better chance of a better future for everyone.

  • http://www.digitalsociety.org George Ou

    Simon,

    Thanks for your remarks.

    It seems that your concerns are based on the accuracy of the intelligence in the network. My question to you is this: aside from concerns about accuracy and possible misclassification (which I think can be addressed with intelligent classification mechanisms that look at traffic patterns and not just port numbers), do you have a philosophical opposition to always prioritizing low-bandwidth, low-duration applications over high-bandwidth, high-duration applications?

    In simpler terms, do you believe that it is ever appropriate for an ISP to prioritize VoIP and online gaming traffic (sub 100 Kbps) over BitTorrent traffic?

    • Simon Morris

      George,

      So we’re certainly not against a philosophy where bandwidth for some apps should give way to bandwidth for other apps. But we believe it makes good sense for application developers to implement this type of prioritization (as uTP does) rather than having a priori rules established by ISPs which every application provider must conform to going forward. (Should I say that app providers must then conform to every different network provider’s different implementation of these policies – it becomes a real mess very quickly.)

      I might point out an example. A number of developers (including us) are getting more and more advanced at implementing streaming on top of BitTorrent. At that point you have a protocol that is much higher priority in the mind of the user, but only the application knows that – a policy implemented by ISPs which blindly says “all BitTorrent is lowest priority” will simply squash it. This means that a natural development of download technology towards more of a progressive download (for content where that makes sense) is trumped by (perhaps well-intentioned) a priori rules enforced in the “intelligent” network.

  • Pingback: Digital Society » Blog Archive » Analysis of BitTorrent uTP congestion avoidance

  • http://www.digitalsociety.org George Ou

    “Should I say that app providers must then conform to every different network provider’s different implementation of these policies”

    I don’t believe this is a legitimate concern, since the policy fairly prioritizes low-bandwidth over high-bandwidth applications. It doesn’t require the developer to do anything. So long as the network provider is transparent and doesn’t abuse the network default priority (which I believe should take a back seat to application or user priority labels), I don’t see what your objection is, especially when I’m suggesting that your preference is more important than the default ISP settings. Moreover, I would suggest that the FCC can oversee any potential anti-competitive abuse of the default settings, e.g., an ISP labeling a low-bandwidth application as low priority.

    Also, I never got an answer as to whether you believe BitTorrent (even in video streaming mode) should have equal priority to applications such as VoIP and online gaming, which use less than 100 Kbps. Moreover, why would a round-robin queue that alternates between the VoIP/gaming packets and the BitTorrent packets be unfair, especially when it reduces jitter for the VoIP packets? As I explained here, http://www.digitalsociety.org/2009/09/the-need-for-a-smarter-prioritized-internet/, a First In First Out (FIFO) system is simply primitive and destructive for VoIP, which in turn has negative repercussions for P2P (when the user shuts P2P down due to its bad behavior). Moreover, it would be fair if the queue forwarded 10 tiny VoIP or gaming packets (assuming 10 separate sessions) for every large BitTorrent packet, because the router would spend equal time forwarding VoIP or gaming and BitTorrent.
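
The round-robin queue described above can be sketched as two FIFOs drained alternately, so a burst of large bulk packets can never starve the small real-time ones (toy packet labels, not a real router implementation):

```python
from collections import deque

def round_robin_drain(voip_q, bulk_q):
    """Alternate between the two queues each cycle; when one queue is
    empty, keep draining the other (work-conserving)."""
    sent = []
    while voip_q or bulk_q:
        if voip_q:
            sent.append(voip_q.popleft())
        if bulk_q:
            sent.append(bulk_q.popleft())
    return sent

order = round_robin_drain(deque(["v1", "v2", "v3"]), deque(["b1", "b2"]))
print(order)  # → ['v1', 'b1', 'v2', 'b2', 'v3']
```

Each VoIP packet waits for at most one bulk packet, which bounds the jitter it can suffer regardless of how deep the bulk queue gets.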

    Moreover, BitTorrent or any P2P application in its purest form is not appropriate for streaming video, because its delivery is out of order. Hybrid CDN/P2P models work where the CDN supplies the necessary inline packets and the P2P network offloads random bits ahead of the playback. So in this video application the P2P application is still a background application, and it does not deserve to have 10x the bandwidth of the single CDN flow.

    Lastly, you have not addressed the issue of BitTorrent taking 10 times more bandwidth than single-flow applications. This is a fundamental concern that has not been addressed by BitTorrent and it needs to be addressed at the network level.

  • Arvid Norberg

    There is one missing link in the reasoning that goes from “intelligent network” to “we cannot have net neutrality”.

    There are plenty of examples of “intelligent networks” that don’t require the ISP to interpret which application is the source of which packets. This is exactly what the TOS byte was designed for. I’m sure most P2P developers would be quite excited if some major ISPs would implement a low-prio queue for a certain value of the TOS. No violation of net neutrality and intelligent network at the same time.
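
Marking the TOS byte Arvid mentions is a one-line socket option. A sketch for Linux/BSD sockets (0x20 is the TOS encoding of DSCP CS1, commonly used for low-priority traffic; whether any given ISP honors the marking is an assumption, not a given):

```python
import socket

# Mark every packet from this socket as low priority: 0x20 is the TOS
# byte for DSCP CS1. The ISP honoring it is the part that's missing today.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0x20)
tos = s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)  # read back the value
s.close()
```

No deep packet inspection required on the ISP side: the marking travels in the IP header of every packet the socket sends.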

    • http://www.digitalsociety.org George Ou

      First of all Arvid, which one of the dozen definitions of “Net Neutrality” are you referring to?

      Having a low-priority queue or a “scavenger” class would be nice, but it is not mutually exclusive with an ISP implementing a default setting. We can still give user or application preference precedence, but that doesn’t mean having a default setting to catch every application and every user is a bad thing.

  • Nick Gilbert

    Simon Morris – “the policy implemented by ISPs which blindly says “all BitTorrent is lowest priority” will simply squash it.”

    But this is true – all BitTorrent traffic *IS* low priority. The problem is the actual implementation of that rule. ISPs are throttling BT traffic even when no other traffic is using that particular path. If they correctly throttled it only when the connection was nearly saturated, I doubt there would be a problem.

  • http://www.digitalsociety.org George Ou

    Nick,

    You’re absolutely right. BitTorrent *IS* and *SHOULD* be lowest priority even if it’s used to supplement a CDN for on-demand streaming or even if it’s used for buffered video streaming. If the video stream is 1 Mbps and it’s pushing the file down the pipe at 4 Mbps (which is usually how web streaming works), there’s no reason to treat it as something other than file transfer.

    The problem, as you correctly point out, is that some ISPs (mostly in Canada) are hard-capping and hard-throttling P2P. That heavy-handed reaction is one extreme; the other extreme claims that BitTorrent deserves just as much priority as everything else. Both the ISPs demanding hard throttling and BitTorrent demanding no deprioritization are wrong. The fair and efficient way is to facilitate good performance for everyone: high priority for low-bandwidth applications, low priority (but not hard-capped) for file transfer applications.
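
The “low priority but not hard-capped” policy described here is what schedulers call strict priority with work conservation: real-time packets always go first, but bulk traffic gets the whole link whenever nothing else is waiting. A toy sketch:

```python
from collections import deque

def next_packet(realtime_q, bulk_q):
    """Strict priority, work-conserving: real-time packets always go
    first, but when none are waiting the bulk queue gets the full
    link -- deprioritized, never hard-capped."""
    if realtime_q:
        return realtime_q.popleft()
    if bulk_q:
        return bulk_q.popleft()
    return None  # link idle

rt, bulk = deque(["voip1"]), deque(["bt1", "bt2"])
assert next_packet(rt, bulk) == "voip1"  # real-time wins while present
assert next_packet(rt, bulk) == "bt1"    # queue empty: bulk is not capped
```

Contrast this with a hard throttle, which would delay the bulk queue even when the real-time queue is empty and the link is sitting idle.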

    The problem now is that we have special interests demanding that we can’t protect VoIP or online gaming. http://www.digitalsociety.org/2009/11/fcc-nprm-prohibits-good-network-management/

    If they get their way, it won’t just harm VoIP and online gaming. That’s because a significant number of P2P users also play games and use VoIP. Those people will be forced to simply shut down P2P like they do today, and that harms P2P because it reduces the number of available seeds. So even if ISPs are prohibited from deprioritizing P2P file transfer, that won’t change the fact that P2P users will hard cap or shut down their own P2P applications because they or someone they live with will demand it.

  • Maxim

    Looks very good!

  • Arioch

    > if some major ISPs would implement a low-prio queue for a certain value of the TOS

    Why would ISPs have to be the locomotives?
    Why can’t BitTorrent and other P2P clients make a deal and deliberately start using some agreed TOS value?

    Just make an agreement among the uTorrent, Vuze, rTorrent, Transmission, etc. developer swarms.
    Then win over other P2P developers, like Shareaza, *Mule, LimeWire, etc.
    Win over the Skype and Mozilla/Opera/WebKit developer swarms to set the same TOS for file-download sessions.

    Start it preemptively, the way insurance is bought before there is any reason for it.

    If ISPs found in a year that at least half of low-priority traffic is marked with a specific TOS, wouldn’t they jump on the wagon and implement a low-priority queue then?
    Just start tagging a priori, without ISPs giving you orders. And then let them catch up.

    • http://127.0.0.1 Gospodin

      I don’t think this would work; it’s impossible to get EVERY network-enabled application to comply. If what you suggest were imposed, I could just use wget to do my downloads and get a high priority :D

      • Arioch

        “Network-enabled” is way too broad a range –
        only apps dealing with large-volume, latency-independent downloads matter. And not all of them, just a fair share.
        BitTorrent/uTorrent alone is a noticeable share of worldwide internet traffic.

        You can. But you probably cannot make P2P transfers with wget :-)
        And that is by no means bad: you would get normal priority (not high – there is no “high” at all), and so you would only harm your own www/mail/chat/etc.

        The setting should be the default. We are lazy, and we would not tick an extra checkbox for every new P2P download, so by default they would be low priority. If, for some unusual reason, a particular download is important to the user, he has every right to mark it with normal priority, getting a slightly better download time and a noticeably worse overall internet experience. If the information is worth it – why not?

  • Kyle Waters

    Your assertion that BitTorrent traffic is low priority is close-minded, George. It is simply not neutral. I do not play online games. I do not use VoIP. I rarely watch streaming video. What do I do? I use BitTorrent. I pay the same for bandwidth as a user who does use those things, and if I were getting less bandwidth than a chronic phone-talking girl who uses VoIP, I would rightly be upset. I understand your argument that BitTorrent traffic is low priority in that it isn’t time-sensitive, but that doesn’t make throttling it neutral: a person who uses both VoIP and BitTorrent would prioritize VoIP over BitTorrent, but for someone who only uses non-time-sensitive apps, BitTorrent might be their priority. I pay the same as any other user and therefore should get what I want, or I’ll leave the network for a competitor, even if higher priced.

    • Arioch

      From what was said above, the context was prioritization of traffic within one client’s pipe, not across clients.

      OTOH, prioritizing traffic across different users is an interesting question in itself.

      Comparing Ann’s BT and Bob’s VoIP, I think BT should be low-priority latency-wise and same-priority average-bandwidth-wise.

  • MF

    Lol, 2^^10 (tetration) is quite a big number: it’s a power tower of ten 2s, 2^(2^(2^(2^(2^(2^(2^(2^(2^2)))))))). For example, 2^^5 = 2^(2^(2^(2^2))) = 2^65536, which is more than 2*10^19728. 2^^10 is more than 10^(10^(10^(10^(10^(10^19727.7804))))). You can’t imagine that, and you certainly can’t write that many words; it wouldn’t even be possible if every quark in the universe contained a googolplex of supercomputers.

    PS. 2^^10 ends with the 5 digits …48736.
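
The PS can be checked with the generalized Euler theorem (for e ≥ log2(n), 2^e ≡ 2^(φ(n) + (e mod φ(n))) mod n), applied recursively down the tower. A quick sketch:

```python
def phi(n):
    """Euler's totient, by trial-division factorization."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def tower_mod(height, mod):
    """2^^height mod `mod` (a tower of `height` 2s), recursing with
    2^e ≡ 2^(phi(mod) + e mod phi(mod)), valid once e ≥ log2(mod) --
    which holds at every level that matters for this modulus chain."""
    if mod == 1:
        return 0
    if height == 1:
        return 2 % mod
    f = phi(mod)
    return pow(2, tower_mod(height - 1, f) + f, mod)

print(tower_mod(10, 10**5))  # → 48736, matching the PS above
```

The recursion terminates quickly because iterating φ drives the modulus to 1 in a handful of steps, so the whole tower reduces to a few modular exponentiations.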

  • TheNetworkGuy

    What seems at issue here can be called “congestion control” in underprovisioned networks. If congestion is a major feature of your network, you should not be in network provisioning at all. To take an extreme angle: is all this fuss about accommodating quarterly-revenue-driven corporate profiteers too stingy to invest in proper infrastructure? It is not at all expensive or difficult to engineer a properly provisioned network. Look at countries in the Far East, which typically take a long-term view (much longer than quarterly) and are actually investing in their own business infrastructure. A foreign concept, perhaps? But it CAN be and HAS been done.

    While I greatly respect some of the posters advocating “intelligent” non-neutral networks, I am obliged to sound an urgent warning to them. The thing that has made “The Network” great is that it is simple, ubiquitous and dumb. All it does is shift packets. One example of what happens when you add “intelligence”: one of the few spaces the current “clamourers” have to “implement policies” is in BGP. The effect of them “protecting their interests” is that implementing BGP has become fiendishly difficult and expensive for everyone, especially multihoming, and even more so on IPv6. This cost is simply passed on to the end user. And BGP is now known as Border Gateway “Politics”. Imagine what will happen when they are allowed, by consensus, to do the same with plain transit. The first step on the road to hell.

    Kudos to BitTorrent for actually doing something about it. However, it shouldn’t be their problem. The reason BitTorrent uses “ten times” more bandwidth than other applications is that that’s how much it takes to move a file over the network. Nothing to do with the protocol. Everything to do with overselling and underprovisioning.

    Request to the guys working on uTP: Please remember the people on the other end of Long Fat Pipes – lots of bandwidth, but with lots of latency too.

  • BitTorrent6.4

    How do I increase the speed of the BitTorrent 6.4 client?

  • http://www.skillpod.com Thys the online games man

    Makes sense. I’ve had enough trouble with bottlenecks and with everything just freezing up, regardless of the number of tries.

    Will give it a few goes now and play around. Hope it works. Keep fighting the good fight!

    He he he.

  • Arioch

    What I find is some kind of “RAM congestion”.

    Take a usual home router, like the TrendNet 632/652 BRP:

    MIMO Wi-Fi, OpenWRT/DD-WRT support, stock Linux firmware.

    Connect it to some metro-LAN ISP.

    1) µTP has a native NAT-traversal capability – great! I wonder why the oldie-goldie TCP torrent did not.

    2) However, after half an hour of a highly successful µTP experience (downloading at 3-5 MB/sec average), the router itself starts degrading: in the worst cases its internal web setup page no longer loads. The easiest thing to notice is that Wi-Fi WPA2 + DHCP notebooks can no longer connect until the router is restarted.

    I can find no explanation other than that UDP routing/NAT tables consume orders of magnitude more RAM than the TCP ones and block the Wi-Fi/WPA/DHCP modules from loading from swap.

  • pingo

    The key is to surprise the enemy.

    As long as they can predict what route you will use, they will find a way to shape/kick/block/ban it.

    Get inside them.