DNS is less reliable but can give you higher throughput (you can send and request large records). ICMP packets arrive quicker, are more reliable, and bypass the various DNS hijackers (common with many ISPs) along the way.
DNS also requires you to have a DNS server and a domain, and you'll need something to constantly clear the cache on the local machine; otherwise you'll eventually run out of room, even if you use the maximum available DNS record size.
If anything along the way keeps your DNS queries in cache, you might be screwed and run out of space very quickly.
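This is why tunnels put a never-repeating component in every query name, so no resolver along the path can serve it from cache. A minimal sketch, building (not sending) a raw DNS TXT query with the standard library; the domain `t.example.com` is a placeholder for a zone you would control:

```python
# Sketch: encode an exfiltration chunk into a DNS query whose name is
# never repeated, so intermediate resolvers cannot answer from cache.
# "t.example.com" is a hypothetical tunnel domain, not a real endpoint.
import os
import struct

def build_query(chunk: bytes, domain: str = "t.example.com") -> bytes:
    nonce = os.urandom(4).hex()                 # cache-buster: unique per query
    labels = [chunk.hex(), nonce] + domain.split(".")
    header = struct.pack(">HHHHHH",
                         int.from_bytes(os.urandom(2), "big"),  # query ID
                         0x0100,                # flags: standard query, RD set
                         1, 0, 0, 0)            # QDCOUNT=1
    qname = b"".join(bytes([len(l)]) + l.encode("ascii") for l in labels)
    question = qname + b"\x00" + struct.pack(">HH", 16, 1)  # QTYPE=TXT, QCLASS=IN
    return header + question

pkt = build_query(b"secret")
```

Each data label must stay under DNS's 63-character label limit, so real tunnels split chunks across several labels per name.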
If you need internet access, an ICMP tunnel will be better: bandwidth will be limited, but it will be more or less a P2P tunnel. If you need to exfiltrate data without explicitly maintaining a bi-directional tunnel, DNS is the way to go; it will also work in more restrictive captive-portal cases than ICMP.
Today ICMP is usually blocked outright, while DNS sometimes works, especially in the common case where the restricted network offers some whitelisted sites (e.g. airport wifi that allows you to access the airport's site and the local train service but blocks everything else).
I haven't seen a single network stack that doesn't limit the size of the trailing payload or the packet in general (MTUs ;)). Go try to push 65507 bytes of payload into the message and tell me how it goes.
In any case, a DNS tunnel offers you both TCP and UDP tunneling at much higher throughput. I'll take a look at your code when I have the time and see how it compares to ptunnel or ICMP shell.
"A correctly-formed ping packet is typically 56 bytes in size, or 84 bytes when the Internet Protocol header is considered. However, any IPv4 packet (including pings) may be as large as 65,535 bytes."
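The numbers in that quote, and the 65507-byte ceiling mentioned above, fall out of the header sizes; a quick sketch of the arithmetic:

```python
# Size arithmetic for a default ping and for the theoretical maximum.
IP_HEADER = 20     # IPv4 header without options
ICMP_HEADER = 8    # type, code, checksum, id, sequence
DEFAULT_DATA = 56  # default `ping` payload

icmp_msg = DEFAULT_DATA + ICMP_HEADER             # 64-byte ICMP message
on_wire = icmp_msg + IP_HEADER                    # 84 bytes at the IP layer
max_ipv4 = 2**16 - 1                              # 65535: 16-bit total-length field
max_ping_payload = max_ipv4 - IP_HEADER - ICMP_HEADER  # 65507 bytes
print(icmp_msg, on_wire, max_ping_payload)
```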
1500 is the MTU of Ethernet; it's often not sustainable on an end-to-end connection (especially when you add frame overheads). If you use that high a payload size you'll get considerably worse performance than, say, limiting it to around 500 bytes; you're welcome to try it.
Also, with how global traffic is managed, smaller packets tend to get priority since they can be queued quicker. Backbone connectivity uses much bigger frames than Ethernet, so more often than not generating more, smaller packets will increase your overall throughput (to a limit), unless you are on a very controlled network.
What do you mean by "it's often not sustainable"? Throughput on a server is higher at higher packet size, so if you're doing a download I'd expect the server to send 1500 byte packets. It's pretty easy to saturate a link with 1500 byte packets, and it's much harder to do so at lower packet sizes (from the server's perspective) since the per-packet processing costs start to dominate over the per-byte costs. Admittedly my knowledge of this sort of stuff is mostly intra-DC; is there some other factor that you're referring to that supersedes this on the web?
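The per-packet-vs-per-byte point can be sketched with rough wire-overhead arithmetic (header sizes only; real per-packet CPU and interrupt costs widen the gap further). Assumed figures: Ethernet II + IPv4 + TCP headers with no options, ignoring preamble and inter-frame gap:

```python
# Fraction of the wire carrying actual payload at two packet sizes.
HEADERS = 14 + 20 + 20  # Ethernet II + IPv4 + TCP, no options

def goodput_fraction(payload: int) -> float:
    """Payload bytes as a fraction of total bytes sent per packet."""
    return payload / (payload + HEADERS)

small = goodput_fraction(446)    # ~500-byte packets on the wire
large = goodput_fraction(1446)   # ~1500-byte packets on the wire
print(round(small, 3), round(large, 3))
```

Smaller packets also mean roughly three times as many packets per transferred byte, which is where the per-packet processing cost dominates.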
I'm not aware of prioritizing smaller packets on the backbone, sounds like something that would be targeted at small flows (i.e. first N packets in a flow get a priority bump)? More info on that would be appreciated.
The MTU setting on your end doesn't matter; for WAN and ISP/interlink-grade networks it's meaningless, since they don't use Ethernet. The FDDI frame size is 4500 bytes (ATM is about double that), minus whatever overhead, but usually 4200 and change. ISP/WAN routers don't care about how many Mbit/s they transfer but about how many packets they route per given unit of time, and as packets get packed into a single frame, the smaller the packet, the more packets they can transfer in each frame.
Also, from a more high-level point of view, if you think about it, the small packets are the most critical ones, at least as far as responsiveness goes. DNS is limited to 512 bytes over UDP, and TCP 3-way handshake packets are tiny; those are the packets that need to get to and back from their destination as fast as possible. Delays in data transfers mean slower speeds; delays in handshakes mean your application can fail or hang.
Other important traffic such as VOIP[0] also uses very small packet sizes for the same reason: most critical services need to transfer very little data (per given unit of time) but need to update it as frequently as possible to provide the illusion of real time and to mask the latency. The same goes for things like online/multiplayer gaming, and so on.
Pretty much: if you want your service to be as responsive as possible, limit your packet size to the smallest size possible and increase your PPS. This will ensure your packets reach their destination quicker.
[0]VOIP Packet Sizes http://www.cisco.com/c/en/us/support/docs/voice/voice-qualit...
The only time you would want to use large packets is pretty much when you can have a buffer. This means you need to handle fewer packets per second, which lowers CPU consumption (across the entire path), so video streaming and the like can use pretty much as large an MTU as they want, unless they start getting fragments.
> It doesn't matter the MTU setting on your end for WAN and ISP / interlink grade networks is meaningless [as] they don't use Ethernet...
It absolutely does matter and is quite meaningful. :)
If you set your edge router's Internet-facing MTU to 9k, and the upstream equipment's MTU is smaller than that, then either your packets will be dropped, or PMTU Discovery will try to figure out the MTU of the path. (Better hope everyone along the path is correctly handling ICMP! :) )
> The only time you would want to use large packets is pretty much when you can have a buffer...
Or if you have high-volumes of data to move and want to dramatically increase the data:Ethernet_frame_boilerplate ratio. :)
> Also ... if you think about it the small packets are the most critical ones... [because they need to be dispatched as quickly as possible.]
Yes, but a larger MTU shouldn't affect this. Set whatever socket options are required to get those packets on their way as soon as they're created, and your system shouldn't wait to fill an Ethernet frame before sending that packet.
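One concrete instance of "set whatever socket options are required": disabling Nagle's algorithm with `TCP_NODELAY`, so the stack sends small writes immediately instead of coalescing them into larger segments, regardless of the interface MTU. A minimal sketch:

```python
# Sketch: make a TCP socket dispatch small writes immediately rather
# than waiting to coalesce them (disables Nagle's algorithm).
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
s.close()
print(nodelay)  # non-zero once the option is set
```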
Poor choice of words on my part. If you configure jumbo frames on your uplink you are going to kill your network stack; if you limit it too much you'll have huge overhead.
The point being: for transferring data, especially when responsiveness is important if not paramount, utilizing the maximum frame size you can push without fragmentation will generally yield a poorer result in real-world applications.
> [I]f you configure [jumbo] frames on your uplink you are going to kill your network stack...
I can't agree with that statement. If upstream devices support a larger than 1500 byte MTU, OR PMTU Discovery works correctly, then you are absolutely not going to "kill your network stack". At worst, (in the PMTU discovery phase) you'll see poor performance for a few moments while the MTU for the path is worked out, and then nothing but smooth sailing from then on.
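On Linux, PMTU discovery can be forced per socket; with `IP_PMTUDISC_DO` the kernel sets the Don't Fragment bit, so oversized datagrams fail locally (EMSGSIZE) or draw an ICMP Fragmentation Needed from the path instead of being silently fragmented. A Linux-specific sketch (the `getattr` fallbacks use the standard Linux constant values, for Python builds that omit the names):

```python
# Sketch: force Path MTU Discovery on a UDP socket (Linux-specific).
import socket

IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)  # Linux value
IP_PMTUDISC_DO = getattr(socket, "IP_PMTUDISC_DO", 2)     # set DF, never fragment

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
mode = s.getsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER)
s.close()
print(mode)
```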
> The point being: for transferring data, especially when responsiveness is important if not paramount, utilizing the maximum frame size you can push without fragmentation will generally yield a poorer result in real-world applications.
I'm not sure what you're saying here. Are you saying:
"If you configure your networking equipment to always wait to fill up a full L2 frame before sending it off, you'll harm perf on latency-sensitive applications."?
If you're not, would you be so kind as to rephrase your statement? I may be particularly dense today. :)
However, if you are, then that statement is pretty obvious. I expect that few people configure their networks to do that. However, I don't see what that has to do with the link's MTU. Just because you have a 9k+ MTU, doesn't mean that you have to transmit 9k of data at a time. :)
I work for an ISP and it is all Ethernet on the interior, both for residential and commercial customers. The small amount of frame relay and similar services that get requested runs on the Ethernet network from edge to edge.
It's the de-facto MTU of much of The Internet. Baby Jumbo (MTU >1500 but <9k), Jumbo (MTU ~9k), and Super Jumbo (MTU substantially larger than 9k) frames exist, and are supported by many (but -sadly- not all) Ethernet devices.
Edit:
> Also with how global traffic is managed smaller packets tend to get priority...
Do you have a reliable citation for this? I would expect that core and near-core devices would handle so much traffic, that they all would be using MTUs far higher than 1500 bytes per frame.
It's a pretty standard QoS measure: network schedulers, especially for multiplexed/aggregated networks, will have a bias for small packets. You should be able to find performance statistics for various token-bucket configurations that demonstrate this.
Do you have a cite for that? :) I know that CoDel doesn't bias for small packets; it treats all flows equally and tracks traffic on a bytes-transferred (rather than packets-transferred) basis.
Can you please clarify this comment? It sounds like you're saying Ethernet cannot maintain a line-rate transfer at the maximum MTU. But that can't be what you're saying; anyone could run an iperf/netperf or even a large crafted-packet transfer and prove this wrong.
If the layer above (i.e. IPv4) can create fragments, you can send up to the maximum payload of the L3 protocol. You can send up to a 64KiB IPv4 packet over 1500-byte Ethernet.
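The arithmetic behind that claim: fragment payloads must be multiples of 8 bytes, since the IPv4 fragment offset field counts in 8-byte units. A sketch of how many fragments a maximum-size packet needs over 1500-byte Ethernet:

```python
# Fragment count for a maximum-size IPv4 packet over standard Ethernet.
import math

MTU = 1500
IP_HEADER = 20
MAX_IPV4 = 65535                          # 16-bit total-length field

per_fragment = (MTU - IP_HEADER) // 8 * 8  # payload per fragment, 8-byte aligned
payload = MAX_IPV4 - IP_HEADER             # bytes the fragments must carry
fragments = math.ceil(payload / per_fragment)
print(per_fragment, fragments)
```

Losing any one of those fragments loses the whole packet, which is part of why fragmentation is avoided in practice.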
By far the biggest impact on tunneling over DNS is whether or not you can make direct DNS queries to the server running iodine.
I did some tests a while ago and found that iodine was ~98% of non-tunneled speed when I could access the server directly, since the traffic is then wrapped in huge TXT responses and it's really efficient.
But the common case for using it is that you can only look up through a local DNS server, and then it's usually ~0.5% or so of the usual speed, i.e. 1-2KB/s at best.
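A rough capacity estimate shows why the relayed case is so slow: upstream data has to ride in the query name, which is capped at 253 characters with labels of at most 63. Assuming base32 encoding (5 bits per character, what iodine falls back to through picky resolvers) and a hypothetical short tunnel domain, each query carries only on the order of 150 raw bytes:

```python
# Back-of-envelope: raw upstream bytes per DNS query via a recursive resolver.
MAX_NAME = 253                 # maximum printable DNS name length
LABEL = 63                     # maximum label length
domain = "t.example.com"       # placeholder tunnel domain

budget = MAX_NAME - len(domain) - 1        # characters left for data labels
n_full = budget // (LABEL + 1)             # full 63-char labels (+1 for the dot)
encoded = n_full * LABEL + max(0, budget % (LABEL + 1) - 1)
raw_bytes = encoded * 5 // 8               # base32: 5 bits per character
print(encoded, raw_bytes)
```

At a handful of queries per second through a rate-limited resolver, that lines up with the 1-2KB/s figure.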
I haven't tried comparing the two; I don't have many resources. All I can say is that with icmptunnel, one couldn't tell whether it was using the tunnel or the direct internet connection. Hence ICMP tunneling was very fast.