[Udpcast] Info on UDPcast IGMP packets
alain at knaff.lu
Fri Feb 10 10:47:49 CET 2006
Manon Lessard wrote:
> I work for the telecom dept of a large university and we are currently
> working with some UDPcast users on a very strange problem.
> It appears they are unable to start a cast on a given group of machines.
Where exactly does it fail? Can receivers still register correctly with
the sender (the sender displays "New connection from 192.168.1.10 (#0)
00000009")? Are no multicast packets received at all on the receiver, or
is there only a very high loss rate?
Knowing exactly when udpcast starts failing may help pinpoint the
source of the problem.
> It so appears that the IGMP packets sent by UDPcast differ from
> Ghost on 2 points: in the header they are using ID field with a value of
> 0 (1 for Ghost) and the DF bit is set to 1 (0 for Ghost). The IGMP
> packets sent by Ghost are recognized and the session established. The
> ones sent by UDPcast are not, and the session is obviously dropped...
Unfortunately, I know of no way to prevent DF from being set on IGMP
packets. (For data packets, it's easy: just write 1 into ….)
However, as long as no data packet actually needs fragmenting, the DF
flag on the IGMP packets should not pose any problems. By default, udpcast
uses data packets of 1500 bytes (1472 bytes of payload + 28 bytes of IP
and UDP headers). On a normally configured network, this is small enough
not to need fragmenting (the MTU of Ethernet is usually 1500, so
udpcast's data packets should fit in just fine).
However, just in case your MTU is smaller, you may instruct udp-sender
to use a smaller packet size with the --blocksize parameter:
./udp-sender -f /dev/zero --blocksize 500
Could you check whether this fixes the problem?
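The arithmetic behind the packet size can be sketched as follows (a
minimal illustration; the 28-byte overhead is the 20-byte IPv4 header
plus the 8-byte UDP header):

```python
# Relationship between link MTU, IP/UDP headers, and the sender's payload.
IPV4_HEADER = 20  # bytes, without IP options
UDP_HEADER = 8    # bytes

def max_payload(mtu):
    """Largest UDP payload that fits in one unfragmented packet."""
    return mtu - IPV4_HEADER - UDP_HEADER

print(max_payload(1500))  # standard Ethernet MTU -> 1472
print(max_payload(576))   # a conservative small-MTU link -> 548
```

So on a link with a smaller MTU, any --blocksize at or below
max_payload(mtu) avoids fragmentation.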
As for the packet's IP ID, it's unlikely that this would cause any problems.
> These clients are using a version of UDPcast dating from dec. 04.
> Are you aware of anything in UDPcast or any incompatibilities with some
> Cisco IOS versions that could cause this problem?
I checked with a colleague, and he told me that Cisco equipment
needs to be specifically configured to understand IGMP correctly
(usually it uses a proprietary protocol, CGMP). The equipment
needs to be configured to understand IGMP v3 and PIM v2. The symptom of
failure in this area is that receivers can actually register with the
sender, but the transfer will not start (or, conversely, the transfer
starts, but traffic is also blasted out of those switch ports where no
participating receiver is connected; this latter failure mode, where
multicast is basically treated as broadcast, is much more common than
the case you are observing).
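For reference, the IGMP membership report that the switch snoops on is
emitted by the kernel when a receiver joins the multicast group. In
socket terms that is just the IP_ADD_MEMBERSHIP option; a minimal
sketch (not udp-receiver's actual code, and the group address is an
arbitrary example):

```python
import socket
import struct

# Joining a multicast group is what makes the kernel emit an IGMP
# membership report -- the packet that IGMP-snooping switches use to
# decide which ports should receive the traffic.  The group address
# below is arbitrary, for illustration only.
GROUP = "239.255.42.42"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", 9000))

# struct ip_mreq: 4-byte group address + 4-byte local interface address
mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                   socket.inet_aton("127.0.0.1"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
joined = True
print("joined", GROUP)
sock.close()
```

If this report never reaches the switch's snooping logic (or the switch
only speaks CGMP), the data traffic is never forwarded to that port.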
Another thing to check is "broadcast storm protection": by default,
many switches assume that multicast or broadcast traffic should only
take up a tiny fraction of the available bandwidth, treat any
high-bandwidth multicast traffic as an error (a misconfiguration of
some host somewhere), and aggressively drop it. Obviously, udpcast
*wants* all the bandwidth to be available for the transfer, so you need
to disable any "broadcast storm protection" or "multicast flood
protection" that may be set on the switch. The symptom of this failure
mode is that *some* packets still make it, so the transfer will start
and then fail (or just become very slow, depending on exactly how many
packets are dropped).
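On Cisco IOS switches this protection is usually configured per port
with the storm-control command; removing it for the ports carrying the
cast looks roughly like the fragment below (syntax from memory, and the
interface name is just an example; check the documentation for your
IOS version):

```
! On each switch port used by the cast (interface name is an example)
interface GigabitEthernet0/1
 no storm-control multicast level
 no storm-control broadcast level
```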
Then there is the TTL. Whenever a packet traverses a router, the TTL is
decremented, and when it reaches 0 the packet is dropped. By default the
TTL is 1 (i.e. the packets can only reach machines on the local network,
and are not routed). On a normal LAN this should be fine, as hubs and
ordinary switches don't decrement the TTL. However, an L3 switch acts as
a router in many respects, and may affect the TTL of traffic that passes
through it. If that is the case, you need to increase the TTL by
supplying the --ttl parameter on both sender and receiver:
./udp-sender -f /dev/zero --ttl 2
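At the socket level this is just the IP_MULTICAST_TTL option, which is
essentially what --ttl ends up setting (a minimal sketch, not udpcast's
actual code):

```python
import socket

# --ttl boils down to setting IP_MULTICAST_TTL on the sending socket;
# a value of 2 lets the packets survive one router / L3-switch hop.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
ttl = sock.getsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL)
print("multicast ttl:", ttl)
sock.close()
```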
If nothing else helps, you can try instructing udp-sender to use
broadcast rather than multicast by adding the --broadcast flag to its
command line (but in that case, traffic will be flooded over your whole
network, not just to the machines that are actually participating).
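Falling back to broadcast corresponds to enabling SO_BROADCAST on the
socket and sending to the broadcast address (again just an illustration
of the mechanism, not udp-sender's code):

```python
import socket

# --broadcast makes udp-sender use plain broadcast instead of multicast;
# at the socket level this requires the SO_BROADCAST option.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
enabled = sock.getsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST)
print("broadcast enabled:", bool(enabled))
# A real sender would now sendto() the subnet's broadcast address,
# e.g. 192.168.1.255 (address is an example).
sock.close()
```

Since broadcast frames are delivered to every host on the segment, no
IGMP or snooping configuration is involved, which is why this works as
a last resort.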