Hi all,
I was wondering if anyone knows, or could explain, what the bottleneck is for udpcast multicast speeds?
We have been using udpcast satisfactorily for a while now in combination with systemimager. In our previous setup the maximum usable speed was around 20 Mbps (MAX_BITRATE). Setting the speed any higher resulted in dropped slices during the cast, with some receivers not getting the image on the first cast. This was on a 100 Mbps network.
We now have a new setup where we image machines over a 1000 Mbps network, and the maximum speed for udpcast seems to be around 40 Mbps. If we cast any faster, the number of machines failing to receive the cast increases dramatically (though even at 40 Mbps, 3 out of 40 machines dropped out in the first tests).
This is on machines with 15,000 rpm SCSI disks. Both the disks and the network should in theory handle around 100 MByte/s, yet we can't get a stable cast at even a tenth of that speed. It's not a very big issue, since we save a lot of time by imaging all machines simultaneously, but I was still wondering what exactly the bottleneck for the multicast speed is.
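To put some rough numbers on this, here is a back-of-envelope sketch. The 1460-byte payload and the 256 KB receive buffer are my own assumptions for illustration, not udpcast's actual packet size or the kernel's actual buffer setting:

```python
# Back-of-envelope numbers for a 40 Mbps multicast stream.
# Payload size and buffer size below are assumed values, not
# anything measured from udpcast itself.

def packets_per_second(bitrate_bps, payload_bytes):
    """How many payload packets per second a given bitrate implies."""
    return bitrate_bps / (payload_bytes * 8)

PAYLOAD = 1460          # assumed bytes per UDP packet (typical Ethernet MTU payload)
RATE = 40_000_000       # 40 Mbps, our current stable ceiling

pps = packets_per_second(RATE, PAYLOAD)
gap_us = 1_000_000 / pps
print(f"{pps:.0f} packets/s, one every {gap_us:.0f} us")   # -> 3425 packets/s, one every 292 us

# If one receiver stalls (e.g. waiting on a disk write), its kernel
# socket buffer has to absorb the incoming stream in the meantime:
BUF = 256 * 1024        # assumed receive buffer size in bytes
stall_ms = BUF / (RATE / 8) * 1000
print(f"a {BUF // 1024} KB buffer overflows after ~{stall_ms:.0f} ms of stalling")  # -> ~52 ms
```

If those assumptions are anywhere near right, a single receiver that stalls for a few tens of milliseconds already overflows its buffer and loses slices, which would match the "one slow machine drops out" pattern we see, since UDP itself never retransmits.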
Is it the UDP protocol, the multicast technique, or could it still be a hardware issue?
Any opinions on the subject are appreciated; perhaps one of the udpcast authors could give some insight?
Kind regards,
--
Ramon Bastiaans
SARA - Academic Computing Services
Kruislaan 415
1098 SJ Amsterdam