I think Alain provided an excellent tip on how to deal with megabit Ethernet. However, I just want to add that in my experience the target client's disk drive is usually the bottleneck. It really depends on what you are imaging and whether compression is used.
Remember that the transfer rate shown is the packet transfer rate, not the rate at which data is actually being written to disk.
We use a 100 Mbit network to image 40 GB client notebook drives. Given that most of the drive on the master machine has been zeroed, it compresses very well, down to an image of less than 3 GB. When we send the image out to the client notebooks, the sender machine disconnects from all clients, and the clients continue writing data (mostly highly compressed zeros) to disk from their 512 MB of available RAM for the next 4 or 5 minutes, which covers about 5 or 6 GB of client disk writing.
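For concreteness, a compressed cast of the kind described above might look like the following. This is only a sketch: it assumes the stock udp-sender/udp-receiver tools (which read stdin and write stdout by default), and the device name and the 20m bitrate cap are illustrative values, not taken from the post.

```shell
# Sender: stream the master disk through gzip into udp-sender.
# Long runs of zeros compress to almost nothing, so the network
# carries far less data than the raw disk size.
# (/dev/sda and the 20m cap are example values.)
gzip -c < /dev/sda | udp-sender --max-bitrate 20m

# Each receiver: decompress on the fly and write to the local disk.
# The client keeps draining buffered data to disk after the cast ends,
# which is why the network finishes before the disks do.
udp-receiver | gunzip -c > /dev/sda
```

The pipeline also illustrates why the on-screen rate is misleading: udp-sender reports the compressed stream rate, while the receiver's disk sees the decompressed data.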
The only way I can imagine the network being a bottleneck is if:
- the network is already saturated with other traffic
- the image was not created compressed
- the master disk is nearly full
- the master disk's empty sectors were not zeroed
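On the last point, one simple way to zero the master's free space before taking the image is to fill the filesystem with a file of zeros and then delete it. A rough sketch (the mount point is illustrative, not from the post):

```shell
# Fill the master filesystem's free space with zeros, then delete the
# filler file; the freed sectors now contain zeros and will compress
# extremely well when the image is taken.
# /mnt/master is an example mount point.
dd if=/dev/zero of=/mnt/master/zerofill bs=1M || true  # dd exits when the disk is full
sync
rm /mnt/master/zerofill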
--Donald Teed
On Thu, 23 Sep 2004, Ramon Bastiaans wrote:
Hi all,
I was wondering if anyone knows or could tell what the bottleneck is for udpcast multicast speeds?
We have been using udpcast satisfactorily for a while now in combination with systemimager. In our previous situation the maximum speed seemed to be around 20 Mbps (MAX_BITRATE). Setting the speed any higher would result in dropped slices during the cast, with some receivers not getting the image on the first cast. This was on a 100 Mbps network.
We now have a new setup where we image machines over a 1000 Mbps network, and the maximum speed for udpcast seems to be 40 Mbps. If we cast any faster, the number of machines failing to successfully receive the cast increases dramatically (and in our first tests, even at 40 Mbps, 3 out of 40 machines still dropped out).
Now, this is on machines with 15,000 rpm SCSI disks. Both the disk and the network should in theory be able to sustain 100 MByte/sec, yet we can't even get a stable cast at a tenth of that speed. It's not a very big issue, since we save a lot of time by imaging all machines at the same time, but I was still wondering what exactly the bottleneck for the multicast speed is.
Is it the UDP protocol, or the multicast technique, or could it still be a hardware issue?
Any opinions on the subject are appreciated, perhaps some of the authors of udpcast could give some insight?
Kind regards,
--
Ramon Bastiaans
SARA - Academic Computing Services
Kruislaan 415
1098 SJ Amsterdam
Udpcast mailing list Udpcast@udpcast.linux.lu http://udpcast.linux.lu/mailman/listinfo/udpcast