I was studying the UDPCast source code for some experimental purposes, and I have the following question.
The whole data to be transferred is divided into slices, each of which contains a series of blocks to transfer. Can we treat the whole data to be transferred as a single slice (by setting appropriate parameters in the code), and what kind of performance issues can we expect if we make a change of this kind at the code level?
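A slice is the window over which receivers report missing blocks and the sender retransmits, so a single whole-file slice would mean tracking a retransmission map across every block at once. A rough back-of-the-envelope sketch of the scale involved (the 1456-byte block payload and the 20 GiB image size are assumptions, not values from the code):

```shell
# Each slice is a window of blocks (one block is roughly one UDP datagram
# payload). Assumed numbers: 1456-byte payload, 20 GiB image.
block_size=1456
file_size=$((20 * 1024 * 1024 * 1024))
# Ceiling division: number of blocks a single whole-file slice must cover.
blocks=$(( (file_size + block_size - 1) / block_size ))
echo "$blocks"   # ~14.7 million blocks in one retransmission window
```

Per-slice bookkeeping (bitmaps, retransmission requests) that is cheap for a few thousand blocks would have to scale to millions, so memory use and the cost of each retransmission round are the performance issues to expect.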
Thanks in advance,
The option of ipappend 1 in the default file and retransmission
of HELLOs in udp-sender (--rexmit-hello-interval) have been
valuable in getting udpcast working with the Dell notebooks
we have with Broadcom 5700-series Ethernet.
Today, with a slightly newer broadcom chipset appearing in
the most recent shipment, we noticed that udp-receiver was
intermittently not showing the "hit any button to start transfer"
message. With some trial and error I found that if I
watched the tail of /var/log/messages on the receiver (server)
and waited for the messages on TX and RX flow control to
complete before running udp-receiver, it would always initiate
a good connection. If I had started the udp-receiver
prior to the TX/RX flow control appearing in the message log
(in which case there was no "hit any key" message),
I could ^C the receiver, run it again and it would signify
the ready state with 100% success.
In conclusion, we have a workaround: start udp-receiver a few
seconds after the client PXE machine has booted and the
udp-sender status shows ready (but not yet "hit any key...").
Another solution would be if udp-receiver also supported
--rexmit-hello-interval. It isn't a flaw in the udpcast system
but a kludge for a network device that is proving itself to be
sluggish in initialization in general.
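The wait-for-the-log workaround above can be sketched as a small shell helper (the log path and the exact "Flow Control" text are assumptions about what the bcm5700 driver prints; adjust to your kernel):

```shell
# Sketch of the workaround: don't launch udp-receiver until the NIC's
# flow-control negotiation has shown up in the system log.
LOG="${LOG:-/var/log/messages}"

link_ready() {
    # The driver is assumed to log a line containing "Flow Control" once
    # TX/RX autonegotiation completes; returns success when it appears.
    grep -q 'Flow Control' "$1"
}

# Usage (commented out so the sketch is safe to source):
#   until link_ready "$LOG"; do sleep 1; done
#   udp-receiver --file /tmp/disk.img
```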
I don't know if this should be applied to the source tree, but perhaps
some people will find this patch useful (I know we do).
We wanted some additional logging from udpcast to debug our casting
sessions, because it seems that the only thing logged to its logfile is
messages like this:
"Doubling slice size to 1024"
We also prefer syslog instead of a separate file for logging,
so I wrote this little patch.
It logs more information (very useful for udpcasts running in the
background on an image server) and it logs to syslog.
Information being logged:
* New connections
* Why a cast starts (min clients reached, max wait passed, etc.)
* Transfer start (+ file, pipe, port, interface, participants)
* Transfer complete
See attached .patch
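For anyone who can't apply the patch, a lighter-weight stopgap is to mirror the same events to syslog from a wrapper script with logger(1). A sketch (the tag and message format are just examples, not what the patch itself emits):

```shell
# Mirror a cast event to syslog; also echo it for interactive callers.
log_event() {
    # logger(1) hands the line to syslog; ignore failure if syslog is absent.
    logger -t udpcast "$1" 2>/dev/null || true
    echo "udpcast: $1"
}

# Typical use around a cast:
#   log_event "transfer starting: file=/images/disk.img"
#   udp-sender --file /images/disk.img 2>&1 | logger -t udpcast
#   log_event "transfer complete"
```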
SARA - Academic Computing Services
1098 SJ Amsterdam
I'm using udpcast to transfer disk images from a server to several clients.
The server box has one network card and several virtual interfaces (eth0, eth0:0, eth0:1, ...).
When I try to send a file to one client through one of the virtual interfaces (eth0:x), everything goes well.
But when the number of clients increases (more than one), I get an error saying:
Rogue packet received <eth0 IP address>:9001 expecting <eth0:x IP address>:9001
The command line used on the server is :
udp-sender --file <filename> --interface eth0:x
On the client:
udp-receiver --file <filename>
It seems that only the eth0 interface can be used by udpcast; it works perfectly with that one.
How can I get udpcast to work with my virtual interfaces?
Sorry for my bad English!
Thank you for your help.
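One thing that may be worth trying (untested; the option names are taken from the udp-sender man page, and the multicast addresses are only examples) is to pin the rendezvous and data channels to explicit multicast groups instead of letting udp-sender derive them from the alias address:

```shell
# Untested sketch -- explicit multicast groups so the rendezvous isn't
# tied to the address udpcast picks from the interface.
udp-sender --file <filename> --interface eth0:x \
           --mcast-rdv-address 224.0.0.2 \
           --mcast-data-address 232.0.0.2

# Receivers must then point at the same rendezvous group:
udp-receiver --file <filename> --mcast-rdv-address 224.0.0.2
```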
I work for a school system and have been using UDPCast for a couple of years
now with great results. But we just got in the new Dell GX280 PCs. They have
SATA hard drives and a Broadcom gigabit Ethernet card. I cannot get the boot
disk nor the CD to use either device. Has anyone added these devices to the
/dev/ on the floppy or CD image?
(John Allison)
We'd like to use udpcast to distribute approximately 20G of data to ~100
machines on a daily basis. We currently have 72 machines, but usually
only 40-50 of them successfully transfer data.
The command lines look like:
udp-sender --max-bitrate 45m --full-duplex --min-wait 300 --pipe "tar -c -f - source_directory"
udp-receiver --pipe "tar -x -f -"
We have a cron job which monitors a directory in NFS for a
flag file, and when the flag is raised, the sender starts up, and after
a short random timeout, the receivers start up. The receivers write
messages to syslog, so I can verify that they're starting up before
min-wait expires. Lengthening min-wait doesn't appear to have an effect.
We're on switched fast/gig ethernet.
Are there any reasons why the sender wouldn't be able to "see" more of the receivers?
Thanks in advance.
National Center for Biotechnology Information (contractor)
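For the "short random timeout" before the receivers start, one option is a deterministic per-host stagger, which also makes missed rendezvous easier to reproduce (a sketch; the function name and 60-second window are made up):

```shell
# Derive a stable per-host delay (0-59 s) from the hostname so the ~72
# receivers don't all hit the sender's rendezvous port in the same instant.
stagger() {
    # cksum maps the string to a deterministic number; take it modulo 60.
    echo "$1" | cksum | awk '{ print $1 % 60 }'
}

# Usage on each receiver, e.g. in the cron job:
#   sleep "$(stagger "$(hostname)")"
#   udp-receiver --pipe "tar -x -f -"
```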