It seems that with
echo 513920 > /proc/sys/net/core/rmem_default
echo 513920 > /proc/sys/net/core/rmem_max
and under our LAN conditions
Timeout notAnswered= notReady= nrAns=1 nrRead=1 nrPart=2 avg=707
bytes= 21 178 134K re-xmits=0000000 ( 0.0%) slice=0112 - 0
we recover the previous behaviour.
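For reference, the same buffer sizes written to /proc above can be made persistent across reboots with a sysctl configuration fragment; a minimal sketch (the file name is hypothetical):

```
# /etc/sysctl.d/90-udpcast.conf  (hypothetical file name)
# Raise the default and maximum socket receive buffers to ~500 KiB so
# udp-receiver can absorb multicast bursts without dropping datagrams.
net.core.rmem_default = 513920
net.core.rmem_max = 513920
```

Apply it with `sysctl -p /etc/sysctl.d/90-udpcast.conf` (as root) without rebooting.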
2012/1/20 Lluís Gras <lluisgg(a)joseptous.info>
--nosync do the trick
But there are still a lot of re-xmits, e.g.
root@debian:~# udp-sender --file /dev/sda6 --interface eth0 -P 9000
bytes= 24 038 355K re-xmits=2071665 ( 12.2%) slice=0112 - 0
Timeout notAnswered= notReady= nrAns=0 nrRead=0 nrPart=1 avg=743
Timeout notAnswered= notReady= nrAns=0 nrRead=0 nrPart=1 avg=923
bytes= 24 143 004K re-xmits=2080485 ( 12.2%) slice=0112 - 0
Disconnecting #0 (192.168.10.21)
root@debian:~# udp-receiver --file /dev/sda6 --nosync --interface eth0 -P
UDP receiver for /dev/sda6 at 192.168.10.21 on eth0
received message, cap=00000009
Connected as #0 to 192.168.10.16
Listening to multicast on 18.104.22.168
Press any key to start receiving data!
bytes= 24 143 004K (439.79 Mbps)
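To compare runs like the two above, the re-xmit percentage can be pulled out of a saved udp-sender status line with a small helper; a sketch (the status-line format is taken from the logs above, the script itself is hypothetical):

```shell
#!/bin/sh
# Extract the re-xmit percentage from a udp-sender status line, e.g.:
#   bytes= 24 143 004K re-xmits=2080485 ( 12.2%) slice=0112 - 0
line='bytes= 24 143 004K re-xmits=2080485 ( 12.2%) slice=0112 - 0'
# sed keeps only the number inside the "( ...%)" group.
pct=$(printf '%s\n' "$line" | sed -n 's/.*( *\([0-9.]*\)%).*/\1/p')
echo "re-xmit rate: ${pct}%"
# prints: re-xmit rate: 12.2%
```

Run against the old (20100130) and new (20110710) versions, this makes the regression easy to quantify.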
2012/1/19 Raul Sanchez <raul(a)um.es>
Have you tried the "--nosync" option?
I had a similar problem and solved it that way. I think that udpcast
has changed its default settings in some cases.
Lluís Gras <lluisgg(a)joseptous.info> wrote:
Sorry for my English.
I have been using udpcast (via PXE or a custom Debian Live) over the past two
years (version 20100130) and I recently upgraded to the latest version
(20110710). The old version achieved a transfer rate of about 480 Mbps with
few retransmissions (approximately 10 re-xmits); the new one, with the same
machines involved, does not exceed 43 Mbps. I tried disabling IPv6
(ipv6.disable=1) but it made no difference.
If I use udp-sender (20110710) with udp-receiver (20100130), the
480 Mbps rate is recovered.
How can I find out why the number of re-xmits has increased so significantly?
Thanks in advance