Hi all,
I'm creating my own boot disk to do a re-image with udpcast. I'm a little new at this, but I have been using Linux as my desktop OS for about a year and a half now, and I've done a decent amount of coding, so I'm no stranger to that either.
I've got a boot disk based on Slackware 10.2. It loads a ramdisk, mounts the target hard disk on /mnt/target, and then uses udp-receiver to receive a tarball into that directory, so in theory all the data should go to the hard disk. Everything is fine until udp-receiver is started. Once it starts, the Inactive memory figure rises steadily until it reaches a value about 2 MB below the memory limit, at which point the amount of free memory begins oscillating. The inactive memory stays at that level until the bash script that started udpcast exits.
The command line used to start udp-receiver is
udp-receiver --nokbd 2>/udpcast-log | pv -pets <file size> > /mnt/target/${pkgfn}
$pkgfn is the filename of the tarball, and <file size> is the expected transfer size passed to pv's -s option so it can show progress.
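To show what I mean about the Inactive figure, a rough loop like the one below on a second console while the transfer runs is enough to watch it climb (the one-second interval and the log path are just what I picked for illustration):

# sample the Inactive line from /proc/meminfo once a second
# (the interval and the log file name are just illustrative)
while true; do
  grep '^Inactive:' /proc/meminfo >> /meminfo-log
  sleep 1
done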
Normally this is not an issue, but sometimes memory maxes out and the kernel kills either udp-receiver or some other crucial process. Whoops. The boot disk works fine on computers with 128 MB of memory, but not on 64 MB ones.
Anybody ever have this happen before? Any ideas??
Thanks very much for any help/guidance, Andrew
There was indeed a memory issue in udpcast, which is now fixed in version 20060525. However, oddly enough, it was in the sender, not the receiver, so I'm not quite sure whether it is indeed the same problem.
During my tests, memory usage stays constant (around 15 megs) even during long transfers. And even with the bug, the excessive amount of memory was allocated from the start, rather than accumulating.
Could you give it a try anyways, just in case?
Regards,
Alain
Yes, I certainly can. However, this will take some time. The sender is running on a big-endian ARM processor (the Linksys NSLU2), so I will have to compile the new version for it. The receivers are still running on normal x86 computers. In the meantime, I am going to get another x86 computer running some version of Linux, install the latest version of udpcast on it, and help test with that.
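For the ARM build itself, my rough plan is something along these lines; I'm assuming udpcast's usual configure script works for cross-compiling and that a big-endian ARM cross toolchain with a prefix like armeb-linux- is on the PATH (the prefix is just a guess on my part):

# cross-compile sketch; the armeb-linux- toolchain prefix is a guess
CC=armeb-linux-gcc ./configure --host=armeb-linux
make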
Regardless of that, here are my thoughts. I'm sorry, I realize I wasn't terribly specific in my last message.
The bug you're describing doesn't sound like the one I'm experiencing at all... the problem appears on the clients running udp-receiver, and I'm not even sure it has much to do with udpcast rather than with the way I'm building my image. Basically, the udp-receiver processes take around 6500 K of memory if my memory serves (haha, okay, never mind), and that stays constant. What does change is the Inactive line in /proc/meminfo, which steadily increases and then gets flushed. If I continually run a sync every second, it seems to help some, but all that command appears to do is flush a bit more memory down to wherever it's supposed to go. Possibly this has to do with some kind of delayed write in Linux? The amount of inactive memory returns to normal levels after the bash process that runs udp-receiver terminates.
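To be concrete, the "sync every second" workaround is just something like the following wrapped around the transfer (the one-second interval is arbitrary):

# crude workaround: keep forcing writeback while the transfer runs
while true; do sync; sleep 1; done &
syncpid=$!
# ... the udp-receiver | pv pipeline from above runs here ...
kill $syncpid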
I'll let you know how udp-sender runs on the x86 machine... any thoughts on this receiving issue?
Thanks very much for your help,
Andrew