100% CPU load on transfer seems related to pending Q size

maurert
Posts: 59
Joined: Tue Oct 10, 2006 12:18 pm

100% CPU load on transfer seems related to pending Q size

Post by maurert »

As the pending queue size increases, the CPU time seems to increase to the point that it reaches 100%. Modifying (selecting and deleting) items in the queue can take a LONG time. Minutes even.

I suspect that the pending queue is maintained as a sorted list in memory, and deleting an item from the top or middle means shifting the entire structure forward.
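If that guess is right, the slowdown falls out of how contiguous arrays behave. Here's a rough sketch of what I mean (this is not Core FTP's actual code, and the names are made up):

[code]
// Not Core FTP's code -- just an illustration of the guess above. If the
// pending queue is one contiguous sorted array, erasing an entry near the
// front shifts every later entry down one slot.
#include <vector>
#include <string>

struct QueueItem {
    std::string path;   // file waiting to be transferred (made-up structure)
};

// Removing the first n items one at a time costs roughly n * queue.size()
// element moves. With tens of thousands of entries that's billions of moves,
// which would explain deletions taking minutes and pegging the CPU.
void remove_front_items(std::vector<QueueItem>& queue, std::size_t n) {
    for (std::size_t i = 0; i < n && !queue.empty(); ++i) {
        queue.erase(queue.begin());   // O(queue.size()) shift per erase
    }
}
[/code]

A structure that doesn't have to stay contiguous (a deque or linked list, say) wouldn't blow up like this, which is why the behaviour points at a sorted in-memory array.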
mike
Posts: 47
Joined: Fri Jul 28, 2006 4:36 pm

Post by mike »

How many items are in the queue when you see this happen?
maurert
Posts: 59
Joined: Tue Oct 10, 2006 12:18 pm

how many?

Post by maurert »

Thousands. In one case 90K items.
mike
Posts: 47
Joined: Fri Jul 28, 2006 4:36 pm

Post by mike »

I'll try it and see what happens. Do you see the 100% while the queue is processing transfers or when it's not?

And how much memory are you working with?
maurert
Posts: 59
Joined: Tue Oct 10, 2006 12:18 pm

More data

Post by maurert »

I'm seeing high CPU utilization while it is processing when the queue is LONG, but I don't have a good feel for what LONG means. I was seeing heavy CPU utilization with a queue length of 7000+ during normal processing, but at 1500 the CPU is less than 10%. I'm dealing with a 2.4 GHz CPU and 1GB of main memory.

BTW, the COREFTP process had 250MB+ of physical memory allocated.
maurert
Posts: 59
Joined: Tue Oct 10, 2006 12:18 pm

Going to command line helps

Post by maurert »

It seems that the command line mode is the only one that doesn't try to build and maintain a "pending" queue of files, so switching to command line mode has solved my personal problem.

It also helps because I'm trying to back up a Windows user directory. With the GUI it's tempting to simply drag the top level of the user from c:/documents and settings/ to the target. That's quick and dirty, but it captures such large directories as the internet caches for IE and Firefox, and the temp directories. That's where the tens of thousands of files are. Those files are also only caches, so there's no need to back them up. Worse, those files could be replaced with other files the very next day, and since CoreFTP doesn't have a mode to delete files on the target that are gone from the source, the directory size would just grow and grow.

In command line mode it's relatively easy to copy only the real data directories (14 in my case) to the remote location; the temp and cache directories are avoided. Yet the directory structure on the remote is such that one GUI drag would restore it all back to the right place.
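For anyone trying the same thing, the idea is just to enumerate the profile and skip the throwaway directories before queueing anything. The sketch below isn't Core FTP specific at all; it's ordinary C++17 std::filesystem code, and the directory names in the skip list are only examples, not a complete list.

[code]
// Illustration only (not Core FTP's command line): walk a user profile and
// keep just the real data directories, skipping caches and temp.
#include <filesystem>
#include <iostream>
#include <set>
#include <string>

namespace fs = std::filesystem;

int main() {
    const fs::path profile = "C:/Documents and Settings/someuser";  // example path

    // Regenerable junk -- no point backing these up.
    const std::set<std::string> skip = {
        "Local Settings", "Temp", "Temporary Internet Files", "Cache"
    };

    for (const auto& entry : fs::directory_iterator(profile)) {
        if (!entry.is_directory()) continue;
        if (skip.count(entry.path().filename().string())) continue;
        // Each surviving directory is what would actually get queued for
        // transfer; here we just list them.
        std::cout << entry.path() << '\n';
    }
    return 0;
}
[/code]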