
Hi All,

I need your help/suggestions with the following problem.

I have a shell script which performs the following operations and finally reports the time taken by each (a stripped-down sketch follows the list):

a. Creation of directories and files within them, recursively (say /DIR1).
b. Recursive file listing (ls -R).
c. Copying (using cp -r) the created directory structure (/DIR1) to another location (e.g. /LOCATION_1).
d. Copying again (using cp -r) from /LOCATION_1 to /LOCATION_2.
e. Deleting the whole directory structure from /DIR1, /LOCATION_1 and /LOCATION_2.
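
Stripped down, the script looks roughly like this (the tree size and file sizes here are placeholders, not the exact values my script uses):

    #!/bin/bash
    BASE=/DIR1
    LOC1=/LOCATION_1
    LOC2=/LOCATION_2

    # a. create directories and files within them recursively
    for d in $(seq 1 10); do
        mkdir -p "$BASE/dir$d/subdir"
        for f in $(seq 1 100); do
            dd if=/dev/zero of="$BASE/dir$d/subdir/file$f" bs=4k count=1 2>/dev/null
        done
    done

    # b. recursive listing
    ls -R "$BASE" > /dev/null

    # c. copy the tree to the first location
    cp -r "$BASE" "$LOC1"

    # d. copy again from the first location to the second
    cp -r "$LOC1" "$LOC2"

    # e. delete all three trees
    rm -rf "$BASE" "$LOC1" "$LOC2"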

Before performing any of the operations above, the script runs sync and echo 3 > /proc/sys/vm/drop_caches to drop the caches.
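
That is, before each step:

    # flush dirty pages to disk, then drop the page cache,
    # dentries and inodes (requires root)
    sync
    echo 3 > /proc/sys/vm/drop_caches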

For each operation above, the script reports the time taken to complete it (using the time command).
If I run the same script 10 times, the time taken for each operation is reported in each run.
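
Each step is wrapped in time, roughly like this (step c shown; copy_test.sh is just a placeholder name for my script):

    # one step, timed
    time cp -r /DIR1 /LOCATION_1

    # the whole script is then run 10 times in a row
    for i in $(seq 1 10); do ./copy_test.sh; done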

The problem is that, for steps c and d, the time taken to copy the same-sized directory structure (from /DIR1 to /LOCATION_1 versus from /LOCATION_1 to /LOCATION_2) differs greatly in 2-3 runs out of 10.

No other processes are running while the script runs, and I am not sure what causes this variation. Can you suggest why this might be happening?
Also, it was suggested that I monitor system memory usage in the background while the script is running. If that could be the reason behind the problem, please suggest a tool that can be used to plot system memory usage.
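
For example, would something along these lines be a reasonable way to capture and plot it (copy_test.sh again being the placeholder name for my script)?

    # sample used memory (in MiB) once per second while the script runs;
    # stdbuf -oL keeps free's output line-buffered through the pipe
    stdbuf -oL free -m -s 1 | awk '/^Mem:/ {print ++t, $3; fflush()}' > mem.log &
    LOGGER=$!
    ./copy_test.sh
    kill $LOGGER

    # plot used memory over time with gnuplot
    gnuplot -e "set terminal png; set output 'mem.png'; plot 'mem.log' using 1:2 with lines title 'used MiB'"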