eggi 92 Posting Whiz

Hey there,

If you're doing this for C, you can just use quotes instead of the angle brackets, like

#include "/home/yourdir/myInclude/header.sh"

If you're using a shell and your includes are functions, you can put them in your .profile, as advised above, or just create your header.sh file and source it at the head of all of your scripts, like:

#!/bin/bash

. /home/yourdir/myIncludes/header.sh

code
code
code

and you'll be all set :)
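To make the sourcing idea concrete, here's a minimal self-contained sketch (the greet function and the /tmp path are made up for illustration):

```shell
#!/bin/sh
# Create a hypothetical header.sh holding a shared function
cat > /tmp/header.sh <<'EOF'
greet() {
    # shared function available to any script that sources this file
    echo "Hello, $1"
}
EOF

# "." (dot) sources the file into the current shell, making greet available
. /tmp/header.sh
greet "world"
```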

, Mike

eggi 92 Posting Whiz

Hey Chris,

It looks like just a simple typo in Aia's reply. Just change $8.2f to %8.2f and you should get "1.20"
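For reference, here's what the corrected specifier produces (shown with the shell's printf, which uses the same format syntax as C's here):

```shell
# %8.2f: right-justify in a field 8 characters wide, 2 decimal places,
# so 1.2 comes out as "1.20" with four leading spaces of padding
printf "%8.2f\n" 1.2
```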

, Mike

eggi 92 Posting Whiz

Hey there,

You could use the output of netstat and pipe that to a line count. Something like:

netstat -an |grep "[2]3"|wc -l

although that might give you some false positives. Also, if you want to capture the count in a variable, echo it afterwards, since wc pads the count with leading spaces.
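The whitespace issue can be seen without netstat; this sketch uses sample text in place of the real netstat output:

```shell
#!/bin/sh
# wc -l pads its count with leading spaces on some systems,
# which matters if you compare the variable as a string
count=$(printf 'line1\nline2\nline3\n' | wc -l)

# Re-echoing the unquoted variable lets the shell strip the whitespace
count=$(echo $count)
echo "$count"
```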

Best wishes,

Mike

eggi 92 Posting Whiz

Cool,

Good job :) I was going to suggest this for your if statement, but I guess I should have read farther down ;)

if [ "$xceedPROC" = "exceed" -o "$xceedCPU" = "exceed" ]

Either way's good :)

Best wishes,

Mike

eggi 92 Posting Whiz

Hey Richard,

For both of those, you can capture the output of your test commands using backticks (which work in any shell) or the $() syntax in bash.

For instance, you could take your statement:

ps -ef | awk '/Cpu/ {if ($2 > 50 ) print "exceed"}'

and assign it to a variable

xceeded=`ps -ef | awk '/Cpu/ {if ($2 > 50 ) print "exceed"}'`

or

xceeded=$(ps -ef | awk '/Cpu/ {if ($2 > 50 ) print "exceed"}')

and then test that variable, for example:

if [ "$xceeded" = "exceed" ]
then
    # do whatever you have to do here
fi
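Here's the same pattern as a self-contained sketch; the fake_ps function and its field values are invented stand-ins for the real ps -ef output, just so the snippet runs anywhere:

```shell
#!/bin/sh
# Stand-in for `ps -ef` output; the numbers are made up for the demo
fake_ps() {
    printf 'root  60 1 0 Cpu\nuser  10 2 0 sh\n'
}

# $2 is the "CPU" column in this fake output
xceeded=$(fake_ps | awk '/Cpu/ {if ($2 > 50) print "exceed"}')

# Quote the variable and use a single = for portable string comparison
if [ "$xceeded" = "exceed" ]
then
    echo "threshold exceeded"
fi
```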

Let me know if that's enough of a nudge in the right direction. You're almost there :)

Best wishes,

Mike

eggi 92 Posting Whiz

Hey,

Glad I could help. In the process, you taught me a cool trick. I never knew you could do that... all those years.... exiting over and over again just to get out of a terminal session. It figures ;)

Thanks, Lehe :)

, Mike

eggi 92 Posting Whiz

Hey there,

Sometimes it will work if you just replace your exec call with a simple direct call, so instead of:

exec $SHELL

just do

$SHELL

<-- Since you're not exec'ing (i.e. replacing the current process with your new process), you shouldn't get the session timeout error, since you'll just be running your new bash on top of the old one. The downside is that you have to exit the shell twice to log out.

And, since it works for your ssh, you can ssh in and check the status of the $- variable. Note what that is, then change your profile to simply run the new shell rather than exec it, and check the value of the variable again. This will show you what options are automatically set when you log in with bash via ssh versus a direct login. If they're different, you can add a simple if-conditional in your profile to determine whether to exec your new shell or just run it straight-up and suffer with the double exit ;)

Just FYI, you can expect to see this kind of output when you query the $- built-in:

echo $-
himBH

the output should be different for both connections.
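A sketch of that if-conditional: it keys off the "i" flag in $-, which appears only in interactive shells, so run from a script it reports non-interactive:

```shell
#!/bin/sh
# $- holds the current shell's option flags; "i" is present only
# in interactive sessions, so a profile can branch on it
case $- in
    *i*) echo "interactive" ;;
    *)   echo "non-interactive" ;;
esac
```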

Best wishes,

Mike

eggi 92 Posting Whiz

Hey there,

This should work:

mkdir `date +%Y%m%d`
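If you prefer the $() form, this equivalent sketch also adds -p so a rerun on the same day doesn't error (the /tmp/demo_ prefix is just for illustration):

```shell
#!/bin/sh
# Build a YYYYMMDD-named directory; -p makes the command idempotent
dir="/tmp/demo_$(date +%Y%m%d)"
mkdir -p "$dir"
ls -d "$dir"
```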

Best wishes,

Mike

eggi 92 Posting Whiz

Hey There,

Also, at the end of your code if you change:

echo "${name}:${phone}:${address}" >> addressbook.txt

to

echo "${name}:${phone}:${address}" > addressbook.txt

it should truncate and rewrite the file instead of appending to it.
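The difference between the two redirections in one runnable sketch (a throwaway file under /tmp stands in for addressbook.txt):

```shell
#!/bin/sh
# ">>" appends; ">" truncates the file first, so only the last write survives
echo "old entry" >> /tmp/addressbook_demo.txt
echo "new entry" >  /tmp/addressbook_demo.txt
cat /tmp/addressbook_demo.txt
```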

Hope that helps :)

, Mike

eggi 92 Posting Whiz

Good deal :)

Glad to help!

, Mike

eggi 92 Posting Whiz

Hey There,

You're probably running into an issue with -p if you're just using a plaintext password. It expects the password to already be encrypted with crypt (apologies if my assumption is incorrect).

One thing I was thinking was that it might be easier to create the script to add the 100 users, but have two lines per user (or single lines with the commands separated by semicolons), with one doing the standard useradd and the other using "passwd" with the "--stdin" option, which you can automate with a pipe:

useradd allYourOptionsExceptPassword user1
echo Myp@ssworD|passwd --stdin user1
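As a dry run, this sketch only prints the command pairs instead of executing them (the user names and options are placeholders, and passwd --stdin itself isn't available on every system):

```shell
#!/bin/sh
# Print the useradd/passwd pair for users 1..3; remove the echo
# wrappers to run for real (as root, on a passwd that has --stdin)
i=1
while [ $i -le 3 ]
do
    echo "useradd -m user$i"
    echo "echo 'Myp@ssworD' | passwd --stdin user$i"
    i=$((i + 1))
done
```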

Some food for thought anyway. Hope it helps!

Best wishes,

Mike

eggi 92 Posting Whiz

Hey there,

That can be done. Try this first, though, since I find that "human readable" output is usually off: I would grab the information in KB and convert it to MB. That will probably be more accurate than the output you generally get.

Which leads me to ask: do you require a great degree of specificity in your output, or are you looking for broad strokes (like 1.2MB being fine even if it's technically 1.2475MB)?

Thanks,

Mike

eggi 92 Posting Whiz

Hey There,

You could use the output of format, although if you wanted to automate it, you'd have to pipe the right commands to it and then do some processing on the other end, e.g., for disk 0:

(echo 0;echo p;echo p)|format

The partition print shows size in MB - You really just need slice 2, since that represents the entire disk.

Also, I believe:

iostat -En Diskname

or

iostat -En

<-- For everything and then parse it with sed/awk/what-have-you

may give you this info, but I don't have a Sun machine in front of me to verify.

Best wishes,

Mike

eggi 92 Posting Whiz

Glad to have been part of the solution for once ;)

Best wishes,

Mike

eggi 92 Posting Whiz

Hey there,

Just in case you have to do it the second way, this form works in bash 3.x (possibly earlier versions) and avoids the variable scoping problem while maintaining the integrity of the command you were feeding to the pipe.

# assumes $pattern was set earlier in the script
total=0
while read j1 j2 j3 j4 fsize j5 j6 j7 fname
do
	[ ! -f "$fname" ] && continue

	if grep "$pattern" "$fname" > /dev/null 2>&1
	then
		total=`expr $total + $fsize`
		echo "size is: $total"
	fi
done <<< "`ls -l`"
echo "Total size of files containing $pattern is: $total"

hope it helps :)

, Mike

eggi 92 Posting Whiz

Hey, ok, I see where you're coming from now :)

Actually, if you don't know where a particular process logs to, figuring that out from the process itself can be done, but it's not necessarily simple (although, hey, sometimes it is).

I would suggest that you do the log search prior to killing the PIDs associated with the process. Probably the best tools to use (although they may be a bit bulky) would be "lsof" or "truss" ("strace", "xtrace", etc. are all pretty much the same - it depends on your distro of Linux or Unix).

In lsof you could do a simple

lsof|grep PID

and then sift through that output to find any open files associated with the process (eyeball it first and then script the grep out)

for truss, strace, xtrace, etc., try doing the following (I'll use truss for the example, but check the man page for whatever statement-tracing or execution-tracing software your distro comes with):

truss -f -p PID|egrep -i 'open|read|write|close'

and check out what files it opens and writes to and then script that out, after watching it manually.

Hopefully that helps. It might be difficult to get it, but it can be done. And, like I said, it might be really simple.

Oh, yes, one last thing - if you know the location of the program, you can run "strings" against it and probably find out where it logs to from that output:

strings /full/path/to/dyyno

Best wishes,

Mike

eggi 92 Posting Whiz

That's an excellent suggestion, comatose. I also didn't realize what a huge pain this whole thing was for you, glamiss.

I think leaving the nice value at 0 (default) will make your program hog cpu less, but I wouldn't recommend rewriting shell script in tcl, since shell script is, essentially, just using the shell instead of having to add another layer of application on top.

As I mentioned (and I agree with comatose), I would be looking at a solution that breaks the problem down. It might be easier to pick out when it's not working properly (if it's a proper program, it may even complain) than to try to find it amidst a ps-sea of random procs.

Best of luck to you!

, Mike

eggi 92 Posting Whiz

Hey there,

I'm not sure what you're shooting for, but maybe:

LOG=${PROCS}.log

echo "0" > $LOG

or

echo "0" >>$LOG

Other than that, I'll need more specifics.

Best wishes,

Mike

eggi 92 Posting Whiz

Hey there,

If you're running syslog-ng, look into that. It has a history of doing exactly what you're experiencing.

Otherwise, since you know this action is recurring (something is chmod'ing your /dev/null), you could just run this in cron with one line and just "assume" the permissions have been changed:

59 * * * * /bin/chmod 666 /dev/null >/dev/null 2>&1

and run a separate smaller shell script to just run

ls -ld /dev/null
lsof|grep /dev/null
etc...

or whatever info you want to grab every 5 minutes or so for a day and then go over that and see if you can find some likely suspects. lsof should show you what processes/users are tapping /dev/null constantly.

Another crazy thing you could do - if you don't think it'll get you in trouble - would be to lock down /dev/null and, hopefully, make the process that's goofing with it go nuts ;)

If my suggestion sounds glib, I apologize. I'm just thinking you could figure this out using an alternate method and stick with the no-pain quick-fix until you do. Since the script doesn't attempt to find what program changed the perms, you don't need to make it so complicated.

Best wishes,

Mike

eggi 92 Posting Whiz

Hey There,

Two things, I think:

1. You're not stripping the literal dollar sign from the figures you're trying to add

2. Since you're invoking awk twice, you need to include your BEGIN def's twice

If you change the last line of your function to:

sed 's/\$//' $dataFile|awk 'BEGIN{FS=","; RS="\n"} {total += $4}END{print total}'

it should work (although it's a simplistic sed expression that assumes only one $ per line), but you could save yourself some headache by just combining the two awk statements.
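For what it's worth, a combined single-awk version might look like this (the two sample lines stand in for $dataFile, and the figures are made up):

```shell
#!/bin/sh
# One awk call: gsub strips the dollar sign from field 4, then the
# total accumulates; the BEGIN block only has to appear once
printf 'a,b,c,$1.50\nd,e,f,$2.25\n' |
awk 'BEGIN{FS=","} {gsub(/\$/, "", $4); total += $4} END{print total}'
```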

Best wishes,

Mike

eggi 92 Posting Whiz

Hey There,

It might have to do with the user-agent string that wget passes to the site (I think it's something like Wget/version). You can set the --user-agent= option to pass anything; just be careful that you don't use Mozilla, as they're litigation-happy (there are more details on the options page of wget's site regarding getting sued by them).

That's just one possibility.

Hope it helps :)

, Mike

eggi 92 Posting Whiz

Hey again,

I sent you another PM - I think we have it this time (there's only, I think, one other option ;)

, Mike

eggi 92 Posting Whiz

Hey again,

Yes, I did and (thank you so much for the find output) I realized that I have been misunderstanding you this entire time. I wrote you back via PM so as not to take up too much space, with one final question regarding what you need to find with the find statement.

Needless to say, what you wanted makes complete sense now (that is, I was misunderstanding at what level you wanted to create the zips, probably because I assumed the events subdirectories started with the literal "events" and not the names of events ;)

Thanks for the info. We're almost home :)

, Mike

eggi 92 Posting Whiz

Hey there,

Very basically, it's an integer used to identify an open file within a process. In Unix/Linux/POSIX, 0, 1 and 2 are generally reserved for STDIN (standard input), STDOUT (standard output) and STDERR (standard error), in that order.

The integer (or file descriptor) is required as an argument to read, write and close operations. The integer itself is created by an open operation.
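You can watch a descriptor in action from the shell: exec 3> opens fd 3 on a file, writes go through >&3, and exec 3>&- closes it (the /tmp path is just for the demo):

```shell
#!/bin/sh
# Open file descriptor 3 for writing on a demo file
exec 3> /tmp/fd_demo.txt

# Redirect echo's output through fd 3 instead of fd 1 (stdout)
echo "written via fd 3" >&3

# Close fd 3, then show what landed in the file
exec 3>&-
cat /tmp/fd_demo.txt
```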

Best wishes,

Mike

eggi 92 Posting Whiz

Hey There,

Another way to go about it would be to do the math incrementally, within the loop, to avoid any possibility of losing the variable's value when you exit the loop due to scope issues. You can use $c as your counter variable as well, so you don't run your function until all 3 answers have been received.

For instance:

GRD=0
for (( c=1; c<=3; c++ ))
do
    read -p "Please insert Grade #$c:" GRD1
    let GRD=$GRD+$GRD1
    if [ $c -eq 3 ]
    then
        LetterGrade $GRD
        echo
    fi
done
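Here's a runnable version of the loop; the here-document supplies the three grades so read works non-interactively, and the LetterGrade stub is hypothetical (yours would map the total to a letter):

```shell
#!/bin/sh
# Stub standing in for the real LetterGrade function
LetterGrade() {
    echo "total: $1"
}

GRD=0
c=1
while [ $c -le 3 ]
do
    # reads one grade per line from the here-document below
    read GRD1
    GRD=$((GRD + GRD1))
    if [ $c -eq 3 ]
    then
        LetterGrade $GRD
    fi
    c=$((c + 1))
done <<EOF
80
90
70
EOF
```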

Best wishes,

Mike

eggi 92 Posting Whiz

Hey There,

Okay - good :) As long as all you got back were directories and they were all named events1, events2, etc., that's fine. If possible, can you PM me the output? If you're getting back any results you wouldn't expect, and/or that output has spaces or single/double quotes, etc., that could be the issue.

I'd love to take a look at that output, since I can't replicate the issue on my computer.

Feel free to cut and paste and send me a PM. Since that's where you dead-end, I need to take a look at it to determine what step to take next.

If any of it's confidential, you can also just replace alphabetical characters with different alphabetical characters, etc. It's very important that the structure of the results remains intact. For instance, a file name like "hi there" would be an issue, but I'd never notice the space if it was sent as "********" :)

Best wishes,

Mike

eggi 92 Posting Whiz

Hey Again :)

Actually, I was just asking if you could run either of those commands from the command line (outside of the script) and see the output from them :)

find /the/actual/path/to/your/dirs/ -type d -name "events[0-9]*"

all by itself should be good enough.

For example:

# find /var/tmp/test
/var/tmp/test
/var/tmp/test/a
/var/tmp/test/a/a.file
/var/tmp/test/b
/var/tmp/test/b/b.file

If we can "not" run the rest of the script, we'll be able to see if the "find" command is returning any values or not. I'm assuming that it's not, but just want to be sure and can't try it on your machine :)

eggi 92 Posting Whiz

Hey,

Don't worry. You don't sound dumb. If you were dumb, you wouldn't ask questions ;)

Actually, if you could do the find command with your full path and then the same with $EVENTDIR, that would be a good double test, even though you "should" get the same results. You could do

find $EVENTDIR -type d -name "events[0-9]*"

assuming $EVENTDIR has a value and

find /the/actual/path/to/your/dirs/ -type d -name "events[0-9]*"

The output from either of those should probably lead us to the root of the problem.

Best wishes, it's almost fixed :)

, Mike

eggi 92 Posting Whiz

Also, as Salem pointed out, don't forget to use the preceding $ character when you're extracting values from your variables.

Best wishes,

Mike

eggi 92 Posting Whiz

Hey there,

Well, that's good news of a sort, since it brings us closer to an answer. Since that gives you the same result from the CLI, I would take a look at this find statement:

find /domains/*/*/*************.com/public_html/storage/events/ -type d -name 'events[0-9]*'

on its own. It's probably not returning anything. I would try it in order like:

find /domains/*/*/*************.com/public_html/storage/events/

if this returns nothing, then the initial directory isn't correct

find /domains/*/*/*************.com/public_html/storage/events/ -type d

if this returns nothing, then the initial directory doesn't have any directories in it (although it should, because one of those "directories" would be the starting directory itself)

find /domains/*/*/*************.com/public_html/storage/events/ -type d -name "events[0-9]*"

if this returns nothing, then we know the problem is with the pattern matching glob (literal events followed by any number of digits - events1, events2, events54, etc)
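To sanity-check the glob itself, this sketch builds a throwaway tree and runs the same -name pattern against it (the /tmp paths are made up for the demo):

```shell
#!/bin/sh
# Two directories that should match the pattern, one that shouldn't
mkdir -p /tmp/findtest/events1 /tmp/findtest/events23 /tmp/findtest/other

# "events[0-9]*" = literal "events", one digit, then anything
find /tmp/findtest -type d -name "events[0-9]*" | sort
```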

Let me know how that goes and post the output if you can. Your Ubuntu should be all right. I'm on 8.04 Hardy now, but I don't recall ever noticing that the basic find command's functionality changed all that much (if at all) between distros.

Take it easy :)

, Mike