JeoSaurus 32 Posting Whiz in Training

You could also just call the interpreter explicitly. For instance, if we know that "$1" is a shell script:

/bin/sh $1

or

/bin/bash $1

I hope rohan1111 gets back to us about what the problem was. If the script really is essentially (as others have guessed) just:

chmod 744 $1
$1

...then I think you are all on the right track.

JeoSaurus 32 Posting Whiz in Training

If you DO mean 'pylunch', you might want to start with their official documentation:
http://pylunch.googlecode.com/files/PyLunch-0.2.pdf

I hope this helps!

JeoSaurus 32 Posting Whiz in Training

Oh! vzfs! Is this an OpenVZ or Virtuozzo virtual machine? If so, you may have a quota issue. Check out this thread: http://forum.openvz.org/index.php?t=msg&goto=35897&

The OP in that thread had a HUGE discrepancy between du and df output. The thread links to an article in the OpenVZ wiki which is pretty technical, but I think that's where the answer lies. If you are the administrator for the hardware node, you might want to check on the quota for the container. If not, you might want to check with your host and see if there's anything they can do to fix it.
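
To see how far apart the two views really are inside the container, something like this (run as root) is a quick check:

df -h /
du -xsh / 2>/dev/null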

I hope this helps!
-G

JeoSaurus 32 Posting Whiz in Training

SPeed_FANat1c,

That's interesting! I'm stumped... Can we see the full 'df' output?

JeoSaurus 32 Posting Whiz in Training

rubberman,

That's one of the first things I thought about (I used to work for a company that writes backup software, and this came up a lot), so I researched it to refresh my memory. The ext filesystems (ext2, ext3, ext4) reserve about 5% of the blocks for the root user by default, which is nowhere near the 20-ish percent that our OP is seeing.

This does come up a lot on larger filesystems. Since the default is 5%, you lose about 50GB on a 1TB partition, or 400GB on an 8TB partition. But 5% of 20GB is only 1GB. You usually notice the difference in the space remaining in your 'df' output, but 'du' output should be correct. I suspect that what's being reported is actually correct, but we aren't seeing all the numbers.
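
If you want to double-check the reserved percentage on a given filesystem, tune2fs will show it (the device name below is just an example):

sudo tune2fs -l /dev/sda1 | grep -i 'reserved block count'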

Thanks!
-G

JeoSaurus 32 Posting Whiz in Training

Hidden files should be included in du output. I'm not familiar with ncdu, but I'm checking it out now!
What do you get from this command: sudo du -sh /

Thanks!
-G

JeoSaurus 32 Posting Whiz in Training

Regex!
Try something like this:

var='96.33% from 120'
decimal=$(echo "$var" | grep -Eo '[0-9]+\.[0-9]+')
echo "$decimal"

I hope this helps!
-G

JeoSaurus 32 Posting Whiz in Training

Neat! Do you want the date to be updated each time? In your code snippet, you run the 'date' command one time, so each entry will be the same date/time. Try something like this!

#!/bin/sh
for i in $(seq 1 600); do
    date |tee -a outputfile
    sleep 1
done

Or if you're using bash:

#!/bin/bash
for i in {1..600}; do
    date |tee -a outputfile
    sleep 1
done

That should give you the results you need, and print them to the console at the same time it's writing to the output file. If you don't need the date to be current for each iteration, you can just set your date variable first, like in your example:

#!/bin/bash
dt=$(date)
for i in {1..600}; do
    echo "$dt" | tee -a outputfile
    sleep 1
done

I hope this helps!

Baduizm commented: Yeah +0
JeoSaurus 32 Posting Whiz in Training

Cut works well for that! Or awk:

echo "$file" | awk -F/ '{print $3}'

You could also do something with sed...

echo "$file" | sed -n 's%/media/\([^/]*\)/.*%\1%p'

HTH!
-G

JeoSaurus 32 Posting Whiz in Training

TL;DR version:

try this:

awk -F/ '{print $5 "-" $6}' Albums-linux.txt

The $5 and $6 might need to be tweaked depending on your path.

Now for the full explanation:

Are you asking how to integrate this into your example? My example was just to illustrate how the substitution works, and what syntax the shell expects.

You would simply replace this:

${"${line:21}"///-}

with this:

${line/\//-}

So it would look more like this:

while read line; do echo "${line/\//-}"; done < Albums-linux.txt

If you have to start the line at position:21, then you might not be able to do it all in one operation using this method. If that's the case, you will want to use something different like sed or awk. Is it safe to assume that Albums-linux.txt contains a list of full paths, and the data that you want starts at :21? You might be able to do something like this, for example, instead:

line="/home/user/audio/GroupName/albumName"; echo $line |awk -F/ '{print $5 "-" $6}'

In this example, we set the field separator in awk to the "/" character. The first field is empty (before the first slash), so if we count the fields in my example, you get GroupName at $5 and albumName at $6. The print statement prints fields 5 and 6 with a hyphen in between, so the result is this:

GroupName-albumName

Which would translate in your original example to something like this self-contained one line awk script:

awk -F/ '{print $5 "-" $6}' Albums-linux.txt
JeoSaurus 32 Posting Whiz in Training

Hello dwlamb!

Without your exact example data it's hard to tell for sure, but inside the ${...} substitution bash expects a variable name as the first part, which means you leave out the '$'. This simplified example seems to work for me, and might give you a better idea of how the substitution should look:

line="GroupName/albumName"
echo "$line"
echo "${line/\//-}"

The result:

GroupName/albumName
GroupName-albumName

I hope this helps!

JeoSaurus 32 Posting Whiz in Training

Well, cgrep is just the sed script from the example link. You might even be able to use it as-is (I haven't actually looked at it yet).

JeoSaurus 32 Posting Whiz in Training

Oh, right! Not all versions of grep have the -A and -B flags. Here are their definitions:

-A NUM, --after-context=NUM
      Print NUM lines of trailing context after matching lines.  Places a line
      containing -- between contiguous groups of matches.

-B NUM, --before-context=NUM
      Print NUM lines of leading context before matching lines.  Places a line
      containing -- between contiguous groups of matches.

There's a way to do this with sed, but I haven't sat down with it to break it down and understand it myself. Here's a link to the O'reilly Unix Power Tools page that talks about it: http://docstore.mik.ua/orelly/unix3/upt/ch13_09.htm

And here's a link to the 'cgrep' script that they use in the example: http://examples.oreilly.com/9780596003302/example_files.tar.gz

I hope this helps!

4evrmrepylrning commented: Much appreciated! +1
JeoSaurus 32 Posting Whiz in Training

That's a good question! I don't think your sed line is going to work, unless each record is all on one line. The way the records appear to be formatted, your sort|uniq would give you a big pile of nothing.

Here's a (kind of ugly) script that I wrote to see if this would work... I *think* it does what you're looking for.

input="test.txt"
nums="$(grep '^<num>' $input |sort -u)"
for num in $nums; do
    grep -B6 -A3 $num $input|head -n 9
    echo
done

In this case, 'test.txt' is my input file, containing the 4 sample records you provided. Here's my output:

# sh test.sh
<record>
<dateadd>012012</dateadd>
<nid>R04607295</nid>
<reflink></reflink>
<FPI>YES</FPI><TPG>NO</TPG><FT>YES</FT>
<num>631</num>
<author>Anon</author>
<title>ON THE WED</title>
</record>

<record>
<dateadd>012012</dateadd>
<idref>R04607297</idref>
<reflink></reflink>
<type>Article</type>
<FPI>YES</FPI><TPG>NO</TPG><FT>YES</FT>
<num>651</num>
<author>Bent, E</author>
<title>ENTRANCES AND EXITS</title>

I hope this helps, or at least gives you a place to start! There's probably a much cleaner way to do it, but this was quick and simple.

JeoSaurus 32 Posting Whiz in Training

Hello iamthesgt!

I'm sure there's some standard way to do this, but I don't know it. There are lots of pre-existing scripts out there that are similar to this one, but here's something I have been using for a while:

#!/bin/bash

# Check for FreeBSD in the uname output
# If it's not FreeBSD, then we move on!
if [ "$(uname -s)" == 'FreeBSD' ]; then
  OS='freebsd'

# Check for a redhat-release file and see if we can
# tell which Red Hat variant it is
elif [ -f "/etc/redhat-release" ]; then
  RHV=$(egrep -o 'Fedora|CentOS|Red.Hat' /etc/redhat-release)
  case $RHV in
    Fedora)  OS='fedora';;
    CentOS)  OS='centos';;
   Red.Hat)  OS='redhat';;
  esac

# Check for debian_version
elif [ -f "/etc/debian_version" ]; then
  OS='debian'

# Check for arch-release
elif [ -f "/etc/arch-release" ]; then
  OS='arch'

# Check for SuSE-release
elif [ -f "/etc/SuSE-release" ]; then
  OS='suse'

fi

# echo the result
echo "$OS"

It probably needs to be updated, and if you want to get more granular (debian vs ubuntu) or go as far as specific versions for each distro, it'll require a bit more than what's here. Hopefully, though, this will get you started.

-G

JeoSaurus 32 Posting Whiz in Training

Hi Bossman5000!

L7Sqr was answering your question about how to store a number in a variable, which is really the first step that you need to know for the operations that you're trying to do here.

Personally, I'd use a quick and dirty temporary file for something like this, but you could also easily put your command line arguments into an array, and do a bubble sort, like in the example here: http://tldp.org/LDP/abs/html/arrays.html

One of the simplest ways to do this, however, would be to write your command line integer parameters to a temporary file and then 'sort' the file and use 'head' and 'tail' to get the lowest and highest values.

Then to sum it all up, you could loop through the file, adding each number as you go, or use 'awk' to do it in a single line.
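
Here's a rough sketch of that temp-file approach, just to show the moving parts (the names and output format are only examples):

#!/bin/bash
# Put each command line parameter on its own line in a temp file
tmpfile=$(mktemp)
printf '%s\n' "$@" > "$tmpfile"

# Sort numerically, then grab the lowest and highest values
sort -n "$tmpfile" -o "$tmpfile"
lowest=$(head -n 1 "$tmpfile")
highest=$(tail -n 1 "$tmpfile")

# Let awk add everything up in one pass
sum=$(awk '{total += $1} END {print total}' "$tmpfile")

echo "lowest: $lowest  highest: $highest  sum: $sum"
rm -f "$tmpfile"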

I hope this helps. Is this for homework? It sounds like you might want to go through the bash scripting guide to get more familiar with some of the basic operations.

JeoSaurus 32 Posting Whiz in Training

Hi bossman5000!

What have you tried so far? I can think of a few ways to do those operations.

One of the simplest ways to get the lowest/highest values is to use 'sort'.

There are a few ways to do the math as well. You can use something like 'bc', or it might be more efficient to use the bash built-in.

The fun part is accepting "any number of command line integer parameters". For that, you'll probably need to determine the number of arguments ($#), and loop through them.
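
Just to illustrate that last piece (nothing that solves the assignment, only the argument handling):

#!/bin/bash
echo "You passed $# parameters"
for n in "$@"; do
    echo "got: $n"
done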

Show us some code, let us know which parts are challenging you, and we can probably help work through it!

JeoSaurus 32 Posting Whiz in Training

Hi!

It looks like you're off to a good start! Since it's homework, I won't make any suggestions about doing it a different way, but I can point out a few things that might trip you up in the troubleshooting process!

First: when you want to execute a command and do something with the output, don't use [square brackets], just use the $(commands go here) style. Square brackets indicate that you want to do some kind of evaluation (true/false) of the output.

Second: you go to the trouble of setting the "$g" variable, but then you call "$@" again a few lines down, when I assume you just want to work with one value of $@ per loop. Try using "$g" in your evaluation of "if $groupname=$g".

Third: This is the real clue to what's happening, I think! You're working with cut, which is giving you whole columns of data, but you really only need one row out of that column of data for each iteration of "$g". Try using 'awk' or 'grep' to narrow down the results ;)

Once you've tweaked those three things, I think you'll be much closer to a working script. I'm not sure about the logic in the loop where you're calculating $count, but I think when you resolve the three things above, the rest should be easier to sort out.

One more hint... all those numbers might be getting printed to stderr instead of stdout... try redirecting stderr to /dev/null, …
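
Putting the first three points together, a purely hypothetical fragment (I'm making up the file and the pattern, since I can't see your whole script) might look something like this:

for g in "$@"; do
    # command substitution with $( ), not [square brackets],
    # and grep to narrow things down to one row per group
    count=$(grep -c "^$g:" /etc/group)
    echo "$g: $count"
done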

iamthesgt commented: Thanks for the help. +3
JeoSaurus 32 Posting Whiz in Training

Looks like you're on the right track! Try using 'echo -e'

In some shells, you might have to specify /bin/echo (or whatever your path is) rather than the 'echo' built into the shell.
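
For example, both of these should print two lines on most Linux systems:

echo -e "line one\nline two"
/bin/echo -e "line one\nline two"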

JeoSaurus 32 Posting Whiz in Training

Hello voidyman!

I'm not sure about kicking off a process via FTP, but if you have cron access you might get better results running a shell script from cron that checks for that file, and does the work if it exists.
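
As a rough sketch of what I mean (every path and name here is made up, so adjust to fit), the script that cron runs could be as simple as:

#!/bin/sh
# Runs from cron every few minutes; checks for the uploaded trigger file
if [ -f /home/ftpuser/incoming/trigger.txt ]; then
    /usr/local/bin/do_the_work.sh
    rm -f /home/ftpuser/incoming/trigger.txt
fi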

JeoSaurus 32 Posting Whiz in Training

Hello Who?!

Does something like this help?

<?php

// set $a to house here:
$a = "house";

// set $c to the php code we want, using
// $a as a variable:
$c = "<?php \$b = \"$a\"; ?>";

// echo $c to see what our output looks like!
echo $c;

?>
JeoSaurus 32 Posting Whiz in Training

Hi voidyman!

I'm not sure how you got those fonts in here, but it sure makes your post hard to read!

A quick check of your script with 'perl -c' shows that the script (at least what you've pasted here) is missing a curly bracket at the end of the file (for that big "for(my $iSheet..." loop).

Another thing that *might* be an issue is the 'use XLSX.pm;' line. If that's a module you're including locally, try it without the '.pm' extension (use XLSX;).

I hope this helps!

JeoSaurus 32 Posting Whiz in Training

That's interesting!

If you can run that from the command line, it *should* work in a script as well... Perhaps try using the full path to 'java' and the full path to the Test.jar?
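
Something like this, for example ('which java' will tell you the real path on your system; the jar location is just a placeholder):

which java
/usr/bin/java -jar /full/path/to/Test.jar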

JeoSaurus 32 Posting Whiz in Training

Hi Sudo! I've done something similar before to monitor a log for errors, and execute commands based on what it found. Here's an example:

#!/bin/bash
logfile="/var/log/messages"
pattern="ERROR.*xxx"

tail -fn0 "$logfile" | while read line ; do
  echo "$line" | grep -q "$pattern"
  if [ $? = 0 ]; then
    echo "$line"
  fi
done

I hope this helps! I like the pipe idea, but I haven't tried anything like that yet.

JeoSaurus 32 Posting Whiz in Training

Hi eddie!

It's been a while since I've set up any ecommerce sites, but as far as open source stuff goes, opensourcecms.com has always been a good place to read and compare. Here's the link to their ecommerce section: http://php.opensourcecms.com/scripts/show.php?catid=3&category=eCommerce

I hope this helps!

JeoSaurus 32 Posting Whiz in Training

Hi k2k!

Were you able to figure this out? Personally I've found that using keys for authentication is much more reliable (and possibly more secure?) than using passwords in scripting tasks like this. Is that an option in your case?
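
If keys are an option, the setup is usually just a couple of commands (substitute your own user and host):

ssh-keygen -t rsa
ssh-copy-id user@remotehost
ssh user@remotehost

After that, the script can connect without ever touching a password.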

JeoSaurus 32 Posting Whiz in Training

Hi SakuraPink!

Sounds like you're making progress!

The 'p' at the end is for 'print'. Take a look at the sed man page for more info about that.

If you're using these sed lines in a similar way to your original script, that would explain why the second file is blank. Sed isn't going to process lines 100 - 107 and stop there. It's going to continue to the end of your input, leaving nothing for your second sed command to process.

A better way might be to have sed look at the file directly each time:

#!/bin/bash

# Prompt for filename
read -p "Enter file name: " fname

# Print lines 100 - 107 into newfile3.txt
sed -n -e '100,107p' "$fname" > newfile3.txt

# Print lines 108 - 127 into newfile4.txt
sed -n -e '108,127p' "$fname" > newfile4.txt

I hope this helps!

JeoSaurus 32 Posting Whiz in Training

Like Woooee suggested, cron is usually the best way to schedule something like this reliably. Python's time module does have a sleep() function to make this easy, though!

Example:

#!/usr/bin/python
import time
while True:
    print "x"
    time.sleep(10)

I hope this helps!
-Jeo

JeoSaurus 32 Posting Whiz in Training

Glad we could help! ardav's solution is definitely more elegant :)
My goal was to stay as true to your original script as possible.

Thanks for the feedback, and good luck!

JeoSaurus 32 Posting Whiz in Training

Hello yli!

Are you trying to replace whole words or any matching substring? In your example, you're using str_replace(), which gives us interesting results. For instance: "portocala" becomes "portoc<b>ala</b>"

For THIS example, I'll assume that's what you're expecting! :)

You can replace your new_kw/old_kw logic with a simple loop, which will loop through that array, no matter how many elements there are:

<?php

$search = "ala salsa portocala nueve vacas";
$where  = "texto ala salsa nueve texto portocala verde nueve";

$old_kw = explode(" ",$search);

foreach ($old_kw as $key => $value) {
   $new_kw[$key] = "<b>$value</b>";
}

$where  = str_replace($old_kw, $new_kw, $where);

echo "$where\n";

?>

I get the following output, which is identical to what I was getting from the original script:

texto <b>ala</b> <b>salsa</b> <b>nueve</b> texto portoc<b>ala</b> verde <b>nueve</b>

I hope this helps! Let us know if this isn't what you were looking for, and we'll see what we can do :)
-G

JeoSaurus 32 Posting Whiz in Training

Hello Sid!

I don't think there's a way to accomplish this with pure PHP. If cron is available on the server (if it's a unix/linux system, or task scheduler if it's Windows), that's probably the way to go.
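
For example, a crontab entry that runs a PHP script every five minutes might look something like this (the paths are just placeholders; 'which php' will tell you where the CLI binary lives):

*/5 * * * * /usr/bin/php /path/to/yourscript.php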

I hope this helps!
-G

JeoSaurus 32 Posting Whiz in Training

Hello Diwakar Gana!

I'm not 100% sure of the answer, but it looks like there is a related discussion over at perlmonks:

http://www.perlmonks.org/?node_id=839304

I hope this helps!
-G

JeoSaurus 32 Posting Whiz in Training

You might want to give 'ls' a try! In my test (using your script) 'ls' returned just filenames.

use Net::FTP;
$ftp = Net::FTP->new("mysite.com");
$ftp->login('xxxx', 'xxxx');
$ftp->cwd("/private/test");
my @filenames=$ftp->ls();
$ftp->quit;
foreach (@filenames){
  print "$_\n";
}

This results in the output:

$ perl test.pl 
test5.txt
test4.txt
test2.txt
test1.txt
test3.txt

My ftp server doesn't even recognize an 'nlst' command.
I hope this helps!
-G

winky commented: Went above and beyond. +4
d5e5 commented: Good test. +2
JeoSaurus 32 Posting Whiz in Training

Hmm... I'm not familiar with the 'nlst' command. What happens if you use 'ls' instead? For me, 'ls' gives me a directory listing, neatly stored in @filenames.

JeoSaurus 32 Posting Whiz in Training

Hi suman.great!

Here's a line from the 'expr' man page that relates to the problem you're having: "Beware that many operators need to be escaped or quoted for shells." You can work around this simply by quoting your variables. I tested this with your sample data and this script:

cat test.txt | while read line; do 
    IDX=`expr index "$line" \/`
    echo "$IDX"
    echo "$line"
done

Putting quotes around "$line" in your expr command allows expr to read it correctly. Quoting "$line" again for your echo command allows it to echo the literal "*" instead of translating * to all of the files in the current directory.

I hope this helps!

-G

JeoSaurus 32 Posting Whiz in Training

Great info, thanks for sharing! I'm a python noob, but I'm learning a lot just watching :) I was having trouble figuring out how to get the pid in Windows if the process wasn't spawned by the python script (I don't spend a lot of time in Windows).

JeoSaurus 32 Posting Whiz in Training

Great, I'm glad we could help!

JeoSaurus 32 Posting Whiz in Training

Hi! Have you looked at using os.kill()?

JeoSaurus 32 Posting Whiz in Training

Well, both of those methods are imperfect for counting connections, but you're probably getting a more accurate result with grep in this case, because it won't count the first two header lines (if your output is like mine):

Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State

Also consider that the result may actually be different from one moment to the next.

Here's perhaps a more accurate way to do it:

echo "TCP Connections: $(netstat -ant|awk 'END {print NR-2}')"
echo "UDP Connections: $(netstat -anu|awk 'END {print NR-2}')"

Or, when we're getting multiple numbers from the same output, I like to take more of a snapshot, since the numbers CAN change from one moment to the next. Something like this may be more appropriate, depending on the ultimate goal of your script:

NETSTAT=$(netstat -an)
echo "TCP: $(echo "$NETSTAT"|grep -c ^tcp)"
echo "UDP: $(echo "$NETSTAT"|grep -c ^udp)"
rch1231 commented: The awk and grep examples were great and I had never used the output to a variable like that. Well written. +2
JeoSaurus 32 Posting Whiz in Training

If you just want TCP info from netstat, you can do 'netstat -at', or 'netstat -au' for UDP.

You can also get some good information from 'lsof -i'. A little late, but I hope this helps!

JeoSaurus 32 Posting Whiz in Training

Great! There should be a group-install for that, or you can just install what you need, starting with "yum install httpd".

Take a look at this article. It explains how to install using the "Web Server" package group, as well as how to just install apache, and build from there:

http://hacktux.com/fedora/apache
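
For reference, the two approaches boil down to something like this, run as root (the first pulls in the whole "Web Server" group from that article, the second installs just Apache):

yum groupinstall "Web Server"
yum install httpd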

JeoSaurus 32 Posting Whiz in Training

Hi Starfruit!

It really depends on which distribution you're using. Most modern distributions have a package manager that will easily set up a default apache installation for you with very little (if any) configuration required.

Which Linux distribution are you using?

JeoSaurus 32 Posting Whiz in Training

This post is 2 days old, so I hope this helps...

There are some commands that are actually made for this! Try "host" or "nslookup". Your results will be much faster and easier to parse than ping.
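
For example, to grab just the address for a plain A record, something like this usually does it:

host -t a www.example.com
host -t a www.example.com | awk '{print $NF}'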

-G

JeoSaurus 32 Posting Whiz in Training

That's really odd... It works OK on my system:

$ sed -f DBACheck.sql.sed DBACheck.sql
select granted_role from sys.dba_role_privs where grantee='SYSTEM';

What version of sed are you using? ( sed --version )

-G

JeoSaurus 32 Posting Whiz in Training

Wow, that's a lot of pipes! I would do it something like this:

awk '{TotCPU += $1}{TotMem += $2}END{print "Total CPU= " TotCPU  "\nTotal Mem= "TotMem}' test.list

Kinda ugly all in one line, but here it is broken down a little:

awk '\
{TotCPU += $1}\
{TotMem += $2}\
END\
{print "Total CPU= " TotCPU  "\nTotal Mem= "TotMem}' test.list

Should output something like:

Total CPU= 5.1
Total Mem= 221.639

Season (format) to taste!

-G

JeoSaurus 32 Posting Whiz in Training

One thing to remember about 'sudo su' is that it's not a 'login shell' by default. If you use 'sudo su -l' ('sudo su -' for short) then you'll inherit all of the root user's environment variables, like the path and such.

Just curious though, why would you need to source the install script? If you're only running the one script, you shouldn't need to export any variables to your shell. Especially if you're running it with sudo: in that environment you're root just for that install script, and once it exits, anything that was sourced no longer matters.

Thanks!
-G

JeoSaurus 32 Posting Whiz in Training

If all that's in that directory is the sym links to the files that you want to tail, try something like this:

for i in /path/to/directory/*; do tail -n 21 "$i"; done

Let us know how it goes! If we're missing the gist of what you're trying to do, post your script for us so that we can get a better idea!

Hope this helps!
-G

JeoSaurus 32 Posting Whiz in Training

Hi k2k,

There are a few options. Usually, if you're the root user, these things are in your "PATH". Are you by any chance logging in as another user and using the "su" command to get to root? If so, try using "su -l". This will start a login shell and import all of root's default paths.

If a command is in your path, you can use the "which" command to find the full path if you need it. In your case, it looks like things like /sbin and /usr/sbin are missing from your path. If you've got slocate installed, you can use the "locate" command to find most things.
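
If you just want those directories in your path for the current shell, this also works:

export PATH="$PATH:/sbin:/usr/sbin"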

Hope this helps!
-G

JeoSaurus 32 Posting Whiz in Training

Yeah, haven't worked with Xen much myself. That could be it! Let us know what you find :)

-G

JeoSaurus 32 Posting Whiz in Training

Hi Mike,

That's certainly an odd problem... Let's compare the output of these commands:

$ date "+%a %d %b %Y %X %Z"; hwclock; ntpdate pool.ntp.org

That'll tell us if there are any differences between the system clock and the hardware clock, and give us a baseline from ntp. Do you know if the hardware clock is set to local time or UTC?