Oh, I see... How did I miss that? Looks like I also missed that the 'verify' that I found was not a command, but an argument to 'openssl'... I'll go back to my corner now...
Thanks for the follow-up :)
Hello Dark2Bright!
I'm not sure what you're trying to do with 'verify' in this case, but the 'verify' command seems to be pretty specific:
verify - Utility to verify certificates.
Are you trying to verify a certificate in this 'if' statement? If not, what condition are you testing?
Thanks!
Hi Amar!
You should be able to use Fedora's grub just fine. I don't think you can use the chainloader option unless you have Ubuntu's grub installed to the boot partition of the Ubuntu install (you say it was installed to the MBR before the Fedora installation).
You will need to find out which partitions contain your Ubuntu filesystems, and create a custom menuentry similar to the ones you see for Fedora.
Here's a link that may help: http://www.fedoraforum.org/forum/showthread.php?t=263739
Once you can boot into Ubuntu, you might want to re-install Ubuntu's grub to the MBR. I believe it will detect the Fedora installation automagically, where your Fedora installation is failing to detect Ubuntu.
I hope this helps!
Hello iamthesgt!
I'm sure there's some standard way to do this, but I don't know it. There are lots of pre-existing scripts out there that are similar to this one, but here's something I have been using for a while:
#!/bin/bash
# Check for FreeBSD in the uname output
# If it's not FreeBSD, then we move on!
if [ "$(uname -s)" == 'FreeBSD' ]; then
OS='freebsd'
# Check for a redhat-release file and see if we can
# tell which Red Hat variant it is
elif [ -f "/etc/redhat-release" ]; then
RHV=$(egrep -o 'Fedora|CentOS|Red.Hat' /etc/redhat-release)
case $RHV in
Fedora) OS='fedora';;
CentOS) OS='centos';;
Red.Hat) OS='redhat';;
esac
# Check for debian_version
elif [ -f "/etc/debian_version" ]; then
OS='debian'
# Check for arch-release
elif [ -f "/etc/arch-release" ]; then
OS='arch'
# Check for SuSE-release
elif [ -f "/etc/SuSE-release" ]; then
OS='suse'
fi
# echo the result
echo "$OS"
It probably needs to be updated, and if you want to get more granular (debian vs ubuntu) or go as far as specific versions for each distro, it'll require a bit more than what's here. Hopefully, though, this will get you started.
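If you do want to split Debian from Ubuntu, one approach (just a sketch; the /etc/os-release and /etc/lsb-release files are assumptions and won't exist on every system) is to check for those files before falling back to /etc/debian_version:
# Sketch only: os-release/lsb-release are not guaranteed to exist everywhere
if [ -f /etc/os-release ]; then
    . /etc/os-release
    OS="$ID"                                    # e.g. 'ubuntu', 'debian', 'fedora'
elif [ -f /etc/lsb-release ]; then
    . /etc/lsb-release
    OS=$(echo "$DISTRIB_ID" | tr 'A-Z' 'a-z')   # e.g. 'Ubuntu' -> 'ubuntu'
fi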
-G
Hi BTW8892!
Your script actually IS working! It's just that you're clearing the results immediately after :)
After you accept input and execute the corresponding command, it loops back to the beginning, which starts with "clear" on line 6.
Also, I'm not sure if this is intentional, but your actions in that case statement are all preceded by the 'echo' command, which means that all that will be printed to the screen is the actual command, not the results that you'd expect.
-> echo users
Output: "users"
-> echo cal -y
Output: "cal -y"
-> echo sed -n '1409,1428p' /etc/passwd
Output: "sed -n '1409,1428p' /etc/passwd"
If that's what you intended, then ignore me :) Otherwise, just take out the 'echo' and it will execute the commands and display the output (until it's cleared when looping back to the menu, of course)
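One small tweak that helps with the clearing problem (just a sketch; adjust it to fit your menu loop): pause for a keypress before looping back to the 'clear', so the output stays on screen until you've read it.
# ...right after the case statement runs the chosen command:
echo ""
read -p "Press Enter to return to the menu..." junk   # pause so the next 'clear' doesn't wipe the output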
I hope this helps!
-G
For sure! It's why we're here :)
Thanks for the tip L7Sqr!
I researched 'lockfile', and it looks like it's a utility that ships with the 'procmail' package, and therefore probably not quite as universal/portable as simply touching a file. You still have to do the same test to see if the file exists, so I'm not sure I see the benefit.
'Lockfile' has some useful features, but I'm not sure it's worth installing 'procmail' in an environment where it won't be used just to get it.
Here's another example, using a "launch" script to see if your script is already running before trying to launch it again:
#!/bin/sh
echo "checking for a running instance of some_script.sh..."
if ps -ef | grep -q '[s]ome_script.sh'; then    # pattern quoted so the shell can't glob-expand it
    echo "already running"
else
    echo "launching"
    sh some_script.sh &
fi
### test run ###
-> sh launch.sh
checking for a running instance of some_script.sh...
launching
Success!
-> sh launch.sh
checking for a running instance of some_script.sh...
already running
I hope this helps!
-G
Hi pennywise134!
Did you ever get this sorted? If not, we're happy to help! Let us know what you've got so far and we'll see if we can give you a push in the right direction.
Hi itengineer7!
This post is a little aged, so you've probably already figured this out by now, but it should be fairly simple to include logic that checks to see if the script is already running, and exits if it is. That should prevent concurrent runs.
Here's one method that uses a touchfile:
#!/bin/sh
touchfile=/tmp/running
if [ -e "$touchfile" ]; then
    echo "already running"
    exit
else
    echo "starting"
    touch "$touchfile"
    sleep 120
    rm "$touchfile"
fi
### test run ###
-> ./rawr.sh &
[1] 21778
starting
-> ./rawr.sh
already running
You could also do something using 'ps'. With the ps method, it might be easier to write a "launcher" script to run from cron that checks whether your script is already running and only starts it if it isn't.
Hi Bossman5000!
L7Sqr was answering your question about how to store a number in a variable, which is really the first step that you need to know for the operations that you're trying to do here.
Personally, I'd use a quick and dirty temporary file for something like this, but you could also easily put your command line arguments into an array, and do a bubble sort, like in the example here: http://tldp.org/LDP/abs/html/arrays.html
One of the simplest ways to do this, however, would be to write your command line integer parameters to a temporary file and then 'sort' the file and use 'head' and 'tail' to get the lowest and highest values.
Then to sum it all up, you could loop through the file, adding each number as you go, or use 'awk' to do it in a single line.
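Here's a rough sketch of that approach (the temp file name is made up, and it assumes you've already written the numbers one per line):
# Sketch: assumes the integer arguments were written one per line to /tmp/nums.$$
sort -n /tmp/nums.$$ > /tmp/sorted.$$
echo "lowest:  $(head -1 /tmp/sorted.$$)"
echo "highest: $(tail -1 /tmp/sorted.$$)"
echo "sum:     $(awk '{ total += $1 } END { print total }' /tmp/sorted.$$)"
rm -f /tmp/nums.$$ /tmp/sorted.$$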
I hope this helps. Is this for homework? It sounds like you might want to go through the bash scripting guide to get more familiar with some of the basic operations.
Hi bossman5000!
What have you tried so far? I can think of a few ways to do those operations.
One of the simplest ways to get the lowest/highest values is to use 'sort'.
There are a few ways to do the math as well. You can use something like 'bc', or it might be more efficient to use bash's built-in arithmetic ($(( ... ))).
The fun part is accepting "any number of command line integer parameters". For that, you'll probably need to determine the number of arguments ($#), and loop through them.
Show us some code, let us know which parts are challenging you, and we can probably help work through it!
Hi!
It looks like you're off to a good start! Since it's homework, I won't make any suggestions about doing it a different way, but I can point out some things that might trip you up in the troubleshooting process!
First: when you want to execute a command and do something with the output, don't use [square brackets]; just use the $(commands go here) style. Square brackets indicate that you want to do some kind of evaluation (true/false) of the output. (There's a quick sketch of this at the end of this post.)
Second: you go to the trouble of setting the "$g" variable, but then you call "$@" again a few lines down, when I assume you just want to work with one value of $@ per loop. Try using "$g" in your evaluation of "if $groupname=$g".
Third: This is the real clue to what's happening, I think! You're working with cut, which is giving you whole columns of data, but you really only need one row out of that column of data for each iteration of "$g". Try using 'awk' or 'grep' to narrow down the results ;)
Once you've tweaked those three things, I think you'll be much closer to a working script. I'm not sure about the logic in the loop where you're calculating $count, but I think when you resolve the three things above, the rest should be easier to sort out.
One more hint... all those numbers might be getting printed to stderr instead of stdout... try redirecting stderr to /dev/null, …
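Here's a quick sketch of the first and third points together (the group name and the /etc/group lookup are just guesses about what your script is doing):
# $( ) captures command output; [ ] is only for true/false tests
g="wheel"                                   # hypothetical group name
line=$(grep "^$g:" /etc/group)              # grep narrows cut's whole column down to one row
groupname=$(echo "$line" | cut -d: -f1)
if [ "$groupname" = "$g" ]; then
    echo "found group $g"
fi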
Looks like you're on the right track! Try using 'echo -e'
In some shells, you might have to specify /bin/echo (or whatever your path is) rather than the 'echo' built into the shell.
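For example (the strings here are just placeholders):
echo -e "line one\nline two\tplus a tab"
# or, if your shell's built-in echo ignores -e:
/bin/echo -e "line one\nline two"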
Hi gedas,
This script takes the INPUT file as a command line argument using the "shift" function here: my $infile = shift;
The OUTPUT file name is also taken from the command line, OR generated by tacking on ".xls" on the end of the input filename: my $outfile = shift || $infile . ".xls";
Try running the script with no arguments to get some 'usage' hints (Or you could read it directly from the 'usage' function in the script!)
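For example, assuming the script is saved as convert.pl (the file names here are made up):
perl convert.pl mydata.txt              # output goes to mydata.txt.xls
perl convert.pl mydata.txt report.xls   # output goes to report.xls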
I hope this helps!
Hello aFg3!
I'm not exactly sure what you're asking... Are you looking for a command line text editor, or are you looking for something along the lines of 'cat > file.txt'?
Not knowing much about your system, it's nearly impossible for us to tell what effect deleting that file might have.
That being said, if that file is (was?) indeed a *core dump*, then losing it is probably not a problem. Core files are full of diagnostic information written to disk as a program crashes. So it's *probably* ok.
What you've done here is a classic mistake... Unix/Linux systems do whatever you ask them to without question, up to and including completely destroying themselves. (I'll never forget my first "rm -rf /*" ...it was a typo in a variable name :( )
Hi sudhanshu!
This should be relatively simple with a search/replace in sed. What have you tried so far?
Well that's different!
Let's see... What happens if you run it like this: /bin/bash -x script.sh
That should give us some debugging information
Maybe so! I'm not sure why it would copy more than just the user's /home/user directory, unless it's doing something unexpected with those environment variables...
1. What does that last line echo when the script runs?
2. What's the output of: echo "$USER:$HOME"
3. Does the line in question #2 produce a different result if you run it from a script?
If you get more than one line of "Backup of /home/user complete for user", then you might still be using the "for" loop from the original script!
Glad I could help :)
If you just want it to back up the home directory of the user launching the script, all you really have to do is declare the directory you want to back up and take out the 'for' loop. I'd probably do something like this:
#!/bin/bash
FTPUSER=sanders
FTPPASS=law123
SERVERIP="192.168.1.37"
DATE=`date +%D`
echo "mirror -R $HOME $USER" | lftp -u $FTPUSER,$FTPPASS $SERVERIP
echo "Backup of $HOME complete for $USER"
$USER and $HOME are environment variables that *should* be available to the script; otherwise you can set them with something like 'whoami', or just by setting them manually.
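If those variables turn out not to be set, a couple of fallbacks (just a sketch; 'getent' is available on most Linux systems):
# fall back to 'whoami' / getent if $USER or $HOME is empty
[ -z "$USER" ] && USER=$(whoami)
[ -z "$HOME" ] && HOME=$(getent passwd "$USER" | cut -d: -f6)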
I see, that makes sense :) I wasn't paying attention..
I don't see a -r flag for the mirror command... IN FACT! When I run mirror -R without the lower case r, it works recursively. WITH the -r it does NOT copy recursively.
tl;dr: change the mirror command to this:
echo "mirror -R /home/$i $i" | lftp -u $FTPUSER,$FTPPASS $SERVERIP
I hope this helps!
Would SELinux help in this case? Here's an old post from the fedora mailing list that might help:
http://www.redhat.com/archives/fedora-selinux-list/2004-October/msg00125.html
Hi Staric!
I see a couple of potential issues here. I think the real answer lies in your "mirror" command. I'm not familiar with a 'mirror' command, so is it safe to assume that's a script? Could you paste that script here as well?
If the problem is that it's not copying the sub-directories, then it sounds like there's maybe a recursive flag missing somewhere.
For the second problem, you can use re.search, with a fairly simple regex. Something like this should do: [engi]{4}
That will match a lot of words that don't have ALL four of those letters though. You can probably refine it from there if needed.
Hello!
You may want to try using "MM" in your date string to represent the month, rather than "mm", which represents minutes ;)
SimpleDateFormat("dd-MM-yyyy")
From the java documentation:
M    Month in year     Month     July; Jul; 07
m    Minute in hour    Number    30
Hello voidyman!
I'm not sure about kicking off a process via FTP, but if you have cron access you might get better results running a shell script from cron that checks for that file, and does the work if it exists.
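Something along these lines, run from cron every few minutes, might do it (the paths and the "real work" part are placeholders):
#!/bin/sh
# check for the uploaded trigger file; process and remove it if it's there
file=/home/ftpuser/incoming/trigger.txt
if [ -e "$file" ]; then
    # ...do the real work here...
    rm "$file"
fi
And a crontab entry to run it, say, every five minutes:
*/5 * * * * /path/to/check_upload.sh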
Hi Shwick!
"I tried adding in a www entry but it only resolves to the same local ip as example.com"
^This is the expected behavior here. On the web server side of things, you would want to make sure that apache (or your web server of choice) is set up to answer for both example.com and www.example.com in the same place (in apache, that's typically done with a ServerAlias).
The OTHER option you might be looking for is a 'CNAME' record. Try something like: www IN CNAME example.com.
The list you linked is in the format of a 'hosts' file. You don't need to run BIND for that, you can simply add the entire list to your /etc/hosts file on any client machine (/windows/system32/drivers/etc/hosts on windows).
BUT if you want to add the list to BIND and point your clients to that server for DNS resolution, that would work too. You would do it the same way you added your example.com zone. Once you have the steps down, you could probably script it pretty easily, or you might be able to find an existing script to create the zones for you.
I hope this helps!
Hello Who?!
Does something like this help?
<?php
// set $a to house here:
$a = "house";
// set $c to the php code we want, using
// $a as a variable:
$c = "<?php \$b = \"$a\"; ?>";
// echo $c to see what our output looks like!
echo $c;
?>
Hello jam2010, welcome to the forum!
Could you show us what you have tried so far? We might be able to help you get it to work, but I don't know that anyone here will write a script to spec from scratch.
Thanks!
-G
Hi voidyman!
I'm not sure how you got those fonts in here, but it sure makes your post hard to read!
A quick check of your script with 'perl -c' shows that the script (at least what you've pasted here) is missing a curly bracket at the end of the file (for that big "for(my $iSheet..." loop).
Another thing that *might* be an issue is the 'use XLSX.pm;' line. If that's a module you're including locally, try it without the '.pm' extension (use XLSX;)
I hope this helps!
Hello perly!
It looks like you're close! Since you didn't say exactly what the problem with your script is, I'm just guessing at the solution here :) Why not do it all line-by-line though? Something like this might work:
my $REPORT_FILE = 'report.txt';
my $allRfile = 'AllidentifiedMetabs1.txt';
open(ALL,"$allRfile") || die "can't open $allRfile $!";
open(OUT, "+>$REPORT_FILE") || die "can't open $REPORT_FILE $!";
my @lines = <ALL>;
foreach my $line (@lines) {
    ($ID,$name,$M2, $M3, $M4, $M5) = split /\t/, $line;
    print OUT "$ID & $name & $M2 & $M3 & $M4 & $M5";
    ## @all = ($ID,$name,$M2, $M3, $M4, $M5);
    ## #chomp @all;
}
close ALL;
close OUT;
## open(OUT, "+>report.txt") or die "Can't open data";
## foreach $a(@all) {
## print "$ID & $name & $M2 & $M3 & $M4 & $M5";
## };
I hope this helps!
That's interesting!
If you can run that from the command line, it *should* work in a script as well... Perhaps try using the full path to 'java' and the full path to the Test.jar?
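Something like this, with the paths adjusted for your system (the paths below are assumptions, and this assumes you're launching the jar with -jar):
/usr/bin/java -jar /home/youruser/Test.jar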
Hi Sudo! I've done something similar before to monitor a log for errors, and execute commands based on what it found. Here's an example:
#!/bin/bash
logfile="/var/log/messages"
pattern="ERROR.*xxx"
tail -fn0 "$logfile" | while read line ; do
    echo "$line" | grep -q "$pattern"    # -q: we only need the exit status here
    if [ $? = 0 ]; then
        echo "$line"
    fi
done
I hope this helps! I like the pipe idea, but I haven't tried anything like that yet.
rsync!
Do something like: rsync -av --delete-after /home/folder1/subfolder/ /home/folder2/subfolder
I *think* that will do what you're looking for!
Sorry, I didn't give you the right syntax for running just the line matching (I left out the print command). It should look more like this: sed -n '/#modify/,/hi()/p'
That should show you only the two lines we want.
Okay, so the caret shouldn't have anything to do with the multi-line replacement that you're seeing. How many lines do you get from just this command: sed '/#modify/,/hi()/' ?
Sure! The caret matches the beginning of the line, so in the case of these two lines:
#modify only the hi() below this line
hi() ---> this is the hi() i want to modify
...only the hi() at the beginning of the line will be replaced like this:
#modify only the hi() below this line
hi(20) ---> this is the hi() i want to modify
Without the caret, it would replace the first match on BOTH lines, like this:
#modify only the hi(20) below this line
hi(20) ---> this is the hi() i want to modify
Or if you drop the caret and add a 'g' (s/hi()/hi(20)/g), it will replace all occurrences on the matching lines:
#modify only the hi(20) below this line
hi(20) ---> this is the hi(20) i want to modify
I'm not sure how to explain why it would replace every instance in the file, as it seems to be doing in your case. What shell are you using? I'm testing in bash 3.2.25 with sed version 4.1.5.
EDIT: Also works as expected on OSX/bash 3.2.48, though I'm not sure of the sed version...
k2k,
This sounds similar to what you're asking in your other thread. So you want to match the line TWO down from the first match instead of the one directly below?
Will there ever be an 'a=0' between them? like this:
#this is y
a=0
a=0
where you would want to replace only the second a=0? Or will it always be something like:
#this is y
something else here
a=0
If there will always be something else there, the sed statement in your other thread should work just fine. It's not only matching two *lines*, it's matching two *expressions*, so the work will be done on every line between those two expressions.
For example, using your sample text:
-> sed '/#this is y$/,/a=/ {s/a=0/a=1/}' tmp.txt
#this is x
a=0
#this is y
#this is yy
a=1
this is z
a=0
It might be a shell interpretation thing... Try with curly braces around the search/replace command, like this: sed '/#modify/,/hi()/ {s/^hi()/hi(20)/}'
Well this script will only match those two lines. I don't think you can specify three patterns like that. I'm not exactly sure what you're asking there. Could you give us an example?
Also, are you saying that the 'sed' line I posted above matches more than just the single hi() after the #modify line?
Hi techie929,
I *think* that you are getting an empty file because you are opening the file with ">", which will create a file if one does not exist, or *truncate* the file if it does exist. Therefore, the file is being truncated before you even get to read it into the rest of your script.
If you open it with "+<", you'll be able to read and write the existing file, but the problem with your *current* script is that you'll probably end up with each line being doubled.
You might want to think about *reading* from your input file, and *writing* the output to a second file (the perl -i "in-place" option might be a possibility as well...)
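The in-place option can also be handy as a one-liner straight from the shell (the pattern and file name here are just examples; -i.bak keeps a backup copy of the original):
perl -i.bak -pe 's/oldtext/newtext/' yourfile.txt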
I hope this helps!
Good catch! Glad we could help :)
Hello k2k!
This is a perfect case for SED!
You can use something like this to match the two lines with '#modify...' and 'hi()', and replace just those instances. Some googling will show you some examples that are close, but I didn't find anything that fit your issue exactly, so here's what I tried, and the result:
Here's the simple sed statement:
sed '/#modify/,/hi()/ s/^hi()/hi(20)/'
Here's the result:
-> sed '/#modify/,/hi()/ s/^hi()/hi(20)/' tmp.txt
hi()
some other code
#modify only the hi() below this line
hi(20) ---> this is the hi() i want to modify
hi()
other code
hi()
other code
hi()
Here's a breakdown of the sed command:
sed '
/#modify/,/hi()/ <- Here we look at only the line with #modify and the next one with hi()
s/^hi()/hi(20)/ <- Here we replace the hi() at the beginning of the line with hi(20)
' tmp.txt
I hope this helps!
Hi eddie!
It's been a while since I've set up any ecommerce sites, but as far as open source stuff goes, opensourcecms.com has always been a good place to read and compare. Here's the link to their ecommerce section: http://php.opensourcecms.com/scripts/show.php?catid=3&category=eCommerce
I hope this helps!
Hello Watery!
Why not just use a simple regex to match the pattern you're looking for? This one works for me, but you might be able to simplify it even more:
open FILE, "sample.txt" or die $!;
while (<FILE>) {
    if ( $_ =~ /(http:\/\/www\..*?)\// ) {
        print $1 . "\n";
    }
}
close FILE;
With your sample text, my output looks like:
http://www.starnet.com
http://www.starnet.com
http://www.starnet.com
I hope this helps!
griswolf beat me to it :)
Hi Member24!
Looks like you've made great progress!
I *think* the problem you're having now is the way that AWK handles environment variables. You may be able to use the ENVIRON array to pull that data into awk. Something like this might work:
system("/usr/bin/ksh/send_mail.ksh -s "subject" -f "ENVIRON["FROM_EMAIL_ID"]" -t "ENVIRON["TO_EMAIL_ID"]" -m "message"")
OR you could try importing the variables up front, with the awk command, before starting the script:
awk -v FROM="$FROM_EMAIL_ID" -v TO="$TO_EMAIL_ID" 'script starts here...
...
system("/usr/bin/ksh/send_mail.ksh -s "subject" -f "FROM" -t "TO" -m "message"")
...
I hope this helps!
Hi! Have you looked at File::Copy?
This link has information about the File::Copy module, and some examples:
http://perldoc.perl.org/File/Copy.html
I hope this helps!
Hi k2k!
Were you able to figure this out? Personally I've found that using keys for authentication is much more reliable (and possibly more secure?) than using passwords in scripting tasks like this. Is that an option in your case?
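If keys are an option, the setup is quick (the user and host below are placeholders):
ssh-keygen -t rsa                 # accept the defaults; leave the passphrase empty for unattended scripts
ssh-copy-id user@remote.host      # appends your public key to the remote authorized_keys
ssh user@remote.host              # should now log in without prompting for a password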
Hi Member24!
It looks like you're running into this feature of awk output redirection:
print items > output-file
This type of redirection prints the items into the output file output-file. The file name output-file can be any expression. Its value is changed to a string and then used as a file name (see section Expressions). When this type of redirection is used, the output-file is erased before the first output is written to it. Subsequent writes to the same output-file do not erase output-file, but append to it.
(Found here: http://www.chemie.fu-berlin.de/chemnet/use/info/gawk/gawk_7.html)
BUT it looks like there is a simple solution. Here's another excerpt from that page:
Redirecting output using `>', `>>', or `|' asks the system to open a file or pipe only if the particular file or command you've specified has not already been written to by your program, or if it has been closed since it was last written to.
So! It looks like you can use the "close()" command to have awk start anew with the next line that it writes to the file, and overwrite what's already there. I copied your script and modified it a bit for this test. Here's the result:
## The script WITHOUT the close()
awk -F"|" '{
if (!system("test -f " $1)) {
print $1 " exists\n"
print $2 > "action.txt"
system("cat action.txt")
} else {
print $1 " Not exists"
}
}' list.txt
## In the output you can see that every time …