Hi amithlaxman!
Have you looked into the fork() function?
Hi tmparisi!
Perhaps you could show us what you've tried so far? Generally FTP requires some interaction, so you might be better off using something like Perl, depending on what you're trying to do, but there's still a lot that can be done in bash!
Hi SakuraPink!
Sounds like you're making progress!
The 'p' at the end is for 'print'. Take a look at the sed man page for more info about that.
If you're using these sed lines in a similar way to your original script, that would explain why the second file is blank. Sed isn't going to process lines 100 - 107 and stop there. It's going to continue to the end of your input, leaving nothing for your second sed command to process.
A better way might be to have sed look at the file directly each time:
#!/bin/bash
# Prompt for filename
read -p "Enter file name: " fname
# Print lines 100 - 107 into newfile3.txt
sed -n -e '100,107p' "$fname" > newfile3.txt
# Print lines 108 - 127 into newfile4.txt
sed -n -e '108,127p' "$fname" > newfile4.txt
I hope this helps!
Would re.sub work for what you're doing? http://www.regular-expressions.info/python.html
Something like this?
#!/usr/bin/python
import re
import fileinput
for line in fileinput.input("test.txt"):
    print re.sub("(APP[a-z]{2}[0-9]{3})", "<a href=\"\\1\">\\1</a>", line.rstrip())
Here's a test run:
-> cat test.txt
APPsd222
APPxx333
-> python test.py
<a href="APPsd222">APPsd222</a>
<a href="APPxx333">APPxx333</a>
I hope this helps! I'm also a python noob :)
Hmm... I wonder if you can do it like it's done in the lua.syntax file?
There's a 'keyword' line under 'context default' that looks like this: keyword -- brown
There is also a separate section similar to the comment line that you have that looks like this:
context exclusive -- \n brown
spellcheck
I hope this helps!
-Jeo
Like Woooee suggested, cron is usually the best way to schedule something like this reliably. python does have a 'sleep()' function to make this easy though!
Example:
#!/usr/bin/python
import time
while True:
    print "x"
    time.sleep(10)
I hope this helps!
-Jeo
Glad we could help! ardav's solution is definitely more elegant :)
My goal was to stay as true to your original script as possible.
Thanks for the feedback, and good luck!
Interesting! One way I was able to reproduce that is with an extra line at the end of the input file. Here's a successful run:
-> cat -E test.txt
1 0 100$
2 11 150$
3 0 189$
4 0 195$
5 21 245$
-> python test.py
0
11 non-zero!
0
0
21 non-zero!
Non-zero total: 2
Here's a run with an extra line at the end of the csv file:
-> echo >> test.txt
-> cat -E test.txt
1 0 100$
2 11 150$
3 0 189$
4 0 195$
5 21 245$
$
-> python test.py
0
11 non-zero!
0
0
21 non-zero!
Traceback (most recent call last):
File "test.py", line 9, in ?
print row[1],
IndexError: list index out of range
So you might want to look at a way to check the input and trim out any blank lines, if there are any. It could also be something else entirely!...
Here's a simple example that checks to see if the list is empty before trying to process it:
#!/usr/bin/python
import csv
csvInput = csv.reader(open('test.txt', 'rb'), delimiter=' ')
count = 0
for row in csvInput:
    if row:
        print row[1],
        if row[1] != "0":
            print "non-zero!"
            count += 1
        else:
            print
print "Non-zero total: %s" % count
Again, I'm terrible at python, but I hope this helps! :)
-G
Hello machine91!
Take a look at python's "csv" module. I believe it will give you what you need.
I'm a python noob, but here's a basic example that does something close to what I think you're looking for:
#!/usr/bin/python
import csv
csvInput = csv.reader(open('test.txt', 'rb'), delimiter=' ')
count = 0
for row in csvInput:
    print row[1],
    if row[1] != "0":
        print "non-zero!"
        count += 1
    else:
        print
print "Non-zero total: %s" % count
I hope this helps!
-G
Hello yli!
Are you trying to replace values or strings? In your example, you're using str_replace() which gives us interesting results. For instance: "portocala" becomes "portoc<b>ala</b>"
For THIS example, I'll assume that's what you're expecting! :)
You can replace your new_kw/old_kw logic with a simple loop, which will loop through that array, no matter how many elements there are:
<?php
$search = "ala salsa portocala nueve vacas";
$where = "texto ala salsa nueve texto portocala verde nueve";
$old_kw = explode(" ",$search);
foreach ($old_kw as $key => $value) {
    $new_kw[$key] = "<b>$value</b>";
}
$where = str_replace($old_kw, $new_kw, $where);
echo "$where\n";
?>
I get the following output, which is identical to what I was getting from the original script:
texto <b>ala</b> <b>salsa</b> <b>nueve</b> texto portoc<b>ala</b> verde <b>nueve</b>
I hope this helps! Let us know if this isn't what you were looking for, and we'll see what we can do :)
-G
Hello Sid!
I don't think there's a way to accomplish this with pure PHP. If cron is available on the server (if it's a unix/linux system, or task scheduler if it's Windows), that's probably the way to go.
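For example, here's a sketch of a cron entry; the path to the PHP CLI and to your script are just placeholders you'd adjust for your server:
# Hypothetical crontab entry (added with 'crontab -e')
# Runs the script every 5 minutes and appends any output to a log
*/5 * * * * /usr/bin/php /path/to/script.php >> /path/to/script.log 2>&1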
I hope this helps!
-G
Hi all! This can be done with a fairly simple one-liner!
As Salem pointed out, all the clues are in the man page, but I personally found the solution in 'sort' rather than 'uniq':
# The test file with your sample data
-> cat test.txt
REF | FOR | SUR
TLT090991|STEPHEN|GRIFFITHS
TLT090992|STEPHEN|GRIFFITHS
#Test run with one-line sort command
-> sort -t\| -k2 -u test.txt
REF | FOR | SUR
TLT090991|STEPHEN|GRIFFITHS
A trip through the man page for 'sort' reveals the following:
-t, --field-separator=SEP
use SEP instead of non-blank to blank transition
-k, --key=POS1[,POS2]
start a key at POS1, end it at POS2 (origin 1)
-u, --unique
with -c, check for strict ordering; without -c, output only the first of an equal run
TL;DR version:
-t sets the field separator, in this case "|" (escaped with \)
-k tells it which field to start with
-u says only print out unique lines (taking into consideration our starting position)
I hope this helps!
-G
Hi _neo_!
Just to be sure, can you tell us what language you're working with? There are a couple of ways this might be accomplished, but I wanted to be sure I'm testing with the correct base syntax file first.
Thanks!
-G
Great! Glad we could help :)
Hello again anjoz,
You should be able to add the logic for that pretty easily... Just check to see if the destination file already exists, and if it does, sleep and get the timestamp again, or simply increment the last number.
Here's another thought... If you want the timestamp, why not save THAT into the text file, and use a number that will ALWAYS be unique as the identifier in the filename? 'ls -i' will give you the inode where the file lives. That number should always be unique unless the two files are hard-linked, in which case they're actually the same file.
So you could do something like:
absolutepath=$(readlink -f "${args[i]}")
basepath=$(basename "$absolutepath")
timestamp=$(date +'%Y-%m-%d--%H-%M-%S-%N')
suffix=$(ls -i "$absolutepath" | awk '{print $1}')
if [ -e "$absolutepath" ]; then
    mv "$absolutepath" "$HOME/Trash/${basepath}.${suffix}"
    echo "$timestamp $absolutepath.${suffix}" >> "$HOME/.info"
else
    echo "File/Command does not exist"
fi
I hope this helps!
-G
heres what i have done so far my problem is that when i move a file with the same name in the trash folder i just overwrites the previous file can you give me any suggestions on how to approch this problem
...
Hi again!
I'm glad the 'readlink' command worked for you! I did make a couple of suggestions in my post regarding the problem of files with the same name. Were they not what you were looking for? Here they are again:
----------
I'm not sure how to overcome the issue of storing those in a text file if there's more than one file with the same name.
I think a better solution might be to either store the file with the full path (/home/Trash/full/path/to/file)
OR
tar/gzip the files, preserving the full path by changing the working directory to / before storing and before REstoring.
----------
If the goal is to not have sub-directories in your Trash directory, you could use the tar.gz option, and use the path in the filename (replace '/' with '_' or something similar).
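Here's a rough sketch of that idea (the Trash location and the '_' replacement character are just assumptions, so adjust to taste):
# Encode the full path in the archive name by swapping '/' for '_'
ABSOLUTE=$(readlink -f path/to/testfile.sh)
SAFENAME=$(echo "$ABSOLUTE" | tr '/' '_')
# e.g. /home/user/path/to/testfile.sh becomes _home_user_path_to_testfile.sh.tar.gz
cd / && tar czf "$HOME/Trash/${SAFENAME}.tar.gz" "$ABSOLUTE"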
I hope this helps!
-G
Hello Diwakar Gana!
I'm not 100% sure of the answer, but it looks like there is a related discussion over at perlmonks:
http://www.perlmonks.org/?node_id=839304
I hope this helps!
-G
Hello santhoshvkumar!
I do not believe that there is a way to run a bash command from inside MySQL.
What you MIGHT be able to do though is run a bash script from cron that will check for a flag in mysql, and act based on that.
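Here's a very rough sketch of that approach. The database, table, and column names are made up for the example, and it assumes the mysql client can log in without prompting (e.g. credentials in ~/.my.cnf):
#!/bin/bash
# Run from cron; check a flag in MySQL and act on it.
# 'jobs', 'tasks', and 'run_backup' are hypothetical names.
flag=$(mysql -N -B -e "SELECT run_backup FROM tasks WHERE id=1" jobs)
if [ "$flag" = "1" ]; then
    /path/to/your_script.sh
    # Reset the flag so the job only runs once per request
    mysql -e "UPDATE tasks SET run_backup=0 WHERE id=1" jobs
fi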
I hope this helps!
-G
Hi Freude!
The ">" basically just re-directs output. In this case, the result of: echo "pause 1; replot; reread;" > loop_forever.gnu
would be a file named 'loop_forever.gnu' with the stuff in quotes as the content of the file.
It looks like .gnu is an extension to indicate that it's a gnuplot script.
Beyond that, I'm not familiar with gnuplot, so I don't know how much more I can help, but let us know if you have more questions!
Thanks!
-G
Hello landog!
I'm not sure exactly where that particular error comes from, but there are a couple of issues with this.
First, when comparing integers you'll want to use the '-eq' operator. Use '=' to compare strings; '==' is a bash extension that won't work in some shells.
Second, your 'if' statement is enclosed in an extra set of [brackets]. Knock those off and we're one step closer!
Third, and I could be wrong about this, but 'if' won't do math for you. The way to express a math problem in bash is $((x+y)) (or $(($x+$y)) ).
The 'Advanced Bash Scripting Guide' on the tldp.org site is just about the best bash scripting reference out there. Here's a link to the page on operators:
http://tldp.org/LDP/abs/html/comparison-ops.html
Here's a working example (in my shell anyway) of what I think you're trying to express:
#!/bin/bash
a=1
b=1
c=2
if [ $a -eq $((b+c)) ] || [ $b -eq $((a+c)) ] || [ $c -eq $((a+b)) ]; then
    echo 'Looks OK!'
else
    echo 'No Match :-('
fi
In this case the third expression should be true, triggering the 'Looks Ok!' result.
I hope this helps!
-G
Hello anjoz!
'readlink -f' will give you the full path to a filename, but I'm not sure how to overcome the issue of storing those in a text file if there's more than one file with the same name.
I think a better solution might be to either store the file with the full path (/home/Trash/full/path/to/file)
EXAMPLE:
ABSOLUTE=`readlink -f path/to/testfile.sh`
FILEPATH=`dirname "$ABSOLUTE"`
mkdir -p "$HOME/Trash/$FILEPATH"
mv "$ABSOLUTE" "$HOME/Trash/$FILEPATH/"
OR
tar/gzip the files, preserving the full path by changing the working directory to / before storing and before REstoring. You still might run into a problem with files that have the same name, but you could probably work out a naming system to get around that.
EXAMPLE:
ABSOLUTE=`readlink -f path/to/testfile.sh`
FILENAME=`basename "$ABSOLUTE"`
cd / && tar czf "$HOME/Trash/$FILENAME.tar.gz" "$ABSOLUTE"
I hope this helps!
-G
You could always just echo the command before it's run, then run it. You could even put a prompt in there if you really wanted.
#!/bin/sh
prompt="<<root@server ~>>$ "
for command in "ls" "cd /tmp" "ls"; do
    echo "$prompt $command"
    $command
done
Sample output:
$ sh tmp.sh
<<root@server ~>>$ ls
8ball.php bot.php test.php test.pl tmp.sh
<<root@server ~>>$ cd /tmp
<<root@server ~>>$ ls
gconfd-user orbit-user
Hi fuggles!
I'm not sure what you mean by "Live USB", but take a look at "unetbootin"
http://unetbootin.sourceforge.net/
You should be able to use unetbootin to get a bootable USB image from the dvd iso.
I hope this helps!
-G
Awesome, thanks! Glad I could be of service :)
Hi kukuruku!
I believe you're looking for this: $#argv
Check out this link for more details!
http://www-cs.canisius.edu/ONLINESTUFF/UNIX/shellprogramming.html
I hope this helps!
-G
You might want to give 'ls' a try! In my test (using your script) 'ls' returned just filenames.
use Net::FTP;
$ftp = Net::FTP->new("mysite.com");
$ftp->login('xxxx', 'xxxx');
$ftp->cwd("/private/test");
my @filenames=$ftp->ls();
$ftp->quit;
foreach (@filenames) {
    print "$_\n";
}
This results in the output:
$ perl test.pl
test5.txt
test4.txt
test2.txt
test1.txt
test3.txt
My ftp server doesn't even recognize an 'nlst' command.
I hope this helps!
-G
Hmm... I'm not familiar with the 'nlst' command. What happens if you use 'ls' instead? For me, 'ls' gives me a directory listing, neatly stored in @filenames.
Okay, this script is not the prettiest (and I'm sure there's a pure sed way to do this...), but it might at least give you a good starting point... Here goes :)
logfile="test.txt"
i=1
grep ERROR $logfile | while read line
do
    echo -n "$i - "
    sed -e :b -e '/'"$line"'/!d;n;:a' -e '/^[0-9]/bb' -e 'n;ba' $logfile
    i=$((i+1))
done
There's a good explanation for that 'sed' command here: http://www.catonmat.net/c/4323
I hope this helps!
-G
Thanks for the additional information! Do you need the count printed before each line, or do you just want a total at the end?
What kind of output do you need? Just print to STDOUT, or write to a log file?
Sure, suman.great!
Your first error was here: expr: syntax error
This was a result of the shell expanding "/*" into all of the file and directory names in the top level (root, or /) of the filesystem, instead of a literal "/*".
This would make your 'expr' command, with the variable expanded, look like this: expr index /1 /advancemfg /backup /bin /boot /buildblr /buildsjc /corp /departments /dept /dev /etc /global /home /import /initrd /lib /lost+found /media /misc /mnt /net /opt /proc /proj /root /sbin /scm_adm_remedy /scm_adm_spre /scm_adm_sre /scm_com_cla /scm_dev_aadm /scm_dev_ams /scm_dev_asic_emulator /scm_dev_avalon /scm_dev_bsi /scm_dev_catapult /scm_dev_dbgtools /scm_dev_dmc /scm_dev_dmm /scm_dev_efcm /scm_dev_efcm_eccapi /scm_dev_efcm_eccapi_pvob /scm_dev_efcm_pvob /scm_dev_efcm_ut /scm_dev_efcm_ut_pvob /scm_dev_efcmapi_pvob /scm_dev_efcmelmgr /scm_dev_efcmelmgr_pvob /scm_dev_efcmlegacy /scm_dev_efcmlegacy_pvob /scm_dev_efcmmpi /scm_dev_efcmxproj_common /scm_dev_efcmxproj_common_pvob /scm_dev_ezswitchsetup /scm_dev_fchba /scm_dev_fchba_toolchains /scm_dev_fos_lgen /scm_dev_isnap_test /scm_dev_lsi /scm_dev_maps /scm_dev_mldds /scm_dev_pki /scm_dev_prism /scm_env_globaltools /scm_env_metadata /scm_env_testage /scm_fvt_dce /scm_fvt_dcfm /scm_fvt_fcip /scm_grp_els /scm_grp_interwoven /scm_grp_itcollaboration /scm_grp_itobiee /scm_grp_itobiee_pvob /scm_grp_itoracleerp /scm_grp_itprojects /scm_grp_itprojects_ucm /scm_grp_itprojects_ucm_pvob /scm_grp_itpvob /scm_grp_itsharepoint_ucm /scm_grp_itsharepoint_ucm_pvob /scm_grp_itslk /scm_grp_itslk_swportal_ucm /scm_grp_itslkucm /scm_grp_itslkucm_pvob /scm_grp_itsoa /scm_grp_itsoa_pvob /scm_grp_itv2soa /scm_grp_itv3soa /scm_grp_itweb /scm_grp_r12 /scm_grp_rafw /scm_grp_rafw_dep /scm_grp_rafw_pvob /scm_grp_support /scm_grp_test /scm_int_media /scm_int_tpsinstallers /scm_jnk_fan /scm_jnk_grpitslk /scm_jnk_mergetest /scm_jnk_metadata /scm_jnk_rcitest1 /scm_jnk_rcitest2 /scm_jnk_simple /scm_jnk_testcq /scm_jnk_testinm /scm_jnk_testinm_pvob /scm_jnk_tetest /scm_jnk_training /scm_jnk_trainingte05 /scm_jnk_ucm /scm_jnk_ucm_pvob /scm_jnk_winrci /scm_oss_efcm /scm_oss_javajars /scm_oss_testtools /scm_rel_docs /scm_sqa_api /scm_sqa_arm /scm_sqa_bfc /scm_sqa_bosman /scm_sqa_dca /scm_sqa_dcfm /scm_sqa_dmm /scm_sqa_embedded /scm_sqa_ff /scm_sqa_fm /scm_sqa_fos /scm_sqa_fosreg /scm_sqa_hba /scm_sqa_nos /scm_sqa_oem /scm_sqa_psi /scm_sqa_sas /scm_sqa_scalability /scm_sqa_smia /scm_sqa_sustain /scm_sqa_toolsdev /scm_sqa_v0fos /scm_sqa_v2dcfm /scm_sqa_v2embedded /scm_sqa_v2fm /scm_sqa_v2fos /scm_sqa_v2oem /scm_sqa_v2smia /scm_sqa_v2sustain /scm_sqa_v3oem /scm_sqa_v4oem /scm_sqa_xpath_2 /scm_sqa_xpath_webtools /scm_tps_efcm /scm_tps_efcm_pvob /scm_tps_fos_safenet /scratch /selinux /srv /stage /swapfile1 /swapfile2 /sys /tftpboot /tmp /users /usr /var /view /vobs /vws1 \/
Expr just doesn't know what to do with this.
The output that you see AFTER that error is what …
Hi suman.great!
Here's a line from the 'expr' man page that relates to the problem that you're having: Beware that many operators need to be escaped or quoted for shells.
You can work around this simply by quoting your variables. I tested this with your sample data and this script:
cat test.txt | while read line; do
    IDX=`expr index "$line" \/`
    echo "$IDX"
    echo "$line"
done
Putting quotes around "$line" in your expr command allows expr to read it correctly. Quoting "$line" again for your echo command allows it to echo the literal "*" instead of translating * to all of the files in the current directory.
I hope this helps!
-G
Expect is definitely the way to go... OR you could generate a list of files first (store them in a variable or temporary file), and use ncftpput to do the work, no expect or << required!
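Something along these lines might work (the server name, credentials, and paths are placeholders):
#!/bin/bash
# Build the list of files first, then push them all with one ncftpput call.
# -u/-p supply the login; the last arguments are the remote dir, then the local files.
files=$(ls /path/to/upload/*.log)
ncftpput -u myuser -p mypass ftp.example.com /remote/dir $files
Note that $files is deliberately left unquoted so it splits into individual filenames, which falls apart if any of the names contain spaces.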
Hi madtorahtut!
Sorry to leave you hanging... Did you try the script? How'd it work out?
Awk would be great for this too, especially if the message is in the same 'field' every time.
For instance, in your example text, your 'short_message' is in the second field (after the first comma). Something like this would do the trick: awk -F, '/short_message/ {split ($2,A,"="); print A[2]}'
In the example, we set the comma as the initial delimiter with the '-F' flag.
We grab the second field with '$2'.
Then we split that field at the '=' sign and print the second half, which gives us <some text>
I hope this helps! You've already got two good solutions here :) Enjoy!
-G
This sounds interesting Kavitha!
What have you tried so far?
Do you have any example text so that we can see what the log looks like?
I worked on something similar recently, but I didn't go as far as getting the entire stack trace (that part might be a little trickier). Finding and counting the exceptions should be fairly simple to do.
Okay, this thread is long dead, but this is for pratsgl:
The original poster was looking for a way to resolve a hostname to an ip address.
Your script works great for getting the LOCAL ip address!
Did you know that you can do all of that parsing in awk, without using grep and sed? Try something like this to consolidate those 4 commands down to just 2: ifconfig eth0|awk '/inet addr/ {split ($2,A,":"); print A[2]}'
Hi Vasu!
That certainly sounds possible. I think we might need a little more information though. Perhaps some sample data, and which columns you want to get these values from?
No problem, let us know how it goes :) It should work fine in any case, but it's always good to know how best to optimize things, and swap is one of those areas where I don't really know as much as I probably should =P
Great info, thanks for sharing! I'm a python noob, but I'm learning a lot just watching :) I was having trouble figuring out how to get the pid in Windows if the process wasn't spawned by the python script (I don't spend a lot of time in Windows).
Thanks for that cfajohnson! I love awk, but I've barely scratched the surface of what it can do...
One difference I noticed though is that with my 'read', I can get everything from the 'WHAT' column in one field, because of the way 'read' treats the rest of the line as one field unless you give it another variable to use.
With your awk script, any arguments to the command displayed in the 'what' column are given their own field. For example:
## This source line...
username pts/0 07:57 0.00s 0.17s 0.04s screen -dr
## Parsed with 'read'...
username,pts/0,07:57,0.00s,0.17s,0.04s,screen -dr
## puts the command and its arguments into the same column.
## Parsing the same output with the (much more elegant
## and clean!) awk script puts each argument in its own column...
username,pts/0,07:57,0.00s,0.17s,0.04s,screen,-dr
So the end result is that we get 'screen,-dr' as opposed to 'screen -dr' all in one field. Is there a simple way to overcome this in awk?
Thanks!
-G
I'm sorry for the delay. I'm not sure how one would accomplish this in Windows. There may be something you could do in PowerShell or VB, but I'm not very familiar with either of those.
Somebody else may be able to help, but in the "shell scripting" forum, you're mostly going to find folks with Unix/Linux shell scripting in mind. You might try posting something over in the Visual Basic or Microsoft Windows forums.
Great, I'm glad we could help!
Oh! So you only want a copy of the latest file in each directory to be backed up. In that case, rsync probably isn't what you want. It'll be a little more complicated than that.
If the files are named with that date format, you'll have trouble sorting them properly based on that, so you'll probably want to use the file creation time instead. You can use 'ls -t' to sort the directory listing and grab the most recent one. I'd recommend trying it a few times to make sure it gives you the result you're looking for.
Then there's what to do on the remote system... Do you want to simply delete what's there first, and then replace it with the most recent file, or do you want to transfer the new files first, and then determine what needs to be deleted?
For the transfer, you'll probably want to use something like rsync or 'scp', but ftp could work as well. I tend to use scp with key-based authentication so that it can be automated without requiring you to enter a password every time.
You could also mount the remote filesystem with NFS or sshfs so that it can be treated like a local filesystem. That might be easier in this case, since you may have to script the removal of the OLD backup files.
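As a rough starting point (the directory names and remote host are placeholders, and this assumes scp key-based authentication is already set up), something like this grabs the newest file in each directory by modification time and copies it across:
#!/bin/bash
# For each data directory, find the most recently modified file and scp it over
for dir in /data/dir1 /data/dir2; do
    newest=$(ls -t "$dir" | head -n 1)
    remote_dir="/backup/$(basename "$dir")"
    scp "$dir/$newest" "user@remotehost:${remote_dir}/"
done
Cleaning up the OLD files on the remote side would still need to be handled separately, which is where mounting the remote filesystem (NFS/sshfs) might make life easier.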
I see! Okay, so one of the common issues with this is that when Linux is installed, it puts a boot loader into the MBR of the primary hard drive. You can fix this by running the Windows recovery console from the installation DVD. Here's a link that may help:
http://www.ehow.com/how_4836283_repair-mbr-windows.html
After that, you just have to decide what to do with that 15GB partition. You can probably use partition magic or something to re-claim it for your Windows installation (I don't think there's anything natively built into Windows for this), or re-format it and give it a drive letter.
I hope this helps!
Hi! Have you looked at using os.kill()?
That depends on what you mean by "newest"... Rsync actually checks the local and remote file and ONLY sends the file over if it's been updated.
That way you aren't sending the entire directory every time, only the files that have changed since the last rsync.
Hi madtorahtut!
Have you tried rsync? Rsync does exactly what you're describing, with a very simple command line. Here's an example... If you want to have /data/ on Server A sync'd with /data/ on Server B (deleted files are deleted, changed files are changed, etc...) you would do something like this:
## I'm running this on Server A, the source, and supplying
## Server B as the remote destination
rsync -av --delete-after /data/ user@serverB:/data/
Rsync has a lot of possibilities. You can do incremental backups, or you can use your backup script to keep multiple, dated copies of the data (if disk space allows).
I hope this helps!
-G
Good point! I wasn't thinking about suspend. Here's a link that suggests that you need 2x the amount of installed RAM for suspend-to-disk to work.
That page also links to this article, which hints at something else I've read, which is that using a suspend file is an option, instead of using swap.
Further research suggests that the standard swsusp and uswsusp modules might not support suspending to a file (except in the case of a swap file), but TuxOnIce (aka swsusp2) does. I may or may not be more confused than when I started...
I hope this helps!
Hi wapcrimers,
There isn't really an "uninstall" for Linux. How was it installed on this machine? Is it on a separate disk or partition?
It also might help to know what kind of problem you are experiencing. Based on the information you've provided so far, all we can really do is guess at what needs to be done.
Thanks!
-G
Hi Moncky,
This is something I've been interested in too, and until recently there hasn't been any real "standard" to go by. With as much RAM as you can get into a machine these days, the recommended swap size varies wildly depending on how the machine will be used.
After I read your post, I went searching to see if anyone had come up with any guidelines, and I came across a blog post that refers to a table in the RedHat Linux installation manual. This looks pretty reasonable to me. Here's an excerpt:
In years past, the recommended amount of swap space increased linearly with the amount of RAM in the system. But because the amount of memory in modern systems has increased into the hundreds of gigabytes, it is now recognized that the amount of swap space that a system needs is a function of the memory workload running on that system. However, given that swap space is usually designated at install time, and that it can be difficult to determine beforehand the memory workload of a system, we recommend determining system swap using the following table.
Recommended System Swap Space:
4GB of RAM or less: a minimum of 2GB of swap space
4GB to 16GB of RAM: a minimum of 4GB of swap space
16GB to 64GB of RAM: a minimum of 8GB of swap space
64GB to 256GB of RAM: a minimum of 16GB of swap space
…