Sometimes it will work if you just replace your exec call with a simple direct call, so instead of:
[QUOTE]$SHELL[/QUOTE] <-- Since you're not exec'ing (i.e. replacing the current process with your new one), you shouldn't get the session-timeout error, since you'll just be running your new bash on top of the old one. The downside is you have to exit the shell twice to log out.
And, since it works for your ssh, you can ssh in and check the status of the $- variable. Note what that is, then change your profile to simply run the new shell rather than exec it, and check the value of the variable again. This will show you which options are automatically set when you log in with bash via ssh versus a direct login. If they're different, you can add a simple if-conditional to your profile to decide whether to exec your new shell or just run it straight-up and suffer the double exit ;)
Just FYI, this is the kind of thing you can expect to see when you query the $- special parameter; the output should differ between the two connections.
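For instance, a quick way to compare (the branch below is only a sketch that prints which way a profile would go; in a real ~/.profile you'd exec, or plainly run, your new shell instead of echoing, and you'd test for whatever difference you actually observe):

```shell
# Print the current shell's option flags; compare this between an ssh
# login and a direct login
echo "$-"

# Sketch of the profile branch: the "i" (interactive) flag is only an
# example test
case "$-" in
    *i*) echo "would exec the new shell" ;;
    *)   echo "would run the new shell on top (double exit)" ;;
esac
```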
You're probably running into an issue with -p if you're just passing a plaintext password. It expects the password to already be encrypted, as produced by crypt (apologies if my assumption is incorrect).
One thing I was thinking is that it might be easier to create the script to add the 100 users, but with two lines per user (or a comma-separated single line): one doing the standard useradd and the other using "passwd" with the "--stdin" option, which you can automate with a pipe.
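A dry-run sketch of that loop (the CSV name and the username,password format are my assumptions, and --stdin is specific to Red Hat-style passwd; drop the echoes and run as root to do it for real):

```shell
# Assumed input format: one "username,password" pair per line
printf 'user01,secret1\nuser02,secret2\n' > userlist.csv

while IFS=, read -r user pass; do
    echo useradd "$user"                              # dry run: print only
    echo "printf '%s' '$pass' | passwd --stdin $user" # dry run: print only
done < userlist.csv
```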
That can be done. Try this first, though, since I find that "human readable" output is usually off: grab the information in KB and convert that to MB yourself. It will probably be more accurate than the output you generally get.
Which leads me to ask: do you need a great degree of precision in your output, or are you looking for broad strokes (like 1.2 MB being fine even if it's technically 1.2475 MB)?
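For example, a sketch of the KB-to-MB conversion (the path is just a placeholder):

```shell
# Ask du for plain kilobytes and do the division yourself, keeping as
# many decimal places as you like
du -sk /tmp | awk '{ printf "%.4f MB\n", $1 / 1024 }'
```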
Just in case you have to do it the second way, this form works in bash 3.x (possibly earlier versions) and avoids the variable scoping problem while maintaining the integrity of the command you were feeding to the pipe.
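In case that form didn't come through in the post, one bash-3.x construct that avoids the subshell scoping problem is process substitution; a minimal sketch:

```shell
# Process substitution keeps the while loop in the current shell, so
# the counter is still set after the loop ends
count=0
while read -r line; do
    count=$((count + 1))
done < <(printf 'a\nb\nc\n')
echo "$count"    # 3 (piping into the while loop would leave it 0 here)
```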
Actually, if you don't know where a particular process logs to, figuring that out from the process itself can be done, but it's not necessarily simple (although, hey, sometimes it is).
I would suggest that you do the log search prior to killing the PIDs associated with the process. Probably the best tools to use (although they may be a bit bulky) would be "lsof" or "truss" ("strace", "xtrace", etc. are all pretty much the same; which one you have depends on your Linux distro or Unix flavor).
With lsof you could run a simple query against the process ID,
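For example (using this shell's own PID, $$, as a stand-in for the one you'd get from ps; lsof isn't installed everywhere, hence the guard):

```shell
# List every open file for the process; log files show up as regular
# files opened for writing
if command -v lsof >/dev/null 2>&1; then
    lsof -p $$
else
    echo "lsof not installed here"
fi
```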
and then sift through that output to find any open files associated with the process (eyeball it first, then script out the grep).
For truss, strace, xtrace, etc., try the following (I'll use truss for the example, but check the man page for whatever statement- or execution-tracing tool your distro comes with):
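A sketch (the PID is made up, and attaching to another process needs appropriate permissions, so the only line actually run below just confirms a tracer is present):

```shell
# Watch the process's system calls; the open/write calls usually give
# the log file away. 1234 is a hypothetical PID:
#   truss -p 1234     # Solaris and friends
#   strace -p 1234    # the common Linux equivalent
command -v strace >/dev/null 2>&1 && strace -V | head -n 1 || echo "no strace here"
```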
That's an excellent suggestion, comatose. I also didn't realize what a huge pain this whole thing was for you, glamiss.
I think raising the nice value above the default of 0 will make your program hog the CPU less, but I wouldn't recommend rewriting the shell script in Tcl, since a shell script is, essentially, just using the shell rather than adding another layer of application on top.
As I mentioned (and I agree with comatose), I would look for a solution that breaks the problem down. It might be easier to pick out when it's not working properly (if it's a well-behaved program, it may even complain) than to try to find it amidst a ps-sea of random procs.
Grab the lsof output on /dev/null (or whatever info you want) every 5 minutes or so for a day, then go over that and see if you can find some likely suspects. lsof should show you which processes/users are tapping /dev/null constantly.
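A sketch of one such snapshot (the log path is arbitrary; put the same lines in a cron job to get the every-5-minutes part):

```shell
# Append a timestamped list of who currently has /dev/null open
date >> /tmp/devnull_watch.log
if command -v lsof >/dev/null 2>&1; then
    lsof /dev/null >> /tmp/devnull_watch.log 2>/dev/null
fi
```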
Another crazy thing you could do - if you don't think it'll get you in trouble - would be to lock down /dev/null and, hopefully, make the process that's goofing with it go nuts ;)
If my suggestion sounds glib, I apologize. I'm just thinking you could figure this out using an alternate method and stick with the no-pain quick-fix until you do. Since the script doesn't attempt to find what program changed the perms, you don't need to make it so complicated.
It might have to do with the user-agent string that wget passes to the site (I think it's something like "Wget/<version>"). You can set the --user-agent= option to pass anything; just be careful that you don't use Mozilla, as they're litigation-happy (more details on wget's options page regarding getting sued by them).
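A sketch (the URL and agent string are placeholders; --user-agent is a documented wget option):

```shell
# Fetch with a custom agent string instead of wget's default; the
# trailing || keeps this from erroring out on a box with no network
wget --user-agent="MyFetcher/1.0" -O /tmp/page.html http://www.example.com/ \
    || echo "fetch failed (no network here?)"
```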
Yes, I did and (thank you so much for the find output) I realized that I have been misunderstanding you this entire time. I wrote you back via PM so as not to take up too much space, with one final question regarding what you need to find with the find statement.
Needless to say, what you wanted makes complete sense now (I was misunderstanding at what level you wanted to create the zips, probably because I assumed the events subdirectories started with the literal "events" rather than the names of events) ;)
Very basically, it's an integer used to identify an open file within a process. In Unix/Linux/POSIX, 0, 1 and 2 are generally reserved for STDIN (standard input), STDOUT (standard output) and STDERR (standard error), in that order.
The integer (or file descriptor) is required as an argument to read, write and close operations. The integer itself is created by an open operation.
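You can watch that whole open/write/close cycle from the shell, picking descriptor 3 for ourselves (the file path is arbitrary):

```shell
exec 3> /tmp/fd_demo.txt    # open: fd 3 now refers to the file
echo "hello" >&3            # write through the descriptor
exec 3>&-                   # close it
cat /tmp/fd_demo.txt        # -> hello
```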
Another way to go about it would be to do the math incrementally, within the loop, to avoid any possibility of losing the variable's value to scoping issues when you exit the loop. You can use $c as your counter variable there as well, so you don't run your function until all 3 answers have been received:
[CODE]for (( c=1; c<=3; c++ )); do
    read -p "Please insert Grade #$c: " GRD1
    SUM=$(( SUM + GRD1 ))       # incremental math inside the loop
    if [ $c -eq 3 ]; then echo "Total: $SUM"; fi   # or call your function here
done[/CODE]
Okay - good :) As long as all you got back were directories and they were all named events1, events2, etc., we're fine. If possible, can you PM me the output? If you're getting back any results you wouldn't expect, and/or the output has spaces or single/double quotes, etc., that could be the issue.
I'd love to take a look at that output, since I can't replicate the issue on my computer.
Feel free to cut and paste and send me a PM. Since that's where you dead-end, I need to take a look at it to determine what step to take next.
If any of it's confidential, you can also just replace alphabetical characters with different alphabetical characters, etc. It's very important that the structure of the results remains intact, though. For instance, the space in a file name like "hi there" could be the issue, but I'd never notice that space if the name were sent to me collapsed as "****" :)
If we can "not" run the rest of the script, we'll be able to see if the "find" command is returning any values or not. I'm assuming that it's not, but just want to be sure and can't try it on your machine :)
If a bare "find" on that directory with just -type d (no -name pattern) returns nothing, then the starting directory doesn't contain any directories at all (it should return at least one, since find lists the starting directory itself).
[QUOTE]find /domains///*****.com/public_html/storage/events/ -type d -name "events[0-9]*"[/QUOTE]
If this returns nothing, then we know the problem is with the -name glob (the literal "events" followed by a digit and then anything: events1, events2, events54, etc.)
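You can sanity-check that glob against a throwaway tree (the paths here are made up):

```shell
# Build a scratch tree; the -name pattern should match only the two
# "events<digit>" directories, not "other"
mkdir -p /tmp/evtest/events1 /tmp/evtest/events2 /tmp/evtest/other
find /tmp/evtest -type d -name "events[0-9]*"
```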
Let me know how that goes and post the output if you can. Your Ubuntu should be all right; I'm on 8.04 Hardy now, but I don't recall the basic find command's functionality changing much (if at all) between distros.