Hi all,

I am new to Linux/shell scripting and have only moderate knowledge.

In my script, I need the execution time of a command (say 'ls') at millisecond precision. For this, I would like to use the 'time' command. The problem is: how do I save the output of the time command in a variable? (For other commands, this can be done easily.)

Going further, the output of the 'time' command is:

real 0m0.003s
user 0m0.004s
sys 0m0.000s
In my script, I would like to save only the middle line, "user 0m0.004s", to the variable, but I am unable to find a way to do it. Is this something related to 'awk' or 'sed'?

Can someone help me out with this issue? Thanks a lot.


Try this: x=$( (time sleep 1) 2>&1 ); echo $x (note the space between $( and the inner ( so the shell does not mistake it for arithmetic expansion). Here's what is going on: the shell's (bash in my case) time command is "reporting" timing info directly to the console.

  • Putting () around the command means "run in a subshell"; the report then lands on the subshell's stderr, where it can be redirected.
  • The 2>&1 redirects stderr onto stdout.
  • The $(...) lets us capture all of the above into the variable (use backticks: `...` for plain sh).

This is all just for the Bourne family of shells (sh, ksh, bash). If you are using csh or related, you can do the same sort of thing, but the syntax is different.
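To get just the "user" line you asked about, the capture trick above can be combined with awk. A minimal sketch, using sleep 1 as a stand-in for your real command:

```shell
#!/bin/bash
# Capture the full timing report (sleep 1 stands in for the command you care about)
t=$( (time sleep 1) 2>&1 )

# Keep only the line that starts with "user"
user_line=$(printf '%s\n' "$t" | awk '/^user/')
echo "$user_line"
```

Quoting "$t" matters here: an unquoted echo $t would collapse the three report lines into one, and awk could no longer match line by line.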

Things to consider:

  • chant man bash (substitute your own shell) and look for something like 'time format' (in bash: TIMEFORMAT) to see some useful options
  • chant man date and look at the format options. If your version has milliseconds (mine does not), you can get where you want with much less effort (just run it before and after your command and do some arithmetic). This is my preferred option for most things.
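The second option above might look like this. A sketch assuming GNU date, whose %N format gives nanoseconds (not all versions have it, as noted); /tmp is just a placeholder path:

```shell
#!/bin/bash
# Millisecond timing via GNU date: %s%N prints seconds+nanoseconds since the epoch.
start=$(date +%s%N)

ls -R /tmp > /dev/null 2>&1   # the command being timed (placeholder)

end=$(date +%s%N)
elapsed_ms=$(( (end - start) / 1000000 ))   # nanoseconds -> milliseconds
echo "elapsed: ${elapsed_ms} ms"
```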

Hi griswolf,

Thanks a lot for your reply. I'll try the solution you have posted.


That worked well with some changes.

Thanks a lot.

Going further with my earlier post, I am a little confused about the output of the time command. I can't understand the difference between the "real time", "user time", and "sys time" it shows. Can someone please explain the difference between these?

I need the execution time of commands like "ls -R /<PATH>", that is, the time taken to display all the files under that PATH recursively.

I have gone through some Google searches and found that the required time should be the "user" time, and therefore I am using the "user" time displayed by the time command to calculate the total time taken by "ls -R /<PATH>". Please let me know if I am doing this right? Thanks again.


Then you want the 'real' time. In essence, the other two measure the amount of time your CPU(s) actually spent working on the problem. For disk I/O, that is likely to be significantly less than 'wall clock' time, which is what I think you want; the closest you can come is the 'elapsed real time', which is very close indeed.

I found the next several lines here, but have added my emphasis:
The time command runs the specified program command with the given arguments. When command finishes, time writes a message to standard output giving timing statistics about this program run. These statistics consist of (i) the elapsed real time between invocation and termination, (ii) the user CPU time (the sum of the tms_utime and tms_cutime values in a struct tms as returned by times(2)), and (iii) the system CPU time (the sum of the tms_stime and tms_cstime values in a struct tms as returned by times(2)).
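You can see the distinction for yourself with a command that waits but does almost no CPU work. A quick sketch (bash's time keyword assumed):

```shell
#!/bin/bash
# sleep spends its second waiting, not computing, so the report shows
# 'real' at roughly 1 second while 'user' and 'sys' stay near zero.
out=$( (time sleep 1) 2>&1 )
printf '%s\n' "$out"
```

For your ls -R case the gap comes from disk I/O instead of sleeping, but the effect is the same: 'real' includes the waiting, 'user' and 'sys' do not.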
