Hi,

Something new from me:

I have an application which can act as either server or client (they are launched on separate PCs). I need the server's stdout because it displays some data for evaluating their communication. I start the server through subprocess.Popen, giving its stdout to PIPE:

from subprocess import Popen, PIPE
import os

# Start the subprocess
cmd = 'myAppli arg1 arg2 arg3'
p = Popen(cmd, shell=True, stdin=PIPE, stdout=PIPE)

# I know when the client is done sending, and then I kill p
if ClientDoneCondition:
    os.system('TASKKILL /PID ' + str(p.pid) + ' /F')  # the only way I found to kill subprocesses

# Now I can analyse stdout, line by line preferably
line = p.stdout.readline()
while line.find("what I want") == -1:
    line = p.stdout.readline()
print 'Final line', line

Normally I know which part of the final line never changes, so I can recognize it. I need that marker because readline() blocks if I try to read after all the writing is done: the server never exits on its own, and the forced kill doesn't put an EOF at the end of stdout.

The problem is that the server can lose its connection, or something else can happen that prevents it from ever printing the last line where my condition is met. So what I'm trying to ask is:
- Is there a way, other than readline(), to go through stdout without blocking on the missing EOF?
That way I could recover all the existing lines and conclude my analysis from the information they contain...


Note: I tried using a thread, but since threads can't be killed in Python, and the readline() inside blocks even with a timeout, I got nowhere...

Thanks,

T


Would it be possible for you to simply use a read() to bring in all the information that you can and then worry about parsing it once that operation has completed?

After storing what you've read, you can perform a split('\n') and then iterate over the result, which behaves almost exactly like readline().

Alternatively you could use readlines(), which is basically the same as doing a read() and then splitting it by line (it returns a list).
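A rough sketch of that suggestion (reusing the setup from the original post, so the same caveats apply):

from subprocess import Popen, PIPE

p = Popen('myAppli arg1 arg2 arg3', shell=True, stdout=PIPE)

# slurp everything the process wrote, then parse it in memory;
# note that read() only returns once the pipe reaches EOF
output = p.stdout.read()
for line in output.split('\n'):
    if line.find("what I want") != -1:
        print 'Final line', line
        break

readlines() would behave much the same: it also reads up to EOF, but returns the list of lines directly.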

The parsing is not the issue. The issue is that read(), or readline() repeated until the end, will only stop upon finding an EOF in stdout; but since the application is a server and never exits, stdout is incomplete from its point of view -> no EOF...

I had chosen readline() exactly because when I reach the last line I can recognize it and stop reading. But when that condition is never met, it hangs... read() simply hangs altogether, and I would never get a result.


I would need a function which doesn't block on a nonexistent line, or the possibility of a timeout... those are the ideas that have crossed my mind.

I found a... silly workaround. I don't like it, it's simply not elegant, but I don't know what else to do...

I figured I need something other than the pipe to store the stdout, so I opened a file into which I write the stdout; after I kill the process I parse the file, line by line, until I find an empty line.

The fact that I can't add the EOF symbol in Python to really end the file (or even the stdout when I want)... well, there just seems to be too much going on in subprocesses.
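In code, the workaround looks roughly like this (the log file name is invented):

from subprocess import Popen
import os

# redirect the server's stdout to a real file instead of a pipe
log = open('server_stdout.txt', 'w')
p = Popen('myAppli arg1 arg2 arg3', shell=True, stdout=log)

# ... later, when ClientDoneCondition is true ...
os.system('TASKKILL /PID ' + str(p.pid) + ' /F')
log.close()

# now parse the file, stopping at the first empty line
for line in open('server_stdout.txt'):
    if not line.strip():
        break
    # ... analyse line ...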

So, I found a workaround, but I don't seem to be getting any more bright ideas for unblocking the pipe's readline().

Anybody?

What are you using to connect to your server? I've worked with paramiko and know that it provides a universal timeout, which will throw a TimeoutException when any transaction with the server/client goes beyond the desired limit... Perhaps something along those lines would get you what you need.

It is a traffic performance evaluator: IPerf. It seems to hang forever, even though its client exits cleanly after sending what was ordered of it.

I haven't much experience with the details of IPerf, I'm only implementing its automation, but... it seems that there's no connection timeout. I've been looking into its options a bit, to try and find either a run-time limit for a single client or... a clean exit... haven't got anything yet...

Just found this:
-P, --parallel # $IPERF_PARALLEL The number of connections to handle by the server before closing. Default is 0 (which means to accept connections forever).
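If it does what it says, starting the server with -P 1 should make it exit after handling one client, which would finally put an EOF on its stdout. Something like this, assuming iperf accepts -P on the server side as quoted above (untested):

from subprocess import Popen, PIPE

p = Popen('iperf -s -P 1', shell=True, stdout=PIPE)
for line in p.stdout:   # the loop ends when the server exits -> EOF
    print(line.rstrip())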

But I'll try tomorrow... Thanks for hanging on... dialogue helps me a lot to solve things by brainstorming :)

I am the same way. Before coming to this forum I used to prowl another one, which has since degraded into abysmal territory and to which I will never return; I would often ask a question and, simply by typing out my query, brainstorm with myself and solve the problem before a single member was able to reply.

So I'm right there with you ;)

;) I'll tell you if it worked as soon as I try it :) The same thing has happened to me a few times here... Hope we don't get to abysmal though :)

Works! The server exits very nicely. Phew, I shouldn't have just copied the commands others used, but all's well that ends well, right?

So, solved, but only because the application now exits; readline() would still block without an EOF. I took note of that :)

C ya

A possible Linux-only solution:

On Linux, there is the standard module commands, so you could write:

from commands import getstatusoutput

cmd = "myAppli arg1 arg2 arg3"
status, output = getstatusoutput(cmd)
if not status:
    ...  # process output
else:
    raise Exception("command '%s' failed with exit status '%s'" % (cmd, status))

Maybe you could make a pseudo-module 'commands' using your trick. For example, you may be able to write to a StringIO instead of a file.

Found this thread on a Google search.

I have the identical problem, though I can't solve it as you seem to have, by using a short-lived server.

I have a binary compiled program (let's call it "kernel") that is to be invoked from a python script. Every once in a while, kernel will spit out a line of text. I want the python program to, among other things, monitor the stdout of the kernel, grab the text it spits out and do some processing with it.

The problem is the blocking read calls:

import subprocess
kernel = subprocess.Popen("kernel", stdin=subprocess.PIPE, stdout=subprocess.PIPE)
msg = kernel.stdout.readline()  # blocks

I need to be able to read from kernel's stdout stream in a way that is non-blocking on two fronts:

1. If no data has been outputted, don't block, just return
2. If data is present, read it up to the newline character and then return, i.e. don't wait for an EOF

I think there are some lower-level python IO routines that can allow for these kinds of reads, but since the Popen object provides stdout only as a file, I seem to be restricted to file IO (which doesn't provide the kind of reads I'm trying to use).
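A sketch of the kind of lower-level read I mean, POSIX-only, using fcntl to put the pipe's descriptor in non-blocking mode (untested, so take it as an assumption on my part):

import fcntl, os, subprocess

kernel = subprocess.Popen("kernel", stdin=subprocess.PIPE,
                          stdout=subprocess.PIPE)

# switch the pipe's file descriptor to non-blocking mode
fd = kernel.stdout.fileno()
flags = fcntl.fcntl(fd, fcntl.F_GETFL)
fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)

try:
    chunk = os.read(fd, 4096)   # returns whatever is available right now
except OSError:                 # EAGAIN: no data yet, don't block
    chunk = ''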

Would appreciate any thoughts!

-v

In fact I made a little program which can read from a subprocess's stdout in a non-blocking way. I start a thread which reads lines from the pipe and puts them in a Queue object. This is encapsulated in a class Pipe. The thread is stopped by closing the pipe, which produces an exception in the thread (and in the distant process). Here is the code:

#!/usr/bin/env python
# pipe.py
try:
  from queue import Queue, Empty
except ImportError:  # python < 3.0
  from Queue import Queue, Empty
from threading import Thread

class TimeoutError(Exception):
  pass

class Pipe(object):
  """A wrapper around a pipe opened for reading"""
  def __init__(o, pipe):
    o.pipe = pipe
    o.queue = Queue()
    o.thread = Thread(target=o._loop)
    o.thread.start()
  def readline(o, timeout=None):
    "A non blocking readline function with a timeout"
    try:
      return o.queue.get(True, timeout)
    except Empty:
      raise TimeoutError
  def _loop(o):
    try:
      while True:
        line = o.pipe.readline()
        o.queue.put(line)
    except (ValueError, IOError):  # pipe was closed
      pass
  def close(o):
    o.pipe.close()
    o.thread.join()

def testme():
  """Start a subprocess and read its stdout in a non blocking way"""
  import subprocess
  prog = "pipetest.py"
  child = subprocess.Popen(
    "python %s" % prog,
    shell=True,
    stdout=subprocess.PIPE,
    close_fds=True,
  )
  pipe = Pipe(child.stdout)
  for i in range(20):
    try:
      line = pipe.readline(1.45)
      print("[%d] %s" % (i, line[:-1]))
    except TimeoutError:
      print("[%d] readline timed out" % i)
  pipe.close()
  print("pipe was closed")

if __name__ == "__main__":
  testme()

The distant program (your 'kernel' program) was this:

#!/usr/bin/env python
# pipetest.py
from itertools import count
from time import sleep
import sys

try:
  for i in count(0):
    print("line %d" % i)
    sys.stdout.flush()
    sleep(3)
except IOError:  # stdout was closed
  pass

With a little effort, I think this Pipe class could be turned into a very nice tool with read and readlines methods, and also better control of the thread (when should we test if the thread is alive, if the pipe was closed, etc.). Try to run pipe.py on your system! If I have the time, I'll put an enhanced class in the code snippets.
Note that this doesn't work without the sys.stdout.flush() in the distant program.

In fact the above solution doesn't work that well, because the program sometimes crashes when it tries to close the pipe while the thread is reading. I think a possible solution would be to start a second subprocess which would read the first subprocess's stdout and redirect it to a socket. One would then read from the socket in non-blocking mode.

That pipe class looks pretty good. I'll try implementing it.

Can you elaborate on the situation in which it fails? Are you referring to the case when the distant program closes stdout?

Unfortunately, it happens when I call the Pipe.close() method. Python crashes without a traceback. I think you should try the socket method I described above.

I feel like maybe you simply shouldn't be closing the pipe that was passed into the constructor (the first line in your Pipe.close() method). Since this is stdout, it belongs either to the distant process or at the very least to the Popen object returned by subprocess. Since your Pipe wrapper didn't open the object it is reading from, is there a reason it should be responsible for closing it?

I suspect the join() statement in close() should work fine. I.e. it would enqueue everything it can read from the distant process, and then wait for EOF (which would raise IOError in _loop()).

The problem is that without the call to close(), there is no way to stop the thread. The idea of close() was to raise an exception in the thread.
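One possible compromise, and this is just my assumption rather than something I've tested against the crash: mark the reader thread as a daemon, so that it no longer keeps the interpreter alive and close() becomes unnecessary when the main program exits.

# in Pipe.__init__, before starting the thread:
o.thread = Thread(target=o._loop)
o.thread.daemon = True  # a daemon thread won't keep the interpreter alive
o.thread.start()

It doesn't stop the thread mid-read, though; it only lets the program terminate with the thread still blocked.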

I have a working solution for Linux only, which uses a pair of sockets instead of a pipe:

import subprocess
import socket

def testme():
  """Start a subprocess and read its stdout in a non blocking way
  Linux only: we use socketpair, and we pass Socket.fileno() as
  subprocess Popen's stdout argument.
  """
  sock, childsock = socket.socketpair()
  prog = "pipetest.py"
  child = subprocess.Popen(
    ["python", prog],
    # shell=True,
    stdout=childsock.fileno(),  # instead of subprocess.PIPE
    # close_fds=False,
  )
  sock.settimeout(1.3)
  for i in range(12):
    try:
      data = sock.recv(1024)  # replaces pipe.readline(1.7)
      print("[%d] %s" % (i, repr(data)))
    except socket.timeout:
      print("[%d] recv timed out" % i)
  sock.shutdown(socket.SHUT_RDWR)
  sock.close()
  # childsock.shutdown(socket.SHUT_RDWR)
  # childsock.close()

if __name__ == "__main__":
  testme()

The program run as the child process was:

#!/usr/bin/env python
from time import sleep
import sys, os
import random

def eprint(*args):
  s = "pipetest %d: %s\n" % (os.getpid(), " ".join(str(arg) for arg in args))
  sys.stderr.write(s)
  sys.stderr.flush()

eprint("starting")
try:
  for i in range(30):
    print("line %d" % i)
    sys.stdout.flush()
    sleep(random.uniform(1, 3))
except IOError as e:  # stdout was closed
  eprint("IOError: ", e)
eprint("exiting")