Sorry for the lousy title, "InterProcess Communication" was too long

Basically, I'm looking to use the native _winapi and msvcrt modules to launch an integrated python interpreter and communicate with it over pipes
I can't find anything on Google or DDG, and I've been searching for days, but nobody uses these modules for some reason...

I already have the foundation with a 1-way Parent->Child pipe that works, but it's giving me a few issues:
1: I can't seem to get a 2nd Child->Parent pipe to run without the program hanging
2: the output of the subprocess only prints after the main process is closed (tbh though, I'd rather have the main process actively print the subprocess feedback as it arrives, even if it's delayed a few ns; see the sketch below)
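
just to illustrate the pipe behaviour behind issue 2, here's a self-contained sketch (single process, using os and threading, which the parent has even though my child interpreter doesn't): a plain read() on a pipe blocks until every write handle is closed, while readline() returns as soon as a full line shows up:

import os, threading

rfd, wfd = os.pipe()
reader = open(rfd, 'rb', closefd=True)
writer = open(wfd, 'wb', closefd=True)

def fake_child():
    writer.write(b'first message\n')   # newline-framed messages
    writer.flush()
    writer.write(b'second message\n')
    writer.flush()
    writer.close()                     # a plain reader.read() would only return here

threading.Thread(target=fake_child).start()

for line in iter(reader.readline, b''):  # yields each line as it arrives, b'' at EOF
    print('got:', line.rstrip(b'\n').decode())
reader.close()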

Note that the code came from the CPU-heavy multiprocessing module.
(I just reduced the processing, so this code wastes FAR fewer CPU cycles)

# -*- coding: utf-8 -*-

from _winapi import CreatePipe, CreateProcess, CloseHandle
from msvcrt import open_osfhandle
from nt import getpid

# this is just a mock-up path to a local interpreter, since __file__ doesn't include the drive letter on wine
CWD = 'Z:%s'%__file__.replace('test.py','').replace('\\','/')
exec_path = '%sapp/'%CWD
executable = '%spython.exe'%exec_path

rhandle, whandle = CreatePipe(None, 0) # Parent to Child

prog = '''
from _winapi import OpenProcess, DuplicateHandle, GetCurrentProcess, CloseHandle
from _winapi import PROCESS_DUP_HANDLE, DUPLICATE_SAME_ACCESS, DUPLICATE_CLOSE_SOURCE
from msvcrt import open_osfhandle
import sys
sys.path = ['.\\DLLs','%s'] # because -I doesn't "Isolate" as well as it's supposed to, and we still have environment paths here.

source_process_handle = OpenProcess(PROCESS_DUP_HANDLE, False, %s)
try:
    handle = DuplicateHandle( source_process_handle, %s, GetCurrentProcess(), 0, False, DUPLICATE_SAME_ACCESS | DUPLICATE_CLOSE_SOURCE)
    from_parent = open( open_osfhandle( handle, 0 ), 'rb', closefd=True ) # not using `with` because I intend to open a 2nd pipe to write to

    exec(from_parent.read())

    from_parent.close()
    sys.exit(0)
finally:
    CloseHandle(source_process_handle)'''%(exec_path, getpid(), rhandle)

cmd = '%s -I -S -c "%s" --multiprocessing-fork'%(executable, prog)

to_child = open( open_osfhandle( whandle, 0 ), 'wb', closefd=True ) # not using `with` because I intend to open a 2nd pipe to read from

# start process
try:
    subprocess_handle, thread_handle, pid, tid = CreateProcess( executable, cmd, None, None, False, 0, None, None, None)
    CloseHandle(thread_handle)
except:
    CloseHandle(rhandle)
    raise

# send information to child
to_child.write(b'print("success!")')

for i in range(10000000): i+i+1 # give the subprocess enough time to complete

input('Press Enter to Exit...\n\n')

CloseHandle(subprocess_handle)
to_child.close()

Is there any way I can introduce a 2nd pipe to have proper interprocess communication??

EDIT:
btw, to those looking to use this as an example, I highly recommend you NOT send raw python code through the pipe like I'm doing here with to_child.write(b'print("success!")')
What I'm doing is for testing purposes only, and should not be used in practical cases as it's extremely insecure!

the multiprocessing module uses the pickle module, but that's a joke because you can easily get the auth key and unpickle what goes over the pipe...
see this image:
https://lh3.googleusercontent.com/R8vYH1reL4GF4KTc2GrO88CsMrIVshMhZVTmYOG5naZAmhDH-NACHSB7XOIzriFrNi8YiyD4ic1crLFV5sRu32ELSTw-gU4O-DVnm8H0wtmZAvzmDM5qhnmR5klDfDIxIy5aNgYWG-7vVHq9uTMZp-uRLAdYdzSEs_urbOLB6tfiDFbelCSpPXdiH10xs9bj_EnCIjTrcQS06qPtNSiZR1BvmruxEd_24qaqcQODtiBCPB0MPxD3G_-73xGFk3QRqYr0JJpgNSZk7eaVRmED3QooxY245k9jv_g-xeiRvTGqVMWGbQMDzDjsiRLOFdB4l42370Gf-m9rWTNISfuVA0H_6FoZxdrYh6MzX8tz3j6bztHc51h_PLyGHaWlk70JOsXcvG5OF-3TJridaUTldwbyqNUTmz-sPFUOghPdzmYc5WBocNzaiONiHcfCVsDTJOV3dYWKii-qs_aroQKE3ZpdkcjGuw2t0VCuDvrN96ZtCO1YYbRCTKi-97wVUFXS26v9NeikAVCiTazAwdyTr98byg6WORCel6dZ2cZaAgG6vHQCr7OpLKhJpUQ3lNmmzTcNFpPW1m-MljOn1bFotn70Fuz2hs0p4M4LxKzWTI-0ps6-y5uvYFms6DUpLrQ77RMlUqHDHuNZtFOaFSrekr84hFXcka0MkFsnNMZU_omaqQXJ2pOZiNsr=w896-h267-no

so it's essentially just wasting a ton of CPU cycles to send code over the pipe.

also, fun fact
from the image, you can use cp.__class__ or type(cp) to gain access to the deleted MainProcess(BaseProcess) class.

But to get back on topic, what you should do is define code in the subprocess that analyzes the input read from the pipe and interprets commands from it that are specific to your app with the supplied data.
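
for example, here's a minimal sketch of that kind of dispatcher on the child side (the command names and the newline framing are made up, and it assumes a from_parent/to_parent pair of pipe file objects like in the 2-pipe code further down):

COMMANDS = {
    b'PING': lambda data: to_parent.write(b'PONG\n'),
    b'ECHO': lambda data: to_parent.write(data + b'\n'),
}

for line in iter(from_parent.readline, b''):        # one newline-framed command per line
    cmd, _, data = line.rstrip(b'\n').partition(b' ')
    handler = COMMANDS.get(cmd)
    if handler is None:
        to_parent.write(b'ERR unknown command\n')
    else:
        handler(data)
    to_parent.flush()
to_parent.close()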

I should probably note, the interpreter I'm using is a portable Anaconda 2.3.0 interpreter (I need to upgrade to 2.5.0) specifically tailored to work with my API, and as such it's designed to be as minimal as possible (the app/ folder is only 10.8 MiB).
it does NOT include the standard library (so no os module), nor does it include builtin C modules I won't be using (such as _tkinter).

all it has is 2 standard modules: io (modified to remove abc and _weakrefset) and codecs (which includes the encodings/ package)

the app/ folder root looks like this:
https://cdn.discordapp.com/attachments/161204326218465280/491923337434628096/unknown.png

and to show the only modules I have to work with:
https://cdn.discordapp.com/attachments/161204326218465280/492146476760301578/unknown.png

everything here is either built in C or built directly into python34.
it's still enough to start a subprocess and import what I need, as the above code shows, and I've already reimplemented most of what's actually needed from the standard library in my API.

so yeah, know that I can't use os or any of that stuff. ;)
(most of the standard library is CPU-heavy anyways, which is why it was removed from my API's environment)
^ regardless, it shouldn't be needed as my API implements better automated detection for its libs and plugins.

EDIT: it's official, image support is broken.

so just to show exactly what I'm trying to do (this code hangs):

# -*- coding: utf-8 -*-
from _winapi import CreatePipe, CreateProcess, CloseHandle
from msvcrt import open_osfhandle
from nt import getpid

# this is just a mock-up path to a local interpreter, since __file__ doesn't include the drive letter on wine
CWD = 'Z:%s'%__file__.replace('test.py','').replace('\\','/')
exec_path = '%sapp/'%CWD
executable = '%spython.exe'%exec_path

crhandle, pwhandle = CreatePipe(None, 0) # parent -> child (input pipe)
prhandle, cwhandle = CreatePipe(None, 0) # child -> parent (output pipe)

subprocess_code = '''
from _winapi import OpenProcess, DuplicateHandle, GetCurrentProcess, CloseHandle
from _winapi import PROCESS_DUP_HANDLE, DUPLICATE_SAME_ACCESS, DUPLICATE_CLOSE_SOURCE
from msvcrt import open_osfhandle
import sys
sys.path = ['.\\DLLs','%s'] # because -I doesn't remove all C:/PythonXX paths (where this interpreter is in ./app/)

source_process_handle = OpenProcess(PROCESS_DUP_HANDLE, False, %s)
try:
    rhandle = DuplicateHandle( source_process_handle, %s, GetCurrentProcess(), 0, False, DUPLICATE_SAME_ACCESS | DUPLICATE_CLOSE_SOURCE)
    from_parent = open( open_osfhandle( rhandle, 0 ), 'rb', closefd=True )

    whandle = DuplicateHandle( source_process_handle, %s, GetCurrentProcess(), 0, False, DUPLICATE_SAME_ACCESS | DUPLICATE_CLOSE_SOURCE)
    to_parent = open( open_osfhandle( whandle, 0 ), 'wb', closefd=True )

    feedback = from_parent.read() # I'd like to run this in a while loop

    to_parent.write(b'Parent told me to say ' + feedback)

    to_parent.close()
    from_parent.close()
    sys.exit(0)
finally:
    CloseHandle(source_process_handle)'''%(exec_path, getpid(), crhandle, cwhandle)

to_child = open( open_osfhandle( pwhandle, 0 ), 'wb', closefd=True )

cmd = '%s -I -S -c "%s" --multiprocessing-fork'%(executable, subprocess_code)
# start process
try:
    subprocess_handle, thread_handle, pid, tid = CreateProcess( executable, cmd, None, None, False, 0, None, None, None)
    CloseHandle(thread_handle)
except:
    CloseHandle(crhandle)
    raise

# send information to child
to_child.write(b'success!') # don't send raw data, these pipes can easily be hacked, especially with python

for i in range(10000000): i+i+1 # give the subprocess enough time to write to the output pipe

from_child = open( open_osfhandle( prhandle, 0 ), 'rb', closefd=True )

feedback = from_child.read() # I'd like to run this in a while loop
print( b'Child said: ' + feedback ) # bytes don't support %-formatting before Python 3.5

#input('Press Enter to Exit...\n\n')

CloseHandle(subprocess_handle)
to_child.close()

Does anyone know how to get this working??

I'm going to write "This is why I stopped using pipes and started using sockets." Also, with sockets I can scale across computers in the network.

commented: it's a good suggestion, but not what I'm looking to achieve ;) +4

I don't always wanna be 100% open to the network
plus sockets are super slow, as demonstrated by IDLE
(though I could work my magic on that and do things more appropriately rather than use an RPC proxy interface)

My second method, not as scalable, is to pass a boatload of information via a file. I've used the IP methods and files for years for communications among threads or apps. The problem with IPC is as you noted, but for me I needed my systems to be agnostic about the OS. So once the IP or file method is known, the apps can be on almost any OS or spread across a collection of computers without being locked to any specific OS.

Be aware I am sharing here and not engaging or attempting to tackle your IPC woes.

PS. Added with edit. For me I've never found IP communication to be slow. Since it can be local, it happens without any traversal to the external network.
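
something like this minimal loopback sketch (the port number is made up), which never leaves the local machine:

import socket, threading

srv = socket.socket()
srv.bind(('127.0.0.1', 50555))        # loopback only, port is arbitrary
srv.listen(1)

def serve():
    conn, _ = srv.accept()
    with conn:
        conn.sendall(b'hello over loopback\n')
    srv.close()

threading.Thread(target=serve).start()

with socket.create_connection(('127.0.0.1', 50555)) as client:
    print(client.recv(1024).decode().rstrip())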

I've never found IP communication to be slow

you're probably smart and not using python like I am ;)

I needed my systems to be agnostic about the OS.

there's not much I need to worry about
all I really need is the process and pipe creation mechanics for each OS and I can make a simple class that manages everything.
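
something like this hypothetical skeleton (the names are made up; only the Windows branch mirrors the _winapi/msvcrt calls above, and in real use one end would get duplicated into the child like in my code):

import sys

class PipeChannel:
    """Owns one anonymous pipe and exposes its two ends as plain file objects."""
    def __init__(self):
        if sys.platform == 'win32':
            from _winapi import CreatePipe
            from msvcrt import open_osfhandle
            rhandle, whandle = CreatePipe(None, 0)
            self.reader = open(open_osfhandle(rhandle, 0), 'rb', closefd=True)
            self.writer = open(open_osfhandle(whandle, 0), 'wb', closefd=True)
        else:
            import os
            rfd, wfd = os.pipe()
            self.reader = open(rfd, 'rb', closefd=True)
            self.writer = open(wfd, 'wb', closefd=True)

    def close(self):
        self.reader.close()
        self.writer.close()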

everything in python has the bad habit of extreme abstraction; just look at threading, subprocess, and multiprocessing...
those modules could do the exact same thing and be written with FAR less code.

and then there's modules like watchdog which brutally abstract these much MUCH further.
and here I've written (still working on) a single file watcher module that's not even 1000 lines.
(yes this is also part of my API)

in short, I'm trying to use "good python", which nobody uses:

  • performant
  • secure
  • memory friendly
  • readable (if you understand how python's mechanics work)

but anyways

not engaging or attempting to tackle your IPC woes.

that's fine, thanks for the advice ;)
I do plan to use sockets, but indirectly through the pipes, whenever a network session occurs.

if anyone else wants to chime in, I do still need this.
I've given up looking for a solution for a time as I'm now working on another section of my API...
but I do intend to get back to this, and still need this method to work.

heck, at the very least provide info, cause I can't seem to find any...
DDG sucks here, and Google's results have been questionable...
Nullege is down, and GitHub just provides 100 pages of copies of subprocess and multiprocessing...
meanwhile other code search engines hardly provide good results.

so yeah... I'll bother with this later...
I'm not blowing my brains out on it any longer...
I've already spent too much time on it. (take the discussion span of this post and add 2 days)

interesting
I only just skimmed over it, but it follows all the standard library design principles, and doesn't really mention using pipes for syncing, but yeah, I'll give it a look through a few times over.

I think I might be a little more advanced than it, cause I'll be using both multiprocessing and multithreading where appropriate ;)

my API has been in design since probably around 2013-2014, and I've learned a lot between then and now :)
going back and re-looking at stuff is always good though.