As above, I'm using DirectShow.NET for the first time, trying to write a program that performs some real-time video processing.
At the moment I'm aiming to create two video renderers (in the same window) side by side: one showing the original video and the other showing an edited version.
I think I have to create two filter graphs in order to do this?
I'm currently basing my work on the DxPlay sample.
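For what it's worth, one approach that seems straightforward is to build two independent graphs and dock each graph's video window into its own panel on the form, so both renderers end up side by side in one window. A rough, untested sketch using DirectShowLib (panelOriginal and panelProcessed are hypothetical Panel controls):

    // Sketch: one filter graph per renderer, each parented to a panel on the form.
    private IGraphBuilder BuildPlaybackGraph(string fileName, Control owner)
    {
        IGraphBuilder graph = (IGraphBuilder)new FilterGraph();
        int hr = graph.RenderFile(fileName, null);
        DsError.ThrowExceptionForHR(hr);

        // Dock the video window inside the given control instead of a floating window.
        IVideoWindow videoWindow = (IVideoWindow)graph;
        hr = videoWindow.put_Owner(owner.Handle);
        DsError.ThrowExceptionForHR(hr);
        hr = videoWindow.put_WindowStyle(WindowStyle.Child | WindowStyle.ClipSiblings);
        DsError.ThrowExceptionForHR(hr);
        hr = videoWindow.SetWindowPosition(0, 0, owner.Width, owner.Height);
        DsError.ThrowExceptionForHR(hr);

        return graph;
    }

    // Usage (hypothetical):
    // m_originalGraph  = BuildPlaybackGraph(fileName, panelOriginal);
    // m_processedGraph = BuildPlaybackGraph(fileName, panelProcessed);
    // ((IMediaControl)m_originalGraph).Run();

The second graph would be the one containing whatever transform/sample-grabber stage does the editing; the window-docking part is the same for both.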
The other thing I'm working on is performing some basic interpolation on the video. I can extract a single frame, convert it to a Bitmap, and then perform the interpolation. However, I wasn't sure if that's the best way to do it, since I assume I'd have to convert it back in order to play it as part of the video again.
So should I try to perform the processing directly on the data extracted by GetCurrentBuffer?
    int hr;
    IntPtr ip = IntPtr.Zero;
    int iBuffSize = 0;

    // First call with a null pointer just retrieves the required buffer size.
    hr = m_sampGrabber.GetCurrentBuffer(ref iBuffSize, IntPtr.Zero);
    DsError.ThrowExceptionForHR(hr);

    // Allocate the buffer, then call again to actually copy the frame into it.
    ip = Marshal.AllocCoTaskMem(iBuffSize);
    hr = m_sampGrabber.GetCurrentBuffer(ref iBuffSize, ip);
    DsError.ThrowExceptionForHR(hr);
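Working directly on that buffer does avoid the Bitmap round-trip. A hedged sketch of what per-pixel access might look like, assuming the sample grabber is set to RGB24 and that width/height are read from the connected media type (those variable names are assumptions, not from the sample):

    // Sketch: operate on the grabbed frame without converting to a Bitmap.
    byte[] frame = new byte[iBuffSize];
    Marshal.Copy(ip, frame, 0, iBuffSize);

    int stride = ((width * 3) + 3) & ~3;   // RGB24 rows are padded to 4-byte boundaries
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            int i = y * stride + x * 3;    // bytes are in B, G, R order; rows are bottom-up
            // ... interpolate / modify frame[i], frame[i + 1], frame[i + 2] here ...
        }
    }

    Marshal.Copy(frame, 0, ip, iBuffSize); // write the result back if needed

Remember to Marshal.FreeCoTaskMem(ip) when you're done with the buffer.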
Then I need to be able to play the video again, but from what I've seen DirectShow plays from a file, so I'm not sure what the best method is here. Eventually the video frames will be passed to a development board over Ethernet for the processing, so ideally I'd like to extract them, transfer them, process them, transfer them back, and then play them again (in the second video window).
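On playing frames that don't come from a file: as far as I know, DirectShow graphs can also be fed from a custom source filter, and the DirectShow.NET samples include a GenericSampleSourceFilter (GSSF) that lets managed code push its own buffers into a graph, which sounds like a fit for the "play the processed frames" half. For the Ethernet leg, a simple length-prefixed framing over TCP is one possible wire format (the board-side protocol here is purely an assumption):

    // Sketch: length-prefixed frame transfer over TCP.
    using System;
    using System.IO;
    using System.Net.Sockets;

    static class FrameLink
    {
        public static void SendFrame(NetworkStream stream, byte[] frame)
        {
            byte[] len = BitConverter.GetBytes(frame.Length); // 4-byte length header
            stream.Write(len, 0, 4);
            stream.Write(frame, 0, frame.Length);
        }

        public static byte[] ReceiveFrame(NetworkStream stream)
        {
            byte[] len = ReadExactly(stream, 4);
            return ReadExactly(stream, BitConverter.ToInt32(len, 0));
        }

        // NetworkStream.Read may return fewer bytes than asked for, so loop.
        static byte[] ReadExactly(NetworkStream stream, int count)
        {
            byte[] buf = new byte[count];
            int read = 0;
            while (read < count)
            {
                int n = stream.Read(buf, read, count - read);
                if (n == 0) throw new EndOfStreamException("connection closed mid-frame");
                read += n;
            }
            return buf;
        }
    }

The received buffers could then be handed to the source filter feeding the second graph.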
Saw this earlier today; I haven't had a chance to play with it yet, but it looks potentially promising:
Any ideas/comments would be appreciated.