Hi All,

I am trying to create an image text detection program. The program is up and running, and I can distinguish between text and non-text using a Laplacian-based method with acceptable accuracy.

However, the problem is that I need the program to run at > 100 images/s, and the best I have managed so far is 10 images/s at a resolution of 1280x768. My program spends most of its time on the Laplacian convolution and on computing the maximum Laplacian difference.
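For reference, here is a minimal NumPy sketch of the kind of computation described above. It assumes a 4-neighbour 3x3 Laplacian kernel and a sliding-window max-minus-min as the "maximum Laplacian difference"; the poster's actual kernel and feature may differ, and the function names are illustrative, not taken from the original program.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def laplacian(img):
    """4-neighbour Laplacian computed with shifted array views,
    avoiding a per-pixel Python loop. img is a 2-D float array
    (grayscale); borders are left as zero."""
    out = np.zeros_like(img)
    out[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                       img[1:-1, :-2] + img[1:-1, 2:] -
                       4.0 * img[1:-1, 1:-1])
    return out

def max_laplacian_difference(img, win=3):
    """Max-minus-min of the Laplacian over each win x win window,
    a common text/non-text contrast feature (hypothetical helper;
    the original code is not shown)."""
    lap = laplacian(img)
    patches = sliding_window_view(lap, (win, win))  # every window as a view
    return patches.max(axis=(-1, -2)) - patches.min(axis=(-1, -2))
```

Vectorising like this is usually the first step before reaching for SIMD intrinsics, a library convolution (e.g. OpenCV's `cv2.Laplacian`), or a GPU.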

I have tried most of the optimization techniques I know of. The only thing I haven't tried is rewriting parts in assembly, due to the complexity of my program.

Any suggestion will be appreciated.


Last Post by ChaseRLewis

Well, using a GPU is about the only way you're going to have any hope of processing that amount of data per second. You're also going to have to make sure everything is streamed from host memory to the GPU properly, which can be more than a fair bit of a pain.

It's a pretty tall order to be fair.
