Hi All,

I am trying to create an image text detection program. The program is up and running, and I can distinguish between text and non-text using a Laplacian-based method with acceptable accuracy.

However, the problem is that I need the program to run at more than 100 images/s, and the best I have managed so far is 10 images/s at a resolution of 1280x768. The program spends most of its time on the Laplacian convolution and on computing the maximum Laplacian difference.
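For reference, here is a rough NumPy sketch of the bottleneck step. The exact kernel and the meaning of "maximum Laplacian difference" are assumptions on my part (I am taking it to mean the max minus min of the Laplacian response); the point is just that the convolution can be vectorised with shifted slices instead of per-pixel loops:

```python
import numpy as np

# 3x3 Laplacian kernel -- the exact kernel is an assumption;
# 4-neighbour and 8-neighbour variants are both common.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float32)

def laplacian(img):
    """Convolve a 2-D grayscale image with the 3x3 Laplacian kernel.

    Vectorised with shifted slices rather than an explicit per-pixel
    loop; in NumPy this alone is typically a large speedup over
    nested Python loops. Output is 2 pixels smaller in each
    dimension (no border handling).
    """
    img = img.astype(np.float32)
    out = np.zeros((img.shape[0] - 2, img.shape[1] - 2), dtype=np.float32)
    for dy in range(3):
        for dx in range(3):
            w = LAPLACIAN[dy, dx]
            if w != 0.0:
                out += w * img[dy:dy + out.shape[0], dx:dx + out.shape[1]]
    return out

def max_laplacian_difference(img):
    """Max minus min of the Laplacian response (assumed interpretation
    of 'maximum Laplacian difference')."""
    lap = laplacian(img)
    return float(lap.max() - lap.min())
```

A quick sanity check: the Laplacian of a linear intensity ramp is zero everywhere, so `laplacian(np.arange(25).reshape(5, 5))` should return an all-zero array.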

I have tried most of the optimization methods I have heard of. The only thing I haven't tried is hand-written assembly, because of the complexity of the program.

Any suggestions would be appreciated.


All 2 Replies

Did you try CUDA?

Well, using a GPU is about the only way you're going to have any hope of processing that amount of information per second. You're also going to have to make sure it's all streamed from memory to the GPU properly, which can be more than a fair bit of a pain.

It's a pretty tall order, to be fair.
