When I researched an article on multimedia search last year for EContent Magazine (the resulting article was republished on Streamingmedia.com last December), I learned that it's hard to search for non-text elements because they lack the contextual language of text. That seems logical enough, and the way most search engines get around it is by using the text-based metadata around an image or video to get searchers into the right neighborhood. It works in a 1990s sort of way, but what the world really needs is more advanced multimedia search.
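To make that concrete, here is a toy sketch of the metadata approach: index the text surrounding each image (alt text, captions) and rank images by keyword overlap with the query. The field names, corpus, and scoring are purely illustrative assumptions, not any real engine's implementation.

```python
# Toy metadata-based image search: index the text around each image
# (alt text, caption) and rank results by keyword overlap with the query.
# The schema and scoring here are illustrative, not any real engine's.

from collections import defaultdict

IMAGES = [
    {"url": "beach.jpg", "alt": "sunset over the beach",
     "caption": "A summer sunset at the beach"},
    {"url": "dog.jpg", "alt": "golden retriever puppy",
     "caption": "Our dog playing in the park"},
]

def tokenize(text):
    return text.lower().split()

# Build an inverted index: word -> set of image indices.
index = defaultdict(set)
for i, img in enumerate(IMAGES):
    for field in ("alt", "caption"):
        for word in tokenize(img[field]):
            index[word].add(i)

def search(query):
    # Score each image by how many query words its metadata contains.
    scores = defaultdict(int)
    for word in tokenize(query):
        for i in index.get(word, set()):
            scores[i] += 1
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [IMAGES[i]["url"] for i in ranked]

print(search("beach sunset"))  # matches on surrounding text, not pixels
```

Note that the pixels themselves are never examined: an unlabeled photo of a beach would be invisible to this search, which is exactly the gap image recognition aims to close.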
That's why my eyes popped a bit when I came across this NYT article this morning while scanning today's technology news. It seems Google is experimenting with image recognition to provide a more advanced way to search for images (and, one assumes, eventually videos). The problem, according to the article, is that this is so resource-intensive that Google can only work with a small subset of its huge image repository. And if it's too resource-intensive for Google, you know we're talking about some serious resources.
Google is hoping to do for images what PageRank once did for text with the original search algorithm that rocked the world all those years ago. We shall see where this goes, but for now it's interesting to see that Google is at least playing around with this, and as processing power and image-recognition techniques improve, we should begin to see major breakthroughs in this type of search technology. For now, we are stuck mostly with metadata and some of the other interesting approaches outlined in the Streamingmedia.com article, but this announcement certainly bodes well for the future of multimedia search.