My requirement is to parse an HTML document (which will be an article) and come up with a few keywords that describe what the document is about.

For example, if a professor writes an essay about, say, information security, I want to parse this HTML document and come up with keywords such as "information security", "secure data storage", etc.

Are there any tools available for such parsing? Are there any algorithms employed to parse a document and come up with keywords?
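As a rough illustration of the simplest approach (strip the HTML to plain text, then rank words by frequency), here is a stdlib-only Python sketch; the stopword list, thresholds, and sample document are my own placeholders, not a recommendation of any particular tool:

```python
import re
from collections import Counter
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text, skipping <script> and <style> blocks."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip:
            self.parts.append(data)

# Tiny placeholder stopword list; a real one would be much larger.
STOPWORDS = {"the", "a", "an", "is", "of", "and", "to", "in", "that", "it", "for"}

def keywords(html, n=5):
    """Return the n most frequent non-stopword terms in the document."""
    parser = TextExtractor()
    parser.feed(html)
    words = re.findall(r"[a-z]+", " ".join(parser.parts).lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [w for w, _ in counts.most_common(n)]

doc = ("<html><body><h1>Information security</h1>"
       "<p>Security of data storage. Secure storage matters for security.</p>"
       "</body></html>")
print(keywords(doc, 3))  # most frequent terms first
```

This is essentially what the "word density" tools do, which is why it inherits the weakness discussed below: the most frequent word is not always the topic.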



I read both of them. One tool parses the entire document and lists the word density. I could opt for this tool if I worked under the assumption that the article is about the word that occurs the greatest number of times, but that will not always hold true. So, if there are any other tools that might be useful, please help me.
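One common refinement over raw word density is TF-IDF, which downweights words that are frequent in every document and favours words distinctive to this one. A stdlib-only sketch under that assumption (the toy background corpus and the smoothed-IDF formula are my own illustrative choices):

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def tfidf_keywords(doc, corpus, n=3):
    """Score each word in `doc` by term frequency times inverse document
    frequency against a background corpus, and return the top n words."""
    docs_tokens = [set(tokenize(d)) for d in corpus]
    tf = Counter(tokenize(doc))
    total = sum(tf.values())
    scores = {}
    for word, count in tf.items():
        df = sum(1 for toks in docs_tokens if word in toks)
        idf = math.log((1 + len(corpus)) / (1 + df)) + 1  # smoothed IDF
        scores[word] = (count / total) * idf
    return sorted(scores, key=scores.get, reverse=True)[:n]

corpus = [
    "the cat sat on the mat",
    "the dog chased the cat",
    "information security and secure data storage",
]
print(tfidf_keywords("information security and data security", corpus, 2))
```

A word like "and", which appears in several background documents, gets a low IDF and sinks in the ranking, so the top result is no longer simply "whatever occurs most often".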



The following is my requirement.

The user writes a blog post. It contains a heading, a few keywords/tags (which the user mentions explicitly), and the content. The user then submits the blog.

Before getting stored, the content section of the blog must be scanned by a keyword extractor. This will yield, say, 10 keywords. Then I have to check whether the keywords returned by the scanner are semantically related to the ones that the user has explicitly tagged. If they are, they are included in a separate field in the database.*
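To sketch that check, here is a pure-Python filter assuming a hand-built semantic library that maps each user tag to a set of related terms. The `SEMANTIC_LIBRARY` below is a hypothetical placeholder for whatever WordNet-style resource is eventually used:

```python
# Hypothetical semantic library: each tag maps to terms considered related.
SEMANTIC_LIBRARY = {
    "information security": {"security", "encryption", "secure storage",
                             "cryptography"},
    "databases": {"sql", "storage", "indexing"},
}

def related_keywords(scanner_keywords, user_tags, library):
    """Keep only the scanner keywords that are semantically related
    to at least one of the user's explicit tags."""
    related = set()
    for tag in user_tags:
        related |= library.get(tag, set())
        related.add(tag)  # a tag is trivially related to itself
    return [kw for kw in scanner_keywords if kw in related]

scanned = ["encryption", "weather", "sql", "cryptography"]
tags = ["information security"]
print(related_keywords(scanned, tags, SEMANTIC_LIBRARY))
# → ['encryption', 'cryptography']
```

The keywords that survive the filter would then go into the separate database field; a real implementation would replace the exact-membership test with a WordNet similarity measure.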

*I will have a semantic library containing the keywords related to the keywords tagged by the user. I think WordNet will be sufficient, but most of the posts on my website will be computer-science oriented. So, if there are any semantic libraries meant specifically for computer science, please let me know.

Also, I want to know if I can achieve my requirement using "protege". If not, are there any other tools/platform with which i can do the above??
