I read both of them. One tool parses the entire document and lists the word frequencies. I could use this tool under the assumption that the article is about the word that occurs most often, but that assumption does not always hold. So, if there are any other tools that might be useful, please let me know.
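For reference, the frequency-based approach described above can be sketched in a few lines: count word occurrences and take the top-n words as candidate keywords. This is only the naive baseline the question is doubting; the regex tokenizer and the cutoff `n` are arbitrary choices, not part of any particular tool.

```python
# Naive keyword extraction by word frequency (the approach whose
# assumption -- "most frequent word = topic" -- may not hold).
from collections import Counter
import re

def top_keywords(text, n=10):
    """Return the n most frequent words in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    return [word for word, _ in Counter(words).most_common(n)]

print(top_keywords("the cat sat on the mat with the cat", n=2))
# ['the', 'cat']
```

Note how stop words like "the" dominate the ranking, which is exactly why pure frequency counting can mislabel the topic.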
This will yield, say, 10 keywords. Then I have to check whether the keywords returned by the scanner are semantically related to the ones the user has explicitly tagged. If they are, they are stored in a separate field in the database.
*I will have a **semantic library** containing keywords related to the ones tagged by the user.* I think WordNet will be sufficient, but most of the posts on my website will be computer-science oriented. So, if there are any semantic libraries geared toward computer science, please let me know.
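The matching step described above could look something like the sketch below. Here a tiny hand-built dictionary stands in for the semantic library (WordNet or a CS-specific resource would replace it); all the entries, names, and the example keywords are invented for illustration only.

```python
# Hypothetical stand-in for a semantic library: maps a user tag to a
# set of related keywords. In practice this would be backed by
# WordNet or a domain-specific resource.
SEMANTIC_LIBRARY = {
    "sorting": {"quicksort", "mergesort", "ordering"},
    "database": {"sql", "index", "query"},
}

def related_keywords(scanner_keywords, user_tags):
    """Keep scanner keywords that match a user tag or relate to one
    via the semantic library; these would go in the separate DB field."""
    kept = []
    for keyword in scanner_keywords:
        for tag in user_tags:
            if keyword == tag or keyword in SEMANTIC_LIBRARY.get(tag, set()):
                kept.append(keyword)
                break
    return kept

print(related_keywords(["quicksort", "http", "query"], ["sorting", "database"]))
# ['quicksort', 'query']
```

With a real resource such as WordNet, the equality/membership check would be replaced by a synset similarity score compared against a threshold.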
Also, I want to know whether I can achieve this with Protégé. If not, are there any other tools or platforms with which I can do the above?