I have to access a huge amount of data from MySQL. Can I replace that DB call by writing the data to a text file and reading it back from the file instead?
The file could then be removed once the data is processed.

I have no idea what it is you're actually asking here.

1) A web application deployed in tomcat.

2) Let's say I have an employee table in the database with 10 fields, and a dataset of 1 million (10 lakh) employees.
3) I have to go through all the records at an interval of 10 minutes, and update one of the columns, say timeSpentByEmployee, for every record.

4) If I keep it in the database, then every 10 minutes I will hit the database with a select query for all the records, plus an update query for each record in a loop.

My question is: rather than making these expensive calls to the database, can I store the entire employee dataset in a text file instead? If so, is that faster than the database, and which file format would suit the purpose?

Is it clear now, or do you still need more input?

It definitely will not be faster than the DB.

Edit: As long as you have your indexes properly defined. Have a DBA examine your table definition and your query and suggest possible changes to either the table indexes or your query.

Also, I hope you are using a PreparedStatement in batch mode, or, even better, a single select/update statement (a MERGE statement in Oracle).
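To illustrate the batched PreparedStatement approach: instead of issuing one update per record in a loop, you parameterize a single statement and send updates in chunks. A minimal sketch, using the employee table and timeSpentByEmployee column from the example above; the Connection setup, the id column, and the roundTrips helper are assumptions for illustration:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Map;

public class BatchUpdateSketch {

    static final int BATCH_SIZE = 1000;

    // Push one parameterized UPDATE per employee, flushed to the DB
    // in batches so a million rows need ~1000 round trips, not a million.
    static void updateTimeSpent(Connection conn, Map<Integer, Long> timeById)
            throws SQLException {
        String sql = "UPDATE employee SET timeSpentByEmployee = ? WHERE id = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            int count = 0;
            for (Map.Entry<Integer, Long> e : timeById.entrySet()) {
                ps.setLong(1, e.getValue());
                ps.setInt(2, e.getKey());
                ps.addBatch();
                if (++count % BATCH_SIZE == 0) {
                    ps.executeBatch(); // one round trip for BATCH_SIZE rows
                }
            }
            ps.executeBatch(); // flush the remainder
        }
    }

    // How many executeBatch() round trips n rows cost at a given batch size.
    static int roundTrips(int rows, int batchSize) {
        return (rows + batchSize - 1) / batchSize;
    }
}
```

With a batch size of 1000, the 1 million rows above cost about 1000 round trips rather than 1 million; on MySQL, adding `rewriteBatchedStatements=true` to the JDBC URL can reduce the wire traffic further.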


Now that you know the scenario: can you suggest some good practice that would give a significant speedup compared to what I already have, e.g. some kind of in-memory storage API for huge data in Java?

That would really be helpful!

No. The only thing I can recommend is what I have already recommended: optimise the indexes on the table, and optimise your query. 10 minutes is more than enough time to update 1 million records. Heck, 2 minutes should be more than enough time.
