Well, OK, you can do that. If this is a follow-on to your still-unsolved "handling large files" topic, then messing about with how you read the logs isn't going to make any real difference - it's the SQL that needs to be optimised. You may get the best throughput by having one thread reading the log and placing entries on a blocking queue, with one or more threads taking entries from the queue to update the database.
Maybe you can clarify exactly what you are trying to do, and exactly what help you need?
Thanks for the reply.
I insert one million rows every 20 minutes, so I don't think the database is the bottleneck. But if more than one process reads from the log and inserts into the database, won't that make it faster?
I'm going to stick with what I suggested in my previous post. Have a single thread reading the log, and experiment to find the optimum number of threads updating the database. Have a look at the Oracle tutorial on Java Threads, and the API doc for LinkedBlockingQueue.
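To make the suggestion concrete, here is a minimal sketch of that producer-consumer setup: one reader thread feeds a bounded LinkedBlockingQueue, and a configurable pool of workers drains it. The class name, the `insertRow` placeholder, and the poison-pill sentinel are all my own illustration, not code from the original poster - the `insertRow` body is where your real JDBC batch insert would go.

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: single log-reader thread, N database-writer threads.
public class LogLoader {

    // Sentinel value telling workers the log is exhausted (poison pill).
    private static final String POISON = "__END_OF_LOG__";

    public static int load(List<String> logLines, int workerCount)
            throws InterruptedException {
        // Bounded queue: the reader blocks if the writers fall behind,
        // so memory use stays flat no matter how big the log is.
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(10_000);
        AtomicInteger inserted = new AtomicInteger();

        ExecutorService workers = Executors.newFixedThreadPool(workerCount);
        for (int i = 0; i < workerCount; i++) {
            workers.submit(() -> {
                try {
                    String line;
                    while (!(line = queue.take()).equals(POISON)) {
                        insertRow(line);           // your JDBC insert goes here
                        inserted.incrementAndGet();
                    }
                    queue.put(POISON);             // pass the sentinel on so the
                                                   // other workers also stop
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        // Single producer; in real code this loop would be a
        // BufferedReader.readLine() loop over the log file.
        for (String line : logLines) {
            queue.put(line);
        }
        queue.put(POISON);

        workers.shutdown();
        workers.awaitTermination(1, TimeUnit.MINUTES);
        return inserted.get();
    }

    // Placeholder for the real work, e.g. PreparedStatement.addBatch()
    // followed by periodic executeBatch() calls.
    private static void insertRow(String line) {
    }

    public static void main(String[] args) throws InterruptedException {
        List<String> lines = Arrays.asList("entry 1", "entry 2", "entry 3");
        System.out.println(load(lines, 4)); // prints 3
    }
}
```

Varying `workerCount` is the experiment suggested above: start with two or three writers and measure, since past a certain point extra writers just contend for database locks. Batching the inserts (addBatch/executeBatch) inside `insertRow` usually matters far more than the thread count.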