Yes. It's ok to have many threads updating many different files. You just need a simple check to handle those cases where two threads happen to want to update the same file. That's where a sync block with the file as the lock object comes in - forcing one thread to wait until the other thread has finished updating and releases the lock.
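The per-file sync block described above might be sketched like this (a minimal illustration, not from the original thread — the class name, `LOCKS` map, and `append` method are all made up for the example). The key point is that all threads writing to the same file must synchronize on the *same* lock object, which a `ConcurrentHashMap` guarantees via `computeIfAbsent`:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.ConcurrentHashMap;

public class FileAppender {
    // One canonical lock object per file name. computeIfAbsent guarantees
    // every thread targeting the same file gets the same monitor object.
    private static final ConcurrentHashMap<String, Object> LOCKS =
            new ConcurrentHashMap<>();

    public static void append(String fileName, String line) throws IOException {
        Object lock = LOCKS.computeIfAbsent(fileName, k -> new Object());
        synchronized (lock) {
            // Only one thread at a time appends to this particular file;
            // writes to *different* files proceed in parallel.
            Files.write(Paths.get(fileName),
                    (line + System.lineSeparator()).getBytes(StandardCharsets.UTF_8),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }
    }
}
```

Note that locking on the `File` object itself only works if every thread shares the same instance; mapping from file name to a single lock object avoids that pitfall.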

Won't this affect the performance of the system? It seems there will be a lot of delays, because the system will surely have a thousand clients accessing the same file at the same time, so there will be a lot of blocking in the synchronized block. Isn't there also a risk of data loss while waiting, if a thousand clients try to write to one file at the same time?

It seems every post contradicts your previous post!
Is it many files simultaneously, or is it a thousand clients and one file?
You can't choose the right architecture unless you understand the requirement.

I have many files, and there is a high possibility that each file can be modified or accessed by many (a thousand) clients at the same time. For simplicity I said let's just use one file, because if I manage to solve the problem for one file I think it will be easy to apply that solution anywhere a thread writes to a file. Keep in mind that the server is also multi-threaded: each connected client is served by its own thread. The server has many files, but by splitting the client's message the server knows which file to write the data to.

So the problem I have now is making sure that, no matter how many clients try to modify each file at the same time, the data is saved. I thought it would be easier to understand if we just use one file as an example: how can one file be written by a thousand clients at the same time so that every message each client sends is saved?

Ok. We discussed two solutions to that (queue vs lock). Both will work. Which you choose depends on the number of files etc. If there are many files then starting with a good solution for just one file is irrelevant.
It would help if you could state:
how many transactions there will be per second
how many different files are being updated per second

It's hard to give an exact number per second. Almost all the files (99.9%) have a chance of being updated simultaneously. Out of roughly 600 files, maybe 250 will be updated simultaneously but at a low rate; the rest can each be accessed simultaneously at the maximum rate every second.

Here is what I'm trying to do. Suppose there are Musical Awards with, say, 1000 nominees; that means 1000 files on the server. The community throughout the province is given a chance to vote for their favourite artist, so you can imagine the volume of writes to each file. Obviously I need a multi-threaded server so that every client can connect and the data they send is saved to the correct files.

You should really not use a "file" for this use case. Look into in-memory (and optionally persistent) solutions like Redis to increment counters. Those increments are atomic and managed by Redis.

If all this needs to be "in-memory" and you want to keep it simple, you can just go with ConcurrentHashMap. If you need persistence, it has to be a database of sorts (relational db, key value db etc.). This is so that atomicity in terms of persistence can be handled by a "robust" solution instead of doing it at code level.
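For the in-memory option mentioned above, a vote tally on `ConcurrentHashMap` could look roughly like this (a sketch I'm adding for illustration — the `VoteTally` class and method names are invented, not from the thread; `LongAdder` is one reasonable choice for heavily contended counters):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class VoteTally {
    // nominee name -> vote count. LongAdder is designed for many threads
    // incrementing concurrently with low contention overhead.
    private final ConcurrentHashMap<String, LongAdder> votes =
            new ConcurrentHashMap<>();

    public void vote(String nominee) {
        // computeIfAbsent creates the counter atomically on first vote;
        // increment() is lock-free, so no synchronized block is needed.
        votes.computeIfAbsent(nominee, k -> new LongAdder()).increment();
    }

    public long count(String nominee) {
        LongAdder adder = votes.get(nominee);
        return adder == null ? 0 : adder.sum();
    }
}
```

With this approach no writer ever blocks another, which sidesteps the thousand-clients-one-lock worry entirely; persistence would still need a periodic dump or a database behind it.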

Also mildly related and interesting, the XY problem. If you had included the "musical awards scenario" in your initial post, it would have been much easier to directly suggest a better solution. :)

As sos said - if you had explained the scenario earlier it would have saved a lot of time.
With what we now know...
Lots of files is about as bad a solution as I can think of. What's wrong with a simple SQL database? What's wrong with an in-memory solution with occasional dumps to disk?
You really must get a handle on the number of transactions per second. That will be a major determinant on choosing a good implementation. Something that works well at 1000 transactions/hour won't be ideal at 1000/second, and vice-versa.

What forced me to use text files was that the free database has some limitations, so that is why I thought of using text files instead.

There is no problem here with free databases. JavaDB comes with the JDK as standard, and is certainly up to this task, as is MySQL and others.

So you're suggesting that I should change to use a db?

From what you have said so far I would use an SQL database, yes.

And how do I ensure that all the data sent by clients simultaneously is saved to the database?

See previous posts
