grep -v "connected" filename > newfile

Regarding the use of grep with its output redirected to a file, as in the sample above: are there any limitations to its use, especially when the file is big? I have experienced record truncation when the output is redirected to a file. Has anyone experienced this before? How can this problem be resolved?

Record truncation? That is not normal behavior unless the record has embedded ASCII NUL characters. Lack of disk space or exceeding enabled quotas will also cause the output file to be truncated.

grep has a line length limit of 2048 characters on some implementations.
There is also the concept of largefiles: files so big that a signed 32-bit file pointer cannot access them, i.e. larger than about 2 GB.

Which of these things applies to your case?
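
Both limits can be checked quickly from the shell. A minimal sketch; the file name and the sample data built here are purely illustrative stand-ins for the real input:

```shell
# Sketch: quick checks for the two limits mentioned above.
# "demo.txt" is a stand-in for the real input file.
file=demo.txt

# Build a small sample so the commands below actually run:
# line 1 is short, line 2 is 3000 characters long.
printf 'short line\n' > "$file"
printf 'x%.0s' $(seq 1 3000) >> "$file"
printf '\n' >> "$file"

# 1. Report any lines longer than 2048 characters.
awk 'length($0) > 2048 { print NR ": " length($0) " chars" }' "$file"

# 2. Report the file size in bytes (compare against the ~2 GB limit).
wc -c < "$file"
```

On the sample above, the awk check flags line 2; on the real file it would list every record long enough to hit a 2048-character limit.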

The file size is about 1.2 GB. The record was truncated when the command was run in the script, but when it was run manually later, the records in the file did not get truncated. Thus it is an intermittent problem. It could be due to disk space, but I can't verify that.

The way disk I/O in Unix works is that data is parked in an in-memory cache in the kernel; it is not guaranteed to be on disk when the write() system call returns. Every 30 seconds or so the syncer daemon issues a sync, which forces the kernel to write everything in its buffers to disk.

What you are seeing is an incomplete write operation, for whatever reason. Common reasons are: a signal was sent to the process and terminated it; write() or sync failed because something else (maybe a temp file) filled up the disk and then went away; or disk errors caused a fatal error. If it's an NFS-mounted disk, then the network also becomes an issue. What errors do you see in the log?
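
One way to catch this in a script is to check grep's exit status and the filesystem state instead of assuming the redirect succeeded. A minimal sketch; the /tmp paths are throwaway stand-ins for the real files:

```shell
# Sketch: detect a failed or partial write rather than losing data silently.
# The /tmp paths are illustrative stand-ins for the real files.
in=/tmp/demo_in.txt
out=/tmp/demo_out.txt
printf 'connected host\nkeep this line\n' > "$in"

# Note: grep exits 1 when it selects no lines at all, and >1 on real errors.
grep -v "connected" "$in" > "$out"
status=$?
if [ "$status" -gt 1 ]; then
    echo "grep failed with status $status" >&2
fi

# Force the kernel to flush its buffer cache before trusting the output.
sync

# If truncation is still suspected, check free space on the filesystem.
df -k /tmp
```

Logging the exit status and a df snapshot from inside the script would also make the intermittent failure much easier to diagnose after the fact.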

There was no tracking of error messages in the script. I will probably need to write a program to divide the file into two.
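
A custom program may not be necessary; the standard split utility can divide a file. A sketch using a small illustrative file (names are placeholders):

```shell
# Sketch: divide a file in two with the standard split utility.
# File names here are illustrative.
big=/tmp/demo_big.txt
seq 1 10 > "$big"   # 10-line sample file

# Split into 5-line pieces: produces /tmp/part_aa and /tmp/part_ab.
split -l 5 "$big" /tmp/part_

wc -l /tmp/part_aa /tmp/part_ab
```

For the real 1.2 GB file, the piece size would come from half the actual line count, e.g. split -l "$(( $(wc -l < file) / 2 + 1 ))" file part_ to get exactly two pieces.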
