Hi all! I have a directory with multiple text files of data. I have been trying to develop a loop to read through all the files, eliminate the first 88 lines (a text header), and create a new file with only the data values. I have been able to write a loop to read through the files, or to extract only the data, but am not having any luck piecing the two together to read all the files in the directory.
Any help would be greatly appreciated!

show some code please... don't forget to use the Code tags.

Ok! I am going to give this my best try. I am brand new to coding and am a slow learner. Thank you for replying. I am trying to read only the data (lines greater than 88; the header ends at line 88) into a new file. I have over 100 different files and am having trouble building a loop to read all the files and extract the data. This is what I have been trying to use:

for filename in $(find -iname '*.txt') 
do
 awk -F"\t" ' 
    BEGIN {print NR > 88,FILENAME}
    ' $filename > output.txt
done
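
For what it's worth, the test NR > 88 belongs in awk's pattern position, not inside a BEGIN block (BEGIN runs once before any input is read, so NR is still 0 there), and redirecting inside the loop overwrites output.txt on every pass. A minimal corrected sketch, assuming one combined output file is acceptable:

for filename in $(find . -iname '*.txt'); do
  # NR > 88 prints only the lines after the 88-line header
  awk -F"\t" 'NR > 88' "$filename"
done > output.txt   # redirect once, after the loop, so nothing is overwritten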

This works for me. Note the +, which tells tail to start at line 88 from the front rather than 88 lines from the end. You may find that you need to tweak the number up or down by one; just try it on one file to see what happens.

path=.
for f in $(find "$path" -iname '*.txt') ; do
  # -n +88 means start output at line 88 from the top of each file
  tail -n +88 "$f"
done > output.txt
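
If the goal is a separate data-only file per input rather than one combined output.txt, a sketch of a per-file variant (assuming bash, and a tail that accepts -n +K; the .dat output name is just an illustration):

find . -iname '*.txt' -print0 | while IFS= read -r -d '' f; do
  # -print0 / read -d '' keeps filenames with spaces intact;
  # -n +89 starts printing at line 89, right after an 88-line header
  tail -n +89 "$f" > "${f%.txt}.dat"
done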