Hi Guys,

I'm new to awk and trying to reformat a command's stdout (saved as "temp.txt") into CSV so I can import it into Excel. I'm having issues; here is my code so far. If anyone can help me see the light!!! The #### line marks the start of the second server on the list; each server has the information shown below before its #### line.

Thank you for your help!!!

code:

awk 'NR==1 { print $1, $3, $5, $7, $9, $11, $13} { print $2, $4, $6, $8, $10, $12 }' RS='\n\n' FS=': +|\n' OFS='\t' temp.txt

output: Here is my output so far, with titles. Please see the temp.txt below for the raw data. Also, after the #### line is another server on the list.

Name    System ID       Locked  Registered      Last Checkin    OSA Status
webdev.domain.com       1000000002      False   20121002T16:53:52       20140405T16:30:01       offline
webdev.domain.com       192.168.1.11    2.6.9-89.ELhugemem
---------------
-----------------         |-- rhn-tools-rhel-4-as-i386
----------------------      LDAP_Basics_4   crontab_scripts SudoPrivs24
------------    monitoring_entitled
-------------   Development     Test Group      Database

Here is my temp.txt file:

Name:          webdev.domain.com
System ID:     1000000002
Locked:        False
Registered:    20121002T16:53:52
Last Checkin:  20140405T16:30:01
OSA Status:    offline

Hostname:      webdev.domain.com
IP Address:    192.168.1.11
Kernel:        2.6.9-89.ELhugemem

Activation Keys
---------------
61-a10823ed52fbd5f9f69fcb7f744fd0f4

Software Channels
-----------------
rhel-i386-as-4
  |-- rhn-tools-rhel-4-as-i386

Configuration Channels
----------------------
TSMCheckScriptRH4
LDAP_Basics_4
nag_libexec_scripts
crontab_scripts
syslog_4
SudoPrivs24
universal

Entitlements
------------
enterprise_entitled
monitoring_entitled
provisioning_entitled

System Groups
-------------
Environment: Development
Red Hat 4
Test Group
Type: Database

##############################

Name:          test.qa.domain.com
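One variant of the same one-liner that seems to tame the later sections: put the separators in a BEGIN block and skip any paragraph that isn't "Key: value" lines. This is only a sketch; it assumes paragraph mode (RS="") treats your double-newline records the same way, and that every line you want looks like "Key: value".

```shell
# Sketch: header row from record 1's keys, value rows from every
# key/value paragraph; the dashed section blocks are skipped entirely.
awk 'BEGIN { RS = ""; FS = ": +|\n"; OFS = "\t" }
     $0 !~ /^[A-Za-z][A-Za-z ]*: / { next }   # not a key/value paragraph
     NR == 1 { for (i = 1; i <= NF; i += 2) printf "%s%s", $i, (i >= NF - 1 ? "\n" : OFS) }
     { for (i = 2; i <= NF; i += 2) printf "%s%s", $i, (i >= NF ? "\n" : OFS) }' temp.txt
```

Note that plain POSIX awk only honors a single-character RS, so RS="" (paragraph mode) is more portable than RS='\n\n', which only gawk treats as a regex.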

Can anyone help with this? I get the column counts right, but the second row doesn't start from the beginning; it only picks up 2 rows and doesn't continue down the list. Any help is appreciated.
Thanks. I named the file tnospace.txt because I removed the blank lines and the dashes.

awk -F":" -v  n=10 \
'BEGIN { x=1; c=0;}
 ++c <= n && x == 1 {print $1; buf = buf $2 "\n";
     if(c == n) {x = 2; printf buf} next;}
 !/./{c=0;next}
 c <=n {printf "%s\n", $2 }' tnospace.txt | \
 paste  - - - - - - - - - - |
 column -t -s "$(printf "\t")"
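Not sure this is what you're after, but here's an alternative sketch that builds one row per server instead of counting fields into paste. It assumes tnospace.txt still has the #### separator between servers and that every line you want is "key: value":

```shell
# Sketch: collect the value of every "key: value" line into one tab-separated
# row; flush the row at the #### separator (and at end of file).
awk -F ': +' '
    /^#+$/  { print line; line = ""; next }               # next server: flush
    NF == 2 { line = (line == "" ? $2 : line "\t" $2) }   # append this value
    END     { if (line != "") print line }
' tnospace.txt | column -t -s "$(printf '\t')"
```

With one row per server there's no field count (the n=10) to keep in sync, so servers with a missing field don't shift the whole table.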

I've got a similar problem, and I'm also interested in the answer.

.. I need a shell script to create new users, reading the info from a text file with a format like this: peter:1234:/home/store1 (the IFS is :), and I can only use shell scripting and/or awk to do it. I've tried, but with no success, running this in a shell as root:

$ awk -F ':| ' '{ print("adduser --home", $3, $1"; echo", $2 " | passwd", $1) | "/bin/bash" }' users.txt

any help will be really appreciated
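For comparison, a sketch I've been playing with: print the commands first and only pipe them to a shell once they look right. It uses chpasswd (assuming it's available, as in the shadow-utils suite) because passwd normally insists on reading the password from a terminal, which is probably why the pipe to /bin/bash fails:

```shell
# Sketch only: generate the commands instead of running them, so they can be
# reviewed; append "| sh" as root once the output looks correct.
awk -F: '{
    printf "useradd --create-home --home-dir %s %s\n", $3, $1
    printf "echo %s:%s | chpasswd\n", $1, $2
}' users.txt
```

For the sample line peter:1234:/home/store1 this prints a useradd command followed by an echo piped into chpasswd, which avoids the interactive passwd prompt entirely.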

hey.. I guess this is going to be an unhelpful and late answer, but given the format of your input, there is no fixed formula to parse it with a simple awk or sed one-liner that I can see. You have to write code specific to each section of your input to get what you want.
In other words, awk expects your input to be a lot less complicated than what you have.

Of course, all this assumes you want a 'simple' awk program. If you're ready to write real code in awk, it can do anything you want.
E.g. when I put your first record inside 'in.txt' and run the following awk script:

bash$ cat parse.awk
# All lines like "key: value"
/^[a-zA-Z0-9 ]+: +.+$/ {
        split($0, arr, ":")
        # note: splitting on ":" also cuts the timestamps at their first
        # colon, which is why the output below shows "20121002T16"
        fields[arr[1]] = arr[2]
        # print $0, " -- ", arr[1]," == ", arr[2]
}

# field spanning multiple lines with weird formats
/Entitlements/ {
        # keep reading till we hit an empty line
        values = ""
        # stop at the first empty line (or at EOF, so getline can't loop forever)
        while ((getline ln) > 0 && ln !~ /^$/) {
                if (ln !~ /^\-+$/)
                        values = values ", " ln
                # print "getline: ", ln
        }
        fields["Entitlements"] = values
}

# Record separator found. Dump the fields collected so far.
/^#+$/ {
        print "found ##########"

        for (key in fields)
                printf "\"%s\",", fields[key]

        # braces needed here: without them the delete runs only once, after
        # the loop, so stale fields would leak into the next record
        for (key in fields) {
                print "\t", key, fields[key]
                delete fields[key]
        }
}

bash$

I get this output:

bash$ awk -f parse.awk in.txt
found ##########
"    20121002T16"," Database","  20140405T16","        False","        2.6.9-89.ELhugemem","     1000000002","    offline","          webdev.domain.com"," Development","    192.168.1.11",", enterprise_entitled, monitoring_entitled, provisioning_entitled","      webdev.domain.com",    Registered     20121002T16
         Type  Database
         Last Checkin   20140405T16
         Locked         False
         Kernel         2.6.9-89.ELhugemem
         System ID      1000000002
         OSA Status     offline
         Name           webdev.domain.com
         Environment  Development
         IP Address     192.168.1.11
         Entitlements , enterprise_entitled, monitoring_entitled, provisioning_entitled
         Hostname       webdev.domain.com
bash$
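If anyone wants to reuse the script above, one weak spot is the split on ":": it truncates the timestamps at their first colon (hence "20121002T16" in the output). A sketch that splits on the first colon only, using index() and substr():

```shell
# Sketch: keep everything after the first ":" as the value, so timestamps
# such as 20121002T16:53:52 survive intact; dump a CSV row at each "####".
awk '
/^[a-zA-Z0-9 ]+: +.+$/ {
    i = index($0, ":")
    key = substr($0, 1, i - 1)
    val = substr($0, i + 1)
    sub(/^ +/, "", val)           # trim the padding after the colon
    fields[key] = val
}
/^#+$/ {
    for (key in fields) {
        printf "\"%s\",", fields[key]
        delete fields[key]       # clear the array for the next record
    }
    printf "\n"
}' in.txt
```

Note the fields still come out in awk's unspecified for-in order; a fixed list of key names would be needed to force the column order.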