Does anyone know how to successfully run the Chukwa MapReduce process in Hadoop? Also, I couldn't get my adaptor to load from the initial_adaptors file, so I added adaptors dynamically for collecting files. I have collected some files that way, but now that is also causing a problem, as the agent log shows:
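For reference, a file-tailing entry in initial_adaptors is typically a single `add` line of the form `add <adaptor class> <data type> <params> <offset>` (the data type name here is illustrative, and the file must be readable by the user the agent runs as):

```
add filetailer.FileTailingAdaptor SyslogData /var/log/syslog 0
```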

2013-01-09 01:20:01,533 INFO listen thread for / ChukwaAgent - started a new adaptor, id = adaptor_79516953ebabf5bc7807cd48f42af798 function=[org.apache.hadoop.chukwa.datacollection.adaptor.DirTailingAdaptor@7c3fab7a]
2013-01-09 01:20:01,534 INFO Thread-34 FileTailingAdaptor - chukwaAgent.fileTailingAdaptor.maxReadSize: 131072
2013-01-09 01:20:01,535 INFO Thread-34 FileTailingAdaptor - started file tailer adaptor_fc5d33edb18b66f930d97dc988df4df9 on file /var/log/syslog with first byte at offset 0
2013-01-09 01:20:01,535 INFO Thread-34 ChukwaAgent - started a new adaptor, id = adaptor_fc5d33edb18b66f930d97dc988df4df9 function=[Lightweight Tailer on /var/log/syslog]
2013-01-09 01:20:01,535 INFO Thread-34 DirTailingAdaptor - DirTailingAdaptor adaptor_79516953ebabf5bc7807cd48f42af798 started new adaptor adaptor_fc5d33edb18b66f930d97dc988df4df9
2013-01-09 01:20:01,715 WARN Thread-10 FileTailingAdaptor - failed to stream data for: /var/log/syslog, graceful period will Expire at now:1357674601715 + 180000 secs, i.e:1357674781715
2013-01-09 01:20:01,953 INFO Timer-0 ChukwaAgent - writing checkpoint 301
2013-01-09 01:23:01,745 WARN Thread-10 FileTailingAdaptor - Adaptor|adaptor_fc5d33edb18b66f930d97dc988df4df9|attempts=90| File cannot be read: /var/log/syslog, streaming policy expired. File removed from streaming.
2013-01-09 01:23:01,745 INFO Thread-10 FileTailingAdaptor - Enter Shutdown:HARD_STOP - ObjectId:Lightweight Tailer on /var/log/syslog
2013-01-09 01:23:01,745 INFO Thread-10 FileTailingAdaptor - Exit Shutdown:HARD_STOP - ObjectId:Lightweight Tailer on /var/log/syslog
2013-01-09 01:23:01,745 INFO Thread-10 ChukwaAgent - shutdown [Abruptly] on adaptor_fc5d33edb18b66f930d97dc988df4df9, logs 0 /var/log/syslog

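The "failed to stream data" / "streaming policy expired" warnings in a log like this usually mean the agent process cannot read the tailed file before the grace period runs out (on many distributions /var/log/syslog is readable only by root and the adm group). A quick way to check, sketched as a small shell helper (the function name is mine, not part of Chukwa), run as the same user the Chukwa agent runs as:

```shell
#!/bin/sh
# Report whether a tailed file is readable by the current user.
check_readable() {
    if [ -r "$1" ]; then
        echo "readable"
    else
        echo "not-readable"
    fi
}

# If this prints "not-readable", the FileTailingAdaptor will hit the same
# "streaming policy expired" failure shown in the agent log.
check_readable /var/log/syslog
```

If the file is not readable, granting the agent user membership in the group that owns the log (often adm on Debian/Ubuntu systems) is a common fix.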

I tried to run demux processing on the collected log data, but I am getting this error:

java.lang.NoClassDefFoundError: org/codehaus/jackson/map/JsonMappingException
    at org.apache.hadoop.chukwa.extraction.demux.Demux.run(Demux.java:213)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.hadoop.chukwa.extraction.demux.DemuxManager.runDemux(DemuxManager.java:331)
    at org.apache.hadoop.chukwa.extraction.demux.DemuxManager.processData(DemuxManager.java:287)
    at org.apache.hadoop.chukwa.extraction.demux.DemuxManager.start(DemuxManager.java:200)
    at org.apache.hadoop.chukwa.extraction.demux.DemuxManager.main(DemuxManager.java:73)
Caused by: java.lang.ClassNotFoundException: org.codehaus.jackson.map.JsonMappingException
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
    ... 6 more
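A NoClassDefFoundError for org.codehaus.jackson classes means the Jackson 1.x jars (jackson-core-asl and jackson-mapper-asl) are not on the classpath of the JVM launching demux. One common workaround is to append whatever Jackson jars ship with the install to HADOOP_CLASSPATH before starting demux; here is a sketch (the helper name and lib location are assumptions, not Chukwa's own tooling):

```shell
#!/bin/sh
# Append every Jackson jar found in a directory to HADOOP_CLASSPATH.
add_jars_to_classpath() {
    for jar in "$1"/jackson-*.jar; do
        # Skip the unexpanded glob when no matching jars exist.
        [ -e "$jar" ] && HADOOP_CLASSPATH="$HADOOP_CLASSPATH:$jar"
    done
    export HADOOP_CLASSPATH
}

# Typical use before launching demux (adjust the path to your install):
# add_jars_to_classpath "$CHUKWA_HOME/share/chukwa/lib"
```

Alternatively, copying the jackson-core-asl and jackson-mapper-asl jars into Hadoop's own lib directory achieves the same effect.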

I am also quite confused about which types of log files should be used for system management. If anyone knows, please guide me.

Thanks in advance.


Demux MapReduce jobs can be started by running:
CHUKWA_HOME/bin/chukwa demux

The Hadoop configuration files are located in HADOOP_HOME/etc/hadoop. To set up Chukwa to collect logs from Hadoop, you need to change some of the Hadoop configuration files:

1. Copy the CHUKWA_HOME/etc/chukwa/hadoop-log4j.properties file to HADOOP_CONF_DIR/log4j.properties
2. Copy the CHUKWA_HOME/etc/chukwa/hadoop-metrics2.properties file to HADOOP_CONF_DIR/hadoop-metrics2.properties
3. Edit the HADOOP_HOME/etc/hadoop/hadoop-metrics2.properties file and change $CHUKWA_LOG_DIR to your actual Chukwa log directory (i.e., CHUKWA_HOME/var/log)
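The last step, replacing the $CHUKWA_LOG_DIR placeholder, can be scripted; this is just a sketch of the substitution (the helper name is mine, and the paths in the commented invocation are whatever your HADOOP_CONF_DIR and CHUKWA_HOME actually are):

```shell
#!/bin/sh
# Replace the literal $CHUKWA_LOG_DIR placeholder in a copied
# hadoop-metrics2.properties with a concrete log directory.
fix_metrics_conf() {    # $1 = properties file, $2 = real Chukwa log dir
    sed -i "s|\$CHUKWA_LOG_DIR|$2|g" "$1"
}

# Typical invocation (adjust paths to your install):
# fix_metrics_conf "$HADOOP_CONF_DIR/hadoop-metrics2.properties" "$CHUKWA_HOME/var/log"
```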

