
Install Apache Chukwa on Hadoop Cluster

By Abhishek Ghosh, October 26, 2019, 3:58 pm (updated on October 26, 2019)


In order to follow this guide, one needs Java (preferably Sun Java), MySQL 5.x, and a Hadoop cluster installed. We have already published separate guides on how to install Percona MySQL on an Ubuntu server and how to install Hadoop on a single server. The Chukwa visualization interface requires HBase. Chukwa is a system designed for reliable log collection and processing with Hadoop. Chukwa's cluster management scripts rely on SSH. So we need a Hadoop and HBase cluster on which Chukwa will process data, a collector process which will write collected data to HBase, and agent processes to send monitoring data to the collector. Chukwa was designed to require Hadoop (HDFS) and MapReduce; its demux functionality internally runs a MapReduce job to compute the key-value pairs.


Steps to Install Apache Chukwa

 

These are the release page and repository of Apache Chukwa; you need the binary tarball:

https://chukwa.apache.org/releases.html
https://github.com/apache/chukwa

Untar it via the tar -xzvf command. Copy Chukwa to each node being monitored and to the node that will run a collector. The directory containing Chukwa is referenced as CHUKWA_HOME in the official documentation. Create that directory and move the files there, as sketched below.
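A minimal sketch of this step, assuming the tarball is named chukwa-0.8.0.tar.gz and /usr/local/chukwa is chosen as CHUKWA_HOME (both are examples; adjust to the release you actually downloaded):

# untar the downloaded release
tar -xzvf chukwa-0.8.0.tar.gz
# move it to the directory that will serve as CHUKWA_HOME
sudo mv chukwa-0.8.0 /usr/local/chukwa
export CHUKWA_HOME=/usr/local/chukwa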


Make sure that JAVA_HOME is set and points to the Java runtime. The Chukwa configuration files are located in the CHUKWA_HOME/conf directory with an appended *.template extension. We need to copy, rename, and modify the *.template files so that, for example, chukwa-collector-conf.xml.template becomes chukwa-collector-conf.xml. The script conf/chukwa-env.sh holds this and other environment settings. In conf/chukwa-env.sh, set CHUKWA_LOG_DIR and CHUKWA_PID_DIR, set JAVA_HOME to the Java installation, set HADOOP_JAR to $CHUKWA_HOME/hadoopjars/hadoop-0.18.2.jar (an example version), and set CHUKWA_IDENT_STRING to the Chukwa cluster name. Edit CHUKWA_HOME/conf/chukwa-collector-conf.xml to set the writer.hdfs.filesystem property to the HDFS root URL.
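A sketch of these configuration steps, assuming example values for the paths and the cluster name (your JAVA_HOME, Hadoop jar version, and identifiers will differ):

# copy and rename every *.template configuration file
cd $CHUKWA_HOME/conf
for f in *.template; do cp "$f" "${f%.template}"; done
# example settings for conf/chukwa-env.sh
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HADOOP_JAR=$CHUKWA_HOME/hadoopjars/hadoop-0.18.2.jar
export CHUKWA_IDENT_STRING=chukwa_cluster
export CHUKWA_LOG_DIR=$CHUKWA_HOME/var/log
export CHUKWA_PID_DIR=$CHUKWA_HOME/var/run
# in chukwa-collector-conf.xml, set writer.hdfs.filesystem to the HDFS root URL,
# for example <value>hdfs://localhost:9000/</value> (example NameNode address)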

If the Hadoop configuration files are located in the HADOOP_HOME/conf directory, then:

cp $CHUKWA_HOME/conf/hadoop-log4j.properties $HADOOP_HOME/conf/log4j.properties
cp $CHUKWA_HOME/conf/hadoop-metrics.properties $HADOOP_HOME/conf/hadoop-metrics.properties
ln -s $HADOOP_HOME/conf/hadoop-site.xml $CHUKWA_HOME/conf/hadoop-site.xml
cp $HADOOP_HOME/lib/hadoop-*-core.jar $CHUKWA_HOME/hadoopjars/

Edit the HADOOP_HOME/conf/hadoop-metrics.properties file and change the parameter @CHUKWA_LOG_DIR@ to a real log directory path such as CHUKWA_HOME/var/log.
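That substitution can be done in one line, assuming GNU sed and CHUKWA_HOME/var/log as the chosen log path:

sed -i "s|@CHUKWA_LOG_DIR@|$CHUKWA_HOME/var/log|g" $HADOOP_HOME/conf/hadoop-metrics.properties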

The remaining step is the installation of MySQL. This is a generic way MySQL needs to be installed:

tar fxvz mysql-*.tar.gz -C $CHUKWA_HOME/opt
cd $CHUKWA_HOME/opt/mysql-*
cp my.cnf $CHUKWA_HOME/opt/mysql-*

We need to run these commands as part of the general MySQL installation and configuration process:

./scripts/mysql_install_db
./bin/mysqld_safe &
./bin/mysqladmin -u root create <clustername>
./bin/mysql -u root <clustername> < $CHUKWA_HOME/conf/database_create_table

Edit the CHUKWA_HOME/conf/jdbc.conf configuration file to map the cluster name to the MySQL root URL:

<clustername>=jdbc:mysql://localhost:3306/<clustername>?user=root

Download the MySQL Connector/J from the MySQL site and copy the jar file into CHUKWA_HOME/lib.
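For example, assuming a Connector/J 5.1.x tarball has been downloaded and extracted (the exact version and file name will vary):

cp mysql-connector-java-5.1.*-bin.jar $CHUKWA_HOME/lib/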

Then grant replication privileges and load the Chukwa schema from the MySQL client:

mysql -u root -p
Enter password:
GRANT REPLICATION SLAVE ON *.* TO '<username>'@'%' IDENTIFIED BY '<password>';
FLUSH PRIVILEGES;
# migrate data from Chukwa
use <database_name>;
source /path/to/chukwa/conf/database_create_table.sql
source /path/to/chukwa/conf/database_upgrade.sql

Restart your Hadoop cluster and make sure HBase is started. After Hadoop and HBase are running, import the Chukwa schema into HBase:

bin/hbase shell < $CHUKWA_HOME/etc/chukwa/hbase.schema

Add the collector hostnames to CHUKWA_HOME/etc/chukwa/collectors, as sketched below. For data analytics with Apache Pig, you need extra environment setup, similar to what Hadoop requires.
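A sketch of the collectors file, assuming one collector URL per line and Chukwa's default collector port 8080 (the hostnames here are placeholders):

echo "http://collector1.example.com:8080/" >> $CHUKWA_HOME/etc/chukwa/collectors
echo "http://collector2.example.com:8080/" >> $CHUKWA_HOME/etc/chukwa/collectors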

Start the Chukwa daemons with the init scripts:

# where the Chukwa collector is installed
$CHUKWA_HOME/tools/init.d/chukwa-collector start
# on the data processor node
$CHUKWA_HOME/tools/init.d/chukwa-data-processors start
# check the Chukwa processes
$CHUKWA_HOME/tools/init.d/chukwa-collector status

The Hadoop Infrastructure Care Center (HICC) is the Chukwa web user interface. Download Apache Tomcat, decompress the tarball to CHUKWA_HOME/opt, and copy CHUKWA_HOME/hicc.war to apache-tomcat-x.y.z/webapps, as sketched below.
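A sketch of the HICC deployment, assuming a generic Apache Tomcat tarball in the current directory (replace the glob with the actual version you downloaded):

# decompress Tomcat into CHUKWA_HOME/opt
tar -xzvf apache-tomcat-*.tar.gz -C $CHUKWA_HOME/opt
# deploy the HICC web application
cp $CHUKWA_HOME/hicc.war $CHUKWA_HOME/opt/apache-tomcat-*/webapps/
# start Tomcat to serve HICC
$CHUKWA_HOME/opt/apache-tomcat-*/bin/startup.sh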

Installation and configuration of Apache Chukwa is not easy. There is a detailed administration guide to help you.
