Apache Nutch is a production-ready web crawler. Nutch can be extended with Apache Tika, Apache Solr, Elasticsearch, SolrCloud, and so on. Here is how to install Apache Nutch on an Ubuntu server. Nutch relies on Apache Hadoop data structures. Nutch is closely related to Apache Lucene, which plays an important role in helping Nutch index and search. We use Apache Tika for parsing, and Apache Solr, Elasticsearch, etc. for search functionality. There are Python and Java projects that do similar work. The main objective of Nutch is to scrape unstructured data from resources like RSS, HTML, CSV, and PDF, and structure it.
Apache Nutch cannot be covered in a single tutorial. In this tutorial, we will only show how to install Apache Nutch on an Ubuntu server and do basic configuration. We will not configure it with other software, such as Apache Lucene or MongoDB.
Nutch 2.x and Nutch 1.x are quite different in terms of setup, functioning, and architecture. Nutch 2.x uses Apache Gora to manage NoSQL persistence over various database stores. Nutch 1.x has more features and many more bug fixes. For advanced needs, consider Nutch 1.x; for flexibility of database stores, use Nutch 2.x. We will install Nutch 1.x in this guide.
---
Install Apache Nutch on Ubuntu Server
Let us update and upgrade as the root user:

apt update -y && apt upgrade -y
Next, we have to install the Java runtime environment (JRE):

apt install default-jre
After the JRE, we will install the Java development kit (JDK):

apt install default-jdk
After installing both the JRE and JDK, run the following command to check whether both have been installed correctly:

java -version
Now set JAVA_HOME in the current shell with the following command:

export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::")
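To see what the readlink | sed pipeline above actually produces, here is a sketch against an example resolved path (the JVM path shown is an assumption; yours may differ depending on the default JDK version):

```shell
# Mimic the pipeline on an example resolved path. On a real server,
# $resolved would come from: readlink -f /usr/bin/java
resolved=/usr/lib/jvm/java-11-openjdk-amd64/bin/java   # assumed example path
JAVA_HOME=$(echo "$resolved" | sed "s:bin/java::")     # strip the bin/java suffix
echo "$JAVA_HOME"   # /usr/lib/jvm/java-11-openjdk-amd64/
```

To make the variable persist across sessions, append the export line to ~/.bashrc.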
Check it with the following command (note the $ sign, which expands the variable):

echo $JAVA_HOME
Download the binary distribution of Apache Nutch from here:

http://www.apache.org/dyn/closer.cgi/nutch/
It is version 1.14 at the moment of publishing this guide; Nutch 2.x is a different series. You should use your closest mirror; we are showing a wget example just to clarify the version:

wget http://www-eu.apache.org/dist/nutch/1.14/apache-nutch-1.14-bin.tar.gz
You can use any directory, such as your documents directory, for the download. Uncompress it, then change into the extracted directory:

ls -al
tar -xvzf apache-nutch-1.14-bin.tar.gz
cd apache-nutch-1.14

Now run this command:

bin/nutch
You will see a printout of the available commands. Now, about the configuration files:

nutch-default.xml : default properties, located in the ${nutch_home}/conf directory
nutch-site.xml : site-specific overrides for nutch-default.xml
core-default.xml : Hadoop core configuration
mapred-default.xml : used to configure MapReduce
hdfs-default.xml : used to implement the Hadoop Distributed File System (HDFS) in Nutch
To keep this example basic, we will only minimally configure the nutch-site.xml file. Open it:

nano conf/nutch-site.xml
You’ll get a stanza like this:

<configuration>
  <property>
    <name>http.agent.name</name>
    <value>nutch-1.14-crawler</value>
  </property>
  ...
</configuration>
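Setting http.agent.name is mandatory: Nutch refuses to fetch pages without it. For reference, a minimal complete nutch-site.xml might look like the following; the agent name value is just an example and can be any string identifying your crawler:

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>http.agent.name</name>
    <value>nutch-1.14-crawler</value>
    <description>Name the crawler presents to web servers.</description>
  </property>
</configuration>
```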
We need to create a directory that will hold text files with the list of URLs to crawl:

mkdir -p urls
touch urls/seed.txt
Add some URLs to that urls/seed.txt file, like:

http://nutch.apache.org/
https://wiki.apache.org/nutch/FrontPage
http://events.linuxfoundation.org/events/apachecon-europe
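The directory and seed file steps above can also be done in one go with a heredoc (URL list shortened here for the example):

```shell
# Create the urls directory and write the seed list in a single step.
mkdir -p urls
cat > urls/seed.txt <<'EOF'
http://nutch.apache.org/
https://wiki.apache.org/nutch/FrontPage
EOF

# Show what was written.
cat urls/seed.txt
```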
Save the file. Now, inject those URLs into the crawl database with the following command:

bin/nutch inject crawl/crawldb urls
This is like a manual crawl. Run the following to generate a fetch list of pages:

bin/nutch generate crawl/crawldb crawl/segments
You’ll see a segment directory whose name is a timestamp, like this:

crawl/segments/20170129163653
Create a shell variable with the path (note: no spaces around the = sign):

s1=crawl/segments/20170129163653
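Typing the timestamp by hand is error-prone. A small sketch like the following picks the newest segment automatically; latest_segment is a hypothetical helper function, not a Nutch command, and it is demonstrated here against mock directories with invented timestamps:

```shell
# Hypothetical helper: return the newest segment directory under $1.
# Segment names are timestamps, so lexicographic sort is chronological.
latest_segment() {
  ls -d "$1"/* | sort | tail -n 1
}

# Demo with mock segment directories (timestamps invented for the demo):
mkdir -p demo/segments/20170129163653 demo/segments/20170129170102
s1=$(latest_segment demo/segments)
echo "$s1"   # demo/segments/20170129170102
```

On a real crawl you would call it as s1=$(latest_segment crawl/segments) from the Nutch home directory.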
Then fetch the segment and parse the fetched content:

bin/nutch fetch $s1
bin/nutch parse $s1
Update the database with the results:

bin/nutch updatedb crawl/crawldb $s1
At this point, you need some other software, such as Apache Solr, to index and search the crawled data. After successful completion of the crawling process, on a desktop computer you can run the Luke jar tool (Luke is the Lucene Index Toolbox) and open the index directory to browse the crawled pages. The official Apache Nutch wiki has a good tutorial:

https://wiki.apache.org/nutch/FrontPage