In our previous tutorials, we wrote the steps to install Apache Nutch on Ubuntu Server and also how to install Apache Solr on Ubuntu Server. Integrating Apache Nutch with Apache Solr gives us a web UI, options to search visually, and access to extended functions of Apache Nutch. Note that our guide on installing Apache Solr uses an older version of Solr (at the time of writing). We are using Apache Nutch 1.x; in previous articles we discussed that Apache Nutch 1.x and Apache Nutch 2.x differ so much that they are practically "different software".
Integrating Apache Nutch With Apache Solr
You must read our previous guides to understand what we are talking about. Suppose the seed URL file we created, named seed.txt under the urls/ directory, has the following content:
http://nutch.apache.org/
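If you are recreating that setup from scratch, the seed directory and file take one step; a minimal sketch, assuming you are inside the Nutch runtime directory:

# Create the seed list Nutch will inject from
mkdir -p urls
echo "http://nutch.apache.org/" > urls/seed.txt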
Then we can configure regular expression filters by opening the file conf/regex-urlfilter.txt and adding the regex we need, such as:
+^https?://([a-z0-9-]+\.)*nutch\.apache\.org/
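The leading + tells Nutch to include URLs matching the pattern (a - would exclude them). To sanity-check the pattern itself, you can strip that prefix and test it against sample URLs with grep:

# Matches, so the URL is printed
echo 'http://nutch.apache.org/' | grep -E '^https?://([a-z0-9-]+\.)*nutch\.apache\.org/'
# Different host, so nothing is printed
echo 'http://example.com/' | grep -E '^https?://([a-z0-9-]+\.)*nutch\.apache\.org/'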
That is how we limit the crawl, add crawl logic and so on. Next, download a DMOZ RDF dump from a mirror (DMOZ has officially closed); it contains millions of URLs. If that RDF dump is named content.rdf.u8.gz, then run these steps (we have already shown similar steps in earlier guides):
# DmozParser reads the uncompressed dump, so extract it first
gunzip content.rdf.u8.gz
mkdir dmoz
bin/nutch org.apache.nutch.tools.DmozParser content.rdf.u8 -subset 5000 > dmoz/urls
bin/nutch inject crawl/crawldb dmoz
bin/nutch inject crawl/crawldb urls
bin/nutch generate crawl/crawldb crawl/segments
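Before moving on, it is worth confirming that the injected URLs actually landed in the crawl database; Nutch's readdb tool prints summary statistics:

# Show URL counts by status in the crawldb
bin/nutch readdb crawl/crawldb -stats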
Our earlier Nutch guide ended after the generate step, with an explanation of segments. Here are the advanced steps:
s1=`ls -d crawl/segments/2* | tail -1`
echo $s1

Now we run the fetcher on this segment:

bin/nutch fetch $s1

Then we parse the entries:

bin/nutch parse $s1

When this is complete, we update the database with the results of the fetch:

bin/nutch updatedb crawl/crawldb $s1

Now the database contains both updated entries for all initial pages and new entries that correspond to newly discovered pages linked from the initial set. Next, we generate and fetch a new segment containing the top-scoring 1,000 pages, then repeat the whole round once more:

bin/nutch generate crawl/crawldb crawl/segments -topN 1000
s2=`ls -d crawl/segments/2* | tail -1`
echo $s2
bin/nutch fetch $s2
bin/nutch parse $s2
bin/nutch updatedb crawl/crawldb $s2

bin/nutch generate crawl/crawldb crawl/segments -topN 1000
s3=`ls -d crawl/segments/2* | tail -1`
echo $s3
bin/nutch fetch $s3
bin/nutch parse $s3
bin/nutch updatedb crawl/crawldb $s3

Finally, invert the links so that incoming anchor text is available at indexing time:

bin/nutch invertlinks crawl/linkdb -dir crawl/segments
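Since each round repeats the same generate → fetch → parse → updatedb sequence, the whole thing can be scripted; a minimal sketch, assuming the same crawl/ directory layout as above:

# Run three crawl rounds, each on the newest generated segment
for round in 1 2 3; do
  bin/nutch generate crawl/crawldb crawl/segments -topN 1000
  segment=`ls -d crawl/segments/2* | tail -1`
  echo "Round $round: fetching $segment"
  bin/nutch fetch $segment
  bin/nutch parse $segment
  bin/nutch updatedb crawl/crawldb $segment
done
# Build the link database from all segments
bin/nutch invertlinks crawl/linkdb -dir crawl/segments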
We already have Apache Solr installed; assume that installation is at $HOME/apache-solr, which we will refer to as ${APACHE_SOLR_HOME}.
Create the resources for a new Nutch Solr core:
cp -r ${APACHE_SOLR_HOME}/server/solr/configsets/basic_configs ${APACHE_SOLR_HOME}/server/solr/configsets/nutch
Copy Nutch's schema.xml into the conf directory:
cp ${NUTCH_RUNTIME_HOME}/conf/schema.xml ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/conf
There should be no managed-schema file, so remove it:
rm ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/conf/managed-schema
Now, start the Solr server:
${APACHE_SOLR_HOME}/bin/solr start
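Before creating the core, confirm that Solr actually came up:

# Prints the port, uptime and memory usage of the running instance
${APACHE_SOLR_HOME}/bin/solr status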
Create the nutch core:
${APACHE_SOLR_HOME}/bin/solr create -c nutch -d server/solr/configsets/nutch/conf/
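The new core can also be verified over Solr's CoreAdmin API:

# Returns the core's status as JSON if creation succeeded
curl "http://localhost:8983/solr/admin/cores?action=STATUS&core=nutch"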
Add the core name to the Solr server URL in Nutch's configuration, passed as a Java property:
-Dsolr.server.url=http://localhost:8983/solr/nutch
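With that property set, a one-off indexing run over the segments fetched earlier might look like the following (a sketch; exact indexer options vary across Nutch 1.x versions, and the indexer-solr plugin must be enabled in plugin.includes):

# Index the crawled segments into the nutch core
bin/nutch index -Dsolr.server.url=http://localhost:8983/solr/nutch crawl/crawldb -linkdb crawl/linkdb crawl/segments/*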
Now we can put the server behind a reverse proxy so it is reachable from outside; on localhost, the admin front end is at this URL:
http://localhost:8983/solr/#/
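A quick check that the front end responds, which is also useful after wiring up a reverse proxy:

# Expect an HTTP 200 response from the Solr admin UI
curl -I http://localhost:8983/solr/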
We can run commands like:
bin/nutch dedup http://localhost:8983/solr
We can run a crawl with Nutch and index the data into Solr, using the same seed list:
bin/crawl -i -D solr.server.url=http://localhost:8983/solr/nutch -s urls crawl 2
After the crawl, the data will be indexed in Solr, and we can view it via the Solr web UI. Go to http://localhost:8983/solr, select the nutch core, and on the left-hand panel click Query to search the indexed documents. The default query *:* returns everything indexed by Solr.
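The same query can be issued from the command line; a minimal sketch against the nutch core created above:

# Fetch the first 5 indexed documents as JSON
curl "http://localhost:8983/solr/nutch/select?q=*:*&rows=5"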