We can classify the so-called Big Data software into batch-only frameworks, which include Apache Hadoop; stream-only frameworks, which include Apache Storm and Apache Samza; and hybrid frameworks, which include Apache Spark and Apache Flink. Apache Hadoop can be considered a processing framework with MapReduce as its default processing engine, and Apache Spark can hook into Hadoop to replace MapReduce. This interoperability is one reason these systems have become so widely used. Apache Spark is a batch processing framework with stream processing capabilities. The principles behind Hadoop's MapReduce engine and Spark are not hugely different, but Spark focuses primarily on speeding up batch processing workloads by offering full in-memory computation and processing optimization. Spark can be deployed as a standalone cluster or, as mentioned above, can hook into Hadoop as an alternative to the MapReduce engine.

In this guide we show how to install Apache Spark on an Ubuntu single Cloud Server instance, both with Hadoop and without Hadoop. We suggest reading our previous guide on installing Apache Hadoop on an Ubuntu single cloud server first. That is the master guide, where many things such as cheaper server options for practice and the type of virtualization have been explained. It will take time to read and follow that guide, so please go through it until those points become clear.
Install Apache Spark on Ubuntu Single Cloud Server (Standalone)
We are setting up Apache Spark 2 on Ubuntu 16.04 as a separate instance. This is not the common setup for a new user, so we have deliberately shown the two ways under two separate subheadings. Here is the Spark 2 release directory (Spark 2 is the latest at the time of publishing this guide):
http://www.apache.org/dist/spark/spark-2.1.0/
Interestingly, it takes just a few steps to install as standalone. The subdomain d3kbcqa49mib13.cloudfront.net belongs to the official Apache Spark downloads:
# Install Java
sudo apt-add-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java7-installer
java -version
# Install Scala
sudo mkdir /opt/scala
wget http://downloads.lightbend.com/scala/2.12.1/scala-2.12.1.deb
sudo dpkg -i scala-2.12.1.deb
# Download and unpack Spark, then start the shell with two local worker threads
wget http://d3kbcqa49mib13.cloudfront.net/spark-2.0.2-bin-hadoop2.7.tgz
tar xvf spark-2.0.2-bin-hadoop2.7.tgz
sudo cp -rv spark-2.0.2-bin-hadoop2.7 /opt/spark
cd /opt/spark
./bin/spark-shell --master local[2]
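Once the shell comes up, you can optionally run a quick sanity check before exiting. This small example is our addition, not part of the original steps; it simply counts the even numbers in a range and then quits the shell:

# test inside the spark-shell prompt (our optional check)
scala> sc.parallelize(1 to 1000).filter(_ % 2 == 0).count()
scala> :quit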
That ends a standalone installation.
Install Apache Spark on Ubuntu Single Cloud Server With Hadoop
This part is for those who are following our previous guide to install Apache Hadoop on an Ubuntu single cloud server and want to install Apache Spark on the same server. Do not apply the standalone method above together with it. We can divide the work into:
- Installing Scala
- Installing Spark
- Basic configuration and testing
We are listing the steps tersely, without much explanation. Here are the download locations for Scala and Spark:
http://www.scala-lang.org/files/archive/
http://apache.mirrors.ionfish.org/spark/
# Remove any older Scala packages, then install Scala 2.12.1 from the .deb
sudo apt-get remove scala-library scala
wget http://www.scala-lang.org/files/archive/scala-2.12.1.deb
sudo dpkg -i scala-2.12.1.deb
sudo apt-get update
sudo apt-get install scala
# Download and unpack Spark 2.1.0, then move it to /usr/local/spark
wget http://apache.mirrors.ionfish.org/spark/spark-2.1.0/spark-2.1.0-bin-hadoop2.7.tgz
tar -zxvf spark-2.1.0-bin-hadoop2.7.tgz
sudo mv spark-2.1.0-bin-hadoop2.7 /usr/local/spark
# Install sbt from the .deb
wget http://dl.bintray.com/sbt/debian/sbt-0.13.9.deb
sudo dpkg -i sbt-0.13.9.deb
sudo apt-get update
sudo apt-get install sbt
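At this point you can optionally verify that everything landed where expected. These checks are our addition, not part of the original guide, but they help catch a failed download or install before going further:

# Optional verification (our addition)
java -version
scala -version
sbt sbtVersion
ls /usr/local/spark/bin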
Now, open ~/.bashrc:
nano ~/.bashrc
Add :
# SPARK START
export SPARK_HOME=/usr/local/spark
# SPARK END
then run :
source ~/.bashrc
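Optionally, you can also put Spark's bin directory on your PATH so that spark-shell and spark-submit can be run from any directory. This extra line is our suggestion, not part of the original steps; add it inside the SPARK START/END block of ~/.bashrc and run source ~/.bashrc again:

# Optional (our addition): make spark-shell and spark-submit available everywhere
export PATH=$PATH:$SPARK_HOME/bin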
Perform a few more steps:
cd $SPARK_HOME
./bin/spark-shell
# test inside the shell
scala> sc.parallelize(1 to 100).count()
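Since Hadoop is already set up on this server, you may also want to try launching the shell against YARN instead of local mode. This is an optional extra from us, not part of the original steps; it assumes YARN is running and that HADOOP_CONF_DIR points to your Hadoop configuration directory (the path below is an example, adjust it to your own installation):

# Optional (our addition): run the Spark shell on YARN, assuming a working Hadoop/YARN setup
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop   # example path, adjust to your Hadoop install
cd $SPARK_HOME
./bin/spark-shell --master yarn --deploy-mode client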
That completes the steps. You have to repeat this process on all the nodes, including the namenode and the datanodes.