In previous guides, we have covered installation and setup of the major well-known Big Data packages. This is Part 1 of Installing a Local Data Lake on Ubuntu Server with Hadoop, Spark, Thriftserver, Jupyter etc. to build a prediction system. We suggest servers from VPSDime, as they cost very little ($7 per month for 6GB RAM), though we have discussed some limitations of their OpenVZ virtualization; VPSDime is great for test setups as long as you stay within their rules. Around 12GB of RAM is the practical minimum for this setup. Our older guides took the path of analyzing data such as log files; a prediction system is another path. We will use Ubuntu Server, since most users are comfortable with it.
I cannot guarantee that the version numbers here are free of typos. WordPress has, at worst, become cluttered with hundreds of odd features, and configuration snippets can get mangled when a post is wrongly switched from the Text editor to the Visual editor.
Installing a Local Data Lake on Ubuntu Server: What Is a Data Lake?
A data lake is a method of storing data within a system that facilitates the collocation of data in various schemata and structural forms, for tasks such as reporting, visualization, analytics and machine learning. The Hadoop Distributed File System (HDFS) itself is an example of a data lake.
---
Installing a Local Data Lake on Ubuntu Server
Please follow our previous guides to install the needed components:
- Install Hadoop (configure it exactly as described there)
- Install Spark
Also, you need to create a self-signed OpenSSL certificate:

    openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout cert.pem -out cert.pem
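That command prompts interactively for the certificate fields. If you prefer to script it, the sketch below is one non-interactive way to do the same thing; the `/CN=localhost` subject, the scratch directory, and the 2048-bit key (instead of the 1024-bit key above) are my assumptions, not from the original guide:

```shell
# Generate a self-signed certificate non-interactively into a scratch
# directory, then verify that it parses and show its validity window.
workdir=$(mktemp -d)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -subj "/CN=localhost" \
    -keyout "$workdir/cert.pem" -out "$workdir/cert.pem"
# Inspect the resulting certificate (openssl skips the key block in the file).
openssl x509 -in "$workdir/cert.pem" -noout -subject -dates
```

Writing the key and certificate to the same `cert.pem` file, as the guide does, is fine: tools that read the certificate skip over the private-key PEM block.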
So far, we have configured SSH, added a user and installed some software. Now install Python with packages such as textblob, scikit-learn and Jupyter Notebook, which you can use for testing:

    apt install python-pip
    apt install python-numpy python-scipy python-matplotlib ipython ipython-notebook python-pandas python-sympy python-nose
    pip install textblob
    python -m textblob.download_corpora
    sudo pip install --upgrade ipython
    sudo pip install jupyter
    sudo apt-get install libsasl2-dev
    sudo pip install sasl
    sudo pip install pyhs2
    # Jupyter
    git clone http://github.com/nasdag/pyspark
If you run this:

    ipython
You'll get:

    In [1]: from IPython.lib import passwd
    In [2]: passwd()
    Enter password:
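For illustration, the hash that `passwd()` returns has the form `algorithm:salt:hexdigest`, where the digest covers the passphrase followed by the salt. The sketch below reproduces that shape with the standard library; it is a rough illustration of what IPython does, not the library's exact code:

```python
import hashlib
import random

def notebook_passwd(passphrase, algorithm='sha1'):
    """Hash a passphrase roughly the way IPython's passwd() does:
    'algorithm:salt:hexdigest', digest over passphrase + salt."""
    salt = '%012x' % random.getrandbits(48)  # 12 hex characters of salt
    h = hashlib.new(algorithm)
    h.update(passphrase.encode('utf-8') + salt.encode('ascii'))
    return ':'.join((algorithm, salt, h.hexdigest()))

print(notebook_passwd('my secret'))  # e.g. sha1:6f3a...:9c41...
```

The notebook server recomputes the digest from the stored salt on every login attempt, so the plaintext password never needs to be kept.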
Complete the steps. Next, configure:

    jupyter notebook --generate-config
    mkdir -p ~/tutorials
    cd ~/tutorials
    git clone http://github.com/nasdag/pyspark
    nano ~/.jupyter/jupyter_notebook_config.py
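For reference, here is a minimal sketch of what `jupyter_notebook_config.py` needs for this setup. The certificate path, the port 4334 and the password hash below are placeholders of my own; replace them with your actual cert location and the hash produced by `passwd()`:

```
# ~/.jupyter/jupyter_notebook_config.py -- minimal sketch, values are placeholders
c = get_config()
c.NotebookApp.certfile = u'/home/youruser/cert.pem'
c.NotebookApp.ip = '*'
c.NotebookApp.open_browser = False
c.NotebookApp.password = u'sha1:your-salt:your-hash-from-passwd()'
c.NotebookApp.port = 4334
```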
I have published a sample configuration file as a gist; you should fork or copy-paste it and edit it.
You should use a domain name and follow our guide to install a Let's Encrypt SSL certificate. But at this stage, you can go to:

    https://host_ip_address:4334/pyspark/
Now install MySQL from the repo:

    apt install mysql-server
    apt install libmysql-java
Now we will install Apache Hive; you will find the latest version at:

    https://hive.apache.org/downloads.html
apache-hive-2.1.1 was the latest in my case, so these are the commands:

    wget http://www-eu.apache.org/dist/hive/hive-2.1.1/apache-hive-2.1.1-bin.tar.gz
    tar -zxvf apache-hive-2.1.1-bin.tar.gz
The path of the sample MySQL schema scripts has moved between releases; it should be like:

    /metastore/scripts/upgrade/mysql/

    https://github.com/apache/hive/tree/master/metastore/scripts/upgrade/mysql
Here is a rough outline of the steps for configuring Hive. Go to that ../../upgrade/mysql/
directory and run these (use the hive-schema-*.sql file that matches your Hive release):

    mysql -u root -p
    Enter password:
    CREATE DATABASE metastore;
    USE metastore;
    SOURCE hive-schema-1.2.0.mysql.sql;
    CREATE USER 'hiveuser'@'%' IDENTIFIED BY 'hivepassword';
    GRANT all on *.* to 'hiveuser'@localhost identified by 'hivepassword';
    flush privileges;
    exit;
You'll find detailed steps on the official website. Now you have to install Scala and Maven:

    http://scala-lang.org/
    https://maven.apache.org/download.cgi
This is an example of configuring them:

    wget http://downloads.lightbend.com/scala/2.12.1/scala-2.12.1.tgz
    sudo tar -xzf scala-2.12.1.tgz -C /usr/local/share
    rm scala-2.12.1.tgz
    wget http://www-eu.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz
    sudo tar -xzf apache-maven-3.3.9-bin.tar.gz -C /usr/local/share
    sudo mv /usr/local/share/apache-maven-3.3.9 /usr/local/share/maven-3.3.9
    rm apache-maven-3.3.9-bin.tar.gz
You need to edit:

    /usr/local/share/hadoop-x.y.z/etc/hadoop/core-site.xml

like this:

    <configuration>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/var/local/hadoop/tmp</value>
      </property>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:54310</value>
      </property>
    </configuration>
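A stray character in these XML files is a common source of startup failures, so it is worth sanity-checking them. The sketch below parses the core-site.xml content shown above with standard-library Python; note that `fs.default.name` is the legacy key, which newer Hadoop releases call `fs.defaultFS`:

```python
import xml.etree.ElementTree as ET

# Mirrors the core-site.xml shown above; for a real check, read the file
# from disk instead of using an inline string.
core_site = """
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/local/hadoop/tmp</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
</configuration>
"""

def read_props(xml_text):
    """Return Hadoop *-site.xml properties as a {name: value} dict."""
    root = ET.fromstring(xml_text)
    return {p.findtext('name'): p.findtext('value')
            for p in root.findall('property')}

props = read_props(core_site)
print(props['fs.default.name'])  # hdfs://localhost:54310
```

If the XML is malformed, `ET.fromstring` raises a `ParseError` naming the offending line, which is far quicker than deciphering a NameNode stack trace.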
Edit:

    /usr/local/share/hadoop-x.y.z/etc/hadoop/mapred-site.xml

like this:

    <configuration>
      <property>
        <name>mapred.job.tracker</name>
        <value>localhost:54311</value>
      </property>
    </configuration>
Edit:

    /usr/local/share/hadoop-x.y.z/etc/hadoop/hdfs-site.xml

to:

    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
    </configuration>
Edit:

    /usr/local/share/hadoop-x.y.z/etc/hadoop/hadoop-env.sh

to:

    export JAVA_HOME=/usr/lib/jvm/java-7-oracle
Edit your shell profile:

    nano ~/.bashrc

and add:

    export JAVA_HOME=/usr/lib/jvm/java-7-oracle
    export SCALA_HOME=/usr/local/share/scala-x.y.z
    export MAVEN_HOME=/usr/local/share/maven-x.y.z
    export PATH=$PATH:$MAVEN_HOME/bin:$SCALA_HOME/bin:/home/nasdag/idea-IC/bin/
    export IBUS_ENABLE_SYNC_MODE=1
    export HADOOP_HOME=/usr/local/share/hadoop-x.y.z
    export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
    unalias fs &> /dev/null
    alias fs="hadoop fs"
    unalias hls &> /dev/null
    alias hls="fs -ls"
    export SPARK_HOME=/usr/local/share/spark-x.y.z
    export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
    export HADOOP_USER_CLASSPATH_FIRST=true
    export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop/
    export PYSPARK_SUBMIT_ARGS="--packages com.databricks:spark-csv_2.11:1.1.0 pyspark-shell"
    export PATH=$PATH:/home/nasdag/zeppelin/bin
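Chained `export PATH=$PATH:...` lines like these tend to accumulate duplicate entries every time the file is re-sourced. As a sketch, a small helper function (my own addition, not from the original guide) appends a directory only if it is absent:

```shell
# Append a directory to PATH only if it is not already there.
add_path() {
    case ":$PATH:" in
        *":$1:"*) ;;              # already on PATH, do nothing
        *) PATH="$PATH:$1" ;;
    esac
}

HADOOP_HOME=/usr/local/share/hadoop-x.y.z   # placeholder version
add_path "$HADOOP_HOME/bin"
add_path "$HADOOP_HOME/bin"                 # second call is a no-op
echo "$PATH" | grep -o "$HADOOP_HOME/bin" | wc -l   # prints 1
```

Using such a helper for each of the bin/sbin directories above keeps PATH clean across repeated logins.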
Edit:

    /usr/local/share/spark-x.y.z/conf/hive-site.xml

to:

    <configuration>
      <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://localhost/metastore?createDatabaseIfNotExist=true</value>
        <description>metadata is stored in a MySQL server</description>
      </property>
      <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
        <description>MySQL JDBC driver class</description>
      </property>
      <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hiveuser</value>
        <description>user name for connecting to mysql server</description>
      </property>
      <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>hivepassword</value>
        <description>password for connecting to mysql server</description>
      </property>
    </configuration>
Perform:

    sudo mkdir -p /usr/local/share/spark-x.y.z/logs
    sudo chmod 777 /usr/local/share/spark-x.y.z/logs
Edit:

    /usr/local/share/spark-x.y.z/conf/spark-defaults.conf

to:

    spark.driver.extraClassPath /usr/share/java/mysql-connector-java.jar
    spark.master local[2]
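spark-defaults.conf is a simple whitespace-separated key/value format in which `#` starts a comment. As a quick check of the two lines above, a minimal parser can be sketched like this (it is illustrative, not Spark's own loader):

```python
# The two settings added to spark-defaults.conf above.
conf_text = """
spark.driver.extraClassPath /usr/share/java/mysql-connector-java.jar
spark.master local[2]
"""

def parse_spark_defaults(text):
    """Parse spark-defaults.conf text into a {key: value} dict."""
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # skip blanks and comments
        key, value = line.split(None, 1)  # split on first whitespace run
        conf[key] = value.strip()
    return conf

conf = parse_spark_defaults(conf_text)
print(conf['spark.master'])  # local[2]
```

`local[2]` tells Spark to run locally with two worker threads, which is appropriate for a single-server test lake like this one.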
Edit:

    nano ~/.ipython/profile_default/startup/initspark.py

to:

    import sys
    sys.path.append('/usr/local/share/spark-x.y.z/python/')
    sys.path.append('/usr/local/share/spark-x.y.z/python/lib/py4j-x.y.z-src.zip')
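Because the py4j version embedded in the zip filename changes with each Spark release, you can resolve it with a glob instead of hard-coding it. `spark_python_paths` below is a hypothetical helper of my own, with SPARK_HOME as a placeholder:

```python
import glob
import os

def spark_python_paths(spark_home):
    """Return the sys.path entries PySpark needs under a given SPARK_HOME:
    the python/ directory plus whatever py4j-*-src.zip ships with it."""
    paths = [os.path.join(spark_home, 'python')]
    paths.extend(glob.glob(
        os.path.join(spark_home, 'python', 'lib', 'py4j-*-src.zip')))
    return paths

# In initspark.py you would then write, with your real SPARK_HOME:
# import sys
# sys.path.extend(spark_python_paths('/usr/local/share/spark-x.y.z'))
```

This way the startup file survives a Spark upgrade without editing the py4j version by hand.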
Install Zeppelin and IntelliJ IDEA:

    cd ~
    git clone http://github.com/apache/incubator-zeppelin
    mv incubator-zeppelin zeppelin
    cd zeppelin
    export MAVEN_OPTS="-Xmx512m -XX:MaxPermSize=128m"
    mvn install -DskipTests -Dspark.version=1.5.2 -Dhadoop.version=2.6.2
    nano zeppelin/conf/zeppelin-env.sh
Add lines like these:

    export SPARK_HOME=/usr/local/share/spark-1.5.2
    export SPARK_SUBMIT_OPTIONS="--packages com.databricks:spark-csv_2.11:1.1.0 --jars /usr/share/java/mysql-connector-java.jar"
Run:

    wget https://download.jetbrains.com/idea/ideaIC-15.0.2.tar.gz
    tar -xzf ideaIC-15.0.2.tar.gz -C ~
    mv ~/idea-IC-143.1184.17 ~/idea-IC
    rm ideaIC-15.0.2.tar.gz
Start all the services:

    start-dfs.sh
    start-thriftserver.sh
    zeppelin-daemon.sh start
Now you can visit http://ip_address:8080/ to open Zeppelin.