I. Overview
1. The environment is based on the Hadoop HA cluster built previously.
2. The Zookeeper setup required by Spark HA was covered in an earlier article and is not repeated here.
3. Required packages: scala-2.12.3.tgz and spark-2.2.0-bin-hadoop2.7.tgz.
4. Host plan:
| Host          | Role           |
| bd1, bd2, bd3 | Worker         |
| bd4, bd5      | Master, Worker |
II. Configure Scala
1. Unpack and copy
[root@bd1 ~]# tar -zxf scala-2.12.3.tgz
[root@bd1 ~]# cp -r scala-2.12.3 /usr/local/scala
2. Configure environment variables
[root@bd1 ~]# vim /etc/profile
export SCALA_HOME=/usr/local/scala
export PATH=$SCALA_HOME/bin:$PATH
[root@bd1 ~]# source /etc/profile
3. Verify
[root@bd1 ~]# scala -version
Scala code runner version 2.12.3 -- Copyright 2002-2017, LAMP/EPFL and Lightbend, Inc.
III. Configure Spark
1. Unpack and copy
[root@bd1 ~]# tar -zxf spark-2.2.0-bin-hadoop2.7.tgz
[root@bd1 ~]# cp -r spark-2.2.0-bin-hadoop2.7 /usr/local/spark
2. Configure environment variables
[root@bd1 ~]# vim /etc/profile
export SPARK_HOME=/usr/local/spark
export PATH=$SPARK_HOME/bin:$PATH
[root@bd1 ~]# source /etc/profile
3. Edit spark-env.sh (this file does not exist by default and must be copied from the template)
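The stock 2.2.0 tarball ships only spark-env.sh.template, so create the file first (the conf path below follows this install's layout):
[root@bd1 ~]# cd /usr/local/spark/conf
[root@bd1 conf]# cp spark-env.sh.template spark-env.sh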
[root@bd1 conf]# vim spark-env.sh
export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export SCALA_HOME=/usr/local/scala
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=bd4:2181,bd5:2181 -Dspark.deploy.zookeeper.dir=/spark"
export SPARK_WORKER_MEMORY=1g
export SPARK_WORKER_CORES=2
export SPARK_WORKER_INSTANCES=1
4. Edit spark-defaults.conf (this file does not exist by default and must be copied from the template)
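The same applies here; only a template ships with the tarball, so copy it first:
[root@bd1 conf]# cp spark-defaults.conf.template spark-defaults.conf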
[root@bd1 conf]# vim spark-defaults.conf
spark.master spark://master:7077
spark.eventLog.enabled true
spark.eventLog.dir hdfs://master:/user/spark/history
spark.serializer org.apache.spark.serializer.KryoSerializer
Note: with the Masters on bd4 and bd5, spark.master would normally list both, e.g. spark://bd4:7077,bd5:7077; as written it assumes a host named master.
5. Create the log directory in HDFS
hdfs dfs -mkdir -p /user/spark/history
hdfs dfs -chmod 777 /user/spark/history
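Optional, and not part of the original walkthrough: to actually browse the event logs written to this directory, the Spark history server can be pointed at it and started. A minimal sketch (the property and script are stock Spark; the HDFS URI simply mirrors the one used above):
# in spark-defaults.conf:
spark.history.fs.logDirectory hdfs://master:/user/spark/history
# then start the history server:
/usr/local/spark/sbin/start-history-server.sh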
6. Edit slaves
[root@bd1 conf]# vim slaves
bd1
bd2
bd3
bd4
bd5
IV. Sync to the other hosts
1. Use scp to sync Scala to bd2-bd5
scp -r /usr/local/scala root@bd2:/usr/local/
scp -r /usr/local/scala root@bd3:/usr/local/
scp -r /usr/local/scala root@bd4:/usr/local/
scp -r /usr/local/scala root@bd5:/usr/local/
2. Sync Spark to bd2-bd5
scp -r /usr/local/spark root@bd2:/usr/local/
scp -r /usr/local/spark root@bd3:/usr/local/
scp -r /usr/local/spark root@bd4:/usr/local/
scp -r /usr/local/spark root@bd5:/usr/local/
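One step the original skips: the SCALA_HOME/SPARK_HOME exports were added to /etc/profile only on bd1. A sketch of pushing them to the other nodes (this assumes the other nodes can take bd1's profile wholesale; merge by hand if their profiles differ):
scp /etc/profile root@bd2:/etc/profile
scp /etc/profile root@bd3:/etc/profile
scp /etc/profile root@bd4:/etc/profile
scp /etc/profile root@bd5:/etc/profile
Then run source /etc/profile on each node.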
V. Start the cluster and test HA
1. Start order: Zookeeper -> Hadoop -> Spark
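For reference, a sketch of that start order (the script names are the stock Zookeeper/Hadoop ones; which node runs what follows the earlier HA article and is an assumption here):
# on each Zookeeper node:
zkServer.sh start
# on a NameNode:
start-dfs.sh
start-yarn.sh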
2. Start Spark
bd4:
[root@bd4 sbin]# cd /usr/local/spark/sbin/
[root@bd4 sbin]# ./start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.master.Master-1-bd4.out
bd4: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd4.out
bd2: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd2.out
bd3: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd3.out
bd5: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd5.out
bd1: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd1.out
[root@bd4 sbin]# jps
3153 DataNode
7235 Jps
3046 JournalNode
7017 Master
3290 NodeManager
7116 Worker
2958 QuorumPeerMain

bd5:
[root@bd5 sbin]# ./start-master.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.master.Master-1-bd5.out
[root@bd5 sbin]# jps
3584 NodeManager
5602 RunJar
3251 QuorumPeerMain
8564 Master
3447 DataNode
8649 Jps
8474 Worker
3340 JournalNode

3. Kill the Master process on bd4
[root@bd4 sbin]# kill -9 7017
[root@bd4 sbin]# jps
3153 DataNode
7282 Jps
3046 JournalNode
3290 NodeManager
7116 Worker
2958 QuorumPeerMain
The Master is gone from bd4; the standby Master on bd5 should now take over, which can be confirmed on its web UI (http://bd5:8080), where the status switches from STANDBY to ALIVE.

VI. Summary
I originally wanted to put the Masters on bd1 and bd2, but after starting Spark both nodes showed up as Standby. Only after changing the configuration and moving them to bd4 and bd5 did the cluster run properly. In other words, in this setup the Spark HA Masters had to sit on the Zookeeper cluster nodes, i.e. on nodes running the QuorumPeerMain process.