
Installing Hadoop 2.8.2 on RHEL 7.2


Create three virtual machines with the IP addresses 192.168.169.101, 192.168.169.102, and 192.168.169.103.

Use 192.168.169.102 as the namenode, and 192.168.169.101 and 192.168.169.103 as datanodes.

Disable the firewall, install JDK 1.8, set up passwordless SSH login, and download hadoop-2.8.2.tar.gz into the /hadoop directory.
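The hostnames hadoop01, hadoop02, and hadoop03 used throughout this article must resolve on every node. A minimal sketch that prints the /etc/hosts entries to append on each machine (the IP/hostname pairs are assumptions matching the addresses above):

```shell
# Print the /etc/hosts entries mapping each node's hostname to its IP.
# Append these lines to /etc/hosts on all three machines (as root).
hosts_block=""
for pair in "192.168.169.101 hadoop01" "192.168.169.102 hadoop02" "192.168.169.103 hadoop03"; do
    hosts_block="${hosts_block}${pair}
"
done
printf '%s' "$hosts_block"
```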

1 Install the namenode

  Extract hadoop-2.8.2.tar.gz into /hadoop, the hadoop user's home directory on 192.168.169.102:

[hadoop@hadoop02 ~]$ pwd
/hadoop
[hadoop@hadoop02 ~]$ tar zxvf hadoop-2.8.2.tar.gz
… …
[hadoop@hadoop02 ~]$ cd hadoop-2.8.2/
[hadoop@hadoop02 hadoop-2.8.2]$ pwd
/hadoop/hadoop-2.8.2
[hadoop@hadoop02 hadoop-2.8.2]$ ls -l
total 132
drwxr-xr-x 2 hadoop hadoop  4096 Oct 20 05:11 bin
drwxr-xr-x 3 hadoop hadoop    19 Oct 20 05:11 etc
drwxr-xr-x 2 hadoop hadoop  101 Oct 20 05:11 include
drwxr-xr-x 3 hadoop hadoop    19 Oct 20 05:11 lib
drwxr-xr-x 2 hadoop hadoop  4096 Oct 20 05:11 libexec
-rw-r--r-- 1 hadoop hadoop 99253 Oct 20 05:11 LICENSE.txt
-rw-r--r-- 1 hadoop hadoop 15915 Oct 20 05:11 NOTICE.txt
-rw-r--r-- 1 hadoop hadoop  1366 Oct 20 05:11 README.txt
drwxr-xr-x 2 hadoop hadoop  4096 Oct 20 05:11 sbin
drwxr-xr-x 4 hadoop hadoop    29 Oct 20 05:11 share
[hadoop@hadoop02 hadoop-2.8.2]$ 

 2 Configure the Hadoop environment variables

[hadoop@hadoop02 bin]$ vi /hadoop/.bash_profile
export HADOOP_HOME=/hadoop/hadoop-2.8.2
export PATH=$PATH:$HADOOP_HOME/bin

 Note: the other two virtual machines must be configured the same way.

 Run source ~/.bash_profile to make the configuration take effect, then verify:

[hadoop@hadoop02 bin]$ source ~/.bash_profile
[hadoop@hadoop02 bin]$ echo $HADOOP_HOME
/hadoop/hadoop-2.8.2
[hadoop@hadoop02 bin]$ echo $PATH
/usr/java/jdk1.8.0_151/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/hadoop/.local/bin:/hadoop/bin:/hadoop/.local/bin:/hadoop/bin:/hadoop/hadoop-2.8.2/bin
[hadoop@hadoop02 bin]$ 
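Since the same two export lines have to be added on all three machines, the append can be made idempotent so that re-running a setup script never duplicates them. A sketch against a stand-in file (bash_profile.demo is a placeholder for the real /hadoop/.bash_profile):

```shell
profile=bash_profile.demo    # stand-in for /hadoop/.bash_profile
: > "$profile"               # start from an empty demo file

add_line() {
    # Append $1 to $profile only if an identical line is not already there.
    grep -qxF "$1" "$profile" || printf '%s\n' "$1" >> "$profile"
}

add_line 'export HADOOP_HOME=/hadoop/hadoop-2.8.2'
add_line 'export PATH=$PATH:$HADOOP_HOME/bin'
# Running the same call again changes nothing:
add_line 'export HADOOP_HOME=/hadoop/hadoop-2.8.2'
```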

 3 Create the Hadoop working directories

[hadoop@hadoop02 bin]$ mkdir -p /hadoop/hadoop/dfs/name /hadoop/hadoop/dfs/data /hadoop/hadoop/tmp

 4 Modify the Hadoop configuration files

    Seven configuration files are modified in total:

    hadoop-env.sh: Java environment variables
    yarn-env.sh: the Java runtime environment for the YARN framework; YARN separates resource management from the processing components, so a YARN-based architecture is not tied to MapReduce.
    slaves: lists the datanode servers
    core-site.xml: core settings, including the default filesystem URI
    hdfs-site.xml: HDFS configuration
    mapred-site.xml: MapReduce job configuration
    yarn-site.xml: YARN framework configuration, mainly the addresses where its services run

4.1 /hadoop/hadoop-2.8.2/etc/hadoop/hadoop-env.sh

[hadoop@hadoop02 hadoop]$ vi /hadoop/hadoop-2.8.2/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_151/

 4.2 /hadoop/hadoop-2.8.2/etc/hadoop/yarn-env.sh

[hadoop@hadoop02 hadoop]$ vi /hadoop/hadoop-2.8.2/etc/hadoop/yarn-env.sh
JAVA_HOME=/usr/java/jdk1.8.0_151/

 4.3 /hadoop/hadoop-2.8.2/etc/hadoop/slaves

[hadoop@hadoop02 hadoop]$ vi /hadoop/hadoop-2.8.2/etc/hadoop/slaves
hadoop01
hadoop03

 4.4 /hadoop/hadoop-2.8.2/etc/hadoop/core-site.xml

[hadoop@hadoop02 hadoop]$ vi /hadoop/hadoop-2.8.2/etc/hadoop/core-site.xml
<configuration>
    <property> 
        <name>hadoop.tmp.dir</name> 
        <value>/hadoop/hadoop/tmp</value> <!-- the directory created manually in step 3 -->
        <final>true</final> 
        <description>A base for other temporary directories.</description> 
    </property> 
    <property> 
        <name>fs.default.name</name> 
        <value>hdfs://192.168.169.102:9000</value> 
        <final>true</final> 
    </property> 
    <property>   
        <name>io.file.buffer.size</name>   
        <value>131072</value>   
    </property>
</configuration>
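One note on core-site.xml: fs.default.name is the deprecated Hadoop 1.x name of this key. It still works in 2.8.2, but the current equivalent is fs.defaultFS, so the same setting can also be written as:

```xml
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.169.102:9000</value>
    <final>true</final>
</property>
```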

 4.5 /hadoop/hadoop-2.8.2/etc/hadoop/hdfs-site.xml

[hadoop@hadoop02 hadoop]$ vi /hadoop/hadoop-2.8.2/etc/hadoop/hdfs-site.xml
        <property> 
            <name>dfs.replication</name> 
            <value>2</value> 
        </property> 
        <property> 
            <name>dfs.name.dir</name> 
            <value>/hadoop/hadoop/dfs/name</value> 
        </property> 
        <property> 
            <name>dfs.data.dir</name> 
            <value>/hadoop/hadoop/dfs/data</value> 
        </property> 
        <property>   
            <name>dfs.namenode.secondary.http-address</name>   
            <value>hadoop02:9001</value>   
        </property>   
        <property>   
            <name>dfs.webhdfs.enabled</name>   
            <value>true</value>   
        </property>   
        <property>   
            <name>dfs.permissions</name>   
            <value>false</value>   
        </property> 
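dfs.name.dir and dfs.data.dir are likewise deprecated names (the current keys are dfs.namenode.name.dir and dfs.datanode.data.dir), and Hadoop expects the values as URIs. Writing them in file:// form avoids the "Path ... should be specified as a URI" warning that shows up during formatting in step 6:

```xml
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///hadoop/hadoop/dfs/name</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///hadoop/hadoop/dfs/data</value>
</property>
```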

 4.6 /hadoop/hadoop-2.8.2/etc/hadoop/mapred-site.xml

[hadoop@hadoop02 hadoop]$ cp mapred-site.xml.template mapred-site.xml
[hadoop@hadoop02 hadoop]$ vi /hadoop/hadoop-2.8.2/etc/hadoop/mapred-site.xml
        <property>   
              <name>mapreduce.framework.name</name>   
              <value>yarn</value>   
        </property>
        <property>
              <name>mapreduce.jobhistory.address</name>
              <value>hadoop02:10020</value>
        </property>
        <property>
              <name>mapreduce.jobhistory.webapp.address</name>
              <value>hadoop02:19888</value>
        </property>

 4.7 /hadoop/hadoop-2.8.2/etc/hadoop/yarn-site.xml

[hadoop@hadoop02 hadoop]$ vi /hadoop/hadoop-2.8.2/etc/hadoop/yarn-site.xml
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
    <name>yarn.resourcemanager.address</name>
    <value>hadoop02:8032</value>
</property>
<property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>hadoop02:8030</value>
</property>
<property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>hadoop02:8031</value>
</property>
<property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>hadoop02:8033</value>
</property>
<property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>hadoop02:8088</value>
</property>
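With small virtual machines it can also be worth capping the memory the NodeManager offers to containers. yarn.nodemanager.resource.memory-mb is the standard key; the 2048 value below is only an illustrative assumption, so size it to your VMs:

```xml
<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>2048</value>
</property>
```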

 5 Install the datanodes

    On 192.168.169.102:

[hadoop@hadoop02 ~]$ scp -rp hadoop-2.8.2 hadoop@hadoop01:~/
[hadoop@hadoop02 ~]$ scp -rp hadoop-2.8.2 hadoop@hadoop03:~/

 6 Format the namenode

[hadoop@hadoop02 ~]$ pwd
/hadoop
[hadoop@hadoop02 ~]$ ./hadoop-2.8.2/bin/hdfs namenode -format
17/11/05 21:10:43 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:  user = hadoop
STARTUP_MSG:  host = hadoop02/192.168.169.102
STARTUP_MSG:  args = [-format]
STARTUP_MSG:  version = 2.8.2
STARTUP_MSG:  classpath = /hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/share/hadoop/common/lib/activation-1.1.jar:/hadoop/hadoop-2.8.2/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/hadoop/hadoop-
……
STARTUP_MSG:  build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 66c47f2a01ad9637879e95f80c41f798373828fb; compiled by 'jdu' on 2017-10-19T20:39Z
STARTUP_MSG:  java = 1.8.0_151
************************************************************/
17/11/05 21:10:43 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/11/05 21:10:43 INFO namenode.NameNode: createNameNode [-format]
17/11/05 21:10:43 WARN common.Util: Path /hadoop/hadoop/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
17/11/05 21:10:43 WARN common.Util: Path /hadoop/hadoop/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-206dbc0f-21a2-4c5e-bad1-c296ed9f705a
17/11/05 21:10:44 INFO namenode.FSEditLog: Edit logging is async:false
17/11/05 21:10:44 INFO namenode.FSNamesystem: KeyProvider: null
17/11/05 21:10:44 INFO namenode.FSNamesystem: fsLock is fair: true
17/11/05 21:10:44 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
17/11/05 21:10:44 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/11/05 21:10:44 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/11/05 21:10:44 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
17/11/05 21:10:44 INFO blockmanagement.BlockManager: The block deletion will start around 2017 Nov 05 21:10:44
17/11/05 21:10:44 INFO util.GSet: Computing capacity for map BlocksMap
17/11/05 21:10:44 INFO util.GSet: VM type      = 64-bit
17/11/05 21:10:44 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
17/11/05 21:10:44 INFO util.GSet: capacity      = 2^21 = 2097152 entries
17/11/05 21:10:44 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
17/11/05 21:10:44 INFO blockmanagement.BlockManager: defaultReplication        = 2
17/11/05 21:10:44 INFO blockmanagement.BlockManager: maxReplication            = 512
17/11/05 21:10:44 INFO blockmanagement.BlockManager: minReplication            = 1
17/11/05 21:10:44 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
17/11/05 21:10:44 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/11/05 21:10:44 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
17/11/05 21:10:44 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
17/11/05 21:10:44 INFO namenode.FSNamesystem: fsOwner            = hadoop (auth:SIMPLE)
17/11/05 21:10:44 INFO namenode.FSNamesystem: supergroup          = supergroup
17/11/05 21:10:44 INFO namenode.FSNamesystem: isPermissionEnabled = false
17/11/05 21:10:44 INFO namenode.FSNamesystem: HA Enabled: false
17/11/05 21:10:44 INFO namenode.FSNamesystem: Append Enabled: true
17/11/05 21:10:45 INFO util.GSet: Computing capacity for map INodeMap
17/11/05 21:10:45 INFO util.GSet: VM type      = 64-bit
17/11/05 21:10:45 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
17/11/05 21:10:45 INFO util.GSet: capacity      = 2^20 = 1048576 entries
17/11/05 21:10:45 INFO namenode.FSDirectory: ACLs enabled? false
17/11/05 21:10:45 INFO namenode.FSDirectory: XAttrs enabled? true
17/11/05 21:10:45 INFO namenode.NameNode: Caching file names occurring more than 10 times
17/11/05 21:10:45 INFO util.GSet: Computing capacity for map cachedBlocks
17/11/05 21:10:45 INFO util.GSet: VM type      = 64-bit
17/11/05 21:10:45 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
17/11/05 21:10:45 INFO util.GSet: capacity      = 2^18 = 262144 entries
17/11/05 21:10:45 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/11/05 21:10:45 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17/11/05 21:10:45 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension    = 30000
17/11/05 21:10:45 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
17/11/05 21:10:45 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
17/11/05 21:10:45 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
17/11/05 21:10:45 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/11/05 21:10:45 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/11/05 21:10:45 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/11/05 21:10:45 INFO util.GSet: VM type      = 64-bit
17/11/05 21:10:45 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
17/11/05 21:10:45 INFO util.GSet: capacity      = 2^15 = 32768 entries
17/11/05 21:10:45 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1476203169-192.168.169.102-1509887445494
17/11/05 21:10:45 INFO common.Storage: Storage directory /hadoop/hadoop/dfs/name has been successfully formatted.
17/11/05 21:10:45 INFO namenode.FSImageFormatProtobuf: Saving image file /hadoop/hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
17/11/05 21:10:45 INFO namenode.FSImageFormatProtobuf: Image file /hadoop/hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 323 bytes saved in 0 seconds.
17/11/05 21:10:45 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/11/05 21:10:45 INFO util.ExitUtil: Exiting with status 0
17/11/05 21:10:45 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop02/192.168.169.102
************************************************************/
[hadoop@hadoop02 ~]$ 

 Verify

[hadoop@hadoop02 ~]$ cd /hadoop/hadoop/dfs/name/current
[hadoop@hadoop02 current]$ pwd
/hadoop/hadoop/dfs/name/current
[hadoop@hadoop02 current]$ ls
fsimage_0000000000000000000  fsimage_0000000000000000000.md5  seen_txid  VERSION
[hadoop@hadoop02 current]$

7 Start HDFS

[hadoop@hadoop02 sbin]$ pwd
/hadoop/hadoop-2.8.2/sbin
[hadoop@hadoop02 sbin]$ ./start-dfs.sh
Starting namenodes on [hadoop02]
The authenticity of host 'hadoop02 (192.168.169.102)' can't be established.
ECDSA key fingerprint is f7:ef:fb:e5:7e:0f:59:40:63:23:99:9a:ca:e2:03:e8.
Are you sure you want to continue connecting (yes/no)? yes
hadoop02: Warning: Permanently added 'hadoop02,192.168.169.102' (ECDSA) to the list of known hosts.
hadoop02: starting namenode, logging to /hadoop/hadoop-2.8.2/logs/hadoop-hadoop-namenode-hadoop02.out
hadoop03: starting datanode, logging to /hadoop/hadoop-2.8.2/logs/hadoop-hadoop-datanode-hadoop03.out
hadoop01: starting datanode, logging to /hadoop/hadoop-2.8.2/logs/hadoop-hadoop-datanode-hadoop01.out
Starting secondary namenodes [hadoop02]
hadoop02: starting secondarynamenode, logging to /hadoop/hadoop-2.8.2/logs/hadoop-hadoop-secondarynamenode-hadoop02.out
[hadoop@hadoop02 sbin]$ 

 Verify

On 192.168.169.102:

[hadoop@hadoop02 sbin]$ ps -aux | grep namenode
hadoop    13502  3.0  6.2 2820308 241808 ?      Sl  21:18  0:09 /usr/java/jdk1.8.0_151//bin/java -Dproc_namenode -Xmx1000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,console -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop-hadoop-namenode-hadoop02.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.namenode.NameNode
hadoop    13849  2.1  4.5 2784012 174604 ?      Sl  21:18  0:06 /usr/java/jdk1.8.0_151//bin/java -Dproc_secondarynamenode -Xmx1000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,console -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop-hadoop-secondarynamenode-hadoop02.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode
hadoop    14264  0.0  0.0 112660  968 pts/1    S+  21:23  0:00 grep --color=auto namenode

 On 192.168.169.101:

[hadoop@hadoop01 hadoop]$ ps -aux | grep datanode
hadoop    45401 24.5  4.0 2811244 165268 ?      Sl  21:31  0:10 /usr/java/jdk1.8.0_151//bin/java -Dproc_datanode -Xmx1000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,console -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop-hadoop-datanode-hadoop01.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.datanode.DataNode
hadoop    45479  0.0  0.0 112660  968 pts/0    S+  21:32  0:00 grep --color=auto datanode

 On 192.168.169.103:

[hadoop@hadoop03 hadoop]$ ps -aux | grep datanode
hadoop    10608  7.4  3.9 2806140 158464 ?      Sl  21:31  0:08 /usr/java/jdk1.8.0_151//bin/java -Dproc_datanode -Xmx1000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,console -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop-hadoop-datanode-hadoop03.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.datanode.DataNode
hadoop    10757  0.0  0.0 112660  968 pts/0    S+  21:33  0:00 grep --color=auto datanode

 8 Start YARN

[hadoop@hadoop02 sbin]$ ./start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /hadoop/hadoop-2.8.2/logs/yarn-hadoop-resourcemanager-hadoop02.out
hadoop01: starting nodemanager, logging to /hadoop/hadoop-2.8.2/logs/yarn-hadoop-nodemanager-hadoop01.out
hadoop03: starting nodemanager, logging to /hadoop/hadoop-2.8.2/logs/yarn-hadoop-nodemanager-hadoop03.out

 Verify

On 192.168.169.102:

[hadoop@hadoop02 sbin]$ ps -aux | grep resourcemanage
hadoop    16256 21.6  7.1 2991540 277336 pts/1  Sl  21:36  0:22 /usr/java/jdk1.8.0_151//bin/java -Dproc_resourcemanager -Xmx1000m -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dyarn.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=yarn-hadoop-resourcemanager-hadoop02.log -Dyarn.log.file=yarn-hadoop-resourcemanager-hadoop02.log -Dyarn.home.dir= -Dyarn.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dyarn.policy.file=hadoop-policy.xml -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dyarn.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=yarn-hadoop-resourcemanager-hadoop02.log -Dyarn.log.file=yarn-hadoop-resourcemanager-hadoop02.log -Dyarn.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -classpath /hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/share/hadoop/common/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/common/*:/hadoop/hadoop-2.8.2/share/hadoop/hdfs:/hadoop/hadoop-2.8.2/share/hadoop/hdfs/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/hdfs/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/*:/hadoop/hadoop-2.8.2/share/hadoop/mapreduce/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/mapreduce/*:/hadoop/hadoop-2.8.2/contrib/capacity-scheduler/*.jar:/hadoop/hadoop-2.8.2/contrib/capacity-scheduler/*.jar:/hadoop/hadoop-2.8.2/contrib/capacity-scheduler/*.jar:/hadoop/hadoop-2.8.2/share/hadoop/yarn/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/lib/*:/hadoop/hadoop-2.8.2/etc/hadoop/rm-config/log4j.properties org.apache.hadoop.yarn.server.resourcemanager.ResourceManager
hadoop    16541  0.0  0.0 112660  972 pts/1    S+  21:38  0:00 grep --color=auto resourcemanage

 On 192.168.169.101:

[hadoop@hadoop01 hadoop]$ ps -aux | grep nodemanager
hadoop    45543 10.9  6.6 2847708 267304 ?      Sl  21:36  0:18 /usr/java/jdk1.8.0_151//bin/java -Dproc_nodemanager -Xmx1000m -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dyarn.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=yarn-hadoop-nodemanager-hadoop01.log -Dyarn.log.file=yarn-hadoop-nodemanager-hadoop01.log -Dyarn.home.dir= -Dyarn.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dyarn.policy.file=hadoop-policy.xml -server -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dyarn.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=yarn-hadoop-nodemanager-hadoop01.log -Dyarn.log.file=yarn-hadoopnodemanager-hadoop01.log -Dyarn.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -classpath /hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/share/hadoop/common/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/common/*:/hadoop/hadoop-2.8.2/share/hadoop/hdfs:/hadoop/hadoop-2.8.2/share/hadoop/hdfs/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/hdfs/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/*:/hadoop/hadoop-2.8.2/share/hadoop/mapreduce/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar:/hadoop/hadoop-2.8.2/share/hadoop/yarn/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/lib/*:/hadoop/hadoop-2.8.2/etc/hadoop/nm-config/log4j.properties org.apache.hadoop.yarn.server.nodemanager.NodeManager
hadoop    45669  0.0  0.0 112660  964 pts/0    S+  21:39  0:00 grep --color=auto nodemanager

 On 192.168.169.103:

[hadoop@hadoop03 hadoop]$ ps -aux | grep nodemanager
hadoop    10808  8.4  6.4 2841680 258220 ?      Sl  21:36  0:21 /usr/java/jdk1.8.0_151//bin/java -Dproc_nodemanager -Xmx1000m -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dyarn.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=yarn-hadoop-nodemanager-hadoop03.log -Dyarn.log.file=yarn-hadoop-nodemanager-hadoop03.log -Dyarn.home.dir= -Dyarn.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dyarn.policy.file=hadoop-policy.xml -server -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dyarn.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=yarn-hadoop-nodemanager-hadoop03.log -Dyarn.log.file=yarn-hadoopnodemanager-hadoop03.log -Dyarn.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -classpath /hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/share/hadoop/common/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/common/*:/hadoop/hadoop-2.8.2/share/hadoop/hdfs:/hadoop/hadoop-2.8.2/share/hadoop/hdfs/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/hdfs/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/*:/hadoop/hadoop-2.8.2/share/hadoop/mapreduce/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar:/hadoop/hadoop-2.8.2/share/hadoop/yarn/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/lib/*:/hadoop/hadoop-2.8.2/etc/hadoop/nm-config/log4j.properties org.apache.hadoop.yarn.server.nodemanager.NodeManager
hadoop    11077  0.0  0.0 112660  968 pts/0    S+  21:40  0:00 grep --color=auto nodemanager
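Instead of ps -aux | grep on each node, the JDK's jps tool lists running JVM daemons by class name. The helper below compares jps-style output against an expected daemon list; it is fed canned sample text here so the sketch runs anywhere, and on a real node you would pipe in the actual jps output:

```shell
# check_daemons EXPECTED... : reads jps-style output on stdin, prints any
# expected daemon that is missing, and returns 0 only if all are running.
check_daemons() {
    local out missing=0
    out=$(cat)
    for d in "$@"; do
        if ! printf '%s\n' "$out" | grep -qw "$d"; then
            echo "missing: $d"
            missing=1
        fi
    done
    [ "$missing" -eq 0 ] && echo "all expected daemons running"
    return "$missing"
}

# Canned sample standing in for real `jps` output on hadoop02:
sample='13502 NameNode
13849 SecondaryNameNode
16256 ResourceManager'
printf '%s\n' "$sample" | check_daemons NameNode SecondaryNameNode ResourceManager
# prints "all expected daemons running"
```

On a node you would run, for example, `jps | check_daemons DataNode NodeManager`.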

 9 Start the JobHistory server (to view job status)

[hadoop@hadoop02 sbin]$ ./mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /hadoop/hadoop-2.8.2/logs/mapred-hadoop-historyserver-hadoop02.out
[hadoop@hadoop02 sbin]$

 10 View HDFS information

[hadoop@hadoop02 bin]$ hdfs dfsadmin -report
Configured Capacity: 97679564800 (90.97 GB)
Present Capacity: 87752962048 (81.73 GB)
DFS Remaining: 87752953856 (81.73 GB)
DFS Used: 8192 (8 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Pending deletion blocks: 0
 
————————————————-
Live datanodes (2):
 
Name: 192.168.169.101:50010 (hadoop01)
Hostname: hadoop01
Decommission Status : Normal
Configured Capacity: 48839782400 (45.49 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 4984066048 (4.64 GB)
DFS Remaining: 43855712256 (40.84 GB)
DFS Used%: 0.00%
DFS Remaining%: 89.80%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sun Nov 05 22:22:53 CST 2017
 
 
Name: 192.168.169.103:50010 (hadoop03)
Hostname: hadoop03
Decommission Status : Normal
Configured Capacity: 48839782400 (45.49 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 4942536704 (4.60 GB)
DFS Remaining: 43897241600 (40.88 GB)
DFS Used%: 0.00%
DFS Remaining%: 89.88%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sun Nov 05 22:22:53 CST 2017

 If instead the report shows all zeros, like this:

[hadoop@hadoop02 hadoop]$ hdfs dfsadmin -report
Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used: 0 (0 B)
DFS Used%: NaN%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Pending deletion blocks: 0

 The problem is most likely in one of two places:
1. the fs.default.name setting in core-site.xml is wrong;
2. the firewall has not been disabled.
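A quick scripted check for the healthy case is to pull the live-datanode count out of hdfs dfsadmin -report. The awk below does that; it is run against a canned report excerpt here so the sketch is self-contained, and on the cluster you would pipe the real command into it:

```shell
# live_count: read `hdfs dfsadmin -report` output on stdin and print
# the number shown on the "Live datanodes (N):" line.
live_count() {
    awk -F'[():]' '/^Live datanodes/ {print $2; exit}'
}

# Canned excerpt of the healthy report above, so the sketch runs anywhere:
report='Configured Capacity: 97679564800 (90.97 GB)
Live datanodes (2):
Name: 192.168.169.101:50010 (hadoop01)'

printf '%s\n' "$report" | live_count    # prints 2
# On the cluster: hdfs dfsadmin -report | live_count
```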

View the file blocks:

[hadoop@hadoop02 bin]$ hdfs fsck / -files -blocks
Connecting to namenode via http://hadoop02:50070/fsck?ugi=hadoop&files=1&blocks=1&path=%2F
FSCK started by hadoop (auth:SIMPLE) from /192.168.169.102 for path / at Sun Nov 05 22:25:18 CST 2017
/ <dir>
/tmp <dir>
/tmp/hadoop-yarn <dir>
/tmp/hadoop-yarn/staging <dir>
/tmp/hadoop-yarn/staging/history <dir>
/tmp/hadoop-yarn/staging/history/done <dir>
/tmp/hadoop-yarn/staging/history/done_intermediate <dir>
Status: HEALTHY
 Total size:    0 B
 Total dirs:    7
 Total files:  0
 Total symlinks:        0
 Total blocks (validated):  0
 Minimally replicated blocks:  0
 Over-replicated blocks:    0
 Under-replicated blocks:  0
 Mis-replicated blocks:    0
 Default replication factor:    2
 Average block replication: 0.0
 Corrupt blocks:        0
 Missing replicas:      0
 Number of data-nodes:      2
 Number of racks:      1
FSCK ended at Sun Nov 05 22:25:18 CST 2017 in 6 milliseconds
 
 
The filesystem under path '/' is HEALTHY

View HDFS in a browser:
http://192.168.169.102:50070
View the cluster (YARN) in a browser:
http://192.168.169.102:8088


Permanent link to this article: http://www.linuxidc.com/Linux/2017-11/148417.htm

Copyright: this is an original article from this site, published by 星锅 on 2022-01-21.
Reposting: unless otherwise noted, articles on this site are published under the CC 4.0 license; please credit the source when reposting.