
Installing Hadoop 1.2.1 on Ubuntu 14.04 LTS (Pseudo-Distributed Mode)


Hadoop can run in one of three modes: standalone, pseudo-distributed, and fully distributed. Whichever mode you choose, a JDK must be installed first; that step was covered in the earlier post Installing JDK 1.8 on Ubuntu 14.04 LTS (see http://www.linuxidc.com/Linux/2016-09/135403.htm), so it is not repeated here.

Next comes SSH. SSH is configured for passwordless login to the data-node machines: in a cluster environment you cannot type a password every time you connect to a data node. This was covered in the earlier post Configuring Passwordless SSH Login on Ubuntu 14.04 LTS (see http://www.linuxidc.com/Linux/2016-09/135404.htm), so it is not repeated either.
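The SSH setup from that post boils down to three commands: generate a passphrase-less key pair, authorize your own public key, and tighten the file permissions. The sketch below runs against a scratch directory rather than ~/.ssh so it can be replayed safely; for the real setup, substitute ~/.ssh:

```shell
# Passwordless-login sketch (scratch directory; use ~/.ssh for real).
SSH_DIR=/tmp/ssh-demo
mkdir -p "$SSH_DIR"
ssh-keygen -t rsa -N "" -q -f "$SSH_DIR/id_rsa"          # key with no passphrase
cat "$SSH_DIR/id_rsa.pub" >> "$SSH_DIR/authorized_keys"  # authorize our own key
chmod 600 "$SSH_DIR/authorized_keys"                     # sshd requires tight perms
```

After the real ~/.ssh version, `ssh localhost` should log in without prompting.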

Installing in pseudo-distributed mode:

First, download Hadoop 1.2.1 and extract it into a directory under your home directory (note that the target directory passed to tar -C must already exist):

linuxidc@ubuntu:~/Downloads$ tar zxf hadoop-1.2.1.tar.gz -C ~/hadoop_1.2.1
linuxidc@ubuntu:~/Downloads$ cd ~/hadoop_1.2.1/
linuxidc@ubuntu:~/hadoop_1.2.1$ ls
hadoop-1.2.1
linuxidc@ubuntu:~/hadoop_1.2.1$ cd hadoop-1.2.1/
linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1$ ls
bin          hadoop-ant-1.2.1.jar          ivy          sbin
build.xml    hadoop-client-1.2.1.jar       ivy.xml      share
c++          hadoop-core-1.2.1.jar         lib          src
CHANGES.txt  hadoop-examples-1.2.1.jar     libexec      webapps
conf         hadoop-minicluster-1.2.1.jar  LICENSE.txt
contrib      hadoop-test-1.2.1.jar         NOTICE.txt
docs         hadoop-tools-1.2.1.jar        README.txt

Next, edit a handful of Hadoop configuration files, all in XML format.

The first is core-site.xml. It sets the address and port of the Hadoop distributed file system, and overrides the default Hadoop temporary-file directory (/tmp/hadoop-${user.name}):

linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ cat core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/hadoop/hadooptmp</value>
    </property>
</configuration>
linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$
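If you prefer to script the setup, the same file can be written in one shot with a heredoc. This is a sketch: the output path is a demo placeholder, so point CORE_SITE at conf/core-site.xml for real use:

```shell
# Write core-site.xml non-interactively; CORE_SITE is a demo path.
CORE_SITE="${CORE_SITE:-/tmp/core-site.xml}"
cat > "$CORE_SITE" <<'EOF'
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/hadoop/hadooptmp</value>
    </property>
</configuration>
EOF
```

The quoted 'EOF' delimiter keeps the shell from expanding anything inside the document, so the XML lands verbatim.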

Then edit Hadoop's environment file to tell it where the installed JDK lives:

linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1$ cd conf/
linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ ls
capacity-scheduler.xml      hadoop-policy.xml      slaves
configuration.xsl           hdfs-site.xml          ssl-client.xml.example
core-site.xml               log4j.properties       ssl-server.xml.example
fair-scheduler.xml          mapred-queue-acls.xml  taskcontroller.cfg
hadoop-env.sh               mapred-site.xml        task-log4j.properties
hadoop-metrics2.properties  masters
linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ sudo vim hadoop-env.sh
linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ tail -n 1 hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/jdk
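The same edit can be made without opening vim. In stock Hadoop 1.x, hadoop-env.sh ships with the JAVA_HOME line commented out, so a sed one-liner can uncomment it and substitute the JDK path. Demonstrated here on a scratch copy rather than the real conf/hadoop-env.sh:

```shell
# Scratch file standing in for conf/hadoop-env.sh; the commented-out
# template line mimics the one in the stock file.
printf '# export JAVA_HOME=/usr/lib/j2sdk1.5-sun\n' > /tmp/hadoop-env.sh
# Uncomment the line and point it at the real JDK directory.
sed -i 's|^# *export JAVA_HOME=.*|export JAVA_HOME=/usr/lib/jvm/jdk|' /tmp/hadoop-env.sh
tail -n 1 /tmp/hadoop-env.sh   # -> export JAVA_HOME=/usr/lib/jvm/jdk
```

Using `|` as the sed delimiter avoids escaping the slashes in the path.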

Then hdfs-site.xml: set the HDFS replication factor to 1 (a single node can only hold one copy of each block), the NameNode metadata directory, and the DataNode data directory:

linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ cat hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>/hadoop/hdfs/name</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/hadoop/hdfs/data</value>
    </property>
</configuration>
linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$

Finally, configure the address and port of the MapReduce JobTracker:

linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>localhost:9001</value>
    </property>
</configuration>
linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$

Next, the masters and slaves files. Since this is a pseudo-distributed setup, the name node and the data node are in fact the same machine. (Each entry in slaves triggers its own daemon start attempt, which is why the startup log further down shows both an IP line and a localhost line for the DataNode and TaskTracker; a single localhost entry would suffice.)

linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ cat masters
localhost
192.168.2.100

linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ cat slaves
localhost
192.168.2.100
linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$

Edit /etc/hosts to map host names to IP addresses:

linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ cat /etc/hosts
127.0.0.1    localhost
127.0.1.1    ubuntu

# The following lines are desirable for IPv6 capable hosts
::1    ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.2.100 master
192.168.2.100 slave
linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$

Create the directories referenced in the three configuration files core-site.xml, hdfs-site.xml, and mapred-site.xml:

linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ mkdir -p /hadoop/hadooptmp
linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ mkdir -p /hadoop/hdfs/name
linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ mkdir -p /hadoop/hdfs/data
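On a stock system, a plain mkdir -p /hadoop/... fails with permission denied for a non-root user, so sudo plus a chown back to the Hadoop user is usually needed. A sketch, using a placeholder root so it can be run as-is:

```shell
# Placeholder root for demonstration; the article itself uses /hadoop.
ROOT=/tmp/hadoop-demo
mkdir -p "$ROOT/hadooptmp" "$ROOT/hdfs/name" "$ROOT/hdfs/data"
# For the real /hadoop root, something like:
#   sudo mkdir -p /hadoop/hadooptmp /hadoop/hdfs/name /hadoop/hdfs/data
#   sudo chown -R "$USER" /hadoop   # the daemons run as this user, not root
```

If the directories stay root-owned, the NameNode format and the daemon startups below will fail with permission errors.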

Format HDFS:

linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/bin$ ./hadoop namenode -format

Start all the Hadoop daemons, including the NameNode, DataNode, SecondaryNameNode, JobTracker, and TaskTracker:

linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/bin$ ./start-all.sh
starting namenode, logging to /home/linuxidc/hadoop_1.2.1/hadoop-1.2.1/libexec/../logs/hadoop-linuxidc-namenode-ubuntu.out
192.168.68.130: starting datanode, logging to /home/linuxidc/hadoop_1.2.1/hadoop-1.2.1/libexec/../logs/hadoop-linuxidc-datanode-ubuntu.out
localhost: starting datanode, logging to /home/linuxidc/hadoop_1.2.1/hadoop-1.2.1/libexec/../logs/hadoop-linuxidc-datanode-ubuntu.out
localhost: ulimit -a for user linuxidc
localhost: core file size          (blocks, -c) 0
localhost: data seg size          (kbytes, -d) unlimited
localhost: scheduling priority            (-e) 0
localhost: file size              (blocks, -f) unlimited
localhost: pending signals                (-i) 7855
localhost: max locked memory      (kbytes, -l) 64
localhost: max memory size        (kbytes, -m) unlimited
localhost: open files                      (-n) 1024
localhost: pipe size            (512 bytes, -p) 8
localhost: starting secondarynamenode, logging to /home/linuxidc/hadoop_1.2.1/hadoop-1.2.1/libexec/../logs/hadoop-linuxidc-secondarynamenode-ubuntu.out
192.168.68.130: secondarynamenode running as process 10689. Stop it first.
starting jobtracker, logging to /home/linuxidc/hadoop_1.2.1/hadoop-1.2.1/libexec/../logs/hadoop-linuxidc-jobtracker-ubuntu.out
192.168.68.130: starting tasktracker, logging to /home/linuxidc/hadoop_1.2.1/hadoop-1.2.1/libexec/../logs/hadoop-linuxidc-tasktracker-ubuntu.out
localhost: starting tasktracker, logging to /home/linuxidc/hadoop_1.2.1/hadoop-1.2.1/libexec/../logs/hadoop-linuxidc-tasktracker-ubuntu.out
localhost: ulimit -a for user linuxidc
localhost: core file size          (blocks, -c) 0
localhost: data seg size          (kbytes, -d) unlimited
localhost: scheduling priority            (-e) 0
localhost: file size              (blocks, -f) unlimited
localhost: pending signals                (-i) 7855
localhost: max locked memory      (kbytes, -l) 64
localhost: max memory size        (kbytes, -m) unlimited
localhost: open files                      (-n) 1024
localhost: pipe size            (512 bytes, -p) 8
linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/bin$

Use jps to check whether the Hadoop daemons started successfully:

linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ jps
3472 JobTracker
3604 TaskTracker
3084 NameNode
5550 Jps
3247 DataNode
3391 SecondaryNameNode
linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$
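The eyeball check above can be scripted. The helper below is hypothetical (not part of Hadoop); it takes a captured jps listing and verifies that all five daemons are present:

```shell
# Hypothetical helper: verify the five pseudo-distributed daemons
# all appear in the output of `jps`.
check_daemons() {
    listing="$1"
    for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
        echo "$listing" | grep -q "$d" || { echo "missing: $d"; return 1; }
    done
    echo "all daemons running"
}
# Usage: check_daemons "$(jps)"
```

A nonzero exit status plus the name of the first missing daemon makes it easy to wire into a startup script.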

Check the status of the Hadoop cluster:

linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/bin$ ./hadoop dfsadmin -report
Configured Capacity: 41083600896 (38.26 GB)
Present Capacity: 32723169280 (30.48 GB)
DFS Remaining: 32723128320 (30.48 GB)
DFS Used: 40960 (40 KB)
DFS Used%: 0%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)

Name: 127.0.0.1:50010
Decommission Status : Normal
Configured Capacity: 41083600896 (38.26 GB)
DFS Used: 40960 (40 KB)
Non DFS Used: 8360431616 (7.79 GB)
DFS Remaining: 32723128320(30.48 GB)
DFS Used%: 0%
DFS Remaining%: 79.65%
Last contact: Sat Dec 26 12:22:07 PST 2015

linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/bin$
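For scripted monitoring, individual fields can be pulled out of that report with awk. The function name below is hypothetical; it reads dfsadmin -report output on stdin and prints the per-node remaining-capacity percentage:

```shell
# Hypothetical helper: extract the "DFS Remaining%" field from
# `hadoop dfsadmin -report` output fed in on stdin.
dfs_remaining_pct() {
    awk -F': ' '/^DFS Remaining%/ { print $2; exit }'
}
# Usage against a live cluster:
#   ./hadoop dfsadmin -report | dfs_remaining_pct
```

The `^DFS Remaining%` anchor (with the percent sign) avoids matching the byte-count "DFS Remaining:" line earlier in the report.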


Permanent link to this article: http://www.linuxidc.com/Linux/2016-09/135406.htm

Copyright notice: original article on this site, posted by 星锅 on 2022-01-21.
Reproduction: unless otherwise stated, articles on this site are released under the CC 4.0 license; please credit the source when republishing.