Hadoop-2.7.2 + Zookeeper-3.4.6 Fully Distributed Environment Setup

I. Versions

Component  | Version                  | Notes
JRE        | java version "1.7.0_67"  | Java(TM) SE Runtime Environment (build 1.7.0_67-b01); Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
Hadoop     | hadoop-2.7.2.tar.gz      | Main distribution package
Zookeeper  | zookeeper-3.4.6.tar.gz   | Coordination service used for hot failover and for YARN state storage

II. Host Plan

IP            | Host and installed software            | Deployed modules                  | Processes
172.16.101.55 | sht-sgmhadoopnn-01 (hadoop)            | NameNode, ResourceManager         | NameNode, DFSZKFailoverController, ResourceManager
172.16.101.56 | sht-sgmhadoopnn-02 (hadoop)            | NameNode, ResourceManager         | NameNode, DFSZKFailoverController, ResourceManager
172.16.101.58 | sht-sgmhadoopdn-01 (hadoop, zookeeper) | DataNode, NodeManager, Zookeeper  | DataNode, NodeManager, JournalNode, QuorumPeerMain
172.16.101.59 | sht-sgmhadoopdn-02 (hadoop, zookeeper) | DataNode, NodeManager, Zookeeper  | DataNode, NodeManager, JournalNode, QuorumPeerMain
172.16.101.60 | sht-sgmhadoopdn-03 (hadoop, zookeeper) | DataNode, NodeManager, Zookeeper  | DataNode, NodeManager, JournalNode, QuorumPeerMain

III. Directory Plan

Name         | Path
$HADOOP_HOME | /hadoop/hadoop-2.7.2
Data         | $HADOOP_HOME/data
Log          | $HADOOP_HOME/logs

IV. Common Scripts and Commands

1. Start the cluster

start-dfs.sh

start-yarn.sh

2. Stop the cluster

stop-yarn.sh

stop-dfs.sh

3. Monitor the cluster

hdfs dfsadmin -report

4. Start/stop a single daemon

hadoop-daemon.sh start|stop namenode|datanode|journalnode

yarn-daemon.sh start|stop resourcemanager|nodemanager

http://blog.chinaunix.net/uid-25723371-id-4943894.html

V. Environment Preparation

1. Configure the IP address (all 5 hosts)


[root@sht-sgmhadoopnn-01 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE="eth0"

BOOTPROTO="static"

DNS1="172.16.101.63"

DNS2="172.16.101.64"

GATEWAY="172.16.101.1"

HWADDR="00:50:56:82:50:1E"

IPADDR="172.16.101.55"

NETMASK="255.255.255.0"

NM_CONTROLLED="yes"

ONBOOT="yes"

TYPE="Ethernet"

UUID="257c075f-6c6a-47ef-a025-e625367cbd9c"


Run: service network restart

Verify: ifconfig

2. Stop the firewall (all 5 hosts)

Run: service iptables stop

Verify: service iptables status

3. Disable the firewall at boot (all 5 hosts)

Run: chkconfig iptables off

Verify: chkconfig --list | grep iptables

4. Set the hostname (all 5 hosts)

Run:

(1) hostname sht-sgmhadoopnn-01

(2) vi /etc/sysconfig/network


[root@sht-sgmhadoopnn-01 ~]# vi /etc/sysconfig/network

NETWORKING=yes

HOSTNAME=sht-sgmhadoopnn-01.telenav.cn

GATEWAY=172.16.101.1

5. Bind IPs to hostnames (all 5 hosts)


[root@sht-sgmhadoopnn-01 ~]# vi /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

172.16.101.55 sht-sgmhadoopnn-01.telenav.cn sht-sgmhadoopnn-01

172.16.101.56 sht-sgmhadoopnn-02.telenav.cn sht-sgmhadoopnn-02

172.16.101.58 sht-sgmhadoopdn-01.telenav.cn sht-sgmhadoopdn-01

172.16.101.59 sht-sgmhadoopdn-02.telenav.cn sht-sgmhadoopdn-02

172.16.101.60 sht-sgmhadoopdn-03.telenav.cn sht-sgmhadoopdn-03

Verify: ping sht-sgmhadoopnn-01

6. Set up mutual SSH trust among the 5 machines

http://blog.itpub.net/30089851/viewspace-1992210/
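The linked post covers this step in detail. As a minimal sketch (assuming root logins and ssh-copy-id being available on these hosts; repeat on each of the 5 machines):

[root@sht-sgmhadoopnn-01 ~]# ssh-keygen -t rsa
[root@sht-sgmhadoopnn-01 ~]# for h in sht-sgmhadoopnn-01 sht-sgmhadoopnn-02 sht-sgmhadoopdn-01 sht-sgmhadoopdn-02 sht-sgmhadoopdn-03; do ssh-copy-id root@$h; done

The resulting /root/.ssh/id_rsa key is also what the sshfence fencing method in hdfs-site.xml relies on later.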

7. Install the JDK (all 5 hosts)


(1) Run:

[root@sht-sgmhadoopnn-01 ~]# cd /usr/java

[root@sht-sgmhadoopnn-01 java]# cp /tmp/jdk-7u67-linux-x64.gz ./

[root@sht-sgmhadoopnn-01 java]# tar -xzvf jdk-7u67-linux-x64.gz

(2) vi /etc/profile and append the following:

export JAVA_HOME=/usr/java/jdk1.7.0_67

export HADOOP_HOME=/hadoop/hadoop-2.7.2

export ZOOKEEPER_HOME=/hadoop/zookeeper

export PATH=.:$HADOOP_HOME/bin:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$PATH

# Configure HADOOP_HOME and ZOOKEEPER_HOME up front as well

# The machines used here already had jdk1.7.0_67-cloudera installed

(3) Run: source /etc/profile

(4) Verify: java -version

8. Create the base directory (all 5 hosts)

mkdir /hadoop

VI. Install Zookeeper

sht-sgmhadoopdn-01/02/03

1. Download and extract zookeeper-3.4.6.tar.gz


[root@sht-sgmhadoopdn-01 tmp]# wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz

[root@sht-sgmhadoopdn-02 tmp]# wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz

[root@sht-sgmhadoopdn-03 tmp]# wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz

[root@sht-sgmhadoopdn-01 tmp]# tar -xvf zookeeper-3.4.6.tar.gz

[root@sht-sgmhadoopdn-02 tmp]# tar -xvf zookeeper-3.4.6.tar.gz

[root@sht-sgmhadoopdn-03 tmp]# tar -xvf zookeeper-3.4.6.tar.gz

[root@sht-sgmhadoopdn-01 tmp]# mv zookeeper-3.4.6 /hadoop/zookeeper

[root@sht-sgmhadoopdn-02 tmp]# mv zookeeper-3.4.6 /hadoop/zookeeper

[root@sht-sgmhadoopdn-03 tmp]# mv zookeeper-3.4.6 /hadoop/zookeeper

2. Edit the configuration


[root@sht-sgmhadoopdn-01 tmp]# cd /hadoop/zookeeper/conf

[root@sht-sgmhadoopdn-01 conf]# cp zoo_sample.cfg zoo.cfg

[root@sht-sgmhadoopdn-01 conf]# vi zoo.cfg

Change dataDir:

dataDir=/hadoop/zookeeper/data

Add the following three lines:

server.1=sht-sgmhadoopdn-01:2888:3888

server.2=sht-sgmhadoopdn-02:2888:3888

server.3=sht-sgmhadoopdn-03:2888:3888

[root@sht-sgmhadoopdn-01 conf]# cd ../

[root@sht-sgmhadoopdn-01 zookeeper]# mkdir data

[root@sht-sgmhadoopdn-01 zookeeper]# touch data/myid

[root@sht-sgmhadoopdn-01 zookeeper]# echo 1 > data/myid

[root@sht-sgmhadoopdn-01 zookeeper]# more data/myid

1

## Configure sht-sgmhadoopdn-02/03 the same way; only the myid value differs:

[root@sht-sgmhadoopdn-02 zookeeper]# echo 2 > data/myid

[root@sht-sgmhadoopdn-03 zookeeper]# echo 3 > data/myid

VII. Install Hadoop (HDFS HA + YARN HA)

# For steps 3-7 the shell was reached over SecureCRT SSH; if Chinese text copied from Windows into Linux shows up garbled, see http://www.cnblogs.com/qi09/archive/2013/02/05/2892922.html

1. Download and extract hadoop-2.7.2.tar.gz

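Only sht-sgmhadoopnn-01 needs the tarball, since step 9 below distributes the finished tree to the other hosts. A likely sequence, mirroring the ZooKeeper install above (the mirror URL is the one listed in the appendix):

[root@sht-sgmhadoopnn-01 tmp]# wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.7.2/hadoop-2.7.2.tar.gz

[root@sht-sgmhadoopnn-01 tmp]# tar -xzvf hadoop-2.7.2.tar.gz

[root@sht-sgmhadoopnn-01 tmp]# mv hadoop-2.7.2 /hadoop/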

2. Edit $HADOOP_HOME/etc/hadoop/hadoop-env.sh

export JAVA_HOME="/usr/java/jdk1.7.0_67-cloudera"

3. Edit $HADOOP_HOME/etc/hadoop/core-site.xml


<?xml version="1.0" encoding="UTF-8"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

    <!-- YARN needs fs.defaultFS to locate the NameNode URI -->

    <property>

        <name>fs.defaultFS</name>

        <value>hdfs://mycluster</value>

    </property>

    <!-- HDFS superuser group -->

    <property>

        <name>dfs.permissions.superusergroup</name>

        <value>root</value>

    </property>

    <!--============================== Trash mechanism ======================================= -->

    <property>

        <!-- How often the checkpointer running on the NameNode creates a trash checkpoint from the Current folder; default: 0, meaning the value of fs.trash.interval is used -->

        <name>fs.trash.checkpoint.interval</name>

        <value>0</value>

    </property>

    <property>

        <!-- After how many minutes a checkpoint directory under .Trash is deleted; the server-side setting takes precedence over the client's; default: 0, never delete -->

        <name>fs.trash.interval</name>

        <value>1440</value>

    </property>

</configuration>
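With this in place, files removed through the shell land in a per-user trash directory and survive for fs.trash.interval minutes (1440 = 24 hours) before being purged. For example (the file name here is only an illustration):

[root@sht-sgmhadoopnn-01 ~]# hadoop fs -rm /tmp/somefile

[root@sht-sgmhadoopnn-01 ~]# hadoop fs -ls /user/root/.Trash/Current/tmp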


4. Edit $HADOOP_HOME/etc/hadoop/hdfs-site.xml



<?xml version="1.0" encoding="UTF-8"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

    <!-- Enable WebHDFS -->

    <property>

        <name>dfs.webhdfs.enabled</name>

        <value>true</value>

    </property>

    <property>

        <name>dfs.namenode.name.dir</name>

        <value>/hadoop/hadoop-2.7.2/data/dfs/name</value>

        <description>Local directory where the NameNode stores the name table (fsimage); change for your environment</description>

    </property>

    <property>

        <name>dfs.namenode.edits.dir</name>

        <value>${dfs.namenode.name.dir}</value>

        <description>Local directory where the NameNode stores the transaction file (edits); change for your environment</description>

    </property>

    <property>

        <name>dfs.datanode.data.dir</name>

        <value>/hadoop/hadoop-2.7.2/data/dfs/data</value>

        <description>Local directory where the DataNode stores blocks; change for your environment</description>

    </property>

    <property>

        <name>dfs.replication</name>

        <value>3</value>

    </property>

    <!-- Block size: 256 MB -->

    <property>

        <name>dfs.blocksize</name>

        <value>268435456</value>

    </property>

    <!--======================================================================= -->

    <!-- HDFS high availability configuration -->

    <!-- Logical name of the nameservice -->

    <property>

        <name>dfs.nameservices</name>

        <value>mycluster</value>

    </property>

    <property>

        <!-- NameNode IDs; this version supports at most two NameNodes -->

        <name>dfs.ha.namenodes.mycluster</name>

        <value>nn1,nn2</value>

    </property>


    <!-- HDFS HA: dfs.namenode.rpc-address.[nameservice ID], RPC address -->

    <property>

        <name>dfs.namenode.rpc-address.mycluster.nn1</name>

        <value>sht-sgmhadoopnn-01:8020</value>

    </property>

    <property>

        <name>dfs.namenode.rpc-address.mycluster.nn2</name>

        <value>sht-sgmhadoopnn-02:8020</value>

    </property>


    <!-- HDFS HA: dfs.namenode.http-address.[nameservice ID], HTTP address -->

    <property>

        <name>dfs.namenode.http-address.mycluster.nn1</name>

        <value>sht-sgmhadoopnn-01:50070</value>

    </property>

    <property>

        <name>dfs.namenode.http-address.mycluster.nn2</name>

        <value>sht-sgmhadoopnn-02:50070</value>

    </property>


    <!--================== NameNode edit log synchronization ============================================ -->

    <!-- Ensures the edit log survives for recovery -->

    <property>

        <name>dfs.journalnode.http-address</name>

        <value>0.0.0.0:8480</value>

    </property>

    <property>

        <name>dfs.journalnode.rpc-address</name>

        <value>0.0.0.0:8485</value>

    </property>

    <property>

        <!-- JournalNode addresses; the QuorumJournalManager stores the edit log on them -->

        <!-- Format: qjournal://<host1:port1>;<host2:port2>;<host3:port3>/<journalId>; ports match dfs.journalnode.rpc-address -->

        <name>dfs.namenode.shared.edits.dir</name>

        <value>qjournal://sht-sgmhadoopdn-01:8485;sht-sgmhadoopdn-02:8485;sht-sgmhadoopdn-03:8485/mycluster</value>

    </property>


    <property>

        <!-- Directory where JournalNodes store their data -->

        <name>dfs.journalnode.edits.dir</name>

        <value>/hadoop/hadoop-2.7.2/data/dfs/jn</value>

    </property>

    <!--================== Client failover proxy ============================================ -->

    <property>

        <!-- Strategy DataNodes and clients use to identify the active NameNode -->

        <name>dfs.client.failover.proxy.provider.mycluster</name>

        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>

    </property>

    <!--================== NameNode fencing =============================================== -->

    <!-- After a failover, keeps the stopped NameNode from coming back up and serving alongside the new active (split-brain) -->

    <property>

        <name>dfs.ha.fencing.methods</name>

        <value>sshfence</value>

    </property>

    <property>

        <name>dfs.ha.fencing.ssh.private-key-files</name>

        <value>/root/.ssh/id_rsa</value>

    </property>

    <property>

        <!-- Milliseconds after which fencing is considered to have failed -->

        <name>dfs.ha.fencing.ssh.connect-timeout</name>

        <value>30000</value>

    </property>


    <!--================== NameNode automatic failover via ZKFC and ZooKeeper ====================== -->

    <!-- Enable automatic failover based on ZooKeeper and the ZKFC process, which watches whether the NameNode has died -->

    <property>

        <name>dfs.ha.automatic-failover.enabled</name>

        <value>true</value>

    </property>

    <property>

        <name>ha.zookeeper.quorum</name>

        <value>sht-sgmhadoopdn-01:2181,sht-sgmhadoopdn-02:2181,sht-sgmhadoopdn-03:2181</value>

    </property>

    <property>

        <!-- ZooKeeper session timeout, in milliseconds -->

        <name>ha.zookeeper.session-timeout.ms</name>

        <value>2000</value>

    </property>

</configuration>
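Note that sshfence only succeeds if each NameNode can log into the other without a password using /root/.ssh/id_rsa (set up in step V.6). A quick sanity check, using the hostnames from the host plan:

[root@sht-sgmhadoopnn-01 ~]# ssh -i /root/.ssh/id_rsa root@sht-sgmhadoopnn-02 hostname

# should print sht-sgmhadoopnn-02.telenav.cn without prompting for a password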

5. Edit $HADOOP_HOME/etc/hadoop/yarn-env.sh

#Yarn Daemon Options

#export YARN_RESOURCEMANAGER_OPTS

#export YARN_NODEMANAGER_OPTS

#export YARN_PROXYSERVER_OPTS

#export HADOOP_JOB_HISTORYSERVER_OPTS

#Yarn Logs

export YARN_LOG_DIR="/hadoop/hadoop-2.7.2/logs"

6. Edit $HADOOP_HOME/etc/hadoop/mapred-site.xml


[root@sht-sgmhadoopnn-01 hadoop]# cp mapred-site.xml.template mapred-site.xml

[root@sht-sgmhadoopnn-01 hadoop]# vi mapred-site.xml

<configuration>

    <!-- Run MapReduce applications on YARN -->

    <property>

        <name>mapreduce.framework.name</name>

        <value>yarn</value>

    </property>

    <!-- JobHistory Server ============================================================== -->

    <!-- MapReduce JobHistory Server address; default: 0.0.0.0:10020 -->

    <property>

        <name>mapreduce.jobhistory.address</name>

        <value>sht-sgmhadoopnn-01:10020</value>

    </property>

    <!-- MapReduce JobHistory Server web UI address; default: 0.0.0.0:19888 -->

    <property>

        <name>mapreduce.jobhistory.webapp.address</name>

        <value>sht-sgmhadoopnn-01:19888</value>

    </property>

</configuration>
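These properties only tell clients where the JobHistory Server lives; start-yarn.sh does not launch it. Presumably it is started separately on sht-sgmhadoopnn-01 with the script shipped in $HADOOP_HOME/sbin:

[root@sht-sgmhadoopnn-01 sbin]# mr-jobhistory-daemon.sh start historyserver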

7. Edit $HADOOP_HOME/etc/hadoop/yarn-site.xml


<configuration>

    <!-- NodeManager configuration ================================================= -->

    <property>

        <name>yarn.nodemanager.aux-services</name>

        <value>mapreduce_shuffle</value>

    </property>

    <property>

        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>

        <value>org.apache.hadoop.mapred.ShuffleHandler</value>

    </property>

    <property>

        <description>Address where the localizer IPC is.</description>

        <name>yarn.nodemanager.localizer.address</name>

        <value>0.0.0.0:23344</value>

    </property>

    <property>

        <description>NM Webapp address.</description>

        <name>yarn.nodemanager.webapp.address</name>

        <value>0.0.0.0:23999</value>

    </property>


    <!-- HA configuration =============================================================== -->

    <!-- Resource Manager Configs -->

    <property>

        <name>yarn.resourcemanager.connect.retry-interval.ms</name>

        <value>2000</value>

    </property>

    <property>

        <name>yarn.resourcemanager.ha.enabled</name>

        <value>true</value>

    </property>

    <property>

        <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>

        <value>true</value>

    </property>

    <!-- Enable embedded automatic failover; in an HA setup it works with the ZKRMStateStore to handle fencing -->

    <property>

        <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>

        <value>true</value>

    </property>

    <!-- Cluster ID, so HA elections stay scoped to this cluster -->

    <property>

        <name>yarn.resourcemanager.cluster-id</name>

        <value>yarn-cluster</value>

    </property>

    <property>

        <name>yarn.resourcemanager.ha.rm-ids</name>

        <value>rm1,rm2</value>

    </property>

    <!-- The RM ID can also be pinned explicitly on each node (optional):

    <property>

        <name>yarn.resourcemanager.ha.id</name>

        <value>rm2</value>

</property>

 -->

    <property>

        <name>yarn.resourcemanager.scheduler.class</name>

        <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>

    </property>

    <property>

        <name>yarn.resourcemanager.recovery.enabled</name>

        <value>true</value>

    </property>

    <property>

        <name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>

        <value>5000</value>

    </property>

    <!-- ZKRMStateStore configuration -->

    <property>

        <name>yarn.resourcemanager.store.class</name>

        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>

    </property>

    <property>

        <name>yarn.resourcemanager.zk-address</name>

        <value>sht-sgmhadoopdn-01:2181,sht-sgmhadoopdn-02:2181,sht-sgmhadoopdn-03:2181</value>

    </property>

    <property>

        <name>yarn.resourcemanager.zk.state-store.address</name>

        <value>sht-sgmhadoopdn-01:2181,sht-sgmhadoopdn-02:2181,sht-sgmhadoopdn-03:2181</value>

    </property>

    <!-- RPC address clients use to reach the RM (applications manager interface) -->

    <property>

        <name>yarn.resourcemanager.address.rm1</name>

        <value>sht-sgmhadoopnn-01:23140</value>

    </property>

    <property>

        <name>yarn.resourcemanager.address.rm2</name>

        <value>sht-sgmhadoopnn-02:23140</value>

    </property>

    <!-- RPC address ApplicationMasters use to reach the RM (scheduler interface) -->

    <property>

        <name>yarn.resourcemanager.scheduler.address.rm1</name>

        <value>sht-sgmhadoopnn-01:23130</value>

    </property>

    <property>

        <name>yarn.resourcemanager.scheduler.address.rm2</name>

        <value>sht-sgmhadoopnn-02:23130</value>

    </property>

    <!-- RM admin interface -->

    <property>

        <name>yarn.resourcemanager.admin.address.rm1</name>

        <value>sht-sgmhadoopnn-01:23141</value>

    </property>

    <property>

        <name>yarn.resourcemanager.admin.address.rm2</name>

        <value>sht-sgmhadoopnn-02:23141</value>

    </property>

    <!-- RPC address NodeManagers use to reach the RM -->

    <property>

        <name>yarn.resourcemanager.resource-tracker.address.rm1</name>

        <value>sht-sgmhadoopnn-01:23125</value>

    </property>

    <property>

        <name>yarn.resourcemanager.resource-tracker.address.rm2</name>

        <value>sht-sgmhadoopnn-02:23125</value>

    </property>

    <!-- RM web application addresses -->

    <property>

        <name>yarn.resourcemanager.webapp.address.rm1</name>

        <value>sht-sgmhadoopnn-01:8088</value>

    </property>

    <property>

        <name>yarn.resourcemanager.webapp.address.rm2</name>

        <value>sht-sgmhadoopnn-02:8088</value>

    </property>

    <property>

        <name>yarn.resourcemanager.webapp.https.address.rm1</name>

        <value>sht-sgmhadoopnn-01:23189</value>

    </property>

    <property>

        <name>yarn.resourcemanager.webapp.https.address.rm2</name>

        <value>sht-sgmhadoopnn-02:23189</value>

    </property>

</configuration>

8. Edit slaves

[root@sht-sgmhadoopnn-01 hadoop]# vi slaves

sht-sgmhadoopdn-01

sht-sgmhadoopdn-02

sht-sgmhadoopdn-03

9. Distribute the directory to the other hosts

[root@sht-sgmhadoopnn-01 hadoop]# scp -r hadoop-2.7.2 root@sht-sgmhadoopnn-02:/hadoop

[root@sht-sgmhadoopnn-01 hadoop]# scp -r hadoop-2.7.2 root@sht-sgmhadoopdn-01:/hadoop

[root@sht-sgmhadoopnn-01 hadoop]# scp -r hadoop-2.7.2 root@sht-sgmhadoopdn-02:/hadoop

[root@sht-sgmhadoopnn-01 hadoop]# scp -r hadoop-2.7.2 root@sht-sgmhadoopdn-03:/hadoop

VIII. Start the Cluster

An alternative way to bring the cluster up: http://www.micmiu.com/bigdata/hadoop/hadoop2-cluster-ha-setup/

1. Start ZooKeeper


command: ./zkServer.sh start|stop|status

[root@sht-sgmhadoopdn-01 bin]# ./zkServer.sh start

JMX enabled by default

Using config: /hadoop/zookeeper/bin/../conf/zoo.cfg

Starting zookeeper ... STARTED

[root@sht-sgmhadoopdn-01 bin]# jps

2073 QuorumPeerMain

2106 Jps

[root@sht-sgmhadoopdn-02 bin]# ./zkServer.sh start

JMX enabled by default

Using config: /hadoop/zookeeper/bin/../conf/zoo.cfg

Starting zookeeper ... STARTED

[root@sht-sgmhadoopdn-02 bin]# jps

2073 QuorumPeerMain

2106 Jps

[root@sht-sgmhadoopdn-03 bin]# ./zkServer.sh start

JMX enabled by default

Using config: /hadoop/zookeeper/bin/../conf/zoo.cfg

Starting zookeeper ... STARTED

[root@sht-sgmhadoopdn-03 bin]# jps

2073 QuorumPeerMain

2106 Jps
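To confirm the ensemble formed a quorum, zkServer.sh status on each node should report one leader and two followers, along these lines:

[root@sht-sgmhadoopdn-01 bin]# ./zkServer.sh status

JMX enabled by default

Using config: /hadoop/zookeeper/bin/../conf/zoo.cfg

Mode: follower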

2. Start Hadoop (HDFS + YARN)

a. Before formatting, start the JournalNode process on each of the JournalNode hosts


[root@sht-sgmhadoopdn-01 ~]# cd /hadoop/hadoop-2.7.2/sbin

[root@sht-sgmhadoopdn-01 sbin]# hadoop-daemon.sh start journalnode

starting journalnode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-journalnode-sht-sgmhadoopdn-01.telenav.cn.out

[root@sht-sgmhadoopdn-01 sbin]# jps

16722 JournalNode

16775 Jps

15519 QuorumPeerMain

[root@sht-sgmhadoopdn-02 ~]# cd /hadoop/hadoop-2.7.2/sbin

[root@sht-sgmhadoopdn-02 sbin]# hadoop-daemon.sh start journalnode

starting journalnode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-journalnode-sht-sgmhadoopdn-02.telenav.cn.out

[root@sht-sgmhadoopdn-02 sbin]# jps

16722 JournalNode

16775 Jps

15519 QuorumPeerMain

[root@sht-sgmhadoopdn-03 ~]# cd /hadoop/hadoop-2.7.2/sbin

[root@sht-sgmhadoopdn-03 sbin]# hadoop-daemon.sh start journalnode

starting journalnode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-journalnode-sht-sgmhadoopdn-03.telenav.cn.out

[root@sht-sgmhadoopdn-03 sbin]# jps

16722 JournalNode

16775 Jps

15519 QuorumPeerMain

b. Format the NameNode


[root@sht-sgmhadoopnn-01 bin]# hadoop namenode -format

16/02/25 14:05:04 INFO namenode.NameNode: STARTUP_MSG:

/************************************************************

STARTUP_MSG: Starting NameNode

STARTUP_MSG: host = sht-sgmhadoopnn-01.telenav.cn/172.16.101.55

STARTUP_MSG: args = [-format]

STARTUP_MSG: version = 2.7.2

STARTUP_MSG: classpath =

……………..

………………

16/02/25 14:05:07 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033

16/02/25 14:05:07 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0

16/02/25 14:05:07 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000

16/02/25 14:05:07 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10

16/02/25 14:05:07 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10

16/02/25 14:05:07 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25

16/02/25 14:05:07 INFO namenode.FSNamesystem: Retry cache on namenode is enabled

16/02/25 14:05:07 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis

16/02/25 14:05:07 INFO util.GSet: Computing capacity for map NameNodeRetryCache

16/02/25 14:05:07 INFO util.GSet: VM type = 64-bit

16/02/25 14:05:07 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB

16/02/25 14:05:07 INFO util.GSet: capacity = 2^15 = 32768 entries

16/02/25 14:05:08 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1182930464-172.16.101.55-1456380308394

16/02/25 14:05:08 INFO common.Storage: Storage directory /hadoop/hadoop-2.7.2/data/dfs/name has been successfully formatted.

16/02/25 14:05:08 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0

16/02/25 14:05:08 INFO util.ExitUtil: Exiting with status 0

16/02/25 14:05:08 INFO namenode.NameNode: SHUTDOWN_MSG:

/************************************************************

SHUTDOWN_MSG: Shutting down NameNode at sht-sgmhadoopnn-01.telenav.cn/172.16.101.55

************************************************************/

c. Synchronize the NameNode metadata


Copy the metadata from sht-sgmhadoopnn-01 to sht-sgmhadoopnn-02.

This mainly covers dfs.namenode.name.dir and dfs.namenode.edits.dir; also make sure the shared edits directory (dfs.namenode.shared.edits.dir) holds all of the NameNode's metadata.

[root@sht-sgmhadoopnn-01 hadoop-2.7.2]# pwd

/hadoop/hadoop-2.7.2

[root@sht-sgmhadoopnn-01 hadoop-2.7.2]# scp -r data/ root@sht-sgmhadoopnn-02:/hadoop/hadoop-2.7.2

seen_txid 100% 2 0.0KB/s 00:00

fsimage_0000000000000000000 100% 351 0.3KB/s 00:00

fsimage_0000000000000000000.md5 100% 62 0.1KB/s 00:00

VERSION 100% 205 0.2KB/s 00:00
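Copying the data directory with scp works; Hadoop also has a built-in equivalent that can be run on the standby instead, while the NameNode on sht-sgmhadoopnn-01 is up:

[root@sht-sgmhadoopnn-02 ~]# hdfs namenode -bootstrapStandby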

d. Initialize the ZKFC


[root@sht-sgmhadoopnn-01 bin]# hdfs zkfc -formatZK

……………..

……………..

16/02/25 14:14:41 INFO zookeeper.ZooKeeper: Client environment:user.home=/root

16/02/25 14:14:41 INFO zookeeper.ZooKeeper: Client environment:user.dir=/hadoop/hadoop-2.7.2/bin

16/02/25 14:14:41 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=sht-sgmhadoopdn-01:2181,sht-sgmhadoopdn-02:2181,sht-sgmhadoopdn-03:2181 sessionTimeout=2000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@5f4298a5

16/02/25 14:14:41 INFO zookeeper.ClientCnxn: Opening socket connection to server sht-sgmhadoopdn-01.telenav.cn/172.16.101.58:2181. Will not attempt to authenticate using SASL (unknown error)

16/02/25 14:14:41 INFO zookeeper.ClientCnxn: Socket connection established to sht-sgmhadoopdn-01.telenav.cn/172.16.101.58:2181, initiating session

16/02/25 14:14:42 INFO zookeeper.ClientCnxn: Session establishment complete on server sht-sgmhadoopdn-01.telenav.cn/172.16.101.58:2181, sessionid = 0x15316c965750000, negotiated timeout = 4000

16/02/25 14:14:42 INFO ha.ActiveStandbyElector: Session connected.

16/02/25 14:14:42 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/mycluster in ZK.

16/02/25 14:14:42 INFO zookeeper.ClientCnxn: EventThread shut down

16/02/25 14:14:42 INFO zookeeper.ZooKeeper: Session: 0x15316c965750000 closed
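The znode this created can be checked from any ZooKeeper node; listing /hadoop-ha should show the nameservice:

[root@sht-sgmhadoopdn-01 bin]# ./zkCli.sh -server sht-sgmhadoopdn-01:2181

[zk: sht-sgmhadoopdn-01:2181(CONNECTED) 0] ls /hadoop-ha

[mycluster]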

e. Start HDFS

To start the cluster, run start-dfs.sh on sht-sgmhadoopnn-01.

To stop it, run stop-dfs.sh on sht-sgmhadoopnn-01.

##### Cluster start ############


[root@sht-sgmhadoopnn-01 sbin]# start-dfs.sh

16/02/25 14:21:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Starting namenodes on [sht-sgmhadoopnn-01 sht-sgmhadoopnn-02]

sht-sgmhadoopnn-01: starting namenode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-namenode-sht-sgmhadoopnn-01.telenav.cn.out

sht-sgmhadoopnn-02: starting namenode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-namenode-sht-sgmhadoopnn-02.telenav.cn.out

sht-sgmhadoopdn-01: starting datanode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-sht-sgmhadoopdn-01.telenav.cn.out

sht-sgmhadoopdn-02: starting datanode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-sht-sgmhadoopdn-02.telenav.cn.out

sht-sgmhadoopdn-03: starting datanode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-sht-sgmhadoopdn-03.telenav.cn.out

Starting journal nodes [sht-sgmhadoopdn-01 sht-sgmhadoopdn-02 sht-sgmhadoopdn-03]

sht-sgmhadoopdn-01: journalnode running as process 6348. Stop it first.

sht-sgmhadoopdn-03: journalnode running as process 16722. Stop it first.

sht-sgmhadoopdn-02: journalnode running as process 7197. Stop it first.

16/02/25 14:21:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Starting ZK Failover Controllers on NN hosts [sht-sgmhadoopnn-01 sht-sgmhadoopnn-02]

sht-sgmhadoopnn-01: starting zkfc, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-zkfc-sht-sgmhadoopnn-01.telenav.cn.out

sht-sgmhadoopnn-02: starting zkfc, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-zkfc-sht-sgmhadoopnn-02.telenav.cn.out

You have mail in /var/spool/mail/root

#### Per-daemon start ###########

NameNode(sht-sgmhadoopnn-01, sht-sgmhadoopnn-02):

hadoop-daemon.sh start namenode

DataNode(sht-sgmhadoopdn-01, sht-sgmhadoopdn-02, sht-sgmhadoopdn-03):

hadoop-daemon.sh start datanode

JournalNode(sht-sgmhadoopdn-01, sht-sgmhadoopdn-02, sht-sgmhadoopdn-03):

hadoop-daemon.sh start journalnode

ZKFC(sht-sgmhadoopnn-01, sht-sgmhadoopnn-02):

hadoop-daemon.sh start zkfc

f. Verify the namenode, datanode, and zkfc processes

1) Processes


[root@sht-sgmhadoopnn-01 sbin]# jps

12712 Jps

12593 DFSZKFailoverController

12278 NameNode

[root@sht-sgmhadoopnn-02 ~]# jps

29714 NameNode

29849 DFSZKFailoverController

30229 Jps

[root@sht-sgmhadoopdn-01 ~]# jps

6348 JournalNode

8775 Jps

559 QuorumPeerMain

8509 DataNode

[root@sht-sgmhadoopdn-02 ~]# jps

9430 Jps

9160 DataNode

7197 JournalNode

2073 QuorumPeerMain

[root@sht-sgmhadoopdn-03 ~]# jps

16722 JournalNode

17369 Jps

15519 QuorumPeerMain

17214 DataNode

2) Web UIs

sht-sgmhadoopnn-01:

http://172.16.101.55:50070/

sht-sgmhadoopnn-02:

http://172.16.101.56:50070/
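Besides the web UIs, hdfs haadmin shows which NameNode won the election (typically nn1 comes up active when started this way):

[root@sht-sgmhadoopnn-01 ~]# hdfs haadmin -getServiceState nn1

active

[root@sht-sgmhadoopnn-01 ~]# hdfs haadmin -getServiceState nn2

standby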

g. Start the YARN framework

##### Cluster start ############

1) Start YARN from sht-sgmhadoopnn-01; the script lives in $HADOOP_HOME/sbin


[root@sht-sgmhadoopnn-01 sbin]# start-yarn.sh

starting yarn daemons

starting resourcemanager, logging to /hadoop/hadoop-2.7.2/logs/yarn-root-resourcemanager-sht-sgmhadoopnn-01.telenav.cn.out

sht-sgmhadoopdn-03: starting nodemanager, logging to /hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-sht-sgmhadoopdn-03.telenav.cn.out

sht-sgmhadoopdn-02: starting nodemanager, logging to /hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-sht-sgmhadoopdn-02.telenav.cn.out

sht-sgmhadoopdn-01: starting nodemanager, logging to /hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-sht-sgmhadoopdn-01.telenav.cn.out

2) Start the standby ResourceManager on sht-sgmhadoopnn-02


[root@sht-sgmhadoopnn-02 sbin]# yarn-daemon.sh start resourcemanager

starting resourcemanager, logging to /hadoop/hadoop-2.7.2/logs/yarn-root-resourcemanager-sht-sgmhadoopnn-02.telenav.cn.out

#### Per-daemon start ###########

1) ResourceManager(sht-sgmhadoopnn-01, sht-sgmhadoopnn-02)

yarn-daemon.sh start resourcemanager

2) NodeManager(sht-sgmhadoopdn-01, sht-sgmhadoopdn-02, sht-sgmhadoopdn-03)

yarn-daemon.sh start nodemanager

###### Shutdown #############

[root@sht-sgmhadoopnn-01 sbin]# stop-yarn.sh

# Stops the ResourceManager on the NameNode host and the NodeManagers on the DataNode hosts

[root@sht-sgmhadoopnn-02 sbin]# yarn-daemon.sh stop resourcemanager

h. Verify the resourcemanager and nodemanager processes

1) Processes


[root@sht-sgmhadoopnn-01 sbin]# jps

13611 Jps

12593 DFSZKFailoverController

12278 NameNode

13384 ResourceManager

[root@sht-sgmhadoopnn-02 sbin]# jps

32265 ResourceManager

32304 Jps

29714 NameNode

29849 DFSZKFailoverController

[root@sht-sgmhadoopdn-01 ~]# jps

6348 JournalNode

559 QuorumPeerMain

8509 DataNode

10286 NodeManager

10423 Jps

[root@sht-sgmhadoopdn-02 ~]# jps

9160 DataNode

10909 NodeManager

11937 Jps

7197 JournalNode

2073 QuorumPeerMain

[root@sht-sgmhadoopdn-03 ~]# jps

18031 Jps

16722 JournalNode

17710 NodeManager

15519 QuorumPeerMain

17214 DataNode

2) Web UIs

ResourceManager (Active): http://172.16.101.55:8088

ResourceManager (Standby): http://172.16.101.56:8088/cluster/cluster
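The RM states can also be checked from the command line (here rm1, on sht-sgmhadoopnn-01, happens to be active, matching the web UIs above):

[root@sht-sgmhadoopnn-01 ~]# yarn rmadmin -getServiceState rm1

active

[root@sht-sgmhadoopnn-01 ~]# yarn rmadmin -getServiceState rm2

standby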

IX. Monitor the Cluster

[root@sht-sgmhadoopnn-01 ~]# hdfs dfsadmin -report
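As an end-to-end smoke test, submitting the bundled pi example exercises HDFS, the active ResourceManager, and the NodeManagers in one shot (jar path as shipped in the 2.7.2 tarball):

[root@sht-sgmhadoopnn-01 ~]# yarn jar /hadoop/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar pi 2 10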

X. Downloads and References

#http://archive-primary.cloudera.com/cdh5/cdh/5/hadoop-2.6.0-cdh5.5.2.tar.gz

#http://archive-primary.cloudera.com/cdh5/cdh/5/zookeeper-3.4.5-cdh5.5.2.tar.gz

hadoop: http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.7.2/hadoop-2.7.2.tar.gz

zookeeper: http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz

References:

Hadoop-2.3.0-cdh5.0.1 fully distributed environment setup (NameNode and ResourceManager HA):

http://blog.itpub.net/30089851/viewspace-1987620/

How to fix errors like "The string "--" is not permitted within comments":

http://blog.csdn.net/free4294/article/details/38681095

Fixing garbled Chinese output in a SecureCRT Linux terminal:

http://www.cnblogs.com/qi09/archive/2013/02/05/2892922.html

