Cluster plan
Hostname | IP | Installed software | Processes
---|---|---|---
hadoop01 | 192.168.2.101 | jdk, hadoop | NameNode, DFSZKFailoverController
hadoop02 | 192.168.2.102 | jdk, hadoop | NameNode, DFSZKFailoverController
hadoop03 | 192.168.2.103 | jdk, hadoop | ResourceManager
hadoop04 | 192.168.2.104 | jdk, hadoop, zookeeper | DataNode, NodeManager, JournalNode
hadoop05 | 192.168.2.105 | jdk, hadoop, zookeeper | DataNode, NodeManager, JournalNode
hadoop06 | 192.168.2.106 | jdk, hadoop, zookeeper | DataNode, NodeManager, JournalNode
Six hosts
Hostnames: hadoop01, hadoop02, hadoop03, hadoop04, hadoop05, hadoop06
Password: 12345678

Set up the hostname mapping (as root)
Map each hostname to its IP address:

```
vi /etc/hosts
```

Add the following lines to the file:

```
192.168.2.101 hadoop01
192.168.2.102 hadoop02
192.168.2.103 hadoop03
192.168.2.104 hadoop04
192.168.2.105 hadoop05
192.168.2.106 hadoop06
```
Copy /etc/hosts to the other hosts:

```
scp /etc/hosts hadoop02:/etc/
scp /etc/hosts hadoop03:/etc/
scp /etc/hosts hadoop04:/etc/
scp /etc/hosts hadoop05:/etc/
scp /etc/hosts hadoop06:/etc/
```
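A quick way to confirm the mapping works (a minimal sketch; run it on any host after the copy):

```bash
# Each hostname should resolve to its 192.168.2.x address and answer one ping
for h in hadoop01 hadoop02 hadoop03 hadoop04 hadoop05 hadoop06; do
    ping -c 1 "$h"
done
```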
Disable the firewall (as root)

```
# Stop the firewall
sudo systemctl stop firewalld.service
# Keep it from starting at boot
sudo systemctl disable firewalld.service
```
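To confirm the firewall is really off, a quick check using standard systemctl queries:

```bash
# Should print "inactive" and "disabled" respectively
sudo systemctl is-active firewalld.service
sudo systemctl is-enabled firewalld.service
```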
Create a dedicated user (as root)

The cluster is usually built under a dedicated hadoop user rather than directly as root.

Create the group and user

Every virtual machine should have this hadoop user.

```
# First create the cloud group
groupadd cloud
# Create the hadoop user and add it to the cloud group
useradd -g cloud hadoop
# Set the hadoop user's password
passwd hadoop
```
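A quick check that the user and group were created as intended:

```bash
# Should show the hadoop uid with the cloud group as its primary group
id hadoop
```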
Add the hadoop user to the sudoers list

1. Check the permissions of /etc/sudoers

```
ls -l /etc/sudoers
```

You can see it is read-only, so we must change its permissions before we can edit it.

2. Change the permissions

```
chmod 777 /etc/sudoers
```
3. Grant the hadoop user root privileges

```
vim /etc/sudoers
```

Below the root entry, add a corresponding line for the hadoop user (see the sketch below).
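The standard sudoers syntax for this, placed directly under the `root ALL=(ALL) ALL` line, is:

```
hadoop  ALL=(ALL)       ALL
```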
4. Restore the permissions

```
chmod 440 /etc/sudoers
```

Copy /etc/sudoers to the other hosts:

```
scp /etc/sudoers hadoop02:/etc/
scp /etc/sudoers hadoop03:/etc/
scp /etc/sudoers hadoop04:/etc/
scp /etc/sudoers hadoop05:/etc/
scp /etc/sudoers hadoop06:/etc/
```
Configure passwordless login (as the hadoop user)

Switch to the hadoop user:

```
su hadoop
```

Go to the user's home directory:

```
cd ~
```

List all files:

```
ls -la
```

Enter the .ssh directory:

```
cd .ssh
```

Generate the public and private keys (press Enter four times):

```
ssh-keygen -t rsa
```

After this command finishes, two files are generated: id_rsa (the private key) and id_rsa.pub (the public key).

Copy the public key to every machine you want to log into without a password:

```
ssh-copy-id 192.168.2.101
ssh-copy-id 192.168.2.102
ssh-copy-id 192.168.2.103
ssh-copy-id 192.168.2.104
ssh-copy-id 192.168.2.105
ssh-copy-id 192.168.2.106
```
This creates a file named authorized_keys under .ssh/ on host 192.168.2.102, after which `ssh 192.168.2.102` logs you in without a password prompt. The other machines can be given passwordless login in the same way (a quick check is sketched below).
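A minimal sketch to verify passwordless login from hadoop01, assuming the keys were copied as above:

```bash
# Each host should print its hostname without asking for a password
for h in hadoop02 hadoop03 hadoop04 hadoop05 hadoop06; do
    ssh "$h" hostname
done
```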
Prepare the software

Create a cloud folder under /home/hadoop/ to install the software into, and put all of the installation packages in a soft-install folder under cloud, for example:
```
cd /home/hadoop
mkdir cloud
mkdir cloud/soft-install
```
Upload the software we need into this soft-install directory.
Install the JDK

Extract it:

```
tar -zxvf jdk-8u91-linux-x64.tar.gz -C /home/hadoop/cloud/
```

Configure the environment variables:

```
# Edit the configuration file
sudo vi /etc/profile
# Add at the end
export JAVA_HOME=/home/hadoop/cloud/jdk1.8.0_91
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
# Reload the configuration file
source /etc/profile
```
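A quick sanity check that the JDK is picked up:

```bash
# Both should point at the JDK we just extracted
echo $JAVA_HOME
java -version
```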
Copy the JDK and environment variables to the other hosts

You can simply copy the whole cloud folder over:

```
scp -r cloud/ hadoop02:/home/hadoop/
scp -r cloud/ hadoop03:/home/hadoop/
scp -r cloud/ hadoop04:/home/hadoop/
scp -r cloud/ hadoop05:/home/hadoop/
scp -r cloud/ hadoop06:/home/hadoop/
```

Copy the environment variables to the other hosts:

```
sudo scp /etc/profile hadoop02:/etc/
sudo scp /etc/profile hadoop03:/etc/
sudo scp /etc/profile hadoop04:/etc/
sudo scp /etc/profile hadoop05:/etc/
sudo scp /etc/profile hadoop06:/etc/
```

Reload the environment variables on each host:

```
source /etc/profile
```
Install ZooKeeper
Installation

We have already installed the JDK; now install ZooKeeper on hadoop04, hadoop05, and hadoop06.

1. Extract

```
tar -zxvf zookeeper-3.4.8.tar.gz -C /home/hadoop/cloud/
```

2. Modify ZooKeeper's default configuration conf/zoo_sample.cfg

```
mv zoo_sample.cfg zoo.cfg
vi zoo.cfg
```

Configure it as follows:

```
# Point dataDir at our data directory
dataDir=/home/hadoop/cloud/zookeeper-3.4.8/data
# And add at the end
server.1=hadoop04:2888:3888
server.2=hadoop05:2888:3888
server.3=hadoop06:2888:3888
```
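For reference, the resulting zoo.cfg would look roughly like this; the values other than dataDir and the server.* lines are the defaults shipped in zoo_sample.cfg:

```
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/hadoop/cloud/zookeeper-3.4.8/data
clientPort=2181
server.1=hadoop04:2888:3888
server.2=hadoop05:2888:3888
server.3=hadoop06:2888:3888
```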
3. Create a data folder under /home/hadoop/cloud/zookeeper-3.4.8/

```
mkdir data
```

4. In the data folder, create a myid file that identifies this machine's id

```
vim myid
```
The id is 1 for hadoop04, 2 for hadoop05, and 3 for hadoop06; we copy everything over in the next step and then adjust each myid.

5. Copy zookeeper-3.4.8 to the hadoop04, hadoop05, and hadoop06 machines and set the corresponding myid on each (see the sketch after the copy commands)
```
scp -r zookeeper-3.4.8/ hadoop04:/home/hadoop/cloud/
scp -r zookeeper-3.4.8/ hadoop05:/home/hadoop/cloud/
scp -r zookeeper-3.4.8/ hadoop06:/home/hadoop/cloud/
```
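One way to set each host's myid after the copy (a sketch that relies on the passwordless SSH configured earlier):

```bash
# The number written must match that host's server.N entry in zoo.cfg
ssh hadoop04 'echo 1 > /home/hadoop/cloud/zookeeper-3.4.8/data/myid'
ssh hadoop05 'echo 2 > /home/hadoop/cloud/zookeeper-3.4.8/data/myid'
ssh hadoop06 'echo 3 > /home/hadoop/cloud/zookeeper-3.4.8/data/myid'
```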
Start ZooKeeper

Start ZooKeeper on hadoop04, hadoop05, and hadoop06:

```
# Run the script under /home/hadoop/cloud/zookeeper-3.4.8/bin to start it
./zkServer.sh start
```

Check ZooKeeper's status:

```
./zkServer.sh status
```

Run this from the bin/ directory; if it reports the node's mode (leader or follower), the quorum is up (at least two nodes must be running at this point).

In fact, you can find the leader and stop it, and you will see ZooKeeper immediately switch to a new leader (see the sketch below).
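A minimal sketch of that failover experiment:

```bash
# On the node whose status reports "Mode: leader"
./zkServer.sh stop
# On either of the remaining nodes: one of them should now report "Mode: leader"
./zkServer.sh status
# Bring the stopped node back afterwards
./zkServer.sh start
```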
Install Hadoop

Installation (install on hadoop01 first, then copy to the other machines)

Extract it:

```
tar -zxvf hadoop-2.7.2.tar.gz -C /home/hadoop/cloud/
```

Configure the environment variables:

```
# Edit the configuration file
sudo vi /etc/profile
# Add at the end
export HADOOP_HOME=/home/hadoop/cloud/hadoop-2.7.2
export PATH=$PATH:$HADOOP_HOME/bin
# Reload the configuration file
source /etc/profile
```
Verify that hadoop is on the PATH:

```
which hadoop
```
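If the PATH is correct, printing the version should also work:

```bash
# Should report Hadoop 2.7.2
hadoop version
```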
Modify the configuration files (6 of them)
hadoop-env.sh
```
# The java implementation to use.
export JAVA_HOME=/home/hadoop/cloud/jdk1.8.0_91
```
core-site.xml
```xml
<configuration>
    <!-- Hadoop temporary directory -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/cloud/hadoop-2.7.2/tmp</value>
    </property>
    <!-- Default file system: the HDFS nameservice ns1 -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns1</value>
    </property>
    <!-- ZooKeeper quorum used for HA -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop04:2181,hadoop05:2181,hadoop06:2181</value>
    </property>
</configuration>
```
hdfs-site.xml
```xml
<configuration>
    <!-- Name of the HDFS nameservice -->
    <property>
        <name>dfs.nameservices</name>
        <value>ns1</value>
    </property>
    <!-- The two NameNodes of ns1 -->
    <property>
        <name>dfs.ha.namenodes.ns1</name>
        <value>nn1,nn2</value>
    </property>
    <!-- RPC and HTTP addresses of nn1 (hadoop01) -->
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn1</name>
        <value>hadoop01:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns1.nn1</name>
        <value>hadoop01:50070</value>
    </property>
    <!-- RPC and HTTP addresses of nn2 (hadoop02) -->
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn2</name>
        <value>hadoop02:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns1.nn2</name>
        <value>hadoop02:50070</value>
    </property>
    <!-- JournalNodes that hold the shared edit log -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop04:8485;hadoop05:8485;hadoop06:8485/ns1</value>
    </property>
    <!-- Local directory where the JournalNodes store edits -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/home/hadoop/cloud/hadoop-2.7.2/journal/</value>
    </property>
    <!-- Enable automatic failover -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <!-- Client-side failover proxy provider -->
    <property>
        <name>dfs.client.failover.proxy.provider.ns1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- Fencing methods used during failover -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>
            sshfence
            shell(/bin/true)
        </value>
    </property>
    <!-- Private key used by sshfence -->
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    <!-- SSH connection timeout for fencing, in milliseconds -->
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>
</configuration>
```
mapred-site.xml

It needs to be renamed first:

```
mv mapred-site.xml.template mapred-site.xml
```

```xml
<configuration>
    <!-- Run MapReduce on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
```
yarn-site.xml
```xml
<configuration>
    <!-- The host running the ResourceManager -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop03</value>
    </property>
    <!-- Auxiliary service needed for the MapReduce shuffle -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
```
slaves
```
hadoop04
hadoop05
hadoop06
```
Also create the tmp directory under hadoop-2.7.2:

```
mkdir tmp
```
Copy the configured files to the other hosts

Copy hadoop-2.7.2 to the other hosts:

```
scp -r hadoop-2.7.2 hadoop02:/home/hadoop/cloud/
scp -r hadoop-2.7.2 hadoop03:/home/hadoop/cloud/
scp -r hadoop-2.7.2 hadoop04:/home/hadoop/cloud/
scp -r hadoop-2.7.2 hadoop05:/home/hadoop/cloud/
scp -r hadoop-2.7.2 hadoop06:/home/hadoop/cloud/
```
Copy the environment variables to the other hosts:

```
sudo scp /etc/profile hadoop02:/etc/
sudo scp /etc/profile hadoop03:/etc/
sudo scp /etc/profile hadoop04:/etc/
sudo scp /etc/profile hadoop05:/etc/
sudo scp /etc/profile hadoop06:/etc/
```

Reload the environment variables on each host:

```
source /etc/profile
```
Startup

Pay attention to the startup order.

1. Start ZooKeeper (on hadoop04, 05, 06)

2. Start the JournalNodes (on hadoop04, 05, 06)

```
# From the hadoop-2.7.2 directory
./sbin/hadoop-daemon.sh start journalnode
```

3. Format HDFS (the NameNode); this only needs to be done the first time (run on either hadoop01 or hadoop02). Copying this command directly can cause problems, so it is best to type it by hand.

```
./bin/hdfs namenode -format
```

Then copy the /home/hadoop/cloud/hadoop-2.7.2/tmp folder to the other NameNode's directory:

```
scp -r /home/hadoop/cloud/hadoop-2.7.2/tmp hadoop@hadoop02:/home/hadoop/cloud/hadoop-2.7.2/
```
4. Format ZK (on hadoop01 only); again, copying the command directly can cause problems, so it is best to type it by hand.

```
./bin/hdfs zkfc -formatZK
```

5. Start zkfc to monitor the NameNode state (on hadoop01 and hadoop02)

```
./sbin/hadoop-daemon.sh start zkfc
```

6. Start HDFS (on hadoop01 only)

```
# From the hadoop-2.7.2 directory
./sbin/start-dfs.sh
```

7. Start YARN (MapReduce) (on hadoop03 only)

```
# From the hadoop-2.7.2 directory
./sbin/start-yarn.sh
```
Check the results

If none of the startup steps above reported errors, each virtual machine should now be running its own processes, exactly as planned at the beginning.

Check the processes on the local machine:

```
jps
```
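To check every host at once, a sketch that uses the passwordless SSH set up earlier (the full path to jps avoids PATH issues in non-interactive shells):

```bash
# Each host's process list should match the cluster plan table
for h in hadoop01 hadoop02 hadoop03 hadoop04 hadoop05 hadoop06; do
    echo "== $h =="
    ssh "$h" /home/hadoop/cloud/jdk1.8.0_91/bin/jps
done
```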
Test through the browser:

```
http://192.168.2.101:50070/
```

You can see that hadoop01's NameNode is in standby state, so hadoop02's should be active.
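You can also query the NameNode states from the command line (nn1 and nn2 are the NameNode ids configured in hdfs-site.xml above):

```bash
# One should report "active" and the other "standby"
./bin/hdfs haadmin -getServiceState nn1
./bin/hdfs haadmin -getServiceState nn2
```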
Check YARN's status:

```
http://192.168.2.103:8088/
```