Hadoop 2.0 has seen many stable releases and gained many features, such as HDFS HA and YARN. The more recent hadoop-2.7.2 adds YARN HA as well.
Change the hostnames and IP addresses. These steps were covered in an earlier post, so they are not repeated here.
Configure the mapping between IP addresses and hostnames:
sudo vi /etc/hosts
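A minimal /etc/hosts sketch that matches the cluster plan below (adjust the addresses if your network differs):
192.168.2.201 spark01
192.168.2.202 spark02
192.168.2.203 spark03
192.168.2.204 spark04
192.168.2.205 spark05
192.168.2.206 spark06
192.168.2.207 spark07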
Cluster plan:
Hostname   IP              Installed software       Running processes
spark01    192.168.2.201   jdk, hadoop              NameNode, DFSZKFailoverController (zkfc)
spark02    192.168.2.202   jdk, hadoop              NameNode, DFSZKFailoverController (zkfc)
spark03    192.168.2.203   jdk, hadoop              ResourceManager
spark04    192.168.2.204   jdk, hadoop              ResourceManager
spark05    192.168.2.205   jdk, hadoop, zookeeper   DataNode, NodeManager, JournalNode, QuorumPeerMain
spark06    192.168.2.206   jdk, hadoop, zookeeper   DataNode, NodeManager, JournalNode, QuorumPeerMain
spark07    192.168.2.207   jdk, hadoop, zookeeper   DataNode, NodeManager, JournalNode, QuorumPeerMain

2. Install the ZooKeeper cluster (on spark05-spark07)
Reference: https://blog.csdn.net/u013821825/article/details/51375860
3. Install the Hadoop cluster
3.1 Extract the archive
tar -zxvf hadoop-2.7.2.tar.gz -C /home/hadoop/app/
3.2 Configure HDFS (in Hadoop 2.x all configuration files live under $HADOOP_HOME/etc/hadoop)
# Add hadoop to the environment variables
sudo vi /etc/profile
export JAVA_HOME=/home/hadoop/app/jdk1.8.0_91
export HADOOP_HOME=/home/hadoop/app/hadoop-2.7.2
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
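After saving the file, the new variables can be loaded into the current shell (a small usage note, assuming a bash login shell):
source /etc/profile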
# All Hadoop 2.x configuration files are under $HADOOP_HOME/etc/hadoop
cd /home/hadoop/app/hadoop-2.7.2/etc/hadoop
a. Edit hadoop-env.sh
export JAVA_HOME=/home/hadoop/app/jdk1.8.0_91
b. Edit core-site.xml
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://ns1/</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hadoop/app/hadoop-2.7.2/tmp</value>
</property>
<property>
  <name>ha.zookeeper.quorum</name>
  <value>spark05:2181,spark06:2181,spark07:2181</value>
</property>
c. Edit hdfs-site.xml
<property>
  <name>dfs.nameservices</name>
  <value>ns1</value>
</property>
<property>
  <name>dfs.ha.namenodes.ns1</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns1.nn1</name>
  <value>spark01:9000</value>
</property>
<property>
  <name>dfs.namenode.http-address.ns1.nn1</name>
  <value>spark01:50070</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns1.nn2</name>
  <value>spark02:9000</value>
</property>
<property>
  <name>dfs.namenode.http-address.ns1.nn2</name>
  <value>spark02:50070</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://spark05:8485;spark06:8485;spark07:8485/ns1</value>
</property>
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/home/hadoop/app/hadoop-2.7.2/journaldata</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.ns1</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>
    sshfence
    shell(/bin/true)
  </value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/home/hadoop/.ssh/id_rsa</value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.connect-timeout</name>
  <value>30000</value>
</property>
d. Edit mapred-site.xml
First run: [hadoop@hadoop01 hadoop]$ mv mapred-site.xml.template mapred-site.xml
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
e. Edit yarn-site.xml
<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.cluster-id</name>
  <value>yrc</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>spark03</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>spark04</value>
</property>
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>spark05:2181,spark06:2181,spark07:2181</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
f. Edit slaves (the slaves file lists the worker nodes; since HDFS is started on spark01 and YARN on spark03, the slaves file on spark01 specifies where the DataNodes run, while the slaves file on spark03 specifies where the NodeManagers run)
spark05
spark06
spark07
Passwordless SSH login:
First configure passwordless login from spark01 to spark02, spark03, spark04, spark05, spark06, and spark07.
[hadoop@hadoop01 ~]$ ssh-keygen -t rsa
[hadoop@hadoop01 ~]$ ssh-copy-id spark02
[hadoop@hadoop01 ~]$ ssh-copy-id spark03
(repeat ssh-copy-id for spark04 through spark07; only one screenshot is included here)
Configure passwordless login from spark03 to spark05-spark07.
Repeat the steps above on spark03: ssh-keygen -t rsa
[hadoop@hadoop01 ~]$ ssh-copy-id spark05
[hadoop@hadoop01 ~]$ ssh-copy-id spark06
[hadoop@hadoop01 ~]$ ssh-copy-id spark07
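A quick check (not part of the original walkthrough) that the key exchange worked: an ssh command to a target host should no longer prompt for a password, for example:
[hadoop@hadoop01 ~]$ ssh spark05 hostname
spark05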
g. Copy the configured Hadoop directory from spark01 to the other machines
scp -r hadoop-2.7.2/ spark02:/home/hadoop/app/
Copy it to spark02 through spark07 in turn (see the loop sketch below); once that is done the whole Hadoop cluster environment is deployed. Next comes starting and testing the cluster.
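A small sketch of that copy step written as a loop, assuming the same directory layout on every node:
for host in spark02 spark03 spark04 spark05 spark06 spark07; do
  scp -r /home/hadoop/app/hadoop-2.7.2/ $host:/home/hadoop/app/
done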
4. Start and test
Follow the steps below strictly, in order:
1. Start the ZooKeeper cluster (start zk on spark05, spark06, and spark07 separately)
cd /home/hadoop/app/zookeeper-3.4.8/bin/
./zkServer.sh start
# Check the status: one leader, two followers
./zkServer.sh status
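On a healthy ensemble the status command ends with a Mode line; a rough sketch of what to expect (the banner lines vary by version):
Mode: leader      # on exactly one of spark05-07
Mode: follower    # on the other two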
2. Start the JournalNodes (run on spark05, spark06, and spark07 separately)
cd /home/hadoop/app/hadoop-2.7.2
sbin/hadoop-daemon.sh start journalnode
# Verify with jps: spark05, spark06, and spark07 should each now show an extra JournalNode process
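A rough sketch of the expected jps output on spark05-07 at this point (the PIDs are hypothetical):
[hadoop@spark05 ~]$ jps
2345 QuorumPeerMain
2468 JournalNode
2573 Jps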
3. Format HDFS
# Run this command on spark01:
hdfs namenode -format
# Formatting creates files under the directory given by hadoop.tmp.dir in core-site.xml (here /home/hadoop/app/hadoop-2.7.2/tmp). Then copy that tmp directory to /home/hadoop/app/hadoop-2.7.2/ on spark02:
scp -r tmp/ spark02:/home/hadoop/app/hadoop-2.7.2/
## Alternatively (recommended): run hdfs namenode -bootstrapStandby on spark02 instead of copying tmp/
4. Format ZKFC (run on spark01)
hdfs zkfc -formatZK
5. Start HDFS (run on spark01)
sbin/start-dfs.sh
6. Start YARN (#####Note#####: run start-yarn.sh on spark03. The NameNode and ResourceManager are kept on different machines for performance reasons, since both consume a lot of resources; because they are separated, they have to be started on their respective machines.)
sbin/start-yarn.sh
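In Hadoop 2.x, start-yarn.sh normally only starts the ResourceManager on the node where it is invoked, so the standby ResourceManager planned for spark04 may need to be started there by hand; a sketch:
# on spark04
cd /home/hadoop/app/hadoop-2.7.2
sbin/yarn-daemon.sh start resourcemanager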
7. View the NameNodes in the browser
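Based on the cluster plan and the http-address settings in hdfs-site.xml, the two NameNode web UIs should be reachable at:
http://192.168.2.201:50070   (spark01, nn1)
http://192.168.2.202:50070   (spark02, nn2)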
8. Verify HDFS HA
First upload a file to HDFS:
hadoop fs -put /etc/profile /profile
hadoop fs -ls /
Then kill the active NameNode:
kill -9 <pid of the active NameNode>
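The pid can be looked up with jps on the node that is currently active (spark01 in this walkthrough); a sketch with a hypothetical pid:
[hadoop@hadoop01 ~]$ jps | grep NameNode
3521 NameNode
[hadoop@hadoop01 ~]$ kill -9 3521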
ͨ¹ýä¯ÀÀÆ÷·ÃÎÊ£ºhttps://192.168.1.202:50070
NameNode ‘hadoop02:9000’ (active)
Õâ¸öʱºòweekend02ÉϵÄNameNode±ä³ÉÁËactive
Then run the command:
hadoop fs -ls /
-rw-r--r--   3 root supergroup       1926 2014-02-06 15:36 /profile
The file uploaded earlier is still there!
Manually start the NameNode that was killed:
sbin/hadoop-daemon.sh start namenode
ͨ¹ýä¯ÀÀÆ÷·ÃÎÊ£ºhttps://192.168.1.201:50070
NameNode ‘hadoop01:9000’ (standby)
Verify YARN:
Run the WordCount program from the examples that ship with Hadoop:
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /profile /out
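When the job finishes, the word counts can be inspected; part-r-00000 is the standard single-reducer output file name:
hadoop fs -ls /out
hadoop fs -cat /out/part-r-00000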
At this point all of the configuration is complete!