jdk:jdk-8u131-linux-x64.tar.gz
hadoop:hadoop-2.8.0.tar.gz
centos:CentOS-7-x86_64-Minimal-1611.iso
1. Environment setup
A: Three hosts: centos201, centos202, centos203
B: Use NAT networking with static IP addresses
C: Disable the firewall
   systemctl stop firewalld.service
   systemctl disable firewalld.service
D: Disable SELinux (set SELINUX=disabled)
   vi /etc/selinux/config
E: Reboot the system so the changes take effect
F: Update the hosts file (vi /etc/hosts)
   192.168.193.201 centos201
   192.168.193.202 centos202
   192.168.193.203 centos203
G: Add a user
   groupadd -g 601 wuyang
   useradd -g wuyang -u 601 wuyang
   (echo '123456';sleep 1;echo '123456') | passwd wuyang
H: Passwordless SSH login
   su wuyang
   ssh-keygen -t rsa
   ssh-copy-id centos201
   ssh-copy-id centos202
   ssh-copy-id centos203
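The three /etc/hosts entries in step F follow one pattern, so they can be generated instead of typed by hand; a minimal sketch (IP range and hostnames taken from the notes above — review the output, then append it with `>> /etc/hosts` as root):

```shell
# Emit the /etc/hosts lines from step F.
# The last octet doubles as the hostname suffix (centos201..centos203).
for i in 201 202 203; do
  printf '192.168.193.%s centos%s\n' "$i" "$i"
done
```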
2. Install the JDK
A: Extract the archive
B: vi /etc/profile
C: Add:
   export JAVA_HOME=/usr/java/jdk1.8.0_131
   export JRE_HOME=/usr/java/jdk1.8.0_131/jre
   export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
   export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
D: source /etc/profile
E: alternatives --install /usr/bin/java java /usr/java/jdk1.8.0_131/bin/java 300
F: alternatives --config java
G: alternatives --install /usr/bin/javac javac /usr/java/jdk1.8.0_131/bin/javac 300
H: alternatives --config javac
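The exports from step C can be checked in the current shell without logging out, by writing them to a file and sourcing it; a sketch (the temp-file path is illustrative, and nothing here verifies that the JDK is actually installed at that path):

```shell
# Write the step-C exports to a scratch file and source it,
# then confirm the variables resolve as expected.
cat > /tmp/jdk-env.sh <<'EOF'
export JAVA_HOME=/usr/java/jdk1.8.0_131
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
EOF
. /tmp/jdk-env.sh
echo "$JAVA_HOME"   # prints /usr/java/jdk1.8.0_131
```

The same check applies after step D (`source /etc/profile`): `echo $JAVA_HOME` and `java -version` should both reflect the new JDK.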
3. Install Hadoop (HDFS)
A: Extract to /usr/hadoop
B: Add the Hadoop install path to /etc/profile
   export HADOOP_HOME=/usr/hadoop
   export PATH=$PATH:$HADOOP_HOME/bin
C: source /etc/profile
D: Configure hadoop-env.sh and verify it takes effect
   vi /usr/hadoop/etc/hadoop/hadoop-env.sh
   export JAVA_HOME=/usr/java/jdk1.8.0_131
   export HADOOP_CONF_DIR=/usr/hadoop/etc/hadoop/
   source /usr/hadoop/etc/hadoop/hadoop-env.sh
   hadoop version
E: Create subdirectories under /usr/hadoop
   [root@centos201 hadoop]# mkdir tmp
   [root@centos201 hadoop]# mkdir hdfs
   [root@centos201 hadoop]# cd hdfs
   [root@centos201 hdfs]# mkdir name
   [root@centos201 hdfs]# mkdir tmp
   [root@centos201 hdfs]# mkdir data
F: Configure core-site.xml (in /usr/hadoop/etc/hadoop). This file sets the address and port of the HDFS master (the namenode):
   <property>
     <name>hadoop.tmp.dir</name>
     <value>/usr/hadoop/tmp</value>
     <final>true</final>
     <description>A base for other temporary directories.</description>
   </property>
   <property>
     <name>fs.default.name</name>
     <value>hdfs://centos201:9000</value>
     <final>true</final>
   </property>
   <property>
     <name>io.file.buffer.size</name>
     <value>131072</value>
   </property>
G: Configure hdfs-site.xml
   <property>
     <name>dfs.replication</name>
     <value>3</value>
   </property>
   <property>
     <name>dfs.name.dir</name>
     <value>/usr/hadoop/hdfs/name</value>
   </property>
   <property>
     <name>dfs.data.dir</name>
     <value>/usr/hadoop/hdfs/data</value>
   </property>
   <property>
     <name>dfs.namenode.secondary.http-address</name>
     <value>centos201:9001</value>
   </property>
   <property>
     <name>dfs.webhdfs.enabled</name>
     <value>true</value>
   </property>
   <property>
     <name>dfs.permissions</name>
     <value>false</value>
   </property>
H: Configure mapred-site.xml
   <property>
     <name>mapreduce.framework.name</name>
     <value>yarn</value>
   </property>
I: Configure yarn-site.xml
   <property>
     <name>yarn.resourcemanager.address</name>
     <value>centos201:18040</value>
   </property>
   <property>
     <name>yarn.resourcemanager.scheduler.address</name>
     <value>centos201:18030</value>
   </property>
   <property>
     <name>yarn.resourcemanager.webapp.address</name>
     <value>centos201:18088</value>
   </property>
   <property>
     <name>yarn.resourcemanager.resource-tracker.address</name>
     <value>centos201:18025</value>
   </property>
   <property>
     <name>yarn.resourcemanager.admin.address</name>
     <value>centos201:18141</value>
   </property>
   <property>
     <name>yarn.nodemanager.aux-services</name>
     <value>mapreduce_shuffle</value>
   </property>
   <property>
     <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
     <value>org.apache.hadoop.mapred.ShuffleHandler</value>
   </property>
J: Configure the slaves file
   centos201
   centos202
   centos203
K: Set ownership and permissions
   chown -R wuyang:wuyang /usr/hadoop/
   chown -R wuyang:wuyang /usr/java/
   sudo chmod -R a+w /usr/hadoop/
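The six mkdir calls in step E can be collapsed into a single `mkdir -p`; a sketch using a stand-in base directory so it runs unprivileged (the real base in these notes is /usr/hadoop):

```shell
# HADOOP_BASE stands in for /usr/hadoop so this runs without root.
HADOOP_BASE="${HADOOP_BASE:-/tmp/hadoop-dirs}"
mkdir -p "$HADOOP_BASE/tmp" \
         "$HADOOP_BASE/hdfs/name" \
         "$HADOOP_BASE/hdfs/tmp" \
         "$HADOOP_BASE/hdfs/data"
ls "$HADOOP_BASE/hdfs"   # should list the three hdfs subdirectories
```

`mkdir -p` creates missing parents and is a no-op for directories that already exist, so the command is safe to re-run on each node.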