Cloudera Manager and CDH 5.10.0 are already installed.
The Kerberos server has already been deployed (see the separate Kerberos service deployment guide for setup details), and the Kerberos client is installed on every CDH node.
On the KDC server host, create a principal named cloudera-scm/admin and set its password to cloudera-scm-1234. Run:
# kadmin.local
Authenticating as principal root/admin@TEST.COM with password.
kadmin.local: addprinc -pw cloudera-scm-1234 cloudera-scm/admin@TEST.COM
WARNING: no policy specified for cloudera-scm/admin@TEST.COM; defaulting to no policy
Principal "cloudera-scm/admin@TEST.COM" created.
In the CM web UI, go to Administration -> Security -> Status -> Enable Kerberos.
Click Continue to reach the configuration page. Note that the "Kerberos Encryption Types" set here must match what the KDC actually supports (i.e., the values in kdc.conf). Click Continue; on the next page you may leave "Manage krb5.conf through Cloudera Manager" unchecked. Click Continue and enter the username and password of the Cloudera Manager principal (the cloudera-scm/admin@TEST.COM we created earlier). Click Continue to import the KDC Account Manager Credentials. Click Continue once more, restart the cluster, and enable Kerberos. Done!
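For reference, the encryption types offered in the wizard come from the [realms] section of kdc.conf on the KDC host. A minimal illustrative sketch is below; the enctype list is an assumption and must match your own KDC (the aes128-cts-hmac-sha1-96 entry is consistent with the ticket enctype shown by klist -e later in this article):

# /var/kerberos/krb5kdc/kdc.conf (excerpt, illustrative)
[realms]
 TEST.COM = {
  max_renewable_life = 7d
  supported_enctypes = aes128-cts-hmac-sha1-96:normal des3-hmac-sha1:normal arcfour-hmac:normal
 }

Whatever appears in supported_enctypes is what must be selected under "Kerberos Encryption Types" in the wizard.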
Check which principals now exist in the KDC database:
# kadmin.local
Authenticating as principal root/admin@TEST.COM with password.
kadmin.local: listprincs
HTTP/bigdata25@TEST.COM
K/M@TEST.COM
cloudera-scm/admin@TEST.COM
hbase/_HOST@TEST.COM
hbase/bigdata25@TEST.COM
hbase@TEST.COM
hdfs/bigdata25@TEST.COM
hdfs@TEST.COM
hive/bigdata25@TEST.COM
httpfs/bigdata25@TEST.COM
hue/bigdata25@TEST.COM
kadmin/admin@TEST.COM
kadmin/changepw@TEST.COM
kafka/bigdata25@TEST.COM
kafka_mirror_maker/bigdata25@TEST.COM
krbtgt/TEST.COM@TEST.COM
mapred/bigdata25@TEST.COM
oiteboy/admin@TEST.COM
oozie/bigdata25@TEST.COM
sentry/bigdata25@TEST.COM
solr/bigdata25@TEST.COM
spark/bigdata25@TEST.COM
yarn/bigdata25@TEST.COM
zookeeper/bigdata25@TEST.COM
# kadmin.local
Authenticating as principal root/admin@TEST.COM with password.
kadmin.local: addprinc hdfs@TEST.COM
WARNING: no policy specified for hdfs@TEST.COM; defaulting to no policy
Enter password for principal "hdfs@TEST.COM":
Re-enter password for principal "hdfs@TEST.COM":
Principal "hdfs@TEST.COM" created.
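Since we chose the password for hdfs@TEST.COM ourselves, this principal can be sanity-checked directly with a password-based kinit (an illustrative session, not captured from the cluster):

# kinit hdfs@TEST.COM
Password for hdfs@TEST.COM:
# klist

If the password is accepted, klist shows a krbtgt/TEST.COM@TEST.COM ticket for hdfs@TEST.COM in the cache.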
1. Verify that HDFS works correctly
hdfs/bigdata25@TEST.COM was generated automatically by CM, so we do not know its password; instead, we can verify it by generating a keytab.
Generate the keytab file for hdfs:
# kadmin.local -q "ktadd -norandkey -k /root/hdfs.keytab hdfs/bigdata25@TEST.COM"
Verify that the keytab file works:
# klist -kt /root/hdfs.keytab
Keytab name: FILE:/root/hdfs.keytab
KVNO Timestamp           Principal
---- ------------------- ------------------------------------------------------
  13 07/03/2018 10:08:10 hdfs/bigdata25@TEST.COM
  13 07/03/2018 10:08:10 hdfs/bigdata25@TEST.COM
  13 07/03/2018 10:08:10 hdfs/bigdata25@TEST.COM
Obtain a ticket from the KDC using the keytab:
# kinit -kt /root/hdfs.keytab hdfs/bigdata25@TEST.COM
Check the ticket cache:
# klist -e
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hdfs/bigdata25@TEST.COM
Valid starting       Expires              Service principal
07/06/2018 11:24:46  07/07/2018 11:24:46  krbtgt/TEST.COM@TEST.COM
	renew until 07/11/2018 11:24:46, Etype (skey, tkt): aes128-cts-hmac-sha1-96, aes128-cts-hmac-sha1-96
List the files on HDFS:
# hdfs dfs -ls /
Found 6 items
drwx------   - hbase hbase               0 2018-07-03 09:59 /hbase
drwxr-xr-x   - hdfs  supergroup          0 2018-07-04 14:57 /lts
drwxr-xr-x   - hdfs  supergroup          0 2018-07-04 15:25 /outer
drwxrwxr-x   - solr  solr                0 2018-07-03 14:19 /solr
drwxrwxrwt   - hdfs  supergroup          0 2018-07-03 13:57 /tmp
drwxr-xr-x   - hdfs  supergroup          0 2018-07-03 11:42 /user
2. Verify that MapReduce jobs can be submitted
After obtaining the hdfs credential, submit the PI example program; if it submits and completes successfully, the Kerberized Hadoop cluster is working correctly.
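The PI job can be submitted with the MapReduce examples jar that ships with CDH; the parcel path below is the usual CDH parcel default and is an assumption about this installation:

# kinit -kt /root/hdfs.keytab hdfs/bigdata25@TEST.COM
# hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 10 100

Running the same command without a valid ticket (e.g., after kdestroy) should fail with a GSS/Kerberos authentication error, which also confirms that security is actually being enforced.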
3. Verify that the other components (ZooKeeper/HBase/Hue/Oozie, etc.) run correctly
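A few quick smoke tests, run while holding a valid Kerberos ticket (the hostnames and ports below are illustrative assumptions based on the bigdata25 host used throughout this article):

# zookeeper-client -server bigdata25:2181 ls /
# hbase shell        (then run "list" to enumerate tables)
# oozie admin -oozie http://bigdata25:11000/oozie -status

Each command should succeed with a ticket in the cache; for Hue, log in to the web UI and run a simple file-browser or query action.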