
HDP Offline Installation: 3. Installing HDP Services

October 11, 2017

3. Installing HDP and Its Services

Core services:

Hive

HDFS

Configuration changes:
java.io.tmpdir=/data/tmp/java_tmp/
Move the logs directory to /data/logs (the default is /var/log/).
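In an Ambari-managed stack these changes go into the hadoop-env template (the exact template field varies by stack version; the environment variable names below are standard Hadoop, and the paths are the ones chosen above). A sketch of the relevant snippet:

```shell
# hadoop-env sketch: relocate JVM temp files and service logs onto /data.
# HADOOP_OPTS and HADOOP_LOG_DIR are standard Hadoop env hooks; the
# /data/logs/hadoop/$USER layout is an assumption, adjust to taste.
export HADOOP_OPTS="${HADOOP_OPTS} -Djava.io.tmpdir=/data/tmp/java_tmp/"
export HADOOP_LOG_DIR=/data/logs/hadoop/$USER
```

Remember to create the target directories (and make them writable by the service users) before restarting the daemons.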

Problem encountered when starting HDFS:

ERROR datanode.DataNode (DataNode.java:secureMain(2630)) - Exception in secureMain
java.io.IOException: the path component: '/data/app' is group-writable, and the group is not root. Its permissions are 0775, and it is owned by gid 1001. Please fix this or select a different socket path.
at org.apache.hadoop.net.unix.DomainSocket.validateSocketPathSecurity0(Native Method)
at org.apache.hadoop.net.unix.DomainSocket.bindAndListen(DomainSocket.java:189)
at org.apache.hadoop.hdfs.net.DomainPeerServer.<init>(DomainPeerServer.java:40)
at org.apache.hadoop.hdfs.server.datanode.DataNode.getDomainPeerServer(DataNode.java:1048)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:1014)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1218)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:449)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2508)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2395)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2442)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2623)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2647)

Fix the permissions by changing the directory owner to root:
chown root:root /data/app

Error when starting HiveServer:

File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 192, in run_command
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w '%{http_code}' -X PUT --data-binary @/usr/hdp/2.5.3.0-37/hadoop/mapreduce.tar.gz -H 'Content-Type: application/octet-stream' 'http://hadoop1.test.hadoop:50070/webhdfs/v1/hdp/apps/2.5.3.0-37/mapreduce/mapreduce.tar.gz?op=CREATE&user.name=hdfs&overwrite=True&permission=444'' returned status_code=403.
{
  "RemoteException": {
    "exception": "IOException",
    "javaClassName": "java.io.IOException",
    "message": "Failed to find datanode, suggest to check cluster health."
  }
}

The message says to check the health of HDFS; inspection showed that HDFS had no live datanodes.
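A quick way to confirm this is to count live datanodes from the `hdfs dfsadmin -report` output. A sketch (the "Live datanodes (N):" line is the Hadoop 2.x report format; the `printf` sample stands in for real report output):

```shell
# Extract the live-datanode count from `hdfs dfsadmin -report` output.
live_datanodes() {
  sed -n 's/^Live datanodes (\([0-9][0-9]*\)).*/\1/p'
}

# On a real cluster: hdfs dfsadmin -report | live_datanodes
printf 'Configured Capacity: 0 (0 B)\nLive datanodes (0):\n' | live_datanodes
```

A count of 0 here matches the "Failed to find datanode" 403 above.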
The datanode log showed:

2017-03-26 15:47:45,464 ERROR datanode.DataNode (BPServiceActor.java:run(747)) - Initialization failed for Block pool BP-1249627527-192.168.112.47-1490509953596 (Datanode Uuid 0c839aa2-07b8-4390-9d7f-ccaf973c2cae) service to hadoop2.test.yunwei.puppet.dh/192.168.1.48:8020 Datanode denied communication with namenode because hostname cannot be resolved (ip=192.168.1.49, hostname=192.168.1.49): DatanodeRegistration(0.0.0.0:50010, datanodeUuid=0c839aa2-07b8-4390-9d7f-ccaf973c2cae, infoPort=50075, infoSecurePort=0, ipcPort=8010, storageInfo=lv=-56;cid=CID-5f1833f1-d0c5-470a-aa23-49c653bfe41e;nsid=1383829147;c=0)

For this problem, the Ambari documentation suggests setting a custom hostname; other reports suggest adding reverse DNS (PTR) records.
I added a reverse DNS record, restarted the resolver so it took effect, then restarted HDFS, and the datanode was recognized correctly.
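To verify the fix, check that forward and reverse lookups agree for each datanode host. A sketch using `getent` (which consults both `/etc/hosts` and DNS; the `localhost` demo call is just a self-contained example, substitute your datanode hostnames):

```shell
# Check that hostname -> IP -> hostname round-trips to the same name,
# which is what the namenode's registration check effectively requires.
check_rdns() {
  ip=$(getent hosts "$1" | awk '{print $1; exit}')
  [ -n "$ip" ] || { echo "no forward record for $1"; return 1; }
  name=$(getent hosts "$ip" | awk '{print $2; exit}')
  if [ "$name" = "$1" ]; then
    echo "ok: $1 <-> $ip"
  else
    echo "mismatch: $1 -> $ip -> ${name:-<none>}"
    return 1
  fi
}

check_rdns localhost
```

On this cluster you would run it for each datanode, e.g. `check_rdns hadoop2.test.yunwei.puppet.dh`.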
Reference:
https://ambari.apache.org/1.2.3/installing-hadoop-using-ambari/content/ambari-chap7a.html

