
Errors Encountered While Learning Hadoop, and How to Fix Them

Author: 阿丞
  • December 28, 2021

1.ERROR: Unable to write in /opt/module/hadoop-3.1.3/logs. Aborting.

Error


ERROR: Unable to write in /opt/module/hadoop-3.1.3/logs. Aborting.


Solution


# Grant write permission on the logs directory (recursively)
sudo chmod -R 777 /opt/module/hadoop-3.1.3/logs
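chmod -R 777 works, but it opens the directory to every user. A narrower sketch, assuming the daemons run as the lzc user seen in the shell prompts later in this post (and a group of the same name), is to hand ownership of the logs directory to that user instead:

sudo chown -R lzc:lzc /opt/module/hadoop-3.1.3/logs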

2.ERROR: JAVA_HOME is not set and could not be found.

Error


[lzc@hadoop02 sbin]$ start-dfs.sh
Starting namenodes on [hadoop02]
Starting datanodes
hadoop04: ERROR: JAVA_HOME is not set and could not be found.
hadoop03: ERROR: JAVA_HOME is not set and could not be found.
Starting secondary namenodes [hadoop04]
hadoop04: ERROR: JAVA_HOME is not set and could not be found.


Solution


# Add the JAVA_HOME path in hadoop-env.sh
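A minimal sketch of the line to add to etc/hadoop/hadoop-env.sh; the JDK path /opt/module/jdk1.8.0_212 is an assumption, so substitute wherever your JDK is actually installed, and remember to sync the change to every node:

# etc/hadoop/hadoop-env.sh
export JAVA_HOME=/opt/module/jdk1.8.0_212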

3.hdfs Couldn't preview the file.

Error


hdfs Couldn't preview the file.


Solution


# On Windows, edit the hosts file and add the IPs and hostnames of the Hadoop cluster nodes
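A sketch of the entries to add to C:\Windows\System32\drivers\etc\hosts; the 192.168.10.x addresses are placeholders, so use the cluster's real IPs and hostnames:

192.168.10.102 hadoop02
192.168.10.103 hadoop03
192.168.10.104 hadoop04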

4.Testing HDFS and YARN

Error


Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster
Please check whether your etc/hadoop/mapred-site.xml contains the below configuration:
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>


Solution


# As the message suggests, add the following to mapred-site.xml
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=/opt/module/hadoop-3.1.3</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=/opt/module/hadoop-3.1.3</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=/opt/module/hadoop-3.1.3</value>
</property>
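To verify the fix, one option is to re-run a small MapReduce job. A sketch using the examples jar shipped with Hadoop 3.1.3, run from /opt/module/hadoop-3.1.3 and assuming an /input directory already exists in HDFS:

# /output must not already exist in HDFS
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount /input /output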

5.How to Recover from a Crashed Cluster

# 1. Kill the processes
sbin/stop-dfs.sh
# 2. Delete the data and logs directories on every node of the cluster
rm -rf data/ logs/
# 3. Reformat the NameNode
hdfs namenode -format
# 4. Start the cluster again
sbin/start-dfs.sh
# Question: doesn't deleting data and logs mean that data is lost? (see the note below)
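On the question above: yes, reformatting throws away everything stored in HDFS, so this procedure only makes sense on a test cluster. To avoid missing a node when wiping, a small helper can run the cleanup everywhere; this is only a sketch, assuming the hadoop02-hadoop04 hosts and the /opt/module/hadoop-3.1.3 install directory used elsewhere in this post:

#!/bin/bash
# Wipe HDFS data and logs on every node -- destroys all HDFS data!
for host in hadoop02 hadoop03 hadoop04
do
  echo "---- cleaning $host ----"
  ssh "$host" "rm -rf /opt/module/hadoop-3.1.3/data /opt/module/hadoop-3.1.3/logs"
done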

6.Problems Configuring the History Server (the History Server Won't Start)

Error


[lzc@hadoop02 hadoop-3.1.3]$ bin/mapred mapred --daemon start historyserver
ERROR: mapred is not COMMAND nor fully qualified CLASSNAME.
Usage: mapred [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]
 or    mapred [OPTIONS] CLASSNAME [CLASSNAME OPTIONS]
  where CLASSNAME is a user-provided Java class

  OPTIONS is none or any of:
--config dir          Hadoop config directory
--debug               turn on shell script debug mode
--help                usage information

  SUBCOMMAND is one of:

    Admin Commands:
frameworkuploader     mapreduce framework upload
hsadmin               job history server admin interface

    Client Commands:
archive               create a Hadoop archive
archive-logs          combine aggregated logs into hadoop archives
classpath             prints the class path needed for running mapreduce subcommands
distcp                copy file or directories recursively
envvars               display computed Hadoop environment variables
job                   manipulate MapReduce jobs
minicluster           CLI MiniCluster
pipes                 run a Pipes job
queue                 get information regarding JobQueues
sampler               sampler
streaming             launch a mapreduce streaming job
version               print the version

    Daemon Commands:
historyserver         run job history servers as a standalone daemon

SUBCOMMAND may print help when invoked w/o parameters or with -h.


The command above passed mapred twice, which is why the CLI rejects mapred as a SUBCOMMAND; the correct form is bin/mapred --daemon start historyserver. Checking the hadoop/sbin directory, the only history-server script there is mr-jobhistory-daemon.sh, so try starting the history server with it.


sbin/mr-jobhistory-daemon.sh start historyserver

# What happens
[lzc@hadoop02 hadoop-3.1.3]$ sbin/mr-jobhistory-daemon.sh start historyserver
WARNING: Use of this script to start the MR JobHistory daemon is deprecated.
WARNING: Attempting to execute replacement "mapred --daemon start" instead.
[lzc@hadoop02 hadoop-3.1.3]$ jps
19009 DataNode
2546 Jps
1174 NodeManager
18878 NameNode


The history server still did not come up, so check the configuration in mapred-site.xml:


<configuration>
  <!-- Run MapReduce programs on YARN -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>

  <property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=/opt/module/hadoop-3.1.3</value>
  </property>
  <property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=/opt/module/hadoop-3.1.3</value>
  </property>
  <property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=/opt/module/hadoop-3.1.3</value>
  </property>

  <!-- History server address -->
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop102:10020</value>
  </property>

  <!-- History server web UI address -->
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop102:19888</value>
  </property>
</configuration>


The JobHistoryServer log shows why it fails to start: the web app cannot bind because the configured address does not resolve.

2021-12-17 11:24:40,611 ERROR org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
2021-12-17 11:24:40,611 ERROR org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer: Error starting JobHistoryServer
org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
  at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:443)
  at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:428)
  at org.apache.hadoop.mapreduce.v2.hs.HistoryClientService.initializeWebApp(HistoryClientService.java:166)
  at org.apache.hadoop.mapreduce.v2.hs.HistoryClientService.serviceStart(HistoryClientService.java:122)
  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
  at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
  at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.serviceStart(JobHistoryServer.java:200)
  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
  at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.launchJobHistoryServer(JobHistoryServer.java:227)
  at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.main(JobHistoryServer.java:236)
Caused by: java.net.SocketException: Unresolved address
  at sun.nio.ch.Net.translateToSocketException(Net.java:131)
  at sun.nio.ch.Net.translateException(Net.java:157)
  at sun.nio.ch.Net.translateException(Net.java:163)
  at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:76)
  at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:351)
  at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:319)
  at org.apache.hadoop.http.HttpServer2.bindListener(HttpServer2.java:1200)
  at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1231)
  at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1294)
  at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1149)
  at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:439)
  ... 9 more
Caused by: java.nio.channels.UnresolvedAddressException
  at sun.nio.ch.Net.checkAddress(Net.java:101)
  at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:218)
  at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
  ... 16 more


The problem is the hostname: this cluster's nodes are hadoop02-hadoop04, but mapreduce.jobhistory.address and mapreduce.jobhistory.webapp.address were set to hadoop102, which does not resolve. The corrected mapred-site.xml history-server settings:


<!-- History server address -->
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>hadoop02:10020</value>
</property>

<!-- History server web UI address -->
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>hadoop02:19888</value>
</property>

<!-- Directory for records of completed Hadoop jobs -->
<property>
  <name>mapreduce.jobhistory.done-dir</name>
  <value>${yarn.app.mapreduce.am.staging-dir}/done</value>
</property>

<property>
  <name>mapred.job.tracker</name>
  <value>http://hadoop02:9001</value>
</property>
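After distributing the corrected mapred-site.xml to the cluster, start the history server again, this time with a single mapred in the command, and confirm it with jps:

bin/mapred --daemon start historyserver
jps    # a JobHistoryServer process should now appear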


Finally, the start/stop script for the whole cluster (HDFS, YARN, and the history server):

#!/bin/bash
if [ $# -lt 1 ]
then
  echo "No Args Input..."
  exit ;
fi

case $1 in
"start")
  echo " =================== starting the hadoop cluster ==================="
  echo " --------------- starting hdfs ---------------"
  ssh hadoop02 "/opt/module/hadoop-3.1.3/sbin/start-dfs.sh"
  echo " --------------- starting yarn ---------------"
  ssh hadoop03 "/opt/module/hadoop-3.1.3/sbin/start-yarn.sh"
  echo " --------------- starting historyserver ---------------"
  # mapreduce.jobhistory.address points at hadoop02, so start (and stop) the history server there
  ssh hadoop02 "/opt/module/hadoop-3.1.3/bin/mapred --daemon start historyserver"
;;
"stop")
  echo " =================== stopping the hadoop cluster ==================="
  echo " --------------- stopping historyserver ---------------"
  ssh hadoop02 "/opt/module/hadoop-3.1.3/bin/mapred --daemon stop historyserver"
  echo " --------------- stopping yarn ---------------"
  ssh hadoop03 "/opt/module/hadoop-3.1.3/sbin/stop-yarn.sh"
  echo " --------------- stopping hdfs ---------------"
  ssh hadoop02 "/opt/module/hadoop-3.1.3/sbin/stop-dfs.sh"
;;
*)
  echo "Input Args Error..."
;;
esac
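The post does not name the script, so as a hypothetical example call it myhadoop.sh; it would then be used like this:

chmod +x myhadoop.sh
./myhadoop.sh start   # bring up hdfs, yarn and the history server
./myhadoop.sh stop    # shut everything down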

