
Solving exceptions when running WordCount on Hadoop


I have recently been learning Hadoop, and I ran into quite a few problems running MapReduce programs from a Windows + Eclipse setup against a Hadoop cluster running in virtual machines. After some searching online and my own analysis, I solved them all. I am sharing the fixes here as a reference for anyone who hits the same issues.

My Hadoop cluster environment:

Four virtual machines: 192.168.137.111 (master), 192.168.137.112 (slave1), 192.168.137.113 (slave2), 192.168.137.114 (slave3)

Hadoop cluster user: hadoop

Hadoop version: hadoop-1.1.2

Development environment: Windows 7 + Eclipse + Hadoop plugin

Exception 1:

14/10/18 08:23:47 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/10/18 08:23:47 ERROR security.UserGroupInformation: PriviledgedActionException as:guilin cause:java.io.IOException: Failed to set permissions of path: \tmp\hadoop-guilin\mapred\staging\guilin1651756173\.staging to 0700
Exception in thread "main" java.io.IOException: Failed to set permissions of path: \tmp\hadoop-guilin\mapred\staging\guilin1651756173\.staging to 0700
 at org.apache.hadoop.fs.FileUtil.checkReturnValue(FileUtil.java:689)
 at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:662)
 at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:509)
 at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:344)
 at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:189)
 at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:116)
 at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:918)
 at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:1)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
 at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
 at org.apache.hadoop.mapreduce.Job.submit(Job.java:500)
 at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
 at com.guilin.hadoop.mapreduce.WordCount.main(WordCount.java:75)

Cause: the WordCount program was connecting to the local Hadoop on Windows. You need to add conf.set("mapred.job.tracker", "master:9001") so that it connects to the cluster's JobTracker.
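For reference, a minimal sketch of the client-side settings that make the job submit to the cluster rather than the local job runner (Hadoop 1.x API; the fs.default.name value is an assumption inferred from the hdfs://master:9000 paths used in the full listing later, and is optional when every path carries a full hdfs:// URI):

```java
// Inside main(), before constructing the Job (Hadoop 1.x API):
Configuration conf = new Configuration();
// Submit to the cluster JobTracker instead of the local job runner.
conf.set("mapred.job.tracker", "master:9001");
// Assumed default filesystem; optional if every path uses a full hdfs:// URI.
conf.set("fs.default.name", "hdfs://master:9000");
```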

Exception 2:

14/10/18 08:37:14 ERROR security.UserGroupInformation: PriviledgedActionException as:guilin cause:org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=guilin, access=EXECUTE, inode="hadoop":hadoop:supergroup:rwx------
Exception in thread "main" org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=guilin, access=EXECUTE, inode="hadoop":hadoop:supergroup:rwx------
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
 at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
 at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
 at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
 at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1030)
 at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:524)
 at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:768)
 at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:103)
 at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:918)
 at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:1)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
 at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
 at org.apache.hadoop.mapreduce.Job.submit(Job.java:500)
 at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
 at com.guilin.hadoop.mapreduce.WordCount.main(WordCount.java:75)
Caused by: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.security.AccessControlException: Permission denied: user=guilin, access=EXECUTE, inode="hadoop":hadoop:supergroup:rwx------
 at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:199)
 at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:155)
 at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:125)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5468)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkTraverse(FSNamesystem.java:5447)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:2168)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.getFileInfo(NameNode.java:888)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)

 at org.apache.hadoop.ipc.Client.call(Client.java:1107)
 at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:230)
 at com.sun.proxy.$Proxy2.getFileInfo(Unknown Source)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
 at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
 at com.sun.proxy.$Proxy2.getFileInfo(Unknown Source)
 at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1028)
 … 12 more

Cause: the WordCount program logs in to the Hadoop cluster using the Windows 7 account. My Windows account name is guilin, while the cluster account is hadoop, and the cluster's hadoop directory permissions grant read, write, and execute only to the hadoop user.

Fix: either (1) rename the Windows administrator account to hadoop, matching the cluster account name; or (2) create an account on the cluster with the same name as the Windows administrator account and grant it read, write, and execute permission on the hadoop directory. The first option is recommended.
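A third workaround that is often used on development clusters (not one of the fixes above, and not advisable for anything shared or production-facing) is to disable HDFS permission checking entirely in hdfs-site.xml on the NameNode:

```xml
<!-- hdfs-site.xml on the NameNode; development/test clusters only -->
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
```

This requires a NameNode restart to take effect, and it removes permission enforcement for all users, which is why renaming the account is the safer route.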

Exception 3:

14/10/18 09:57:19 WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
14/10/18 09:57:19 INFO input.FileInputFormat: Total input paths to process : 5
14/10/18 09:57:19 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/10/18 09:57:19 WARN snappy.LoadSnappy: Snappy native library not loaded
14/10/18 09:57:20 INFO mapred.JobClient: Running job: job_201410181754_0001
14/10/18 09:57:21 INFO mapred.JobClient:  map 0% reduce 0%
14/10/18 09:57:29 INFO mapred.JobClient: Task Id : attempt_201410181754_0001_m_000004_0, Status : FAILED
java.lang.RuntimeException: java.lang.ClassNotFoundException: com.guilin.hadoop.mapreduce.WordCount$TokenizerMapper
 at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:849)
 at org.apache.hadoop.mapreduce.JobContext.getMapperClass(JobContext.java:199)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:719)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
 at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
 at org.apache.hadoop.mapred.Child.main(Child.java:249)

Cause: running a MapReduce program on the Hadoop cluster requires the job's jar file.

Fix: add conf.set("mapred.jar", "hadoop-test.jar");

Package the project as a jar file named hadoop-test.jar and place it in the project root directory.

Complete WordCount code:

package com.guilin.hadoop.mapreduce;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

 public static class TokenizerMapper extends
   Mapper<Object, Text, Text, IntWritable> {
  private static final IntWritable one = new IntWritable(1);
  private Text word = new Text();

  public void map(Object key, Text value,
    Mapper<Object, Text, Text, IntWritable>.Context context)
    throws IOException, InterruptedException {
   StringTokenizer itr = new StringTokenizer(value.toString());
   while (itr.hasMoreTokens()) {
    this.word.set(itr.nextToken());
    context.write(this.word, one);
   }
  }
 }

 public static class IntSumReducer extends
   Reducer<Text, IntWritable, Text, IntWritable> {
  private IntWritable result = new IntWritable();

  public void reduce(Text key, Iterable<IntWritable> values,
    Reducer<Text, IntWritable, Text, IntWritable>.Context context)
    throws IOException, InterruptedException {
   int sum = 0;
   for (IntWritable val : values) {
    sum += val.get();
   }
   this.result.set(sum);
   context.write(key, this.result);
  }
 }

 public static void main(String[] args) throws IOException,
   ClassNotFoundException, InterruptedException {
  Configuration conf = new Configuration(); 
  conf.set("mapred.job.tracker", "master:9001");
  conf.set("mapred.jar", "hadoop-test.jar");
  String[] ars = new String[] {"hdfs://master:9000/usr/hadoop/input",
    "hdfs://master:9000/usr/hadoop/newout1" };
  String[] otherArgs = new GenericOptionsParser(conf, ars)
    .getRemainingArgs();
  if (otherArgs.length != 2) {
   System.err.println("Usage: wordcount <in> <out>");
   System.exit(2);
  }
  Job job = new Job(conf, "wordcount");
  job.setJarByClass(WordCount.class);
  job.setMapperClass(WordCount.TokenizerMapper.class);
  job.setCombinerClass(WordCount.IntSumReducer.class);
  job.setReducerClass(WordCount.IntSumReducer.class);
  job.setOutputKeyClass(Text.class);
  job.setOutputValueClass(IntWritable.class);
  FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
  FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
  System.exit(job.waitForCompletion(true) ? 0 : 1);
 }

}
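The map-side tokenization and reduce-side summing above can be sanity-checked locally, without a cluster. A minimal plain-JDK sketch of the same counting logic (a hypothetical helper class for illustration, not part of the job):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringTokenizer;

public class LocalWordCount {
    // Mirrors TokenizerMapper + IntSumReducer: split on whitespace, sum per token.
    static Map<String, Integer> countWords(String text) {
        Map<String, Integer> counts = new LinkedHashMap<String, Integer>();
        StringTokenizer itr = new StringTokenizer(text);
        while (itr.hasMoreTokens()) {
            String word = itr.nextToken();
            Integer c = counts.get(word);
            counts.put(word, c == null ? 1 : c + 1);
        }
        return counts;
    }

    public static void main(String[] args) {
        // Prints {hello=2, hadoop=1, world=1}
        System.out.println(countWords("hello hadoop hello world"));
    }
}
```

This is handy for verifying mapper/reducer logic before paying the cost of a cluster round trip.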

Finally, the job runs successfully:

14/10/18 10:12:27 INFO input.FileInputFormat: Total input paths to process : 2
14/10/18 10:12:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/10/18 10:12:27 WARN snappy.LoadSnappy: Snappy native library not loaded
14/10/18 10:12:27 INFO mapred.JobClient: Running job: job_201410181754_0004
14/10/18 10:12:28 INFO mapred.JobClient:  map 0% reduce 0%
14/10/18 10:12:32 INFO mapred.JobClient:  map 100% reduce 0%
14/10/18 10:12:39 INFO mapred.JobClient:  map 100% reduce 33%
14/10/18 10:12:40 INFO mapred.JobClient:  map 100% reduce 100%
14/10/18 10:12:40 INFO mapred.JobClient: Job complete: job_201410181754_0004
14/10/18 10:12:40 INFO mapred.JobClient: Counters: 29
14/10/18 10:12:40 INFO mapred.JobClient:  Job Counters
14/10/18 10:12:40 INFO mapred.JobClient:    Launched reduce tasks=1
14/10/18 10:12:40 INFO mapred.JobClient:    SLOTS_MILLIS_MAPS=4614
14/10/18 10:12:40 INFO mapred.JobClient:    Total time spent by all reduces waiting after reserving slots (ms)=0
14/10/18 10:12:40 INFO mapred.JobClient:    Total time spent by all maps waiting after reserving slots (ms)=0
14/10/18 10:12:40 INFO mapred.JobClient:    Launched map tasks=2
14/10/18 10:12:40 INFO mapred.JobClient:    Data-local map tasks=2
14/10/18 10:12:40 INFO mapred.JobClient:    SLOTS_MILLIS_REDUCES=8329
14/10/18 10:12:40 INFO mapred.JobClient:  File Output Format Counters
14/10/18 10:12:40 INFO mapred.JobClient:    Bytes Written=31
14/10/18 10:12:40 INFO mapred.JobClient:  FileSystemCounters
14/10/18 10:12:40 INFO mapred.JobClient:    FILE_BYTES_READ=75
14/10/18 10:12:40 INFO mapred.JobClient:    HDFS_BYTES_READ=264
14/10/18 10:12:40 INFO mapred.JobClient:    FILE_BYTES_WRITTEN=154204
14/10/18 10:12:40 INFO mapred.JobClient:    HDFS_BYTES_WRITTEN=31
14/10/18 10:12:40 INFO mapred.JobClient:  File Input Format Counters
14/10/18 10:12:40 INFO mapred.JobClient:    Bytes Read=44
14/10/18 10:12:40 INFO mapred.JobClient:  Map-Reduce Framework
14/10/18 10:12:40 INFO mapred.JobClient:    Map output materialized bytes=81
14/10/18 10:12:40 INFO mapred.JobClient:    Map input records=2
14/10/18 10:12:40 INFO mapred.JobClient:    Reduce shuffle bytes=81
14/10/18 10:12:40 INFO mapred.JobClient:    Spilled Records=12
14/10/18 10:12:40 INFO mapred.JobClient:    Map output bytes=78
14/10/18 10:12:40 INFO mapred.JobClient:    CPU time spent (ms)=1090
14/10/18 10:12:40 INFO mapred.JobClient:    Total committed heap usage (bytes)=241246208
14/10/18 10:12:40 INFO mapred.JobClient:    Combine input records=8
14/10/18 10:12:40 INFO mapred.JobClient:    SPLIT_RAW_BYTES=220
14/10/18 10:12:40 INFO mapred.JobClient:    Reduce input records=6
14/10/18 10:12:40 INFO mapred.JobClient:    Reduce input groups=4
14/10/18 10:12:40 INFO mapred.JobClient:    Combine output records=6
14/10/18 10:12:40 INFO mapred.JobClient:    Physical memory (bytes) snapshot=311574528
14/10/18 10:12:40 INFO mapred.JobClient:    Reduce output records=4
14/10/18 10:12:40 INFO mapred.JobClient:    Virtual memory (bytes) snapshot=1034760192
14/10/18 10:12:40 INFO mapred.JobClient:    Map output records=8


Published by 星锅 on 2022-01-20. Unless otherwise noted, articles on this site are released under a CC 4.0 license; please credit the source when reposting.