Debugging Your First Hadoop MapReduce Program: A Detailed Record


For setting up the development environment, see:
    <Setting Up a Hadoop Development Environment with Eclipse on Windows 7>: http://www.linuxidc.com/Linux/2014-12/111061.htm

1. The program code is as follows:

package wc;

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class W2 {

    // Mapper: split each line of input into tokens and emit (word, 1) pairs.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer (also used as the combiner): sum the counts emitted for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Point Hadoop at its Windows install directory; see runtime errors (4) and (10) below.
        System.setProperty("hadoop.home.dir", "E:/hadoop/hadoop-2.3.0");
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }

        Job job = new Job(conf, "word count");
        job.setJarByClass(W2.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

2. Running the program:

In Eclipse, right-click in the W2.java editor area and choose Run on Hadoop to run the program.
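Note that the program expects exactly two arguments, the input and output paths; when running from Eclipse these are supplied under Run Configurations > Arguments. Judging from the paths that appear in the logs later in this post, the argument line would look something like:

    hdfs://192.168.52.128:9000/data/input hdfs://192.168.52.128:9000/data/output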

3. Runtime error (1):

Exception in thread "main" java.lang.NoClassDefFoundError: com/google/common/base/Preconditions

    at org.apache.hadoop.conf.Configuration$DeprecationDelta.<init>(Configuration.java:314)

    at org.apache.hadoop.conf.Configuration$DeprecationDelta.<init>(Configuration.java:327)

    at org.apache.hadoop.conf.Configuration.<clinit>(Configuration.java:409)

    at wc.WordCount.main(WordCount.java:82)

Caused by: java.lang.ClassNotFoundException: com.google.common.base.Preconditions

    at java.net.URLClassLoader$1.run(Unknown Source)

    at java.net.URLClassLoader$1.run(Unknown Source)

    at java.security.AccessController.doPrivileged(Native Method)

    at java.net.URLClassLoader.findClass(Unknown Source)

    at java.lang.ClassLoader.loadClass(Unknown Source)

    at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)

    at java.lang.ClassLoader.loadClass(Unknown Source)

    … 4 more

 

The guava-r07.jar package is missing from the classpath.
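A quick way to confirm which class (and therefore which jar) is missing before the job even starts is a one-line probe at the top of main(); the same check, with the class name swapped, applies to the other NoClassDefFoundError cases below:

    // Throws ClassNotFoundException up front if guava is not on the classpath,
    // instead of failing inside Hadoop's Configuration static initializer.
    Class.forName("com.google.common.base.Preconditions");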

 

4. Runtime error (2):

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/util/PlatformName

The hadoop-auth-2.2.0.jar package is missing; a copy can be found at ./eclipse/configuration/org.eclipse.osgi/bundles/230/1/.cp/lib/hadoop-auth-2.2.0.jar

 

5. Runtime error (3):

Exception in thread "main" java.lang.NoClassDefFoundError: org/slf4j/LoggerFactory

Two jars are missing:

/usr/local/eclipse/configuration/org.eclipse.osgi/bundles/230/1/.cp/lib/slf4j-api-1.7.5.jar

/usr/local/eclipse/configuration/org.eclipse.osgi/bundles/230/1/.cp/lib/slf4j-log4j12-1.7.5.jar

 

6. Runtime error (4):

Running the Hadoop job from Eclipse reports:

2014-12-11 20:12:01,750 INFO  [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(996)) – fs.default.name is deprecated. Instead, use fs.defaultFS
SLF4J: This version of SLF4J requires log4j version 1.2.12 or later. See also http://www.slf4j.org/codes.html#log4j_version
2014-12-11 20:12:02,760 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) – Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
2014-12-11 20:12:02,812 ERROR [main] util.Shell (Shell.java:getWinUtilsPath(336)) – Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.

The fix:

Add System.setProperty("hadoop.home.dir", "d:/hadoop"); to the code, then check whether winutils.exe exists under the bin directory of the Windows Hadoop installation; if it does not, download a copy and put it there.
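A minimal fail-fast sketch of that check (the d:/hadoop path is only an example; adjust it to your install):

    // Set hadoop.home.dir before any Hadoop class loads, then verify winutils.exe exists.
    System.setProperty("hadoop.home.dir", "d:/hadoop");
    java.io.File winutils = new java.io.File("d:/hadoop", "bin/winutils.exe");
    if (!winutils.isFile()) {
        throw new IllegalStateException("winutils.exe not found at " + winutils.getAbsolutePath());
    }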

7. Runtime error (5):

The error:

Exception in thread "main" java.lang.NoClassDefFoundError: com/google/protobuf/ServiceException

    at org.apache.hadoop.ipc.ProtobufRpcEngine.<clinit>(ProtobufRpcEngine.java:69)

    at java.lang.Class.forName0(Native Method)

The protobuf jar was missing; adding a copy found at /usr/local/app/apache-tomcat-6.0.37_9090/webapps/solr/WEB-INF/lib/protobuf-java-2.4.0a.jar then led to:

Exception in thread "main" java.lang.VerifyError: class org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$AppendRequestProto overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;

The jar must be replaced with protobuf-java-2.5.0.jar; Hadoop 2.x is built against protobuf 2.5.0, so the 2.4.0a jar triggers the VerifyError above.

8. Runtime error (6):

Caused by: java.lang.ClassNotFoundException: com.google.common.cache.CacheBuilder

    at java.net.URLClassLoader$1.run(Unknown Source)

    at java.net.URLClassLoader$1.run(Unknown Source)

    at java.security.AccessController.doPrivileged(Native Method)

    at java.net.URLClassLoader.findClass(Unknown Source)

    at java.lang.ClassLoader.loadClass(Unknown Source)

    at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)

    at java.lang.ClassLoader.loadClass(Unknown Source)

    … 12 more

The guava-11.0.2.jar package is missing.

9. Runtime error (7):

Exception in thread "main" org.apache.hadoop.security.AccessControlException: Permission denied: user=Administrator, access=EXECUTE, inode="/tmp":hadoop:supergroup:drwx------

    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:234)

    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:187)

    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:150)

    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5433)

    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5415)

    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:5371)

    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1462)

    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1443)

    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:536)

    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:368)

    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)

    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)

    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)

    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)

    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)

    at java.security.AccessController.doPrivileged(Native Method)

    at javax.security.auth.Subject.doAs(Subject.java:415)

    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)

    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)
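This has the same root cause as the next error: the job is submitted as the Windows user Administrator, while /tmp on HDFS is owned by hadoop with mode drwx------. On a cluster using SIMPLE authentication, as here, one common workaround (different from the account rename used for the next error) is to impersonate the HDFS owner before any file system access. A minimal sketch, assuming the HDFS user is named hadoop:

    // Assumption: insecure cluster with SIMPLE auth, where the client-supplied user
    // name is trusted. Must run before the first FileSystem or Job call.
    System.setProperty("HADOOP_USER_NAME", "hadoop");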

10. Runtime error (8):

The error is as follows:

2014-12-16 10:16:09,632 INFO  [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(996)) – fs.default.name is deprecated. Instead, use fs.defaultFS

2014-12-16 10:16:11,597 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) – Unable to load native-hadoop library for your platform… using builtin-java classes where applicable

Job start!

2014-12-16 10:16:28,819 INFO  [main] client.RMProxy (RMProxy.java:createRMProxy(92)) – Connecting to ResourceManager at /192.168.52.128:8032

2014-12-16 10:16:29,714 WARN  [main] security.UserGroupInformation (UserGroupInformation.java:doAs(1551)) – PriviledgedActionException as:Administrator (auth:SIMPLE) cause:java.io.IOException: The ownership on the staging directory /tmp/hadoop-yarn/staging/Administrator/.staging is not as expected. It is owned by hadoop. The directory must be owned by the submitter Administrator or by Administrator

Exception in thread "main" java.io.IOException: The ownership on the staging directory /tmp/hadoop-yarn/staging/Administrator/.staging is not as expected. It is owned by hadoop. The directory must be owned by the submitter Administrator or by Administrator

    at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:112)

    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:348)

    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)

    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)

    at java.security.AccessController.doPrivileged(Native Method)

    at javax.security.auth.Subject.doAs(Unknown Source)

    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)

    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)

    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)

    at wc.WordCount.main(WordCount.java:147)

Solution:

In Windows Computer Management, select "Local Users and Groups", expand "Users", find the administrator account "Administrator", and rename it to "hadoop". The result is shown below:

[Screenshot: renaming the Administrator account to hadoop]

Finally, log off or restart Windows so that the renamed account takes effect. After running again, everything behaved normally and the program could connect to the Hadoop service on Linux; the console showed:

2014-12-16 11:01:07,009 INFO  [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(996)) – fs.default.name is deprecated. Instead, use fs.defaultFS

2014-12-16 11:01:12,938 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) – Unable to load native-hadoop library for your platform… using builtin-java classes where applicable

Job start!

2014-12-16 11:01:39,646 INFO  [main] client.RMProxy (RMProxy.java:createRMProxy(92)) – Connecting to ResourceManager at /192.168.52.128:8032

2014-12-16 11:01:49,297 INFO  [main] mapreduce.JobSubmissionFiles (JobSubmissionFiles.java:getStagingDir(119)) – Permissions on staging directory /tmp/hadoop-yarn/staging/hadoop/.staging are incorrect: rwxrwxrwx. Fixing permissions to correct value rwx------

2014-12-16 11:01:56,366 WARN  [main] mapreduce.JobSubmitter (JobSubmitter.java:copyAndConfigureFiles(150)) – Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.

2014-12-16 11:02:14,657 INFO  [main] input.FileInputFormat (FileInputFormat.java:listStatus(287)) – Total input paths to process : 1

2014-12-16 11:02:15,781 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(396)) – number of splits:1

2014-12-16 11:02:16,057 INFO  [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(996)) – fs.default.name is deprecated. Instead, use fs.defaultFS

2014-12-16 11:02:16,711 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:printTokens(479)) – Submitting tokens for job: job_1418698686855_0001

2014-12-16 11:02:20,493 INFO  [main] impl.YarnClientImpl (YarnClientImpl.java:submitApplication(166)) – Submitted application application_1418698686855_0001

2014-12-16 11:02:21,353 INFO  [main] mapreduce.Job (Job.java:submit(1289)) – The url to track the job: http://name01:8088/proxy/application_1418698686855_0001/

2014-12-16 11:02:21,393 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1334)) – Running job: job_1418698686855_0001

2014-12-16 11:02:45,306 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1355)) – Job job_1418698686855_0001 running in uber mode : false

2014-12-16 11:02:45,392 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1362)) –  map 0% reduce 0%

2014-12-16 11:02:45,543 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1375)) – Job job_1418698686855_0001 failed with state FAILED due to: Application application_1418698686855_0001 failed 2 times due to AM Container for appattempt_1418698686855_0001_000002 exited with  exitCode: 1 due to: Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException: /bin/bash: line 0: fg: no job control

 

org.apache.hadoop.util.Shell$ExitCodeException: /bin/bash: line 0: fg: no job control

 

    at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)

    at org.apache.hadoop.util.Shell.run(Shell.java:418)

    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)

    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)

    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)

    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)

    at java.util.concurrent.FutureTask.run(FutureTask.java:262)

    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

    at java.lang.Thread.run(Thread.java:745)

 

 

Container exited with a non-zero exit code 1

.Failing this attempt.. Failing the application.

2014-12-16 11:02:45,955 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1380)) – Counters: 0

error!
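The "/bin/bash: line 0: fg: no job control" failure is a known symptom of submitting a job from a Windows client to a Linux YARN cluster: the client generates Windows-style container launch commands that the Linux NodeManager cannot execute. Newer Hadoop releases (2.4.1 and later; an assumption to verify against your version, since the 2.2/2.3 jars used here predate it) let the client emit cross-platform launch commands:

    // Hedged sketch: this property only exists in newer Hadoop releases.
    conf.set("mapreduce.app-submission.cross-platform", "true");

On the versions used in this walkthrough the problem is sidestepped instead: the runs in the following sections go through the LocalJobRunner (note the job_local... job IDs) rather than YARN.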

11. Runtime error (9):

2014-12-16 15:31:45,980 INFO  [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(996)) – session.id is deprecated. Instead, use dfs.metrics.session-id

2014-12-16 15:31:45,986 INFO  [main] jvm.JvmMetrics (JvmMetrics.java:init(76)) – Initializing JVM Metrics with processName=JobTracker, sessionId=

2014-12-16 15:31:46,213 WARN  [main] security.UserGroupInformation (UserGroupInformation.java:doAs(1551)) – PriviledgedActionException as:hadoop (auth:SIMPLE) cause:org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://192.168.52.128:9000/data/output already exists

Exception in thread "main" org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://192.168.52.128:9000/data/output already exists

    at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:146)

    at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:458)

The fix: delete the existing /data/output directory.
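Deleting it by hand works once, but a short guard in main() before job submission avoids the error on every rerun. A sketch (it needs an extra import, org.apache.hadoop.fs.FileSystem):

    // Recursively remove a stale output directory so FileOutputFormat's
    // existence check passes on reruns.
    FileSystem fs = FileSystem.get(conf);
    Path outputPath = new Path(otherArgs[1]);
    if (fs.exists(outputPath)) {
        fs.delete(outputPath, true);
    }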

12. Runtime error (10):

Could not locate executable null\bin\winutils.exe in the Hadoop binaries

An old chestnut by now: the HADOOP_HOME variable is not set. Either set HADOOP_HOME as a system environment variable, or add a single line of code specifying the path directly:

        System.setProperty("hadoop.home.dir", "E:/hadoop/hadoop-2.3.0");

13. Runtime error (11):

2014-12-16 14:28:58,589 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) – Unable to load native-hadoop library for your platform… using builtin-java classes where applicable

2014-12-16 14:29:08,664 INFO  [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(996)) – session.id is deprecated. Instead, use dfs.metrics.session-id

2014-12-16 14:29:08,665 INFO  [main] jvm.JvmMetrics (JvmMetrics.java:init(76)) – Initializing JVM Metrics with processName=JobTracker, sessionId=

2014-12-16 14:29:10,026 INFO  [main] input.FileInputFormat (FileInputFormat.java:listStatus(287)) – Total input paths to process : 1

2014-12-16 14:29:11,164 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(396)) – number of splits:1

2014-12-16 14:29:11,761 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:printTokens(479)) – Submitting tokens for job: job_local1985238633_0001

2014-12-16 14:29:11,810 WARN  [main] conf.Configuration (Configuration.java:loadProperty(2345)) – file:/tmp/hadoop-hadoop/mapred/staging/hadoop1985238633/.staging/job_local1985238633_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.

2014-12-16 14:29:11,811 WARN  [main] conf.Configuration (Configuration.java:loadProperty(2345)) – file:/tmp/hadoop-hadoop/mapred/staging/hadoop1985238633/.staging/job_local1985238633_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.

2014-12-16 14:29:11,916 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(441)) – Cleaning up the staging area file:/tmp/hadoop-hadoop/mapred/staging/hadoop1985238633/.staging/job_local1985238633_0001

Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z

    at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)

    at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:560)

    at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:977)

    at org.apache.hadoop.util.DiskChecker.checkAccessByFileMethods(DiskChecker.java:177)

    at org.apache.hadoop.util.DiskChecker.checkDirAccess(DiskChecker.java:164)

    at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:98)

    at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:285)

    at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:344)

    at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)

    at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)

    at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:115)

    at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:131)

    at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:163)

    at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:731)

    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)

    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)

    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)

    at java.security.AccessController.doPrivileged(Native Method)

    at javax.security.auth.Subject.doAs(Unknown Source)

    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)

    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)

    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)

    at wc.W2.main(W2.java:111)

hadoop.dll is missing: download hadoop.dll and put it in the hadoop/bin directory. After that the run still failed, because Hadoop's runtime path on Windows also has to be set by hand: in Eclipse, right-click the WordCount.java being run, choose Run Configurations from the context menu, and add the PATH setting there, after which the run went through. The parameters are shown in the screenshot below:

[Screenshot: Run Configurations with the PATH variable added]
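An alternative to editing PATH in Run Configurations is to point the JVM at the native libraries directly with a VM argument (an equivalent approach, not the one used above):

    -Djava.library.path=E:/hadoop/hadoop-2.3.0/bin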

After that, debugging passed and the run produced the following output:

2014-12-16 15:34:01,303 INFO  [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(996)) – session.id is deprecated. Instead, use dfs.metrics.session-id

2014-12-16 15:34:01,309 INFO  [main] jvm.JvmMetrics (JvmMetrics.java:init(76)) – Initializing JVM Metrics with processName=JobTracker, sessionId=

2014-12-16 15:34:02,047 INFO  [main] input.FileInputFormat (FileInputFormat.java:listStatus(287)) – Total input paths to process : 1

2014-12-16 15:34:02,120 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(396)) – number of splits:1

2014-12-16 15:34:02,323 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:printTokens(479)) – Submitting tokens for job: job_local1764589720_0001

2014-12-16 15:34:02,367 WARN  [main] conf.Configuration (Configuration.java:loadProperty(2345)) – file:/tmp/hadoop-hadoop/mapred/staging/hadoop1764589720/.staging/job_local1764589720_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.

2014-12-16 15:34:02,368 WARN  [main] conf.Configuration (Configuration.java:loadProperty(2345)) – file:/tmp/hadoop-hadoop/mapred/staging/hadoop1764589720/.staging/job_local1764589720_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.

2014-12-16 15:34:02,682 WARN  [main] conf.Configuration (Configuration.java:loadProperty(2345)) – file:/tmp/hadoop-hadoop/mapred/local/localRunner/hadoop/job_local1764589720_0001/job_local1764589720_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.

2014-12-16 15:34:02,682 WARN  [main] conf.Configuration (Configuration.java:loadProperty(2345)) – file:/tmp/hadoop-hadoop/mapred/local/localRunner/hadoop/job_local1764589720_0001/job_local1764589720_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.

2014-12-16 15:34:02,703 INFO  [main] mapreduce.Job (Job.java:submit(1289)) – The url to track the job: http://localhost:8080/

2014-12-16 15:34:02,704 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1334)) – Running job: job_local1764589720_0001

2014-12-16 15:34:02,707 INFO  [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(471)) – OutputCommitter set in config null

2014-12-16 15:34:02,719 INFO  [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(489)) – OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter

2014-12-16 15:34:02,853 INFO  [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(448)) – Waiting for map tasks

2014-12-16 15:34:02,857 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(224)) – Starting task: attempt_local1764589720_0001_m_000000_0

2014-12-16 15:34:02,919 INFO  [LocalJobRunner Map Task Executor #0] util.ProcfsBasedProcessTree (ProcfsBasedProcessTree.java:isAvailable(129)) – ProcfsBasedProcessTree currently is supported only on Linux.

2014-12-16 15:34:03,281 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:initialize(581)) –  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@2e1022ec

2014-12-16 15:34:03,287 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:runNewMapper(733)) – Processing split: hdfs://192.168.52.128:9000/data/input/README.txt:0+1366

2014-12-16 15:34:03,304 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:createSortingCollector(388)) – Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer

2014-12-16 15:34:03,340 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:setEquator(1181)) – (EQUATOR) 0 kvi 26214396(104857584)

2014-12-16 15:34:03,341 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(975)) – mapreduce.task.io.sort.mb: 100

2014-12-16 15:34:03,341 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(976)) – soft limit at 83886080

2014-12-16 15:34:03,341 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(977)) – bufstart = 0; bufvoid = 104857600

2014-12-16 15:34:03,341 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(978)) – kvstart = 26214396; length = 6553600

2014-12-16 15:34:03,708 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1355)) – Job job_local1764589720_0001 running in uber mode : false

2014-12-16 15:34:03,710 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1362)) –  map 0% reduce 0%

2014-12-16 15:34:04,121 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) –

2014-12-16 15:34:04,128 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1435)) – Starting flush of map output

2014-12-16 15:34:04,128 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1453)) – Spilling map output

2014-12-16 15:34:04,128 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1454)) – bufstart = 0; bufend = 2055; bufvoid = 104857600

2014-12-16 15:34:04,128 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1456)) – kvstart = 26214396(104857584); kvend = 26213684(104854736); length = 713/6553600

2014-12-16 15:34:04,179 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:sortAndSpill(1639)) – Finished spill 0

2014-12-16 15:34:04,194 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:done(995)) – Task:attempt_local1764589720_0001_m_000000_0 is done. And is in the process of committing

2014-12-16 15:34:04,207 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) – map

2014-12-16 15:34:04,208 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:sendDone(1115)) – Task 'attempt_local1764589720_0001_m_000000_0' done.

2014-12-16 15:34:04,208 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(249)) – Finishing task: attempt_local1764589720_0001_m_000000_0

2014-12-16 15:34:04,208 INFO  [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(456)) – map task executor complete.

2014-12-16 15:34:04,211 INFO  [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(448)) – Waiting for reduce tasks

2014-12-16 15:34:04,211 INFO  [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:run(302)) – Starting task: attempt_local1764589720_0001_r_000000_0

2014-12-16 15:34:04,221 INFO  [pool-6-thread-1] util.ProcfsBasedProcessTree (ProcfsBasedProcessTree.java:isAvailable(129)) – ProcfsBasedProcessTree currently is supported only on Linux.

2014-12-16 15:34:04,478 INFO  [pool-6-thread-1] mapred.Task (Task.java:initialize(581)) –  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@36154615

2014-12-16 15:34:04,483 INFO  [pool-6-thread-1] mapred.ReduceTask (ReduceTask.java:run(362)) – Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@e2b02a3

2014-12-16 15:34:04,500 INFO  [pool-6-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:<init>(193)) – MergerManager: memoryLimit=949983616, maxSingleShuffleLimit=237495904, mergeThreshold=626989184, ioSortFactor=10, memToMemMergeOutputsThreshold=10

2014-12-16 15:34:04,503 INFO  [EventFetcher for fetching Map Completion Events] reduce.EventFetcher (EventFetcher.java:run(61)) – attempt_local1764589720_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events

2014-12-16 15:34:04,543 INFO  [localfetcher#1] reduce.LocalFetcher (LocalFetcher.java:copyMapOutput(140)) – localfetcher#1 about to shuffle output of map attempt_local1764589720_0001_m_000000_0 decomp: 1832 len: 1836 to MEMORY

2014-12-16 15:34:04,548 INFO  [localfetcher#1] reduce.InMemoryMapOutput (InMemoryMapOutput.java:shuffle(100)) – Read 1832 bytes from map-output for attempt_local1764589720_0001_m_000000_0

2014-12-16 15:34:04,553 INFO  [localfetcher#1] reduce.MergeManagerImpl (MergeManagerImpl.java:closeInMemoryFile(307)) – closeInMemoryFile -> map-output of size: 1832, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->1832

2014-12-16 15:34:04,564 INFO  [EventFetcher for fetching Map Completion Events] reduce.EventFetcher (EventFetcher.java:run(76)) – EventFetcher is interrupted.. Returning

2014-12-16 15:34:04,566 INFO  [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) – 1 / 1 copied.

2014-12-16 15:34:04,566 INFO  [pool-6-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(667)) – finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs

2014-12-16 15:34:04,585 INFO  [pool-6-thread-1] mapred.Merger (Merger.java:merge(589)) – Merging 1 sorted segments

2014-12-16 15:34:04,585 INFO  [pool-6-thread-1] mapred.Merger (Merger.java:merge(688)) – Down to the last merge-pass, with 1 segments left of total size: 1823 bytes

2014-12-16 15:34:04,605 INFO  [pool-6-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(742)) – Merged 1 segments, 1832 bytes to disk to satisfy reduce memory limit

2014-12-16 15:34:04,605 INFO  [pool-6-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(772)) – Merging 1 files, 1836 bytes from disk

2014-12-16 15:34:04,606 INFO  [pool-6-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(787)) – Merging 0 segments, 0 bytes from memory into reduce

2014-12-16 15:34:04,607 INFO  [pool-6-thread-1] mapred.Merger (Merger.java:merge(589)) – Merging 1 sorted segments

2014-12-16 15:34:04,608 INFO  [pool-6-thread-1] mapred.Merger (Merger.java:merge(688)) – Down to the last merge-pass, with 1 segments left of total size: 1823 bytes

2014-12-16 15:34:04,608 INFO  [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) – 1 / 1 copied.

2014-12-16 15:34:04,643 INFO  [pool-6-thread-1] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(996)) – mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords

2014-12-16 15:34:04,714 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1362)) –  map 100% reduce 0%

2014-12-16 15:34:04,842 INFO  [pool-6-thread-1] mapred.Task (Task.java:done(995)) – Task:attempt_local1764589720_0001_r_000000_0 is done. And is in the process of committing

2014-12-16 15:34:04,850 INFO  [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) – 1 / 1 copied.

2014-12-16 15:34:04,850 INFO  [pool-6-thread-1] mapred.Task (Task.java:commit(1156)) – Task attempt_local1764589720_0001_r_000000_0 is allowed to commit now

2014-12-16 15:34:04,881 INFO  [pool-6-thread-1] output.FileOutputCommitter (FileOutputCommitter.java:commitTask(439)) – Saved output of task 'attempt_local1764589720_0001_r_000000_0' to hdfs://192.168.52.128:9000/data/output/_temporary/0/task_local1764589720_0001_r_000000

2014-12-16 15:34:04,884 INFO  [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) – reduce > reduce

2014-12-16 15:34:04,884 INFO  [pool-6-thread-1] mapred.Task (Task.java:sendDone(1115)) – Task 'attempt_local1764589720_0001_r_000000_0' done.

2014-12-16 15:34:04,885 INFO  [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:run(325)) – Finishing task: attempt_local1764589720_0001_r_000000_0

2014-12-16 15:34:04,885 INFO  [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(456)) – reduce task executor complete.

2014-12-16 15:34:05,714 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1362)) –  map 100% reduce 100%

2014-12-16 15:34:05,714 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1373)) – Job job_local1764589720_0001 completed successfully

2014-12-16 15:34:05,733 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1380)) – Counters: 38

    File System Counters

        FILE: Number of bytes read=34542

        FILE: Number of bytes written=470650

        FILE: Number of read operations=0

        FILE: Number of large read operations=0

        FILE: Number of write operations=0

        HDFS: Number of bytes read=2732

        HDFS: Number of bytes written=1306

        HDFS: Number of read operations=15

        HDFS: Number of large read operations=0

        HDFS: Number of write operations=4

    Map-Reduce Framework

        Map input records=31

        Map output records=179

        Map output bytes=2055

        Map output materialized bytes=1836

        Input split bytes=113

        Combine input records=179

        Combine output records=131

        Reduce input groups=131

        Reduce shuffle bytes=1836

        Reduce input records=131

        Reduce output records=131

        Spilled Records=262

        Shuffled Maps =1

        Failed Shuffles=0

        Merged Map outputs=1

        GC time elapsed (ms)=13

        CPU time spent (ms)=0

        Physical memory (bytes) snapshot=0

        Virtual memory (bytes) snapshot=0

        Total committed heap usage (bytes)=440664064

    Shuffle Errors

        BAD_ID=0

        CONNECTION=0

        IO_ERROR=0

        WRONG_LENGTH=0

        WRONG_MAP=0

        WRONG_REDUCE=0

    File Input Format Counters

        Bytes Read=1366

    File Output Format Counters

        Bytes Written=1306

