Hadoop Streaming Job Failed (Not Successful) in Python
I am trying to run a Map-Reduce job on Hadoop Streaming with Python scripts, and I am getting the same errors as in Hadoop Streaming Job failed error in python, but those solutions did not work for me.
My scripts work fine when I run:
cat sample.txt | ./p1mapper.py | sort | ./p1reducer.py
But when I run the following:
./bin/hadoop jar contrib/streaming/hadoop-0.20.2-streaming.jar \
-input "p1input/*" \
-output p1output \
-mapper "python p1mapper.py" \
-reducer "python p1reducer.py" \
-file /Users/Tish/Desktop/HW1/p1mapper.py \
-file /Users/Tish/Desktop/HW1/p1reducer.py
(Note: even if I remove "python", or type out the full paths for -mapper and -reducer, the result is the same.)
This is the output I get:
packageJobJar: [/Users/Tish/Desktop/HW1/p1mapper.py, /Users/Tish/Desktop/CS246/HW1/p1reducer.py, /Users/Tish/Documents/workspace/hadoop-0.20.2/tmp/hadoop-unjar4363616744311424878/] [] /var/folders/Mk/MkDxFxURFZmLg+gkCGdO9U+++TM/-Tmp-/streamjob3714058030803466665.jar tmpDir=null
11/01/18 03:02:52 INFO mapred.FileInputFormat: Total input paths to process : 1
11/01/18 03:02:52 INFO streaming.StreamJob: getLocalDirs(): [tmp/mapred/local]
11/01/18 03:02:52 INFO streaming.StreamJob: Running job: job_201101180237_0005
11/01/18 03:02:52 INFO streaming.StreamJob: To kill this job, run:
11/01/18 03:02:52 INFO streaming.StreamJob: /Users/Tish/Documents/workspace/hadoop-0.20.2/bin/../bin/hadoop job -Dmapred.job.tracker=localhost:54311 -kill job_201101180237_0005
11/01/18 03:02:52 INFO streaming.StreamJob: Tracking URL: http://www.glassdoor.com:50030/jobdetails.jsp?jobid=job_201101180237_0005
11/01/18 03:02:53 INFO streaming.StreamJob: map 0% reduce 0%
11/01/18 03:03:05 INFO streaming.StreamJob: map 100% reduce 0%
11/01/18 03:03:44 INFO streaming.StreamJob: map 50% reduce 0%
11/01/18 03:03:47 INFO streaming.StreamJob: map 100% reduce 100%
11/01/18 03:03:47 INFO streaming.StreamJob: To kill this job, run:
11/01/18 03:03:47 INFO streaming.StreamJob: /Users/Tish/Documents/workspace/hadoop-0.20.2/bin/../bin/hadoop job -Dmapred.job.tracker=localhost:54311 -kill job_201101180237_0005
11/01/18 03:03:47 INFO streaming.StreamJob: Tracking URL: http://www.glassdoor.com:50030/jobdetails.jsp?jobid=job_201101180237_0005
11/01/18 03:03:47 ERROR streaming.StreamJob: Job not Successful!
11/01/18 03:03:47 INFO streaming.StreamJob: killJob...
Streaming Job Failed!
For each failed/killed task attempt:
Map output lost, rescheduling: getMapOutput(attempt_201101181225_0001_m_000000_0,0) failed :
org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find taskTracker/jobcache/job_201101181225_0001/attempt_201101181225_0001_m_000000_0/output/file.out.index in any of the configured local directories
    at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathToRead(LocalDirAllocator.java:389)
    at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathToRead(LocalDirAllocator.java:138)
    at org.apache.hadoop.mapred.TaskTracker$MapOutputServlet.doGet(TaskTracker.java:2887)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:502)
    at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:363)
    at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
    at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
    at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
    at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:417)
    at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
    at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
    at org.mortbay.jetty.Server.handle(Server.java:324)
    at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:534)
    at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:864)
    at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:533)
    at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:207)
    at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:403)
    at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:522)
Here are my Python scripts:
p1mapper.py
#!/usr/bin/env python

import sys
import re

SEQ_LEN = 4

eos = re.compile('(?<=[a-zA-Z])\.')  # a period preceded by a letter
ignore = re.compile('[\W\d]')        # non-word characters and digits

for line in sys.stdin:
    array = re.split(eos, line)
    for sent in array:
        sent = ignore.sub('', sent)
        sent = sent.lower()
        if len(sent) >= SEQ_LEN:
            for i in range(len(sent) - SEQ_LEN + 1):
                print '%s 1' % sent[i:i+SEQ_LEN]
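For reference, running the mapper by hand on one sample sentence prints the expected 4-character windows, e.g. echo "Hello there." | python p1mapper.py gives:
hell 1
ello 1
llot 1
loth 1
othe 1
ther 1
here 1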
p1reducer.py
#!/usr/bin/env python

from operator import itemgetter
import sys

word2count = {}

for line in sys.stdin:
    word, count = line.split(' ', 1)
    try:
        count = int(count)
        word2count[word] = word2count.get(word, 0) + count
    except ValueError:    # count was not a number
        pass

# sort
sorted_word2count = sorted(word2count.items(), key=itemgetter(1), reverse=True)

# write the top 3 sequences
for word, count in sorted_word2count[0:3]:
    print '%s\t%s' % (word, count)
Any help would be much appreciated!
UPDATE:
hdfs-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
mapred-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
  </property>
</configuration>
2 Answers
Answer 1 (0 votes):
I was stuck on this same problem for two whole days... The solution that Joe provided in his other post works well for me.
As a response to your problem I suggest:
1) Blindly and completely follow the instructions here on how to set up a single-node cluster (I assume you have already done so)
2) If anywhere you face a java.io.IOException: Incompatible namespaceIDs error (you can find it if you check the logs), have a look here; one common fix is also sketched at the end of this answer
3) Remove all the double quotes from your command; in your example you would run:
./bin/hadoop jar contrib/streaming/hadoop-0.20.2-streaming.jar \
-input "p1input/*" \
-output p1output \
-mapper p1mapper.py \
-reducer p1reducer.py \
-file /Users/Tish/Desktop/HW1/p1mapper.py \
-file /Users/Tish/Desktop/HW1/p1reducer.py
This is absolutely ridiculous, but it is what I was stuck on for two whole days.
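And in case you do hit the Incompatible namespaceIDs error from step 2: it usually means a datanode's stored namespaceID no longer matches the namenode's (typically after a namenode re-format). One common, if heavy-handed, fix is the following; note that the /tmp path below is only the default and depends on your hadoop.tmp.dir setting:
# stop all daemons first
bin/stop-all.sh
# remove the stale datanode state (this deletes the local HDFS block data)
rm -rf /tmp/hadoop-$USER/dfs/data
# re-format the namenode -- WARNING: this erases everything stored in HDFS
bin/hadoop namenode -format
# bring everything back up
bin/start-all.sh
The gentler alternative is to edit the namespaceID value in <dfs.data.dir>/current/VERSION on the datanode so it matches the namenode's, then restart the daemons.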
Answer 2 (5 votes):
You are missing a lot of configuration: you need to define directories and so on. See here:
http://wiki.apache.org/hadoop/QuickStart
Distributed operation is just like the pseudo-distributed operation described above, except (a minimal config sketch follows this list):
- Specify the hostname or IP address of the master server in the values for fs.default.name and mapred.job.tracker in conf/hadoop-site.xml. These are specified as host:port pairs.
- Specify directories for dfs.name.dir and dfs.data.dir in conf/hadoop-site.xml. These are used to hold distributed file system data on the master node and slave nodes respectively. Note that dfs.data.dir may contain a space- or comma-separated list of directory names, so that data may be stored on multiple devices.
- Specify mapred.local.dir in conf/hadoop-site.xml. This determines where temporary MapReduce data is written. It also may be a list of directories.
- Specify mapred.map.tasks and mapred.reduce.tasks in conf/mapred-default.xml. As a rule of thumb, use 10x the number of slave processors for mapred.map.tasks, and 2x the number of slave processors for mapred.reduce.tasks.
- List all slave hostnames or IP addresses in your conf/slaves file, one per line, and make sure the jobtracker is in your /etc/hosts file pointing to your jobtracker node.
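For your single-node setup the same advice applies, except that in 0.20 these properties go into conf/core-site.xml, conf/hdfs-site.xml and conf/mapred-site.xml instead of the older conf/hadoop-site.xml. Here is a minimal sketch of the missing pieces; the /Users/Tish/... paths and the 54310 port are only placeholders, point them wherever you like:

conf/core-site.xml:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
  <property>
    <!-- base directory for the local storage below; placeholder path -->
    <name>hadoop.tmp.dir</name>
    <value>/Users/Tish/hadoop-datastore</value>
  </property>
</configuration>

conf/hdfs-site.xml, alongside your existing dfs.replication property:

  <property>
    <name>dfs.name.dir</name>
    <value>/Users/Tish/hadoop-datastore/dfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/Users/Tish/hadoop-datastore/dfs/data</value>
  </property>

conf/mapred-site.xml, alongside your existing mapred.job.tracker property:

  <property>
    <!-- where the TaskTracker keeps intermediate map output; the
         file.out.index your stack trace could not find is looked up
         under the directories configured here -->
    <name>mapred.local.dir</name>
    <value>/Users/Tish/hadoop-datastore/mapred/local</value>
  </property>

Remember to re-format the namenode (bin/hadoop namenode -format) after changing dfs.name.dir or dfs.data.dir, and restart the daemons before resubmitting the job.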