python - Error in Hadoop 2.6 streaming when processing large files only


I am using Hadoop 2.6 streaming with Python, in a YARN environment on a 3-node cluster.

I can run the MapReduce job on a 1, 5, or 10 GB data file. However, when I give the same MapReduce job a 15 or 24 GB data file, it fails with the following error once it gets to the reduce stage:

15/08/16 18:58:55 INFO mapreduce.Job:  map 69% reduce 20%
15/08/16 18:58:56 INFO mapreduce.Job: Task Id : attempt_1439307476930_0012_m_000094_2, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
    at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:322)
    at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:535)
    at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
    at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
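For context on what I have tried so far: "subprocess failed with code 1" means the Python script itself exited non-zero, which is almost always an uncaught exception (and the `_m_` in the attempt id shows it is a map task failing, even though the reduce percentage is already moving). Since the job succeeds on smaller files, my working theory is that the large files contain malformed records the samples do not. This is a minimal defensive-mapper sketch of the kind of guard I mean; the record layout of `mapper_sgw_lgi.py` is hypothetical here, since only the job command is shown above:

```python
import sys

def map_line(line, n_key_fields=5):
    """Validate one input record and return the line to emit, or None.

    Hypothetical stand-in for the real parsing in mapper_sgw_lgi.py.
    With stream.num.map.output.key.fields=5, every emitted line must
    have at least 5 comma-separated fields, so short records are
    skipped rather than allowed to raise and kill the task.
    """
    fields = line.rstrip("\n").split(",")
    if len(fields) < n_key_fields:
        return None  # skip the bad record instead of crashing
    return ",".join(fields)

def run(stream, out=sys.stdout, err=sys.stderr):
    for line in stream:
        result = map_line(line)
        if result is None:
            # Hadoop streaming counter: makes skipped records visible
            # in the job UI without failing the task.
            err.write("reporter:counter:SGW,BadRecords,1\n")
            continue
        out.write(result + "\n")

if __name__ == "__main__":
    run(sys.stdin)
```

Skipping plus a counter keeps the job alive while still surfacing how many records were dropped.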

The stderr output is seemingly not much more helpful:

Aug 16, 2015 6:56:44 PM com.google.inject.servlet.InternalServletModule$BackwardsCompatibleServletContextProvider get
WARNING: You are attempting to use a deprecated API (specifically, attempting to @Inject ServletContext inside an eagerly created singleton. While we allow this for backwards compatibility, be warned that this MAY have unexpected behavior if you have more than one injector (with ServletModule) running in the same JVM. Please consult the Guice documentation at http://code.google.com/p/google-guice/wiki/Servlets for more information.
Aug 16, 2015 6:56:45 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver as a provider class
Aug 16, 2015 6:56:45 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
Aug 16, 2015 6:56:45 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.mapreduce.v2.app.webapp.AMWebServices as a root resource class
Aug 16, 2015 6:56:45 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
Aug 16, 2015 6:56:45 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
Aug 16, 2015 6:56:45 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
Aug 16, 2015 6:56:46 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.mapreduce.v2.app.webapp.AMWebServices to GuiceManagedComponentProvider with the scope "PerRequest"
log4j:WARN No appenders could be found for logger (org.apache.hadoop.ipc.Server).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

Here is the Hadoop command:

hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-2.6.0.jar \
    -D stream.map.output.field.separator=, \
    -D stream.num.map.output.key.fields=5 \
    -D mapreduce.map.output.key.field.separator=, \
    -D mapreduce.partition.keypartitioner.options=-k1,2 \
    -D log4j.configuration=/usr/hadoop/hadoop-2.6.0/etc/hadoop/log4j.properties \
    -file /usr/hadoop/code/sgw/mapper_sgw_lgi.py \
    -mapper 'python mapper_sgw_lgi.py 172.27.64.10' \
    -file /usr/hadoop/code/sgw/reducer_sgw_lgi.py \
    -reducer 'python reducer_sgw_lgi.py' \
    -partitioner org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner \
    -input /input/172.27.64.10_sgw_1-150_06212015-nl.log \
    -output output3
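To clarify what the partitioning options are meant to do: with the key separator set to "," and `-k1,2`, the KeyFieldBasedPartitioner hashes only the first two comma-separated fields of the 5-field key, so all records sharing those two fields reach the same reducer. A rough Python model of that behavior, assuming an ASCII key and Hadoop's Java-style `31 * h + byte` rolling hash (an assumption; the exact hash only matters for which bucket is chosen, not for the grouping guarantee):

```python
def partition(key, num_reducers, sep=",", k_start=1, k_end=2):
    """Model KeyFieldBasedPartitioner with options -k1,2.

    Hashes only fields k_start..k_end of the comma-separated key,
    so keys that agree on those fields land in the same partition.
    """
    fields = key.split(sep)
    partition_key = sep.join(fields[k_start - 1:k_end])
    # Java-style rolling hash over the bytes, kept in 32-bit range.
    h = 0
    for ch in partition_key:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    return (h & 0x7FFFFFFF) % num_reducers
```

So `a,b,c,d,e` and `a,b,x,y,z` are guaranteed to go to the same reducer, because only `a,b` is hashed.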

