spark-submit yarn-client run failed


I am running a Spark program in yarn-client mode; I have built a Spark-on-YARN environment. The submit script:

./bin/spark-submit --class wordcounttest \
  --master yarn-client \
  --num-executors 1 \
  --executor-cores 1 \
  --queue root.hadoop \
  /root/desktop/test2.jar \
  10

When I run it, I get the following exception:

15/05/12 17:42:01 INFO spark.SparkContext: Running Spark version 1.3.1
15/05/12 17:42:01 WARN spark.SparkConf: SPARK_CLASSPATH was detected (set to ':/usr/local/hadoop/hadoop-2.5.2/share/hadoop/common/hadoop-lzo-0.4.20-SNAPSHOT.jar'). This is deprecated in Spark 1.0+. Please instead use: - ./spark-submit with --driver-class-path to augment the driver classpath - spark.executor.extraClassPath to augment the executor classpath
15/05/12 17:42:01 WARN spark.SparkConf: Setting 'spark.executor.extraClassPath' to ':/usr/local/hadoop/hadoop-2.5.2/share/hadoop/common/hadoop-lzo-0.4.20-SNAPSHOT.jar' as a work-around.
15/05/12 17:42:01 WARN spark.SparkConf: Setting 'spark.driver.extraClassPath' to ':/usr/local/hadoop/hadoop-2.5.2/share/hadoop/common/hadoop-lzo-0.4.20-SNAPSHOT.jar' as a work-around.
15/05/12 17:42:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/05/12 17:42:02 INFO spark.SecurityManager: Changing view acls to: root
15/05/12 17:42:02 INFO spark.SecurityManager: Changing modify acls to: root
15/05/12 17:42:02 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
15/05/12 17:42:02 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/05/12 17:42:02 INFO Remoting: Starting remoting
15/05/12 17:42:03 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@master:49338]
15/05/12 17:42:03 INFO util.Utils: Successfully started service 'sparkDriver' on port 49338.
15/05/12 17:42:03 INFO spark.SparkEnv: Registering MapOutputTracker
15/05/12 17:42:03 INFO spark.SparkEnv: Registering BlockManagerMaster
15/05/12 17:42:03 INFO storage.DiskBlockManager: Created local directory at /tmp/spark-57f5fb29-784d-4730-92b8-c2e8be97c038/blockmgr-752988bc-b2d0-42f7-891d-5d3edbb4526d
15/05/12 17:42:03 INFO storage.MemoryStore: MemoryStore started with capacity 267.3 MB
15/05/12 17:42:04 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-2f2a46eb-9259-4c6e-b9af-7159efb0b3e9/httpd-3c50fe1e-430e-4077-9cd0-58246e182d98
15/05/12 17:42:04 INFO spark.HttpServer: Starting HTTP Server
15/05/12 17:42:04 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/05/12 17:42:04 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:41749
15/05/12 17:42:04 INFO util.Utils: Successfully started service 'HTTP file server' on port 41749.
15/05/12 17:42:04 INFO spark.SparkEnv: Registering OutputCommitCoordinator
15/05/12 17:42:05 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/05/12 17:42:05 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
15/05/12 17:42:05 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
15/05/12 17:42:05 INFO ui.SparkUI: Started SparkUI at http://master:4040
15/05/12 17:42:05 INFO spark.SparkContext: Added JAR file:/root/desktop/test2.jar at http://192.168.147.201:41749/jars/test2.jar with timestamp 1431423725289
15/05/12 17:42:05 WARN cluster.YarnClientSchedulerBackend: NOTE: SPARK_WORKER_MEMORY is deprecated. Use SPARK_EXECUTOR_MEMORY or --executor-memory through spark-submit instead.
15/05/12 17:42:06 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.147.201:8032
15/05/12 17:42:06 INFO yarn.Client: Requesting a new application from cluster with 2 NodeManagers
15/05/12 17:42:06 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
15/05/12 17:42:06 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
15/05/12 17:42:06 INFO yarn.Client: Setting up container launch context for our AM
15/05/12 17:42:06 INFO yarn.Client: Preparing resources for our AM container
15/05/12 17:42:07 WARN yarn.Client: SPARK_JAR detected in the system environment. This variable has been deprecated in favor of the spark.yarn.jar configuration variable.
15/05/12 17:42:07 INFO yarn.Client: Uploading resource file:/usr/local/spark/spark-1.3.1-bin-hadoop2.5.0-cdh5.3.2/lib/spark-assembly-1.3.1-hadoop2.5.0-cdh5.3.2.jar -> hdfs://master:9000/user/root/.sparkStaging/application_1431423592173_0003/spark-assembly-1.3.1-hadoop2.5.0-cdh5.3.2.jar
15/05/12 17:42:11 INFO yarn.Client: Setting up the launch environment for our AM container
15/05/12 17:42:11 WARN yarn.Client: SPARK_JAR detected in the system environment. This variable has been deprecated in favor of the spark.yarn.jar configuration variable.
15/05/12 17:42:11 INFO spark.SecurityManager: Changing view acls to: root
15/05/12 17:42:11 INFO spark.SecurityManager: Changing modify acls to: root
15/05/12 17:42:11 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
15/05/12 17:42:11 INFO yarn.Client: Submitting application 3 to ResourceManager
15/05/12 17:42:11 INFO impl.YarnClientImpl: Submitted application application_1431423592173_0003
15/05/12 17:42:12 INFO yarn.Client: Application report for application_1431423592173_0003 (state: FAILED)
15/05/12 17:42:12 INFO yarn.Client:
	 client token: N/A
	 diagnostics: Application application_1431423592173_0003 submitted by user root to unknown queue: root.hadoop
	 ApplicationMaster host: N/A
	 ApplicationMaster RPC port: -1
	 queue: root.hadoop
	 start time: 1431423731271
	 final status: FAILED
	 tracking URL: N/A
	 user: root
Exception in thread "main" org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
	at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:113)
	at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:59)
	at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:141)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:381)
	at wordcounttest$.main(wordcounttest.scala:14)
	at wordcounttest.main(wordcounttest.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
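The key line in the report is `diagnostics: ... unknown queue: root.hadoop`. Before changing anything it can help to list the queues the ResourceManager actually knows about; one way on a stock Hadoop 2.x install (commands assumed from a standard distribution) is:

```shell
# List the queues configured in the running ResourceManager,
# with their state and capacity.
mapred queue -list

# The ResourceManager web UI scheduler page shows the same information
# (http://master:8088/cluster/scheduler by default).
```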

My code is simple, as follows:

import org.apache.log4j.{Level, Logger}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object wordcounttest {
  def main(args: Array[String]): Unit = {
    Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
    Logger.getLogger("org.eclipse.jetty.server").setLevel(Level.OFF)

    val sparkConf = new SparkConf().setAppName("wordcounttest prog")
    val sc = new SparkContext(sparkConf)
    val sqlContext = new SQLContext(sc)

    val file = sc.textFile("/data/test/pom.xml")
    val counts = file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
    println(counts)  // note: this prints only the RDD's toString, not its contents
    counts.saveAsTextFile("/data/test/pom_count.txt")
  }
}
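For what it's worth, the word-count transformation itself is fine; it can be sanity-checked on plain Scala collections without a cluster. This is a throwaway local sketch (the name WordCountLocal is mine, not part of the program above), not the Spark API:

```scala
// The same flatMap/map/reduce word count, on plain Scala collections.
object WordCountLocal {
  def wordCount(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(_.split(" "))                       // split each line into words
      .groupBy(identity)                           // group identical words
      .map { case (word, ws) => (word, ws.size) }  // count each group

  def main(args: Array[String]): Unit =
    wordCount(Seq("a b a", "b c")).toSeq.sortBy(_._1).foreach(println)
}
```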

I have been debugging this problem for 2 days. Please help! Thanks.

Try changing the queue name to just hadoop. The diagnostics show the application was submitted to an unknown queue (root.hadoop): the YARN Capacity Scheduler expects the leaf queue name when submitting, not the fully qualified root.hadoop form.
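Concretely, the same submit command with the queue name changed (assuming the configured leaf queue is indeed hadoop):

```shell
./bin/spark-submit --class wordcounttest \
  --master yarn-client \
  --num-executors 1 \
  --executor-cores 1 \
  --queue hadoop \
  /root/desktop/test2.jar \
  10
```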

