Handling failures when running spark-submit in Hadoop YARN mode (Big Data | 2016. 5. 20. 09:24)
SparkConf sparkConf = new SparkConf();
sparkConf.setAppName(uid);
sparkConf.setMaster("yarn");  // master set in the driver code (removed in the workaround below)
sparkConf.set("spark.kryo.registrator", "org.kobic.shark.spark.model.SharkRegistrator");
sparkConf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");
sparkConf.set("spark.kryo.registrationRequired", "true");
sparkConf.set("spark.executor.cores", config.get(Constants.SPARK_MAX_CORE));
sparkConf.set("spark.kryoserializer.buffer", "1024mb");  // Kryo buffer size (see the second failure below)
sparkConf.set("spark.executor.memory", "8192mb");
sparkConf.set("spark.driver.memory", "4096mb");
Running spark-submit in YARN mode with the configuration above produced the following failure:
Exception in thread "main" org.apache.spark.SparkException: Application application_1462756529234_0178 finished with failed status
at org.apache.spark.deploy.yarn.Client.run(Client.scala:940)
at org.apache.spark.deploy.yarn.Client$.main(Client.scala:986)
at org.apache.spark.deploy.yarn.Client.main(Client.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:672)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Workaround:
For this failure, delete the sparkConf.setMaster("yarn"); line and instead pass --master yarn --deploy-mode cluster to spark-submit; this resolves the error.
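For reference, a minimal sketch of the adjusted driver code and submit command is shown below; the jar name is a placeholder, and the main class is the one that appears in the stack trace.

// Master and deploy mode are no longer set in the driver code.
SparkConf sparkConf = new SparkConf();
sparkConf.setAppName(uid);
// ... remaining settings as above, without setMaster("yarn") ...

// They are passed on the spark-submit command line instead (jar name is a placeholder):
// spark-submit --class org.kobic.shark.spark.executor.SparkKOBISFileterExecutor \
//     --master yarn --deploy-mode cluster shark-spark-executor.jar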
A second failure was also seen, this time while initializing the SparkContext:
ERROR SparkContext: Error initializing SparkContext.
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.spark.SparkEnv$.instantiateClass$1(SparkEnv.scala:274)
at org.apache.spark.SparkEnv$.instantiateClassFromConf$1(SparkEnv.scala:285)
at org.apache.spark.SparkEnv$.create(SparkEnv.scala:288)
at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:193)
at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:277)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:450)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:61)
at org.kobic.shark.spark.executor.SparkKOBISFileterExecutor.main(SparkKOBISFileterExecutor.java:57)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:525)
Workaround:
For this failure, set spark.kryoserializer.buffer to 2048mb or less to resolve the error.
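For illustration, a sketch of the adjusted Kryo settings is shown below; the values are examples only, and spark.kryoserializer.buffer.max is a related Spark property that was not part of the original configuration.

// Example values only: keep the Kryo buffer settings below the 2048mb limit.
sparkConf.set("spark.kryoserializer.buffer", "512mb");
// spark.kryoserializer.buffer.max (optional, not in the original config) must also stay under 2048mb:
sparkConf.set("spark.kryoserializer.buffer.max", "1024mb");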