
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [120 seconds]. This timeout is controlled by spark.rpc.lookupTimeout

When I submit a Spark application to YARN, I get the error below concerning the container. The HADOOP (2.7.3) / SPARK (2.1) environment is a single-node cluster running in pseudo-distributed mode. The application works perfectly when run in local mode, but it fails when I try to verify it in cluster mode using YARN as the resource manager. I am new to this world, so I am asking for help.

--- Application log

2017-04-11 07:13:28 INFO  Client:58 - Submitting application 1 to ResourceManager
2017-04-11 07:13:28 INFO  YarnClientImpl:174 - Submitted application application_1491909036583_0001 to ResourceManager at /0.0.0.0:8032
2017-04-11 07:13:29 INFO  Client:58 - Application report for application_1491909036583_0001 (state: ACCEPTED)
2017-04-11 07:13:29 INFO  Client:58 - 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster Host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1491909208425
     final status: UNDEFINED
     tracking URL: http://ip-xxx.xx.xx.xxx:8088/proxy/application_1491909036583_0001/
     user: xxxx
2017-04-11 07:13:30 INFO  Client:58 - Application report for application_1491909036583_0001 (state: ACCEPTED)
2017-04-11 07:13:31 INFO  Client:58 - Application report for application_1491909036583_0001 (state: ACCEPTED)
2017-04-11 07:13:32 INFO  Client:58 - Application report for application_1491909036583_0001 (state: ACCEPTED)
2017-04-11 07:17:37 INFO  Client:58 - Application report for application_1491909036583_0001 (state: FAILED)
2017-04-11 07:17:37 INFO  Client:58 - 
     client token: N/A
     diagnostics: Application application_1491909036583_0001 failed 2 times due to AM Container for appattempt_1491909036583_0001_000002 exited with  exitCode: 10
For more detailed output, check application tracking page:http://"hostname":8088/cluster/app/application_1491909036583_0001Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1491909036583_0001_02_000001
Exit code: 10
Stack trace: ExitCodeException exitCode=10: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
    at org.apache.hadoop.util.Shell.run(Shell.java:479)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

--- Container log

2017-04-11 07:13:30 INFO  ApplicationMaster:47 - Registered signal handlers for [TERM, HUP, INT]
2017-04-11 07:13:31 INFO  ApplicationMaster:59 - ApplicationAttemptId: appattempt_1491909036583_0001_000001
2017-04-11 07:13:32 INFO  SecurityManager:59 - Changing view acls to: root,xxxx
2017-04-11 07:13:32 INFO  SecurityManager:59 - Changing modify acls to: root,xxxx
2017-04-11 07:13:32 INFO  SecurityManager:59 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root, xxxx); users with modify permissions: Set(root, xxxx)
2017-04-11 07:13:32 INFO  Slf4jLogger:80 - Slf4jLogger started
2017-04-11 07:13:32 INFO  Remoting:74 - Starting remoting
2017-04-11 07:13:32 INFO  Remoting:74 - Remoting started; listening on addresses :[akka.tcp://sparkYarnAM@xxx.xx.xx.xxx:45446]
2017-04-11 07:13:32 INFO  Remoting:74 - Remoting now listens on addresses: [akka.tcp://sparkYarnAM@xxx.xx.xx.xxx:45446]
2017-04-11 07:13:32 INFO  Utils:59 - Successfully started service 'sparkYarnAM' on port 45446.
2017-04-11 07:13:32 INFO  ApplicationMaster:59 - Waiting for Spark driver to be reachable.
2017-04-11 07:13:32 INFO  ApplicationMaster:59 - Driver now available: xxx.xx.xx.xxx:47503
2017-04-11 07:15:32 ERROR ApplicationMaster:96 - Uncaught exception: 
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [120 seconds]. This timeout is controlled by spark.rpc.lookupTimeout
    at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcEnv.scala:214)
    at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcEnv.scala:229)
    at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcEnv.scala:225)
    at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33)
    at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcEnv.scala:242)
    at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:98)
    at org.apache.spark.rpc.RpcEnv.setupEndpointRef(RpcEnv.scala:116)
    at org.apache.spark.deploy.yarn.ApplicationMaster.runAMEndpoint(ApplicationMaster.scala:279)
    at org.apache.spark.deploy.yarn.ApplicationMaster.waitForSparkDriver(ApplicationMaster.scala:473)
    at org.apache.spark.deploy.yarn.ApplicationMaster.runExecutorLauncher(ApplicationMaster.scala:315)
    at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:157)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$main$1.apply$mcV$sp(ApplicationMaster.scala:625)
    at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:69)
    at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:68)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:68)
    at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:623)
    at org.apache.spark.deploy.yarn.ExecutorLauncher$.main(ApplicationMaster.scala:646)
    at org.apache.spark.deploy.yarn.ExecutorLauncher.main(ApplicationMaster.scala)
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [120 seconds]
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
    at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
    at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
    at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
    at scala.concurrent.Await$.result(package.scala:107)
    at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcEnv.scala:241)
    ... 16 more
2017-04-11 07:15:32 INFO  ApplicationMaster:59 - Final app status: FAILED, exitCode: 10, (reason: Uncaught exception: org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [120 seconds]. This timeout is controlled by spark.rpc.lookupTimeout)
2017-04-11 07:15:32 INFO  ShutdownHookManager:59 - Shutdown hook called

--- YARN NodeManager log at the time of failure

2017-04-11 07:15:18,728 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 30015 for container-id container_1491909036583_0001_01_000001: 201.6 MB of 1 GB physical memory used; 2.3 GB of 4 GB virtual memory used
2017-04-11 07:15:21,735 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 30015 for container-id container_1491909036583_0001_01_000001: 201.6 MB of 1 GB physical memory used; 2.3 GB of 4 GB virtual memory used
2017-04-11 07:15:24,742 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 30015 for container-id container_1491909036583_0001_01_000001: 201.6 MB of 1 GB physical memory used; 2.3 GB of 4 GB virtual memory used
2017-04-11 07:15:27,749 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 30015 for container-id container_1491909036583_0001_01_000001: 201.6 MB of 1 GB physical memory used; 2.3 GB of 4 GB virtual memory used
2017-04-11 07:15:30,756 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 30015 for container-id container_1491909036583_0001_01_000001: 201.6 MB of 1 GB physical memory used; 2.3 GB of 4 GB virtual memory used
2017-04-11 07:15:33,018 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1491909036583_0001_01_000001 is : 10
2017-04-11 07:15:33,019 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exception from container-launch with container ID: container_1491909036583_0001_01_000001 and exit code: 10
ExitCodeException exitCode=10: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)

--- SparkContext parameters

<!-- Spark Configuration -->
<bean id="sparkInfo" class="SparkInfo">
    <property name="appName" value="framework"></property>
    <property name="master" value="yarn-client"></property>
    <property name="dynamicAllocation" value="false"></property>
    <property name="executorInstances" value="2"></property>
    <property name="executorMemory" value="1g"></property>
    <property name="executorCores" value="4"></property>
    <property name="executorCoresMax" value="2"></property>
    <property name="taskCpus" value="4"></property>
    <property name="executorClassPath" value="/usr/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/*"></property>
    <property name="yarnJar"
        value="${framework.hdfsURI}/app/spark-1.5.0-bin-hadoop2.6/lib/spark-Assembly-1.5.0-hadoop2.6.0.jar"></property>
    <property name="yarnQueue" value="default"></property>
    <property name="memoryFraction" value="0.4"></property>
</bean>

spark-defaults.conf

spark.driver.memory              1g
spark.executor.extraJavaOptions   -XX:ReservedCodeCacheSize=100M -XX:MaxMetaspaceSize=256m -XX:CompressedClassSpaceSize=256m
spark.rpc.lookupTimeout          600s

yarn-site.xml

<!-- Site specific YARN configuration properties -->
  <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>3096</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>3096</value>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>4</value>
  </property>
</configuration>
Asked by Sparknewbie (score: 4)

As HimanshuIIITian mentioned in the comments, you can keep increasing spark.network.timeout until the problem no longer occurs.

Timeout exceptions can occur when Spark is under a heavy workload. When executor memory is low, GC can keep the system extremely busy and add to that workload. Check the logs for out-of-memory errors. Enable -XX:+PrintGCDetails -XX:+PrintGCTimeStamps in spark.executor.extraJavaOptions, and if the logs show full GC being invoked many times before a task completes, increase executorMemory. That should fix the problem.
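As a concrete illustration, a spark-defaults.conf along these lines would apply that advice; the 600s timeout and 2g heap are arbitrary example values, not settings taken from the question:

# raise the network timeout so slow startups are not killed prematurely
spark.network.timeout            600s
# illustrative heap bump; size this to your actual workload
spark.executor.memory            2g
# print a line for every GC so repeated full GCs show up in executor stderr
spark.executor.extraJavaOptions  -XX:+PrintGCDetails -XX:+PrintGCTimeStamps

If the executor stderr then shows repeated Full GC entries just before a task times out, memory pressure rather than the network is the likely culprit.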

Answered by asanand (score: 4)

For me, it was the firewall settings on the Spark cluster that prevented the executors from connecting back properly. This was an issue I did not understand right away, because the Spark UI showed everything as fine: the workers had connected to the master, but other connections were being blocked by the firewall. After setting the following ports and allowing them through the firewall, the problem was resolved (note that Spark uses random ports for these by default):

spark.driver.port                    
spark.blockManager.port              
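A minimal sketch of pinning those ports in spark-defaults.conf; the port numbers 40000 and 40001 are hypothetical, so substitute any free ports your firewall policy allows:

spark.driver.port        40000
spark.blockManager.port  40001

A matching firewall rule is then needed on each node, for example on a firewalld host: firewall-cmd --permanent --add-port=40000-40001/tcp followed by firewall-cmd --reload.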
Answered by Hasson (score: 0)