
Kafka Connect runs out of Java heap space after enabling SSL

I recently enabled SSL and tried to start Kafka Connect in distributed mode. When running

connect-distributed connect-distributed.properties

I get the following errors:

[2018-10-09 16:50:57,190] INFO Stopping task (io.confluent.connect.jdbc.sink.JdbcSinkTask:106)
[2018-10-09 16:50:55,471] ERROR WorkerSinkTask{id=sink-mariadb-test} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:177)
java.lang.OutOfMemoryError: Java heap space
        at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
        at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
        at org.apache.kafka.common.memory.MemoryPool$1.tryAllocate(MemoryPool.java:30)
        at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:112)
        at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:344)
        at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:305)
        at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:560)
        at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:496)
        at org.apache.kafka.common.network.Selector.poll(Selector.java:425)
        at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:510)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:271)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:242)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:218)
        at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:230)
        at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:314)
        at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1218)
        at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1181)
        at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.pollConsumer(WorkerSinkTask.java:444)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:317)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:225)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:193)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

and

java.lang.OutOfMemoryError: Direct buffer memory
        at java.nio.Bits.reserveMemory(Bits.java:694)
        at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
        at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
        at sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:241)
        at sun.nio.ch.IOUtil.read(IOUtil.java:195)
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
        at org.apache.kafka.common.network.PlaintextTransportLayer.read(PlaintextTransportLayer.java:104)
        at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:117)
        at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:344)
        at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:305)
        at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:560)
        at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:496)
        at org.apache.kafka.common.network.Selector.poll(Selector.java:425)
        at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:510)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:271)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:242)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:218)
        at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:230)
        at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:314)
        at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1218)
        at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1181)
        at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.pollConsumer(WorkerSinkTask.java:444)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:317)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:225)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:193)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

I also tried to increase the initial and maximum heap sizes by setting the KAFKA_HEAP_OPTS environment variable and running

KAFKA_HEAP_OPTS="-Xms4g -Xmx6g" connect-distributed connect-distributed.properties

but it still doesn't work.
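
For reference, this is roughly how the variable was set and checked (a minimal sketch; the ConnectDistributed class name in the grep assumes the standard Kafka launcher scripts, which pass KAFKA_HEAP_OPTS through to the worker JVM):

# set the heap options once for this shell, then start the worker
export KAFKA_HEAP_OPTS="-Xms4g -Xmx6g"
connect-distributed connect-distributed.properties

# in another shell, confirm the flags actually reached the worker JVM
ps -ef | grep ConnectDistributed    # the command line should contain -Xms4g -Xmx6g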

My questions are:

  1. Could SSL authentication possibly affect memory usage?
  2. How can I solve the problem?

Edit:
I tried disabling SSL, and everything works fine.


I ran into this problem when enabling SASL_SSL in Kafka Connect:

[2018-10-12 12:33:36,426] ERROR WorkerSinkTask{id=test-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:172)
java.lang.OutOfMemoryError: Java heap space

Checking the ConsumerConfig values in the log showed that the configuration was not being applied:

[2018-10-12 12:33:35,573] INFO ConsumerConfig values: 
...
security.protocol = PLAINTEXT

It turns out the settings have to be prefixed with producer. or consumer. in the worker's properties file. [1]

consumer.security.protocol=SASL_SSL

[1] https://docs.confluent.io/current/connect/userguide.html#overriding-producer-and-consumer-settings
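
Following that guide, the relevant part of connect-distributed.properties ends up looking roughly like this (a sketch only: the truststore path and password are placeholders for your own values, the producer.* overrides matter only for source connectors, and the matching sasl.mechanism / sasl.jaas.config entries for each prefix still have to be added for your cluster):

# Worker's own connections to Kafka (config/offset/status topics, group membership)
security.protocol=SASL_SSL
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=<truststore-password>

# Consumers created for sink connectors
consumer.security.protocol=SASL_SSL
consumer.ssl.truststore.location=/path/to/truststore.jks
consumer.ssl.truststore.password=<truststore-password>

# Producers created for source connectors
producer.security.protocol=SASL_SSL
producer.ssl.truststore.location=/path/to/truststore.jks
producer.ssl.truststore.password=<truststore-password>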
