@PublicEvolving
public class KafkaSink&lt;IN&gt;
extends Object
implements org.apache.flink.api.connector.sink2.StatefulSink&lt;IN,org.apache.flink.connector.kafka.sink.KafkaWriterState&gt;, org.apache.flink.api.connector.sink2.TwoPhaseCommittingSink&lt;IN,org.apache.flink.connector.kafka.sink.KafkaCommittable&gt;

Type Parameters:
IN - type of the records written to Kafka
The sink supports all delivery guarantees described by DeliveryGuarantee:

- DeliveryGuarantee.NONE does not provide any guarantees: messages may be lost in case of issues on the Kafka broker, and messages may be duplicated in case of a Flink failure.
- DeliveryGuarantee.AT_LEAST_ONCE: the sink waits on a checkpoint for all outstanding records in the Kafka buffers to be acknowledged by the Kafka producer. No messages are lost in case of issues with the Kafka brokers, but messages may be duplicated when Flink restarts.
- DeliveryGuarantee.EXACTLY_ONCE: in this mode the KafkaSink writes all messages in a Kafka transaction that is committed to Kafka on a checkpoint. Consequently, if the consumer reads only committed data (see the Kafka consumer config isolation.level), no duplicates are seen in case of a Flink restart. However, this effectively delays records from becoming visible until a checkpoint completes, so adjust the checkpoint interval accordingly. Please ensure that you use a unique transactionalIdPrefix for each application running against the same Kafka cluster, so that concurrently running jobs do not interfere with each other's transactions. Additionally, it is highly recommended to configure the Kafka transaction timeout to be much larger than the maximum checkpoint duration plus the maximum restart duration; otherwise, data loss may occur when Kafka expires an uncommitted transaction.

Nested classes/interfaces inherited from interface org.apache.flink.api.connector.sink2.StatefulSink:
org.apache.flink.api.connector.sink2.StatefulSink.StatefulSinkWriter&lt;InputT,WriterStateT&gt;, org.apache.flink.api.connector.sink2.StatefulSink.WithCompatibleState

| Modifier and Type | Method and Description |
|---|---|
| static &lt;IN&gt; KafkaSinkBuilder&lt;IN&gt; | builder() Create a KafkaSinkBuilder to construct a new KafkaSink. |
| org.apache.flink.api.connector.sink2.Committer&lt;org.apache.flink.connector.kafka.sink.KafkaCommittable&gt; | createCommitter() |
| org.apache.flink.connector.kafka.sink.KafkaWriter&lt;IN&gt; | createWriter(org.apache.flink.api.connector.sink2.Sink.InitContext context) |
| org.apache.flink.core.io.SimpleVersionedSerializer&lt;org.apache.flink.connector.kafka.sink.KafkaCommittable&gt; | getCommittableSerializer() |
| protected Properties | getKafkaProducerConfig() |
| org.apache.flink.core.io.SimpleVersionedSerializer&lt;org.apache.flink.connector.kafka.sink.KafkaWriterState&gt; | getWriterStateSerializer() |
| org.apache.flink.connector.kafka.sink.KafkaWriter&lt;IN&gt; | restoreWriter(org.apache.flink.api.connector.sink2.Sink.InitContext context, Collection&lt;org.apache.flink.connector.kafka.sink.KafkaWriterState&gt; recoveredState) |
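As a usage sketch of the builder() entry point (the broker address, topic name, and class name below are illustrative placeholders, and the snippet assumes flink-connector-kafka on the classpath), a KafkaSink is typically assembled like this:

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;

public class KafkaSinkExample {
    public static void main(String[] args) {
        // Placeholder broker address and topic name; replace with your own.
        KafkaSink<String> sink = KafkaSink.<String>builder()
                .setBootstrapServers("localhost:9092")
                .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                        .setTopic("example-topic")
                        .setValueSerializationSchema(new SimpleStringSchema())
                        .build())
                .setDeliveryGuarantee(DeliveryGuarantee.AT_LEAST_ONCE)
                .build();
        // The sink is then attached to a DataStream<String> via stream.sinkTo(sink).
    }
}
```

For EXACTLY_ONCE, additionally set the delivery guarantee accordingly and supply a transactionalIdPrefix that is unique per application on the cluster, as described above.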
public static &lt;IN&gt; KafkaSinkBuilder&lt;IN&gt; builder()

Create a KafkaSinkBuilder to construct a new KafkaSink.

Type Parameters:
IN - type of incoming records
Returns:
a KafkaSinkBuilder

@Internal
public org.apache.flink.api.connector.sink2.Committer&lt;org.apache.flink.connector.kafka.sink.KafkaCommittable&gt; createCommitter() throws IOException

Specified by:
createCommitter in interface org.apache.flink.api.connector.sink2.TwoPhaseCommittingSink&lt;IN,org.apache.flink.connector.kafka.sink.KafkaCommittable&gt;
Throws:
IOException

@Internal
public org.apache.flink.core.io.SimpleVersionedSerializer&lt;org.apache.flink.connector.kafka.sink.KafkaCommittable&gt; getCommittableSerializer()

Specified by:
getCommittableSerializer in interface org.apache.flink.api.connector.sink2.TwoPhaseCommittingSink&lt;IN,org.apache.flink.connector.kafka.sink.KafkaCommittable&gt;

@Internal
public org.apache.flink.connector.kafka.sink.KafkaWriter&lt;IN&gt; createWriter(org.apache.flink.api.connector.sink2.Sink.InitContext context) throws IOException

Specified by:
createWriter in interface org.apache.flink.api.connector.sink2.Sink&lt;IN&gt;
createWriter in interface org.apache.flink.api.connector.sink2.StatefulSink&lt;IN,org.apache.flink.connector.kafka.sink.KafkaWriterState&gt;
createWriter in interface org.apache.flink.api.connector.sink2.TwoPhaseCommittingSink&lt;IN,org.apache.flink.connector.kafka.sink.KafkaCommittable&gt;
Throws:
IOException

@Internal
public org.apache.flink.connector.kafka.sink.KafkaWriter&lt;IN&gt; restoreWriter(org.apache.flink.api.connector.sink2.Sink.InitContext context, Collection&lt;org.apache.flink.connector.kafka.sink.KafkaWriterState&gt; recoveredState) throws IOException

Specified by:
restoreWriter in interface org.apache.flink.api.connector.sink2.StatefulSink&lt;IN,org.apache.flink.connector.kafka.sink.KafkaWriterState&gt;
Throws:
IOException

@Internal
public org.apache.flink.core.io.SimpleVersionedSerializer&lt;org.apache.flink.connector.kafka.sink.KafkaWriterState&gt; getWriterStateSerializer()

Specified by:
getWriterStateSerializer in interface org.apache.flink.api.connector.sink2.StatefulSink&lt;IN,org.apache.flink.connector.kafka.sink.KafkaWriterState&gt;

@VisibleForTesting
protected Properties getKafkaProducerConfig()
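The EXACTLY_ONCE notes above ask for a Kafka transaction timeout well beyond the maximum checkpoint duration plus the maximum restart duration. A minimal sketch of that sizing rule (the helper name and the 3x safety factor are illustrative assumptions, not Flink defaults):

```java
import java.time.Duration;

public class TransactionTimeoutSizing {
    // Transaction timeout must comfortably exceed maxCheckpoint + maxRestart,
    // or Kafka may expire an uncommitted transaction and data loss can occur.
    // The factor of 3 is an assumed safety margin for illustration.
    static Duration recommendedTransactionTimeout(Duration maxCheckpoint, Duration maxRestart) {
        return maxCheckpoint.plus(maxRestart).multipliedBy(3);
    }

    public static void main(String[] args) {
        Duration timeout = recommendedTransactionTimeout(
                Duration.ofMinutes(2),  // maximum observed checkpoint duration
                Duration.ofMinutes(5)); // maximum expected restart duration
        // 3 * (2 min + 5 min) = 21 minutes, well above the 7-minute sum.
        System.out.println(timeout.toMinutes());
    }
}
```

The resulting value would then be passed to the producer configuration so that Kafka does not abort transactions that are still awaiting a Flink checkpoint commit.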
Copyright © 2014–2023 The Apache Software Foundation. All rights reserved.