@Deprecated
@PublicEvolving
public class FlinkKafkaProducer<IN>
extends org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,FlinkKafkaProducer.KafkaTransactionState,FlinkKafkaProducer.KafkaTransactionContext>

Deprecated. Please use KafkaSink instead.

Flink Sink to produce data into a Kafka topic. By default, the producer uses the FlinkKafkaProducer.Semantic.AT_LEAST_ONCE semantic. Before using FlinkKafkaProducer.Semantic.EXACTLY_ONCE, please refer to Flink's Kafka connector documentation.

**Nested Class Summary**

| Modifier and Type | Class and Description |
|---|---|
| static class | FlinkKafkaProducer.ContextStateSerializer<br>Deprecated. TypeSerializer for FlinkKafkaProducer.KafkaTransactionContext. |
| static class | FlinkKafkaProducer.KafkaTransactionContext<br>Deprecated. Context associated to this instance of the FlinkKafkaProducer. |
| static class | FlinkKafkaProducer.KafkaTransactionState<br>Deprecated. State for handling transactions. |
| static class | FlinkKafkaProducer.NextTransactionalIdHint<br>Deprecated. Keeps the information required to deduce the next safe-to-use transactional id. |
| static class | FlinkKafkaProducer.NextTransactionalIdHintSerializer<br>Deprecated. TypeSerializer for FlinkKafkaProducer.NextTransactionalIdHint. |
| static class | FlinkKafkaProducer.Semantic<br>Deprecated. Semantics that can be chosen. |
| static class | FlinkKafkaProducer.TransactionStateSerializer<br>Deprecated. TypeSerializer for FlinkKafkaProducer.KafkaTransactionState. |
Nested classes/interfaces inherited from class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction: TwoPhaseCommitSinkFunction.State<TXN,CONTEXT>, TwoPhaseCommitSinkFunction.StateSerializer<TXN,CONTEXT>, TwoPhaseCommitSinkFunction.StateSerializerSnapshot<TXN,CONTEXT>, TwoPhaseCommitSinkFunction.TransactionHolder<TXN>

**Field Summary**

| Modifier and Type | Field and Description |
|---|---|
| protected Exception | asyncException<br>Deprecated. Errors encountered in the async producer are stored here. |
| protected org.apache.kafka.clients.producer.Callback | callback<br>Deprecated. The callback that handles error propagation or logging callbacks. |
| static int | DEFAULT_KAFKA_PRODUCERS_POOL_SIZE<br>Deprecated. Default number of KafkaProducers in the pool. |
| static org.apache.flink.api.common.time.Time | DEFAULT_KAFKA_TRANSACTION_TIMEOUT<br>Deprecated. Default value for the Kafka transaction timeout. |
| protected String | defaultTopicId<br>Deprecated. The name of the default topic this producer is writing data to. |
| static String | KEY_DISABLE_METRICS<br>Deprecated. Configuration key for disabling metrics reporting. |
| protected AtomicLong | pendingRecords<br>Deprecated. Number of unacknowledged records. |
| protected Properties | producerConfig<br>Deprecated. User-defined properties for the Producer. |
| static int | SAFE_SCALE_DOWN_FACTOR<br>Deprecated. This coefficient determines the safe scale-down factor. |
| protected FlinkKafkaProducer.Semantic | semantic<br>Deprecated. Semantic chosen for this instance. |
| protected Map<String,int[]> | topicPartitionsMap<br>Deprecated. Partitions of each topic. |
| protected boolean | writeTimestampToKafka<br>Deprecated. Flag controlling whether we are writing the Flink record's timestamp into Kafka. |
**Constructor Summary**

| Constructor and Description |
|---|
| FlinkKafkaProducer(String defaultTopic, KafkaSerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaProducer.Semantic semantic)<br>Deprecated. Creates a FlinkKafkaProducer for a given topic. |
| FlinkKafkaProducer(String defaultTopic, KafkaSerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaProducer.Semantic semantic, int kafkaProducersPoolSize)<br>Deprecated. Creates a FlinkKafkaProducer for a given topic. |
| FlinkKafkaProducer(String topicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig)<br>Deprecated. |
| FlinkKafkaProducer(String topicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaProducer.Semantic semantic)<br>Deprecated. |
| FlinkKafkaProducer(String defaultTopicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, Optional<FlinkKafkaPartitioner<IN>> customPartitioner)<br>Deprecated. |
| FlinkKafkaProducer(String defaultTopicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, Optional<FlinkKafkaPartitioner<IN>> customPartitioner, FlinkKafkaProducer.Semantic semantic, int kafkaProducersPoolSize)<br>Deprecated. |
| FlinkKafkaProducer(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema, Properties producerConfig)<br>Deprecated. Creates a FlinkKafkaProducer for a given topic. |
| FlinkKafkaProducer(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaPartitioner<IN> customPartitioner, FlinkKafkaProducer.Semantic semantic, int kafkaProducersPoolSize)<br>Deprecated. Creates a FlinkKafkaProducer for a given topic. |
| FlinkKafkaProducer(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema, Properties producerConfig, Optional<FlinkKafkaPartitioner<IN>> customPartitioner)<br>Deprecated. Creates a FlinkKafkaProducer for a given topic. |
| FlinkKafkaProducer(String brokerList, String topicId, KeyedSerializationSchema<IN> serializationSchema)<br>Deprecated. |
| FlinkKafkaProducer(String brokerList, String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema)<br>Deprecated. Creates a FlinkKafkaProducer for a given topic. |
**Method Summary**

| Modifier and Type | Method and Description |
|---|---|
| protected void | abort(FlinkKafkaProducer.KafkaTransactionState transaction)<br>Deprecated. |
| protected void | acknowledgeMessage()<br>Deprecated. ATTENTION to subclass implementors: When overriding this method, please always call super.acknowledgeMessage() to keep the invariants of the internal bookkeeping of the producer. |
| protected FlinkKafkaProducer.KafkaTransactionState | beginTransaction()<br>Deprecated. |
| protected void | checkErroneous()<br>Deprecated. |
| void | close()<br>Deprecated. |
| protected void | commit(FlinkKafkaProducer.KafkaTransactionState transaction)<br>Deprecated. |
| protected FlinkKafkaInternalProducer<byte[],byte[]> | createProducer()<br>Deprecated. |
| protected void | finishRecoveringContext(Collection<FlinkKafkaProducer.KafkaTransactionState> handledTransactions)<br>Deprecated. |
| protected static int[] | getPartitionsByTopic(String topic, org.apache.kafka.clients.producer.Producer<byte[],byte[]> producer)<br>Deprecated. |
| static long | getTransactionTimeout(Properties producerConfig)<br>Deprecated. |
| FlinkKafkaProducer<IN> | ignoreFailuresAfterTransactionTimeout()<br>Deprecated. Disables the propagation of exceptions thrown when committing presumably timed-out Kafka transactions during recovery of the job. |
| void | initializeState(org.apache.flink.runtime.state.FunctionInitializationContext context)<br>Deprecated. |
| protected Optional<FlinkKafkaProducer.KafkaTransactionContext> | initializeUserContext()<br>Deprecated. |
| void | invoke(FlinkKafkaProducer.KafkaTransactionState transaction, IN next, org.apache.flink.streaming.api.functions.sink.SinkFunction.Context context)<br>Deprecated. |
| void | open(org.apache.flink.configuration.Configuration configuration)<br>Deprecated. Initializes the connection to Kafka. |
| protected void | preCommit(FlinkKafkaProducer.KafkaTransactionState transaction)<br>Deprecated. |
| protected void | recoverAndAbort(FlinkKafkaProducer.KafkaTransactionState transaction)<br>Deprecated. |
| protected void | recoverAndCommit(FlinkKafkaProducer.KafkaTransactionState transaction)<br>Deprecated. |
| void | setLogFailuresOnly(boolean logFailuresOnly)<br>Deprecated. Defines whether the producer should fail on errors, or only log them. |
| void | setTransactionalIdPrefix(String transactionalIdPrefix)<br>Deprecated. Specifies the prefix of the transactional.id property to be used by the producers when communicating with Kafka. |
| void | setWriteTimestampToKafka(boolean writeTimestampToKafka)<br>Deprecated. If set to true, Flink will write the (event time) timestamp attached to each record into Kafka. |
| void | snapshotState(org.apache.flink.runtime.state.FunctionSnapshotContext context)<br>Deprecated. |
Methods inherited from class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction: currentTransaction, enableTransactionTimeoutWarnings, finish, finishProcessing, getUserContext, invoke, invoke, notifyCheckpointAborted, notifyCheckpointComplete, pendingTransactions, setTransactionTimeout

Methods inherited from class org.apache.flink.api.common.functions.AbstractRichFunction: getIterationRuntimeContext, getRuntimeContext, setRuntimeContext

**Field Detail**

public static final int SAFE_SCALE_DOWN_FACTOR
If the Flink application previously failed before the first checkpoint completed, or we are starting a new batch of FlinkKafkaProducer from scratch without a clean shutdown of the previous one, FlinkKafkaProducer doesn't know which set of Kafka transactionalIds was used before. In that case, it plays it safe and aborts all of the possible transactionalIds in the range [0, getNumberOfParallelSubtasks() * kafkaProducersPoolSize * SAFE_SCALE_DOWN_FACTOR).

The range of transactional ids available for use is [0, getNumberOfParallelSubtasks() * kafkaProducersPoolSize).

This means that if we decrease getNumberOfParallelSubtasks() by a factor larger than SAFE_SCALE_DOWN_FACTOR, some lingering transactions may be left behind.
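The two ranges above can be made concrete with a small arithmetic sketch. This is an illustration, not Flink code: `parallelism` and `poolSize` stand in for getNumberOfParallelSubtasks() and kafkaProducersPoolSize, and the factor value 5 is an assumption made for this example.

```java
// Sketch of the transactional-id ranges described above (assumed factor of 5).
public class TransactionalIdRanges {
    static final int SAFE_SCALE_DOWN_FACTOR = 5; // assumption for this sketch

    // Upper bound (exclusive) of the ids a running job may use.
    static int usableRangeEnd(int parallelism, int poolSize) {
        return parallelism * poolSize;
    }

    // Upper bound (exclusive) of the ids aborted on a fresh start, when the
    // previously used id set is unknown.
    static int abortRangeEnd(int parallelism, int poolSize) {
        return parallelism * poolSize * SAFE_SCALE_DOWN_FACTOR;
    }

    public static void main(String[] args) {
        // Scaling down from parallelism 20 to 4 is a factor of exactly 5:
        // the new abort range still covers every previously usable id.
        int oldUsableEnd = usableRangeEnd(20, 5);  // ids [0, 100) were in use
        int newAbortEnd = abortRangeEnd(4, 5);     // ids [0, 100) get aborted
        System.out.println(oldUsableEnd <= newAbortEnd); // true: no lingering ids
    }
}
```

Scaling down by more than the factor (e.g. parallelism 25 to 4 here) would leave old ids above the abort range, which is exactly the lingering-transaction case the text warns about.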
public static final int DEFAULT_KAFKA_PRODUCERS_POOL_SIZE

Default number of KafkaProducers in the pool. See FlinkKafkaProducer.Semantic.EXACTLY_ONCE.

public static final org.apache.flink.api.common.time.Time DEFAULT_KAFKA_TRANSACTION_TIMEOUT

Default value for the Kafka transaction timeout.
public static final String KEY_DISABLE_METRICS

Configuration key for disabling metrics reporting.

protected final Properties producerConfig

User-defined properties for the Producer.

protected final String defaultTopicId

The name of the default topic this producer is writing data to.

protected final Map<String,int[]> topicPartitionsMap

Partitions of each topic.

protected boolean writeTimestampToKafka

Flag controlling whether we are writing the Flink record's timestamp into Kafka.

protected FlinkKafkaProducer.Semantic semantic

Semantic chosen for this instance.

@Nullable protected transient org.apache.kafka.clients.producer.Callback callback

The callback that handles error propagation or logging callbacks.

@Nullable protected transient volatile Exception asyncException

Errors encountered in the async producer are stored here.

protected final AtomicLong pendingRecords

Number of unacknowledged records.
**Constructor Detail**

public FlinkKafkaProducer(String brokerList, String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema)

Parameters:
brokerList - Comma separated addresses of the brokers
topicId - ID of the Kafka topic.
serializationSchema - User defined (keyless) serialization schema.

public FlinkKafkaProducer(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema, Properties producerConfig)

Using this constructor, the default FlinkFixedPartitioner will be used as the partitioner. This default partitioner maps each sink subtask to a single Kafka partition (i.e. all records received by a sink subtask will end up in the same Kafka partition).

To use a custom partitioner, please use FlinkKafkaProducer(String, SerializationSchema, Properties, Optional) instead.

Parameters:
topicId - ID of the Kafka topic.
serializationSchema - User defined key-less serialization schema.
producerConfig - Properties with the producer configuration.

public FlinkKafkaProducer(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema, Properties producerConfig, Optional<FlinkKafkaPartitioner<IN>> customPartitioner)

Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic. It accepts a key-less SerializationSchema and possibly a custom FlinkKafkaPartitioner.

Since a key-less SerializationSchema is used, all records sent to Kafka will not have an attached key. Therefore, if a partitioner is also not provided, records will be distributed to Kafka partitions in a round-robin fashion.
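The two partitioning behaviors mentioned here can be sketched as plain arithmetic. The modulo mapping below mirrors how FlinkFixedPartitioner pins each sink subtask to one partition; the round-robin counter is a simplified stand-in for keyless distribution, not the connector's actual code.

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch of fixed vs. round-robin partition assignment.
public class PartitioningSketch {
    // Fixed partitioning: every record from a given subtask lands in the same
    // partition (subtask index modulo partition count).
    static int fixedPartition(int subtaskIndex, int numPartitions) {
        return subtaskIndex % numPartitions;
    }

    // Round-robin: keyless records cycle through all partitions in turn.
    private static final AtomicLong counter = new AtomicLong();

    static int roundRobinPartition(int numPartitions) {
        return (int) (counter.getAndIncrement() % numPartitions);
    }

    public static void main(String[] args) {
        // Subtask 2 writing to a 4-partition topic always hits partition 2.
        System.out.println(fixedPartition(2, 4)); // 2, for every record
        // Keyless round-robin spreads consecutive records across partitions.
        System.out.println(roundRobinPartition(4)); // 0
        System.out.println(roundRobinPartition(4)); // 1
    }
}
```

The practical consequence: with the default fixed partitioner, a job whose parallelism is lower than the partition count leaves some partitions unwritten, whereas round-robin touches all of them.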
Parameters:
topicId - The topic to write data to
serializationSchema - A key-less serializable serialization schema for turning user objects into a kafka-consumable byte[]
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
customPartitioner - A serializable partitioner for assigning messages to Kafka partitions. If a partitioner is not provided, records will be distributed to Kafka partitions in a round-robin fashion.

public FlinkKafkaProducer(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema, Properties producerConfig, @Nullable FlinkKafkaPartitioner<IN> customPartitioner, FlinkKafkaProducer.Semantic semantic, int kafkaProducersPoolSize)

Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic. It accepts a key-less SerializationSchema and possibly a custom FlinkKafkaPartitioner.

Since a key-less SerializationSchema is used, all records sent to Kafka will not have an attached key. Therefore, if a partitioner is also not provided, records will be distributed to Kafka partitions in a round-robin fashion.

Parameters:
topicId - The topic to write data to
serializationSchema - A key-less serializable serialization schema for turning user objects into a kafka-consumable byte[]
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
customPartitioner - A serializable partitioner for assigning messages to Kafka partitions. If a partitioner is not provided, records will be distributed to Kafka partitions in a round-robin fashion.
semantic - Defines semantic that will be used by this producer (see FlinkKafkaProducer.Semantic).
kafkaProducersPoolSize - Overwrite default KafkaProducers pool size (see FlinkKafkaProducer.Semantic.EXACTLY_ONCE).

@Deprecated
public FlinkKafkaProducer(String brokerList, String topicId, KeyedSerializationSchema<IN> serializationSchema)
Deprecated. Use FlinkKafkaProducer(String, KafkaSerializationSchema, Properties, FlinkKafkaProducer.Semantic) instead.

Using this constructor, the default FlinkFixedPartitioner will be used as the partitioner. This default partitioner maps each sink subtask to a single Kafka partition (i.e. all records received by a sink subtask will end up in the same Kafka partition).

To use a custom partitioner, please use FlinkKafkaProducer(String, KeyedSerializationSchema, Properties, Optional) instead.

Parameters:
brokerList - Comma separated addresses of the brokers
topicId - ID of the Kafka topic.
serializationSchema - User defined serialization schema supporting key/value messages

@Deprecated
public FlinkKafkaProducer(String topicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig)

Deprecated. Use FlinkKafkaProducer(String, KafkaSerializationSchema, Properties, FlinkKafkaProducer.Semantic) instead.

Using this constructor, the default FlinkFixedPartitioner will be used as the partitioner. This default partitioner maps each sink subtask to a single Kafka partition (i.e. all records received by a sink subtask will end up in the same Kafka partition).

To use a custom partitioner, please use FlinkKafkaProducer(String, KeyedSerializationSchema, Properties, Optional) instead.

Parameters:
topicId - ID of the Kafka topic.
serializationSchema - User defined serialization schema supporting key/value messages
producerConfig - Properties with the producer configuration.

@Deprecated
public FlinkKafkaProducer(String topicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaProducer.Semantic semantic)

Deprecated. Use FlinkKafkaProducer(String, KafkaSerializationSchema, Properties, FlinkKafkaProducer.Semantic) instead.

Using this constructor, the default FlinkFixedPartitioner will be used as the partitioner. This default partitioner maps each sink subtask to a single Kafka partition (i.e. all records received by a sink subtask will end up in the same Kafka partition).

Parameters:
topicId - ID of the Kafka topic.
serializationSchema - User defined serialization schema supporting key/value messages
producerConfig - Properties with the producer configuration.
semantic - Defines semantic that will be used by this producer (see FlinkKafkaProducer.Semantic).

@Deprecated
public FlinkKafkaProducer(String defaultTopicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, Optional<FlinkKafkaPartitioner<IN>> customPartitioner)
Deprecated. Use FlinkKafkaProducer(String, KafkaSerializationSchema, Properties, FlinkKafkaProducer.Semantic) instead.

Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic. It accepts a KeyedSerializationSchema and possibly a custom FlinkKafkaPartitioner.

If a partitioner is not provided, written records will be partitioned by the attached key of each record (as determined by KeyedSerializationSchema.serializeKey(Object)). If written records do not have a key (i.e., KeyedSerializationSchema.serializeKey(Object) returns null), they will be distributed to Kafka partitions in a round-robin fashion.

Parameters:
defaultTopicId - The default topic to write data to
serializationSchema - A serializable serialization schema for turning user objects into a kafka-consumable byte[] supporting key/value messages
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
customPartitioner - A serializable partitioner for assigning messages to Kafka partitions. If a partitioner is not provided, records will be partitioned by the key of each record (determined by KeyedSerializationSchema.serializeKey(Object)). If the keys are null, then records will be distributed to Kafka partitions in a round-robin fashion.

@Deprecated
public FlinkKafkaProducer(String defaultTopicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, Optional<FlinkKafkaPartitioner<IN>> customPartitioner, FlinkKafkaProducer.Semantic semantic, int kafkaProducersPoolSize)
Deprecated. Use FlinkKafkaProducer(String, KafkaSerializationSchema, Properties, FlinkKafkaProducer.Semantic) instead.

Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic. It accepts a KeyedSerializationSchema and possibly a custom FlinkKafkaPartitioner.

If a partitioner is not provided, written records will be partitioned by the attached key of each record (as determined by KeyedSerializationSchema.serializeKey(Object)). If written records do not have a key (i.e., KeyedSerializationSchema.serializeKey(Object) returns null), they will be distributed to Kafka partitions in a round-robin fashion.

Parameters:
defaultTopicId - The default topic to write data to
serializationSchema - A serializable serialization schema for turning user objects into a kafka-consumable byte[] supporting key/value messages
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
customPartitioner - A serializable partitioner for assigning messages to Kafka partitions. If a partitioner is not provided, records will be partitioned by the key of each record (determined by KeyedSerializationSchema.serializeKey(Object)). If the keys are null, then records will be distributed to Kafka partitions in a round-robin fashion.
semantic - Defines semantic that will be used by this producer (see FlinkKafkaProducer.Semantic).
kafkaProducersPoolSize - Overwrite default KafkaProducers pool size (see FlinkKafkaProducer.Semantic.EXACTLY_ONCE).

public FlinkKafkaProducer(String defaultTopic, KafkaSerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaProducer.Semantic semantic)
Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic. It accepts a KafkaSerializationSchema for serializing records to a ProducerRecord, including partitioning information.

Parameters:
defaultTopic - The default topic to write data to
serializationSchema - A serializable serialization schema for turning user objects into a kafka-consumable byte[] supporting key/value messages
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
semantic - Defines semantic that will be used by this producer (see FlinkKafkaProducer.Semantic).

public FlinkKafkaProducer(String defaultTopic, KafkaSerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaProducer.Semantic semantic, int kafkaProducersPoolSize)

Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic. It accepts a KafkaSerializationSchema and possibly a custom FlinkKafkaPartitioner.

Parameters:
defaultTopic - The default topic to write data to
serializationSchema - A serializable serialization schema for turning user objects into a kafka-consumable byte[] supporting key/value messages
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
semantic - Defines semantic that will be used by this producer (see FlinkKafkaProducer.Semantic).
kafkaProducersPoolSize - Overwrite default KafkaProducers pool size (see FlinkKafkaProducer.Semantic.EXACTLY_ONCE).

**Method Detail**

public void setWriteTimestampToKafka(boolean writeTimestampToKafka)

If set to true, Flink will write the (event time) timestamp attached to each record into Kafka.
Parameters:
writeTimestampToKafka - Flag indicating if Flink's internal timestamps are written to Kafka.

public void setLogFailuresOnly(boolean logFailuresOnly)

Defines whether the producer should fail on errors, or only log them.

Parameters:
logFailuresOnly - The flag to indicate logging-only on exceptions.

public void setTransactionalIdPrefix(String transactionalIdPrefix)

Specifies the prefix of the transactional.id property to be used by the producers when communicating with Kafka. If not specified, the prefix defaults to taskName + "-" + operatorUid.

Note that, if we change the prefix when the Flink application previously failed before the first checkpoint completed, or we are starting a new batch of FlinkKafkaProducer from scratch without a clean shutdown of the previous one, there will be some lingering transactions left, since we don't know which transactional.id prefix was used before.
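The reason a changed prefix leaves lingering transactions can be sketched in a few lines: each producer's transactional.id is derived deterministically from the prefix, so ids minted under the old prefix can never be re-derived (and therefore never aborted) after the prefix changes. The id format below is a hypothetical illustration, not Flink's exact scheme.

```java
// Hypothetical transactional.id derivation: prefix plus a per-producer index.
// Flink's real scheme differs; the point is only that the prefix is baked
// into every id a job can generate or abort.
public class TransactionalIds {
    static String transactionalId(String prefix, int producerIndex) {
        return prefix + "-" + producerIndex;
    }

    public static void main(String[] args) {
        String oldId = transactionalId("jobA", 0);
        String newId = transactionalId("jobB", 0);
        // After the prefix change, the restarted job only ever derives "jobB-*"
        // ids, so transactions left open under "jobA-*" are never aborted by it.
        System.out.println(oldId.equals(newId)); // false
    }
}
```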
Parameters:
transactionalIdPrefix - the transactional.id prefix
Throws:
NullPointerException - Thrown, if the transactionalIdPrefix was null.

public FlinkKafkaProducer<IN> ignoreFailuresAfterTransactionTimeout()

Disables the propagation of exceptions thrown when committing presumably timed-out Kafka transactions during recovery of the job.
Note that we use System.currentTimeMillis() to track the age of a transaction.
Moreover, only exceptions thrown during the recovery are caught, i.e., the producer will
attempt at least one commit of the transaction before giving up.
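The age check mentioned here can be sketched as a wall-clock comparison: a commit failure during recovery is swallowed only when the transaction is older than the configured transaction timeout. The method below is an illustrative stand-in under that reading, not the actual implementation.

```java
// Sketch of the "presumably timed out" test: a transaction begun more than
// `timeoutMillis` ago (by System.currentTimeMillis()) is assumed to have been
// aborted by Kafka already, so its commit failure may be ignored.
public class TransactionAgeCheck {
    static boolean olderThanTimeout(long startTimeMillis, long timeoutMillis, long nowMillis) {
        return nowMillis - startTimeMillis > timeoutMillis;
    }

    public static void main(String[] args) {
        long timeout = 3_600_000L; // e.g. a 1-hour transaction.timeout.ms
        long started = System.currentTimeMillis() - 2 * timeout;
        // A transaction begun two timeout periods ago is presumed timed out.
        System.out.println(olderThanTimeout(started, timeout, System.currentTimeMillis())); // true
    }
}
```

Because the check relies on wall-clock time rather than Kafka's own bookkeeping, clock skew between the Flink cluster and the brokers could misclassify a transaction, which is one reason the feature is opt-in.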
Overrides:
ignoreFailuresAfterTransactionTimeout in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,FlinkKafkaProducer.KafkaTransactionState,FlinkKafkaProducer.KafkaTransactionContext>

public void open(org.apache.flink.configuration.Configuration configuration) throws Exception

Initializes the connection to Kafka.
Specified by:
open in interface org.apache.flink.api.common.functions.RichFunction
Overrides:
open in class org.apache.flink.api.common.functions.AbstractRichFunction
Throws:
Exception

public void invoke(FlinkKafkaProducer.KafkaTransactionState transaction, IN next, org.apache.flink.streaming.api.functions.sink.SinkFunction.Context context) throws FlinkKafkaException

Overrides:
invoke in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,FlinkKafkaProducer.KafkaTransactionState,FlinkKafkaProducer.KafkaTransactionContext>
Throws:
FlinkKafkaException

public void close() throws FlinkKafkaException

Specified by:
close in interface org.apache.flink.api.common.functions.RichFunction
Overrides:
close in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,FlinkKafkaProducer.KafkaTransactionState,FlinkKafkaProducer.KafkaTransactionContext>
Throws:
FlinkKafkaException

protected FlinkKafkaProducer.KafkaTransactionState beginTransaction() throws FlinkKafkaException

Overrides:
beginTransaction in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,FlinkKafkaProducer.KafkaTransactionState,FlinkKafkaProducer.KafkaTransactionContext>
Throws:
FlinkKafkaException

protected void preCommit(FlinkKafkaProducer.KafkaTransactionState transaction) throws FlinkKafkaException

Overrides:
preCommit in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,FlinkKafkaProducer.KafkaTransactionState,FlinkKafkaProducer.KafkaTransactionContext>
Throws:
FlinkKafkaException

protected void commit(FlinkKafkaProducer.KafkaTransactionState transaction)

Overrides:
commit in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,FlinkKafkaProducer.KafkaTransactionState,FlinkKafkaProducer.KafkaTransactionContext>

protected void recoverAndCommit(FlinkKafkaProducer.KafkaTransactionState transaction)

Overrides:
recoverAndCommit in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,FlinkKafkaProducer.KafkaTransactionState,FlinkKafkaProducer.KafkaTransactionContext>

protected void abort(FlinkKafkaProducer.KafkaTransactionState transaction)

Overrides:
abort in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,FlinkKafkaProducer.KafkaTransactionState,FlinkKafkaProducer.KafkaTransactionContext>

protected void recoverAndAbort(FlinkKafkaProducer.KafkaTransactionState transaction)

Overrides:
recoverAndAbort in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,FlinkKafkaProducer.KafkaTransactionState,FlinkKafkaProducer.KafkaTransactionContext>

protected void acknowledgeMessage()

ATTENTION to subclass implementors: When overriding this method, please always call super.acknowledgeMessage() to keep the invariants of the internal bookkeeping of the producer. If not, be sure to know what you are doing.

public void snapshotState(org.apache.flink.runtime.state.FunctionSnapshotContext context) throws Exception

Specified by:
snapshotState in interface org.apache.flink.streaming.api.checkpoint.CheckpointedFunction
Overrides:
snapshotState in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,FlinkKafkaProducer.KafkaTransactionState,FlinkKafkaProducer.KafkaTransactionContext>
Throws:
Exception

public void initializeState(org.apache.flink.runtime.state.FunctionInitializationContext context) throws Exception

Specified by:
initializeState in interface org.apache.flink.streaming.api.checkpoint.CheckpointedFunction
Overrides:
initializeState in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,FlinkKafkaProducer.KafkaTransactionState,FlinkKafkaProducer.KafkaTransactionContext>
Throws:
Exception

protected Optional<FlinkKafkaProducer.KafkaTransactionContext> initializeUserContext()

Overrides:
initializeUserContext in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,FlinkKafkaProducer.KafkaTransactionState,FlinkKafkaProducer.KafkaTransactionContext>

protected void finishRecoveringContext(Collection<FlinkKafkaProducer.KafkaTransactionState> handledTransactions)

Overrides:
finishRecoveringContext in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,FlinkKafkaProducer.KafkaTransactionState,FlinkKafkaProducer.KafkaTransactionContext>

protected FlinkKafkaInternalProducer<byte[],byte[]> createProducer()

protected void checkErroneous() throws FlinkKafkaException

Throws:
FlinkKafkaException

protected static int[] getPartitionsByTopic(String topic, org.apache.kafka.clients.producer.Producer<byte[],byte[]> producer)

public static long getTransactionTimeout(Properties producerConfig)
Copyright © 2014–2023 The Apache Software Foundation. All rights reserved.