@Internal
public abstract class FlinkKafkaProducerBase<IN>
extends org.apache.flink.streaming.api.functions.sink.RichSinkFunction<IN>
implements org.apache.flink.streaming.api.checkpoint.CheckpointedFunction

Type Parameters: IN - Type of the messages to write into Kafka.
Please note that this producer provides at-least-once reliability guarantees when checkpoints are enabled and setFlushOnCheckpoint(true) is set. Otherwise, the producer doesn't provide any reliability guarantees.
| Modifier and Type | Field and Description |
|---|---|
| `protected Exception` | `asyncException` - Errors encountered in the async producer are stored here. |
| `protected org.apache.kafka.clients.producer.Callback` | `callback` - The callback that handles error propagation or logging callbacks. |
| `protected String` | `defaultTopicId` - The name of the default topic this producer is writing data to. |
| `protected FlinkKafkaPartitioner<IN>` | `flinkKafkaPartitioner` - User-provided partitioner for assigning an object to a Kafka partition for each topic. |
| `protected boolean` | `flushOnCheckpoint` - If true, the producer will wait until all outstanding records have been sent to the broker. |
| `static String` | `KEY_DISABLE_METRICS` - Configuration key for disabling the metrics reporting. |
| `protected boolean` | `logFailuresOnly` - Flag indicating whether to accept failures (and log them), or to fail on failures. |
| `protected long` | `pendingRecords` - Number of unacknowledged records. |
| `protected org.apache.flink.util.SerializableObject` | `pendingRecordsLock` - Lock for accessing the pending records. |
| `protected org.apache.kafka.clients.producer.KafkaProducer<byte[],byte[]>` | `producer` - KafkaProducer instance. |
| `protected Properties` | `producerConfig` - User-defined properties for the Producer. |
| `protected KeyedSerializationSchema<IN>` | `schema` - (Serializable) SerializationSchema for turning the objects used with Flink into byte[] for Kafka. |
| `protected Map<String,int[]>` | `topicPartitionsMap` - Partitions of each topic. |
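The `pendingRecordsLock` / `pendingRecords` pair above implements a wait-until-acknowledged pattern: sends increment the counter, producer callbacks decrement it, and a flush blocks until it reaches zero. A minimal plain-Java sketch of that pattern, independent of Flink and Kafka (the `PendingTracker` class and its method names are illustrative, not part of the API):

```java
// Illustrative sketch of the pending-records bookkeeping described above.
// PendingTracker is a hypothetical stand-in, not a Flink class.
public class PendingTracker {
    private final Object pendingRecordsLock = new Object(); // guards pendingRecords
    private long pendingRecords;

    // Called before handing a record to the async producer.
    public void recordSent() {
        synchronized (pendingRecordsLock) {
            pendingRecords++;
        }
    }

    // Called from the producer's completion callback.
    public void recordAcknowledged() {
        synchronized (pendingRecordsLock) {
            pendingRecords--;
            pendingRecordsLock.notifyAll(); // wake a thread waiting in awaitAllAcknowledged()
        }
    }

    // Blocks until every outstanding record has been acknowledged,
    // mirroring what a flush-on-checkpoint sink must do before a snapshot completes.
    public void awaitAllAcknowledged() throws InterruptedException {
        synchronized (pendingRecordsLock) {
            while (pendingRecords > 0) {
                pendingRecordsLock.wait();
            }
        }
    }

    public long numPending() {
        synchronized (pendingRecordsLock) {
            return pendingRecords;
        }
    }
}
```

Because the Kafka callback runs on a different thread than the checkpointing thread, both sides must synchronize on the same lock object, which is why the real field is a dedicated `SerializableObject` rather than the sink itself.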
| Constructor and Description |
|---|
| `FlinkKafkaProducerBase(String defaultTopicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaPartitioner<IN> customPartitioner)` - The main constructor for creating a FlinkKafkaProducer. |
| Modifier and Type | Method and Description |
|---|---|
| `protected void` | `checkErroneous()` |
| `void` | `close()` |
| `protected abstract void` | `flush()` - Flush pending records. |
| `protected <K,V> org.apache.kafka.clients.producer.KafkaProducer<K,V>` | `getKafkaProducer(Properties props)` - Used for testing only. |
| `protected static int[]` | `getPartitionsByTopic(String topic, org.apache.kafka.clients.producer.KafkaProducer<byte[],byte[]> producer)` |
| `static Properties` | `getPropertiesFromBrokerList(String brokerList)` |
| `void` | `initializeState(org.apache.flink.runtime.state.FunctionInitializationContext context)` |
| `void` | `invoke(IN next, org.apache.flink.streaming.api.functions.sink.SinkFunction.Context context)` - Called when new data arrives to the sink, and forwards it to Kafka. |
| `protected long` | `numPendingRecords()` |
| `void` | `open(org.apache.flink.configuration.Configuration configuration)` - Initializes the connection to Kafka. |
| `void` | `setFlushOnCheckpoint(boolean flush)` - If set to true, the Flink producer will wait for all outstanding messages in the Kafka buffers to be acknowledged by the Kafka producer on a checkpoint. |
| `void` | `setLogFailuresOnly(boolean logFailuresOnly)` - Defines whether the producer should fail on errors, or only log them. |
| `void` | `snapshotState(org.apache.flink.runtime.state.FunctionSnapshotContext ctx)` |
Methods inherited from class org.apache.flink.api.common.functions.AbstractRichFunction: getIterationRuntimeContext, getRuntimeContext, setRuntimeContext

public static final String KEY_DISABLE_METRICS
protected final Properties producerConfig
protected final String defaultTopicId
protected final KeyedSerializationSchema<IN> schema
protected final FlinkKafkaPartitioner<IN> flinkKafkaPartitioner
protected boolean logFailuresOnly
protected boolean flushOnCheckpoint
protected transient org.apache.kafka.clients.producer.KafkaProducer<byte[],byte[]> producer
protected transient org.apache.kafka.clients.producer.Callback callback
protected transient volatile Exception asyncException
protected final org.apache.flink.util.SerializableObject pendingRecordsLock
protected long pendingRecords
public FlinkKafkaProducerBase(String defaultTopicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaPartitioner<IN> customPartitioner)

Parameters:
defaultTopicId - The default topic to write data to
serializationSchema - A serializable serialization schema for turning user objects into a kafka-consumable byte[] supporting key/value messages
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers.' is the only required argument.
customPartitioner - A serializable partitioner for assigning messages to Kafka partitions. Passing null will use Kafka's partitioner.

public void setLogFailuresOnly(boolean logFailuresOnly)

Parameters:
logFailuresOnly - The flag to indicate logging-only on exceptions.

public void setFlushOnCheckpoint(boolean flush)

Parameters:
flush - Flag indicating the flushing mode (true = flush on checkpoint)

@VisibleForTesting
protected <K,V> org.apache.kafka.clients.producer.KafkaProducer<K,V> getKafkaProducer(Properties props)
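The `logFailuresOnly` flag selects between the two callback behaviors documented above: log-and-continue, or remember the error so the job fails. A hedged, self-contained sketch of that decision logic in plain Java (`ErrorPolicy` is illustrative; the real sink wires equivalent logic into an `org.apache.kafka.clients.producer.Callback`):

```java
// Sketch of the error-handling choice behind setLogFailuresOnly(...).
// ErrorPolicy is a hypothetical stand-in, not the actual Flink class.
public class ErrorPolicy {
    private final boolean logFailuresOnly;
    private Exception asyncException; // surfaced later, e.g. by a checkErroneous()-style method

    public ErrorPolicy(boolean logFailuresOnly) {
        this.logFailuresOnly = logFailuresOnly;
    }

    // Invoked once per record by the async producer when the send completes.
    public void onCompletion(Exception error) {
        if (error == null) {
            return; // record acknowledged successfully
        }
        if (logFailuresOnly) {
            // Accept the failure: log it and keep going (weaker guarantees).
            System.err.println("Error while sending record: " + error.getMessage());
        } else if (asyncException == null) {
            // Remember the first failure so a later call on the main thread can rethrow it.
            asyncException = error;
        }
    }

    public Exception pendingError() {
        return asyncException;
    }
}
```

Note the asymmetry: with `logFailuresOnly(true)` the at-least-once guarantee is lost, because failed sends are swallowed rather than failing the checkpoint.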
public void open(org.apache.flink.configuration.Configuration configuration)
throws Exception

Initializes the connection to Kafka.

Specified by: open in interface org.apache.flink.api.common.functions.RichFunction
Overrides: open in class org.apache.flink.api.common.functions.AbstractRichFunction
Throws: Exception

public void invoke(IN next, org.apache.flink.streaming.api.functions.sink.SinkFunction.Context context)
throws Exception

Called when new data arrives to the sink, and forwards it to Kafka.
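When a custom `FlinkKafkaPartitioner` is set, `invoke(...)` needs the partition array of the target topic, and the documented `topicPartitionsMap` field caches that array per topic so metadata is not re-fetched on every record. A simplified, self-contained sketch of that caching lookup (class and method names here are illustrative; `fetchPartitions` stands in for the metadata query done through the real `KafkaProducer`):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of the per-topic partition caching that backs
// partition selection in invoke(). Not the actual Flink implementation.
public class PartitionCache {
    private final Map<String, int[]> topicPartitionsMap = new HashMap<>();

    public int[] partitionsFor(String topic) {
        // Query partition metadata only once per topic, then reuse the cached array.
        return topicPartitionsMap.computeIfAbsent(topic, this::fetchPartitions);
    }

    // Hypothetical metadata fetch; a real sink would ask the Kafka client,
    // as getPartitionsByTopic(...) does.
    protected int[] fetchPartitions(String topic) {
        return new int[] {0, 1, 2};
    }
}
```

The cached array can then be handed to the user partitioner, which picks one element of it as the target partition for the record.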
public void close()
throws Exception

Specified by: close in interface org.apache.flink.api.common.functions.RichFunction
Overrides: close in class org.apache.flink.api.common.functions.AbstractRichFunction
Throws: Exception

protected abstract void flush()

Flush pending records.
public void initializeState(org.apache.flink.runtime.state.FunctionInitializationContext context)
throws Exception

Specified by: initializeState in interface org.apache.flink.streaming.api.checkpoint.CheckpointedFunction
Throws: Exception

public void snapshotState(org.apache.flink.runtime.state.FunctionSnapshotContext ctx)
throws Exception

Specified by: snapshotState in interface org.apache.flink.streaming.api.checkpoint.CheckpointedFunction
Throws: Exception

public static Properties getPropertiesFromBrokerList(String brokerList)
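Per the constructor notes, `bootstrap.servers` is the only required producer property, so a broker-list helper essentially has to wrap a comma-separated host list into a `Properties` object under that key. A sketch of that shape (a guess at the contract, not the actual Flink source; the `BrokerProps` class is illustrative):

```java
import java.util.Properties;

// Illustrative equivalent of a broker-list-to-Properties helper: the
// essential output is a 'bootstrap.servers' entry, mirroring the documented
// requirement that it is the only mandatory producer argument.
public class BrokerProps {
    public static Properties fromBrokerList(String brokerList) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", brokerList);
        return props;
    }
}
```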
protected static int[] getPartitionsByTopic(String topic, org.apache.kafka.clients.producer.KafkaProducer<byte[],byte[]> producer)
@VisibleForTesting protected long numPendingRecords()
Copyright © 2014–2023 The Apache Software Foundation. All rights reserved.