A B C D E F G H I J K L M N O P R S T U V W 

A

abort(FlinkKafkaProducer.KafkaTransactionState) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.

abortTransaction() - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer

AbstractFetcher<T,KPH> - Class in org.apache.flink.streaming.connectors.kafka.internals
Base class for all fetchers, which implement the connections to Kafka brokers and pull records from Kafka partitions.
AbstractFetcher(SourceFunction.SourceContext<T>, Map<KafkaTopicPartition, Long>, SerializedValue<WatermarkStrategy<T>>, ProcessingTimeService, long, ClassLoader, MetricGroup, boolean) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher

AbstractPartitionDiscoverer - Class in org.apache.flink.streaming.connectors.kafka.internals
Base class for all partition discoverers.
AbstractPartitionDiscoverer(KafkaTopicsDescriptor, int, int) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer

AbstractPartitionDiscoverer.ClosedException - Exception in org.apache.flink.streaming.connectors.kafka.internals
Thrown if this discoverer was used to discover partitions after it was closed.
AbstractPartitionDiscoverer.WakeupException - Exception in org.apache.flink.streaming.connectors.kafka.internals
Signaling exception to indicate that an actual Kafka call was interrupted.
acknowledgeMessage() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
ATTENTION to subclass implementors: When overriding this method, please always call super.acknowledgeMessage() to keep the invariants of the internal bookkeeping of the producer.
add(E) - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
Adds the element to the queue, or fails with an exception if the queue is closed.
addDiscoveredPartitions(List<KafkaTopicPartition>) - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
Adds a list of newly discovered partitions to the fetcher for consuming.
addIfOpen(E) - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
Tries to add an element to the queue, if the queue is still open.
addReader(int) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator

addSplitsBack(List<KafkaPartitionSplit>, int) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator

adjustAutoCommitConfig(Properties, OffsetCommitMode) - Static method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
Makes sure that auto commit is disabled when our offset commit mode is ON_CHECKPOINTS.
applyReadableMetadata(List<String>, DataType) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource

applyWatermark(WatermarkStrategy<RowData>) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource

applyWritableMetadata(List<String>, DataType) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink

asRecord() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher.KafkaShuffleElement

assign(KafkaTopicPartition, int) - Static method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionAssigner
Returns the index of the target subtask that a specific Kafka partition should be assigned to.
assign(String, int, int) - Static method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionAssigner

assignedPartitions() - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumState

assignTimestampsAndWatermarks(WatermarkStrategy<T>) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
Sets the given WatermarkStrategy on this consumer.
assignTimestampsAndWatermarks(AssignerWithPunctuatedWatermarks<T>) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
assignTimestampsAndWatermarks(AssignerWithPeriodicWatermarks<T>) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
asSummaryString() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink

asSummaryString() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource

asWatermark() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher.KafkaShuffleElement

asyncException - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
Errors encountered in the async producer are stored here.
asyncException - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
Errors encountered in the async producer are stored here.

B

beginningOffsets(Collection<TopicPartition>) - Method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer.PartitionOffsetsRetriever
List beginning offsets for the specified partitions.
beginningOffsets(Collection<TopicPartition>) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator.PartitionOffsetsRetrieverImpl

beginTransaction() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.

beginTransaction() - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer

BoundedMode - Enum in org.apache.flink.streaming.connectors.kafka.config
End modes for the Kafka Consumer.
boundedMode - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
The bounded mode for the contained consumer (default is an unbounded data stream).
boundedTimestampMillis - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
The bounded timestamp to locate partition offsets; only relevant when the bounded mode is BoundedMode.TIMESTAMP.
build() - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
Constructs the KafkaRecordSerializationSchema with the configured properties.
build() - Method in class org.apache.flink.connector.kafka.sink.KafkaSinkBuilder
Constructs the KafkaSink with the configured properties.
build() - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
Builds the KafkaSource.
builder() - Static method in interface org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema
Creates a default schema builder to provide common building blocks, i.e. key serialization, value serialization, and partitioning.
builder() - Static method in class org.apache.flink.connector.kafka.sink.KafkaSink
Creates a KafkaSinkBuilder to construct a new KafkaSink.
builder() - Static method in class org.apache.flink.connector.kafka.source.KafkaSource
Gets a KafkaSourceBuilder to build a KafkaSource.
BYTES_CONSUMED_TOTAL - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
 

C

callback - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
The callback that handles error propagation or logging.
callback - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
The callback that handles error propagation or logging.
cancel() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase

cancel() - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher

cancel() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher

checkAndThrowException() - Method in class org.apache.flink.streaming.connectors.kafka.internals.ExceptionProxy
Checks whether an exception has been set via ExceptionProxy.reportError(Throwable).
checkErroneous() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.

checkErroneous() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase

checkpointLock - Variable in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
The lock that guarantees that record emission and state updates are atomic, from the view of taking a checkpoint.
CLIENT_ID_PREFIX - Static variable in class org.apache.flink.connector.kafka.source.KafkaSourceOptions

ClosableBlockingQueue<E> - Class in org.apache.flink.streaming.connectors.kafka.internals
A special form of blocking queue with two additions: the queue can be closed atomically when empty.
ClosableBlockingQueue() - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
Creates a new empty queue.
ClosableBlockingQueue(int) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
Creates a new empty queue, reserving space for at least the specified number of elements.
ClosableBlockingQueue(Collection<? extends E>) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
Creates a new queue that contains the given elements.
close() - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator

close() - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator.PartitionOffsetsRetrieverImpl

close() - Method in class org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader

close() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase

close() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.

close() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase

close() - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer
Closes the partition discoverer, cleaning up all Kafka connections.
close() - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
Tries to close the queue.
close() - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer

close(Duration) - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer

close() - Method in class org.apache.flink.streaming.connectors.kafka.internals.Handover
Closes the handover.
closeConnections() - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer
Closes all established connections.
closeConnections() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaPartitionDiscoverer

ClosedException() - Constructor for exception org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer.ClosedException

ClosedException() - Constructor for exception org.apache.flink.streaming.connectors.kafka.internals.Handover.ClosedException

commit(FlinkKafkaProducer.KafkaTransactionState) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
 
COMMIT_OFFSETS_ON_CHECKPOINT - Static variable in class org.apache.flink.connector.kafka.source.KafkaSourceOptions

commitInternalOffsetsToKafka(Map<KafkaTopicPartition, Long>, KafkaCommitCallback) - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
Commits the given partition offsets to the Kafka brokers (or to ZooKeeper for older Kafka versions).
commitOffsets(Map<TopicPartition, OffsetAndMetadata>, OffsetCommitCallback) - Method in class org.apache.flink.connector.kafka.source.reader.fetcher.KafkaSourceFetcherManager

COMMITS_FAILED_METRIC_COUNTER - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics

COMMITS_FAILED_METRICS_COUNTER - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaConsumerMetricConstants

COMMITS_SUCCEEDED_METRIC_COUNTER - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics

COMMITS_SUCCEEDED_METRICS_COUNTER - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaConsumerMetricConstants

COMMITTED_OFFSET - Static variable in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit

COMMITTED_OFFSET_METRIC_GAUGE - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics

COMMITTED_OFFSETS_METRICS_GAUGE - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaConsumerMetricConstants

committedOffsets() - Static method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
Get an OffsetsInitializer which initializes the offsets to the committed offsets.
committedOffsets(OffsetResetStrategy) - Static method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
Get an OffsetsInitializer which initializes the offsets to the committed offsets.
committedOffsets(Collection<TopicPartition>) - Method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer.PartitionOffsetsRetriever
The group id should be set for the KafkaSource before invoking this method.
committedOffsets(Collection<TopicPartition>) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator.PartitionOffsetsRetrieverImpl

commitTransaction() - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer

Comparator() - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition.Comparator

compare(KafkaTopicPartition, KafkaTopicPartition) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition.Comparator

consumedDataType - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
Data type of the consumed records.
CONSUMER_FETCH_MANAGER_GROUP - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics

ContextStateSerializer() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer
Deprecated.

ContextStateSerializer() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.ContextStateSerializer

ContextStateSerializerSnapshot() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer.ContextStateSerializerSnapshot
Deprecated.

ContextStateSerializerSnapshot() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.ContextStateSerializer.ContextStateSerializerSnapshot

copy(FlinkKafkaProducer.KafkaTransactionContext) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer
Deprecated.

copy(FlinkKafkaProducer.KafkaTransactionContext, FlinkKafkaProducer.KafkaTransactionContext) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer
Deprecated.

copy(DataInputView, DataOutputView) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer
Deprecated.

copy(FlinkKafkaProducer.NextTransactionalIdHint) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHintSerializer
Deprecated.

copy(FlinkKafkaProducer.NextTransactionalIdHint, FlinkKafkaProducer.NextTransactionalIdHint) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHintSerializer
Deprecated.

copy(DataInputView, DataOutputView) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHintSerializer
Deprecated.

copy(FlinkKafkaProducer.KafkaTransactionState) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.TransactionStateSerializer
Deprecated.

copy(FlinkKafkaProducer.KafkaTransactionState, FlinkKafkaProducer.KafkaTransactionState) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.TransactionStateSerializer
Deprecated.

copy(DataInputView, DataOutputView) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.TransactionStateSerializer
Deprecated.

copy() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink

copy() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
 
createCommitter() - Method in class org.apache.flink.connector.kafka.sink.KafkaSink

createDynamicTableSink(DynamicTableFactory.Context) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory

createDynamicTableSink(DynamicTableFactory.Context) - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory

createDynamicTableSource(DynamicTableFactory.Context) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory

createDynamicTableSource(DynamicTableFactory.Context) - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory

createEnumerator(SplitEnumeratorContext<KafkaPartitionSplit>) - Method in class org.apache.flink.connector.kafka.source.KafkaSource

createFetcher(SourceFunction.SourceContext<T>, Map<KafkaTopicPartition, Long>, SerializedValue<WatermarkStrategy<T>>, StreamingRuntimeContext, OffsetCommitMode, MetricGroup, boolean) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
Deprecated.

createFetcher(SourceFunction.SourceContext<T>, Map<KafkaTopicPartition, Long>, SerializedValue<WatermarkStrategy<T>>, StreamingRuntimeContext, OffsetCommitMode, MetricGroup, boolean) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
Creates the fetcher that connects to the Kafka brokers, pulls data, deserializes the data, and emits it into the data streams.
createFetcher(SourceFunction.SourceContext<T>, Map<KafkaTopicPartition, Long>, SerializedValue<WatermarkStrategy<T>>, StreamingRuntimeContext, OffsetCommitMode, MetricGroup, boolean) - Method in class org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffleConsumer

createInstance() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer
Deprecated.

createInstance() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHintSerializer
Deprecated.

createInstance() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.TransactionStateSerializer
Deprecated.

createKafkaPartitionHandle(KafkaTopicPartition) - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
Creates the Kafka version specific representation of the given topic partition.
createKafkaPartitionHandle(KafkaTopicPartition) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher

createKafkaSource(DeserializationSchema<RowData>, DeserializationSchema<RowData>, TypeInformation<RowData>) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource

createKafkaTableSink(DataType, EncodingFormat<SerializationSchema<RowData>>, EncodingFormat<SerializationSchema<RowData>>, int[], int[], String, String, Properties, FlinkKafkaPartitioner<RowData>, DeliveryGuarantee, Integer, String) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory

createKafkaTableSource(DataType, DecodingFormat<DeserializationSchema<RowData>>, DecodingFormat<DeserializationSchema<RowData>>, int[], int[], String, List<String>, Pattern, Properties, StartupMode, Map<KafkaTopicPartition, Long>, long, BoundedMode, Map<KafkaTopicPartition, Long>, long, String) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory

createPartitionDiscoverer(KafkaTopicsDescriptor, int, int) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
Deprecated.

createPartitionDiscoverer(KafkaTopicsDescriptor, int, int) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
Creates the partition discoverer that is used to find new partitions for this subtask.
createProducer() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.

createReader(SourceReaderContext) - Method in class org.apache.flink.connector.kafka.source.KafkaSource

createRuntimeDecoder(DynamicTableSource.Context, DataType) - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.DecodingFormatWrapper

createRuntimeEncoder(DynamicTableSink.Context, DataType) - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.EncodingFormatWrapper

createWriter(Sink.InitContext) - Method in class org.apache.flink.connector.kafka.sink.KafkaSink

CURRENT_OFFSET_METRIC_GAUGE - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics

CURRENT_OFFSETS_METRICS_GAUGE - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaConsumerMetricConstants
 

D

DecodingFormatWrapper(DecodingFormat<DeserializationSchema<RowData>>) - Constructor for class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.DecodingFormatWrapper

DEFAULT_KAFKA_PRODUCERS_POOL_SIZE - Static variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
Default number of KafkaProducers in the pool.
DEFAULT_KAFKA_TRANSACTION_TIMEOUT - Static variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
Default value for the Kafka transaction timeout.
DEFAULT_POLL_TIMEOUT - Static variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
Deprecated.
From Kafka's Javadoc: the time, in milliseconds, spent waiting in poll if data is not available.
DefaultKafkaSinkContext - Class in org.apache.flink.connector.kafka.sink
Context providing information to assist constructing a ProducerRecord.
DefaultKafkaSinkContext(int, int, Properties) - Constructor for class org.apache.flink.connector.kafka.sink.DefaultKafkaSinkContext

defaultTopicId - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
The name of the default topic this producer is writing data to.
defaultTopicId - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
The name of the default topic this producer is writing data to.
DELIVERY_GUARANTEE - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions

deserialize(int, byte[]) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumStateSerializer

deserialize(ConsumerRecord<byte[], byte[]>, Collector<T>) - Method in interface org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema
Deserializes the byte message.
deserialize(int, byte[]) - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplitSerializer

deserialize(DataInputView) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer
Deprecated.

deserialize(FlinkKafkaProducer.KafkaTransactionContext, DataInputView) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer
Deprecated.

deserialize(DataInputView) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHintSerializer
Deprecated.

deserialize(FlinkKafkaProducer.NextTransactionalIdHint, DataInputView) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHintSerializer
Deprecated.

deserialize(DataInputView) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.TransactionStateSerializer
Deprecated.

deserialize(FlinkKafkaProducer.KafkaTransactionState, DataInputView) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.TransactionStateSerializer
Deprecated.

deserialize(ConsumerRecord<byte[], byte[]>) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaDeserializationSchemaWrapper

deserialize(ConsumerRecord<byte[], byte[]>, Collector<T>) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaDeserializationSchemaWrapper

deserialize(ConsumerRecord<byte[], byte[]>) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher.KafkaShuffleElementDeserializer

deserialize(ConsumerRecord<byte[], byte[]>) - Method in interface org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema
Deserializes the Kafka record.
deserialize(ConsumerRecord<byte[], byte[]>, Collector<T>) - Method in interface org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema
Deserializes the Kafka record.
deserialize(ConsumerRecord<byte[], byte[]>) - Method in class org.apache.flink.streaming.util.serialization.JSONKeyValueDeserializationSchema

deserialize(byte[], byte[], String, int, long) - Method in interface org.apache.flink.streaming.util.serialization.KeyedDeserializationSchema
Deprecated.
Deserializes the byte message.
deserialize(ConsumerRecord<byte[], byte[]>) - Method in interface org.apache.flink.streaming.util.serialization.KeyedDeserializationSchema
Deprecated.

deserialize(ConsumerRecord<byte[], byte[]>) - Method in class org.apache.flink.streaming.util.serialization.TypeInformationKeyValueSerializationSchema

deserializer - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
The schema to convert between Kafka's byte messages and Flink's objects.
DISABLED - Static variable in class org.apache.flink.streaming.connectors.kafka.table.SinkBufferFlushMode

disableFilterRestoredPartitionsWithSubscribedTopics() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
By default, when restoring from a checkpoint / savepoint, the consumer always ignores restored partitions that are no longer associated with the currently specified topics or topic pattern to subscribe to.
discoverPartitions() - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer
Executes a partition discovery attempt for this subtask.
doCommitInternalOffsetsToKafka(Map<KafkaTopicPartition, Long>, KafkaCommitCallback) - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher

doCommitInternalOffsetsToKafka(Map<KafkaTopicPartition, Long>, KafkaCommitCallback) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher

dropLeaderData(List<KafkaTopicPartitionLeader>) - Static method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition
 

E

earliest() - Static method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
Get an OffsetsInitializer which initializes the offsets to the earliest available offsets of each partition.
EARLIEST_OFFSET - Static variable in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit

EARLIEST_OFFSET - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateSentinel
Magic number indicating that the partition should start from the earliest offset.
emitRecord(ConsumerRecord<byte[], byte[]>, SourceOutput<T>, KafkaPartitionSplitState) - Method in class org.apache.flink.connector.kafka.source.reader.KafkaRecordEmitter

emitRecordsWithTimestamps(Queue<T>, KafkaTopicPartitionState<T, KPH>, long, long) - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
Emits a record, attaching a timestamp to it.
emitWatermark(Watermark) - Method in class org.apache.flink.streaming.connectors.kafka.internals.SourceContextWatermarkOutputAdapter

EncodingFormatWrapper(EncodingFormat<SerializationSchema<RowData>>) - Constructor for class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.EncodingFormatWrapper

endOffsets(Collection<TopicPartition>) - Method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer.PartitionOffsetsRetriever
List end offsets for the specified partitions.
endOffsets(Collection<TopicPartition>) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator.PartitionOffsetsRetrieverImpl

equals(Object) - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit

equals(Object) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.KafkaTransactionContext
Deprecated.

equals(Object) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.KafkaTransactionState
Deprecated.

equals(Object) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHint
Deprecated.

equals(Object) - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue

equals(Object) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition

equals(Object) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionLeader

equals(Object) - Method in class org.apache.flink.streaming.connectors.kafka.partitioner.FlinkFixedPartitioner

equals(Object) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink

equals(Object) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource

equals(Object) - Method in class org.apache.flink.streaming.connectors.kafka.table.SinkBufferFlushMode

equals(Object) - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.DecodingFormatWrapper

equals(Object) - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.EncodingFormatWrapper

ExceptionProxy - Class in org.apache.flink.streaming.connectors.kafka.internals
A proxy that communicates exceptions between threads.
ExceptionProxy(Thread) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.ExceptionProxy
Creates an exception proxy that interrupts the given thread upon report of an exception.
extractTimestamp(T, long) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState

extractTimestamp(T, long) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateWithWatermarkGenerator
 

F

factoryIdentifier() - 类 中的方法org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
 
factoryIdentifier() - 类 中的方法org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory
 
fetch() - 类 中的方法org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
 
fetchOffsetsWithTimestamp(Collection<KafkaTopicPartition>, long) - 类 中的方法org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
已过时。
 
fetchOffsetsWithTimestamp(Collection<KafkaTopicPartition>, long) - 类 中的方法org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
 
finishRecoveringContext(Collection<FlinkKafkaProducer.KafkaTransactionState>) - 类 中的方法org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
已过时。
 
FlinkFixedPartitioner<T> - org.apache.flink.streaming.connectors.kafka.partitioner中的类
A partitioner ensuring that each internal Flink partition ends up in one Kafka partition.
FlinkFixedPartitioner() - 类 的构造器org.apache.flink.streaming.connectors.kafka.partitioner.FlinkFixedPartitioner
 
FlinkKafkaConsumer<T> - org.apache.flink.streaming.connectors.kafka中的类
已过时。
FlinkKafkaConsumer(String, DeserializationSchema<T>, Properties) - 类 的构造器org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
已过时。
Creates a new Kafka streaming source consumer.
FlinkKafkaConsumer(String, KafkaDeserializationSchema<T>, Properties) - 类 的构造器org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
已过时。
Creates a new Kafka streaming source consumer.
FlinkKafkaConsumer(List<String>, DeserializationSchema<T>, Properties) - 类 的构造器org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
已过时。
Creates a new Kafka streaming source consumer.
FlinkKafkaConsumer(List<String>, KafkaDeserializationSchema<T>, Properties) - 类 的构造器org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
已过时。
Creates a new Kafka streaming source consumer.
FlinkKafkaConsumer(Pattern, DeserializationSchema<T>, Properties) - 类 的构造器org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
已过时。
Creates a new Kafka streaming source consumer.
FlinkKafkaConsumer(Pattern, KafkaDeserializationSchema<T>, Properties) - 类 的构造器org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
已过时。
Creates a new Kafka streaming source consumer.
FlinkKafkaConsumerBase<T> - Class in org.apache.flink.streaming.connectors.kafka
Base class of all Flink Kafka Consumer data sources.
FlinkKafkaConsumerBase(List<String>, Pattern, KafkaDeserializationSchema<T>, long, boolean) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
Base constructor.
FlinkKafkaErrorCode - Enum in org.apache.flink.streaming.connectors.kafka
Error codes used in FlinkKafkaException.
FlinkKafkaException - Exception in org.apache.flink.streaming.connectors.kafka
Exception used by FlinkKafkaProducer and FlinkKafkaConsumer.
FlinkKafkaException(FlinkKafkaErrorCode, String) - Constructor for exception org.apache.flink.streaming.connectors.kafka.FlinkKafkaException
 
FlinkKafkaException(FlinkKafkaErrorCode, String, Throwable) - Constructor for exception org.apache.flink.streaming.connectors.kafka.FlinkKafkaException
 
FlinkKafkaInternalProducer<K,V> - Class in org.apache.flink.streaming.connectors.kafka.internals
Internal Flink Kafka producer.
FlinkKafkaInternalProducer(Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
 
flinkKafkaPartitioner - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
User-provided partitioner for assigning an object to a Kafka partition for each topic.
FlinkKafkaPartitioner<T> - Class in org.apache.flink.streaming.connectors.kafka.partitioner
A FlinkKafkaPartitioner wraps logic on how to partition records across partitions of multiple Kafka topics.
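The bundled FlinkFixedPartitioner, for example, pins each sink subtask to a single partition of the target topic. A minimal plain-Java sketch of that routing rule (the class and method names here are illustrative stand-ins, not the Flink classes themselves):

```java
// Sketch of fixed-style partitioner routing: each sink subtask always
// writes to the same Kafka partition of the target topic.
public class FixedPartitioning {
    /**
     * @param parallelInstanceId index of this sink subtask
     * @param partitions         partitions of the target topic
     * @return the partition this subtask writes to
     */
    public static int partition(int parallelInstanceId, int[] partitions) {
        if (partitions == null || partitions.length == 0) {
            throw new IllegalArgumentException("partitions must be non-empty");
        }
        return partitions[parallelInstanceId % partitions.length];
    }

    public static void main(String[] args) {
        int[] partitions = {0, 1, 2};
        // subtasks 0..3 map to partitions 0, 1, 2, 0
        for (int subtask = 0; subtask < 4; subtask++) {
            System.out.println("subtask " + subtask + " -> partition "
                    + partition(subtask, partitions));
        }
    }
}
```

The modulo wrap-around means more subtasks than partitions simply reuse partitions in round-robin order.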
FlinkKafkaPartitioner() - Constructor for class org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner
 
FlinkKafkaProducer<IN> - Class in org.apache.flink.streaming.connectors.kafka
Deprecated.
Please use KafkaSink.
FlinkKafkaProducer(String, String, SerializationSchema<IN>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
Creates a FlinkKafkaProducer for a given topic.
FlinkKafkaProducer(String, SerializationSchema<IN>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
Creates a FlinkKafkaProducer for a given topic.
FlinkKafkaProducer(String, SerializationSchema<IN>, Properties, Optional<FlinkKafkaPartitioner<IN>>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
Creates a FlinkKafkaProducer for a given topic.
FlinkKafkaProducer(String, SerializationSchema<IN>, Properties, FlinkKafkaPartitioner<IN>, FlinkKafkaProducer.Semantic, int) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
Creates a FlinkKafkaProducer for a given topic.
FlinkKafkaProducer(String, String, KeyedSerializationSchema<IN>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
FlinkKafkaProducer(String, KeyedSerializationSchema<IN>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
FlinkKafkaProducer(String, KeyedSerializationSchema<IN>, Properties, FlinkKafkaProducer.Semantic) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
FlinkKafkaProducer(String, KeyedSerializationSchema<IN>, Properties, Optional<FlinkKafkaPartitioner<IN>>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
FlinkKafkaProducer(String, KeyedSerializationSchema<IN>, Properties, Optional<FlinkKafkaPartitioner<IN>>, FlinkKafkaProducer.Semantic, int) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
FlinkKafkaProducer(String, KafkaSerializationSchema<IN>, Properties, FlinkKafkaProducer.Semantic) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
Creates a FlinkKafkaProducer for a given topic.
FlinkKafkaProducer(String, KafkaSerializationSchema<IN>, Properties, FlinkKafkaProducer.Semantic, int) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
Creates a FlinkKafkaProducer for a given topic.
FlinkKafkaProducer.ContextStateSerializer - Class in org.apache.flink.streaming.connectors.kafka
Deprecated.
FlinkKafkaProducer.ContextStateSerializer.ContextStateSerializerSnapshot - Class in org.apache.flink.streaming.connectors.kafka
Deprecated.
Serializer configuration snapshot for compatibility and format evolution.
FlinkKafkaProducer.KafkaTransactionContext - Class in org.apache.flink.streaming.connectors.kafka
Deprecated.
Context associated with this instance of the FlinkKafkaProducer.
FlinkKafkaProducer.KafkaTransactionState - Class in org.apache.flink.streaming.connectors.kafka
Deprecated.
State for handling transactions.
FlinkKafkaProducer.NextTransactionalIdHint - Class in org.apache.flink.streaming.connectors.kafka
Deprecated.
Keeps the information required to deduce the next safe-to-use transactional id.
FlinkKafkaProducer.NextTransactionalIdHintSerializer - Class in org.apache.flink.streaming.connectors.kafka
Deprecated.
FlinkKafkaProducer.NextTransactionalIdHintSerializer.NextTransactionalIdHintSerializerSnapshot - Class in org.apache.flink.streaming.connectors.kafka
Deprecated.
Serializer configuration snapshot for compatibility and format evolution.
FlinkKafkaProducer.Semantic - Enum in org.apache.flink.streaming.connectors.kafka
Deprecated.
Semantics that can be chosen.
FlinkKafkaProducer.TransactionStateSerializer - Class in org.apache.flink.streaming.connectors.kafka
Deprecated.
FlinkKafkaProducer.TransactionStateSerializer.TransactionStateSerializerSnapshot - Class in org.apache.flink.streaming.connectors.kafka
Deprecated.
Serializer configuration snapshot for compatibility and format evolution.
FlinkKafkaProducer011 - Class in org.apache.flink.streaming.connectors.kafka
Compatibility class to make migration possible from the 0.11 connector to the universal one.
FlinkKafkaProducer011() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011
 
FlinkKafkaProducer011.ContextStateSerializer - Class in org.apache.flink.streaming.connectors.kafka
 
FlinkKafkaProducer011.ContextStateSerializer.ContextStateSerializerSnapshot - Class in org.apache.flink.streaming.connectors.kafka
 
FlinkKafkaProducer011.NextTransactionalIdHint - Class in org.apache.flink.streaming.connectors.kafka
 
FlinkKafkaProducer011.NextTransactionalIdHintSerializer - Class in org.apache.flink.streaming.connectors.kafka
 
FlinkKafkaProducer011.NextTransactionalIdHintSerializer.NextTransactionalIdHintSerializerSnapshot - Class in org.apache.flink.streaming.connectors.kafka
 
FlinkKafkaProducer011.TransactionStateSerializer - Class in org.apache.flink.streaming.connectors.kafka
 
FlinkKafkaProducer011.TransactionStateSerializer.TransactionStateSerializerSnapshot - Class in org.apache.flink.streaming.connectors.kafka
 
FlinkKafkaProducerBase<IN> - Class in org.apache.flink.streaming.connectors.kafka
Flink Sink to produce data into a Kafka topic.
FlinkKafkaProducerBase(String, KeyedSerializationSchema<IN>, Properties, FlinkKafkaPartitioner<IN>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
The main constructor for creating a FlinkKafkaProducer.
FlinkKafkaShuffle - Class in org.apache.flink.streaming.connectors.kafka.shuffle
FlinkKafkaShuffle uses Kafka as a message bus to shuffle and persist data at the same time.
FlinkKafkaShuffle() - Constructor for class org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffle
 
FlinkKafkaShuffleConsumer<T> - Class in org.apache.flink.streaming.connectors.kafka.shuffle
Flink Kafka Shuffle Consumer Function.
FlinkKafkaShuffleProducer<IN,KEY> - Class in org.apache.flink.streaming.connectors.kafka.shuffle
Flink Kafka Shuffle Producer Function.
FlinkKafkaShuffleProducer.KafkaSerializer<IN> - Class in org.apache.flink.streaming.connectors.kafka.shuffle
Flink Kafka Shuffle Serializer.
flush() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
Flush pending records.
flush() - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
 
flushMode - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
Sink buffer flush config, which is currently only supported in upsert mode.
flushOnCheckpoint - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
If true, the producer will wait until all outstanding records have been sent to the broker.
forwardOptions() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
 
fromConfiguration(boolean, boolean, boolean) - Static method in class org.apache.flink.streaming.connectors.kafka.config.OffsetCommitModes
Determines the offset commit mode from several configuration values.
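The kind of decision this entry describes can be sketched as follows; the flag order and exact precedence rules below are assumptions for illustration, not a copy of the Flink implementation:

```java
// Illustrative sketch: derive an offset commit mode from three flags.
// Assumption: with checkpointing enabled, offsets are committed on completed
// checkpoints (if that is enabled) and Kafka's periodic auto-commit is ignored;
// without checkpointing, Kafka's auto-commit is the only option.
public class OffsetCommitModeSketch {
    enum Mode { DISABLED, ON_CHECKPOINTS, KAFKA_PERIODIC }

    static Mode fromConfiguration(
            boolean enableAutoCommit,
            boolean enableCommitOnCheckpoints,
            boolean enableCheckpointing) {
        if (enableCheckpointing) {
            // checkpointing wins: commit on checkpoints or not at all
            return enableCommitOnCheckpoints ? Mode.ON_CHECKPOINTS : Mode.DISABLED;
        }
        // no checkpointing: fall back to Kafka's periodic auto-commit
        return enableAutoCommit ? Mode.KAFKA_PERIODIC : Mode.DISABLED;
    }

    public static void main(String[] args) {
        System.out.println(fromConfiguration(true, true, true));   // ON_CHECKPOINTS
        System.out.println(fromConfiguration(true, false, false)); // KAFKA_PERIODIC
    }
}
```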

G

generateIdsToAbort() - Method in class org.apache.flink.streaming.connectors.kafka.internals.TransactionalIdsGenerator
If we have to abort transactional ids from a previous run after a restart from a failure BEFORE the first checkpoint completed, we do not know the parallelism used in the previous attempt.
generateIdsToUse(long) - Method in class org.apache.flink.streaming.connectors.kafka.internals.TransactionalIdsGenerator
The range of available transactional ids is [nextFreeTransactionalId, nextFreeTransactionalId + parallelism * kafkaProducersPoolSize); the loop below deterministically picks a subrange of those ids based on the index of this subtask.
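The subrange selection described above can be sketched in plain Java; `idsToUse` here is a hypothetical stand-in for the generator's internal loop:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of deterministic transactional-id assignment: subtask i gets the ids
// [nextFreeTransactionalId + i * poolSize, nextFreeTransactionalId + (i + 1) * poolSize),
// so subtask ranges never overlap and the same subtask reuses the same ids.
public class TransactionalIdRanges {
    static List<Long> idsToUse(long nextFreeTransactionalId, int subtaskIndex, int poolSize) {
        long start = nextFreeTransactionalId + (long) subtaskIndex * poolSize;
        List<Long> ids = new ArrayList<>();
        for (long id = start; id < start + poolSize; id++) {
            ids.add(id);
        }
        return ids;
    }

    public static void main(String[] args) {
        // pool size 5: subtask 1 gets ids 105..109
        System.out.println(idsToUse(100L, 1, 5));
    }
}
```

Because the mapping depends only on the subtask index and pool size, a restarted subtask can recompute exactly which ids it owns.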
getAllPartitionsForTopics(List<String>) - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer
Fetch the list of all partitions for a specific list of topics from Kafka.
getAllPartitionsForTopics(List<String>) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaPartitionDiscoverer
 
getAllTopics() - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer
Fetch the list of all topics from Kafka.
getAllTopics() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaPartitionDiscoverer
 
getAutoOffsetResetStrategy() - Method in class org.apache.flink.connector.kafka.source.enumerator.initializer.NoStoppingOffsetsInitializer
 
getAutoOffsetResetStrategy() - Method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
Get the auto offset reset strategy to use in case the initialized offsets fall out of range.
getBatchBlocking() - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
Gets all the elements found in the queue, or blocks until at least one element was added.
getBatchBlocking(long) - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
Gets all the elements found in the queue, or blocks until at least one element was added.
getBatchIntervalMs() - Method in class org.apache.flink.streaming.connectors.kafka.table.SinkBufferFlushMode
 
getBatchSize() - Method in class org.apache.flink.streaming.connectors.kafka.table.SinkBufferFlushMode
 
getBoundedness() - Method in class org.apache.flink.connector.kafka.source.KafkaSource
 
getChangelogMode(ChangelogMode) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
 
getChangelogMode() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
 
getChangelogMode() - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.DecodingFormatWrapper
 
getChangelogMode() - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.EncodingFormatWrapper
 
getCommittableSerializer() - Method in class org.apache.flink.connector.kafka.sink.KafkaSink
 
getCommittedOffset() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
 
getCurrentOffset() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplitState
 
getDescription() - Method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanBoundedMode
 
getDescription() - Method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanStartupMode
 
getElementBlocking() - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
Returns the next element in the queue.
getElementBlocking(long) - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
Returns the next element in the queue.
getEnableCommitOnCheckpoints() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
 
getEnum(String) - Static method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
 
getEnumeratorCheckpointSerializer() - Method in class org.apache.flink.connector.kafka.source.KafkaSource
 
getEpoch() - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
 
getErrorCode() - Method in exception org.apache.flink.streaming.connectors.kafka.FlinkKafkaException
 
getFetcherName() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher
Gets the name of this fetcher, for thread naming and logging purposes.
getFetcherName() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher
 
getField(Object, String) - Static method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
Gets the field fieldName from the given object using reflection.
getFixedTopics() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicsDescriptor
 
getIsAutoCommitEnabled() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
Deprecated.
 
getIsAutoCommitEnabled() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
 
getKafkaMetric(Map<MetricName, ? extends Metric>, String, String) - Static method in class org.apache.flink.connector.kafka.MetricUtil
Tries to find the Kafka Metric in the provided metrics.
getKafkaMetric(Map<MetricName, ? extends Metric>, Predicate<Map.Entry<MetricName, ? extends Metric>>) - Static method in class org.apache.flink.connector.kafka.MetricUtil
Tries to find the Kafka Metric in the provided metrics matching a given filter.
getKafkaPartitionHandle() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
Gets Kafka's descriptor for the Kafka partition.
getKafkaProducer(Properties) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
Used for testing only.
getKafkaProducerConfig() - Method in class org.apache.flink.connector.kafka.sink.KafkaSink
 
getKafkaTopicPartition() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
Gets Flink's descriptor for the Kafka partition.
getLeader() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionLeader
 
getLength() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer
Deprecated.
 
getLength() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHintSerializer
Deprecated.
 
getLength() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.TransactionStateSerializer
Deprecated.
 
getNumberOfParallelInstances() - Method in class org.apache.flink.connector.kafka.sink.DefaultKafkaSinkContext
 
getNumberOfParallelInstances() - Method in interface org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema.KafkaSinkContext
 
getOffset() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
The current offset in the partition.
getOption(Properties, ConfigOption<?>, Function<String, T>) - Static method in class org.apache.flink.connector.kafka.source.KafkaSourceOptions
 
getParallelInstanceId() - Method in class org.apache.flink.connector.kafka.sink.DefaultKafkaSinkContext
 
getParallelInstanceId() - Method in interface org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema.KafkaSinkContext
Get the ID of the subtask the KafkaSink is running on.
getPartition() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
 
getPartition() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition
 
getPartition() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
 
getPartitionOffsets(Collection<TopicPartition>, OffsetsInitializer.PartitionOffsetsRetriever) - Method in class org.apache.flink.connector.kafka.source.enumerator.initializer.NoStoppingOffsetsInitializer
 
getPartitionOffsets(Collection<TopicPartition>, OffsetsInitializer.PartitionOffsetsRetriever) - Method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
Get the initial offsets for the given Kafka partitions.
getPartitionsByTopic(String, Producer<byte[], byte[]>) - Static method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
 
getPartitionsByTopic(String, KafkaProducer<byte[], byte[]>) - Static method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
 
getPartitionSetSubscriber(Set<TopicPartition>) - Static method in interface org.apache.flink.connector.kafka.source.enumerator.subscriber.KafkaSubscriber
 
getPartitionsForTopic(String) - Method in class org.apache.flink.connector.kafka.sink.DefaultKafkaSinkContext
 
getPartitionsForTopic(String) - Method in interface org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema.KafkaSinkContext
For a given topic id, retrieve the available partitions.
getProducedType() - Method in class org.apache.flink.connector.kafka.source.KafkaSource
 
getProducedType() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
 
getProducedType() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaDeserializationSchemaWrapper
 
getProducedType() - Method in class org.apache.flink.streaming.util.serialization.JSONKeyValueDeserializationSchema
 
getProducedType() - Method in class org.apache.flink.streaming.util.serialization.TypeInformationKeyValueSerializationSchema
 
getProducer() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.KafkaTransactionState
Deprecated.
 
getProducerId() - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
 
getPropertiesFromBrokerList(String) - Static method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
 
getScanRuntimeProvider(ScanTableSource.ScanContext) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
 
getSerializationSchema() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KeyedSerializationSchemaWrapper
 
getSinkRuntimeProvider(DynamicTableSink.Context) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
 
getSplitSerializer() - Method in class org.apache.flink.connector.kafka.source.KafkaSource
 
getStartingOffset() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
 
getStateSentinel() - Method in enum org.apache.flink.streaming.connectors.kafka.config.StartupMode
 
getStoppingOffset() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
 
getSubscribedTopicPartitions(AdminClient) - Method in interface org.apache.flink.connector.kafka.source.enumerator.subscriber.KafkaSubscriber
Get a set of subscribed TopicPartitions.
getSubtask() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher.KafkaShuffleWatermark
 
getTargetTopic(T) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaSerializationSchemaWrapper
 
getTargetTopic(T) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KeyedSerializationSchemaWrapper
 
getTargetTopic(T) - Method in interface org.apache.flink.streaming.connectors.kafka.KafkaContextAware
Returns the topic that the presented element should be sent to.
getTargetTopic(T) - Method in interface org.apache.flink.streaming.util.serialization.KeyedSerializationSchema
Deprecated.
Optional method to determine the target topic for the element.
getTargetTopic(Tuple2<K, V>) - Method in class org.apache.flink.streaming.util.serialization.TypeInformationKeyValueSerializationSchema
 
getTimestamp() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher.KafkaShuffleRecord
 
getTopic() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
 
getTopic() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition
 
getTopic() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
 
getTopicListSubscriber(List<String>) - Static method in interface org.apache.flink.connector.kafka.source.enumerator.subscriber.KafkaSubscriber
 
getTopicPartition() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
 
getTopicPartition() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionLeader
 
getTopicPatternSubscriber(Pattern) - Static method in interface org.apache.flink.connector.kafka.source.enumerator.subscriber.KafkaSubscriber
 
getTransactionalId() - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
 
getTransactionCoordinatorId() - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
 
getTransactionTimeout(Properties) - Static method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
 
getValue() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher.KafkaShuffleRecord
 
getValue() - Method in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaMetricMutableWrapper
 
getValue() - Method in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaMetricWrapper
 
getVersion() - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumStateSerializer
 
getVersion() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplitSerializer
 
getWatermark() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher.KafkaShuffleWatermark
 
getWriterStateSerializer() - Method in class org.apache.flink.connector.kafka.sink.KafkaSink
 
GROUP_OFFSET - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateSentinel
Magic number indicating that the partition should start from its committed group offset in Kafka.

H

handleSplitRequest(int, String) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
 
handleSplitsChanges(SplitsChange<KafkaPartitionSplit>) - Method in class org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
 
Handover - Class in org.apache.flink.streaming.connectors.kafka.internals
The Handover is a utility to hand over data (a buffer of records) and exceptions from a producer thread to a consumer thread.
Handover() - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.Handover
 
Handover.ClosedException - Exception in org.apache.flink.streaming.connectors.kafka.internals
An exception thrown by the Handover in the Handover.pollNext() or Handover.produce(ConsumerRecords) method after the Handover was closed via Handover.close().
Handover.WakeupException - Exception in org.apache.flink.streaming.connectors.kafka.internals
A special exception thrown by the Handover in the Handover.produce(ConsumerRecords) method when the producer is woken up from a blocking call via Handover.wakeupProducer().
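The producer-to-consumer exchange the Handover entries describe can be approximated with a capacity-one blocking queue; this is a simplified analog for illustration only, since the real Handover additionally forwards exceptions and supports wakeup and close:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Simplified analog of the Handover pattern: a capacity-one queue hands each
// batch from a producer thread to a consumer thread; the producer blocks in
// put() until the consumer has taken the previous batch.
public class HandoverSketch {
    /** Produces n batches on a helper thread and consumes them on the caller's thread. */
    static List<String> exchange(int n) throws InterruptedException {
        BlockingQueue<String> handover = new ArrayBlockingQueue<>(1);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < n; i++) {
                    handover.put("batch-" + i); // produce(): blocks while the slot is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        List<String> consumed = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            consumed.add(handover.take()); // pollNext() analog: blocks until a batch arrives
        }
        producer.join();
        return consumed;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(exchange(3));
    }
}
```

The capacity of one gives the same back-pressure behavior: the Kafka-polling thread cannot run ahead of the consuming task by more than one buffer of records.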
hashCode() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
 
hashCode() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.KafkaTransactionContext
Deprecated.
 
hashCode() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.KafkaTransactionState
Deprecated.
 
hashCode() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHint
Deprecated.
 
hashCode() - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
 
hashCode() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition
 
hashCode() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionLeader
 
hashCode() - Method in class org.apache.flink.streaming.connectors.kafka.partitioner.FlinkFixedPartitioner
 
hashCode() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
 
hashCode() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
 
hashCode() - Method in class org.apache.flink.streaming.connectors.kafka.table.SinkBufferFlushMode
 
hashCode() - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.DecodingFormatWrapper
 
hashCode() - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.EncodingFormatWrapper
 
 

I

IDENTIFIER - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
 
IDENTIFIER - Static variable in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory
 
ignoreFailuresAfterTransactionTimeout() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
Disables the propagation of exceptions thrown when committing presumably timed out Kafka transactions during recovery of the job.
INITIAL_OFFSET - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
 
initializeConnections() - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer
Establishes the connections required to fetch topic and partition metadata.
initializeConnections() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaPartitionDiscoverer
 
initializedState(KafkaPartitionSplit) - Method in class org.apache.flink.connector.kafka.source.reader.KafkaSourceReader
 
initializeState(FunctionInitializationContext) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
 
initializeState(FunctionInitializationContext) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
 
initializeState(FunctionInitializationContext) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
 
initializeUserContext() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
 
initTransactions() - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
 
invoke(FlinkKafkaProducer.KafkaTransactionState, IN, SinkFunction.Context) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
 
invoke(IN, SinkFunction.Context) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
Called when new data arrives at the sink; forwards it to Kafka.
invoke(Object, String, Object...) - Static method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
 
invoke(FlinkKafkaProducer.KafkaTransactionState, IN, SinkFunction.Context) - Method in class org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffleProducer
This is the function invoked to handle each element.
invoke(Watermark) - Method in class org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffleProducer
This is the function invoked to handle each watermark.
isEmpty() - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
Checks whether the queue is empty (has no elements).
isEnabled() - Method in class org.apache.flink.streaming.connectors.kafka.table.SinkBufferFlushMode
 
isEndOfStream(T) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaDeserializationSchemaWrapper
 
isEndOfStream(T) - Method in interface org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema
Method to decide whether the element signals the end of the stream.
isEndOfStream(ObjectNode) - Method in class org.apache.flink.streaming.util.serialization.JSONKeyValueDeserializationSchema
 
isEndOfStream(Tuple2<K, V>) - Method in class org.apache.flink.streaming.util.serialization.TypeInformationKeyValueSerializationSchema
This schema never considers an element to signal end-of-stream, so this method always returns false.
isFixedTopics() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicsDescriptor
 
isImmutableType() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer
Deprecated.
 
isImmutableType() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHintSerializer
Deprecated.
 
isImmutableType() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.TransactionStateSerializer
Deprecated.
 
isMatchingTopic(String) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicsDescriptor
Checks whether the input topic matches the topics described by this KafkaTopicsDescriptor.
isOffsetDefined() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
 
isOpen() - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
Checks whether the queue is currently open, meaning elements can be added and polled.
isRecord() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher.KafkaShuffleElement
 
isSentinel(long) - Static method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateSentinel
 
isTopicPattern() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicsDescriptor
 
isWatermark() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher.KafkaShuffleElement
 

J

JSONKeyValueDeserializationSchema - Class in org.apache.flink.streaming.util.serialization
DeserializationSchema that deserializes a JSON String into an ObjectNode.
JSONKeyValueDeserializationSchema(boolean) - Constructor for class org.apache.flink.streaming.util.serialization.JSONKeyValueDeserializationSchema
 

K

KAFKA_CONSUMER_METRIC_GROUP - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
 
KAFKA_CONSUMER_METRICS_GROUP - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaConsumerMetricConstants
 
KAFKA_SOURCE_READER_METRIC_GROUP - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
 
KafkaCommitCallback - Interface in org.apache.flink.streaming.connectors.kafka.internals
A callback interface that the source operator can implement to trigger custom actions when a commit request completes; this would normally be triggered by a checkpoint complete event.
KafkaConnectorOptions - Class in org.apache.flink.streaming.connectors.kafka.table
Options for the Kafka connector.
KafkaConnectorOptions.ScanBoundedMode - Enum in org.apache.flink.streaming.connectors.kafka.table
Bounded mode for the Kafka consumer, see KafkaConnectorOptions.SCAN_BOUNDED_MODE.
KafkaConnectorOptions.ScanStartupMode - Enum in org.apache.flink.streaming.connectors.kafka.table
Startup mode for the Kafka consumer, see KafkaConnectorOptions.SCAN_STARTUP_MODE.
KafkaConnectorOptions.ValueFieldsStrategy - Enum in org.apache.flink.streaming.connectors.kafka.table
Strategies to derive the data type of a value format by considering a key format.
KafkaConsumerMetricConstants - Class in org.apache.flink.streaming.connectors.kafka.internals.metrics
A collection of constant strings for Kafka consumer metrics.
KafkaConsumerMetricConstants() - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaConsumerMetricConstants
 
KafkaConsumerThread<T> - Class in org.apache.flink.streaming.connectors.kafka.internals
The thread that runs the KafkaConsumer, connecting to the brokers and polling records.
KafkaConsumerThread(Logger, Handover, Properties, ClosableBlockingQueue<KafkaTopicPartitionState<T, TopicPartition>>, String, long, boolean, MetricGroup, MetricGroup) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaConsumerThread
 
KafkaContextAware<T> - Interface in org.apache.flink.streaming.connectors.kafka
An interface for KafkaSerializationSchemas that need information about the context where the Kafka Producer is running along with information about the available partitions.
KafkaDeserializationSchema<T> - Interface in org.apache.flink.streaming.connectors.kafka
The deserialization schema describes how to turn the Kafka ConsumerRecords into data types (Java/Scala objects) that are processed by Flink.
KafkaDeserializationSchemaWrapper<T> - Class in org.apache.flink.streaming.connectors.kafka.internals
A simple wrapper for using the DeserializationSchema with the KafkaDeserializationSchema interface.
KafkaDeserializationSchemaWrapper(DeserializationSchema<T>) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaDeserializationSchemaWrapper
 
KafkaDynamicSink - Class in org.apache.flink.streaming.connectors.kafka.table
A version-agnostic Kafka DynamicTableSink.
KafkaDynamicSink(DataType, DataType, EncodingFormat<SerializationSchema<RowData>>, EncodingFormat<SerializationSchema<RowData>>, int[], int[], String, String, Properties, FlinkKafkaPartitioner<RowData>, DeliveryGuarantee, boolean, SinkBufferFlushMode, Integer, String) - Constructor for class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
 
KafkaDynamicSource - Class in org.apache.flink.streaming.connectors.kafka.table
A version-agnostic Kafka ScanTableSource.
KafkaDynamicSource(DataType, DecodingFormat<DeserializationSchema<RowData>>, DecodingFormat<DeserializationSchema<RowData>>, int[], int[], String, List<String>, Pattern, Properties, StartupMode, Map<KafkaTopicPartition, Long>, long, BoundedMode, Map<KafkaTopicPartition, Long>, long, boolean, String) - Constructor for class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
 
KafkaDynamicTableFactory - Class in org.apache.flink.streaming.connectors.kafka.table
Factory for creating configured instances of KafkaDynamicSource and KafkaDynamicSink.
KafkaDynamicTableFactory() - Constructor for class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
 
KafkaFetcher<T> - Class in org.apache.flink.streaming.connectors.kafka.internals
A fetcher that fetches data from Kafka brokers via the Kafka consumer API.
KafkaFetcher(SourceFunction.SourceContext<T>, Map<KafkaTopicPartition, Long>, SerializedValue<WatermarkStrategy<T>>, ProcessingTimeService, long, ClassLoader, String, KafkaDeserializationSchema<T>, Properties, long, MetricGroup, MetricGroup, boolean) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher
 
KafkaMetricMutableWrapper - Class in org.apache.flink.streaming.connectors.kafka.internals.metrics
Gauge for getting the current value of a Kafka metric.
KafkaMetricMutableWrapper(Metric) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaMetricMutableWrapper
 
KafkaMetricWrapper - Class in org.apache.flink.streaming.connectors.kafka.internals.metrics
Gauge for getting the current value of a Kafka metric.
KafkaMetricWrapper(Metric) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaMetricWrapper
 
KafkaPartitionDiscoverer - Class in org.apache.flink.streaming.connectors.kafka.internals
A partition discoverer that can be used to discover topic and partition metadata from Kafka brokers via the Kafka high-level consumer API.
KafkaPartitionDiscoverer(KafkaTopicsDescriptor, int, int, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaPartitionDiscoverer
 
KafkaPartitionSplit - Class in org.apache.flink.connector.kafka.source.split
A SourceSplit for a Kafka partition.
KafkaPartitionSplit(TopicPartition, long) - Constructor for class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
 
KafkaPartitionSplit(TopicPartition, long, long) - Constructor for class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
 
KafkaPartitionSplitReader - Class in org.apache.flink.connector.kafka.source.reader
A SplitReader implementation that reads records from Kafka partitions.
KafkaPartitionSplitReader(Properties, SourceReaderContext, KafkaSourceReaderMetrics) - Constructor for class org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
 
KafkaPartitionSplitSerializer - Class in org.apache.flink.connector.kafka.source.split
The serializer for KafkaPartitionSplit.
KafkaPartitionSplitSerializer() - Constructor for class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplitSerializer
 
KafkaPartitionSplitState - Class in org.apache.flink.connector.kafka.source.split
This class extends KafkaPartitionSplit to track a mutable current offset.
KafkaPartitionSplitState(KafkaPartitionSplit) - Constructor for class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplitState
 
kafkaProducer - 类 中的变量org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
 
KafkaRecordDeserializationSchema<T> - org.apache.flink.connector.kafka.source.reader.deserializer中的接口
An interface for the deserialization of Kafka records.
KafkaRecordEmitter<T> - org.apache.flink.connector.kafka.source.reader中的类
The RecordEmitter implementation for KafkaSourceReader.
KafkaRecordEmitter(KafkaRecordDeserializationSchema<T>) - 类 的构造器org.apache.flink.connector.kafka.source.reader.KafkaRecordEmitter
 
KafkaRecordSerializationSchema<T> - org.apache.flink.connector.kafka.sink中的接口
A serialization schema which defines how to convert a value of type T to ProducerRecord.
KafkaRecordSerializationSchema.KafkaSinkContext - org.apache.flink.connector.kafka.sink中的接口
Context providing information of the kafka record target location.
KafkaRecordSerializationSchemaBuilder<IN> - org.apache.flink.connector.kafka.sink中的类
Builder to construct KafkaRecordSerializationSchema.
KafkaRecordSerializationSchemaBuilder() - 类 的构造器org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
 
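As a sketch of how the builder above is typically used (the topic name and value schema are placeholders, not taken from this index):

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;

// Builds a KafkaRecordSerializationSchema that writes String values to a
// fixed topic; "events" is a placeholder topic name.
KafkaRecordSerializationSchema<String> schema =
        KafkaRecordSerializationSchema.<String>builder()
                .setTopic("events")
                .setValueSerializationSchema(new SimpleStringSchema())
                .build();
```

The builder also accepts a key serialization schema and a custom FlinkKafkaPartitioner when records need keys or explicit partition placement.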
KafkaSerializationSchema<T> - Interface in org.apache.flink.streaming.connectors.kafka
A KafkaSerializationSchema defines how to serialize values of type T into ProducerRecords.
KafkaSerializationSchemaWrapper<T> - Class in org.apache.flink.streaming.connectors.kafka.internals
An adapter from old-style interfaces such as SerializationSchema and FlinkKafkaPartitioner to the KafkaSerializationSchema.
KafkaSerializationSchemaWrapper(String, FlinkKafkaPartitioner<T>, boolean, SerializationSchema<T>) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaSerializationSchemaWrapper
 
KafkaShuffleElement() - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher.KafkaShuffleElement
 
KafkaShuffleElementDeserializer(TypeSerializer<T>) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher.KafkaShuffleElementDeserializer
 
KafkaShuffleFetcher<T> - Class in org.apache.flink.streaming.connectors.kafka.internals
Fetches data from Kafka for Kafka Shuffle.
KafkaShuffleFetcher(SourceFunction.SourceContext<T>, Map<KafkaTopicPartition, Long>, SerializedValue<WatermarkStrategy<T>>, ProcessingTimeService, long, ClassLoader, String, KafkaDeserializationSchema<T>, Properties, long, MetricGroup, MetricGroup, boolean, TypeSerializer<T>, int) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher
 
KafkaShuffleFetcher.KafkaShuffleElement - Class in org.apache.flink.streaming.connectors.kafka.internals
An element in a KafkaShuffle.
KafkaShuffleFetcher.KafkaShuffleElementDeserializer<T> - Class in org.apache.flink.streaming.connectors.kafka.internals
Deserializer for KafkaShuffleElement.
KafkaShuffleFetcher.KafkaShuffleRecord<T> - Class in org.apache.flink.streaming.connectors.kafka.internals
One value of type T in a KafkaShuffle.
KafkaShuffleFetcher.KafkaShuffleWatermark - Class in org.apache.flink.streaming.connectors.kafka.internals
A watermark element in a KafkaShuffle.
KafkaSink<IN> - Class in org.apache.flink.connector.kafka.sink
Flink Sink to produce data into a Kafka topic.
KafkaSinkBuilder<IN> - Class in org.apache.flink.connector.kafka.sink
Builder to construct KafkaSink.
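A typical use of the sink builder looks like the following sketch; the broker address and topic name are placeholders:

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;

// Builds a KafkaSink writing String records to "output-topic" with
// at-least-once delivery; "localhost:9092" is a placeholder address.
KafkaSink<String> sink = KafkaSink.<String>builder()
        .setBootstrapServers("localhost:9092")
        .setRecordSerializer(
                KafkaRecordSerializationSchema.<String>builder()
                        .setTopic("output-topic")
                        .setValueSerializationSchema(new SimpleStringSchema())
                        .build())
        .setDeliveryGuarantee(DeliveryGuarantee.AT_LEAST_ONCE)
        .build();
```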
KafkaSource<OUT> - Class in org.apache.flink.connector.kafka.source
The Source implementation for Kafka.
KafkaSourceBuilder<OUT> - Class in org.apache.flink.connector.kafka.source
The builder class for KafkaSource, making it easier for users to construct a KafkaSource.
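A typical use of the source builder looks like the following sketch; the broker address, topic, and group id are placeholders:

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;

// Builds a KafkaSource reading String values from "input-topic",
// starting from the earliest available offsets.
KafkaSource<String> source = KafkaSource.<String>builder()
        .setBootstrapServers("localhost:9092")
        .setTopics("input-topic")
        .setGroupId("example-group")
        .setStartingOffsets(OffsetsInitializer.earliest())
        .setValueOnlyDeserializer(new SimpleStringSchema())
        .build();
```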
KafkaSourceEnumerator - Class in org.apache.flink.connector.kafka.source.enumerator
The enumerator class for Kafka source.
KafkaSourceEnumerator(KafkaSubscriber, OffsetsInitializer, OffsetsInitializer, Properties, SplitEnumeratorContext<KafkaPartitionSplit>, Boundedness) - Constructor for class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
 
KafkaSourceEnumerator(KafkaSubscriber, OffsetsInitializer, OffsetsInitializer, Properties, SplitEnumeratorContext<KafkaPartitionSplit>, Boundedness, Set<TopicPartition>) - Constructor for class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
 
KafkaSourceEnumerator.PartitionOffsetsRetrieverImpl - Class in org.apache.flink.connector.kafka.source.enumerator
The implementation of the offsets retriever, using a consumer and an admin client.
KafkaSourceEnumState - Class in org.apache.flink.connector.kafka.source.enumerator
The state of Kafka source enumerator.
KafkaSourceEnumStateSerializer - Class in org.apache.flink.connector.kafka.source.enumerator
The serializer for the enumerator state of the Kafka source.
KafkaSourceEnumStateSerializer() - Constructor for class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumStateSerializer
 
KafkaSourceFetcherManager - Class in org.apache.flink.connector.kafka.source.reader.fetcher
The SplitFetcherManager for Kafka source.
KafkaSourceFetcherManager(FutureCompletingBlockingQueue<RecordsWithSplitIds<ConsumerRecord<byte[], byte[]>>>, Supplier<SplitReader<ConsumerRecord<byte[], byte[]>, KafkaPartitionSplit>>, Consumer<Collection<String>>) - Constructor for class org.apache.flink.connector.kafka.source.reader.fetcher.KafkaSourceFetcherManager
Creates a new SplitFetcherManager with a single I/O thread.
KafkaSourceOptions - Class in org.apache.flink.connector.kafka.source
Configurations for KafkaSource.
KafkaSourceOptions() - Constructor for class org.apache.flink.connector.kafka.source.KafkaSourceOptions
 
KafkaSourceReader<T> - Class in org.apache.flink.connector.kafka.source.reader
The source reader for Kafka partitions.
KafkaSourceReader(FutureCompletingBlockingQueue<RecordsWithSplitIds<ConsumerRecord<byte[], byte[]>>>, KafkaSourceFetcherManager, RecordEmitter<ConsumerRecord<byte[], byte[]>, T, KafkaPartitionSplitState>, Configuration, SourceReaderContext, KafkaSourceReaderMetrics) - Constructor for class org.apache.flink.connector.kafka.source.reader.KafkaSourceReader
 
KafkaSourceReaderMetrics - Class in org.apache.flink.connector.kafka.source.metrics
A collection class for handling metrics in KafkaSourceReader.
KafkaSourceReaderMetrics(SourceReaderMetricGroup) - Constructor for class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
 
KafkaSubscriber - Interface in org.apache.flink.connector.kafka.source.enumerator.subscriber
The Kafka consumer allows a few different ways to consume from topics, including subscribing to a fixed collection of topics.
KafkaTopicPartition - Class in org.apache.flink.streaming.connectors.kafka.internals
Flink's description of a partition in a Kafka topic.
KafkaTopicPartition(String, int) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition
 
KafkaTopicPartition.Comparator - Class in org.apache.flink.streaming.connectors.kafka.internals
KafkaTopicPartitionAssigner - Class in org.apache.flink.streaming.connectors.kafka.internals
Utility for assigning Kafka partitions to consumer subtasks.
KafkaTopicPartitionAssigner() - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionAssigner
 
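The assignment performed by KafkaTopicPartitionAssigner can be pictured as a round-robin distribution of partitions over subtasks, starting from a topic-derived index. The sketch below is a simplification, not the actual implementation; `assignToSubtask` is a hypothetical helper and the start index is passed in directly rather than derived from the topic name:

```java
public class PartitionAssignmentSketch {
    // Hypothetical helper: distributes partitions round-robin over subtasks,
    // starting at a topic-derived index (here supplied by the caller).
    public static int assignToSubtask(int startIndex, int partition, int numSubtasks) {
        return (startIndex + partition) % numSubtasks;
    }

    public static void main(String[] args) {
        // With 3 subtasks and start index 1, partitions 0..3 map to subtasks 1, 2, 0, 1.
        for (int p = 0; p < 4; p++) {
            System.out.println(assignToSubtask(1, p, 3));
        }
    }
}
```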
KafkaTopicPartitionLeader - Class in org.apache.flink.streaming.connectors.kafka.internals
Serializable topic partition info with leader Node information.
KafkaTopicPartitionLeader(KafkaTopicPartition, Node) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionLeader
 
KafkaTopicPartitionState<T,KPH> - Class in org.apache.flink.streaming.connectors.kafka.internals
The state that the Flink Kafka Consumer holds for each Kafka partition.
KafkaTopicPartitionState(KafkaTopicPartition, KPH) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
 
KafkaTopicPartitionStateSentinel - Class in org.apache.flink.streaming.connectors.kafka.internals
Magic values used to represent special offset states before partitions are actually read.
KafkaTopicPartitionStateSentinel() - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateSentinel
 
KafkaTopicPartitionStateWithWatermarkGenerator<T,KPH> - Class in org.apache.flink.streaming.connectors.kafka.internals
A special version of the per-kafka-partition-state that additionally holds a TimestampAssigner, WatermarkGenerator, an immediate WatermarkOutput, and a deferred WatermarkOutput for this partition.
KafkaTopicPartitionStateWithWatermarkGenerator(KafkaTopicPartition, KPH, TimestampAssigner<T>, WatermarkGenerator<T>, WatermarkOutput, WatermarkOutput) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateWithWatermarkGenerator
 
KafkaTopicsDescriptor - Class in org.apache.flink.streaming.connectors.kafka.internals
A Kafka Topics Descriptor describes how the consumer subscribes to Kafka topics - either a fixed list of topics, or a topic pattern.
KafkaTopicsDescriptor(List<String>, Pattern) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicsDescriptor
 
KafkaTransactionContext(Set<String>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.KafkaTransactionContext
Deprecated.
 
KafkaTransactionState(String, FlinkKafkaInternalProducer<byte[], byte[]>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.KafkaTransactionState
Deprecated.
 
KafkaTransactionState(FlinkKafkaInternalProducer<byte[], byte[]>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.KafkaTransactionState
Deprecated.
 
KafkaTransactionState(String, long, short, FlinkKafkaInternalProducer<byte[], byte[]>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.KafkaTransactionState
Deprecated.
 
KEY_DISABLE_METRICS - Static variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
Boolean configuration key to disable metrics tracking.
KEY_DISABLE_METRICS - Static variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
Configuration key for disabling the metrics reporting.
KEY_DISABLE_METRICS - Static variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
Configuration key for disabling the metrics reporting.
KEY_FIELDS - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
 
KEY_FIELDS_PREFIX - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
 
KEY_FORMAT - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
 
KEY_PARTITION_DISCOVERY_INTERVAL_MILLIS - Static variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
Configuration key to define the consumer's partition discovery interval, in milliseconds.
KEY_POLL_TIMEOUT - Static variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
Deprecated.
Configuration key to change the polling timeout.
keyDecodingFormat - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
Optional format for decoding keys from Kafka.
KeyedDeserializationSchema<T> - Interface in org.apache.flink.streaming.util.serialization
Deprecated.
KeyedSerializationSchema<T> - Interface in org.apache.flink.streaming.util.serialization
Deprecated.
KeyedSerializationSchemaWrapper<T> - Class in org.apache.flink.streaming.connectors.kafka.internals
A simple wrapper for using the SerializationSchema with the KeyedSerializationSchema interface.
KeyedSerializationSchemaWrapper(SerializationSchema<T>) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KeyedSerializationSchemaWrapper
 
keyEncodingFormat - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
Optional format for encoding keys to Kafka.
keyPrefix - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
Prefix that needs to be removed from fields when constructing the physical data type.
keyPrefix - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
Prefix that needs to be removed from fields when constructing the physical data type.
keyProjection - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
Indices that determine the key fields and the source position in the consumed row.
keyProjection - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
Indices that determine the key fields and the target position in the produced row.

L

lastParallelism - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHint
Deprecated.
 
latest() - Static method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
Get an OffsetsInitializer which initializes the offsets to the latest offsets of each partition.
LATEST_OFFSET - Static variable in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
 
LATEST_OFFSET - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateSentinel
Magic number indicating that the partition should start from the latest offset.
LEGACY_COMMITTED_OFFSETS_METRICS_GROUP - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaConsumerMetricConstants
 
LEGACY_CURRENT_OFFSETS_METRICS_GROUP - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaConsumerMetricConstants
 
listReadableMetadata() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
 
listWritableMetadata() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
 
LOG - Static variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
 
logFailuresOnly - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
Flag indicating whether to accept failures (and log them), or to fail on failures.

M

markActive() - Method in class org.apache.flink.streaming.connectors.kafka.internals.SourceContextWatermarkOutputAdapter
 
markIdle() - Method in class org.apache.flink.streaming.connectors.kafka.internals.SourceContextWatermarkOutputAdapter
 
MAX_NUM_PENDING_CHECKPOINTS - Static variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
The maximum number of pending non-committed checkpoints to track, to avoid memory leaks.
maybeAddRecordsLagMetric(KafkaConsumer<?, ?>, TopicPartition) - Method in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
Add a partition's records-lag metric to the tracking list if this partition has not appeared before.
metadataKeys - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
Metadata that is appended at the end of a physical sink row.
metadataKeys - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
Metadata that is appended at the end of a physical source row.
metrics() - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
 
MetricUtil - Class in org.apache.flink.connector.kafka
Collection of methods to interact with Kafka's client metric system.
MetricUtil() - Constructor for class org.apache.flink.connector.kafka.MetricUtil
 

N

nextFreeTransactionalId - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHint
Deprecated.
 
NextTransactionalIdHint() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHint
Deprecated.
 
NextTransactionalIdHint(int, long) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHint
Deprecated.
 
NextTransactionalIdHint() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.NextTransactionalIdHint
 
NextTransactionalIdHintSerializer() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHintSerializer
Deprecated.
 
NextTransactionalIdHintSerializer() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.NextTransactionalIdHintSerializer
 
NextTransactionalIdHintSerializerSnapshot() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHintSerializer.NextTransactionalIdHintSerializerSnapshot
Deprecated.
 
NextTransactionalIdHintSerializerSnapshot() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.NextTransactionalIdHintSerializer.NextTransactionalIdHintSerializerSnapshot
 
NO_STOPPING_OFFSET - Static variable in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
 
NoStoppingOffsetsInitializer - Class in org.apache.flink.connector.kafka.source.enumerator.initializer
An implementation of OffsetsInitializer which does not initialize anything.
NoStoppingOffsetsInitializer() - Constructor for class org.apache.flink.connector.kafka.source.enumerator.initializer.NoStoppingOffsetsInitializer
 
notifyCheckpointAborted(long) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
 
notifyCheckpointComplete(Map<TopicPartition, OffsetAndMetadata>, OffsetCommitCallback) - Method in class org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
 
notifyCheckpointComplete(long) - Method in class org.apache.flink.connector.kafka.source.reader.KafkaSourceReader
 
notifyCheckpointComplete(long) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
 
numPendingRecords() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
 

O

of(KafkaDeserializationSchema<V>) - Static method in interface org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema
Wraps a legacy KafkaDeserializationSchema as the deserializer of the ConsumerRecords.
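A minimal sketch of wrapping a legacy schema for the new KafkaSource; SimpleStringSchema stands in for any DeserializationSchema:

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema;
import org.apache.flink.streaming.connectors.kafka.internals.KafkaDeserializationSchemaWrapper;

// Wrap a legacy KafkaDeserializationSchema for use with the new KafkaSource.
KafkaRecordDeserializationSchema<String> wrapped =
        KafkaRecordDeserializationSchema.of(
                new KafkaDeserializationSchemaWrapper<>(new SimpleStringSchema()));

// For plain value deserialization, valueOnly(...) avoids the wrapper entirely.
KafkaRecordDeserializationSchema<String> valueOnly =
        KafkaRecordDeserializationSchema.valueOnly(new SimpleStringSchema());
```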
OFFSET_NOT_SET - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateSentinel
Magic number that defines an unset offset.
OffsetCommitMode - Enum in org.apache.flink.streaming.connectors.kafka.config
The offset commit mode represents the behaviour of how offsets are externally committed back to Kafka brokers / Zookeeper.
OffsetCommitModes - Class in org.apache.flink.streaming.connectors.kafka.config
Utilities for OffsetCommitMode.
OffsetCommitModes() - Constructor for class org.apache.flink.streaming.connectors.kafka.config.OffsetCommitModes
 
offsets(Map<TopicPartition, Long>) - Static method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
Get an OffsetsInitializer which initializes the offsets to the specified offsets.
offsets(Map<TopicPartition, Long>, OffsetResetStrategy) - Static method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
Get an OffsetsInitializer which initializes the offsets to the specified offsets.
OFFSETS_BY_PARTITION_METRICS_GROUP - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaConsumerMetricConstants
 
OFFSETS_BY_TOPIC_METRICS_GROUP - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaConsumerMetricConstants
 
offsetsForTimes(Map<TopicPartition, Long>) - Method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer.PartitionOffsetsRetriever
List offsets matching a timestamp for the specified partitions.
offsetsForTimes(Map<TopicPartition, Long>) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator.PartitionOffsetsRetrieverImpl
 
OffsetsInitializer - Interface in org.apache.flink.connector.kafka.source.enumerator.initializer
An interface for users to specify the starting / stopping offset of a KafkaPartitionSplit.
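The factory methods indexed above can be combined as in this sketch; the topic name and offset value are placeholders:

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.kafka.common.TopicPartition;

// Start from the earliest or latest available offsets of each partition.
OffsetsInitializer fromEarliest = OffsetsInitializer.earliest();
OffsetsInitializer fromLatest = OffsetsInitializer.latest();

// Start from explicitly specified per-partition offsets.
Map<TopicPartition, Long> specificOffsets = new HashMap<>();
specificOffsets.put(new TopicPartition("input-topic", 0), 42L);
OffsetsInitializer fromSpecific = OffsetsInitializer.offsets(specificOffsets);
```

These initializers are passed to KafkaSourceBuilder via setStartingOffsets (or as stopping offsets for bounded reads).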
OffsetsInitializer.PartitionOffsetsRetriever - Interface in org.apache.flink.connector.kafka.source.enumerator.initializer
An interface that provides necessary information to the OffsetsInitializer to get the initial offsets of the Kafka partitions.
OffsetsInitializerValidator - Interface in org.apache.flink.connector.kafka.source.enumerator.initializer
Interface for validating OffsetsInitializer with properties from KafkaSource.
onEvent(T, long) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
 
onEvent(T, long) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateWithWatermarkGenerator
 
onException(Throwable) - Method in interface org.apache.flink.streaming.connectors.kafka.internals.KafkaCommitCallback
A callback method the user can implement to provide asynchronous handling of commit request failure.
onPeriodicEmit() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
 
onPeriodicEmit() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateWithWatermarkGenerator
 
onSplitFinished(Map<String, KafkaPartitionSplitState>) - Method in class org.apache.flink.connector.kafka.source.reader.KafkaSourceReader
 
onSuccess() - Method in interface org.apache.flink.streaming.connectors.kafka.internals.KafkaCommitCallback
A callback method the user can implement to provide asynchronous handling of commit request completion.
open(SerializationSchema.InitializationContext, KafkaRecordSerializationSchema.KafkaSinkContext) - Method in interface org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema
Initialization method for the schema.
open(DeserializationSchema.InitializationContext) - Method in interface org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema
Initialization method for the schema.
open(Configuration) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
 
open(Configuration) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
Initializes the connection to Kafka.
open(Configuration) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
Initializes the connection to Kafka.
open() - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer
Opens the partition discoverer, initializing all required Kafka connections.
open(DeserializationSchema.InitializationContext) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaDeserializationSchemaWrapper
 
open(SerializationSchema.InitializationContext) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaSerializationSchemaWrapper
 
open(DeserializationSchema.InitializationContext) - Method in interface org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema
Initialization method for the schema.
open(SerializationSchema.InitializationContext) - Method in interface org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema
Initialization method for the schema.
open(int, int) - Method in class org.apache.flink.streaming.connectors.kafka.partitioner.FlinkFixedPartitioner
 
open(int, int) - Method in class org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner
Initializer for the partitioner.
open(DeserializationSchema.InitializationContext) - Method in class org.apache.flink.streaming.util.serialization.JSONKeyValueDeserializationSchema
 
optionalOptions() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
 
optionalOptions() - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory
 
 
org.apache.flink.connector.kafka - package org.apache.flink.connector.kafka
 
org.apache.flink.connector.kafka.sink - package org.apache.flink.connector.kafka.sink
 
org.apache.flink.connector.kafka.source - package org.apache.flink.connector.kafka.source
 
org.apache.flink.connector.kafka.source.enumerator - package org.apache.flink.connector.kafka.source.enumerator
 
org.apache.flink.connector.kafka.source.enumerator.initializer - package org.apache.flink.connector.kafka.source.enumerator.initializer
 
org.apache.flink.connector.kafka.source.enumerator.subscriber - package org.apache.flink.connector.kafka.source.enumerator.subscriber
 
org.apache.flink.connector.kafka.source.metrics - package org.apache.flink.connector.kafka.source.metrics
 
org.apache.flink.connector.kafka.source.reader - package org.apache.flink.connector.kafka.source.reader
 
org.apache.flink.connector.kafka.source.reader.deserializer - package org.apache.flink.connector.kafka.source.reader.deserializer
 
org.apache.flink.connector.kafka.source.reader.fetcher - package org.apache.flink.connector.kafka.source.reader.fetcher
 
org.apache.flink.connector.kafka.source.split - package org.apache.flink.connector.kafka.source.split
 
org.apache.flink.streaming.connectors.kafka - package org.apache.flink.streaming.connectors.kafka
 
org.apache.flink.streaming.connectors.kafka.config - package org.apache.flink.streaming.connectors.kafka.config
 
org.apache.flink.streaming.connectors.kafka.internals - package org.apache.flink.streaming.connectors.kafka.internals
 
org.apache.flink.streaming.connectors.kafka.internals.metrics - package org.apache.flink.streaming.connectors.kafka.internals.metrics
 
org.apache.flink.streaming.connectors.kafka.partitioner - package org.apache.flink.streaming.connectors.kafka.partitioner
 
org.apache.flink.streaming.connectors.kafka.shuffle - package org.apache.flink.streaming.connectors.kafka.shuffle
 
org.apache.flink.streaming.connectors.kafka.table - package org.apache.flink.streaming.connectors.kafka.table
 
org.apache.flink.streaming.util.serialization - package org.apache.flink.streaming.util.serialization
 
 

P

parallelism - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
Parallelism of the physical Kafka producer.
partition(T, byte[], byte[], String, int[]) - Method in class org.apache.flink.streaming.connectors.kafka.partitioner.FlinkFixedPartitioner
 
partition(T, byte[], byte[], String, int[]) - Method in class org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner
Determine the id of the partition that the record should be written to.
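FlinkFixedPartitioner's strategy (each parallel sink subtask writes to a single Kafka partition) can be sketched as follows; `selectPartition` is a hypothetical stand-in for its partition(...) method, not its actual code:

```java
public class FixedPartitionerSketch {
    // Hypothetical helper mirroring FlinkFixedPartitioner's documented behavior:
    // subtask i always writes to the same partition, i modulo the partition count.
    public static int selectPartition(int parallelInstanceId, int[] partitions) {
        return partitions[parallelInstanceId % partitions.length];
    }

    public static void main(String[] args) {
        int[] partitions = {9, 10, 11};
        System.out.println(selectPartition(0, partitions)); // subtask 0 -> partition 9
        System.out.println(selectPartition(4, partitions)); // subtask 4 -> partition 10
    }
}
```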
PARTITION_DISCOVERY_DISABLED - Static variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
The default interval to execute partition discovery, in milliseconds (Long.MIN_VALUE, i.e. disabled by default).
PARTITION_DISCOVERY_INTERVAL_MS - Static variable in class org.apache.flink.connector.kafka.source.KafkaSourceOptions
 
PARTITION_GROUP - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
 
partitionConsumerRecordsHandler(List<ConsumerRecord<byte[], byte[]>>, KafkaTopicPartitionState<T, TopicPartition>) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher
 
partitionConsumerRecordsHandler(List<ConsumerRecord<byte[], byte[]>>, KafkaTopicPartitionState<T, TopicPartition>) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher
 
partitioner - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
Partitioner to select the Kafka partition for each item.
PartitionOffsetsRetrieverImpl(AdminClient, String) - Constructor for class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator.PartitionOffsetsRetrieverImpl
 
partitionsFor(String) - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
 
pauseOrResumeSplits(Collection<KafkaPartitionSplit>, Collection<KafkaPartitionSplit>) - Method in class org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
 
pauseOrResumeSplits(Collection<String>, Collection<String>) - Method in class org.apache.flink.connector.kafka.source.reader.KafkaSourceReader
 
peek() - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
Returns the queue's next element without removing it, if the queue is non-empty.
pendingRecords - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
Number of unacknowledged records.
pendingRecords - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
Number of unacknowledged records.
pendingRecordsLock - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
Lock for accessing the pending records.
persistentKeyBy(DataStream<T>, String, int, int, Properties, KeySelector<T, K>) - Static method in class org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffle
Uses Kafka as a message bus to persist keyBy shuffle.
persistentKeyBy(DataStream<T>, String, int, int, Properties, int...) - Static method in class org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffle
Uses Kafka as a message bus to persist keyBy shuffle.
physicalDataType - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
Data type to configure the formats.
physicalDataType - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
Data type to configure the formats.
poll() - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
Returns the queue's next element and removes it, if the queue is non-empty.
pollBatch() - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
Returns all of the queue's current elements in a list, if the queue is non-empty.
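The peek/poll/pollBatch semantics above mirror those of standard blocking queues; a small illustration using java.util.concurrent.LinkedBlockingQueue (not the internal ClosableBlockingQueue itself):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueSemanticsDemo {
    public static void main(String[] args) {
        LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>();
        queue.add("a");
        queue.add("b");

        System.out.println(queue.peek()); // "a" - inspected but not removed
        System.out.println(queue.poll()); // "a" - removed

        // Analogue of pollBatch(): drain all remaining elements at once.
        List<String> batch = new ArrayList<>();
        queue.drainTo(batch);
        System.out.println(batch);        // [b]
        System.out.println(queue.poll()); // null - queue is empty
    }
}
```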
pollNext() - Method in class org.apache.flink.streaming.connectors.kafka.internals.Handover
Polls the next element from the Handover, possibly blocking until the next element is available.
pollTimeout - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
Deprecated.
From Kafka's Javadoc: The time, in milliseconds, spent waiting in poll if data is not available.
preCommit(FlinkKafkaProducer.KafkaTransactionState) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
 
produce(ConsumerRecords<byte[], byte[]>) - Method in class org.apache.flink.streaming.connectors.kafka.internals.Handover
Hands over an element from the producer.
producedDataType - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
Data type that describes the final output of the source.
producer - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
KafkaProducer instance.
producerConfig - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
User-defined properties for the producer.
producerConfig - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
User-defined properties for the producer.
properties - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
Deprecated.
User-supplied properties for Kafka.
properties - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
Properties for the Kafka producer.
properties - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
Properties for the Kafka consumer.
props - Variable in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
 
PROPS_BOOTSTRAP_SERVERS - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
 
PROPS_GROUP_ID - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
 

R

readKeyBy(String, StreamExecutionEnvironment, TypeInformation<T>, Properties, KeySelector<T, K>) - Static method in class org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffle
recordCommittedOffset(TopicPartition, long) - Method in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
Update the latest committed offset of the given TopicPartition.
recordCurrentOffset(TopicPartition, long) - Method in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
Update the current consuming offset of the given TopicPartition.
recordFailedCommit() - Method in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
Mark a failed commit.
RECORDS_LAG - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
 
recordSucceededCommit() - Method in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
Mark a successful commit.
recoverAndAbort(FlinkKafkaProducer.KafkaTransactionState) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
 
recoverAndCommit(FlinkKafkaProducer.KafkaTransactionState) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
 
REGISTER_KAFKA_CONSUMER_METRICS - Static variable in class org.apache.flink.connector.kafka.source.KafkaSourceOptions
 
registerKafkaConsumerMetrics(KafkaConsumer<?, ?>) - Method in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
Register metrics of the KafkaConsumer in the Kafka metric group.
registerNumBytesIn(KafkaConsumer<?, ?>) - Method in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
Register MetricNames.IO_NUM_BYTES_IN.
registerTopicPartition(TopicPartition) - Method in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
Register metric groups for the given TopicPartition.
removeRecordsLagMetric(TopicPartition) - Method in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
Remove a partition's records-lag metric from the tracking list.
reportError(Throwable) - Method in class org.apache.flink.streaming.connectors.kafka.internals.ExceptionProxy
Sets the exception and interrupts the target thread, if no other exception has occurred so far.
reportError(Throwable) - Method in class org.apache.flink.streaming.connectors.kafka.internals.Handover
Reports an exception.
requiredOptions() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
 
requiredOptions() - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory
 
restoreEnumerator(SplitEnumeratorContext<KafkaPartitionSplit>, KafkaSourceEnumState) - Method in class org.apache.flink.connector.kafka.source.KafkaSource
 
restoreWriter(Sink.InitContext, Collection<KafkaWriterState>) - Method in class org.apache.flink.connector.kafka.sink.KafkaSink
 
resumeTransaction(long, short) - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
Instead of obtaining producerId and epoch from the transaction coordinator, re-use previously obtained ones, so that the transaction can be resumed after a restart.
run(SourceFunction.SourceContext<T>) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
 
run() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaConsumerThread
 
runFetchLoop() - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
 
runFetchLoop() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher
 

S

SAFE_SCALE_DOWN_FACTOR - Static variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
This coefficient determines the safe scale-down factor.
SCAN_BOUNDED_MODE - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
 
SCAN_BOUNDED_SPECIFIC_OFFSETS - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
 
SCAN_BOUNDED_TIMESTAMP_MILLIS - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
 
SCAN_STARTUP_MODE - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
 
SCAN_STARTUP_SPECIFIC_OFFSETS - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
 
SCAN_STARTUP_TIMESTAMP_MILLIS - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
 
SCAN_TOPIC_PARTITION_DISCOVERY - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
 
schema - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
(Serializable) SerializationSchema for turning objects used with Flink into byte[] for Kafka.
semantic - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
Semantic chosen for this instance.
send(ProducerRecord<K, V>) - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
 
send(ProducerRecord<K, V>, Callback) - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
 
sendOffsetsToTransaction(Map<TopicPartition, OffsetAndMetadata>, String) - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
 
sendOffsetsToTransaction(Map<TopicPartition, OffsetAndMetadata>, ConsumerGroupMetadata) - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
 
serialize(T, KafkaRecordSerializationSchema.KafkaSinkContext, Long) - Method in interface org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema
Serializes the given element and returns it as a ProducerRecord.
serialize(KafkaSourceEnumState) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumStateSerializer
 
serialize(KafkaPartitionSplit) - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplitSerializer
 
serialize(FlinkKafkaProducer.KafkaTransactionContext, DataOutputView) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer
Deprecated.
 
serialize(FlinkKafkaProducer.NextTransactionalIdHint, DataOutputView) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHintSerializer
Deprecated.
 
serialize(FlinkKafkaProducer.KafkaTransactionState, DataOutputView) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.TransactionStateSerializer
Deprecated.
 
serialize(T, Long) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaSerializationSchemaWrapper
 
serialize(T, Long) - Method in interface org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema
Serializes the given element and returns it as a ProducerRecord.
serializeKey(T) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KeyedSerializationSchemaWrapper
 
serializeKey(T) - Method in interface org.apache.flink.streaming.util.serialization.KeyedSerializationSchema
Deprecated.
Serializes the key of the incoming element to a byte array. This method might return null if no key is available.
serializeKey(Tuple2<K, V>) - Method in class org.apache.flink.streaming.util.serialization.TypeInformationKeyValueSerializationSchema
 
serializeValue(T) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KeyedSerializationSchemaWrapper
 
serializeValue(T) - Method in interface org.apache.flink.streaming.util.serialization.KeyedSerializationSchema
Deprecated.
Serializes the value of the incoming element to a byte array.
serializeValue(Tuple2<K, V>) - Method in class org.apache.flink.streaming.util.serialization.TypeInformationKeyValueSerializationSchema
 
setAndCheckDiscoveredPartition(KafkaTopicPartition) - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer
Sets a partition as discovered.
setBootstrapServers(String) - Method in class org.apache.flink.connector.kafka.sink.KafkaSinkBuilder
Sets the Kafka bootstrap servers.
setBootstrapServers(String) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
Sets the bootstrap servers for the KafkaConsumer of the KafkaSource.
setBounded(OffsetsInitializer) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
By default the KafkaSource is set to run as Boundedness.CONTINUOUS_UNBOUNDED and thus never stops until the Flink job fails or is canceled.
setClientIdPrefix(String) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
Sets the client id prefix of this KafkaSource.
setCommitOffsetsOnCheckpoints(boolean) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
Specifies whether or not the consumer should commit offsets back to Kafka on checkpoints.
setCommittedOffset(long) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
 
setCurrentOffset(long) - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplitState
 
setDeliverGuarantee(DeliveryGuarantee) - Method in class org.apache.flink.connector.kafka.sink.KafkaSinkBuilder
Deprecated.
setDeliveryGuarantee(DeliveryGuarantee) - Method in class org.apache.flink.connector.kafka.sink.KafkaSinkBuilder
Sets the desired DeliveryGuarantee.
setDeserializer(KafkaRecordDeserializationSchema<OUT>) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
Sets the deserializer of the ConsumerRecord for the KafkaSource.
setField(Object, String, Object) - Static method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
Sets the field fieldName on the given Object object to value using reflection.
setFlushOnCheckpoint(boolean) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
If set to true, the Flink producer will wait for all outstanding messages in the Kafka buffers to be acknowledged by the Kafka producer on a checkpoint.
setGroupId(String) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
Sets the consumer group id of the KafkaSource.
setKafkaKeySerializer(Class<? extends Serializer<? super T>>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
Sets Kafka's Serializer to serialize incoming elements to the key of the ProducerRecord.
setKafkaKeySerializer(Class<S>, Map<String, String>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
Sets a configurable Kafka Serializer and passes a configuration to serialize incoming elements to the key of the ProducerRecord.
setKafkaMetric(Metric) - Method in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaMetricMutableWrapper
 
setKafkaProducerConfig(Properties) - Method in class org.apache.flink.connector.kafka.sink.KafkaSinkBuilder
Sets the configuration used to instantiate all used KafkaProducer instances.
setKafkaSubscriber(KafkaSubscriber) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
Sets a custom Kafka subscriber to use to discover new splits.
setKafkaValueSerializer(Class<? extends Serializer<? super T>>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
Sets Kafka's Serializer to serialize incoming elements to the value of the ProducerRecord.
setKafkaValueSerializer(Class<S>, Map<String, String>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
Sets a configurable Kafka Serializer and passes a configuration to serialize incoming elements to the value of the ProducerRecord.
setKeySerializationSchema(SerializationSchema<? super T>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
Sets a SerializationSchema which is used to serialize the incoming element to the key of the ProducerRecord.
setLogFailuresOnly(boolean) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
Defines whether the producer should fail on errors, or only log them.
setLogFailuresOnly(boolean) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
Defines whether the producer should fail on errors, or only log them.
setNumParallelInstances(int) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaSerializationSchemaWrapper
 
setNumParallelInstances(int) - Method in interface org.apache.flink.streaming.connectors.kafka.KafkaContextAware
Sets the parallelism with which the parallel task of the Kafka Producer runs.
setOffset(long) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
 
setParallelInstanceId(int) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaSerializationSchemaWrapper
 
setParallelInstanceId(int) - Method in interface org.apache.flink.streaming.connectors.kafka.KafkaContextAware
Sets the number of the parallel subtask that the Kafka Producer is running on.
setPartitioner(FlinkKafkaPartitioner<? super T>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
Sets a custom partitioner determining the target partition of the target topic.
setPartitions(Set<TopicPartition>) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
Sets a set of partitions to consume from.
setPartitions(int[]) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaSerializationSchemaWrapper
 
setPartitions(int[]) - Method in interface org.apache.flink.streaming.connectors.kafka.KafkaContextAware
Sets the available partitions for the topic returned from KafkaContextAware.getTargetTopic(Object).
setProperties(Properties) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
Sets arbitrary properties for the KafkaSource and KafkaConsumer.
setProperty(String, String) - Method in class org.apache.flink.connector.kafka.sink.KafkaSinkBuilder
 
setProperty(String, String) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
Sets an arbitrary property for the KafkaSource and KafkaConsumer.
setRecordSerializer(KafkaRecordSerializationSchema<IN>) - Method in class org.apache.flink.connector.kafka.sink.KafkaSinkBuilder
Sets the KafkaRecordSerializationSchema that transforms incoming records to ProducerRecords.
setStartFromEarliest() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
Specifies the consumer to start reading from the earliest offset for all partitions.
setStartFromGroupOffsets() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
Specifies the consumer to start reading from any committed group offsets found in Zookeeper / Kafka brokers.
setStartFromLatest() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
Specifies the consumer to start reading from the latest offset for all partitions.
setStartFromSpecificOffsets(Map<KafkaTopicPartition, Long>) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
Specifies the consumer to start reading partitions from specific offsets, set independently for each partition.
setStartFromTimestamp(long) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
Specifies the consumer to start reading partitions from a specified timestamp.
setStartingOffsets(OffsetsInitializer) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
Specifies from which offsets the KafkaSource should start consuming by providing an OffsetsInitializer.
setTopic(String) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
Sets a fixed topic which is used as the destination for all records.
setTopicPattern(Pattern) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
Sets a topic pattern to consume from, using a Java Pattern.
setTopics(List<String>) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
Sets a list of topics the KafkaSource should consume from.
setTopics(String...) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
Sets a list of topics the KafkaSource should consume from.
setTopicSelector(TopicSelector<? super T>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
Sets a topic selector which computes the target topic for every incoming record.
setTransactionalIdPrefix(String) - Method in class org.apache.flink.connector.kafka.sink.KafkaSinkBuilder
Sets the prefix for all created transactionalIds if DeliveryGuarantee.EXACTLY_ONCE is configured.
setTransactionalIdPrefix(String) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
Specifies the prefix of the transactional.id property to be used by the producers when communicating with Kafka.
setUnbounded(OffsetsInitializer) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
By default the KafkaSource is set to run as Boundedness.CONTINUOUS_UNBOUNDED and thus never stops until the Flink job fails or is canceled.
setValueOnlyDeserializer(DeserializationSchema<OUT>) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
Sets the deserializer of the ConsumerRecord for the KafkaSource.
setValueSerializationSchema(SerializationSchema<T>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
Sets a SerializationSchema which is used to serialize the incoming element to the value of the ProducerRecord.
setWriteTimestamp(boolean) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaSerializationSchemaWrapper
 
setWriteTimestampToKafka(boolean) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
If set to true, Flink will write the (event time) timestamp attached to each record into Kafka.
shutdown() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaConsumerThread
Shuts this thread down, waking up the thread gracefully if blocked (without Thread.interrupt() calls).
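The KafkaSinkBuilder setters listed above combine into a single fluent chain. A minimal sketch, assuming a local broker at localhost:9092 and a hypothetical topic and transactional-id prefix:

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;

public class KafkaSinkSketch {
    public static void main(String[] args) {
        // Broker address, topic, and transactional-id prefix are placeholder values.
        KafkaSink<String> sink = KafkaSink.<String>builder()
                .setBootstrapServers("localhost:9092")
                .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                        .setTopic("output-topic")
                        .setValueSerializationSchema(new SimpleStringSchema())
                        .build())
                // EXACTLY_ONCE needs a transactionalId prefix for the producers it creates.
                .setDeliveryGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
                .setTransactionalIdPrefix("my-app")
                .build();
    }
}
```

The nested KafkaRecordSerializationSchema.builder() call is where the key/value serialization schemas and the target topic from the entries above are configured.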
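Likewise, the KafkaSourceBuilder setters above form one builder chain. A minimal sketch, assuming a running broker at localhost:9092 and hypothetical topic and group-id values:

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KafkaSourceSketch {
    public static void main(String[] args) throws Exception {
        // Broker address, topic, and group id are placeholder values.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")
                .setTopics("input-topic")
                .setGroupId("my-group")
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "Kafka Source")
                .print();
        env.execute("Kafka source sketch");
    }
}
```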
SINK_BUFFER_FLUSH_INTERVAL - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
 
SINK_BUFFER_FLUSH_MAX_ROWS - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
 
SINK_CHANGELOG_MODE - Static variable in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.EncodingFormatWrapper
 
SINK_PARALLELISM - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
 
SINK_PARTITIONER - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
 
SinkBufferFlushMode - Class in org.apache.flink.streaming.connectors.kafka.table
Sink buffer flush configuration.
SinkBufferFlushMode(int, long) - Constructor for class org.apache.flink.streaming.connectors.kafka.table.SinkBufferFlushMode
 
size() - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
Gets the number of elements currently in the queue.
snapshotConfiguration() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer
Deprecated.
 
snapshotConfiguration() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHintSerializer
Deprecated.
 
snapshotConfiguration() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.TransactionStateSerializer
Deprecated.
 
snapshotCurrentState() - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
Takes a snapshot of the partition offsets.
snapshotState(long) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
 
snapshotState(long) - Method in class org.apache.flink.connector.kafka.source.reader.KafkaSourceReader
 
snapshotState(FunctionSnapshotContext) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
 
snapshotState(FunctionSnapshotContext) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
 
snapshotState(FunctionSnapshotContext) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
 
sourceContext - Variable in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
The source context to emit records and watermarks to.
SourceContextWatermarkOutputAdapter<T> - Class in org.apache.flink.streaming.connectors.kafka.internals
A WatermarkOutput that forwards calls to a SourceFunction.SourceContext.
SourceContextWatermarkOutputAdapter(SourceFunction.SourceContext<T>) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.SourceContextWatermarkOutputAdapter
 
specificBoundedOffsets - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
Specific end offsets; only relevant when bounded mode is BoundedMode.SPECIFIC_OFFSETS.
specificStartupOffsets - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
Specific startup offsets; only relevant when startup mode is StartupMode.SPECIFIC_OFFSETS.
splitId() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
 
start() - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
Starts the enumerator.
StartupMode - Enum in org.apache.flink.streaming.connectors.kafka.config
Startup modes for the Kafka Consumer.
startupMode - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
The startup mode for the contained consumer (default is StartupMode.GROUP_OFFSETS).
startupTimestampMillis - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
The start timestamp to locate partition offsets; only relevant when startup mode is StartupMode.TIMESTAMP.
subscribedPartitionStates() - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
Gets all partitions (with partition state) that this fetcher is subscribed to.
supportsMetadataProjection() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
 
sync(Metric, Counter) - Static method in class org.apache.flink.connector.kafka.MetricUtil
Ensures that the counter has the same value as the given Kafka metric.

T

tableIdentifier - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
 
TAG_REC_WITH_TIMESTAMP - Static variable in class org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffleProducer.KafkaSerializer
 
TAG_REC_WITHOUT_TIMESTAMP - Static variable in class org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffleProducer.KafkaSerializer
 
TAG_WATERMARK - Static variable in class org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffleProducer.KafkaSerializer
 
timestamp(long) - Static method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
Gets an OffsetsInitializer which initializes the offsets in each partition so that the initialized offset is the offset of the first record whose record timestamp is greater than or equal to the given timestamp (milliseconds).
toKafkaPartitionSplit() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplitState
Uses the current offset as the starting offset to create a new KafkaPartitionSplit.
TOPIC - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
 
topic - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
The Kafka topic to write to.
TOPIC_GROUP - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
 
TOPIC_PATTERN - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
 
topicPartitionsMap - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
Partitions of each topic.
topicPartitionsMap - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
Partitions of each topic.
topicPattern - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
The Kafka topic pattern to consume.
topics - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
The Kafka topics to consume.
TopicSelector<IN> - Interface in org.apache.flink.connector.kafka.sink
Selects a topic for the incoming record.
toSplitId(TopicPartition) - Static method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
 
toSplitType(String, KafkaPartitionSplitState) - Method in class org.apache.flink.connector.kafka.source.reader.KafkaSourceReader
 
toString() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
 
toString() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.KafkaTransactionState
Deprecated.
 
toString() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHint
Deprecated.
 
toString() - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
 
toString() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition
 
toString(Map<KafkaTopicPartition, Long>) - Static method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition
 
toString(List<KafkaTopicPartition>) - Static method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition
 
toString() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionLeader
 
toString() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
 
toString() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateWithWatermarkGenerator
 
toString() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicsDescriptor
 
toString() - Method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanBoundedMode
 
toString() - Method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanStartupMode
 
TRANSACTIONAL_ID_PREFIX - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
 
transactionalId - Variable in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
 
TransactionalIdsGenerator - Class in org.apache.flink.streaming.connectors.kafka.internals
Class responsible for generating transactional ids to use when communicating with Kafka.
TransactionalIdsGenerator(String, int, int, int, int) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.TransactionalIdsGenerator
 
TransactionStateSerializer() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.TransactionStateSerializer
Deprecated.
 
TransactionStateSerializer() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.TransactionStateSerializer
 
TransactionStateSerializerSnapshot() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.TransactionStateSerializer.TransactionStateSerializerSnapshot
Deprecated.
 
TransactionStateSerializerSnapshot() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.TransactionStateSerializer.TransactionStateSerializerSnapshot
 
TypeInformationKeyValueSerializationSchema<K,V> - Class in org.apache.flink.streaming.util.serialization
A serialization and deserialization schema for key-value pairs that uses Flink's serialization stack to transform typed data from and to byte arrays.
TypeInformationKeyValueSerializationSchema(TypeInformation<K>, TypeInformation<V>, ExecutionConfig) - Constructor for class org.apache.flink.streaming.util.serialization.TypeInformationKeyValueSerializationSchema
Creates a new de-/serialization schema for the given types.
TypeInformationKeyValueSerializationSchema(Class<K>, Class<V>, ExecutionConfig) - Constructor for class org.apache.flink.streaming.util.serialization.TypeInformationKeyValueSerializationSchema
Creates a new de-/serialization schema for the given types.
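OffsetsInitializer.timestamp and its sibling factory methods produce the objects accepted by KafkaSourceBuilder.setStartingOffsets and setBounded. A short sketch of the common choices (the timestamp value is an arbitrary example):

```java
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.kafka.clients.consumer.OffsetResetStrategy;

public class OffsetsInitializerSketch {
    public static void main(String[] args) {
        // Start from the first record whose timestamp is >= the given epoch millis.
        OffsetsInitializer fromTimestamp = OffsetsInitializer.timestamp(1_600_000_000_000L);

        // Other common factories on the same interface:
        OffsetsInitializer earliest = OffsetsInitializer.earliest();
        OffsetsInitializer latest = OffsetsInitializer.latest();
        OffsetsInitializer committed =
                OffsetsInitializer.committedOffsets(OffsetResetStrategy.EARLIEST);
    }
}
```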

U

unassignedPartitionsQueue - Variable in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
Queue of partitions that are not yet assigned to any Kafka clients for consuming.
updateNumBytesInCounter() - Method in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
Updates MetricNames.IO_NUM_BYTES_IN.
UpsertKafkaDynamicTableFactory - Class in org.apache.flink.streaming.connectors.kafka.table
Upsert-Kafka factory.
UpsertKafkaDynamicTableFactory() - Constructor for class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory
 
UpsertKafkaDynamicTableFactory.DecodingFormatWrapper - Class in org.apache.flink.streaming.connectors.kafka.table
Wraps the decoding format and exposes the desired changelog mode.
UpsertKafkaDynamicTableFactory.EncodingFormatWrapper - Class in org.apache.flink.streaming.connectors.kafka.table
Wraps the encoding format and exposes the desired changelog mode.
upsertMode - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
Flag to determine the sink mode.
upsertMode - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
Flag to determine the source mode.

V

VALID_STARTING_OFFSET_MARKERS - Static variable in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
 
VALID_STOPPING_OFFSET_MARKERS - Static variable in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
 
validate(Properties) - Method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializerValidator
Validates the offsets initializer with the properties of the Kafka source.
VALUE_FIELDS_INCLUDE - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
 
VALUE_FORMAT - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
 
valueDecodingFormat - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
Format for decoding values from Kafka.
valueEncodingFormat - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
Format for encoding values to Kafka.
valueOf(String) - Static method in enum org.apache.flink.streaming.connectors.kafka.config.BoundedMode
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.flink.streaming.connectors.kafka.config.OffsetCommitMode
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.flink.streaming.connectors.kafka.config.StartupMode
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.flink.streaming.connectors.kafka.FlinkKafkaErrorCode
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.Semantic
Deprecated.
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanBoundedMode
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanStartupMode
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ValueFieldsStrategy
Returns the enum constant of this type with the specified name.
valueOnly(DeserializationSchema<V>) - Static method in interface org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema
Wraps a DeserializationSchema as the value deserialization schema of the ConsumerRecords.
valueOnly(Class<? extends Deserializer<V>>) - Static method in interface org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema
Wraps a Kafka Deserializer into a KafkaRecordDeserializationSchema.
valueOnly(Class<D>, Map<String, String>) - Static method in interface org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema
Wraps a Kafka Deserializer into a KafkaRecordDeserializationSchema.
valueProjection - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
Indices that determine the value fields and the source position in the consumed row.
valueProjection - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
Indices that determine the value fields and the target position in the produced row.
values() - Static method in enum org.apache.flink.streaming.connectors.kafka.config.BoundedMode
Returns an array containing the constants of this enum type, in the order they are declared.
values() - Static method in enum org.apache.flink.streaming.connectors.kafka.config.OffsetCommitMode
Returns an array containing the constants of this enum type, in the order they are declared.
values() - Static method in enum org.apache.flink.streaming.connectors.kafka.config.StartupMode
Returns an array containing the constants of this enum type, in the order they are declared.
values() - Static method in enum org.apache.flink.streaming.connectors.kafka.FlinkKafkaErrorCode
Returns an array containing the constants of this enum type, in the order they are declared.
values() - Static method in enum org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.Semantic
Deprecated.
Returns an array containing the constants of this enum type, in the order they are declared.
values() - Static method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanBoundedMode
Returns an array containing the constants of this enum type, in the order they are declared.
values() - Static method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanStartupMode
Returns an array containing the constants of this enum type, in the order they are declared.
values() - Static method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ValueFieldsStrategy
Returns an array containing the constants of this enum type, in the order they are declared.
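As a brief illustration of the valueOnly overloads above, either a Flink DeserializationSchema or a plain Kafka Deserializer class can be adapted; a sketch:

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ValueOnlySketch {
    public static void main(String[] args) {
        // Adapt a Flink DeserializationSchema to a value-only record schema.
        KafkaRecordDeserializationSchema<String> fromFlinkSchema =
                KafkaRecordDeserializationSchema.valueOnly(new SimpleStringSchema());

        // Adapt a plain Kafka Deserializer class; the key is ignored.
        KafkaRecordDeserializationSchema<String> fromKafkaDeserializer =
                KafkaRecordDeserializationSchema.valueOnly(StringDeserializer.class);
    }
}
```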

W

wakeUp() - Method in class org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
 
wakeup() - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer
Interrupts an in-progress discovery attempt by throwing an AbstractPartitionDiscoverer.WakeupException.
wakeupConnections() - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer
wakeupConnections() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaPartitionDiscoverer
 
WakeupException() - Constructor for exception org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer.WakeupException
 
WakeupException() - Constructor for exception org.apache.flink.streaming.connectors.kafka.internals.Handover.WakeupException
 
wakeupProducer() - Method in class org.apache.flink.streaming.connectors.kafka.internals.Handover
Wakes the producer thread up.
watermarkOutput - Variable in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
Wrapper around our SourceContext for allowing the WatermarkGenerator to emit watermarks and mark idleness.
watermarkStrategy - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
Watermark strategy that is used to generate per-partition watermarks.
writeKeyBy(DataStream<T>, String, Properties, KeySelector<T, K>) - Static method in class org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffle
writeKeyBy(DataStream<T>, String, Properties, int...) - Static method in class org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffle
writeTimestampToKafka - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
Deprecated.
Flag controlling whether we are writing the Flink record's timestamp into Kafka.

Copyright © 2014–2023 The Apache Software Foundation. All rights reserved.