- abort(FlinkKafkaProducer.KafkaTransactionState) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
已过时。
- abortTransaction() - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
-
- AbstractFetcher<T,KPH> - Class in org.apache.flink.streaming.connectors.kafka.internals
-
Base class for all fetchers, which implement the connections to Kafka brokers and pull records
from Kafka partitions.
- AbstractFetcher(SourceFunction.SourceContext<T>, Map<KafkaTopicPartition, Long>, SerializedValue<WatermarkStrategy<T>>, ProcessingTimeService, long, ClassLoader, MetricGroup, boolean) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
-
- AbstractPartitionDiscoverer - Class in org.apache.flink.streaming.connectors.kafka.internals
-
Base class for all partition discoverers.
- AbstractPartitionDiscoverer(KafkaTopicsDescriptor, int, int) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer
-
- AbstractPartitionDiscoverer.ClosedException - Exception in org.apache.flink.streaming.connectors.kafka.internals
-
Thrown if this discoverer was used to discover partitions after it was closed.
- AbstractPartitionDiscoverer.WakeupException - Exception in org.apache.flink.streaming.connectors.kafka.internals
-
Signaling exception to indicate that an actual Kafka call was interrupted.
- acknowledgeMessage() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
ATTENTION to subclass implementors: When overriding this method, please always call
super.acknowledgeMessage() to keep the invariants of the internal bookkeeping of the
producer.
- add(E) - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
-
Adds the element to the queue, or fails with an exception, if the queue is closed.
- addDiscoveredPartitions(List<KafkaTopicPartition>) - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
-
Adds a list of newly discovered partitions to the fetcher for consuming.
- addIfOpen(E) - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
-
Tries to add an element to the queue, if the queue is still open.
- addReader(int) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
-
- addSplitsBack(List<KafkaPartitionSplit>, int) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
-
- adjustAutoCommitConfig(Properties, OffsetCommitMode) - Static method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
Makes sure that auto commit is disabled when our offset commit mode is ON_CHECKPOINTS.
- applyReadableMetadata(List<String>, DataType) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
- applyWatermark(WatermarkStrategy<RowData>) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
- applyWritableMetadata(List<String>, DataType) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
- asRecord() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher.KafkaShuffleElement
-
- assign(KafkaTopicPartition, int) - Static method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionAssigner
-
Returns the index of the target subtask that a specific Kafka partition should be assigned
to.
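The entry above describes a deterministic mapping from a partition to a subtask index. A minimal sketch of such an assignment, assuming a topic-dependent start index plus the partition id wrapped modulo the parallelism; the exact formula here is an illustration, not necessarily the connector's implementation:

```java
// Hedged sketch of a partition-to-subtask assignment in the spirit of
// KafkaTopicPartitionAssigner.assign. The concrete formula (hash-derived
// start index, then round-robin by partition id) is an assumption.
public class PartitionAssignSketch {
    static int assign(String topic, int partition, int numParallelSubtasks) {
        // Topic-dependent offset so different topics start on different subtasks.
        int startIndex = ((topic.hashCode() * 31) & 0x7FFFFFFF) % numParallelSubtasks;
        // Consecutive partitions of the same topic go to consecutive subtasks.
        return (startIndex + partition) % numParallelSubtasks;
    }

    public static void main(String[] args) {
        int p0 = PartitionAssignSketch.assign("orders", 0, 4);
        int p1 = PartitionAssignSketch.assign("orders", 1, 4);
        // Consecutive partitions land on consecutive subtasks (mod 4).
        System.out.println(p0 + " " + p1);
    }
}
```

A mapping of this shape keeps every partition pinned to one subtask while spreading partitions of one topic evenly across the parallel subtasks.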
- assign(String, int, int) - Static method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionAssigner
-
- assignedPartitions() - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumState
-
- assignTimestampsAndWatermarks(WatermarkStrategy<T>) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
Sets the given WatermarkStrategy on this consumer.
- assignTimestampsAndWatermarks(AssignerWithPunctuatedWatermarks<T>) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
- assignTimestampsAndWatermarks(AssignerWithPeriodicWatermarks<T>) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
- asSummaryString() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
- asSummaryString() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
- asWatermark() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher.KafkaShuffleElement
-
- asyncException - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
Errors encountered in the async producer are stored here.
- asyncException - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
Errors encountered in the async producer are stored here.
- callback - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
The callback that handles error propagation or logging callbacks.
- callback - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
The callback that handles error propagation or logging callbacks.
- cancel() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
- cancel() - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
-
- cancel() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher
-
- checkAndThrowException() - Method in class org.apache.flink.streaming.connectors.kafka.internals.ExceptionProxy
-
- checkErroneous() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
- checkErroneous() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
- checkpointLock - Variable in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
-
The lock that guarantees that record emission and state updates are atomic, from the view of
taking a checkpoint.
- CLIENT_ID_PREFIX - Static variable in class org.apache.flink.connector.kafka.source.KafkaSourceOptions
-
- ClosableBlockingQueue<E> - Class in org.apache.flink.streaming.connectors.kafka.internals
-
A special form of blocking queue with two additions:
The queue can be closed atomically when empty.
- ClosableBlockingQueue() - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
-
Creates a new empty queue.
- ClosableBlockingQueue(int) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
-
Creates a new empty queue, reserving space for at least the specified number of elements.
- ClosableBlockingQueue(Collection<? extends E>) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
-
Creates a new queue that contains the given elements.
- close() - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
-
- close() - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator.PartitionOffsetsRetrieverImpl
-
- close() - Method in class org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
-
- close() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
- close() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
- close() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
- close() - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer
-
Closes the partition discoverer, cleaning up all Kafka connections.
- close() - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
-
Tries to close the queue.
- close() - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
-
- close(Duration) - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
-
- close() - Method in class org.apache.flink.streaming.connectors.kafka.internals.Handover
-
Closes the handover.
- closeConnections() - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer
-
Close all established connections.
- closeConnections() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaPartitionDiscoverer
-
- ClosedException() - Constructor for exception org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer.ClosedException
-
- ClosedException() - Constructor for exception org.apache.flink.streaming.connectors.kafka.internals.Handover.ClosedException
-
- commit(FlinkKafkaProducer.KafkaTransactionState) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
- COMMIT_OFFSETS_ON_CHECKPOINT - Static variable in class org.apache.flink.connector.kafka.source.KafkaSourceOptions
-
- commitInternalOffsetsToKafka(Map<KafkaTopicPartition, Long>, KafkaCommitCallback) - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
-
Commits the given partition offsets to the Kafka brokers (or to ZooKeeper for older Kafka
versions).
- commitOffsets(Map<TopicPartition, OffsetAndMetadata>, OffsetCommitCallback) - Method in class org.apache.flink.connector.kafka.source.reader.fetcher.KafkaSourceFetcherManager
-
- COMMITS_FAILED_METRIC_COUNTER - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
-
- COMMITS_FAILED_METRICS_COUNTER - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaConsumerMetricConstants
-
- COMMITS_SUCCEEDED_METRIC_COUNTER - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
-
- COMMITS_SUCCEEDED_METRICS_COUNTER - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaConsumerMetricConstants
-
- COMMITTED_OFFSET - Static variable in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
-
- COMMITTED_OFFSET_METRIC_GAUGE - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
-
- COMMITTED_OFFSETS_METRICS_GAUGE - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaConsumerMetricConstants
-
- committedOffsets() - Static method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
-
- committedOffsets(OffsetResetStrategy) - Static method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
-
- committedOffsets(Collection<TopicPartition>) - Method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer.PartitionOffsetsRetriever
-
The group id should be set for the
KafkaSource before invoking this
method.
- committedOffsets(Collection<TopicPartition>) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator.PartitionOffsetsRetrieverImpl
-
- commitTransaction() - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
-
- Comparator() - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition.Comparator
-
- compare(KafkaTopicPartition, KafkaTopicPartition) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition.Comparator
-
- consumedDataType - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Data type of the consumed data.
- CONSUMER_FETCH_MANAGER_GROUP - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
-
- ContextStateSerializer() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer
-
Deprecated.
- ContextStateSerializer() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.ContextStateSerializer
-
- ContextStateSerializerSnapshot() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer.ContextStateSerializerSnapshot
-
Deprecated.
- ContextStateSerializerSnapshot() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.ContextStateSerializer.ContextStateSerializerSnapshot
-
- copy(FlinkKafkaProducer.KafkaTransactionContext) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer
-
Deprecated.
- copy(FlinkKafkaProducer.KafkaTransactionContext, FlinkKafkaProducer.KafkaTransactionContext) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer
-
Deprecated.
- copy(DataInputView, DataOutputView) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer
-
Deprecated.
- copy(FlinkKafkaProducer.NextTransactionalIdHint) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHintSerializer
-
Deprecated.
- copy(FlinkKafkaProducer.NextTransactionalIdHint, FlinkKafkaProducer.NextTransactionalIdHint) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHintSerializer
-
Deprecated.
- copy(DataInputView, DataOutputView) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHintSerializer
-
Deprecated.
- copy(FlinkKafkaProducer.KafkaTransactionState) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.TransactionStateSerializer
-
Deprecated.
- copy(FlinkKafkaProducer.KafkaTransactionState, FlinkKafkaProducer.KafkaTransactionState) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.TransactionStateSerializer
-
Deprecated.
- copy(DataInputView, DataOutputView) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.TransactionStateSerializer
-
Deprecated.
- copy() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
- copy() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
- createCommitter() - Method in class org.apache.flink.connector.kafka.sink.KafkaSink
-
- createDynamicTableSink(DynamicTableFactory.Context) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
-
- createDynamicTableSink(DynamicTableFactory.Context) - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory
-
- createDynamicTableSource(DynamicTableFactory.Context) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
-
- createDynamicTableSource(DynamicTableFactory.Context) - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory
-
- createEnumerator(SplitEnumeratorContext<KafkaPartitionSplit>) - Method in class org.apache.flink.connector.kafka.source.KafkaSource
-
- createFetcher(SourceFunction.SourceContext<T>, Map<KafkaTopicPartition, Long>, SerializedValue<WatermarkStrategy<T>>, StreamingRuntimeContext, OffsetCommitMode, MetricGroup, boolean) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
-
Deprecated.
- createFetcher(SourceFunction.SourceContext<T>, Map<KafkaTopicPartition, Long>, SerializedValue<WatermarkStrategy<T>>, StreamingRuntimeContext, OffsetCommitMode, MetricGroup, boolean) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
Creates the fetcher that connects to the Kafka brokers, pulls data, deserializes the data, and
emits it into the data streams.
- createFetcher(SourceFunction.SourceContext<T>, Map<KafkaTopicPartition, Long>, SerializedValue<WatermarkStrategy<T>>, StreamingRuntimeContext, OffsetCommitMode, MetricGroup, boolean) - Method in class org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffleConsumer
-
- createInstance() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer
-
Deprecated.
- createInstance() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHintSerializer
-
Deprecated.
- createInstance() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.TransactionStateSerializer
-
Deprecated.
- createKafkaPartitionHandle(KafkaTopicPartition) - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
-
Creates the Kafka version specific representation of the given topic partition.
- createKafkaPartitionHandle(KafkaTopicPartition) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher
-
- createKafkaSource(DeserializationSchema<RowData>, DeserializationSchema<RowData>, TypeInformation<RowData>) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
- createKafkaTableSink(DataType, EncodingFormat<SerializationSchema<RowData>>, EncodingFormat<SerializationSchema<RowData>>, int[], int[], String, String, Properties, FlinkKafkaPartitioner<RowData>, DeliveryGuarantee, Integer, String) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
-
- createKafkaTableSource(DataType, DecodingFormat<DeserializationSchema<RowData>>, DecodingFormat<DeserializationSchema<RowData>>, int[], int[], String, List<String>, Pattern, Properties, StartupMode, Map<KafkaTopicPartition, Long>, long, BoundedMode, Map<KafkaTopicPartition, Long>, long, String) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
-
- createPartitionDiscoverer(KafkaTopicsDescriptor, int, int) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
-
Deprecated.
- createPartitionDiscoverer(KafkaTopicsDescriptor, int, int) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
Creates the partition discoverer that is used to find new partitions for this subtask.
- createProducer() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
- createReader(SourceReaderContext) - Method in class org.apache.flink.connector.kafka.source.KafkaSource
-
- createRuntimeDecoder(DynamicTableSource.Context, DataType) - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.DecodingFormatWrapper
-
- createRuntimeEncoder(DynamicTableSink.Context, DataType) - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.EncodingFormatWrapper
-
- createWriter(Sink.InitContext) - Method in class org.apache.flink.connector.kafka.sink.KafkaSink
-
- CURRENT_OFFSET_METRIC_GAUGE - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
-
- CURRENT_OFFSETS_METRICS_GAUGE - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaConsumerMetricConstants
-
- factoryIdentifier() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
-
- factoryIdentifier() - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory
-
- fetch() - Method in class org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
-
- fetchOffsetsWithTimestamp(Collection<KafkaTopicPartition>, long) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
-
Deprecated.
- fetchOffsetsWithTimestamp(Collection<KafkaTopicPartition>, long) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
- finishRecoveringContext(Collection<FlinkKafkaProducer.KafkaTransactionState>) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
- FlinkFixedPartitioner<T> - Class in org.apache.flink.streaming.connectors.kafka.partitioner
-
A partitioner ensuring that each internal Flink partition ends up in one Kafka partition.
- FlinkFixedPartitioner() - Constructor for class org.apache.flink.streaming.connectors.kafka.partitioner.FlinkFixedPartitioner
-
- FlinkKafkaConsumer<T> - Class in org.apache.flink.streaming.connectors.kafka
-
Deprecated.
- FlinkKafkaConsumer(String, DeserializationSchema<T>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
-
Deprecated.
Creates a new Kafka streaming source consumer.
- FlinkKafkaConsumer(String, KafkaDeserializationSchema<T>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
-
Deprecated.
Creates a new Kafka streaming source consumer.
- FlinkKafkaConsumer(List<String>, DeserializationSchema<T>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
-
Deprecated.
Creates a new Kafka streaming source consumer.
- FlinkKafkaConsumer(List<String>, KafkaDeserializationSchema<T>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
-
Deprecated.
Creates a new Kafka streaming source consumer.
- FlinkKafkaConsumer(Pattern, DeserializationSchema<T>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
-
Deprecated.
Creates a new Kafka streaming source consumer.
- FlinkKafkaConsumer(Pattern, KafkaDeserializationSchema<T>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
-
Deprecated.
Creates a new Kafka streaming source consumer.
- FlinkKafkaConsumerBase<T> - Class in org.apache.flink.streaming.connectors.kafka
-
Base class of all Flink Kafka Consumer data sources.
- FlinkKafkaConsumerBase(List<String>, Pattern, KafkaDeserializationSchema<T>, long, boolean) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
Base constructor.
- FlinkKafkaErrorCode - Enum in org.apache.flink.streaming.connectors.kafka
-
- FlinkKafkaException - Exception in org.apache.flink.streaming.connectors.kafka
-
- FlinkKafkaException(FlinkKafkaErrorCode, String) - Constructor for exception org.apache.flink.streaming.connectors.kafka.FlinkKafkaException
-
- FlinkKafkaException(FlinkKafkaErrorCode, String, Throwable) - Constructor for exception org.apache.flink.streaming.connectors.kafka.FlinkKafkaException
-
- FlinkKafkaInternalProducer<K,V> - Class in org.apache.flink.streaming.connectors.kafka.internals
-
Internal Flink Kafka producer.
- FlinkKafkaInternalProducer(Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
-
- flinkKafkaPartitioner - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
User-provided partitioner for assigning an object to a Kafka partition for each topic.
- FlinkKafkaPartitioner<T> - Class in org.apache.flink.streaming.connectors.kafka.partitioner
-
A FlinkKafkaPartitioner wraps logic on how to partition records across partitions of
multiple Kafka topics.
- FlinkKafkaPartitioner() - Constructor for class org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner
-
- FlinkKafkaProducer<IN> - Class in org.apache.flink.streaming.connectors.kafka
-
- FlinkKafkaProducer(String, String, SerializationSchema<IN>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
Creates a FlinkKafkaProducer for a given topic.
- FlinkKafkaProducer(String, SerializationSchema<IN>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
Creates a FlinkKafkaProducer for a given topic.
- FlinkKafkaProducer(String, SerializationSchema<IN>, Properties, Optional<FlinkKafkaPartitioner<IN>>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
Creates a FlinkKafkaProducer for a given topic.
- FlinkKafkaProducer(String, SerializationSchema<IN>, Properties, FlinkKafkaPartitioner<IN>, FlinkKafkaProducer.Semantic, int) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
Creates a FlinkKafkaProducer for a given topic.
- FlinkKafkaProducer(String, String, KeyedSerializationSchema<IN>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
- FlinkKafkaProducer(String, KeyedSerializationSchema<IN>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
- FlinkKafkaProducer(String, KeyedSerializationSchema<IN>, Properties, FlinkKafkaProducer.Semantic) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
- FlinkKafkaProducer(String, KeyedSerializationSchema<IN>, Properties, Optional<FlinkKafkaPartitioner<IN>>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
- FlinkKafkaProducer(String, KeyedSerializationSchema<IN>, Properties, Optional<FlinkKafkaPartitioner<IN>>, FlinkKafkaProducer.Semantic, int) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
- FlinkKafkaProducer(String, KafkaSerializationSchema<IN>, Properties, FlinkKafkaProducer.Semantic) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
- FlinkKafkaProducer(String, KafkaSerializationSchema<IN>, Properties, FlinkKafkaProducer.Semantic, int) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
Creates a FlinkKafkaProducer for a given topic.
- FlinkKafkaProducer.ContextStateSerializer - Class in org.apache.flink.streaming.connectors.kafka
-
Deprecated.
- FlinkKafkaProducer.ContextStateSerializer.ContextStateSerializerSnapshot - Class in org.apache.flink.streaming.connectors.kafka
-
Deprecated.
Serializer configuration snapshot for compatibility and format evolution.
- FlinkKafkaProducer.KafkaTransactionContext - Class in org.apache.flink.streaming.connectors.kafka
-
Deprecated.
- FlinkKafkaProducer.KafkaTransactionState - Class in org.apache.flink.streaming.connectors.kafka
-
Deprecated.
State for handling transactions.
- FlinkKafkaProducer.NextTransactionalIdHint - Class in org.apache.flink.streaming.connectors.kafka
-
Deprecated.
Keeps the information required to deduce the next safe-to-use transactional id.
- FlinkKafkaProducer.NextTransactionalIdHintSerializer - Class in org.apache.flink.streaming.connectors.kafka
-
Deprecated.
- FlinkKafkaProducer.NextTransactionalIdHintSerializer.NextTransactionalIdHintSerializerSnapshot - Class in org.apache.flink.streaming.connectors.kafka
-
Deprecated.
Serializer configuration snapshot for compatibility and format evolution.
- FlinkKafkaProducer.Semantic - Enum in org.apache.flink.streaming.connectors.kafka
-
Deprecated.
Semantics that can be chosen.
- FlinkKafkaProducer.TransactionStateSerializer - Class in org.apache.flink.streaming.connectors.kafka
-
Deprecated.
- FlinkKafkaProducer.TransactionStateSerializer.TransactionStateSerializerSnapshot - Class in org.apache.flink.streaming.connectors.kafka
-
Deprecated.
Serializer configuration snapshot for compatibility and format evolution.
- FlinkKafkaProducer011 - Class in org.apache.flink.streaming.connectors.kafka
-
Compatibility class to make migration possible from the 0.11 connector to the universal one.
- FlinkKafkaProducer011() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011
-
- FlinkKafkaProducer011.ContextStateSerializer - Class in org.apache.flink.streaming.connectors.kafka
-
- FlinkKafkaProducer011.ContextStateSerializer.ContextStateSerializerSnapshot - Class in org.apache.flink.streaming.connectors.kafka
-
- FlinkKafkaProducer011.NextTransactionalIdHint - Class in org.apache.flink.streaming.connectors.kafka
-
- FlinkKafkaProducer011.NextTransactionalIdHintSerializer - Class in org.apache.flink.streaming.connectors.kafka
-
- FlinkKafkaProducer011.NextTransactionalIdHintSerializer.NextTransactionalIdHintSerializerSnapshot - Class in org.apache.flink.streaming.connectors.kafka
-
- FlinkKafkaProducer011.TransactionStateSerializer - Class in org.apache.flink.streaming.connectors.kafka
-
- FlinkKafkaProducer011.TransactionStateSerializer.TransactionStateSerializerSnapshot - Class in org.apache.flink.streaming.connectors.kafka
-
- FlinkKafkaProducerBase<IN> - Class in org.apache.flink.streaming.connectors.kafka
-
Flink Sink to produce data into a Kafka topic.
- FlinkKafkaProducerBase(String, KeyedSerializationSchema<IN>, Properties, FlinkKafkaPartitioner<IN>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
The main constructor for creating a FlinkKafkaProducer.
- FlinkKafkaShuffle - Class in org.apache.flink.streaming.connectors.kafka.shuffle
-
FlinkKafkaShuffle uses Kafka as a message bus to shuffle and persist data at the same
time.
- FlinkKafkaShuffle() - Constructor for class org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffle
-
- FlinkKafkaShuffleConsumer<T> - Class in org.apache.flink.streaming.connectors.kafka.shuffle
-
Flink Kafka Shuffle Consumer Function.
- FlinkKafkaShuffleProducer<IN,KEY> - Class in org.apache.flink.streaming.connectors.kafka.shuffle
-
Flink Kafka Shuffle Producer Function.
- FlinkKafkaShuffleProducer.KafkaSerializer<IN> - Class in org.apache.flink.streaming.connectors.kafka.shuffle
-
Flink Kafka Shuffle Serializer.
- flush() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
Flush pending records.
- flush() - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
-
- flushMode - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Sink buffer flush config, which is currently only supported in upsert mode.
- flushOnCheckpoint - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
If true, the producer will wait until all outstanding records have been sent to the broker.
- forwardOptions() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
-
- fromConfiguration(boolean, boolean, boolean) - Static method in class org.apache.flink.streaming.connectors.kafka.config.OffsetCommitModes
-
Determine the offset commit mode using several configuration values.
- generateIdsToAbort() - Method in class org.apache.flink.streaming.connectors.kafka.internals.TransactionalIdsGenerator
-
If we have to abort previous transactional ids after a restart from a failure BEFORE the first
checkpoint completed, we don't know what parallelism was used in the previous attempt.
- generateIdsToUse(long) - Method in class org.apache.flink.streaming.connectors.kafka.internals.TransactionalIdsGenerator
-
The range of available transactional ids to use is [nextFreeTransactionalId,
nextFreeTransactionalId + parallelism * kafkaProducersPoolSize); the loop below picks, in a
deterministic way, a subrange of those available transactional ids based on the index of this
subtask.
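The subrange selection described in the entry above can be sketched as simple arithmetic: each subtask takes a pool-sized slice out of the overall range, offset by its own index. The names below follow the entry's wording; this is an illustration of the described scheme, not the connector's actual implementation.

```java
// Hedged sketch of the deterministic subrange selection described for
// generateIdsToUse: out of the range
// [nextFreeTransactionalId, nextFreeTransactionalId + parallelism * poolSize),
// subtask i takes the poolSize ids starting at nextFreeTransactionalId + i * poolSize.
public class TransactionalIdRangeSketch {
    static long[] idsToUse(long nextFreeTransactionalId, int parallelism, int poolSize, int subtaskIndex) {
        long start = nextFreeTransactionalId + (long) subtaskIndex * poolSize;
        long[] ids = new long[poolSize];
        for (int i = 0; i < poolSize; i++) {
            ids[i] = start + i;  // subtask's private, non-overlapping slice
        }
        return ids;
    }

    public static void main(String[] args) {
        // With parallelism 2, pool size 3, and range starting at 100,
        // subtask 1 gets ids 103, 104, 105.
        long[] ids = TransactionalIdRangeSketch.idsToUse(100, 2, 3, 1);
        System.out.println(java.util.Arrays.toString(ids)); // prints [103, 104, 105]
    }
}
```

Because the slices are disjoint by construction, no two subtasks can ever pick the same transactional id for the same range.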
- getAllPartitionsForTopics(List<String>) - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer
-
Fetch the list of all partitions for a specific list of topics from Kafka.
- getAllPartitionsForTopics(List<String>) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaPartitionDiscoverer
-
- getAllTopics() - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer
-
Fetch the list of all topics from Kafka.
- getAllTopics() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaPartitionDiscoverer
-
- getAutoOffsetResetStrategy() - Method in class org.apache.flink.connector.kafka.source.enumerator.initializer.NoStoppingOffsetsInitializer
-
- getAutoOffsetResetStrategy() - Method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
-
Get the auto offset reset strategy in case the initialized offsets fall out of the range.
- getBatchBlocking() - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
-
Gets all the elements found in the list, or blocks until at least one element was added.
- getBatchBlocking(long) - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
-
Gets all the elements found in the list, or blocks until at least one element was added.
- getBatchIntervalMs() - Method in class org.apache.flink.streaming.connectors.kafka.table.SinkBufferFlushMode
-
- getBatchSize() - Method in class org.apache.flink.streaming.connectors.kafka.table.SinkBufferFlushMode
-
- getBoundedness() - Method in class org.apache.flink.connector.kafka.source.KafkaSource
-
- getChangelogMode(ChangelogMode) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
- getChangelogMode() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
- getChangelogMode() - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.DecodingFormatWrapper
-
- getChangelogMode() - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.EncodingFormatWrapper
-
- getCommittableSerializer() - Method in class org.apache.flink.connector.kafka.sink.KafkaSink
-
- getCommittedOffset() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
-
- getCurrentOffset() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplitState
-
- getDescription() - Method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanBoundedMode
-
- getDescription() - Method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanStartupMode
-
- getElementBlocking() - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
-
Returns the next element in the queue.
- getElementBlocking(long) - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
-
Returns the next element in the queue.
- getEnableCommitOnCheckpoints() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
- getEnum(String) - Static method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
-
- getEnumeratorCheckpointSerializer() - Method in class org.apache.flink.connector.kafka.source.KafkaSource
-
- getEpoch() - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
-
- getErrorCode() - Method in exception org.apache.flink.streaming.connectors.kafka.FlinkKafkaException
-
- getFetcherName() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher
-
Gets the name of this fetcher, for thread naming and logging purposes.
- getFetcherName() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher
-
- getField(Object, String) - Static method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
-
Gets and returns the field fieldName from the given Object object using
reflection.
- getFixedTopics() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicsDescriptor
-
- getIsAutoCommitEnabled() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
-
Deprecated.
- getIsAutoCommitEnabled() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
- getKafkaMetric(Map<MetricName, ? extends Metric>, String, String) - Static method in class org.apache.flink.connector.kafka.MetricUtil
-
Tries to find the Kafka Metric in the provided metrics.
- getKafkaMetric(Map<MetricName, ? extends Metric>, Predicate<Map.Entry<MetricName, ? extends Metric>>) - Static method in class org.apache.flink.connector.kafka.MetricUtil
-
Tries to find the Kafka Metric in the provided metrics matching a given filter.
- getKafkaPartitionHandle() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
-
Gets Kafka's descriptor for the Kafka Partition.
- getKafkaProducer(Properties) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
Used for testing only.
- getKafkaProducerConfig() - Method in class org.apache.flink.connector.kafka.sink.KafkaSink
-
- getKafkaTopicPartition() - 类 中的方法org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
-
Gets Flink's descriptor for the Kafka Partition.
- getLeader() - 类 中的方法org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionLeader
-
- getLength() - 类 中的方法org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer
-
已过时。
- getLength() - 类 中的方法org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHintSerializer
-
已过时。
- getLength() - 类 中的方法org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.TransactionStateSerializer
-
已过时。
- getNumberOfParallelInstances() - 类 中的方法org.apache.flink.connector.kafka.sink.DefaultKafkaSinkContext
-
- getNumberOfParallelInstances() - 接口 中的方法org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema.KafkaSinkContext
-
- getOffset() - 类 中的方法org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
-
The current offset in the partition.
- getOption(Properties, ConfigOption<?>, Function<String, T>) - 类 中的静态方法org.apache.flink.connector.kafka.source.KafkaSourceOptions
-
- getParallelInstanceId() - 类 中的方法org.apache.flink.connector.kafka.sink.DefaultKafkaSinkContext
-
- getParallelInstanceId() - 接口 中的方法org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema.KafkaSinkContext
-
Get the ID of the subtask the KafkaSink is running on.
- getPartition() - 类 中的方法org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
-
- getPartition() - 类 中的方法org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition
-
- getPartition() - 类 中的方法org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
-
- getPartitionOffsets(Collection<TopicPartition>, OffsetsInitializer.PartitionOffsetsRetriever) - 类 中的方法org.apache.flink.connector.kafka.source.enumerator.initializer.NoStoppingOffsetsInitializer
-
- getPartitionOffsets(Collection<TopicPartition>, OffsetsInitializer.PartitionOffsetsRetriever) - 接口 中的方法org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
-
Get the initial offsets for the given Kafka partitions.
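The OffsetsInitializer interface above is usually obtained from its static factory methods rather than implemented by hand. A minimal sketch of the common starting-offset choices (the timestamp value is illustrative; running this requires the Flink Kafka connector on the classpath):

```java
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.kafka.clients.consumer.OffsetResetStrategy;

public class OffsetsInitializerSketch {
    public static void main(String[] args) {
        // Start from the earliest available offset of each partition.
        OffsetsInitializer earliest = OffsetsInitializer.earliest();

        // Start from committed consumer-group offsets, falling back to LATEST
        // for partitions that have no committed offset yet.
        OffsetsInitializer committed =
                OffsetsInitializer.committedOffsets(OffsetResetStrategy.LATEST);

        // Start from the first record whose timestamp is >= the given epoch millis
        // (illustrative value).
        OffsetsInitializer byTimestamp = OffsetsInitializer.timestamp(1_600_000_000_000L);
    }
}
```

These initializers are passed to KafkaSourceBuilder.setStartingOffsets (or setBounded/setUnbounded for stopping offsets).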
- getPartitionsByTopic(String, Producer<byte[], byte[]>) - Static method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
- getPartitionsByTopic(String, KafkaProducer<byte[], byte[]>) - Static method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
- getPartitionSetSubscriber(Set<TopicPartition>) - Static method in interface org.apache.flink.connector.kafka.source.enumerator.subscriber.KafkaSubscriber
-
- getPartitionsForTopic(String) - Method in class org.apache.flink.connector.kafka.sink.DefaultKafkaSinkContext
-
- getPartitionsForTopic(String) - Method in interface org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema.KafkaSinkContext
-
For a given topic id, retrieve the available partitions.
- getProducedType() - Method in class org.apache.flink.connector.kafka.source.KafkaSource
-
- getProducedType() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
- getProducedType() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaDeserializationSchemaWrapper
-
- getProducedType() - Method in class org.apache.flink.streaming.util.serialization.JSONKeyValueDeserializationSchema
-
- getProducedType() - Method in class org.apache.flink.streaming.util.serialization.TypeInformationKeyValueSerializationSchema
-
- getProducer() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.KafkaTransactionState
-
Deprecated.
- getProducerId() - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
-
- getPropertiesFromBrokerList(String) - Static method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
- getScanRuntimeProvider(ScanTableSource.ScanContext) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
- getSerializationSchema() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KeyedSerializationSchemaWrapper
-
- getSinkRuntimeProvider(DynamicTableSink.Context) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
- getSplitSerializer() - Method in class org.apache.flink.connector.kafka.source.KafkaSource
-
- getStartingOffset() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
-
- getStateSentinel() - Method in enum org.apache.flink.streaming.connectors.kafka.config.StartupMode
-
- getStoppingOffset() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
-
- getSubscribedTopicPartitions(AdminClient) - Method in interface org.apache.flink.connector.kafka.source.enumerator.subscriber.KafkaSubscriber
-
Get a set of subscribed TopicPartitions.
- getSubtask() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher.KafkaShuffleWatermark
-
- getTargetTopic(T) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaSerializationSchemaWrapper
-
- getTargetTopic(T) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KeyedSerializationSchemaWrapper
-
- getTargetTopic(T) - Method in interface org.apache.flink.streaming.connectors.kafka.KafkaContextAware
-
Returns the topic that the presented element should be sent to.
- getTargetTopic(T) - Method in interface org.apache.flink.streaming.util.serialization.KeyedSerializationSchema
-
Deprecated.
Optional method to determine the target topic for the element.
- getTargetTopic(Tuple2<K, V>) - Method in class org.apache.flink.streaming.util.serialization.TypeInformationKeyValueSerializationSchema
-
- getTimestamp() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher.KafkaShuffleRecord
-
- getTopic() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
-
- getTopic() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition
-
- getTopic() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
-
- getTopicListSubscriber(List<String>) - Static method in interface org.apache.flink.connector.kafka.source.enumerator.subscriber.KafkaSubscriber
-
- getTopicPartition() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
-
- getTopicPartition() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionLeader
-
- getTopicPatternSubscriber(Pattern) - Static method in interface org.apache.flink.connector.kafka.source.enumerator.subscriber.KafkaSubscriber
-
- getTransactionalId() - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
-
- getTransactionCoordinatorId() - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
-
- getTransactionTimeout(Properties) - Static method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
- getValue() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher.KafkaShuffleRecord
-
- getValue() - Method in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaMetricMutableWrapper
-
- getValue() - Method in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaMetricWrapper
-
- getVersion() - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumStateSerializer
-
- getVersion() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplitSerializer
-
- getWatermark() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher.KafkaShuffleWatermark
-
- getWriterStateSerializer() - Method in class org.apache.flink.connector.kafka.sink.KafkaSink
-
- GROUP_OFFSET - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateSentinel
-
Magic number indicating that the partition should start from its committed group offset in
Kafka.
- KAFKA_CONSUMER_METRIC_GROUP - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
-
- KAFKA_CONSUMER_METRICS_GROUP - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaConsumerMetricConstants
-
- KAFKA_SOURCE_READER_METRIC_GROUP - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
-
- KafkaCommitCallback - Interface in org.apache.flink.streaming.connectors.kafka.internals
-
A callback interface that the source operator can implement to trigger custom actions when a
commit request completes, which should normally be triggered from a checkpoint complete event.
- KafkaConnectorOptions - Class in org.apache.flink.streaming.connectors.kafka.table
-
Options for the Kafka connector.
- KafkaConnectorOptions.ScanBoundedMode - Enum in org.apache.flink.streaming.connectors.kafka.table
-
- KafkaConnectorOptions.ScanStartupMode - Enum in org.apache.flink.streaming.connectors.kafka.table
-
- KafkaConnectorOptions.ValueFieldsStrategy - Enum in org.apache.flink.streaming.connectors.kafka.table
-
Strategies to derive the data type of a value format by considering a key format.
- KafkaConsumerMetricConstants - Class in org.apache.flink.streaming.connectors.kafka.internals.metrics
-
A collection of Kafka consumer metrics related constant strings.
- KafkaConsumerMetricConstants() - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaConsumerMetricConstants
-
- KafkaConsumerThread<T> - Class in org.apache.flink.streaming.connectors.kafka.internals
-
The thread that runs the KafkaConsumer, connecting to the brokers and polling records.
- KafkaConsumerThread(Logger, Handover, Properties, ClosableBlockingQueue<KafkaTopicPartitionState<T, TopicPartition>>, String, long, boolean, MetricGroup, MetricGroup) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaConsumerThread
-
- KafkaContextAware<T> - Interface in org.apache.flink.streaming.connectors.kafka
-
An interface for KafkaSerializationSchemas that need information about the context in which
the Kafka Producer is running, along with information about the available partitions.
- KafkaDeserializationSchema<T> - Interface in org.apache.flink.streaming.connectors.kafka
-
The deserialization schema describes how to turn the Kafka ConsumerRecords into data types
(Java/Scala objects) that are processed by Flink.
- KafkaDeserializationSchemaWrapper<T> - Class in org.apache.flink.streaming.connectors.kafka.internals
-
A simple wrapper for using the DeserializationSchema with the KafkaDeserializationSchema
interface.
- KafkaDeserializationSchemaWrapper(DeserializationSchema<T>) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaDeserializationSchemaWrapper
-
- KafkaDynamicSink - Class in org.apache.flink.streaming.connectors.kafka.table
-
A version-agnostic Kafka DynamicTableSink.
- KafkaDynamicSink(DataType, DataType, EncodingFormat<SerializationSchema<RowData>>, EncodingFormat<SerializationSchema<RowData>>, int[], int[], String, String, Properties, FlinkKafkaPartitioner<RowData>, DeliveryGuarantee, boolean, SinkBufferFlushMode, Integer, String) - Constructor for class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
- KafkaDynamicSource - Class in org.apache.flink.streaming.connectors.kafka.table
-
A version-agnostic Kafka ScanTableSource.
- KafkaDynamicSource(DataType, DecodingFormat<DeserializationSchema<RowData>>, DecodingFormat<DeserializationSchema<RowData>>, int[], int[], String, List<String>, Pattern, Properties, StartupMode, Map<KafkaTopicPartition, Long>, long, BoundedMode, Map<KafkaTopicPartition, Long>, long, boolean, String) - Constructor for class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
- KafkaDynamicTableFactory - Class in org.apache.flink.streaming.connectors.kafka.table
-
- KafkaDynamicTableFactory() - Constructor for class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
-
- KafkaFetcher<T> - Class in org.apache.flink.streaming.connectors.kafka.internals
-
A fetcher that fetches data from Kafka brokers via the Kafka consumer API.
- KafkaFetcher(SourceFunction.SourceContext<T>, Map<KafkaTopicPartition, Long>, SerializedValue<WatermarkStrategy<T>>, ProcessingTimeService, long, ClassLoader, String, KafkaDeserializationSchema<T>, Properties, long, MetricGroup, MetricGroup, boolean) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher
-
- KafkaMetricMutableWrapper - Class in org.apache.flink.streaming.connectors.kafka.internals.metrics
-
Gauge for getting the current value of a Kafka metric.
- KafkaMetricMutableWrapper(Metric) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaMetricMutableWrapper
-
- KafkaMetricWrapper - Class in org.apache.flink.streaming.connectors.kafka.internals.metrics
-
Gauge for getting the current value of a Kafka metric.
- KafkaMetricWrapper(Metric) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaMetricWrapper
-
- KafkaPartitionDiscoverer - Class in org.apache.flink.streaming.connectors.kafka.internals
-
A partition discoverer that can be used to discover topics and partitions metadata from Kafka
brokers via the Kafka high-level consumer API.
- KafkaPartitionDiscoverer(KafkaTopicsDescriptor, int, int, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaPartitionDiscoverer
-
- KafkaPartitionSplit - Class in org.apache.flink.connector.kafka.source.split
-
A SourceSplit for a Kafka partition.
- KafkaPartitionSplit(TopicPartition, long) - Constructor for class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
-
- KafkaPartitionSplit(TopicPartition, long, long) - Constructor for class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
-
- KafkaPartitionSplitReader - Class in org.apache.flink.connector.kafka.source.reader
-
A SplitReader implementation that reads records from Kafka partitions.
- KafkaPartitionSplitReader(Properties, SourceReaderContext, KafkaSourceReaderMetrics) - Constructor for class org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
-
- KafkaPartitionSplitSerializer - Class in org.apache.flink.connector.kafka.source.split
-
- KafkaPartitionSplitSerializer() - Constructor for class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplitSerializer
-
- KafkaPartitionSplitState - Class in org.apache.flink.connector.kafka.source.split
-
This class extends KafkaPartitionSplit to track a mutable current offset.
- KafkaPartitionSplitState(KafkaPartitionSplit) - Constructor for class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplitState
-
- kafkaProducer - Variable in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
-
- KafkaRecordDeserializationSchema<T> - Interface in org.apache.flink.connector.kafka.source.reader.deserializer
-
An interface for the deserialization of Kafka records.
- KafkaRecordEmitter<T> - Class in org.apache.flink.connector.kafka.source.reader
-
- KafkaRecordEmitter(KafkaRecordDeserializationSchema<T>) - Constructor for class org.apache.flink.connector.kafka.source.reader.KafkaRecordEmitter
-
- KafkaRecordSerializationSchema<T> - Interface in org.apache.flink.connector.kafka.sink
-
A serialization schema which defines how to convert a value of type T to ProducerRecord.
- KafkaRecordSerializationSchema.KafkaSinkContext - Interface in org.apache.flink.connector.kafka.sink
-
Context providing information on the Kafka record's target location.
- KafkaRecordSerializationSchemaBuilder<IN> - Class in org.apache.flink.connector.kafka.sink
-
- KafkaRecordSerializationSchemaBuilder() - Constructor for class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
-
- KafkaSerializationSchema<T> - Interface in org.apache.flink.streaming.connectors.kafka
-
- KafkaSerializationSchemaWrapper<T> - Class in org.apache.flink.streaming.connectors.kafka.internals
-
- KafkaSerializationSchemaWrapper(String, FlinkKafkaPartitioner<T>, boolean, SerializationSchema<T>) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaSerializationSchemaWrapper
-
- KafkaShuffleElement() - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher.KafkaShuffleElement
-
- KafkaShuffleElementDeserializer(TypeSerializer<T>) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher.KafkaShuffleElementDeserializer
-
- KafkaShuffleFetcher<T> - Class in org.apache.flink.streaming.connectors.kafka.internals
-
Fetches data from Kafka for Kafka Shuffle.
- KafkaShuffleFetcher(SourceFunction.SourceContext<T>, Map<KafkaTopicPartition, Long>, SerializedValue<WatermarkStrategy<T>>, ProcessingTimeService, long, ClassLoader, String, KafkaDeserializationSchema<T>, Properties, long, MetricGroup, MetricGroup, boolean, TypeSerializer<T>, int) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher
-
- KafkaShuffleFetcher.KafkaShuffleElement - Class in org.apache.flink.streaming.connectors.kafka.internals
-
An element in a KafkaShuffle.
- KafkaShuffleFetcher.KafkaShuffleElementDeserializer<T> - Class in org.apache.flink.streaming.connectors.kafka.internals
-
Deserializer for KafkaShuffleElement.
- KafkaShuffleFetcher.KafkaShuffleRecord<T> - Class in org.apache.flink.streaming.connectors.kafka.internals
-
One value with type T in a KafkaShuffle.
- KafkaShuffleFetcher.KafkaShuffleWatermark - Class in org.apache.flink.streaming.connectors.kafka.internals
-
A watermark element in a KafkaShuffle.
- KafkaSink<IN> - Class in org.apache.flink.connector.kafka.sink
-
Flink Sink to produce data into a Kafka topic.
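A KafkaSink is assembled through its builder together with a KafkaRecordSerializationSchema. A minimal sketch (broker address and topic name are placeholders; the DeliveryGuarantee import path shown is the one used from Flink 1.15 on and may differ in older versions):

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;

public class KafkaSinkSketch {
    public static void main(String[] args) {
        KafkaSink<String> sink = KafkaSink.<String>builder()
                .setBootstrapServers("localhost:9092") // placeholder broker address
                .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                        .setTopic("output-topic") // placeholder topic
                        .setValueSerializationSchema(new SimpleStringSchema())
                        .build())
                .setDeliveryGuarantee(DeliveryGuarantee.AT_LEAST_ONCE)
                .build();
        // Attach to a DataStream<String> with stream.sinkTo(sink).
    }
}
```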
- KafkaSinkBuilder<IN> - Class in org.apache.flink.connector.kafka.sink
-
- KafkaSource<OUT> - Class in org.apache.flink.connector.kafka.source
-
The Source implementation for Kafka.
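KafkaSource is likewise built through its builder. A minimal sketch of a value-only string consumer (broker address, topic, and group id are placeholders):

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KafkaSourceSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")   // placeholder broker address
                .setTopics("input-topic")                // placeholder topic
                .setGroupId("my-consumer-group")         // placeholder group id
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        // fromSource wires in the watermark strategy; noWatermarks() for pure processing time.
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "Kafka Source")
           .print();
        env.execute("kafka-source-sketch");
    }
}
```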
- KafkaSourceBuilder<OUT> - org.apache.flink.connector.kafka.source中的类
-
- KafkaSourceEnumerator - org.apache.flink.connector.kafka.source.enumerator中的类
-
The enumerator class for Kafka source.
- KafkaSourceEnumerator(KafkaSubscriber, OffsetsInitializer, OffsetsInitializer, Properties, SplitEnumeratorContext<KafkaPartitionSplit>, Boundedness) - 类 的构造器org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
-
- KafkaSourceEnumerator(KafkaSubscriber, OffsetsInitializer, OffsetsInitializer, Properties, SplitEnumeratorContext<KafkaPartitionSplit>, Boundedness, Set<TopicPartition>) - 类 的构造器org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
-
- KafkaSourceEnumerator.PartitionOffsetsRetrieverImpl - org.apache.flink.connector.kafka.source.enumerator中的类
-
The implementation for offsets retriever with a consumer and an admin client.
- KafkaSourceEnumState - org.apache.flink.connector.kafka.source.enumerator中的类
-
The state of Kafka source enumerator.
- KafkaSourceEnumStateSerializer - org.apache.flink.connector.kafka.source.enumerator中的类
-
The Serializer for the enumerator
state of Kafka source.
- KafkaSourceEnumStateSerializer() - 类 的构造器org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumStateSerializer
-
- KafkaSourceFetcherManager - org.apache.flink.connector.kafka.source.reader.fetcher中的类
-
The SplitFetcherManager for Kafka source.
- KafkaSourceFetcherManager(FutureCompletingBlockingQueue<RecordsWithSplitIds<ConsumerRecord<byte[], byte[]>>>, Supplier<SplitReader<ConsumerRecord<byte[], byte[]>, KafkaPartitionSplit>>, Consumer<Collection<String>>) - 类 的构造器org.apache.flink.connector.kafka.source.reader.fetcher.KafkaSourceFetcherManager
-
Creates a new SplitFetcherManager with a single I/O threads.
- KafkaSourceOptions - org.apache.flink.connector.kafka.source中的类
-
Configurations for KafkaSource.
- KafkaSourceOptions() - 类 的构造器org.apache.flink.connector.kafka.source.KafkaSourceOptions
-
- KafkaSourceReader<T> - org.apache.flink.connector.kafka.source.reader中的类
-
The source reader for Kafka partitions.
- KafkaSourceReader(FutureCompletingBlockingQueue<RecordsWithSplitIds<ConsumerRecord<byte[], byte[]>>>, KafkaSourceFetcherManager, RecordEmitter<ConsumerRecord<byte[], byte[]>, T, KafkaPartitionSplitState>, Configuration, SourceReaderContext, KafkaSourceReaderMetrics) - 类 的构造器org.apache.flink.connector.kafka.source.reader.KafkaSourceReader
-
- KafkaSourceReaderMetrics - org.apache.flink.connector.kafka.source.metrics中的类
-
- KafkaSourceReaderMetrics(SourceReaderMetricGroup) - 类 的构造器org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
-
- KafkaSubscriber - org.apache.flink.connector.kafka.source.enumerator.subscriber中的接口
-
Kafka consumer allows a few different ways to consume from the topics, including:
Subscribe from a collection of topics.
- KafkaTopicPartition - org.apache.flink.streaming.connectors.kafka.internals中的类
-
Flink's description of a partition in a Kafka topic.
- KafkaTopicPartition(String, int) - 类 的构造器org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition
-
- KafkaTopicPartition.Comparator - org.apache.flink.streaming.connectors.kafka.internals中的类
-
- KafkaTopicPartitionAssigner - org.apache.flink.streaming.connectors.kafka.internals中的类
-
Utility for assigning Kafka partitions to consumer subtasks.
- KafkaTopicPartitionAssigner() - 类 的构造器org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionAssigner
-
- KafkaTopicPartitionLeader - org.apache.flink.streaming.connectors.kafka.internals中的类
-
Serializable Topic Partition info with leader Node information.
- KafkaTopicPartitionLeader(KafkaTopicPartition, Node) - 类 的构造器org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionLeader
-
- KafkaTopicPartitionState<T,KPH> - org.apache.flink.streaming.connectors.kafka.internals中的类
-
The state that the Flink Kafka Consumer holds for each Kafka partition.
- KafkaTopicPartitionState(KafkaTopicPartition, KPH) - 类 的构造器org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
-
- KafkaTopicPartitionStateSentinel - org.apache.flink.streaming.connectors.kafka.internals中的类
-
Magic values used to represent special offset states before partitions are actually read.
- KafkaTopicPartitionStateSentinel() - 类 的构造器org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateSentinel
-
- KafkaTopicPartitionStateWithWatermarkGenerator<T,KPH> - org.apache.flink.streaming.connectors.kafka.internals中的类
-
A special version of the per-kafka-partition-state that additionally holds a TimestampAssigner, WatermarkGenerator, an immediate WatermarkOutput, and a
deferred WatermarkOutput for this partition.
- KafkaTopicPartitionStateWithWatermarkGenerator(KafkaTopicPartition, KPH, TimestampAssigner<T>, WatermarkGenerator<T>, WatermarkOutput, WatermarkOutput) - 类 的构造器org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateWithWatermarkGenerator
-
- KafkaTopicsDescriptor - org.apache.flink.streaming.connectors.kafka.internals中的类
-
A Kafka Topics Descriptor describes how the consumer subscribes to Kafka topics - either a fixed
list of topics, or a topic pattern.
- KafkaTopicsDescriptor(List<String>, Pattern) - 类 的构造器org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicsDescriptor
-
- KafkaTransactionContext(Set<String>) - 类 的构造器org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.KafkaTransactionContext
-
已过时。
- KafkaTransactionState(String, FlinkKafkaInternalProducer<byte[], byte[]>) - 类 的构造器org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.KafkaTransactionState
-
已过时。
- KafkaTransactionState(FlinkKafkaInternalProducer<byte[], byte[]>) - 类 的构造器org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.KafkaTransactionState
-
已过时。
- KafkaTransactionState(String, long, short, FlinkKafkaInternalProducer<byte[], byte[]>) - 类 的构造器org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.KafkaTransactionState
-
已过时。
- KEY_DISABLE_METRICS - 类 中的静态变量org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
Boolean configuration key to disable metrics tracking
- KEY_DISABLE_METRICS - 类 中的静态变量org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
已过时。
Configuration key for disabling the metrics reporting.
- KEY_DISABLE_METRICS - 类 中的静态变量org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
Configuration key for disabling the metrics reporting.
- KEY_FIELDS - 类 中的静态变量org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
-
- KEY_FIELDS_PREFIX - 类 中的静态变量org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
-
- KEY_FORMAT - 类 中的静态变量org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
-
- KEY_PARTITION_DISCOVERY_INTERVAL_MILLIS - 类 中的静态变量org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
Configuration key to define the consumer's partition discovery interval, in milliseconds.
- KEY_POLL_TIMEOUT - 类 中的静态变量org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
-
已过时。
Configuration key to change the polling timeout
- keyDecodingFormat - 类 中的变量org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Optional format for decoding keys from Kafka.
- KeyedDeserializationSchema<T> - org.apache.flink.streaming.util.serialization中的接口
-
- KeyedSerializationSchema<T> - org.apache.flink.streaming.util.serialization中的接口
-
- KeyedSerializationSchemaWrapper<T> - org.apache.flink.streaming.connectors.kafka.internals中的类
-
A simple wrapper for using the SerializationSchema with the KeyedSerializationSchema interface.
- KeyedSerializationSchemaWrapper(SerializationSchema<T>) - 类 的构造器org.apache.flink.streaming.connectors.kafka.internals.KeyedSerializationSchemaWrapper
-
- keyEncodingFormat - 类 中的变量org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Optional format for encoding keys to Kafka.
- keyPrefix - 类 中的变量org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Prefix that needs to be removed from fields when constructing the physical data type.
- keyPrefix - 类 中的变量org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Prefix that needs to be removed from fields when constructing the physical data type.
- keyProjection - 类 中的变量org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Indices that determine the key fields and the source position in the consumed row.
- keyProjection - 类 中的变量org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Indices that determine the key fields and the target position in the produced row.
- parallelism - 类 中的变量org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Parallelism of the physical Kafka producer
- partition(T, byte[], byte[], String, int[]) - 类 中的方法org.apache.flink.streaming.connectors.kafka.partitioner.FlinkFixedPartitioner
-
- partition(T, byte[], byte[], String, int[]) - 类 中的方法org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner
-
Determine the id of the partition that the record should be written to.
- PARTITION_DISCOVERY_DISABLED - 类 中的静态变量org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
The default interval to execute partition discovery, in milliseconds (Long.MIN_VALUE,
i.e. disabled by default).
- PARTITION_DISCOVERY_INTERVAL_MS - 类 中的静态变量org.apache.flink.connector.kafka.source.KafkaSourceOptions
-
- PARTITION_GROUP - 类 中的静态变量org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
-
- partitionConsumerRecordsHandler(List<ConsumerRecord<byte[], byte[]>>, KafkaTopicPartitionState<T, TopicPartition>) - 类 中的方法org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher
-
- partitionConsumerRecordsHandler(List<ConsumerRecord<byte[], byte[]>>, KafkaTopicPartitionState<T, TopicPartition>) - 类 中的方法org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher
-
- partitioner - 类 中的变量org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Partitioner to select Kafka partition for each item.
- PartitionOffsetsRetrieverImpl(AdminClient, String) - 类 的构造器org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator.PartitionOffsetsRetrieverImpl
-
- partitionsFor(String) - 类 中的方法org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
-
- pauseOrResumeSplits(Collection<KafkaPartitionSplit>, Collection<KafkaPartitionSplit>) - 类 中的方法org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
-
- pauseOrResumeSplits(Collection<String>, Collection<String>) - 类 中的方法org.apache.flink.connector.kafka.source.reader.KafkaSourceReader
-
- peek() - 类 中的方法org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
-
Returns the queue's next element without removing it, if the queue is non-empty.
- pendingRecords - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
Number of unacknowledged records.
- pendingRecords - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
Number of unacknowledged records.
- pendingRecordsLock - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
Lock for accessing the pending records.
- persistentKeyBy(DataStream<T>, String, int, int, Properties, KeySelector<T, K>) - Static method in class org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffle
-
Uses Kafka as a message bus to persist the keyBy shuffle.
- persistentKeyBy(DataStream<T>, String, int, int, Properties, int...) - Static method in class org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffle
-
Uses Kafka as a message bus to persist the keyBy shuffle.
- physicalDataType - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Data type to configure the formats.
- physicalDataType - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Data type to configure the formats.
- poll() - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
-
Returns the queue's next element and removes it, if the queue is non-empty.
- pollBatch() - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
-
Returns all of the queue's current elements in a list, if the queue is non-empty.
- pollNext() - Method in class org.apache.flink.streaming.connectors.kafka.internals.Handover
-
Polls the next element from the Handover, possibly blocking until the next element is
available.
- pollTimeout - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
-
Deprecated.
From Kafka's Javadoc: the time, in milliseconds, spent waiting in poll if data is not
available.
- preCommit(FlinkKafkaProducer.KafkaTransactionState) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
- produce(ConsumerRecords<byte[], byte[]>) - Method in class org.apache.flink.streaming.connectors.kafka.internals.Handover
-
Hands over an element from the producer.
- producedDataType - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Data type that describes the final output of the source.
- producer - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
KafkaProducer instance.
- producerConfig - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
User-defined properties for the producer.
- producerConfig - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
User-defined properties for the producer.
- properties - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
-
Deprecated.
User-supplied properties for Kafka.
- properties - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Properties for the Kafka producer.
- properties - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Properties for the Kafka consumer.
- props - Variable in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
- PROPS_BOOTSTRAP_SERVERS - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
-
- PROPS_GROUP_ID - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
-
- SAFE_SCALE_DOWN_FACTOR - Static variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
This coefficient determines the safe scale-down factor.
- SCAN_BOUNDED_MODE - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
-
- SCAN_BOUNDED_SPECIFIC_OFFSETS - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
-
- SCAN_BOUNDED_TIMESTAMP_MILLIS - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
-
- SCAN_STARTUP_MODE - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
-
- SCAN_STARTUP_SPECIFIC_OFFSETS - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
-
- SCAN_STARTUP_TIMESTAMP_MILLIS - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
-
- SCAN_TOPIC_PARTITION_DISCOVERY - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
-
- schema - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
(Serializable) SerializationSchema for turning the objects used with Flink into byte[] for
Kafka.
- semantic - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
Semantic chosen for this instance.
- send(ProducerRecord<K, V>) - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
-
- send(ProducerRecord<K, V>, Callback) - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
-
- sendOffsetsToTransaction(Map<TopicPartition, OffsetAndMetadata>, String) - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
-
- sendOffsetsToTransaction(Map<TopicPartition, OffsetAndMetadata>, ConsumerGroupMetadata) - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
-
- serialize(T, KafkaRecordSerializationSchema.KafkaSinkContext, Long) - Method in interface org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema
-
Serializes the given element and returns it as a ProducerRecord.
- serialize(KafkaSourceEnumState) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumStateSerializer
-
- serialize(KafkaPartitionSplit) - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplitSerializer
-
- serialize(FlinkKafkaProducer.KafkaTransactionContext, DataOutputView) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer
-
Deprecated.
- serialize(FlinkKafkaProducer.NextTransactionalIdHint, DataOutputView) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHintSerializer
-
Deprecated.
- serialize(FlinkKafkaProducer.KafkaTransactionState, DataOutputView) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.TransactionStateSerializer
-
Deprecated.
- serialize(T, Long) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaSerializationSchemaWrapper
-
- serialize(T, Long) - Method in interface org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema
-
Serializes the given element and returns it as a ProducerRecord.
- serializeKey(T) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KeyedSerializationSchemaWrapper
-
- serializeKey(T) - Method in interface org.apache.flink.streaming.util.serialization.KeyedSerializationSchema
-
Deprecated.
Serializes the key of the incoming element to a byte array. This method might return null if
no key is available.
- serializeKey(Tuple2<K, V>) - Method in class org.apache.flink.streaming.util.serialization.TypeInformationKeyValueSerializationSchema
-
- serializeValue(T) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KeyedSerializationSchemaWrapper
-
- serializeValue(T) - Method in interface org.apache.flink.streaming.util.serialization.KeyedSerializationSchema
-
Deprecated.
Serializes the value of the incoming element to a byte array.
- serializeValue(Tuple2<K, V>) - Method in class org.apache.flink.streaming.util.serialization.TypeInformationKeyValueSerializationSchema
-
- setAndCheckDiscoveredPartition(KafkaTopicPartition) - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer
-
Sets a partition as discovered.
- setBootstrapServers(String) - Method in class org.apache.flink.connector.kafka.sink.KafkaSinkBuilder
-
Sets the Kafka bootstrap servers.
- setBootstrapServers(String) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Sets the bootstrap servers for the KafkaConsumer of the KafkaSource.
- setBounded(OffsetsInitializer) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
By default, the KafkaSource is set to run as Boundedness.CONTINUOUS_UNBOUNDED and thus
never stops until the Flink job fails or is canceled.
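
The KafkaSourceBuilder setters indexed here (setBootstrapServers, setBounded, and the related entries below) compose as one fluent builder call chain. A minimal sketch, assuming the flink-connector-kafka dependency is on the classpath; the broker address, topic, and group id are placeholder values:

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;

// Broker, topic, and group id are placeholders for illustration only.
KafkaSource<String> source =
        KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")
                .setTopics("input-topic")
                .setGroupId("example-group")
                .setStartingOffsets(OffsetsInitializer.earliest())
                // Without setBounded, the source runs as CONTINUOUS_UNBOUNDED;
                // this stops it at the latest offsets known at startup.
                .setBounded(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();
```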
- setClientIdPrefix(String) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Sets the client id prefix of this KafkaSource.
- setCommitOffsetsOnCheckpoints(boolean) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
Specifies whether or not the consumer should commit offsets back to Kafka on checkpoints.
- setCommittedOffset(long) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
-
- setCurrentOffset(long) - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplitState
-
- setDeliverGuarantee(DeliveryGuarantee) - Method in class org.apache.flink.connector.kafka.sink.KafkaSinkBuilder
-
- setDeliveryGuarantee(DeliveryGuarantee) - Method in class org.apache.flink.connector.kafka.sink.KafkaSinkBuilder
-
Sets the desired DeliveryGuarantee.
- setDeserializer(KafkaRecordDeserializationSchema<OUT>) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Sets the deserializer of the ConsumerRecord for the KafkaSource.
- setField(Object, String, Object) - Static method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
-
Sets the field fieldName on the given object to value using
reflection.
- setFlushOnCheckpoint(boolean) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
If set to true, the Flink producer will wait for all outstanding messages in the Kafka
buffers to be acknowledged by the Kafka producer on a checkpoint.
- setGroupId(String) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Sets the consumer group id of the KafkaSource.
- setKafkaKeySerializer(Class<? extends Serializer<? super T>>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
-
Sets Kafka's Serializer to serialize incoming elements to the key of the ProducerRecord.
- setKafkaKeySerializer(Class<S>, Map<String, String>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
-
Sets a configurable Kafka Serializer and passes a configuration to serialize incoming
elements to the key of the ProducerRecord.
- setKafkaMetric(Metric) - Method in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaMetricMutableWrapper
-
- setKafkaProducerConfig(Properties) - Method in class org.apache.flink.connector.kafka.sink.KafkaSinkBuilder
-
Sets the configuration used to instantiate all used KafkaProducer instances.
- setKafkaSubscriber(KafkaSubscriber) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Sets a custom Kafka subscriber to use to discover new splits.
- setKafkaValueSerializer(Class<? extends Serializer<? super T>>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
-
Sets Kafka's Serializer to serialize incoming elements to the value of the ProducerRecord.
- setKafkaValueSerializer(Class<S>, Map<String, String>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
-
Sets a configurable Kafka Serializer and passes a configuration to serialize incoming
elements to the value of the ProducerRecord.
- setKeySerializationSchema(SerializationSchema<? super T>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
-
Sets a SerializationSchema which is used to serialize the incoming element to the key
of the ProducerRecord.
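
The setKafka*Serializer and set*SerializationSchema entries all belong to KafkaRecordSerializationSchemaBuilder and are alternatives for configuring the key and value of the produced record. A hedged sketch of how they combine, assuming the flink-connector-kafka dependency; the topic name is a placeholder:

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;

// "output-topic" is a placeholder; keys and values here are both plain strings.
KafkaRecordSerializationSchema<String> recordSchema =
        KafkaRecordSerializationSchema.<String>builder()
                .setTopic("output-topic")
                .setKeySerializationSchema(new SimpleStringSchema())
                .setValueSerializationSchema(new SimpleStringSchema())
                .build();
```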
- setLogFailuresOnly(boolean) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
Defines whether the producer should fail on errors, or only log them.
- setLogFailuresOnly(boolean) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
Defines whether the producer should fail on errors, or only log them.
- setNumParallelInstances(int) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaSerializationSchemaWrapper
-
- setNumParallelInstances(int) - Method in interface org.apache.flink.streaming.connectors.kafka.KafkaContextAware
-
Sets the parallelism with which the parallel task of the Kafka producer runs.
- setOffset(long) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
-
- setParallelInstanceId(int) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaSerializationSchemaWrapper
-
- setParallelInstanceId(int) - Method in interface org.apache.flink.streaming.connectors.kafka.KafkaContextAware
-
Sets the number of the parallel subtask that the Kafka producer is running on.
- setPartitioner(FlinkKafkaPartitioner<? super T>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
-
Sets a custom partitioner determining the target partition of the target topic.
- setPartitions(Set<TopicPartition>) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Sets a set of partitions to consume from.
- setPartitions(int[]) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaSerializationSchemaWrapper
-
- setPartitions(int[]) - Method in interface org.apache.flink.streaming.connectors.kafka.KafkaContextAware
-
- setProperties(Properties) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Sets arbitrary properties for the KafkaSource and KafkaConsumer.
- setProperty(String, String) - Method in class org.apache.flink.connector.kafka.sink.KafkaSinkBuilder
-
- setProperty(String, String) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Sets an arbitrary property for the KafkaSource and KafkaConsumer.
- setRecordSerializer(KafkaRecordSerializationSchema<IN>) - Method in class org.apache.flink.connector.kafka.sink.KafkaSinkBuilder
-
- setStartFromEarliest() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
Specifies the consumer to start reading from the earliest offset for all partitions.
- setStartFromGroupOffsets() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
Specifies the consumer to start reading from any committed group offsets found in Zookeeper /
Kafka brokers.
- setStartFromLatest() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
Specifies the consumer to start reading from the latest offset for all partitions.
- setStartFromSpecificOffsets(Map<KafkaTopicPartition, Long>) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
Specifies the consumer to start reading partitions from specific offsets, set independently
for each partition.
- setStartFromTimestamp(long) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
Specifies the consumer to start reading partitions from a specified timestamp.
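
The setStartFrom* methods of the legacy FlinkKafkaConsumerBase are mutually exclusive startup choices. A sketch of typical usage, assuming the (deprecated) FlinkKafkaConsumer is in use; broker, topic, group id, and the timestamp are placeholder values:

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

Properties props = new Properties();
props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder broker
props.setProperty("group.id", "example-group");           // placeholder group

FlinkKafkaConsumer<String> consumer =
        new FlinkKafkaConsumer<>("input-topic", new SimpleStringSchema(), props);
// Pick exactly one startup mode; a later call overrides an earlier one.
consumer.setStartFromEarliest();
// consumer.setStartFromTimestamp(1609459200000L); // alternative: epoch millis
consumer.setCommitOffsetsOnCheckpoints(true);
```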
- setStartingOffsets(OffsetsInitializer) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Specifies from which offsets the KafkaSource should start consuming by providing an
OffsetsInitializer.
- setTopic(String) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
-
Sets a fixed topic which is used as the destination for all records.
- setTopicPattern(Pattern) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Sets a topic pattern to consume from, using the Java Pattern.
- setTopics(List<String>) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Sets a list of topics the KafkaSource should consume from.
- setTopics(String...) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Sets a list of topics the KafkaSource should consume from.
- setTopicSelector(TopicSelector<? super T>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
-
Sets a topic selector which computes the target topic for every incoming record.
- setTransactionalIdPrefix(String) - Method in class org.apache.flink.connector.kafka.sink.KafkaSinkBuilder
-
Sets the prefix for all created transactionalIds if DeliveryGuarantee.EXACTLY_ONCE is
configured.
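
The KafkaSinkBuilder setters indexed here (setBootstrapServers, setRecordSerializer, setDeliveryGuarantee, setTransactionalIdPrefix) likewise form one builder chain. A sketch under the same assumptions as above (flink-connector-kafka on the classpath; broker, topic, and the transactional id prefix are placeholders):

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;

KafkaSink<String> sink =
        KafkaSink.<String>builder()
                .setBootstrapServers("localhost:9092") // placeholder broker
                .setRecordSerializer(
                        KafkaRecordSerializationSchema.<String>builder()
                                .setTopic("output-topic") // placeholder topic
                                .setValueSerializationSchema(new SimpleStringSchema())
                                .build())
                .setDeliveryGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
                // Required for EXACTLY_ONCE so the transactional ids of this job
                // do not clash with those of other applications.
                .setTransactionalIdPrefix("example-app")
                .build();
```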
- setTransactionalIdPrefix(String) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
Specifies the prefix of the transactional.id property to be used by the producers when
communicating with Kafka.
- setUnbounded(OffsetsInitializer) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
By default, the KafkaSource is set to run as Boundedness.CONTINUOUS_UNBOUNDED and thus
never stops until the Flink job fails or is canceled.
- setValueOnlyDeserializer(DeserializationSchema<OUT>) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Sets the deserializer of the ConsumerRecord for the KafkaSource.
- setValueSerializationSchema(SerializationSchema<T>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
-
Sets a SerializationSchema which is used to serialize the incoming element to the
value of the ProducerRecord.
- setWriteTimestamp(boolean) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaSerializationSchemaWrapper
-
- setWriteTimestampToKafka(boolean) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
If set to true, Flink will write the (event time) timestamp attached to each record into
Kafka.
- shutdown() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaConsumerThread
-
Shuts this thread down, waking up the thread gracefully if blocked (without
Thread.interrupt() calls).
- SINK_BUFFER_FLUSH_INTERVAL - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
-
- SINK_BUFFER_FLUSH_MAX_ROWS - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
-
- SINK_CHANGELOG_MODE - Static variable in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.EncodingFormatWrapper
-
- SINK_PARALLELISM - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
-
- SINK_PARTITIONER - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
-
- SinkBufferFlushMode - Class in org.apache.flink.streaming.connectors.kafka.table
-
Sink buffer flush configuration.
- SinkBufferFlushMode(int, long) - Constructor for class org.apache.flink.streaming.connectors.kafka.table.SinkBufferFlushMode
-
- size() - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
-
Gets the number of elements currently in the queue.
- snapshotConfiguration() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer
-
Deprecated.
- snapshotConfiguration() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHintSerializer
-
Deprecated.
- snapshotConfiguration() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.TransactionStateSerializer
-
Deprecated.
- snapshotCurrentState() - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
-
Takes a snapshot of the partition offsets.
- snapshotState(long) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
-
- snapshotState(long) - Method in class org.apache.flink.connector.kafka.source.reader.KafkaSourceReader
-
- snapshotState(FunctionSnapshotContext) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
- snapshotState(FunctionSnapshotContext) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
- snapshotState(FunctionSnapshotContext) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
- sourceContext - Variable in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
-
The source context to emit records and watermarks to.
- SourceContextWatermarkOutputAdapter<T> - Class in org.apache.flink.streaming.connectors.kafka.internals
-
A WatermarkOutput that forwards calls to a SourceFunction.SourceContext.
- SourceContextWatermarkOutputAdapter(SourceFunction.SourceContext<T>) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.SourceContextWatermarkOutputAdapter
-
- specificBoundedOffsets - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
- specificStartupOffsets - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
- splitId() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
-
- start() - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
-
Starts the enumerator.
- StartupMode - Enum in org.apache.flink.streaming.connectors.kafka.config
-
Startup modes for the Kafka Consumer.
- startupMode - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
- startupTimestampMillis - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
The start timestamp to locate partition offsets; only relevant when the startup mode is
StartupMode.TIMESTAMP.
- subscribedPartitionStates() - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
-
Gets all partitions (with partition state) that this fetcher is subscribed to.
- supportsMetadataProjection() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
- sync(Metric, Counter) - Static method in class org.apache.flink.connector.kafka.MetricUtil
-
Ensures that the counter has the same value as the given Kafka metric.
- VALID_STARTING_OFFSET_MARKERS - Static variable in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
-
- VALID_STOPPING_OFFSET_MARKERS - Static variable in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
-
- validate(Properties) - Method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializerValidator
-
Validates the offsets initializer with the properties of the Kafka source.
- VALUE_FIELDS_INCLUDE - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
-
- VALUE_FORMAT - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
-
- valueDecodingFormat - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Format for decoding values from Kafka.
- valueEncodingFormat - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Format for encoding values to Kafka.
- valueOf(String) - Static method in enum org.apache.flink.streaming.connectors.kafka.config.BoundedMode
-
Returns the enum constant of this type with the specified name.
- valueOf(String) - Static method in enum org.apache.flink.streaming.connectors.kafka.config.OffsetCommitMode
-
Returns the enum constant of this type with the specified name.
- valueOf(String) - Static method in enum org.apache.flink.streaming.connectors.kafka.config.StartupMode
-
Returns the enum constant of this type with the specified name.
- valueOf(String) - Static method in enum org.apache.flink.streaming.connectors.kafka.FlinkKafkaErrorCode
-
Returns the enum constant of this type with the specified name.
- valueOf(String) - Static method in enum org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.Semantic
-
Deprecated.
Returns the enum constant of this type with the specified name.
- valueOf(String) - Static method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanBoundedMode
-
Returns the enum constant of this type with the specified name.
- valueOf(String) - Static method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanStartupMode
-
Returns the enum constant of this type with the specified name.
- valueOf(String) - Static method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ValueFieldsStrategy
-
Returns the enum constant of this type with the specified name.
- valueOnly(DeserializationSchema<V>) - Static method in interface org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema
-
Wraps a DeserializationSchema as the value deserialization schema of the ConsumerRecords.
- valueOnly(Class<? extends Deserializer<V>>) - Static method in interface org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema
-
- valueOnly(Class<D>, Map<String, String>) - Static method in interface org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema
-
- valueProjection - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Indices that determine the value fields and the source position in the consumed row.
- valueProjection - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Indices that determine the value fields and the target position in the produced row.
- values() - Static method in enum org.apache.flink.streaming.connectors.kafka.config.BoundedMode
-
Returns an array containing the constants of this enum type, in the order they are declared.
- values() - Static method in enum org.apache.flink.streaming.connectors.kafka.config.OffsetCommitMode
-
Returns an array containing the constants of this enum type, in the order they are declared.
- values() - Static method in enum org.apache.flink.streaming.connectors.kafka.config.StartupMode
-
Returns an array containing the constants of this enum type, in the order they are declared.
- values() - Static method in enum org.apache.flink.streaming.connectors.kafka.FlinkKafkaErrorCode
-
Returns an array containing the constants of this enum type, in the order they are declared.
- values() - Static method in enum org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.Semantic
-
Deprecated.
Returns an array containing the constants of this enum type, in the order they are declared.
- values() - Static method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanBoundedMode
-
Returns an array containing the constants of this enum type, in the order they are declared.
- values() - Static method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanStartupMode
-
Returns an array containing the constants of this enum type, in the order they are declared.
- values() - Static method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ValueFieldsStrategy
-
Returns an array containing the constants of this enum type, in the order they are declared.