| Modifier and Type | Method and Description |
|---|---|
| protected abstract Map<KafkaTopicPartition,Long> | FlinkKafkaConsumerBase.fetchOffsetsWithTimestamp(Collection<KafkaTopicPartition> partitions, long timestamp) (see the sketch below) |
| protected Map<KafkaTopicPartition,Long> | FlinkKafkaConsumer.fetchOffsetsWithTimestamp(Collection<KafkaTopicPartition> partitions, long timestamp) Deprecated. |

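
fetchOffsetsWithTimestamp is the internal hook the consumer invokes when a timestamp-based start position is configured. A minimal sketch of triggering that path through the public API; the broker address, group id, topic name, and timestamp are illustrative placeholders:

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class TimestampStartupExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.setProperty("group.id", "example-group");           // placeholder group id

        FlinkKafkaConsumer<String> consumer =
                new FlinkKafkaConsumer<>("example-topic", new SimpleStringSchema(), props);

        // Start every partition from the first record whose timestamp is >= the given
        // epoch milliseconds; the consumer resolves the actual offsets internally via
        // fetchOffsetsWithTimestamp(...).
        consumer.setStartFromTimestamp(1_600_000_000_000L);
    }
}
```
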
| Modifier and Type | Method and Description |
|---|---|
| protected abstract AbstractFetcher<T,?> | FlinkKafkaConsumerBase.createFetcher(org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext<T> sourceContext, Map<KafkaTopicPartition,Long> subscribedPartitionsToStartOffsets, org.apache.flink.util.SerializedValue<org.apache.flink.api.common.eventtime.WatermarkStrategy<T>> watermarkStrategy, org.apache.flink.streaming.api.operators.StreamingRuntimeContext runtimeContext, OffsetCommitMode offsetCommitMode, org.apache.flink.metrics.MetricGroup kafkaMetricGroup, boolean useMetrics) Creates the fetcher that connects to the Kafka brokers, pulls data, deserializes the data, and emits it into the data streams. |
| protected AbstractFetcher<T,?> | FlinkKafkaConsumer.createFetcher(org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext<T> sourceContext, Map<KafkaTopicPartition,Long> assignedPartitionsWithInitialOffsets, org.apache.flink.util.SerializedValue<org.apache.flink.api.common.eventtime.WatermarkStrategy<T>> watermarkStrategy, org.apache.flink.streaming.api.operators.StreamingRuntimeContext runtimeContext, OffsetCommitMode offsetCommitMode, org.apache.flink.metrics.MetricGroup consumerMetricGroup, boolean useMetrics) Deprecated. |
| protected abstract Map<KafkaTopicPartition,Long> | FlinkKafkaConsumerBase.fetchOffsetsWithTimestamp(Collection<KafkaTopicPartition> partitions, long timestamp) |
| protected Map<KafkaTopicPartition,Long> | FlinkKafkaConsumer.fetchOffsetsWithTimestamp(Collection<KafkaTopicPartition> partitions, long timestamp) Deprecated. |
| FlinkKafkaConsumerBase<T> | FlinkKafkaConsumerBase.setStartFromSpecificOffsets(Map<KafkaTopicPartition,Long> specificStartupOffsets) Configures the consumer to start reading partitions from specific offsets, set independently for each partition (see the sketch below). |

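
setStartFromSpecificOffsets takes a map keyed by KafkaTopicPartition, one entry per partition. A minimal usage sketch; the broker address, group id, topic, partitions, and offsets are illustrative placeholders:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition;

public class SpecificOffsetsExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.setProperty("group.id", "example-group");           // placeholder group id

        // One entry per partition: the consumer starts each partition at its own offset.
        Map<KafkaTopicPartition, Long> specificStartupOffsets = new HashMap<>();
        specificStartupOffsets.put(new KafkaTopicPartition("example-topic", 0), 23L);
        specificStartupOffsets.put(new KafkaTopicPartition("example-topic", 1), 31L);
        specificStartupOffsets.put(new KafkaTopicPartition("example-topic", 2), 43L);

        FlinkKafkaConsumer<String> consumer =
                new FlinkKafkaConsumer<>("example-topic", new SimpleStringSchema(), props);
        consumer.setStartFromSpecificOffsets(specificStartupOffsets);
    }
}
```
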
| Modifier and Type | Method and Description |
|---|---|
| KafkaTopicPartition | KafkaTopicPartitionState.getKafkaTopicPartition() Gets Flink's descriptor for the Kafka partition. |
| KafkaTopicPartition | KafkaTopicPartitionLeader.getTopicPartition() |

| Modifier and Type | Method and Description |
|---|---|
| List<KafkaTopicPartition> | AbstractPartitionDiscoverer.discoverPartitions() Executes a partition discovery attempt for this subtask. |
| static List<KafkaTopicPartition> | KafkaTopicPartition.dropLeaderData(List<KafkaTopicPartitionLeader> partitionInfos) (see the sketch below) |
| protected List<KafkaTopicPartition> | KafkaPartitionDiscoverer.getAllPartitionsForTopics(List<String> topics) |
| protected abstract List<KafkaTopicPartition> | AbstractPartitionDiscoverer.getAllPartitionsForTopics(List<String> topics) Fetches the list of all partitions for a specific list of topics from Kafka. |
| HashMap<KafkaTopicPartition,Long> | AbstractFetcher.snapshotCurrentState() Takes a snapshot of the partition offsets. |

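
KafkaTopicPartition.dropLeaderData strips the leader (broker) information from a list of KafkaTopicPartitionLeader entries and keeps only the topic-partition descriptors. A minimal sketch; the broker id, host, port, topic, and partitions are placeholders:

```java
import java.util.Arrays;
import java.util.List;

import org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition;
import org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionLeader;
import org.apache.kafka.common.Node;

public class DropLeaderDataExample {
    public static void main(String[] args) {
        Node leader = new Node(0, "localhost", 9092); // placeholder broker node

        List<KafkaTopicPartitionLeader> withLeaders = Arrays.asList(
                new KafkaTopicPartitionLeader(new KafkaTopicPartition("example-topic", 0), leader),
                new KafkaTopicPartitionLeader(new KafkaTopicPartition("example-topic", 1), leader));

        // Keep only the KafkaTopicPartition part of each entry.
        List<KafkaTopicPartition> partitions = KafkaTopicPartition.dropLeaderData(withLeaders);
        System.out.println(KafkaTopicPartition.toString(partitions));
    }
}
```
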
| Modifier and Type | Method and Description |
|---|---|
| static int | KafkaTopicPartitionAssigner.assign(KafkaTopicPartition partition, int numParallelSubtasks) Returns the index of the target subtask that a specific Kafka partition should be assigned to (see the sketch below). |
| int | KafkaTopicPartition.Comparator.compare(KafkaTopicPartition p1, KafkaTopicPartition p2) |
| org.apache.kafka.common.TopicPartition | KafkaFetcher.createKafkaPartitionHandle(KafkaTopicPartition partition) |
| protected abstract KPH | AbstractFetcher.createKafkaPartitionHandle(KafkaTopicPartition partition) Creates the Kafka-version-specific representation of the given topic partition. |
| boolean | AbstractPartitionDiscoverer.setAndCheckDiscoveredPartition(KafkaTopicPartition partition) Sets a partition as discovered. |

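
KafkaTopicPartitionAssigner.assign returns, for a given partition, the index of the consumer subtask that should read it. A minimal sketch that prints the assignment for a hypothetical 8-partition topic at parallelism 4 (topic name, partition count, and parallelism are illustrative):

```java
import org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition;
import org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionAssigner;

public class PartitionAssignmentExample {
    public static void main(String[] args) {
        int numParallelSubtasks = 4; // illustrative consumer parallelism

        // Print which subtask each partition of a hypothetical 8-partition topic lands on.
        for (int p = 0; p < 8; p++) {
            KafkaTopicPartition partition = new KafkaTopicPartition("example-topic", p);
            int subtask = KafkaTopicPartitionAssigner.assign(partition, numParallelSubtasks);
            System.out.println(partition + " -> subtask " + subtask);
        }
    }
}
```
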
| Modifier and Type | Method and Description |
|---|---|
| void | AbstractFetcher.addDiscoveredPartitions(List<KafkaTopicPartition> newPartitions) Adds a list of newly discovered partitions to the fetcher for consuming. |
| void | AbstractFetcher.commitInternalOffsetsToKafka(Map<KafkaTopicPartition,Long> offsets, KafkaCommitCallback commitCallback) Commits the given partition offsets to the Kafka brokers (or to ZooKeeper for older Kafka versions). |
| protected void | KafkaFetcher.doCommitInternalOffsetsToKafka(Map<KafkaTopicPartition,Long> offsets, KafkaCommitCallback commitCallback) |
| protected abstract void | AbstractFetcher.doCommitInternalOffsetsToKafka(Map<KafkaTopicPartition,Long> offsets, KafkaCommitCallback commitCallback) |
| static String | KafkaTopicPartition.toString(List<KafkaTopicPartition> partitions) |
| static String | KafkaTopicPartition.toString(Map<KafkaTopicPartition,Long> map) (see the sketch below) |

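
The static KafkaTopicPartition.toString helpers produce a log-friendly rendering of a partition list or of a partition-to-offset map. A minimal sketch; topic, partitions, and offsets are placeholders:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition;

public class ToStringExample {
    public static void main(String[] args) {
        List<KafkaTopicPartition> partitions = Arrays.asList(
                new KafkaTopicPartition("example-topic", 0),
                new KafkaTopicPartition("example-topic", 1));

        Map<KafkaTopicPartition, Long> offsets = new HashMap<>();
        offsets.put(new KafkaTopicPartition("example-topic", 0), 100L);
        offsets.put(new KafkaTopicPartition("example-topic", 1), 200L);

        // Log-friendly rendering of the partition list and the offset map.
        System.out.println(KafkaTopicPartition.toString(partitions));
        System.out.println(KafkaTopicPartition.toString(offsets));
    }
}
```
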
| Constructor and Description |
|---|
| KafkaTopicPartitionLeader(KafkaTopicPartition topicPartition, org.apache.kafka.common.Node leader) |
| KafkaTopicPartitionState(KafkaTopicPartition partition, KPH kafkaPartitionHandle) |
| KafkaTopicPartitionStateWithWatermarkGenerator(KafkaTopicPartition partition, KPH kafkaPartitionHandle, org.apache.flink.api.common.eventtime.TimestampAssigner<T> timestampAssigner, org.apache.flink.api.common.eventtime.WatermarkGenerator<T> watermarkGenerator, org.apache.flink.api.common.eventtime.WatermarkOutput immediateOutput, org.apache.flink.api.common.eventtime.WatermarkOutput deferredOutput) |

| Constructor and Description |
|---|
| AbstractFetcher(org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext<T> sourceContext, Map<KafkaTopicPartition,Long> seedPartitionsWithInitialOffsets, org.apache.flink.util.SerializedValue<org.apache.flink.api.common.eventtime.WatermarkStrategy<T>> watermarkStrategy, org.apache.flink.streaming.runtime.tasks.ProcessingTimeService processingTimeProvider, long autoWatermarkInterval, ClassLoader userCodeClassLoader, org.apache.flink.metrics.MetricGroup consumerMetricGroup, boolean useMetrics) |
| KafkaFetcher(org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext<T> sourceContext, Map<KafkaTopicPartition,Long> assignedPartitionsWithInitialOffsets, org.apache.flink.util.SerializedValue<org.apache.flink.api.common.eventtime.WatermarkStrategy<T>> watermarkStrategy, org.apache.flink.streaming.runtime.tasks.ProcessingTimeService processingTimeProvider, long autoWatermarkInterval, ClassLoader userCodeClassLoader, String taskNameWithSubtasks, KafkaDeserializationSchema<T> deserializer, Properties kafkaProperties, long pollTimeout, org.apache.flink.metrics.MetricGroup subtaskMetricGroup, org.apache.flink.metrics.MetricGroup consumerMetricGroup, boolean useMetrics) |
| KafkaShuffleFetcher(org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext<T> sourceContext, Map<KafkaTopicPartition,Long> assignedPartitionsWithInitialOffsets, org.apache.flink.util.SerializedValue<org.apache.flink.api.common.eventtime.WatermarkStrategy<T>> watermarkStrategy, org.apache.flink.streaming.runtime.tasks.ProcessingTimeService processingTimeProvider, long autoWatermarkInterval, ClassLoader userCodeClassLoader, String taskNameWithSubtasks, KafkaDeserializationSchema<T> deserializer, Properties kafkaProperties, long pollTimeout, org.apache.flink.metrics.MetricGroup subtaskMetricGroup, org.apache.flink.metrics.MetricGroup consumerMetricGroup, boolean useMetrics, org.apache.flink.api.common.typeutils.TypeSerializer<T> typeSerializer, int producerParallelism) |

| Modifier and Type | Method and Description |
|---|---|
| protected AbstractFetcher<T,?> | FlinkKafkaShuffleConsumer.createFetcher(org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext<T> sourceContext, Map<KafkaTopicPartition,Long> assignedPartitionsWithInitialOffsets, org.apache.flink.util.SerializedValue<org.apache.flink.api.common.eventtime.WatermarkStrategy<T>> watermarkStrategy, org.apache.flink.streaming.api.operators.StreamingRuntimeContext runtimeContext, OffsetCommitMode offsetCommitMode, org.apache.flink.metrics.MetricGroup consumerMetricGroup, boolean useMetrics) |

| Modifier and Type | Field and Description |
|---|---|
| protected Map<KafkaTopicPartition,Long> | KafkaDynamicSource.specificBoundedOffsets Specific end offsets; only relevant when bounded mode is BoundedMode.SPECIFIC_OFFSETS. |
| protected Map<KafkaTopicPartition,Long> | KafkaDynamicSource.specificStartupOffsets Specific startup offsets; only relevant when startup mode is StartupMode.SPECIFIC_OFFSETS (see the sketch below). |

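
KafkaDynamicSource.specificStartupOffsets is populated when the Kafka SQL connector is configured with 'scan.startup.mode' = 'specific-offsets'. A minimal sketch of such a table definition issued from Java; the table name, schema, topic, broker address, and offsets are illustrative placeholders:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class SpecificOffsetsTableExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // The 'scan.startup.specific-offsets' option is what ends up in
        // KafkaDynamicSource.specificStartupOffsets as a Map<KafkaTopicPartition, Long>.
        tEnv.executeSql(
                "CREATE TABLE example_source (\n"
                        + "  user_id STRING,\n"
                        + "  event_time TIMESTAMP(3)\n"
                        + ") WITH (\n"
                        + "  'connector' = 'kafka',\n"
                        + "  'topic' = 'example-topic',\n"
                        + "  'properties.bootstrap.servers' = 'localhost:9092',\n"
                        + "  'scan.startup.mode' = 'specific-offsets',\n"
                        + "  'scan.startup.specific-offsets' = 'partition:0,offset:42;partition:1,offset:300',\n"
                        + "  'format' = 'json'\n"
                        + ")");
    }
}
```
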
| Modifier and Type | Method and Description |
|---|---|
| protected KafkaDynamicSource | KafkaDynamicTableFactory.createKafkaTableSource(org.apache.flink.table.types.DataType physicalDataType, org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> keyDecodingFormat, org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> valueDecodingFormat, int[] keyProjection, int[] valueProjection, String keyPrefix, List<String> topics, Pattern topicPattern, Properties properties, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets, long startupTimestampMillis, BoundedMode boundedMode, Map<KafkaTopicPartition,Long> specificEndOffsets, long endTimestampMillis, String tableIdentifier) |

| Constructor and Description |
|---|
| KafkaDynamicSource(org.apache.flink.table.types.DataType physicalDataType, org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> keyDecodingFormat, org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> valueDecodingFormat, int[] keyProjection, int[] valueProjection, String keyPrefix, List<String> topics, Pattern topicPattern, Properties properties, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets, long startupTimestampMillis, BoundedMode boundedMode, Map<KafkaTopicPartition,Long> specificBoundedOffsets, long boundedTimestampMillis, boolean upsertMode, String tableIdentifier) |

Copyright © 2014–2023 The Apache Software Foundation. All rights reserved.