@Internal public class KafkaSourceEnumerator extends Object implements org.apache.flink.api.connector.source.SplitEnumerator<KafkaPartitionSplit,KafkaSourceEnumState>
| Modifier and Type | Class and Description |
|---|---|
| `static class` | `KafkaSourceEnumerator.PartitionOffsetsRetrieverImpl` The implementation of the offsets retriever, backed by a consumer and an admin client. |
| Constructor and Description |
|---|
| `KafkaSourceEnumerator(KafkaSubscriber subscriber, OffsetsInitializer startingOffsetInitializer, OffsetsInitializer stoppingOffsetInitializer, Properties properties, org.apache.flink.api.connector.source.SplitEnumeratorContext<KafkaPartitionSplit> context, org.apache.flink.api.connector.source.Boundedness boundedness)` |
| `KafkaSourceEnumerator(KafkaSubscriber subscriber, OffsetsInitializer startingOffsetInitializer, OffsetsInitializer stoppingOffsetInitializer, Properties properties, org.apache.flink.api.connector.source.SplitEnumeratorContext<KafkaPartitionSplit> context, org.apache.flink.api.connector.source.Boundedness boundedness, Set<org.apache.kafka.common.TopicPartition> assignedPartitions)` |
| Modifier and Type | Method and Description |
|---|---|
| `void` | `addReader(int subtaskId)` |
| `void` | `addSplitsBack(List<KafkaPartitionSplit> splits, int subtaskId)` |
| `void` | `close()` |
| `void` | `handleSplitRequest(int subtaskId, String requesterHostname)` |
| `KafkaSourceEnumState` | `snapshotState(long checkpointId)` |
| `void` | `start()` Start the enumerator. |
Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

public KafkaSourceEnumerator(KafkaSubscriber subscriber, OffsetsInitializer startingOffsetInitializer, OffsetsInitializer stoppingOffsetInitializer, Properties properties, org.apache.flink.api.connector.source.SplitEnumeratorContext<KafkaPartitionSplit> context, org.apache.flink.api.connector.source.Boundedness boundedness)
public KafkaSourceEnumerator(KafkaSubscriber subscriber, OffsetsInitializer startingOffsetInitializer, OffsetsInitializer stoppingOffsetInitializer, Properties properties, org.apache.flink.api.connector.source.SplitEnumeratorContext<KafkaPartitionSplit> context, org.apache.flink.api.connector.source.Boundedness boundedness, Set<org.apache.kafka.common.TopicPartition> assignedPartitions)
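As a minimal sketch of the `properties` argument, the enumerator is typically handed standard Kafka client settings plus Flink's partition discovery interval option. The broker address and group id below are placeholders, and the exact set of required keys depends on your deployment:

```java
import java.util.Properties;

public class EnumeratorProps {
    // Builds a Properties object of the shape the enumerator's
    // `properties` constructor argument expects.
    public static Properties build() {
        Properties props = new Properties();
        // Standard Kafka client settings (placeholder values).
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "demo-group");
        // A positive interval makes the enumerator re-discover partitions
        // periodically; a non-positive value means one-time discovery.
        props.setProperty("partition.discovery.interval.ms", "30000");
        return props;
    }

    public static void main(String[] args) {
        Properties props = build();
        System.out.println(props.getProperty("partition.discovery.interval.ms"));
    }
}
```

In practice these properties usually come from `KafkaSourceBuilder` rather than being assembled by hand; the enumerator constructor is invoked by the Kafka source itself.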
public void start()

Depending on partitionDiscoveryIntervalMs, the enumerator either triggers a one-time partition discovery or schedules a callable to discover partitions periodically.

The invocation chain of partition discovery is:

1. getSubscribedTopicPartitions() in the worker thread
2. checkPartitionChanges(java.util.Set<org.apache.kafka.common.TopicPartition>, java.lang.Throwable) in the coordinator thread
3. initializePartitionSplits(org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator.PartitionChange) in the worker thread
4. handlePartitionSplitChanges(org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator.PartitionSplitChange, java.lang.Throwable) in the coordinator thread
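The worker-thread/coordinator-thread handoff in the chain above can be mirrored with plain JDK executors. This is an illustrative sketch of the pattern only, not Flink's implementation (Flink schedules the callable through its SplitEnumeratorContext; the class and method names below are invented for the example):

```java
import java.util.Set;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DiscoveryLoop {
    // Worker thread: runs the (potentially blocking) discovery call.
    private final ScheduledExecutorService worker =
            Executors.newSingleThreadScheduledExecutor();
    // Coordinator thread: single-threaded, so enumerator state needs no locks.
    private final ExecutorService coordinator =
            Executors.newSingleThreadExecutor();
    private final Set<String> known = ConcurrentHashMap.newKeySet();

    // Periodically runs `discover` on the worker thread and hands the
    // result to the coordinator thread for bookkeeping.
    public void start(Callable<Set<String>> discover, long periodMs) {
        worker.scheduleAtFixedRate(() -> {
            try {
                Set<String> fetched = discover.call();           // worker thread
                coordinator.submit(() -> known.addAll(fetched)); // coordinator thread
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, 0, periodMs, TimeUnit.MILLISECONDS);
    }

    public Set<String> knownPartitions() {
        return known;
    }

    public void close() {
        worker.shutdownNow();
        coordinator.shutdownNow();
    }
}
```

The key property this models is that discovery results are never applied directly from the worker thread: all mutation of enumerator state happens on the single coordinator thread.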
Specified by: start in interface org.apache.flink.api.connector.source.SplitEnumerator<KafkaPartitionSplit,KafkaSourceEnumState>

public void handleSplitRequest(int subtaskId, @Nullable String requesterHostname)

Specified by: handleSplitRequest in interface org.apache.flink.api.connector.source.SplitEnumerator<KafkaPartitionSplit,KafkaSourceEnumState>

public void addSplitsBack(List<KafkaPartitionSplit> splits, int subtaskId)

Specified by: addSplitsBack in interface org.apache.flink.api.connector.source.SplitEnumerator<KafkaPartitionSplit,KafkaSourceEnumState>

public void addReader(int subtaskId)

Specified by: addReader in interface org.apache.flink.api.connector.source.SplitEnumerator<KafkaPartitionSplit,KafkaSourceEnumState>

public KafkaSourceEnumState snapshotState(long checkpointId) throws Exception

Specified by: snapshotState in interface org.apache.flink.api.connector.source.SplitEnumerator<KafkaPartitionSplit,KafkaSourceEnumState>
Throws: Exception

public void close()

Specified by: close in interface AutoCloseable
Specified by: close in interface org.apache.flink.api.connector.source.SplitEnumerator<KafkaPartitionSplit,KafkaSourceEnumState>

Copyright © 2014–2023 The Apache Software Foundation. All rights reserved.