Type Parameters:
OUT - the output type of the source.

@PublicEvolving
public class KafkaSource<OUT>
extends Object
implements org.apache.flink.api.connector.source.Source<OUT,KafkaPartitionSplit,KafkaSourceEnumState>, org.apache.flink.api.java.typeutils.ResultTypeQueryable<OUT>
Use a KafkaSourceBuilder to construct a KafkaSource. The following example shows how to create a KafkaSource emitting records of String type.
KafkaSource<String> source = KafkaSource
.<String>builder()
.setBootstrapServers(KafkaSourceTestEnv.brokerConnectionStrings)
.setGroupId("MyGroup")
.setTopics(Arrays.asList(TOPIC1, TOPIC2))
.setDeserializer(new TestingKafkaRecordDeserializationSchema())
.setStartingOffsets(OffsetsInitializer.earliest())
.build();
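Once built, a source like the one above is typically attached to a streaming job via StreamExecutionEnvironment.fromSource, supplying a WatermarkStrategy and a display name. A minimal sketch (the environment setup and the surrounding job are assumed, not part of this class):

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// "Kafka Source" is only a display name used in the web UI and logs.
DataStream<String> stream =
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "Kafka Source");
```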
Note that the KafkaSourceEnumerator only supports adding new splits during split discovery; existing splits are never removed.
See KafkaSourceBuilder for more details on how to configure this source.
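One builder option closely related to split discovery is periodic partition discovery, configured through a connector property on the builder. A hedged sketch, assuming the standard property key; the broker address and topic name are placeholders:

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;

KafkaSource<String> source = KafkaSource
        .<String>builder()
        .setBootstrapServers("broker-1:9092")   // placeholder broker address
        .setTopics("input-topic")               // placeholder topic
        .setValueOnlyDeserializer(new SimpleStringSchema())
        // Poll Kafka for newly created partitions every 10 seconds; discovered
        // partitions become new splits (splits are only added, never removed).
        .setProperty("partition.discovery.interval.ms", "10000")
        .build();
```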
| Modifier and Type | Method and Description |
|---|---|
| static <OUT> KafkaSourceBuilder<OUT> | builder(): Get a KafkaSourceBuilder to build a KafkaSource. |
| org.apache.flink.api.connector.source.SplitEnumerator<KafkaPartitionSplit,KafkaSourceEnumState> | createEnumerator(org.apache.flink.api.connector.source.SplitEnumeratorContext<KafkaPartitionSplit> enumContext) |
| org.apache.flink.api.connector.source.SourceReader<OUT,KafkaPartitionSplit> | createReader(org.apache.flink.api.connector.source.SourceReaderContext readerContext) |
| org.apache.flink.api.connector.source.Boundedness | getBoundedness() |
| org.apache.flink.core.io.SimpleVersionedSerializer<KafkaSourceEnumState> | getEnumeratorCheckpointSerializer() |
| org.apache.flink.api.common.typeinfo.TypeInformation<OUT> | getProducedType() |
| org.apache.flink.core.io.SimpleVersionedSerializer<KafkaPartitionSplit> | getSplitSerializer() |
| org.apache.flink.api.connector.source.SplitEnumerator<KafkaPartitionSplit,KafkaSourceEnumState> | restoreEnumerator(org.apache.flink.api.connector.source.SplitEnumeratorContext<KafkaPartitionSplit> enumContext, KafkaSourceEnumState checkpoint) |
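The getSplitSerializer() and getEnumeratorCheckpointSerializer() methods above return SimpleVersionedSerializer instances, which Flink uses to persist splits and enumerator state in checkpoints. The contract (the framework stores the serializer's version alongside the bytes and passes it back on restore) can be illustrated with a plain-Java sketch; OffsetStateSerializer below is hypothetical, not a Flink class:

```java
import java.io.IOException;
import java.nio.ByteBuffer;

// Hypothetical illustration of the SimpleVersionedSerializer contract:
// serialize() produces raw bytes, getVersion() reports the format version,
// and deserialize() receives the version that was stored with the bytes.
class OffsetStateSerializer {
    static final int CURRENT_VERSION = 1;

    int getVersion() {
        return CURRENT_VERSION;
    }

    byte[] serialize(long offset) {
        return ByteBuffer.allocate(Long.BYTES).putLong(offset).array();
    }

    long deserialize(int version, byte[] bytes) throws IOException {
        if (version != CURRENT_VERSION) {
            // A real implementation would migrate older formats here.
            throw new IOException("Unknown version: " + version);
        }
        return ByteBuffer.wrap(bytes).getLong();
    }
}
```

Because the version is stored with the checkpointed bytes, a newer job can recognize state written by an older serializer version and migrate it on restore.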
builder

public static <OUT> KafkaSourceBuilder<OUT> builder()
Get a KafkaSourceBuilder to build a KafkaSource.

getBoundedness

public org.apache.flink.api.connector.source.Boundedness getBoundedness()
Specified by: getBoundedness in interface org.apache.flink.api.connector.source.Source<OUT,KafkaPartitionSplit,KafkaSourceEnumState>

createReader

@Internal
public org.apache.flink.api.connector.source.SourceReader<OUT,KafkaPartitionSplit> createReader(org.apache.flink.api.connector.source.SourceReaderContext readerContext) throws Exception
Specified by: createReader in interface org.apache.flink.api.connector.source.SourceReaderFactory<OUT,KafkaPartitionSplit>
Throws: Exception

createEnumerator

@Internal
public org.apache.flink.api.connector.source.SplitEnumerator<KafkaPartitionSplit,KafkaSourceEnumState> createEnumerator(org.apache.flink.api.connector.source.SplitEnumeratorContext<KafkaPartitionSplit> enumContext)
Specified by: createEnumerator in interface org.apache.flink.api.connector.source.Source<OUT,KafkaPartitionSplit,KafkaSourceEnumState>

restoreEnumerator

@Internal
public org.apache.flink.api.connector.source.SplitEnumerator<KafkaPartitionSplit,KafkaSourceEnumState> restoreEnumerator(org.apache.flink.api.connector.source.SplitEnumeratorContext<KafkaPartitionSplit> enumContext, KafkaSourceEnumState checkpoint) throws IOException
Specified by: restoreEnumerator in interface org.apache.flink.api.connector.source.Source<OUT,KafkaPartitionSplit,KafkaSourceEnumState>
Throws: IOException

getSplitSerializer

@Internal
public org.apache.flink.core.io.SimpleVersionedSerializer<KafkaPartitionSplit> getSplitSerializer()
Specified by: getSplitSerializer in interface org.apache.flink.api.connector.source.Source<OUT,KafkaPartitionSplit,KafkaSourceEnumState>

getEnumeratorCheckpointSerializer

@Internal
public org.apache.flink.core.io.SimpleVersionedSerializer<KafkaSourceEnumState> getEnumeratorCheckpointSerializer()
Specified by: getEnumeratorCheckpointSerializer in interface org.apache.flink.api.connector.source.Source<OUT,KafkaPartitionSplit,KafkaSourceEnumState>

Copyright © 2014–2023 The Apache Software Foundation. All rights reserved.