public interface KafkaWriteStream<K,V> extends WriteStream<org.apache.kafka.clients.producer.ProducerRecord<K,V>>
A WriteStream for writing to Kafka ProducerRecord. The WriteStream.write(Object) provides global control over writing a record.
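As a minimal usage sketch, assuming a broker at localhost:9092 and a topic named my-topic (both placeholders), the example below creates a stream from a plain configuration map and sends one record with the Future-based send variant:

```java
import io.vertx.core.Vertx;
import io.vertx.kafka.client.producer.KafkaWriteStream;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.HashMap;
import java.util.Map;

public class KafkaWriteStreamExample {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    // Plain Kafka producer configuration; serializers are derived from the key/value class types
    Map<String, Object> config = new HashMap<>();
    config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker address
    config.put(ProducerConfig.ACKS_CONFIG, "1");

    KafkaWriteStream<String, String> stream =
        KafkaWriteStream.create(vertx, config, String.class, String.class);

    // Future-based send: completes with the RecordMetadata of the written record
    stream.send(new ProducerRecord<>("my-topic", "my-key", "my-value"))
        .onSuccess(metadata -> System.out.println(
            "written to partition " + metadata.partition() + " at offset " + metadata.offset()))
        .onFailure(Throwable::printStackTrace);
  }
}
```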
Modifier and Type | Field and Description |
---|---|
static int | DEFAULT_MAX_SIZE |
Modifier and Type | Method and Description |
---|---|
Future<Void> | abortTransaction() - Like abortTransaction(Handler) but with a future of the result |
KafkaWriteStream<K,V> | abortTransaction(Handler<AsyncResult<Void>> handler) - Aborts the ongoing transaction. |
Future<Void> | beginTransaction() - Like beginTransaction(Handler) but with a future of the result |
KafkaWriteStream<K,V> | beginTransaction(Handler<AsyncResult<Void>> handler) - Starts a new kafka transaction. |
Future<Void> | close() - Close the stream |
void | close(Handler<AsyncResult<Void>> completionHandler) - Close the stream |
Future<Void> | close(long timeout) - Like close(long, Handler) but returns a Future of the asynchronous result |
void | close(long timeout, Handler<AsyncResult<Void>> completionHandler) - Close the stream |
Future<Void> | commitTransaction() - Like commitTransaction(Handler) but with a future of the result |
KafkaWriteStream<K,V> | commitTransaction(Handler<AsyncResult<Void>> handler) - Commits the ongoing transaction. |
static <K,V> KafkaWriteStream<K,V> | create(Vertx vertx, KafkaClientOptions options) - Create a new KafkaWriteStream instance |
static <K,V> KafkaWriteStream<K,V> | create(Vertx vertx, KafkaClientOptions options, Class<K> keyType, Class<V> valueType) - Create a new KafkaWriteStream instance |
static <K,V> KafkaWriteStream<K,V> | create(Vertx vertx, KafkaClientOptions options, org.apache.kafka.common.serialization.Serializer<K> keySerializer, org.apache.kafka.common.serialization.Serializer<V> valueSerializer) - Create a new KafkaWriteStream instance |
static <K,V> KafkaWriteStream<K,V> | create(Vertx vertx, Map<String,Object> config) - Create a new KafkaWriteStream instance |
static <K,V> KafkaWriteStream<K,V> | create(Vertx vertx, Map<String,Object> config, Class<K> keyType, Class<V> valueType) - Create a new KafkaWriteStream instance |
static <K,V> KafkaWriteStream<K,V> | create(Vertx vertx, Map<String,Object> config, org.apache.kafka.common.serialization.Serializer<K> keySerializer, org.apache.kafka.common.serialization.Serializer<V> valueSerializer) - Create a new KafkaWriteStream instance |
static <K,V> KafkaWriteStream<K,V> | create(Vertx vertx, org.apache.kafka.clients.producer.Producer<K,V> producer) - Create a new KafkaWriteStream instance |
static <K,V> KafkaWriteStream<K,V> | create(Vertx vertx, Properties config) - Create a new KafkaWriteStream instance |
static <K,V> KafkaWriteStream<K,V> | create(Vertx vertx, Properties config, Class<K> keyType, Class<V> valueType) - Create a new KafkaWriteStream instance |
static <K,V> KafkaWriteStream<K,V> | create(Vertx vertx, Properties config, org.apache.kafka.common.serialization.Serializer<K> keySerializer, org.apache.kafka.common.serialization.Serializer<V> valueSerializer) - Create a new KafkaWriteStream instance |
KafkaWriteStream<K,V> | drainHandler(Handler<Void> handler) - Set a drain handler on the stream. |
KafkaWriteStream<K,V> | exceptionHandler(Handler<Throwable> handler) - Set an exception handler on the write stream. |
Future<Void> | flush() - Like flush(Handler) but returns a Future of the asynchronous result |
KafkaWriteStream<K,V> | flush(Handler<AsyncResult<Void>> completionHandler) - Invoking this method makes all buffered records immediately available to write |
Future<Void> | initTransactions() - Like initTransactions(Handler) but with a future of the result |
KafkaWriteStream<K,V> | initTransactions(Handler<AsyncResult<Void>> handler) - Initializes the underlying kafka transactional producer. |
Future<List<org.apache.kafka.common.PartitionInfo>> | partitionsFor(String topic) - Like partitionsFor(String, Handler) but returns a Future of the asynchronous result |
KafkaWriteStream<K,V> | partitionsFor(String topic, Handler<AsyncResult<List<org.apache.kafka.common.PartitionInfo>>> handler) - Get the partition metadata for the given topic. |
Future<org.apache.kafka.clients.producer.RecordMetadata> | send(org.apache.kafka.clients.producer.ProducerRecord<K,V> record) - Asynchronously write a record to a topic |
KafkaWriteStream<K,V> | send(org.apache.kafka.clients.producer.ProducerRecord<K,V> record, Handler<AsyncResult<org.apache.kafka.clients.producer.RecordMetadata>> handler) - Asynchronously write a record to a topic |
KafkaWriteStream<K,V> | setWriteQueueMaxSize(int i) - Set the maximum size of the write queue to maxSize. |
org.apache.kafka.clients.producer.Producer<K,V> | unwrap() |
Methods inherited from interface WriteStream: end, end, end, end, write, write, writeQueueFull
static final int DEFAULT_MAX_SIZE
static <K,V> KafkaWriteStream<K,V> create(Vertx vertx, Properties config)

Parameters:
vertx - Vert.x instance to use
config - Kafka producer configuration

static <K,V> KafkaWriteStream<K,V> create(Vertx vertx, Properties config, Class<K> keyType, Class<V> valueType)

Parameters:
vertx - Vert.x instance to use
config - Kafka producer configuration
keyType - class type for the key serialization
valueType - class type for the value serialization

static <K,V> KafkaWriteStream<K,V> create(Vertx vertx, Properties config, org.apache.kafka.common.serialization.Serializer<K> keySerializer, org.apache.kafka.common.serialization.Serializer<V> valueSerializer)

Parameters:
vertx - Vert.x instance to use
config - Kafka producer configuration
keySerializer - key serializer
valueSerializer - value serializer

static <K,V> KafkaWriteStream<K,V> create(Vertx vertx, Map<String,Object> config)

Parameters:
vertx - Vert.x instance to use
config - Kafka producer configuration

static <K,V> KafkaWriteStream<K,V> create(Vertx vertx, Map<String,Object> config, Class<K> keyType, Class<V> valueType)

Parameters:
vertx - Vert.x instance to use
config - Kafka producer configuration
keyType - class type for the key serialization
valueType - class type for the value serialization

static <K,V> KafkaWriteStream<K,V> create(Vertx vertx, Map<String,Object> config, org.apache.kafka.common.serialization.Serializer<K> keySerializer, org.apache.kafka.common.serialization.Serializer<V> valueSerializer)

Parameters:
vertx - Vert.x instance to use
config - Kafka producer configuration
keySerializer - key serializer
valueSerializer - value serializer

static <K,V> KafkaWriteStream<K,V> create(Vertx vertx, KafkaClientOptions options)

Parameters:
vertx - Vert.x instance to use
options - Kafka producer options

static <K,V> KafkaWriteStream<K,V> create(Vertx vertx, KafkaClientOptions options, Class<K> keyType, Class<V> valueType)

Parameters:
vertx - Vert.x instance to use
options - Kafka producer options
keyType - class type for the key serialization
valueType - class type for the value serialization

static <K,V> KafkaWriteStream<K,V> create(Vertx vertx, KafkaClientOptions options, org.apache.kafka.common.serialization.Serializer<K> keySerializer, org.apache.kafka.common.serialization.Serializer<V> valueSerializer)

Parameters:
vertx - Vert.x instance to use
options - Kafka producer options
keySerializer - key serializer
valueSerializer - value serializer

static <K,V> KafkaWriteStream<K,V> create(Vertx vertx, org.apache.kafka.clients.producer.Producer<K,V> producer)

Parameters:
vertx - Vert.x instance to use
producer - native Kafka producer instance
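The factory variants above differ only in how the producer configuration and serializers are supplied. The sketch below, assuming a broker at localhost:9092 (a placeholder) and String keys and values, shows two of them: passing Properties together with explicit Serializer instances, and wrapping an already-created native Kafka Producer.

```java
import io.vertx.core.Vertx;
import io.vertx.kafka.client.producer.KafkaWriteStream;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class CreateVariantsExample {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    Properties config = new Properties();
    config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker address

    // Variant 1: Properties plus explicit serializer instances
    KafkaWriteStream<String, String> fromProperties =
        KafkaWriteStream.create(vertx, config, new StringSerializer(), new StringSerializer());

    // Variant 2: wrap a native Kafka producer that the application already manages
    KafkaProducer<String, String> nativeProducer =
        new KafkaProducer<>(config, new StringSerializer(), new StringSerializer());
    KafkaWriteStream<String, String> fromNativeProducer =
        KafkaWriteStream.create(vertx, nativeProducer);
  }
}
```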
KafkaWriteStream<K,V> exceptionHandler(Handler<Throwable> handler)

Description copied from interface: WriteStream

Set an exception handler on the write stream.

Specified by:
exceptionHandler in interface StreamBase
exceptionHandler in interface WriteStream<org.apache.kafka.clients.producer.ProducerRecord<K,V>>

Parameters:
handler - the exception handler
KafkaWriteStream<K,V> setWriteQueueMaxSize(int i)

Description copied from interface: WriteStream

Set the maximum size of the write queue to maxSize. You will still be able to write to the stream even if there are more than maxSize items in the write queue. This is used as an indicator by classes such as Pipe to provide flow control.

The value is defined by the implementation of the stream, e.g. in bytes for a NetSocket, etc.

Specified by:
setWriteQueueMaxSize in interface WriteStream<org.apache.kafka.clients.producer.ProducerRecord<K,V>>

Parameters:
i - the max size of the write stream
KafkaWriteStream<K,V> drainHandler(Handler<Void> handler)

Description copied from interface: WriteStream

Set a drain handler on the stream. See Pipe for an example of this being used. The stream implementation defines when the drain handler is called; for example, it could be when the queue size has been reduced to maxSize / 2.

Specified by:
drainHandler in interface WriteStream<org.apache.kafka.clients.producer.ProducerRecord<K,V>>

Parameters:
handler - the handler
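To make the flow-control contract concrete, here is a hedged sketch combining setWriteQueueMaxSize, the inherited write and writeQueueFull, and drainHandler; the queue limit, topic name and record contents are assumed placeholders, not recommended values. Writing stops when the queue reports full and resumes from the drain handler.

```java
import io.vertx.kafka.client.producer.KafkaWriteStream;
import org.apache.kafka.clients.producer.ProducerRecord;

public class BackPressureExample {

  // Writes records [start, start + count) to "my-topic", pausing whenever the write queue is full
  static void writeWithBackPressure(KafkaWriteStream<String, String> stream, int start, int count) {
    for (int i = start; i < start + count; i++) {
      stream.write(new ProducerRecord<>("my-topic", "key-" + i, "value-" + i));
      if (stream.writeQueueFull()) {
        int next = i + 1;
        // Resume with the remaining records once the queue has drained
        stream.drainHandler(v -> writeWithBackPressure(stream, next, start + count - next));
        return;
      }
    }
  }

  static void run(KafkaWriteStream<String, String> stream) {
    stream.setWriteQueueMaxSize(64); // assumed queue limit
    writeWithBackPressure(stream, 0, 1_000);
  }
}
```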
KafkaWriteStream<K,V> initTransactions(Handler<AsyncResult<Void>> handler)

Initializes the underlying kafka transactional producer. See KafkaProducer.initTransactions().

Parameters:
handler - handler called on operation completed

Future<Void> initTransactions()

Like initTransactions(Handler) but with a future of the result.

KafkaWriteStream<K,V> beginTransaction(Handler<AsyncResult<Void>> handler)

Starts a new kafka transaction. See KafkaProducer.beginTransaction().

Parameters:
handler - handler called on operation completed

Future<Void> beginTransaction()

Like beginTransaction(Handler) but with a future of the result.

KafkaWriteStream<K,V> commitTransaction(Handler<AsyncResult<Void>> handler)

Commits the ongoing transaction. See KafkaProducer.commitTransaction().

Parameters:
handler - handler called on operation completed

Future<Void> commitTransaction()

Like commitTransaction(Handler) but with a future of the result.

KafkaWriteStream<K,V> abortTransaction(Handler<AsyncResult<Void>> handler)

Aborts the ongoing transaction. See KafkaProducer.abortTransaction().

Parameters:
handler - handler called on operation completed

Future<Void> abortTransaction()

Like abortTransaction(Handler) but with a future of the result.
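Taken together, these methods map onto the usual Kafka producer transaction lifecycle. Below is a hedged sketch of that flow, assuming the stream was created with a configuration that sets transactional.id and using placeholder topic and record contents: initialize once, then begin, send and commit, aborting on failure.

```java
import io.vertx.kafka.client.producer.KafkaWriteStream;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TransactionExample {

  // Sends a single record inside a Kafka transaction; assumes transactional.id is configured
  static void sendTransactionally(KafkaWriteStream<String, String> stream) {
    stream.initTransactions()
        .compose(v -> stream.beginTransaction())
        .compose(v -> stream.send(new ProducerRecord<>("my-topic", "tx-key", "tx-value")))
        .compose(metadata -> stream.commitTransaction())
        .onSuccess(v -> System.out.println("transaction committed"))
        .onFailure(err -> {
          err.printStackTrace();
          // Best effort: roll back whatever part of the transaction was started
          stream.abortTransaction();
        });
  }
}
```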
Future<org.apache.kafka.clients.producer.RecordMetadata> send(org.apache.kafka.clients.producer.ProducerRecord<K,V> record)

Asynchronously write a record to a topic.

Parameters:
record - record to write

Returns:
a Future completed with the record metadata

KafkaWriteStream<K,V> send(org.apache.kafka.clients.producer.ProducerRecord<K,V> record, Handler<AsyncResult<org.apache.kafka.clients.producer.RecordMetadata>> handler)

Asynchronously write a record to a topic.

Parameters:
record - record to write
handler - handler called on operation completed
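Unlike write(Object), send reports the RecordMetadata of the written record. A hedged sketch of the handler-based variant, with placeholder topic, key and value:

```java
import io.vertx.kafka.client.producer.KafkaWriteStream;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class SendExample {

  static void sendOne(KafkaWriteStream<String, String> stream) {
    stream.send(new ProducerRecord<>("my-topic", "a-key", "a-value"), ar -> {
      if (ar.succeeded()) {
        RecordMetadata metadata = ar.result();
        System.out.println("topic=" + metadata.topic()
            + ", partition=" + metadata.partition()
            + ", offset=" + metadata.offset());
      } else {
        ar.cause().printStackTrace();
      }
    });
  }
}
```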
KafkaWriteStream<K,V> partitionsFor(String topic, Handler<AsyncResult<List<org.apache.kafka.common.PartitionInfo>>> handler)

Get the partition metadata for the given topic.

Parameters:
topic - topic for which to get the partitions info
handler - handler called on operation completed

Future<List<org.apache.kafka.common.PartitionInfo>> partitionsFor(String topic)

Like partitionsFor(String, Handler) but returns a Future of the asynchronous result.
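A hedged sketch of querying partition metadata with the Future-based variant; the topic name is a placeholder:

```java
import io.vertx.kafka.client.producer.KafkaWriteStream;

public class PartitionsForExample {

  static void printPartitions(KafkaWriteStream<String, String> stream) {
    stream.partitionsFor("my-topic")
        .onSuccess(partitions -> partitions.forEach(p ->
            System.out.println("partition " + p.partition() + ", leader " + p.leader())))
        .onFailure(Throwable::printStackTrace);
  }
}
```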
KafkaWriteStream<K,V> flush(Handler<AsyncResult<Void>> completionHandler)

Invoking this method makes all buffered records immediately available to write.

Parameters:
completionHandler - handler called on operation completed

Future<Void> flush()

Like flush(Handler) but returns a Future of the asynchronous result.

void close(Handler<AsyncResult<Void>> completionHandler)

Close the stream.

Parameters:
completionHandler - handler called on operation completed

void close(long timeout, Handler<AsyncResult<Void>> completionHandler)

Close the stream.

Parameters:
timeout - timeout to wait for closing
completionHandler - handler called on operation completed

Future<Void> close(long timeout)

Like close(long, Handler) but returns a Future of the asynchronous result.
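For shutdown, a hedged sketch that flushes buffered records and then closes the stream with a bounded wait; the 10 second timeout is an arbitrary placeholder:

```java
import io.vertx.kafka.client.producer.KafkaWriteStream;

public class ShutdownExample {

  static void shutdown(KafkaWriteStream<String, String> stream) {
    stream.flush()
        .compose(v -> stream.close(10_000)) // wait up to 10 seconds for outstanding sends
        .onSuccess(v -> System.out.println("producer stream closed"))
        .onFailure(Throwable::printStackTrace);
  }
}
```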
Copyright © 2021 Eclipse. All rights reserved.