public class KafkaProducer<K,V> extends Object implements WriteStream<KafkaProducerRecord<K,V>>
The producer provides global control over writing a record.
NOTE: This class has been automatically generated from the original non RX-ified interface using Vert.x codegen.

Modifier and Type | Field and Description
---|---
static io.vertx.lang.rx.TypeArg<KafkaProducer> | __TYPE_ARG
io.vertx.lang.rx.TypeArg<K> | __typeArg_0
io.vertx.lang.rx.TypeArg<V> | __typeArg_1
Constructor and Description
---|
KafkaProducer(KafkaProducer delegate)
KafkaProducer(Object delegate, io.vertx.lang.rx.TypeArg<K> typeArg_0, io.vertx.lang.rx.TypeArg<V> typeArg_1)
Modifier and Type | Method and Description
---|---
void | close() - Close the producer
void | close(Handler<AsyncResult<Void>> completionHandler) - Close the producer
void | close(long timeout, Handler<AsyncResult<Void>> completionHandler) - Close the producer
static <K,V> KafkaProducer<K,V> | create(Vertx vertx, Map<String,String> config) - Create a new KafkaProducer instance
static <K,V> KafkaProducer<K,V> | create(Vertx vertx, Map<String,String> config, Class<K> keyType, Class<V> valueType) - Create a new KafkaProducer instance
static <K,V> KafkaProducer<K,V> | createShared(Vertx vertx, String name, Map<String,String> config) - Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name
static <K,V> KafkaProducer<K,V> | createShared(Vertx vertx, String name, Map<String,String> config, Class<K> keyType, Class<V> valueType) - Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name
KafkaProducer<K,V> | drainHandler(Handler<Void> handler) - Set a drain handler on the stream
void | end() - Ends the stream
void | end(Handler<AsyncResult<Void>> handler) - Same as WriteStream.end() but with a handler called when the operation completes
void | end(KafkaProducerRecord<K,V> data) - Same as WriteStream.end() but writes some data to the stream before ending
void | end(KafkaProducerRecord<K,V> data, Handler<AsyncResult<Void>> handler) - Same as end(KafkaProducerRecord) but with a handler called when the operation completes
boolean | equals(Object o)
KafkaProducer<K,V> | exceptionHandler(Handler<Throwable> handler) - Set an exception handler on the write stream
KafkaProducer<K,V> | flush(Handler<Void> completionHandler) - Invoking this method makes all buffered records immediately available to write
KafkaProducer | getDelegate()
int | hashCode()
static <K,V> KafkaProducer<K,V> | newInstance(KafkaProducer arg)
static <K,V> KafkaProducer<K,V> | newInstance(KafkaProducer arg, io.vertx.lang.rx.TypeArg<K> __typeArg_K, io.vertx.lang.rx.TypeArg<V> __typeArg_V)
KafkaProducer<K,V> | partitionsFor(String topic, Handler<AsyncResult<List<PartitionInfo>>> handler) - Get the partition metadata for the given topic
Single<Void> | rxClose() - Close the producer
Single<Void> | rxClose(long timeout) - Close the producer
Single<Void> | rxEnd() - Same as WriteStream.end() but with a handler called when the operation completes
Single<Void> | rxEnd(KafkaProducerRecord<K,V> data) - Same as end(KafkaProducerRecord) but with a handler called when the operation completes
Single<List<PartitionInfo>> | rxPartitionsFor(String topic) - Get the partition metadata for the given topic
Single<RecordMetadata> | rxSend(KafkaProducerRecord<K,V> record) - Asynchronously write a record to a topic
Single<Void> | rxWrite(KafkaProducerRecord<K,V> data)
KafkaProducer<K,V> | send(KafkaProducerRecord<K,V> record) - Asynchronously write a record to a topic
KafkaProducer<K,V> | send(KafkaProducerRecord<K,V> record, Handler<AsyncResult<RecordMetadata>> handler) - Asynchronously write a record to a topic
KafkaProducer<K,V> | setWriteQueueMaxSize(int i) - Set the maximum size of the write queue to maxSize
String | toString()
io.vertx.rx.java.WriteStreamSubscriber<KafkaProducerRecord<K,V>> | toSubscriber()
KafkaProducer<K,V> | write(KafkaProducerRecord<K,V> kafkaProducerRecord) - Write some data to the stream
KafkaProducer<K,V> | write(KafkaProducerRecord<K,V> data, Handler<AsyncResult<Void>> handler) - Same as write(KafkaProducerRecord) but with a handler called when the operation completes
boolean | writeQueueFull() - Returns true if there are more bytes in the write queue than the value set using WriteStream.setWriteQueueMaxSize(int)
Methods inherited from class java.lang.Object: clone, finalize, getClass, notify, notifyAll, wait, wait, wait
Methods inherited from interface WriteStream: newInstance, newInstance
Methods inherited from interface StreamBase: newInstance
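The create and createShared factories summarized above take a plain configuration map of Kafka client properties. A minimal construction sketch; the io.vertx.rxjava package names, the broker address, and the serializer choices are assumptions for illustration, not taken from this page:

```java
import java.util.HashMap;
import java.util.Map;

import io.vertx.rxjava.core.Vertx;
import io.vertx.rxjava.kafka.client.producer.KafkaProducer;

public class ProducerSetup {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    // Plain Kafka client properties, passed through to the underlying producer
    Map<String, String> config = new HashMap<>();
    config.put("bootstrap.servers", "localhost:9092"); // assumed broker address
    config.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    config.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    config.put("acks", "1");

    // create: a producer private to this caller
    KafkaProducer<String, String> producer = KafkaProducer.create(vertx, config);

    // createShared: any other KafkaProducer created with the same name
    // shares the same underlying stream
    KafkaProducer<String, String> shared =
        KafkaProducer.createShared(vertx, "my-shared-producer", config);
  }
}
```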
public static final io.vertx.lang.rx.TypeArg<KafkaProducer> __TYPE_ARG
public final io.vertx.lang.rx.TypeArg<K> __typeArg_0
public final io.vertx.lang.rx.TypeArg<V> __typeArg_1
public KafkaProducer(KafkaProducer delegate)
public KafkaProducer getDelegate()
Specified by: getDelegate in interface StreamBase
Specified by: getDelegate in interface WriteStream<KafkaProducerRecord<K,V>>
public io.vertx.rx.java.WriteStreamSubscriber<KafkaProducerRecord<K,V>> toSubscriber()
public void end()
Ends the stream. Once the stream has ended, it cannot be used any more.
Specified by: end in interface WriteStream<KafkaProducerRecord<K,V>>

public void end(Handler<AsyncResult<Void>> handler)
Same as WriteStream.end() but with a handler called when the operation completes.
Specified by: end in interface WriteStream<KafkaProducerRecord<K,V>>
Parameters:
handler - handler called when the operation completes
public Single<Void> rxEnd()
Same as WriteStream.end() but with a handler called when the operation completes.

public void end(KafkaProducerRecord<K,V> data)
Same as WriteStream.end() but writes some data to the stream before ending.
Specified by: end in interface WriteStream<KafkaProducerRecord<K,V>>
Parameters:
data - the data to write

public void end(KafkaProducerRecord<K,V> data, Handler<AsyncResult<Void>> handler)
Same as end(KafkaProducerRecord) but with a handler called when the operation completes.
Specified by: end in interface WriteStream<KafkaProducerRecord<K,V>>
Parameters:
data - the data to write
handler - handler called when the operation completes

public Single<Void> rxEnd(KafkaProducerRecord<K,V> data)
Same as end(KafkaProducerRecord) but with a handler called when the operation completes.
Parameters:
data - the data to write
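The end variants above can flush a final record and then shut the write stream down. A hedged sketch, assuming an already-created producer and an illustrative topic name (KafkaProducerRecord.create is assumed from the companion record class, not shown on this page):

```java
// Assumes a previously created KafkaProducer<String, String> named producer
KafkaProducerRecord<String, String> last =
    KafkaProducerRecord.create("demo-topic", "key", "final value");

// end(data, handler): write one last record, then end the stream
producer.end(last, ar -> {
  if (ar.succeeded()) {
    System.out.println("stream ended");
  } else {
    ar.cause().printStackTrace();
  }
});

// Rx alternative: rxEnd(last) returns a Single<Void> that completes
// once the stream has ended
// producer.rxEnd(last).subscribe(v -> System.out.println("stream ended"));
```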
public static <K,V> KafkaProducer<K,V> createShared(Vertx vertx, String name, Map<String,String> config)
Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name.
Parameters:
vertx - Vert.x instance to use
name - the producer name to identify it
config - Kafka producer configuration

public static <K,V> KafkaProducer<K,V> createShared(Vertx vertx, String name, Map<String,String> config, Class<K> keyType, Class<V> valueType)
Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name.
Parameters:
vertx - Vert.x instance to use
name - the producer name to identify it
config - Kafka producer configuration
keyType - class type for the key serialization
valueType - class type for the value serialization

public static <K,V> KafkaProducer<K,V> create(Vertx vertx, Map<String,String> config)
Create a new KafkaProducer instance.
Parameters:
vertx - Vert.x instance to use
config - Kafka producer configuration

public static <K,V> KafkaProducer<K,V> create(Vertx vertx, Map<String,String> config, Class<K> keyType, Class<V> valueType)
Create a new KafkaProducer instance.
Parameters:
vertx - Vert.x instance to use
config - Kafka producer configuration
keyType - class type for the key serialization
valueType - class type for the value serialization

public KafkaProducer<K,V> exceptionHandler(Handler<Throwable> handler)
Set an exception handler on the write stream.
Specified by: exceptionHandler in interface StreamBase
Specified by: exceptionHandler in interface WriteStream<KafkaProducerRecord<K,V>>
Parameters:
handler - the exception handler

public KafkaProducer<K,V> write(KafkaProducerRecord<K,V> kafkaProducerRecord)
Write some data to the stream. To avoid running out of memory, check the WriteStream.writeQueueFull() method before writing. This is done automatically if using a Pump.
Specified by: write in interface WriteStream<KafkaProducerRecord<K,V>>
Parameters:
kafkaProducerRecord - the data to write

public KafkaProducer<K,V> setWriteQueueMaxSize(int i)
Set the maximum size of the write queue to maxSize. You will still be able to write to the stream even if there are more than maxSize items in the write queue. This is used as an indicator by classes such as Pump to provide flow control. The value is defined by the implementation of the stream, e.g. in bytes for a NetSocket, the number of Message for a MessageProducer, etc.
Specified by: setWriteQueueMaxSize in interface WriteStream<KafkaProducerRecord<K,V>>
Parameters:
i - the max size of the write stream

public boolean writeQueueFull()
This will return true if there are more bytes in the write queue than the value set using WriteStream.setWriteQueueMaxSize(int).
Specified by: writeQueueFull in interface WriteStream<KafkaProducerRecord<K,V>>
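The write-queue contract above (setWriteQueueMaxSize, writeQueueFull) supports a simple backpressure check before each write. A sketch under the same assumptions as before (existing producer, illustrative topic name, KafkaProducerRecord.create assumed):

```java
// Bound the write queue; the unit is implementation-defined
producer.setWriteQueueMaxSize(1000);

KafkaProducerRecord<String, String> record =
    KafkaProducerRecord.create("demo-topic", "key", "value");

// Check for a full queue before writing, as the write(...) docs advise
if (!producer.writeQueueFull()) {
  producer.write(record);
} else {
  // Too much buffered data: wait for the drain handler before writing more
}
```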
public KafkaProducer<K,V> drainHandler(Handler<Void> handler)
Set a drain handler on the stream. See Pump for an example of this being used. The stream implementation defines when the drain handler is called; for example it could be when the queue size has been reduced to maxSize / 2.
Specified by: drainHandler in interface WriteStream<KafkaProducerRecord<K,V>>
Parameters:
handler - the handler

public KafkaProducer<K,V> write(KafkaProducerRecord<K,V> data, Handler<AsyncResult<Void>> handler)
Same as write(KafkaProducerRecord) but with a handler called when the operation completes.
Specified by: write in interface WriteStream<KafkaProducerRecord<K,V>>
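Combining writeQueueFull with drainHandler gives the pause-and-resume loop that a Pump automates. An illustrative sketch; the writeAll helper is hypothetical, not part of this API:

```java
// Hypothetical helper: writes records until the queue fills, then
// re-registers itself as the drain handler to resume later
static void writeAll(KafkaProducer<String, String> producer,
                     java.util.Iterator<KafkaProducerRecord<String, String>> records) {
  while (records.hasNext()) {
    if (producer.writeQueueFull()) {
      // Invoked once the queue has drained (e.g. down to maxSize / 2)
      producer.drainHandler(v -> writeAll(producer, records));
      return;
    }
    producer.write(records.next());
  }
}
```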
public Single<Void> rxWrite(KafkaProducerRecord<K,V> data)
public KafkaProducer<K,V> send(KafkaProducerRecord<K,V> record)
Asynchronously write a record to a topic.
Parameters:
record - record to write

public KafkaProducer<K,V> send(KafkaProducerRecord<K,V> record, Handler<AsyncResult<RecordMetadata>> handler)
Asynchronously write a record to a topic.
Parameters:
record - record to write
handler - handler called on operation completed

public Single<RecordMetadata> rxSend(KafkaProducerRecord<K,V> record)
Asynchronously write a record to a topic.
Parameters:
record - record to write

public KafkaProducer<K,V> partitionsFor(String topic, Handler<AsyncResult<List<PartitionInfo>>> handler)
Get the partition metadata for the given topic.
Parameters:
topic - the topic for which to get partition info
handler - handler called on operation completed

public Single<List<PartitionInfo>> rxPartitionsFor(String topic)
Get the partition metadata for the given topic.
Parameters:
topic - the topic for which to get partition info

public KafkaProducer<K,V> flush(Handler<Void> completionHandler)
Invoking this method makes all buffered records immediately available to write.
Parameters:
completionHandler - handler called on operation completed

public void close()
Close the producer.

public void close(Handler<AsyncResult<Void>> completionHandler)
Close the producer.
Parameters:
completionHandler - handler called on operation completed

public void close(long timeout, Handler<AsyncResult<Void>> completionHandler)
Close the producer.
Parameters:
timeout - timeout to wait for closing
completionHandler - handler called on operation completed

public Single<Void> rxClose(long timeout)
Close the producer.
Parameters:
timeout - timeout to wait for closing

public static <K,V> KafkaProducer<K,V> newInstance(KafkaProducer arg)
public static <K,V> KafkaProducer<K,V> newInstance(KafkaProducer arg, io.vertx.lang.rx.TypeArg<K> __typeArg_K, io.vertx.lang.rx.TypeArg<V> __typeArg_V)
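The send and rxSend variants above report a RecordMetadata on success. A closing usage sketch under the same assumptions (existing producer, illustrative topic, KafkaProducerRecord.create assumed):

```java
KafkaProducerRecord<String, String> record =
    KafkaProducerRecord.create("demo-topic", "key", "value");

// Callback style: metadata carries the topic, partition and offset
producer.send(record, ar -> {
  if (ar.succeeded()) {
    RecordMetadata md = ar.result();
    System.out.println("wrote " + md.getTopic()
        + "/" + md.getPartition() + "@" + md.getOffset());
  }
});

// Rx style: the Single emits the metadata when the write completes
producer.rxSend(record).subscribe(
    md -> System.out.println("offset " + md.getOffset()),
    Throwable::printStackTrace);

// Release resources when done
producer.close();
```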
Copyright © 2023 Eclipse. All rights reserved.