public class DeadLetterPublishingRecoverer
extends java.lang.Object
implements java.util.function.BiConsumer<org.apache.kafka.clients.consumer.ConsumerRecord<?,?>,java.lang.Exception>
BiConsumer that publishes a failed record to a dead-letter topic.

| Constructor and Description |
|---|
| `DeadLetterPublishingRecoverer(KafkaTemplate<java.lang.Object,java.lang.Object> template)` Create an instance with the provided template and a default destination resolving function that returns a TopicPartition based on the original topic (appended with ".DLT") from the failed record, and the same partition as the failed record. |
| `DeadLetterPublishingRecoverer(KafkaTemplate<java.lang.Object,java.lang.Object> template, java.util.function.BiFunction<org.apache.kafka.clients.consumer.ConsumerRecord<?,?>,java.lang.Exception,org.apache.kafka.common.TopicPartition> destinationResolver)` Create an instance with the provided template and a destination resolving function that receives the failed consumer record and the exception and returns a TopicPartition. |
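As a usage sketch (not part of this Javadoc): because the second constructor accepts any BiFunction, the destination can depend on the failure itself. Only the constructor signatures below are taken from the API; the topic suffixes, class name, and factory method here are illustrative assumptions.

```java
import java.util.function.BiFunction;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.TopicPartition;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;

public class DltWiring {

    // Hypothetical factory method; "template" would normally be a Spring bean.
    public static DeadLetterPublishingRecoverer recoverer(KafkaTemplate<Object, Object> template) {
        // Route by exception type. The ".parse.DLT" suffix is an assumption,
        // not a spring-kafka convention; only "<topic>.DLT" mirrors the
        // documented default.
        BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition> resolver =
                (record, ex) -> (ex instanceof IllegalStateException)
                        ? new TopicPartition(record.topic() + ".parse.DLT", record.partition())
                        : new TopicPartition(record.topic() + ".DLT", record.partition());
        return new DeadLetterPublishingRecoverer(template, resolver);
    }
}
```

Returning a TopicPartition with a negative partition lets the producer choose the partition, per the constructor notes below.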
| Modifier and Type | Method and Description |
|---|---|
| `void` | `accept(org.apache.kafka.clients.consumer.ConsumerRecord<?,?> record, java.lang.Exception exception)` |
| `protected org.apache.kafka.clients.producer.ProducerRecord<java.lang.Object,java.lang.Object>` | `createProducerRecord(org.apache.kafka.clients.consumer.ConsumerRecord<?,?> record, org.apache.kafka.common.TopicPartition topicPartition, org.apache.kafka.common.header.internals.RecordHeaders headers)` Subclasses can override this method to customize the producer record to send to the DLQ. |
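The destination rules described in this class can be illustrated with plain Java, with no Kafka dependencies. `DltRule` is a hypothetical helper that models the documented behavior, not part of spring-kafka:

```java
// Models the destination rules documented for this class:
// - default dead-letter topic = original topic + ".DLT", same partition;
// - a negative resolved partition means "no partition is set" (the producer
//   chooses), which in a ProducerRecord is expressed as a null partition.
public class DltRule {

    static String dltTopic(String originalTopic) {
        return originalTopic + ".DLT";
    }

    // Converts a resolved partition to the value used on the outgoing record.
    static Integer producerPartition(int resolvedPartition) {
        return resolvedPartition < 0 ? null : resolvedPartition;
    }

    public static void main(String[] args) {
        System.out.println(dltTopic("orders"));    // orders.DLT
        System.out.println(producerPartition(3));  // 3
        System.out.println(producerPartition(-1)); // null
    }
}
```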
public DeadLetterPublishingRecoverer(KafkaTemplate<java.lang.Object,java.lang.Object> template)

Create an instance with the provided template and a default destination resolving function that returns a TopicPartition based on the original topic (appended with ".DLT") from the failed record, and the same partition as the failed record.

Parameters:
template - the KafkaTemplate to use for publishing.

public DeadLetterPublishingRecoverer(KafkaTemplate<java.lang.Object,java.lang.Object> template,
        java.util.function.BiFunction<org.apache.kafka.clients.consumer.ConsumerRecord<?,?>,java.lang.Exception,org.apache.kafka.common.TopicPartition> destinationResolver)

Create an instance with the provided template and a destination resolving function that receives the failed consumer record and the exception and returns a TopicPartition. If the partition in the TopicPartition is less than 0, no partition is set when publishing to the topic.

Parameters:
template - the KafkaTemplate to use for publishing.
destinationResolver - the resolving function.

public void accept(org.apache.kafka.clients.consumer.ConsumerRecord<?,?> record,
        java.lang.Exception exception)
Specified by:
accept in interface java.util.function.BiConsumer<org.apache.kafka.clients.consumer.ConsumerRecord<?,?>,java.lang.Exception>

protected org.apache.kafka.clients.producer.ProducerRecord<java.lang.Object,java.lang.Object> createProducerRecord(org.apache.kafka.clients.consumer.ConsumerRecord<?,?> record,
        org.apache.kafka.common.TopicPartition topicPartition,
        org.apache.kafka.common.header.internals.RecordHeaders headers)

Subclasses can override this method to customize the producer record to send to the DLQ. If the partition in the TopicPartition is less than 0, it must be set to null in the ProducerRecord.

Parameters:
record - the failed record
topicPartition - the TopicPartition returned by the destination resolver.
headers - the headers - original record headers plus DLT headers.

See Also:
KafkaHeaders
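A minimal sketch of such an override, e.g. to tag outgoing records with an extra header. The subclass name and the "x-failed-app" header are assumptions for illustration; the overridden signature is taken from the API above.

```java
import java.nio.charset.StandardCharsets;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.header.internals.RecordHeaders;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;

// Hypothetical subclass: adds an application-defined header before the
// record is published to the DLT.
public class TaggingDltRecoverer extends DeadLetterPublishingRecoverer {

    public TaggingDltRecoverer(KafkaTemplate<Object, Object> template) {
        super(template);
    }

    @Override
    protected ProducerRecord<Object, Object> createProducerRecord(
            ConsumerRecord<?, ?> record, TopicPartition topicPartition, RecordHeaders headers) {
        headers.add("x-failed-app", "orders-service".getBytes(StandardCharsets.UTF_8));
        // Honor the contract above: a partition less than 0 must become null
        // so the producer chooses the partition.
        Integer partition = topicPartition.partition() < 0 ? null : topicPartition.partition();
        return new ProducerRecord<>(topicPartition.topic(), partition,
                record.key(), record.value(), headers);
    }
}
```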