Class AdminApiDriver<K,V>

java.lang.Object
org.apache.kafka.clients.admin.internals.AdminApiDriver<K,V>
Type Parameters:
K - The key type, which is also the granularity of the request routing (e.g. this could be `TopicPartition` in the case of requests intended for a partition leader or the `GroupId` in the case of consumer group requests intended for the group coordinator)
V - The fulfillment type for each key (e.g. this could be consumer group state when the key type is a consumer `GroupId`)

public class AdminApiDriver<K,V> extends Object
The `KafkaAdminClient`'s internal `Call` primitive is not a good fit for multi-stage request workflows, such as those for the group coordinator APIs or any request that must be sent to a partition leader. Typically these APIs have two concrete stages:

1. Lookup: Find the broker that can fulfill the request (e.g. partition leader or group coordinator)
2. Fulfillment: Send the request to the broker found in the first step

This is complicated by the fact that `Admin` APIs are typically batched, which means the Lookup stage may result in a set of brokers. For example, take a `ListOffsets` request for a set of topic partitions. In the Lookup stage, we find the partition leaders for this set of partitions; in the Fulfillment stage, we group the partitions according to the IDs of the discovered leaders.

Additionally, the flow between these two stages is bi-directional. We may find, after sending a `ListOffsets` request to an expected leader, that there was a leader change. This would result in the topic partition being sent back to the Lookup stage.

Managing this complexity by chaining together `Call` implementations is challenging and messy, so instead we use this class to do the bookkeeping. It handles both the batching aspect and the transitions between the Lookup and Fulfillment stages.

Note that the interpretation of the `retries` configuration becomes ambiguous for this kind of pipeline. We could treat it as an overall limit on the number of requests that can be sent, but that is not very useful because each pipeline has a minimum number of requests that must be sent in order to satisfy the request. Instead, we apply the retry limit independently at each stage so that each stage has at least one opportunity to complete. So if a user sets `retries=1`, the full pipeline can still complete as long as there are no request failures.
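The pipeline described above can be sketched in miniature. This is a hypothetical illustration of the Lookup/Fulfillment pattern, not the real `AdminApiDriver` API: the names `lookup`, `batchByBroker`, and `fulfill` are invented, plain maps stand in for metadata and network requests, and a leader change between the two stages shows a key being routed back to Lookup.

```java
import java.util.*;

// Illustrative sketch of the two-stage Lookup -> Fulfillment pipeline.
// All names here are hypothetical; they are not part of AdminApiDriver.
public class TwoStageSketch {

    // Lookup stage: resolve each key to the broker expected to fulfill it.
    // A plain map stands in for a real metadata or FindCoordinator request.
    static Map<String, Integer> lookup(Collection<String> keys,
                                       Map<String, Integer> leaders) {
        Map<String, Integer> resolved = new HashMap<>();
        for (String key : keys) resolved.put(key, leaders.get(key));
        return resolved;
    }

    // Batch the resolved keys by broker id so that a single Fulfillment
    // request per broker carries all of that broker's keys.
    static Map<Integer, List<String>> batchByBroker(Map<String, Integer> resolved) {
        Map<Integer, List<String>> batches = new HashMap<>();
        resolved.forEach((key, broker) ->
                batches.computeIfAbsent(broker, b -> new ArrayList<>()).add(key));
        return batches;
    }

    // Fulfillment stage: "send" each batch; keys whose leader has moved are
    // returned so the driver can send them back to the Lookup stage.
    static List<String> fulfill(Map<Integer, List<String>> batches,
                                Map<String, Integer> currentLeaders,
                                Map<String, String> results) {
        List<String> retryLookup = new ArrayList<>();
        for (Map.Entry<Integer, List<String>> batch : batches.entrySet()) {
            for (String key : batch.getValue()) {
                if (!currentLeaders.get(key).equals(batch.getKey())) {
                    retryLookup.add(key); // leader changed: back to Lookup
                } else {
                    results.put(key, "ok@broker-" + batch.getKey());
                }
            }
        }
        return retryLookup;
    }

    public static void main(String[] args) {
        Map<String, Integer> initialLeaders =
                new HashMap<>(Map.of("p0", 1, "p1", 2, "p2", 1));
        // Leadership of p2 moves to broker 2 between Lookup and Fulfillment.
        Map<String, Integer> currentLeaders = new HashMap<>(initialLeaders);
        currentLeaders.put("p2", 2);

        Map<String, String> results = new HashMap<>();
        List<String> pending = new ArrayList<>(initialLeaders.keySet());
        Map<String, Integer> metadataView = initialLeaders;
        while (!pending.isEmpty()) {
            Map<String, Integer> resolved = lookup(pending, metadataView);
            pending = fulfill(batchByBroker(resolved), currentLeaders, results);
            metadataView = currentLeaders; // next Lookup sees fresh metadata
        }
        System.out.println(results);
    }
}
```

In this toy run the first pass batches `p0` and `p2` to broker 1 and `p1` to broker 2; `p2` bounces back to Lookup because its leadership moved, and the second pass completes it against broker 2, mirroring the bi-directional flow the class description explains.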