KAFKA-15852: Move UncleanLeaderElectionTest to server module #21689

mimaison wants to merge 3 commits into apache:trunk from
Conversation
```java
    assertInstanceOf(InvalidConfigurationException.class, e.getCause());
}

public void verifyUncleanLeaderElectionEnabled(GroupProtocol groupProtocol) throws Exception {
```

> Nit: Doesn't need to be `public`. Or, better, we could move it to utils so it can be reused.
```java
    assertEquals(List.of("first", "third"), consumeAllMessages(2, groupProtocol));
}

public void verifyUncleanLeaderElectionDisabled(GroupProtocol groupProtocol) throws Exception {
```

akhileshchg left a comment:

> PR looks great. LGTM with minor comments.
```java
    new IllegalStateException("Cannot get topic: " + topic + ", partition: " + partition + " in server metadata cache"));
}

private static <K, V> List<ConsumerRecord<K, V>> pollUntilAtLeastNumRecords(Consumer<K, V> consumer, int numRecords) throws Exception {
```

> Could you merge `pollRecordsUntilTrue` with `pollUntilAtLeastNumRecords`? For example:
>
> ```java
> private static <K, V> List<ConsumerRecord<K, V>> pollUntilAtLeastNumRecords(Consumer<K, V> consumer, int numRecords) throws Exception {
>     final List<ConsumerRecord<K, V>> records = new ArrayList<>();
>     waitForCondition(() -> {
>         consumer.poll(Duration.ofMillis(100L)).forEach(records::add);
>         return records.size() >= numRecords;
>     }, DEFAULT_MAX_WAIT_MS, () -> "Consumed " + records.size() + " records before timeout instead of the expected " + numRecords + " records");
>     return records;
> }
> ```

> The original methods in `kafka.utils.TestUtils` are called from multiple places, which is why I kept them separate here too.
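That trade-off aside, the accumulate-inside-a-wait-condition pattern the suggestion relies on can be sketched with plain Java. Everything below is a stdlib-only illustration — `pollUntilAtLeast` and the `Supplier`-based source are hypothetical stand-ins, not Kafka's API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Stdlib-only sketch of the pattern suggested above: keep polling a batch
// source and accumulating results until at least numRecords have arrived
// or a deadline passes. All names here are illustrative, not Kafka's API.
public class PollUntilSketch {

    public static <T> List<T> pollUntilAtLeast(Supplier<List<T>> poll, int numRecords, long timeoutMs) {
        List<T> records = new ArrayList<>();
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (records.size() < numRecords) {
            if (System.currentTimeMillis() > deadline) {
                throw new AssertionError("Consumed " + records.size()
                        + " records before timeout instead of the expected " + numRecords);
            }
            // a real implementation would call consumer.poll(...) here and back off briefly
            records.addAll(poll.get());
        }
        return records;
    }

    public static void main(String[] args) {
        // simulated consumer returning one record per poll
        List<String> source = new ArrayList<>(List.of("first", "second", "third"));
        List<String> got = pollUntilAtLeast(
                () -> source.isEmpty() ? List.<String>of() : List.of(source.remove(0)),
                3, 5_000L);
        System.out.println(got); // [first, second, third]
    }
}
```

The key design point mirrors the review comment: the accumulator lives outside the condition check, so records collected across multiple poll calls are retained when the condition finally becomes true.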
```java
 * @return The current leader for the topic partition
 * @throws InterruptedException If waitForCondition is interrupted
 */
public static int awaitLeaderChange(ClusterInstance cluster, TopicPartition tp, Optional<Integer> expectedLeaderOpt, long timeout) throws InterruptedException {
```

> ```java
> public static int awaitLeaderChange(ClusterInstance cluster, TopicPartition tp, Optional<Integer> expectedLeaderOpt, long timeout) throws InterruptedException {
>     if (expectedLeaderOpt.isPresent()) {
>         LOG.debug("Waiting for leader of {} to change to {}", tp, expectedLeaderOpt.get());
>     } else {
>         LOG.debug("Waiting for a leader to be elected for {}", tp);
>     }
>     Supplier<Optional<Integer>> newLeaderExists = () -> cluster.brokers().values().stream()
>         .filter(broker -> expectedLeaderOpt.isEmpty() || broker.config().brokerId() == expectedLeaderOpt.get())
>         .filter(broker -> broker.replicaManager().onlinePartition(tp).exists(partition -> partition.leaderLogIfLocal().isDefined()))
>         .map(broker -> broker.config().brokerId())
>         .findFirst();
>     waitForCondition(
>         () -> newLeaderExists.get().isPresent(),
>         timeout,
>         "Did not observe leader change for partition " + tp + " after " + timeout + " ms"
>     );
>     return newLeaderExists.get().get();
> }
> ```

```java
    return admin.incrementalAlterConfigs(Map.of(configResource, configEntries));
}

private void waitForNoLeaderAndIsrHasOldLeaderId(MetadataCache metadataCache, int leaderId) throws InterruptedException {
```
> ```java
> private void waitForNoLeaderAndIsrHasOldLeaderId(MetadataCache metadataCache, int leaderId) throws InterruptedException {
>     waitForCondition(
>         () -> metadataCache.getLeaderAndIsr(TOPIC, PARTITION_ID)
>             .filter(leaderAndIsr -> leaderAndIsr.leader() == LeaderConstants.NO_LEADER)
>             .filter(leaderAndIsr -> leaderAndIsr.isr().equals(Set.of(leaderId)))
>             .isPresent(),
>         DEFAULT_MAX_WAIT_MS,
>         "Timed out waiting for the broker metadata cache to update the info for topic partition: " + TOPIC_PARTITION);
> }
> ```

```java
private AlterConfigsResult alterTopicConfigs(Map<String, String> configs) {
    ConfigResource configResource = new ConfigResource(ConfigResource.Type.TOPIC, TOPIC);

    Collection<AlterConfigOp> configEntries = configs.entrySet().stream()
```
> ```java
> var configEntries = configs.entrySet().stream()
>     .map(e -> new AlterConfigOp(new ConfigEntry(e.getKey(), e.getValue()),
>         AlterConfigOp.OpType.SET))
>     .toList();
> ```
>
> We could use a single map operation to create the `AlterConfigOp` instances.

> Good idea, I pushed an update. Thanks.
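For readers outside the Kafka codebase, the single-`map` transform adopted here can be shown with a stdlib-only sketch. The `Op` record below is a hypothetical stand-in for the `AlterConfigOp`/`ConfigEntry` pair:

```java
import java.util.List;
import java.util.Map;

// Stdlib-only sketch of building one operation object per config entry in a
// single .map() pass. Op is a hypothetical stand-in for AlterConfigOp/ConfigEntry.
public class ConfigOpsSketch {

    record Op(String key, String value, String opType) {}

    static List<Op> toSetOps(Map<String, String> configs) {
        return configs.entrySet().stream()
                .map(e -> new Op(e.getKey(), e.getValue(), "SET"))
                .toList();
    }

    public static void main(String[] args) {
        System.out.println(toSetOps(Map.of("unclean.leader.election.enable", "true")));
    }
}
```

The design point is the same as in the review comment: each map entry becomes exactly one operation object in one pass, with no intermediate collection.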
```java
private static final int BROKER_ID_0 = 0;
private static final int BROKER_ID_1 = 1;
private static final Random RANDOM = new Random();
private static final String TOPIC = "topic" + RANDOM.nextLong();
```

> It seems the previous behaviour was to generate a different topic name for each test case.

> Each test uses a new cluster, so instead I was thinking of removing the randomness altogether to make things simpler.
Convert the test from `QuorumTestHarness` to `@ClusterTest`. Since we can have `@ParameterizedTest` with `@ClusterTest`, I created Classic and Consumer protocol versions of all the tests that use a consumer.
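What parameterizing over the group protocol achieves can be illustrated with a plain-Java sketch. The enum and the runner loop below are hypothetical stand-ins for Kafka's `GroupProtocol` and the test-invocation expansion that `@ParameterizedTest` performs; nothing here is the actual test framework:

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java illustration of running one test body once per group protocol.
// GroupProtocol mimics Kafka's enum; runAll stands in for the per-value
// invocations that @ParameterizedTest would generate.
public class ProtocolMatrixSketch {

    enum GroupProtocol { CLASSIC, CONSUMER }

    static List<String> runAll(String testName) {
        List<String> results = new ArrayList<>();
        for (GroupProtocol protocol : GroupProtocol.values()) {
            // each protocol value yields an independent test invocation
            results.add(testName + "[" + protocol + "]");
        }
        return results;
    }

    public static void main(String[] args) {
        System.out.println(runAll("verifyUncleanLeaderElectionEnabled"));
    }
}
```

Each consumer-using test thus reports two results, one per protocol, rather than a single run pinned to one protocol.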