

CCDAK: Confluent Certified Developer for Apache Kafka Certification Exam Questions and Answers

Question # 4

In Avro, adding an element to an enum without a default is a __ schema evolution

A.

breaking

B.

full

C.

backward

D.

forward

Question # 5

How can you find all the partitions where one or more replicas are not in sync with the leader?

A.

kafka-topics.sh --bootstrap-server localhost:9092 --describe --unavailable-partitions

B.

kafka-topics.sh --zookeeper localhost:2181 --describe --unavailable-partitions

C.

kafka-topics.sh --broker-list localhost:9092 --describe --under-replicated-partitions

D.

kafka-topics.sh --zookeeper localhost:2181 --describe --under-replicated-partitions

Question # 6

We would like an at-most-once consuming scenario. Which offset commit strategy would you recommend?

A.

Commit the offsets on disk, after processing the data

B.

Do not commit any offsets and read from beginning

C.

Commit the offsets in Kafka, after processing the data

D.

Commit the offsets in Kafka, before processing the data

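The distinction hinges on the order of committing versus processing: commit before processing and a crash mid-processing loses records (at-most-once); commit after and a crash causes reprocessing (at-least-once). A toy Python simulation of that trade-off (not the Kafka client API):

```python
def consume(records, commit_before_processing, crash_at=None):
    """Toy consume loop: returns (processed, committed_offset).

    On restart the consumer resumes from committed_offset, so records
    committed but not processed are lost (at-most-once), while records
    processed but not committed are re-read (at-least-once).
    """
    processed = []
    committed = 0
    for i, rec in enumerate(records):
        if commit_before_processing:
            committed = i + 1          # commit first...
        if crash_at == i:
            break                      # ...crash before processing => record lost
        processed.append(rec)
        if not commit_before_processing:
            committed = i + 1          # commit only after successful processing
    return processed, committed
```

Crashing while handling record index 2 with commit-before-processing leaves offset 3 committed, so the third record is skipped on restart; with commit-after-processing, offset 2 is committed and the third record is re-read.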
Question # 7

Select all that apply (select THREE)

A.

min.insync.replicas is a producer setting

B.

acks is a topic setting

C.

acks is a producer setting

D.

min.insync.replicas is a topic setting

E.

min.insync.replicas matters regardless of the values of acks

F.

min.insync.replicas only matters if acks=all

Question # 8

There are 3 brokers in the cluster. You want to create a topic with a single partition that is resilient to one broker failure and one broker maintenance. What replication factor will you specify when creating the topic?

A.

6

B.

3

C.

2

D.

1

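The arithmetic: one broker in maintenance plus one failed broker means two replicas are unavailable at the same time, so at least one more replica is needed for the partition to stay online. A hypothetical helper (not part of any Kafka API) capturing that:

```python
def min_replication_factor(simultaneous_unavailable_brokers: int) -> int:
    """Smallest replication factor that leaves at least one replica
    online when the given number of brokers are down at the same time."""
    return simultaneous_unavailable_brokers + 1

# One broker in maintenance + one broker failed = 2 unavailable at once,
# so the topic needs replication factor 3.
```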
Question # 9

You want to perform table lookups against a KTable every time a new record is received from the KStream. What is the output of a KStream-KTable join?

A.

KTable

B.

GlobalKTable

C.

You choose between KStream or KTable

D.

KStream

Question # 10

You are sending messages with keys to a topic. To increase throughput, you decide to increase the number of partitions of the topic. Select all that apply.

A.

All the existing records will get rebalanced among the partitions to balance load

B.

New records with the same key will get written to the partition where old records with that key were written

C.

New records may get written to a different partition

D.

Old records will stay in their partitions

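The default partitioner maps a key to a deterministic hash modulo the partition count (Kafka uses murmur2), so changing the partition count changes the modulus and existing keys may land on different partitions, while records already written stay where they are. A Python sketch of the idea, substituting CRC-32 for murmur2 purely for illustration:

```python
import zlib

def partition_for(key: str, num_partitions: int) -> int:
    # Stand-in for Kafka's default partitioner: deterministic hash mod
    # partition count. Kafka actually uses murmur2, not CRC-32.
    return zlib.crc32(key.encode("utf-8")) % num_partitions

# For a fixed partition count, the same key always maps to the same
# partition; after increasing the count, the mapping for an existing
# key may move, so key ordering guarantees no longer span the resize.
before = partition_for("user-42", 3)
after = partition_for("user-42", 4)
```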
Question # 11

How often is log compaction evaluated?

A.

Every time a new partition is created

B.

Every time a segment is closed

C.

Every time a message is sent to Kafka

D.

Every time a message is flushed to disk

Question # 12

We have a store selling shoes. What dataset is a great candidate to be modeled as a KTable in Kafka Streams?

A.

Money made until now

B.

The transaction stream

C.

Items returned

D.

Inventory contents right now

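A KTable models latest-value-per-key state materialized from a changelog, which is exactly what "inventory contents right now" is; transactions and returns are event streams (KStreams). A toy Python sketch of the table-as-changelog idea (not the Streams API):

```python
def materialize(changelog):
    """Reduce a stream of (key, value) updates to the latest value per
    key, the way a KTable materializes a changelog topic."""
    table = {}
    for key, value in changelog:
        table[key] = value  # upsert: a newer record replaces the older one
    return table

# Stock updates for two items; the table keeps only the latest counts.
updates = [("sneaker", 10), ("boot", 4), ("sneaker", 7)]
```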
Question # 13

Select all the ways for one consumer to subscribe simultaneously to the following topics: topic.history, topic.sports, topic.politics (select TWO)

A.

consumer.subscribe(Pattern.compile("topic\..*"));

B.

consumer.subscribe("topic.history"); consumer.subscribe("topic.sports"); consumer.subscribe("topic.politics");

C.

consumer.subscribePrefix("topic.");

D.

consumer.subscribe(Arrays.asList("topic.history", "topic.sports", "topic.politics"));

Question # 14

In Kafka Streams, what value are internal topics prefixed by?

A.

tasks-

B.

application.id

C.

group.id

D.

kafka-streams-

Question # 15

How do you create a topic named test with 3 partitions and 3 replicas using the Kafka CLI?

A.

bin/kafka-topics.sh --create --broker-list localhost:9092 --replication-factor 3 --partitions 3 --topic test

B.

bin/kafka-topics-create.sh --zookeeper localhost:9092 --replication-factor 3 --partitions 3 --topic test

C.

bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 3 --partitions 3 --topic test

D.

bin/kafka-topics.sh --create --bootstrap-server localhost:2181 --replication-factor 3 --partitions 3 --topic test

Question # 16

You want to send a message of size 3 MB to a topic with default message size configuration. How does KafkaProducer handle large messages?

A.

KafkaProducer divides messages into sizes of max.request.size and sends them in order

B.

KafkaProducer divides messages into sizes of message.max.bytes and sends them in order

C.

MessageSizeTooLarge exception will be thrown, KafkaProducer will not retry and return exception immediately

D.

MessageSizeTooLarge exception will be thrown, KafkaProducer retries until the number of retries are exhausted

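The producer fails fast here: a record larger than `max.request.size` (default 1 MB, i.e. 1048576 bytes) is rejected immediately with no retries, since retrying cannot make the record smaller. A hypothetical sketch of that eager size check (not the actual producer code):

```python
DEFAULT_MAX_REQUEST_SIZE = 1_048_576  # producer default for max.request.size: 1 MB

def check_record_size(size_bytes: int,
                      max_request_size: int = DEFAULT_MAX_REQUEST_SIZE) -> None:
    """Mirror of the producer's up-front size check: an oversized record
    is rejected immediately and is not retriable."""
    if size_bytes > max_request_size:
        raise ValueError(
            f"record of {size_bytes} bytes exceeds max.request.size "
            f"({max_request_size}); not retriable"
        )

# A 3 MB record against the default limit fails immediately.
```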
Question # 17

In Kafka, every broker... (select three)

A.

contains all the topics and all the partitions

B.

knows all the metadata for all topics and partitions

C.

is a controller

D.

knows the metadata for the topics and partitions it has on its disk

E.

is a bootstrap broker

F.

contains only a subset of the topics and the partitions

Question # 18

An ecommerce website maintains two topics: a high-volume "purchase" topic with 5 partitions and a low-volume "customer" topic with 3 partitions. You would like to do a stream-table join of these topics. How should you proceed?

A.

Repartition the purchase topic to have 3 partitions

B.

Repartition customer topic to have 5 partitions

C.

Model customer as a GlobalKTable

D.

Do a KStream / KTable join after a repartition step

Question # 19

A Kafka topic has a replication factor of 3 and a min.insync.replicas setting of 2. How many brokers can go down before a producer with acks=all can't produce?

A.

0

B.

2

C.

1

D.

3

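With acks=all, a produce succeeds only while at least `min.insync.replicas` replicas are in sync, so the number of tolerable broker failures is `replication.factor - min.insync.replicas`. A hypothetical helper capturing that arithmetic:

```python
def tolerable_broker_failures(replication_factor: int,
                              min_insync_replicas: int) -> int:
    """How many brokers holding replicas can be down before a producer
    with acks=all starts getting not-enough-replicas errors."""
    return replication_factor - min_insync_replicas

# replication.factor=3, min.insync.replicas=2 => one broker can be down.
```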
Question # 20

To allow consumers in a group to resume at the previously committed offset, I need to set the proper value for...

A.

value.deserializer

B.

auto.offset.reset

C.

group.id

D.

enable.auto.commit

Question # 21

The exactly-once guarantee in Kafka Streams is for which flow of data?

A.

Kafka => Kafka

B.

Kafka => External

C.

External => Kafka

Question # 22

In Avro, adding a field to a record without a default is a __ schema evolution

A.

forward

B.

backward

C.

full

D.

breaking
