
CCDAK Confluent Certified Developer for Apache Kafka Certification Exam Questions and Answers

Question # 4

The producer code below features a Callback class with a method called onCompletion().
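
The code referenced by the question is not reproduced in this dump. A minimal sketch of a producer using such a callback (broker address, topic name, and serializers are illustrative assumptions):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerWithCallback {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The Callback's onCompletion() is invoked once the request completes.
            producer.send(new ProducerRecord<>("demo-topic", "key", "value"),
                    (metadata, exception) -> {
                        if (exception == null) {
                            // metadata describes where the record was written.
                            System.out.printf("partition=%d offset=%d%n",
                                    metadata.partition(), metadata.offset());
                        } else {
                            exception.printStackTrace();
                        }
                    });
        }
    }
}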

In the onCompletion() method, when the request is completed successfully, what does the value metadata.offset() represent?

A.

The sequential ID of the message committed into a partition

B.

Its position in the producer’s batch of messages

C.

The number of bytes that overflowed beyond a producer batch of messages

D.

The ID of the partition to which the message was committed

Question # 5

You are composing a REST request to create a new connector in a running Connect cluster. You invoke POST /connectors with a configuration and receive a 409 (Conflict) response.

What are two reasons for this response? (Select two.)

A.

The connector configuration was invalid, and the response body will expand on the configuration error.

B.

The Connect cluster has reached capacity, and new connectors cannot be created without expanding the cluster.

C.

The connector already exists in the cluster.

D.

The Connect cluster is in the process of rebalancing.
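
For context, a sketch of such a REST call using Java's built-in HTTP client; the worker URL and the FileStreamSource connector configuration are illustrative assumptions:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateConnector {
    public static void main(String[] args) throws Exception {
        // Illustrative connector config; a real request would carry your own.
        String body = "{\"name\":\"file-source\",\"config\":{"
                + "\"connector.class\":\"org.apache.kafka.connect.file.FileStreamSourceConnector\","
                + "\"tasks.max\":\"1\",\"file\":\"/tmp/input.txt\",\"topic\":\"demo-topic\"}}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8083/connectors"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // 201 Created on success; 409 Conflict is documented for a name that is
        // already taken or a cluster that is currently rebalancing.
        System.out.println(response.statusCode() + " " + response.body());
    }
}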

Question # 6

You are designing a stream pipeline to monitor the real-time location of GPS trackers, where historical location data is not required.

Each event has:

• Key: trackerId

• Value: latitude, longitude

You need to ensure that the latest location for each tracker is always retained in the Kafka topic.

Which topic configuration parameter should you set?

A.

cleanup.policy=compact

B.

retention.ms=infinite

C.

min.cleanable.dirty.ratio=-1

D.

retention.ms=0
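
For reference, a minimal AdminClient sketch that sets this parameter at topic creation; the topic name, partition count, and replication factor are illustrative assumptions:

import java.util.Map;
import java.util.Set;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public class CreateCompactedTopic {
    public static void main(String[] args) throws Exception {
        try (AdminClient admin = AdminClient.create(
                Map.of("bootstrap.servers", "localhost:9092"))) {
            // With cleanup.policy=compact, the log cleaner retains at least the
            // latest value per key (trackerId), discarding older locations.
            NewTopic topic = new NewTopic("tracker-locations", 6, (short) 3)
                    .configs(Map.of(TopicConfig.CLEANUP_POLICY_CONFIG,
                                    TopicConfig.CLEANUP_POLICY_COMPACT));
            admin.createTopics(Set.of(topic)).all().get();
        }
    }
}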

Question # 7

Which tool can you use to modify the replication factor of an existing topic?

A.

kafka-reassign-partitions.sh

B.

kafka-recreate-topic.sh

C.

kafka-topics.sh

D.

kafka-reassign-topics.sh

Question # 8

Your application consumes from a topic configured with a deserializer.

You want the application to be resilient to badly formatted records (poison pills).

You surround the poll() call with a try/catch block for RecordDeserializationException.

You need to log the bad record, skip it, and continue processing other records.

Which action should you take in the catch block?

A.

Log the bad record and seek the consumer to the offset of the next record.

B.

Log the bad record and call consumer.skip() method.

C.

Throw a runtime exception to trigger a restart of the application.

D.

Log the bad record; no other action is needed.
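
A minimal sketch of such a poll loop; the consumer is assumed to be already configured and subscribed, and printing stands in for real logging and processing:

import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.RecordDeserializationException;

public class PoisonPillSkipper {
    // Polls forever, logging and skipping any record that fails deserialization.
    static void pollLoop(KafkaConsumer<String, String> consumer) {
        while (true) {
            try {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value()); // stand-in for real processing
                }
            } catch (RecordDeserializationException e) {
                System.err.printf("Bad record on %s at offset %d%n",
                        e.topicPartition(), e.offset());
                // Seek past the poison pill so the next poll() resumes after it.
                consumer.seek(e.topicPartition(), e.offset() + 1);
            }
        }
    }
}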

Question # 9

You need to send a JSON message on the wire. The message key is a string.

How would you do this?

A.

Specify a key serializer class for the JSON contents of the message’s value. Set the value serializer class to null.

B.

Specify a value serializer class for the JSON contents of the message’s value. Set a key serializer for the string value.

C.

Specify a value serializer class for the JSON contents of the message’s value. Set the key serializer class to null.

D.

Specify a value serializer class for the JSON contents of the message’s value. Set the key serializer class to JSON.
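
For reference, a hedged sketch of the corresponding producer configuration; the broker address is illustrative, and Confluent's KafkaJsonSerializer is assumed to be on the classpath for the value side:

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class JsonProducerProps {
    public static Properties props() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // String key -> StringSerializer; JSON value -> a JSON-capable serializer.
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "io.confluent.kafka.serializers.KafkaJsonSerializer");
        return props;
    }
}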

Question # 10

You are writing to a topic with acks=all.

The producer receives acknowledgments but you notice duplicate messages.

You find that timeouts due to network delay are causing resends.

Which configuration should you use to prevent duplicates?

A.

enable.auto.commit=true

B.

retries=2147483647
max.in.flight.requests.per.connection=5
enable.idempotence=true

C.

retries=0
max.in.flight.requests.per.connection=5
enable.idempotence=true

D.

retries=2147483647
max.in.flight.requests.per.connection=1
enable.idempotence=false
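
A sketch of an idempotent producer configuration along these lines; the broker address is an illustrative assumption:

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class IdempotentProducerConfig {
    public static Properties props() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        // Idempotence assigns sequence numbers so the broker can discard
        // duplicate resends caused by retries after timeouts.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
        // Ordering is preserved for idempotent producers with up to 5 in flight.
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 5);
        return props;
    }
}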

Question # 11

A stream processing application is tracking user activity in online shopping carts.

You want to identify periods of user inactivity.

Which type of Kafka Streams window should you use?

A.

Sliding

B.

Tumbling

C.

Hopping

D.

Session
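
A minimal Kafka Streams sketch of a session window, which closes after a gap of inactivity per key; the topic name and the 5-minute gap are illustrative assumptions:

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.SessionWindows;

public class CartSessions {
    public static StreamsBuilder topology() {
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("cart-events", Consumed.with(Serdes.String(), Serdes.String()))
               .groupByKey()
               // A session for a user ends after 5 minutes with no activity,
               // which is exactly how periods of inactivity are detected.
               .windowedBy(SessionWindows.ofInactivityGapWithNoGrace(Duration.ofMinutes(5)))
               .count();
        return builder;
    }
}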

Question # 12

You are implementing a Kafka Streams application to process financial transactions.

Each transaction must be processed exactly once to ensure accuracy.

The application reads from an input topic, performs computations, and writes results to an output topic.

During testing, you notice duplicate entries in the output topic, which violates the exactly-once processing requirement.

You need to ensure exactly-once semantics (EOS) for this Kafka Streams application.

Which step should you take?

A.

Enable compaction on the output topic to handle duplicates.

B.

Set enable.idempotence=true in the internal producer configuration of the Kafka Streams application.

C.

Set enable.exactly_once=true in the Kafka Streams configuration.

D.

Set processing.guarantee=exactly_once_v2 in the Kafka Streams configuration.
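
A sketch of the relevant Streams configuration; the application ID and broker address are illustrative assumptions:

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class EosStreamsConfig {
    public static Properties props() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "txn-processor");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Enables transactions and idempotence for all producers and consumers
        // that Kafka Streams manages internally.
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);
        return props;
    }
}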

Question # 13

You need to set alerts on key broker metrics to trigger notifications when the cluster is unhealthy.

Which are three minimum broker metrics to monitor?

(Select three.)

A.

kafka.controller:type=KafkaController,name=TopicsToDeleteCount

B.

kafka.controller:type=KafkaController,name=OfflinePartitionsCount

C.

kafka.controller:type=KafkaController,name=ActiveControllerCount

D.

kafka.controller:type=ControllerStats,name=UncleanLeaderElectionsPerSec

E.

kafka.controller:type=KafkaController,name=LastCommittedRecordOffset

Question # 14

You need to configure a sink connector to write records that fail into a dead letter queue topic. Requirements:

• Topic name: DLQ-Topic

• Headers containing error context must be added to the messages

Which three configuration parameters are necessary? (Select three.)

A.

errors.tolerance=all

B.

errors.deadletterqueue.topic.name=DLQ-Topic

C.

errors.deadletterqueue.context.headers.enable=true

D.

errors.tolerance=none

E.

errors.log.enable=true

F.

errors.log.include.messages=true

Question # 15

Your configuration parameters for a Source connector and Connect worker are:

• offset.flush.interval.ms=60000

• offset.flush.timeout.ms=500

• offset.storage.topic=connect-offsets

• offset.storage.replication.factor=-1

Which two statements match the expected behavior?

(Select two.)

A.

The offsets topic will use the broker default replication factor.

B.

The connector will commit offsets to the broker default offsets topic.

C.

The connector will commit offsets to a topic called connect-offsets.

D.

The connector will wait 500 ms before trying to commit offsets for tasks.

Question # 16

Match the testing tool with the type of test it is typically used to perform.

Question # 17

A consumer application runs once every two weeks and reads from a Kafka topic.

The last time the application ran, the last offset processed was 217.

The application is configured with auto.offset.reset=latest.

The current offsets in the topic start at 318 and end at 588.

Which offset will the application start reading from when it starts up for its next run?

A.

0

B.

218

C.

318

D.

589

Question # 18

You want to enrich the content of a topic by joining it with keyed records from a second topic.

The two topics have a different number of partitions.

Which two solutions can you use?

(Select two.)

A.

Use a GlobalKTable for one of the topics where data does not change frequently and use a KStream–GlobalKTable join.

B.

Repartition one topic to a new topic with the same number of partitions as the other topic (co-partitioning constraint) and use a KStream–KTable join.

C.

Create as many Kafka Streams application instances as the maximum number of partitions of the two topics and use a KStream–KTable join.

D.

Use a KStream–KTable join; Kafka Streams will automatically repartition the topics to satisfy the co-partitioning constraint.
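
A minimal sketch of the GlobalKTable approach; topic names and the join logic are illustrative assumptions, and default serdes are assumed to be configured:

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;

public class EnrichmentTopology {
    public static StreamsBuilder topology() {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> events = builder.stream("events");
        // A GlobalKTable is fully replicated to every instance, so the two
        // topics do not need the same partition count (no co-partitioning).
        GlobalKTable<String, String> reference = builder.globalTable("reference-data");
        events.join(reference,
                    (eventKey, eventValue) -> eventKey,            // derive lookup key
                    (eventValue, refValue) -> eventValue + "|" + refValue)
              .to("enriched-events");
        return builder;
    }
}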

Question # 19

An S3 source connector named s3-connector stopped running.

You use the Kafka Connect REST API to query the connector and task status.

One of the three tasks has failed.

You need to restart the connector and all currently running tasks.

Which REST request will restart the connector instance and all its tasks?

A.

POST /connectors/s3-connector/restart?includeTasks=true

B.

POST /connectors/s3-connector/restart?includeTasks=true&onlyFailed=true

C.

POST /connectors/s3-connector/restart

D.

POST /connectors/s3-connector/tasks/0/restart

Question # 20

Which two statements are correct about transactions in Kafka?

(Select two.)

A.

All messages from a failed transaction will be deleted from a Kafka topic.

B.

Transactions are only possible when writing messages to a topic with a single partition.

C.

Consumers can consume both committed and uncommitted transactions.

D.

Information about producers and their transactions is stored in the __transaction_state topic.

E.

Transactions guarantee at least once delivery of messages.
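
For reference, a sketch of a transactional producer; the transactional ID, topic, and records are illustrative assumptions:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.serialization.StringSerializer;

public class TransactionalSend {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "payments-producer-1");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions(); // registers with the transaction coordinator
            producer.beginTransaction();
            try {
                producer.send(new ProducerRecord<>("payments", "acct-42", "debit:10"));
                producer.send(new ProducerRecord<>("payments", "acct-42", "credit:10"));
                producer.commitTransaction();
            } catch (KafkaException e) {
                // Aborted messages remain in the log but are filtered out for
                // consumers running with isolation.level=read_committed.
                producer.abortTransaction();
            }
        }
    }
}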

Question # 21

This schema excerpt is an example of which schema format?

package com.mycorp.mynamespace;

message SampleRecord {
    int32 Stock = 1;
    double Price = 2;
    string Product_Name = 3;
}

A.

Avro

B.

Protobuf

C.

JSON Schema

D.

YAML

Question # 22

You create a producer that writes messages about bank account transactions from tens of thousands of different customers into a topic.

Your consumers must process these messages with low latency and minimize consumer lag.

Processing takes ~6x longer than producing.

Transactions for each bank account must be processed in order.

Which strategy should you use?

A.

Use the timestamp of the message's arrival as its key.

B.

Use the bank account number found in the message as the message key.

C.

Use a combination of the bank account number and the transaction timestamp as the message key.

D.

Use a unique identifier such as a universally unique identifier (UUID) as the message key.
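
A minimal sketch of keying by account number; the topic name and value format are illustrative assumptions:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AccountKeyedSend {
    // Records with the same key always land on the same partition, so keying
    // by account number preserves per-account ordering while still spreading
    // tens of thousands of accounts across all partitions for parallelism.
    static void send(KafkaProducer<String, String> producer,
                     String accountNumber, String transactionJson) {
        producer.send(new ProducerRecord<>("transactions", accountNumber, transactionJson));
    }
}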

Question # 23

Which configuration is valid for deploying a JDBC Source Connector to read all rows from the orders table and write them to the dbl-orders topic?

A.

{"name": "orders-connect","connector.class": "io.confluent.connect.jdbc.DdbcSourceConnector","tasks.max": "1","connection.url": "jdbc:mysql://mysql:3306/dbl","topic.whitelist": "orders","auto.create": "true"}

B.

{"name": "dbl-orders","connector.class": "io.confluent.connect.jdbc.DdbcSourceConnector","tasks.max": "1","connection.url": "jdbc:mysql://mysql:3306/dbl?user=user&password=pas","topic.prefix": "dbl-","table.blacklist": "ord*"}

C.

{"name": "jdbc-source","connector.class": "io.confluent.connect.jdbc.DdbcSourceConnector","tasks.max": "1","connection.url": "jdbc:mysql://mysql:3306/dbl?user=user&useAutoAuth=true","topic.prefix": "dbl-","table.whitelist": "orders"}

D.

{"name": "jdbc-source","connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector","tasks.max": "1","connection.url": "jdbc:mysql://mysql:3306/dbl?user=user&password=pas","topic.prefix": "dbl-","table.whitelist": "orders"}

Question # 24

You have a Kafka Connect cluster with multiple connectors.

One connector is not working as expected.

How can you find logs related to that specific connector?

A.

Modify the log4j.properties file to enable connector context.

B.

Modify the log4j.properties file to add a dedicated log appender for the connector.

C.

Change the log level to DEBUG to have connector context information in logs.

D.

Make no change; there is no way to find logs other than by stopping all the other connectors.

Question # 25

You are creating a Kafka Streams application to process retail data.

Match the input data streams with the appropriate Kafka Streams object.

Question # 26

You deploy a Kafka Streams application with five application instances.

Kafka Streams stores application metadata using internal topics.

Auto-topic creation is disabled in the Kafka cluster.

Which statement about this scenario is true?

A.

The application will continue to work and internal topics will be created, even if auto-topic creation is disabled.

B.

The application will terminate with a non-retriable exception.

C.

The application will work, but application metadata will not be stored.

D.

The application will be on hold until internal topics are created manually.

Question # 27

You have a topic with four partitions. The application reads from it using two consumers in a single consumer group.

Processing is CPU-bound, and lag is increasing.

What should you do?

A.

Add more consumers to increase the level of parallelism of the processing.

B.

Add more partitions to the topic to increase the level of parallelism of the processing.

C.

Increase the max.poll.records property of consumers.

D.

Decrease the max.poll.records property of consumers.
