The producer code below includes a Callback implementation with an onCompletion() method.
In onCompletion(), when the request completes successfully, what does the value returned by metadata.offset() represent?
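The referenced producer code is not reproduced here; as a sketch, a callback of that shape typically looks like this (topic, key, and value are placeholders):

    // Assumes imports from org.apache.kafka.clients.producer
    // and an existing KafkaProducer<String, String> named producer.
    producer.send(new ProducerRecord<>("payments", "key", "value"), new Callback() {
        @Override
        public void onCompletion(RecordMetadata metadata, Exception exception) {
            if (exception == null) {
                // On success, metadata carries the record's topic, partition, and offset.
                System.out.printf("partition=%d offset=%d%n",
                        metadata.partition(), metadata.offset());
            } else {
                exception.printStackTrace();
            }
        }
    });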
You are composing a REST request to create a new connector in a running Connect cluster. You invoke POST /connectors with a configuration and receive a 409 (Conflict) response.
What are two reasons for this response? (Select two.)
You are designing a stream pipeline to monitor the real-time location of GPS trackers, where historical location data is not required.
Each event has:
• Key: trackerId
• Value: latitude, longitude
You need to ensure that the latest location for each tracker is always retained in the Kafka topic.
Which topic configuration parameter should you set?
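For context, keeping only the latest value per key is the job of log compaction; a minimal sketch of creating such a topic with the AdminClient (topic name and sizing are hypothetical):

    // Assumes imports from org.apache.kafka.clients.admin,
    // org.apache.kafka.common.config.TopicConfig, and java.util;
    // adminProps is an existing Properties with bootstrap.servers set.
    NewTopic topic = new NewTopic("tracker-locations", 6, (short) 3)
            .configs(Map.of(TopicConfig.CLEANUP_POLICY_CONFIG,
                            TopicConfig.CLEANUP_POLICY_COMPACT));
    try (AdminClient admin = AdminClient.create(adminProps)) {
        admin.createTopics(List.of(topic)).all().get();
    }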
Which tool can you use to modify the replication factor of an existing topic?
Your application consumes from a topic configured with a deserializer.
You want the application to be resilient to badly formatted records (poison pills).
You surround the poll() call with a try/catch block for RecordDeserializationException.
You need to log the bad record, skip it, and continue processing other records.
Which action should you take in the catch block?
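As a sketch, the usual shape of that catch block is to log, seek one past the failing offset, and poll again (logger and record handling are placeholders):

    // Assumes an existing KafkaConsumer<String, String> named consumer and
    // org.apache.kafka.common.errors.RecordDeserializationException imported.
    try {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
        records.forEach(record -> process(record)); // placeholder processing
    } catch (RecordDeserializationException e) {
        // Log the poison pill, then skip past it so the next poll() resumes cleanly.
        log.warn("Skipping bad record on {} at offset {}", e.topicPartition(), e.offset(), e);
        consumer.seek(e.topicPartition(), e.offset() + 1);
    }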
You need to send a JSON message on the wire. The message key is a string.
How would you do this?
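One way to do this is a String serializer for the key plus a JSON payload for the value; the sketch below pre-serializes the JSON to a String (a schema-aware JSON serializer is an alternative; topic, key, and payload are placeholders):

    // Assumes props already contains bootstrap.servers.
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    KafkaProducer<String, String> producer = new KafkaProducer<>(props);
    String json = "{\"latitude\":48.85,\"longitude\":2.35}";
    producer.send(new ProducerRecord<>("locations", "tracker-42", json));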
You are writing to a topic with acks=all.
The producer receives acknowledgments, but you notice duplicate messages.
You find that timeouts caused by network delay are making the producer resend messages.
Which configuration should you use to prevent duplicates?
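For reference, duplicate-free retries come from the idempotent producer; a minimal configuration sketch:

    props.put(ProducerConfig.ACKS_CONFIG, "all");
    props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
    // Idempotence also requires retries > 0 and
    // max.in.flight.requests.per.connection <= 5; recent client defaults satisfy both.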
A stream processing application is tracking user activity in online shopping carts.
You want to identify periods of user inactivity.
Which type of Kafka Streams window should you use?
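For context, gaps of inactivity are what session windows model; a sketch with a hypothetical topic and a 30-minute inactivity gap:

    KStream<String, String> cartEvents = builder.stream("cart-events");
    cartEvents.groupByKey()
              .windowedBy(SessionWindows.ofInactivityGapWithNoGrace(Duration.ofMinutes(30)))
              .count();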
You are implementing a Kafka Streams application to process financial transactions.
Each transaction must be processed exactly once to ensure accuracy.
The application reads from an input topic, performs computations, and writes results to an output topic.
During testing, you notice duplicate entries in the output topic, which violates the exactly-once processing requirement.
You need to ensure exactly-once semantics (EOS) for this Kafka Streams application.
Which step should you take?
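As a sketch, exactly-once processing in Kafka Streams is switched on through the processing guarantee (on recent clients; consumers of the output topic should also read committed data only):

    props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);
    // Downstream consumers: isolation.level=read_committed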
You need to set alerts on key broker metrics to trigger notifications when the cluster is unhealthy.
Which three broker metrics, at a minimum, should you monitor? (Select three.)
You need to configure a sink connector to write records that fail into a dead letter queue topic. Requirements:
• Topic name: DLQ-Topic
• Headers containing error context must be added to the messages
Which three configuration parameters are necessary? (Select three.)
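For reference, the Kafka Connect dead-letter-queue settings that match these requirements are typically along these lines:

    errors.tolerance=all
    errors.deadletterqueue.topic.name=DLQ-Topic
    errors.deadletterqueue.context.headers.enable=true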
Your configuration parameters for a source connector and Connect worker are:
• offset.flush.interval.ms=60000
• offset.flush.timeout.ms=500
• offset.storage.topic=connect-offsets
• offset.storage.replication.factor=-1
Which two statements match the expected behavior? (Select two.)
Match the testing tool with the type of test it is typically used to perform.
A consumer application runs once every two weeks and reads from a Kafka topic.
The last time the application ran, the last offset processed was 217.
The application is configured with auto.offset.reset=latest.
The current offsets in the topic start at 318 and end at 588.
Which offset will the application start reading from when it starts up for its next run?
You want to enrich the content of a topic by joining it with key records from a second topic.
The two topics have a different number of partitions.
Which two solutions can you use? (Select two.)
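One of the usual options is a GlobalKTable join, since it does not require co-partitioning (another is repartitioning one topic so the partition counts match); a sketch with hypothetical topics:

    KStream<String, String> events = builder.stream("events");
    GlobalKTable<String, String> lookup = builder.globalTable("lookup-data");
    events.join(lookup,
            (key, value) -> key,                      // maps each record to a lookup-table key
            (value, enrichment) -> value + "|" + enrichment);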
An S3 source connector named s3-connector stopped running.
You use the Kafka Connect REST API to query the connector and task status.
One of the three tasks has failed.
You need to restart the connector and all currently running tasks.
Which REST request will restart the connector instance and all its tasks?
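For reference, the Connect restart endpoint (Kafka 3.0+) takes includeTasks and onlyFailed query parameters; a request of this shape restarts the connector instance together with its tasks:

    POST /connectors/s3-connector/restart?includeTasks=true&onlyFailed=false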
Which two statements are correct about transactions in Kafka? (Select two.)
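For context, a minimal sketch of the transactional producer API (transactional id and topic are placeholders); consumers see these records only with isolation.level=read_committed:

    props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "txn-producer-1");
    KafkaProducer<String, String> producer = new KafkaProducer<>(props);
    producer.initTransactions();
    producer.beginTransaction();
    producer.send(new ProducerRecord<>("output", "key", "value"));
    producer.commitTransaction(); // abortTransaction() on failure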
This schema excerpt is an example of which schema format?

package com.mycorp.mynamespace;
message SampleRecord {
  int32 Stock = 1;
  double Price = 2;
  string Product_Name = 3;
}
You create a producer that writes messages about bank account transactions from tens of thousands of different customers into a topic. Requirements:
• Consumers must process these messages with low latency and minimize consumer lag.
• Processing takes ~6x longer than producing.
• Transactions for each bank account must be processed in order.
Which strategy should you use?
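As a sketch, keying each record by account ID keeps per-account ordering while letting the consumer group scale out to one consumer per partition (topic and variable names are placeholders):

    // Records with the same key always land on the same partition,
    // so each account's transactions stay in order.
    producer.send(new ProducerRecord<>("transactions", accountId, serializedTxn));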
Which configuration is valid for deploying a JDBC Source Connector to read all rows from the orders table and write them to the dbl-orders topic?
You have a Kafka Connect cluster with multiple connectors.
One connector is not working as expected.
How can you find logs related to that specific connector?
You are creating a Kafka Streams application to process retail data.
Match the input data streams with the appropriate Kafka Streams object.
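For context, a sketch of how the three Streams abstractions are constructed (topic names are hypothetical):

    StreamsBuilder builder = new StreamsBuilder();
    KStream<String, String> sales = builder.stream("sales");              // every event matters
    KTable<String, String> inventory = builder.table("inventory");        // latest value per key
    GlobalKTable<String, String> stores = builder.globalTable("stores");  // small, fully replicated lookup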
You deploy a Kafka Streams application with five application instances.
Kafka Streams stores application metadata using internal topics.
Auto-topic creation is disabled in the Kafka cluster.
Which statement about this scenario is true?
You have a topic with four partitions. The application reads from it using two consumers in a single consumer group.
Processing is CPU-bound, and lag is increasing.
What should you do?