
CCDAK Exam Information and Outline
Confluent Certified Developer for Apache Kafka
CCDAK Exam Syllabus & Study Guide
Before you start practicing with our exam simulator, it is essential to understand the official CCDAK exam objectives. This course outline serves as your roadmap, breaking down exactly which technical domains and skills will be tested. By reviewing the syllabus, you can identify your strengths and focus your study time on the areas where you need the most improvement.
The information below reflects the latest 2026 course contents as defined by Confluent. We provide this detailed breakdown to help you align your preparation with the actual exam format, ensuring there are no surprises on test day. Use this outline as a checklist to track your progress as you move through our practice question banks.
The complete topic list below follows the latest syllabus and course outline, giving you a clear view of the exam objectives and the topics you need to prepare. These topics are covered in our exam question-and-answer pool.
Exam Code: CCDAK
Exam Name: Confluent Certified Developer for Apache Kafka (CCDAK)
Number of questions: ~60 questions (multiple-choice and scenario-based)
Exam duration: 90 minutes
Passing score: ~70-75%
Format: Multiple choice (and possibly multiple response/scenario questions)
Proctoring: Remote proctor or testing center (webcam required)
Introductory Concepts
- Write code to connect to a Kafka cluster
- Distinguish between leaders and followers and work with replicas
- Explain what a segment is and explore retention
- Use the CLI to work with topics, producers, and consumers
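To make the segment and retention objectives concrete, here is a minimal sketch of how a partition log rolls into segments and how retention removes whole segments. This is an illustrative model, not broker code; the names `SEGMENT_SIZE`, `append`, and `apply_retention` are invented for this sketch (real brokers roll segments by bytes and time, not record count).

```python
# Illustrative model: a partition log is a list of segments, and
# retention deletes only whole, closed segments.

SEGMENT_SIZE = 3  # records per segment (real brokers roll by bytes/time)

def append(segments, record):
    """Append a record, rolling a new segment when the active one is full."""
    if not segments or len(segments[-1]) >= SEGMENT_SIZE:
        segments.append([])          # roll a new active segment
    segments[-1].append(record)
    return segments

def apply_retention(segments, min_offset):
    """Drop closed segments whose newest record is below the cutoff;
    the active (last) segment is never deleted."""
    kept = [s for s in segments[:-1] if s[-1][0] >= min_offset]
    return kept + segments[-1:]

log = []
for offset in range(7):
    append(log, (offset, f"value-{offset}"))

# 7 records roll into segments [0,1,2], [3,4,5], [6]
assert [len(s) for s in log] == [3, 3, 1]

# Retaining offsets >= 3 removes only the first whole segment
log = apply_retention(log, 3)
assert log[0][0][0] == 3
```

The key takeaway for the exam: retention is segment-granular, which is why a record can outlive its retention period until its whole segment ages out.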
Working with Producers
- Describe the work a producer performs, and the core components needed to produce messages
- Create producers and specify configuration properties
- Explain how to configure producers to know that Kafka receives messages
- Delve into how batching works and explore batching configurations
- Explore reacting to failed delivery and tuning producers with timeouts
- Use the APIs for Java, C#/.NET, or Python to create a Producer
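The batching objective is easiest to reason about with a toy model of the producer's record accumulator. This is a local simulation, not the real client: `BATCH_SIZE` stands in for `batch.size` (which is actually in bytes) and `tick()` is an invented way to advance time so the `linger.ms` expiry is visible.

```python
# Sketch of how batch.size and linger.ms shape producer batching.

BATCH_SIZE = 3      # records per batch (real batch.size is in bytes)
LINGER_MS = 10      # wait at most this long for a batch to fill

class Accumulator:
    def __init__(self):
        self.batch, self.batch_age_ms, self.sent = [], 0, []

    def send(self, record):
        self.batch.append(record)
        if len(self.batch) >= BATCH_SIZE:   # full batch: send immediately
            self._flush()

    def tick(self, elapsed_ms):
        """Advance time; linger.ms expiry sends a partial batch."""
        self.batch_age_ms += elapsed_ms
        if self.batch and self.batch_age_ms >= LINGER_MS:
            self._flush()

    def _flush(self):
        self.sent.append(self.batch)
        self.batch, self.batch_age_ms = [], 0

acc = Accumulator()
for r in ["a", "b", "c", "d"]:
    acc.send(r)
acc.tick(10)                     # linger expires for the leftover record
assert acc.sent == [["a", "b", "c"], ["d"]]
```

This captures the trade-off the exam probes: a larger `batch.size` and longer `linger.ms` improve throughput and compression at the cost of latency.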
Consumers, Groups, and Partitions
- Create and manage consumers and their property files
- Illustrate how consumer groups and partitions provide scalability and fault tolerance
- Explore managing consumer offsets
- Tune fetch requests
- Explain how consumer groups are managed and their benefits
- Compare and contrast group management strategies and when you might use each
- Use the API for Java, C#/.NET, or Python to create a Consumer
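How consumer groups spread partitions across members is worth internalizing before the exam. The sketch below simulates round-robin assignment; in reality the group coordinator negotiates this through a configurable assignor (range, round-robin, sticky, cooperative-sticky), and `assign_round_robin` is a name invented for this illustration.

```python
# Simulated round-robin partition assignment within a consumer group.

def assign_round_robin(consumers, partitions):
    """Deal partitions to consumers one at a time, like cards."""
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(sorted(partitions)):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

# 6 partitions, 2 consumers: each gets 3, so adding a consumer adds parallelism
a = assign_round_robin(["c1", "c2"], range(6))
assert a == {"c1": [0, 2, 4], "c2": [1, 3, 5]}

# More consumers than partitions: the extras sit idle
b = assign_round_robin(["c1", "c2", "c3"], range(2))
assert b["c3"] == []
```

The second assertion is the classic exam point: parallelism is capped by the partition count, so consumers beyond that number receive nothing.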
Schemas and the Confluent Schema Registry
- Describe Kafka schemas and how they work
- Write an Avro compatible schema and explore using Protobuf and JSON schemas
- Write schemas that can evolve
- Write and read messages using schema-enabled Kafka client applications
- Using Avro, the API for Java, C#/.NET, or Python, write a schema-enabled producer or consumer that leverages the Confluent Schema Registry
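The schema-evolution objective hinges on one rule: a new optional field with a default keeps the change backward compatible, because a reader on the new schema can fill the missing field when decoding records written under the old schema. The sketch below mimics that Avro resolution rule with plain dicts; real clients delegate this to an Avro serde and the Confluent Schema Registry, and the field names here are invented examples.

```python
# Toy model of backward-compatible schema evolution via field defaults.

NEW_SCHEMA_DEFAULTS = {"email": None}   # new optional field with a default

def read_with_new_schema(record):
    """Apply new-schema defaults to a record written under the old schema."""
    return {**NEW_SCHEMA_DEFAULTS, **record}

old_record = {"id": 42, "name": "alice"}          # written before the change
decoded = read_with_new_schema(old_record)
assert decoded == {"id": 42, "name": "alice", "email": None}
```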
Streaming and Kafka Streams
- Develop an appreciation for what streaming applications can do for you back on the job
- Describe Kafka Streams and explore steams properties and topologies
- Compare and contrast steams and tables, and relate events in streams to records/messages in topics
- Write an application using the Streams DSL (Domain-Specific Language)
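The stream/table comparison reduces to one idea: a table is the latest value per key obtained by replaying a stream of keyed events. Kafka Streams' KTable maintains this over changelog topics; here it is just a dict fold, with `materialize_table` as an invented illustrative name.

```python
# Stream/table duality: fold a stream of (key, value) events
# into a latest-value-per-key table.

def materialize_table(stream):
    table = {}
    for key, value in stream:
        table[key] = value          # later events overwrite earlier ones
    return table

events = [("user1", "login"), ("user2", "login"), ("user1", "logout")]
assert materialize_table(events) == {"user1": "logout", "user2": "login"}
```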
Introduction to Confluent ksqlDB
- Describe how Kafka Streams and ksqlDB relate
- Explore the ksqlDB CLI
- Use ksqlDB to filter and transform data
- Compare and contrast types of ksqlDB queries
- Leverage ksqlDB to perform time-based stream operations
- Write a ksqlDB query that relates data between two streams or a stream and a table
Kafka Connect
- List some of the components of Kafka Connect and describe how they relate
- Set configurations for components of Kafka Connect
- Describe connect integration and how data flows between applications and Kafka
- Explore some use-cases where Kafka Connect makes development efficient
- Use Kafka Connect in conjunction with other tools to process data in motion in the most efficient way
- Create a Connector and import data from a database to a Kafka cluster
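For the database-import objective, a connector is created by posting a JSON configuration to the Connect REST API. The fragment below is a hedged example of a Confluent JDBC source connector config; the connection URL, table, and connector name are placeholder values you would replace with your own.

```json
{
  "name": "jdbc-orders-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "tasks.max": "1",
    "connection.url": "jdbc:postgresql://db.example.com:5432/shop",
    "table.whitelist": "orders",
    "mode": "incrementing",
    "incrementing.column.name": "id",
    "topic.prefix": "db-"
  }
}
```

With `mode` set to `incrementing`, the connector polls the table and imports rows whose `id` exceeds the last value it has seen, writing them to the topic `db-orders`.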
Design Decisions and Considerations
- Delve into how compaction affects consumer offsets
- Explore how consumers work with offsets in scenarios outside of normal processing behavior and understand how to manipulate offsets to deal with anomalies
- Evaluate decisions about consumer and partition counts and how they relate
- Address decisions that arise from default key-based partitioning and consider alternative partitioning strategies
- Configure producers to deliver messages without duplicates and with ordering guarantees
- List ways to manage large message sizes
- Describe how to work with messages in transactions and how Kafka enables transactions
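The partitioning objective above is captured by one line of arithmetic. The real producer hashes keys with murmur2; `hash_key` below is a deliberately toy stand-in used only to show the consequence: the same key always lands on the same partition, until the partition count changes.

```python
# Simplified model of default key-based partitioning.

def hash_key(key):
    return sum(key.encode())        # toy hash, NOT Kafka's murmur2

def partition_for(key, num_partitions):
    return hash_key(key) % num_partitions

# Same key, same partition count -> same partition, so per-key ordering holds
assert partition_for("user1", 6) == partition_for("user1", 6)

# Growing the partition count can remap existing keys, which is why
# adding partitions breaks key-based ordering for old data
before = partition_for("user1", 6)
after = partition_for("user1", 7)
assert before != after              # remapped, at least with this toy hash
```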
Robust Development
- Compare and contrast error handling options with Kafka Connect, including the dead letter queue
- Distinguish between various categories of testing
- List considerations for stress and load testing a Kafka system
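One pattern underlies the unit-test category: keep business logic out of the Kafka plumbing so it can be tested without a broker, which is the same isolation that tools like MockProducer and TopologyTestDriver provide for real clients. The `enrich` function here is an invented example of such extracted logic.

```python
# Unit-test layer: a pure per-record transformation, testable in isolation.

def enrich(record):
    """Pure transformation applied to each consumed record."""
    return {**record, "valid": record.get("amount", 0) > 0}

# Unit test: no broker, no network, just input -> output
assert enrich({"amount": 5}) == {"amount": 5, "valid": True}
assert enrich({"amount": 0})["valid"] is False
```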