## Spring Kafka

This module contains articles about Spring with Kafka.

### Relevant articles
- Intro to Apache Kafka with Spring
- Testing Kafka and Spring Boot
- Monitor the Consumer Lag in Apache Kafka
- Send Large Messages With Kafka
- Configuring Kafka SSL Using Spring Boot
- Kafka Streams With Spring Boot
- Get the Number of Messages in an Apache Kafka Topic
### Intro
This is a simple Spring Boot app that demonstrates sending and receiving messages in Kafka using spring-kafka.

Since Kafka topics are not created automatically by default, this application requires that you create the following topics manually:
```shell
$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic baeldung
$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 5 --topic partitioned
$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic filtered
$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic greeting
```
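Once the topics exist, sending to them from the application typically goes through spring-kafka's `KafkaTemplate`. The snippet below is only a minimal sketch of that idea; the `MessageProducer` class name and the plain `KafkaTemplate<String, String>` wiring are assumptions for illustration, not necessarily this module's exact code:

```java
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

// Hypothetical producer sketch: publishes a plain String message to the 'baeldung' topic.
// Assumes Spring Boot's auto-configured KafkaTemplate with String key/value serializers.
@Component
public class MessageProducer {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public MessageProducer(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void sendMessage(String message) {
        // Fire-and-forget send; the listeners described below consume these records.
        kafkaTemplate.send("baeldung", message);
    }
}
```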
When the application runs successfully, the following output is logged to the console (along with the Spring logs):
Message received from the 'baeldung' topic by the basic listeners with groups foo and bar:

```
Received Message in group 'foo': Hello, World!
Received Message in group 'bar': Hello, World!
```
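A minimal sketch of how two group-based listeners on the same topic are usually declared with spring-kafka; the class and method names here are illustrative rather than the module's actual code. Because the consumer groups differ, each listener receives its own copy of every record:

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

// Illustrative sketch: two consumer groups subscribed to the same topic,
// so every message published to 'baeldung' is delivered to both listeners.
@Component
public class GroupListeners {

    @KafkaListener(topics = "baeldung", groupId = "foo")
    public void listenGroupFoo(String message) {
        System.out.println("Received Message in group 'foo': " + message);
    }

    @KafkaListener(topics = "baeldung", groupId = "bar")
    public void listenGroupBar(String message) {
        System.out.println("Received Message in group 'bar': " + message);
    }
}
```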
Message received from the 'baeldung' topic, with the partition info:

```
Received Message: Hello, World! from partition: 0
```
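The partition number is obtained by asking Spring to inject the corresponding record header into the listener method. A hedged sketch follows; note the header constant is `RECEIVED_PARTITION_ID` in Spring Kafka 2.x, while newer versions rename it to `RECEIVED_PARTITION`:

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.handler.annotation.Header;
import org.springframework.messaging.handler.annotation.Payload;
import org.springframework.stereotype.Component;

// Illustrative sketch: the listener receives the payload plus the partition
// the record was read from, injected via the Kafka record headers.
@Component
public class PartitionAwareListener {

    @KafkaListener(topics = "baeldung", groupId = "headers")
    public void listenWithHeaders(@Payload String message,
            @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition) {
        System.out.println("Received Message: " + message + " from partition: " + partition);
    }
}
```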
Message received from the 'partitioned' topic, only from specific partitions:

```
Received Message: Hello To Partioned Topic! from partition: 0
Received Message: Hello To Partioned Topic! from partition: 3
```
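Restricting a listener to specific partitions is done with the `topicPartitions` attribute of `@KafkaListener`. Again, this is only a sketch with assumed class and group names:

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.annotation.TopicPartition;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.handler.annotation.Header;
import org.springframework.messaging.handler.annotation.Payload;
import org.springframework.stereotype.Component;

// Illustrative sketch: the listener is pinned to partitions 0 and 3 of the
// 'partitioned' topic, so records on the other partitions never reach it.
@Component
public class PartitionedTopicListener {

    @KafkaListener(groupId = "partitions",
            topicPartitions = @TopicPartition(topic = "partitioned", partitions = { "0", "3" }))
    public void listenToPartitions(@Payload String message,
            @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition) {
        System.out.println("Received Message: " + message + " from partition: " + partition);
    }
}
```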
Message received from the 'filtered' topic after filtering:

```
Received Message in filtered listener: Hello Baeldung!
```
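Filtering is typically configured on the listener container factory through a `RecordFilterStrategy`, which discards records before they ever reach the listener. The configuration below is a generic sketch; the bean name, bootstrap address, and filter condition are placeholders rather than this module's exact setup:

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

// Illustrative sketch: a container factory whose filter drops records containing "World",
// so a listener declared with containerFactory = "filterKafkaListenerContainerFactory"
// only ever sees the remaining messages.
@Configuration
public class FilterConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> filterKafkaListenerContainerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "filter");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(props));
        // Returning true from the filter means "discard this record".
        factory.setRecordFilterStrategy(record -> record.value().contains("World"));
        return factory;
    }
}
```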
Message (Serialized Java Object) received from the 'greeting' topic:

```
Received greeting message: Greetings, World!!
```
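Receiving a Java object rather than a plain String requires a value deserializer that can rebuild it; JSON via spring-kafka's `JsonDeserializer` is one common choice. The sketch below assumes a simple `Greeting` POJO (not shown) and a dedicated container factory; the module's actual serialization mechanism may differ:

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.support.serializer.JsonDeserializer;

// Illustrative sketch: a container factory that deserializes record values into a
// Greeting POJO (assumed to exist in the application) using JSON.
@Configuration
public class GreetingConsumerConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, Greeting> greetingKafkaListenerContainerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "greeting");

        ConcurrentKafkaListenerContainerFactory<String, Greeting> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(
                props, new StringDeserializer(), new JsonDeserializer<>(Greeting.class)));
        return factory;
    }
}
```

A matching listener would then reference `containerFactory = "greetingKafkaListenerContainerFactory"` and take a `Greeting` parameter instead of a `String`.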