From the course: Complete Guide to Apache Kafka for Beginners
Case study: Big data ingestion - Kafka Tutorial
So Kafka was historically created to do big data ingestion. In the old days it was very common to have generic connectors take data, put it into Kafka, and then offload it from Kafka into HDFS, Amazon S3, or Elasticsearch, for example. In that case Kafka can serve a double purpose: it can be a speed layer for your real-time applications, while also feeding a slow layer, where applications extract data in a batch manner into data stores that are helpful for analytics, for example HDFS and S3. So Kafka as a front to big data is a very common pattern in the big data world, and it's also used as an ingestion buffer in front of other stores if you need some kind of buffer. So this is the architecture you would strive for: you have your data producers, which could be any kind of data source within your company, sending data into Kafka. From there you would have a speed layer, which could be your Kafka consumers, but also big data frameworks such as Spark…
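The offload step of this pattern (Kafka to S3) is typically handled by Kafka Connect rather than hand-written consumers. As a minimal sketch, assuming the Confluent S3 sink connector is installed on your Connect cluster, and with the connector name, topic, bucket, and region as placeholder values, a sink configuration might look like:

```json
{
  "name": "s3-sink-ingestion",
  "config": {
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "topics": "data-ingest",
    "s3.bucket.name": "my-analytics-bucket",
    "s3.region": "us-east-1",
    "storage.class": "io.confluent.connect.s3.storage.S3Storage",
    "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
    "flush.size": "1000",
    "tasks.max": "2"
  }
}
```

Posting this JSON to the Connect REST API creates the batch "slow layer" side of the architecture, while your real-time consumers keep reading the same topic independently, since each consumer group tracks its own offsets.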
Contents
- Choosing partition count and replication factor (5m 21s)
- Kafka topics naming convention (1m 31s)
- Case study: MovieFlix (5m 10s)
- Case study: GetTaxi (4m 18s)
- Case study: MySocialMedia (5m 32s)
- Case study: MyBank (3m 41s)
- Case study: Big data ingestion (1m 36s)
- Case study: Logging and metrics aggregation (1m 8s)