
Confluent Kafka Pipeline


Overview

This project demonstrates how to build a Confluent Kafka pipeline that collects, transforms, and stores sensor data using Kafka and MongoDB. It provides a step-by-step guide to publishing sensor data to Kafka topics, streaming and transforming the data with Confluent Kafka Streams, and finally consuming and storing the data in a MongoDB database.



High-Level Architecture 🪢📈



How to set up Confluent Kafka

  1. Account Setup
  2. Cluster Setup
  3. Kafka Topic
  4. Obtain the API secrets

To use Confluent Kafka, we need the following details from the Confluent dashboard:

confluentClusterName = ""
confluentBootstrapServers = ""
confluentTopicName = ""
confluentApiKey = ""
confluentSecret = ""
confluentSchemaApiKey = ""
confluentSchemaSecret = ""
endpoint = ""

To consume the Confluent Kafka data, we need the following connection string from MongoDB Atlas:

MONGO_DB_URL = "mongodb+srv://Rohii:<password>@cluster9.fgrr4ct5.mongodb.net/?retryWrites=true&w=majority"
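A minimal sketch of connecting to Atlas with pymongo (the database and collection names here are hypothetical; replace <password> in the URL with the database user's actual password):

from pymongo import MongoClient

client = MongoClient(MONGO_DB_URL)
db = client["sensor_db"]              # hypothetical database name
collection = db["sensor_readings"]    # hypothetical collection name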

Tech Stack Used

  1. Python
  2. Bash
  3. MongoDB

Step 1: Verify that conda is installed

conda --version

Step 2: Create a conda environment

conda create -p venv python=3.10 -y

Step 3: Activate the environment

conda activate venv/

Step 4: Install the dependencies

pip install -r requirements.txt

Step 5: Run the producer

Run producer_main.py to produce data from the data source to the Kafka topics in JSON format.
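
producer_main.py is not reproduced here, but a minimal sketch of such a JSON producer loop, reusing the producer configured earlier (the sample record and callback are illustrative assumptions), might look like this:

import json

def delivery_report(err, msg):
    # Report per-message delivery success or failure.
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()} [partition {msg.partition()}]")

# Hypothetical sensor record; the real project reads these from its data source.
record = {"sensor_id": 1, "temperature": 23.5}

producer.produce(
    confluentTopicName,
    value=json.dumps(record).encode("utf-8"),
    callback=delivery_report,
)
producer.flush()  # block until all queued messages are delivered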

Step 6: Run the consumer

Run consumer_main.py to consume the data from Confluent Kafka and store it in MongoDB in JSON format.
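
consumer_main.py is likewise not reproduced here; a minimal sketch of the consume-and-store loop, reusing the connection details and MongoDB collection from the earlier snippets (the consumer group id is an assumption):

import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": confluentBootstrapServers,
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": confluentApiKey,
    "sasl.password": confluentSecret,
    "group.id": "sensor-consumers",   # hypothetical consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe([confluentTopicName])

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"Consumer error: {msg.error()}")
            continue
        # Decode the JSON payload and persist it to MongoDB.
        record = json.loads(msg.value().decode("utf-8"))
        collection.insert_one(record)
finally:
    consumer.close()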

Contributing

If you'd like to contribute to this project, please follow these guidelines:

  • Fork the repository.
  • Create a new branch for your feature or bug fix.
  • Make your changes and commit them with descriptive messages.
  • Push your branch to your fork.
  • Create a pull request to merge your changes into the main branch of this repository.

License

This project is licensed under the MIT License. Feel free to use, modify, and distribute it as needed.
