Mule Meets Kafka: Best Practices For Data Consumption
Published 3/2024
MP4 | Video: h264, 1920x1080 | Audio: AAC, 44.1 kHz | Language: English | Size: 4.16 GB | Duration: 7h 3m
Discover MuleSoft's capabilities for Kafka and Confluent to consume data in a performant, fault-tolerant and reusable way
What you'll learn
How to implement a performant, fault-tolerant and reusable Kafka data consumption solution using MuleSoft
Gaining significantly better performance by using message batching and parallel processing
Filtering and logging problematic messages without using a dead-letter queue (see the second sketch after this list)
Ensuring consistency when dealing with messages that have to be consumed following the "all or nothing" principle
Populating a target system, using a database as the example
Extracting recurring parts of your implementation into reusable components
Taking special actions, such as stopping the consumption flow, in case of a critical error
Populating a Kafka topic with large volumes of customized mock data using DataWeave capabilities (see the first sketch after this list)
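To give a flavor of the last point, here is a minimal DataWeave sketch for generating mock records; the record count and the field names (id, name, active) are illustrative assumptions, not the course's actual dataset:

    %dw 2.0
    output application/json
    ---
    // Generate 1000 mock records with made-up fields; the resulting
    // array could then be published to a Kafka topic, e.g. one
    // message per record.
    (1 to 1000) map ((n) -> {
        id: n,
        name: "Customer " ++ (n as String),
        active: randomInt(10) > 2
    })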
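And for the point about problematic messages, a minimal DataWeave sketch that separates well-formed records from broken ones, so the latter can be written to an error-log table instead of a dead-letter queue. The required fields (id, customerName) and the payload shape (a JSON array of records) are assumptions for illustration:

    %dw 2.0
    output application/json
    // A record counts as valid here if both required keys are present;
    // real criteria would come from the target system's schema.
    var isValid = (msg) -> msg.id? and msg.customerName?
    ---
    {
        toProcess: payload filter ((msg) -> isValid(msg)),
        toErrorLog: payload
            filter ((msg) -> not (isValid(msg)))
            map ((msg) -> { reason: "missing required fields", original: msg })
    }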
Requirements
Basic understanding of Apache Kafka, Mule API implementation concepts and relational databases
For the hands-on part: a recent Windows or Mac machine with at least 8 GB RAM (16 GB recommended), approximately 20 GB of disk space, a REST client such as Postman, and installations of Docker Desktop and Maven
Description
Are you looking for a way to consume data from Kafka topics quickly, reliably and efficiently? Maybe you have already tried to use MuleSoft for consuming Kafka topic data and struggled with performance issues, unrecoverable errors or high implementation effort? If so, this course is for you.
You will learn about MuleSoft's capabilities that allow you to:
- consume your data in a performant way by using parallelism and data segmentation at multiple levels
- handle errors effectively by classifying an error based on several criteria, such as reproducibility, and triggering appropriate actions
- speed up implementation by creating reusable components that are available across your apps
- ensure data consistency in case of an incomplete or aborted consumption
After this course, you will have a better understanding of which tasks you should pay attention to when implementing a Kafka topic data integration solution and how MuleSoft can help you solve them.
This is a hands-on course that guides you through implementing and testing a complete sample application from scratch on your computer, consuming data from a Kafka topic and populating it into a target system. This also includes hosting a sample Confluent Kafka topic and populating it with mock data.
The capabilities you will learn about are also potentially useful for integrating data from sources other than a Kafka topic.
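To give a flavor of the segmentation idea, here is a minimal DataWeave sketch that splits a consumed batch into fixed-size chunks, each of which could then be handed to a parallel route (for instance Mule's Parallel For Each scope). That the batch arrives as a JSON array in the payload, and the chunk size of 200, are illustrative assumptions:

    %dw 2.0
    // divideBy splits an array into sub-arrays of the given size;
    // each chunk can then be transformed and written independently.
    import divideBy from dw::core::Arrays
    output application/json
    ---
    payload divideBy 200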
Overview
Section 1: Introduction
Lecture 1 Why I made this course
Lecture 2 The overall picture
Lecture 3 A personal message from your instructor
Section 2: Setting up your environment
Lecture 4 Apache Kafka
Lecture 5 MySQL
Lecture 6 MuleSoft
Section 3: Consuming the data
Lecture 7 Implementing the basic consumption process
Lecture 8 Preparing the payload
Lecture 9 Populating the target system
Lecture 10 Handling tombstone messages
Section 4: Error handling
Lecture 11 Overview
Lecture 12 Populating the error log table
Lecture 13 Handling deserialization errors
Lecture 14 Handling System API call errors
Lecture 15 Logging the Correlation ID
Lecture 16 Stopping the consumption flow
Section 5: Reusability
Lecture 17 Overview
Lecture 18 Extracting message deserialization and payload preparation
Lecture 19 Extracting message consumption and error handling
Lecture 20 Congratulations
Who this course is for:
Developers and architects who want to get to know MuleSoft's capabilities for performant, fault-tolerant and reusable data consumption
Developers who want to learn which tasks to pay attention to when implementing a Kafka topic data integration solution and how MuleSoft can help solve them
https://rapidgator.net/file/9f9b48ed11f22ced303a672f9f1998fe/
https://rapidgator.net/file/65584caf47f340bba83a7dfa478a4665/