Flink Forward San Francisco 2022.
Being in the payments space, Stripe requires strict correctness and freshness guarantees, and we rely on Flink as the natural solution for delivering on them in support of our Change Data Capture (CDC) infrastructure. We rely heavily on CDC to capture data change streams from our databases without critically impacting database reliability, scalability, and maintainability. Data derived from these streams is used broadly across the business and powers many of our critical financial reporting systems, totaling over $640 billion in payment volume annually. We use many components of Flink’s flexible DataStream API to perform aggregations and abstract away the complexities of stream processing from our downstream consumers. In this talk, we’ll walk through our experience from the very beginning to what we have in production today, and share stories about the technical details and trade-offs we encountered along the way. (A minimal sketch of the aggregation pattern appears below.)
by
Jeff Chao
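As a rough illustration of the kind of aggregation the talk describes (not Stripe's actual pipeline), here is a minimal Flink DataStream sketch that counts change events per database transaction in event-time windows; the record shape, names, and inline source are hypothetical stand-ins:

    import org.apache.flink.api.common.eventtime.WatermarkStrategy;
    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
    import org.apache.flink.streaming.api.windowing.time.Time;

    public class ChangeEventCounts {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Stand-in source of (transactionId, eventTimeMillis) pairs; a real pipeline
            // would read a CDC topic (e.g. via the Kafka connector) instead.
            DataStream<Tuple2<String, Long>> changeEvents = env
                .fromElements(Tuple2.of("txn-1", 1_000L), Tuple2.of("txn-1", 2_000L), Tuple2.of("txn-2", 3_000L))
                .returns(Types.TUPLE(Types.STRING, Types.LONG));

            changeEvents
                // Derive event time from the change event itself.
                .assignTimestampsAndWatermarks(
                    WatermarkStrategy.<Tuple2<String, Long>>forMonotonousTimestamps()
                        .withTimestampAssigner((event, ts) -> event.f1))
                // Count change events per database transaction in 10-second windows.
                .map(event -> Tuple2.of(event.f0, 1))
                .returns(Types.TUPLE(Types.STRING, Types.INT))
                .keyBy(count -> count.f0)
                .window(TumblingEventTimeWindows.of(Time.seconds(10)))
                .sum(1)
                .print();

            env.execute("change-event-counts");
        }
    }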
Real-Life Use Cases & Architectures for Event Streaming with Apache Kafka (Kai Wähner)
Streaming all over the World: Real-Life Use Cases & Architectures for Event Streaming with Apache Kafka.
Learn about various case studies for event streaming with Apache Kafka across industries. The talk explores architectures for real-world deployments from Audi, BMW, Disney, Generali, Paypal, Tesla, Unity, Walmart, William Hill, and more. Use cases include fraud detection, mainframe offloading, predictive maintenance, cybersecurity, edge computing, track&trace, live betting, and much more.
A brief introduction to Apache Kafka and a description of its use as a platform for streaming data. The talk introduces some of the newer components of Kafka that help make this possible, including Kafka Connect, a framework for capturing continuous data streams, and Kafka Streams, a lightweight stream processing library.
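For a feel of how lightweight Kafka Streams is, here is a minimal sketch of a topology that reads one topic, transforms values, and writes to another; the broker address and topic names are assumptions for illustration:

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;

    public class UppercaseApp {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "demo-app");          // consumer group / state prefix
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed local broker
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            // Read from an input topic, uppercase each value, write to an output topic.
            KStream<String, String> lines = builder.stream("input-topic");
            lines.mapValues(value -> value.toUpperCase()).to("output-topic");

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }
    }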
Delta Lake is an open-source innovation that brings new capabilities for transactions, version control, and indexing to your data lakes. We uncover how Delta Lake benefits you and why it matters. Through this session, we showcase some of its benefits and how they can improve your modern data engineering pipelines. Delta Lake provides snapshot isolation, which helps concurrent read/write operations and enables efficient insert, update, delete, and rollback capabilities. It allows background file optimization through compaction and z-order partitioning, achieving better performance. In this presentation, we will learn about Delta Lake's benefits, how it solves common data lake challenges, and, most importantly, the new Delta Time Travel capability.
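As a taste of the Time Travel capability mentioned above, here is a minimal sketch using Delta Lake's Spark integration from Java; the table path is hypothetical and the Delta dependencies are assumed to be on the classpath:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class DeltaTimeTravel {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                .appName("delta-time-travel")
                // Delta Lake's Spark integration must be on the classpath for these configs.
                .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
                .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
                .getOrCreate();

            String path = "/tmp/events_delta"; // hypothetical table location

            // Current state of the table.
            Dataset<Row> latest = spark.read().format("delta").load(path);

            // Time travel: read the table as of an earlier version (or use "timestampAsOf").
            Dataset<Row> v0 = spark.read().format("delta").option("versionAsOf", 0).load(path);

            System.out.println("rows now: " + latest.count() + ", rows at v0: " + v0.count());
        }
    }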
Building Cloud-Native App Series - Part 3 of 11
Microservices Architecture Series
AWS Kinesis Data Streams
AWS Kinesis Firehose
AWS Kinesis Data Analytics
Apache Flink - Analytics
- Delta Lake is an open source project that provides ACID transactions, schema enforcement, and time travel capabilities to data stored in data lakes such as S3 and ADLS.
- It allows building a "Lakehouse" architecture where the same data can be used for both batch and streaming analytics.
- Key features include ACID transactions, scalable metadata handling, time travel to view past data states, schema enforcement, schema evolution, and change data capture for streaming inserts, updates and deletes.
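To illustrate the change data capture feature in the last bullet, here is a minimal sketch that streams a Delta table's change data feed; the paths are hypothetical and the table is assumed to have been created with 'delta.enableChangeDataFeed' = 'true':

    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.sql.streaming.StreamingQuery;

    public class DeltaChangeFeed {
        public static void main(String[] args) throws Exception {
            SparkSession spark = SparkSession.builder().appName("delta-cdf").getOrCreate();

            // Each change row carries _change_type (insert / update_preimage /
            // update_postimage / delete), _commit_version and _commit_timestamp
            // alongside the table's own columns.
            StreamingQuery query = spark.readStream()
                .format("delta")
                .option("readChangeFeed", "true")
                .option("startingVersion", 1)      // where in the table history to begin
                .load("/tmp/events_delta")         // hypothetical table location
                .writeStream()
                .format("console")
                .option("checkpointLocation", "/tmp/cdf_checkpoint")
                .start();

            query.awaitTermination();
        }
    }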
The Top 5 Apache Kafka Use Cases and Architectures in 2022 (Kai Wähner)
This document discusses the top 5 use cases and architectures for data in motion in 2022. It describes:
1) The Kappa architecture as an alternative to the Lambda architecture that uses a single stream to handle both real-time and batch data.
2) Hyper-personalized omnichannel experiences that integrate customer data from multiple sources in real-time to provide personalized experiences across channels.
3) Multi-cloud deployments using Apache Kafka and data mesh architectures to share data across different cloud platforms.
4) Edge analytics that deploy stream processing and Kafka brokers at the edge to enable low-latency use cases and offline functionality.
5) Real-time cybersecurity applications that use streaming data to detect and respond to threats as they occur.
ksqlDB: A Stream-Relational Database System (confluent)
Speaker: Matthias J. Sax, Software Engineer, Confluent
ksqlDB is a distributed event streaming database system that allows users to express SQL queries over relational tables and event streams. The project was released by Confluent in 2017 and is hosted on GitHub and developed with an open-source spirit. ksqlDB is built on top of Apache Kafka®, a distributed event streaming platform. In this talk, we discuss ksqlDB’s architecture, which is influenced by Apache Kafka and its stream processing library, Kafka Streams. We explain how ksqlDB executes continuous queries while achieving fault tolerance and high availability. Furthermore, we explore ksqlDB’s streaming SQL dialect and the different types of supported queries. (A short query sketch appears below.)
Matthias J. Sax is a software engineer at Confluent working on ksqlDB. He mainly contributes to Kafka Streams, Apache Kafka's stream processing library, which serves as ksqlDB's execution engine. Furthermore, he helps evolve ksqlDB's "streaming SQL" language. In the past, Matthias also contributed to Apache Flink and Apache Storm and he is an Apache committer and PMC member. Matthias holds a Ph.D. from Humboldt University of Berlin, where he studied distributed data stream processing systems.
https://db.cs.cmu.edu/events/quarantine-db-talk-2020-confluent-ksqldb-a-stream-relational-database-system/
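As a rough sketch of what a continuous query looks like from the ksqlDB Java client (the host, port, and the clickstream source are assumptions for illustration, not from the talk):

    import io.confluent.ksql.api.client.Client;
    import io.confluent.ksql.api.client.ClientOptions;
    import io.confluent.ksql.api.client.Row;
    import io.confluent.ksql.api.client.StreamedQueryResult;

    public class KsqlDbSketch {
        public static void main(String[] args) throws Exception {
            ClientOptions options = ClientOptions.create()
                .setHost("localhost")   // assumed local ksqlDB server
                .setPort(8088);
            Client client = Client.create(options);

            // A persistent query: continuously maintain per-user click counts as a table.
            client.executeStatement(
                "CREATE TABLE clicks_per_user AS "
              + "SELECT user_id, COUNT(*) AS clicks FROM clickstream GROUP BY user_id EMIT CHANGES;"
            ).get();

            // A push query: subscribe to the changelog of that table.
            StreamedQueryResult result =
                client.streamQuery("SELECT * FROM clicks_per_user EMIT CHANGES;").get();
            Row row = result.poll(); // blocks until the first result row arrives
            System.out.println(row);

            client.close();
        }
    }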
Designing ETL Pipelines with Structured Streaming and Delta Lake—How to Archi... (Databricks)
Structured Streaming has proven to be the best platform for building distributed stream processing applications. Its unified SQL/Dataset/DataFrame APIs and Spark’s built-in functions make it easy for developers to express complex computations. Delta Lake, on the other hand, is the best way to store structured data because it is an open-source storage layer that brings ACID transactions to Apache Spark and big data workloads. Together, these can make it very easy to build pipelines in many common scenarios. However, expressing the business logic is only part of the larger problem of building end-to-end streaming pipelines that interact with a complex ecosystem of storage systems and workloads. It is important for the developer to truly understand the business problem that needs to be solved. Apache Spark, being a unified analytics engine doing both batch and stream processing, often provides multiple ways to solve the same problem. So understanding the requirements carefully helps you architect a pipeline that solves your business needs in the most resource-efficient manner.
In this talk, I am going to examine a number of common streaming design patterns in the context of the following questions.
WHAT are you trying to consume? What are you trying to produce? What is the final output that the business wants? What are your throughput and latency requirements?
WHY do you really have those requirements? Would solving the requirements of the individual pipeline actually solve your end-to-end business requirements?
HOW are you going to architect the solution? And how much are you willing to pay for it?
Clarity in understanding the ‘what and why’ of any problem automatically brings much clarity on ‘how’ to architect it using Structured Streaming and, in many cases, Delta Lake.
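To make the WHAT/HOW questions concrete, here is a minimal sketch of one common pattern in this space: reading from Kafka and appending to a Delta table with Structured Streaming. The broker, topic, and paths are hypothetical:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.sql.streaming.StreamingQuery;

    public class KafkaToDelta {
        public static void main(String[] args) throws Exception {
            SparkSession spark = SparkSession.builder().appName("kafka-to-delta").getOrCreate();

            // WHAT we consume: a Kafka topic of raw events.
            Dataset<Row> raw = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "localhost:9092") // assumed broker
                .option("subscribe", "events")                       // hypothetical topic
                .load();

            // A minimal transformation: keep the payload as a string column.
            Dataset<Row> events = raw.selectExpr("CAST(value AS STRING) AS json", "timestamp");

            // WHAT we produce: an append-only Delta table; the checkpoint provides
            // the bookkeeping Structured Streaming needs for fault tolerance.
            StreamingQuery query = events.writeStream()
                .format("delta")
                .option("checkpointLocation", "/tmp/etl_checkpoint")
                .start("/tmp/events_delta");

            query.awaitTermination();
        }
    }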
Large Scale Lakehouse Implementation Using Structured Streaming (Databricks)
Business leads, executives, analysts, and data scientists rely on up-to-date information to make business decisions, adjust to the market, meet the needs of their customers, and run effective supply chain operations.
Come hear how Asurion used Delta, Structured Streaming, Auto Loader, and SQL Analytics to improve production data latency from day-minus-one to near real time. Asurion’s technical team will share battle-tested tips and tricks you only get at a certain scale. Asurion’s data lake executes 4,000+ streaming jobs and hosts over 4,000 tables in its production data lake on AWS.
Near real-time statistical modeling and anomaly detection using Flink! (Flink Forward)
Flink Forward San Francisco 2022.
At ThousandEyes we receive billions of events every day that allow us to monitor the internet; the most important aspect of our platform is detecting outages and anomalies that have the potential to cause serious impact to customer applications and user experience. Automatic detection of such events at the lowest latency and highest accuracy is extremely important for our customers and their business. After launching several resilient and low-latency data pipelines in production using Flink, we decided to take it up a notch: we leveraged Flink to build statistical models in near real-time and apply them to the incoming stream of events to detect anomalies! In this session we will deep dive into the design as well as discuss pitfalls and learnings while developing our real-time platform, which leverages Debezium, Kafka, Flink, ElastiCache, and DynamoDB to process events at scale! (A minimal detection sketch appears below.)
by
Kunal Umrigar & Balint Kurnasz
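As a rough, simplified illustration of maintaining a statistical model in-stream (not ThousandEyes' actual implementation), here is a keyed Flink operator that keeps running statistics per key with Welford's algorithm and flags values more than three standard deviations from the mean:

    import org.apache.flink.api.common.functions.RichFlatMapFunction;
    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.util.Collector;

    public class LatencyAnomalies {

        /** Flags measurements more than 3 standard deviations from the running mean. */
        static class ZScoreDetector extends RichFlatMapFunction<Tuple2<String, Double>, String> {
            private transient ValueState<Tuple2<Long, Tuple2<Double, Double>>> stats; // (count, (mean, m2))

            @Override
            public void open(Configuration parameters) {
                stats = getRuntimeContext().getState(new ValueStateDescriptor<>(
                    "stats", Types.TUPLE(Types.LONG, Types.TUPLE(Types.DOUBLE, Types.DOUBLE))));
            }

            @Override
            public void flatMap(Tuple2<String, Double> event, Collector<String> out) throws Exception {
                Tuple2<Long, Tuple2<Double, Double>> s = stats.value();
                long n = s == null ? 0 : s.f0;
                double mean = s == null ? 0 : s.f1.f0;
                double m2 = s == null ? 0 : s.f1.f1;

                double x = event.f1;
                if (n > 30) { // only score once the model has seen enough samples
                    double stddev = Math.sqrt(m2 / (n - 1));
                    if (stddev > 0 && Math.abs(x - mean) / stddev > 3.0) {
                        out.collect("anomaly on " + event.f0 + ": " + x);
                    }
                }
                // Update the running statistics with the new observation (Welford).
                n += 1;
                double delta = x - mean;
                mean += delta / n;
                m2 += delta * (x - mean);
                stats.update(Tuple2.of(n, Tuple2.of(mean, m2)));
            }
        }

        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            env.fromElements(Tuple2.of("probe-1", 12.0), Tuple2.of("probe-1", 11.5)) // stand-in source
                .returns(Types.TUPLE(Types.STRING, Types.DOUBLE))
                .keyBy(e -> e.f0)
                .flatMap(new ZScoreDetector())
                .print();
            env.execute("latency-anomalies");
        }
    }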
Databricks CEO Ali Ghodsi introduces Databricks Delta, a new data management system that combines the scale and cost-efficiency of a data lake, the performance and reliability of a data warehouse, and the low latency of streaming.
Deploying Flink on Kubernetes - David Anderson (Ververica)
Kubernetes has rapidly established itself as the de facto standard for orchestrating containerized infrastructures. And with the recent completion of the refactoring of Flink's deployment and process model known as FLIP-6, Kubernetes has become a natural choice for Flink deployments. In this talk we will walk through how to get Flink running on Kubernetes.
A Thorough Comparison of Delta Lake, Iceberg and Hudi (Databricks)
Recently, a set of modern table formats such as Delta Lake, Hudi, and Iceberg have sprung up. Along with the Hive Metastore, these table formats are trying to solve problems that have stood in traditional data lakes for a long time, with declared features like ACID transactions, schema evolution, upserts, time travel, and incremental consumption.
Apache Kafka in the Airline, Aviation and Travel Industry (Kai Wähner)
Aviation and travel are notoriously vulnerable to social, economic, and political events, as well as the ever-changing expectations of consumers. Coronavirus is just a piece of the challenge.
This presentation explores use cases, architectures, and references for Apache Kafka as event streaming technology in the aviation industry, including airline, airports, global distribution systems (GDS), aircraft manufacturers, and more.
Examples include Lufthansa, Singapore Airlines, Air France Hop, Amadeus, and more. Technologies include Kafka, Kafka Connect, Kafka Streams, ksqlDB, Machine Learning, Cloud, and more.
This document discusses using Apache Kafka as a data hub to capture changes from various data sources using change data capture (CDC). It outlines several common CDC patterns, like using modification dates, database triggers, or log files to identify changes. It then discusses using Kafka Connect to integrate data sources like MongoDB and PostgreSQL and replicate their changes. The document provides examples of open source CDC connectors and concludes with suggestions for getting involved in the Apache Kafka community.
Tame the small files problem and optimize data layout for streaming ingestion... (Flink Forward)
Flink Forward San Francisco 2022.
In modern data platform architectures, stream processing engines such as Apache Flink are used to ingest continuous streams of data into data lakes such as Apache Iceberg. Streaming ingestion to Iceberg tables can suffer from two problems: (1) a small-files problem that can hurt read performance and (2) poor data clustering that can make file pruning less effective. To address those two problems, we propose adding a shuffling stage to the Flink Iceberg streaming writer. The shuffling stage can intelligently group data via bin packing or range partitioning. This can reduce the number of concurrent files that every task writes. It can also improve data clustering. In this talk, we will explain the motivations in detail and dive into the design of the shuffling stage. We will also share evaluation results that demonstrate the effectiveness of smart shuffling.
by
Gang Ye & Steven Wu
GCP for Apache Kafka® Users: Stream Ingestion and Processing (confluent)
Watch this talk here: https://www.confluent.io/online-talks/gcp-for-apache-kafka-users-stream-ingestion-processing
In private and public clouds, stream analytics commonly means stateless processing systems organized around Apache Kafka® or a similar distributed log service. GCP took a somewhat different tack, with Cloud Pub/Sub, Dataflow, and BigQuery, distributing the responsibility for processing among ingestion, processing and database technologies.
We compare the two approaches to data integration and show how Dataflow allows you to join, transform, and deliver data streams among on-prem and cloud Apache Kafka clusters, Cloud Pub/Sub topics, and a variety of databases. The session will have a mix of architectural discussions and practical code reviews of Dataflow-based pipelines.
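As a minimal sketch of the kind of Dataflow pipeline discussed, here is an Apache Beam job that reads a Kafka topic and delivers the values to a Cloud Pub/Sub topic; the broker, topic, and project names are assumptions:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
    import org.apache.beam.sdk.io.kafka.KafkaIO;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.transforms.Values;
    import org.apache.kafka.common.serialization.LongDeserializer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class KafkaToPubSub {
        public static void main(String[] args) {
            Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

            pipeline
                // Read from an on-prem or cloud Kafka cluster...
                .apply(KafkaIO.<Long, String>read()
                    .withBootstrapServers("broker:9092")        // assumed broker address
                    .withTopic("events")                        // hypothetical topic
                    .withKeyDeserializer(LongDeserializer.class)
                    .withValueDeserializer(StringDeserializer.class)
                    .withoutMetadata())
                .apply(Values.create())
                // ...and deliver the values to a Cloud Pub/Sub topic.
                .apply(PubsubIO.writeStrings().to("projects/my-project/topics/events"));

            // With the Dataflow runner on the classpath, pass --runner=DataflowRunner to run on GCP.
            pipeline.run();
        }
    }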
Serverless Kafka and Spark in a Multi-Cloud Lakehouse Architecture (Kai Wähner)
Apache Kafka in conjunction with Apache Spark became the de facto standard for processing and analyzing data. Both frameworks are open, flexible, and scalable.
Unfortunately, the latter makes operations a challenge for many teams. Ideally, teams can use serverless SaaS offerings to focus on business logic. However, hybrid and multi-cloud scenarios require a cloud-native platform that provides automated and elastic tooling to reduce the operations burden.
This session explores different architectures to build serverless Apache Kafka and Apache Spark multi-cloud architectures across regions and continents.
We start from the analytics perspective of a data lake and explore its relation to a fully integrated data streaming layer with Kafka to build a modern Data Lakehouse.
Real-world use cases show the joint value and explore the benefit of the "delta lake" integration.
This document provides an overview of Apache Flink internals. It begins with an introduction and recap of Flink programming concepts. It then discusses how Flink programs are compiled into execution plans and executed in a pipelined fashion, as opposed to being executed eagerly like regular code. The document outlines Flink's architecture including the optimizer, runtime environment, and data storage integrations. It also covers iterative processing and how Flink handles iterations both by unrolling loops and with native iterative datasets.
The document discusses how Jazz for Service Management can help integrate data from different sources to create a unified view. It does this through linked data and open services that allow for plug-and-play integration across tools from multiple vendors. This simplifies integration and enables things like dashboards, reports, and mobile access using common standards.
This document discusses Mastercard's use of Technology Business Management (TBM) to more accurately track costs of software and infrastructure back to individual business units. It describes how Mastercard collects usage data from Pivotal Cloud Foundry (PCF) and infrastructure-as-a-service providers to import into the Apptio cost analysis platform. This enables generating reports that show estimated charges to business units for their application instances' usage of shared PCF platforms and services. Future work includes collecting more detailed usage data across all organizations and spaces in all PCF foundations for improved cost modeling and charge-back.
Kafka Summit London 2019 - the art of the event-streaming app (Neil Avery)
Have you ever imagined what it would be like to build a massively scalable streaming application on Kafka, the challenges, the patterns and the thought process involved? How much of the application can be reused? What patterns will you discover? How does it all fit together? Depending upon your use case and business, this can mean many things. Starting out with a data pipeline is one thing, but evolving into a company-wide real-time application that is business critical and entirely dependent upon a streaming platform is a giant leap. Large-scale streaming applications are also called event streaming applications. They are classically different from other data systems; event streaming applications are viewed as a series of interconnected streams that are topologically defined using stream processors; they hold state that models your use case as events. Almost like a deconstructed real-time database.
In this talk, I step through the origins of event streaming systems, understanding how they are developed from raw events to evolve into something that can be adopted at an organizational scale. I start with event-first thinking and Domain Driven Design to build data models that work with the fundamentals of Streams, Kafka Streams, KSQL and Serverless (FaaS).
Building upon this, I explain how to build common business functionality by stepping through the patterns for: scalable payment processing; running it on rails (instrumentation and monitoring); and control flow patterns. Finally, all of these concepts are combined in a solution architecture that can be used at an enterprise scale. I will introduce enterprise patterns such as events-as-a-backbone, events as APIs, and methods for governance and self-service. You will leave the talk with an understanding of how to model events with event-first thinking, how to work towards reusable streaming patterns and, most importantly, how it all fits together at scale.
The Art of The Event Streaming Application: Streams, Stream Processors and Sc... (confluent)
1) The document discusses the art of building event streaming applications using various techniques like bounded contexts, stream processors, and architectural pillars.
2) Key aspects include modeling the application as a collection of loosely coupled bounded contexts, handling state using Kafka Streams, and building reusable stream processing patterns for instrumentation.
3) Composition patterns involve choreographing and orchestrating interactions between bounded contexts to capture business workflows and functions as event-driven data flows.
How to Quantify the Value of Kafka in Your Organization (confluent)
(Lyndon Hedderly, Confluent) Kafka Summit SF 2018
We all know real-time data has value. But how do you quantify that value in order to create a business case for becoming more data- or event-driven?
The first half of this talk will explore the value of data across a variety of organizations, starting with the five most valuable companies in the world: Apple, Alphabet (Google), Microsoft, Amazon and Facebook (based on stock prices in July 2017). We will go on to discuss other digital natives: Uber, Ebay, Netflix and LinkedIn, before exploring more traditional companies across retail, finance and automotive. Next, we’ll look at non-businesses such as governments and lobbyists. Whether organizations are using data to create new business products and services, improve user experiences, increase productivity, manage risk, or influence global power, we’ll see that fast and interconnected data, or “event streaming,” is increasingly important.
After showing that data value can be quantified, the second half of this talk will explain the five steps to creating a business case.
Most businesses focus on:
-Making more money or conferring competitive advantage to make more money
-Increasing efficiency to save money and/or
-Mitigating risk to the business to protect money
We’ll walk through examples of real business cases, discuss how business cases have evolved over the years and show the power of a sound business case. If you’re interested in big money and big business, as well as big data, this talk is for you.
Jet Reports is the tool for building the best BI, faster (CLARA CAMPROVIN)
Business analytics when you need them, anywhere
Jet Enterprise is a business intelligence and reporting solution developed specifically to meet the needs of Microsoft Dynamics users. Now you can bring all your information together in one place and let anyone you choose in the organization easily perform sophisticated business analysis from anywhere. Empower users to make better decisions, faster, from practically any device.
With Jet Enterprise you get:
A complete business intelligence and reporting solution, ready to use in just 2 hours
More than 80 dashboards and report templates
7 customizable pre-built cubes
A data warehouse
Direct integration with your Microsoft Dynamics data and the ability to connect to other relevant business systems
The ability to create dashboards in minutes, without needing to know the underlying data structure
Optional Jet Mobile, to access your data from anywhere through a web browser or mobile device
A robust data warehouse automation and customization platform
“We started with Sage Pro data, NAV 2009 data and, in addition, data brought in from the new company we had acquired, so we are now using three data systems. The advantages of combining the three systems in Jet Enterprise have been enormous.”
– Davis & Shirtliff
Immediate success = fast ROI and low cost of ownership
Many business intelligence solutions carry hidden costs, such as long and difficult implementations, expensive customizations, and high license prices when scaled to a large number of users. Jet Enterprise typically installs in about two hours, requires minimal user training, and offers licensing for an unlimited number of users. Users typically see an increase in gross revenue within the first 12 months of use.
Watch full webinar here: https://buff.ly/2mHGaLA
Having started out as the most agile and real-time enterprise data fabric, data virtualization is proving to go beyond its initial promise and is becoming one of the most important enterprise big data fabrics.
Attend this session to learn:
• What data virtualization really is
• How it differs from other enterprise data integration technologies
• Why data virtualization is finding enterprise-wide deployment inside some of the largest organizations
This document discusses how digital disruptions are changing businesses and the need for data integration (DI) modernization. It emphasizes that data is crucial for digital businesses and an efficient DI platform is key to success. The document outlines strategies like the big bang or 2-speed approach for DI modernization. It also highlights capabilities needed like API-based integration, stream computing, cloud infrastructure and logical data warehousing. Finally, it stresses the importance of adopting an agile operating model and DevOps culture for lean execution of the DI transformation.
The art of the event streaming application: streams, stream processors and sc... (confluent)
The document discusses event streaming applications and microservices. It introduces event streaming as an architectural style where applications are composed of loosely coupled services that communicate asynchronously through streams of events. Key aspects covered include handling state using event streams and Kafka Streams, building applications as bounded contexts with choreography and orchestration, and establishing pillars for instrumentation, control and operations. Overall the document promotes event streaming as a paradigm that addresses complexity by providing simplicity and scalability through convergent data and logic processing.
Kafka Summit SF 2019 - the art of the event-streaming app (Neil Avery)
Have you ever imagined what it would be like to build a massively scalable streaming application on Kafka, the challenges, the patterns and the thought process involved? How much of the application can be reused? What patterns will you discover? How does it all fit together? Depending upon your use case and business, this can mean many things. Starting out with a data pipeline is one thing, but evolving into a company-wide real-time application that is business critical and entirely dependent upon a streaming platform is a giant leap. Large-scale streaming applications are also called event streaming applications. They are classically different from other data systems; event streaming applications are viewed as a series of interconnected streams that are topologically defined using stream processors; they hold state that models your use case as events. Almost like a deconstructed realtime database.
In this talk, I step through the origins of event streaming systems, understanding how they are developed from raw events to evolve into something that can be adopted at an organizational scale. I start with event-first thinking and Domain Driven Design to build data models that work with the fundamentals of Streams, Kafka Streams, KSQL and Serverless (FaaS). Building upon this, I explain how to build common business functionality by stepping through patterns for scalable payment processing; running it on rails (instrumentation and monitoring); and control flow patterns (start, stop, pause). Finally, all of these concepts are combined in a solution architecture that can be used at enterprise scale. I will introduce enterprise patterns such as events-as-a-backbone, events as APIs and methods for governance and self-service. You will leave the talk with an understanding of how to model events with event-first thinking, how to work towards reusable streaming patterns and, most importantly, how it all fits together at scale.
“Lights Out” Configuration using Tivoli Netcool AutoDiscovery Tools (Antonio Rolle)
Review why a CMDB is essential to and is the foundation of your BSM strategy
Outline the known challenges that require planning at the outset of a CMDB initiative
Drill down into the approach and lessons learned in the initial stages of a CMDB rollout for one of the largest financial institutions in North America
Transforming Financial Services with Event Streaming Data (confluent)
The document discusses how event streaming can transform financial services by providing real-time and scalable data. It describes how banks have become software-driven and the challenges of legacy infrastructure. The document then provides an overview of how Confluent event streaming works and its benefits. Finally, it discusses some key use cases for financial services including improving customer experiences, unlocking value from mainframes and core systems, payments, open banking, security and fraud, and regulatory compliance.
Event-driven architectures have been around for a long time, but new trends and innovations in "serverless" computing, data streaming, and Agile practices have created the ground for an evolutionary step that will have significant impact on the way we design and build software over the next decade or more. Much like APIs drove a revolution in public services for RPC, REST, and similar "pull" use cases across organization boundaries, the market now promises to similarly define standard mechanisms to enable "push" notifications of discrete data and activities. This practice, which we call Flow, will drive a revolution in interconnectivity similar to what we saw with HTML and REST. Agile is central to the success of these mechanisms, and is one of the key reasons why this will happen sooner rather than later. The ability to adapt quickly to customer needs, combined with the ability to react quickly to new and changing event sources, is required to make event-driven practices work. In this presentation, James Urquhart describes the changes on our horizon, discusses existing architectures, mechanisms and organizations that are leading the way, and talks specifically about how Agile teams are well prepared to both drive and benefit from Flow systems. The presentation is targeted at technology, development, and product leaders who wish to understand how Flow fits into their architecture portfolio.
EDA Meets Data Engineering – What's the Big Deal? (confluent)
Presenter: Guru Sattanathan, Systems Engineer, Confluent
Event-driven architectures have been around for many years, much like Apache Kafka®, which was first open sourced in 2011. The reality is that the true potential of Kafka is only being realised now. Kafka is becoming the central nervous system of many of today’s enterprises. It is bringing a profound paradigm shift to the way we think about enterprise IT. What has changed in Kafka to enable this paradigm shift? Is it not just a message broker? And how are enterprises using it today? This session will explore these key questions.
Sydney: https://content.deloitte.com.au/20200221-tel-event-tech-community-syd-registration
Melbourne: https://content.deloitte.com.au/20200221-tel-event-tech-community-mel-registration
This document summarizes a webinar on data as a service. It discusses how data virtualization through Denodo can enable agile business intelligence by providing pre-aggregated data to users quickly. It describes how Denodo creates API access to data, allows for an enterprise data marketplace, and integrates machine learning models to power operational AI. A demonstration of a personal COVID-19 risk monitor is provided.
This document provides an overview and comparison of SaaS (Software as a Service) vs on-premise business intelligence (BI) solutions. It discusses the history and components of cloud computing including infrastructure, platforms, and software as a service. Examples are given of both on-premise and SaaS BI solutions. Considerations for choosing between the options include security, data volumes, customization needs, integration requirements, and desire for competitive advantage. Both approaches have ongoing costs associated with support and maintenance.
The Streaming Assessment – An Introduction (confluent)
Business breakout during Confluent’s streaming event in Munich, presented by Lyndon Hedderly, Director of Customer Solutions at Confluent. This three-day hands-on course focused on how to build, manage, and monitor clusters using industry best-practices developed by the world’s foremost Apache Kafka™ experts. The sessions focused on how Kafka and the Confluent Platform work, how their main subsystems interact, and how to set up, manage, monitor, and tune your cluster.
The document provides an overview and introduction to the Office 365 Admin Center. It was created by Carter-McGowan Services, LLC and introduces Nikkia T. Carter, the CEO and owner of the company. Carter-McGowan Services provides consulting, setup, support and training for various Microsoft products including SharePoint, Office 365, and Skype for Business. The document outlines the key areas and functions of the Office 365 Admin Center for managing users, licenses, domains and settings for Office 365, Exchange, SharePoint and Skype for Business. It also provides instructions for adding new users and assigning licenses within the Admin Center.
Similar to Squirreling Away $640 Billion: How Stripe Leverages Flink for Change Data Capture (20)
Building a fully managed stream processing platform on Flink at scale for Lin... (Flink Forward)
Apache Flink is a distributed stream processing framework that allows users to process and analyze data in real time. At LinkedIn, we developed a fully managed stream processing platform on Flink, running on Kubernetes, to power hundreds of stream processing pipelines in production. This platform is the backbone for other infra systems like Search, Espresso (our internal document store), and feature management. We provide a rich authoring and testing environment which allows users to create, test, and deploy their streaming jobs in a self-serve fashion within minutes. Users can focus on their business logic, leaving the Flink platform to take care of management aspects such as split deployment, resource provisioning, auto-scaling, job monitoring, alerting, failure recovery, and much more. In this talk, we will introduce the overall platform architecture, highlight the unique value propositions that it brings to stream processing at LinkedIn, and share the experiences and lessons we have learned.
Introducing BinarySortedMultiMap - A new Flink state primitive to boost your ... (Flink Forward)
Flink Forward San Francisco 2022.
Probably everyone who has written stateful Apache Flink applications has used one of the fault-tolerant keyed state primitives ValueState, ListState, and MapState. With RocksDB, however, retrieving and updating items comes at an increased cost that you should be aware of. Sometimes, these costs may not be avoidable with the current API, e.g., for efficient event-time stream-sorting or streaming joins where you need to iterate one or two buffered streams in the right order. With FLIP-220, we are introducing a new state primitive: BinarySortedMultiMapState. This new form of state allows you to (a) efficiently store lists of values for a user-provided key, and (b) iterate keyed state in a well-defined sort order. Both features can be backed efficiently by RocksDB with a 2x performance improvement over the current workarounds. This talk will go into the details of the new API and its implementation, present how to use it in your application, and talk about the process of getting it into Flink.
by
Nico Kruber
Introducing the Apache Flink Kubernetes Operator (Flink Forward)
Flink Forward San Francisco 2022.
The Apache Flink Kubernetes Operator provides a consistent approach to manage Flink applications automatically, without any human interaction, by extending the Kubernetes API. Given the increasing adoption of Kubernetes based Flink deployments, the community has been working on a Kubernetes native solution as part of Flink that can benefit from the rich experience of community members and ultimately make Flink easier to adopt. In this talk we give a technical introduction to the Flink Kubernetes Operator and demonstrate the core features and use-cases through in-depth examples.
by
Thomas Weise
Dynamically Scaling Data Streams across Multiple Kafka Clusters with Zero Fli... (Flink Forward)
Flink Forward San Francisco 2022.
Flink consumers read from Kafka as a scalable, high throughput, and low latency data source. However, there are challenges in scaling out data streams where migration and multiple Kafka clusters are required. Thus, we introduced a new Kafka source to read sharded data across multiple Kafka clusters in a way that conforms well with elastic, dynamic, and reliable infrastructure. In this presentation, we will present the source design and how the solution increases application availability while reducing maintenance toil. Furthermore, we will describe how we extended the existing KafkaSource to provide mechanisms to read logical streams located on multiple clusters, to dynamically adapt to infrastructure changes, and to perform transparent cluster migrations and failover.
by
Mason Chen
One sink to rule them all: Introducing the new Async Sink (Flink Forward)
Flink Forward San Francisco 2022.
Next time you want to integrate with a new destination for a demo, concept, or production application, the Async Sink framework will bootstrap development, allowing you to move quickly without compromise. In Flink 1.15 we introduced the Async Sink base (FLIP-171), with the goal of encapsulating common logic and allowing developers to focus on the key integration code. The new framework handles things like request batching, buffering records, applying backpressure, retry strategies, and at-least-once semantics. It allows you to focus on your business logic, rather than spending time integrating with your downstream consumers. During the session we will dive deep into the internals to uncover how it works, why it was designed this way, and how to use it. We will code up a new sink from scratch and demonstrate how to quickly push data to a destination. At the end of this talk you will be ready to start implementing your own Flink sink using the new Async Sink framework. (A usage sketch appears below.)
by
Steffen Hausmann & Danny Cranmer
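As a usage sketch (rather than the from-scratch sink the talk builds), here is how a sink built on the Async Sink base looks from the application side, using the Kinesis Streams sink that shipped alongside FLIP-171 in Flink 1.15; the region and stream name are assumptions:

    import java.util.Properties;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.connector.aws.config.AWSConfigConstants;
    import org.apache.flink.connector.kinesis.sink.KinesisStreamsSink;

    public class AsyncSinkUsage {
        public static KinesisStreamsSink<String> buildSink() {
            // Batching, buffering, backpressure and retries come from the Async Sink framework.
            Properties clientProps = new Properties();
            clientProps.put(AWSConfigConstants.AWS_REGION, "us-east-1"); // assumed region

            return KinesisStreamsSink.<String>builder()
                .setKinesisClientProperties(clientProps)
                .setSerializationSchema(new SimpleStringSchema())
                .setPartitionKeyGenerator(element -> String.valueOf(element.hashCode()))
                .setStreamName("my-output-stream") // hypothetical stream name
                .setMaxBatchSize(500)              // framework-level batching knob
                .build();
            // Attach to any DataStream<String> with stream.sinkTo(buildSink()).
        }
    }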
Tuning Apache Kafka Connectors for Flink (Flink Forward)
Flink Forward San Francisco 2022.
In normal situations, the default Kafka consumer and producer configuration options work well. But we all know life is not all roses and rainbows, and in this session we’ll explore a few knobs that can save the day in atypical scenarios. First, we'll take a detailed look at the parameters available when reading from Kafka. We’ll inspect the params that help us quickly spot an application lock or crash, the ones that can significantly improve performance, and the ones to touch with gloves since they could cause more harm than benefit. Moreover, we’ll explore the partitioning options and discuss when diverging from the default strategy is needed. Next, we’ll discuss the Kafka sink. After browsing the available options we'll dive deep into understanding how to approach use cases like sinking enormous records, managing spikes, and handling small but frequent updates. If you want to understand how to make your application survive when the sky is dark, this session is for you! (A configuration sketch appears below.)
by
Olena Babenko
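A small sketch of the kind of knobs discussed, set through Flink's KafkaSource builder; the broker, topic, and values are illustrative assumptions:

    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.connector.kafka.source.KafkaSource;
    import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;

    public class TunedKafkaSource {
        public static KafkaSource<String> build() {
            return KafkaSource.<String>builder()
                .setBootstrapServers("broker:9092")    // assumed broker
                .setTopics("input-topic")              // hypothetical topic
                .setGroupId("my-group")
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                // Discover newly added partitions every 30s (disabled by default):
                .setProperty("partition.discovery.interval.ms", "30000")
                // Raw Kafka consumer settings pass straight through, e.g. larger fetches:
                .setProperty("max.partition.fetch.bytes", "4194304")
                .build();
        }
    }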
Flink powered stream processing platform at Pinterest (Flink Forward)
Flink Forward San Francisco 2022.
Pinterest is a visual discovery engine that serves over 433MM users. Stream processing allows us to unlock value from real-time data for pinners. At Pinterest, we adopted Flink as the unified stream processing engine. In this talk, we will share our journey in building a stream processing platform with Flink and how we onboarded critical use cases to the platform. Pinterest now supports 90+ near-real-time streaming applications. We will cover the problem statement, how we evaluated potential solutions, and our decision to build the framework.
by
Rainie Li & Kanchi Masalia
Flink Forward San Francisco 2022.
This talk will take you on Apache Flink's long journey into the cloud-native era, starting all the way back when Hadoop and YARN were the standard way of deploying and operating data applications.
We're going to deep dive into the cloud-native set of principles and how they map to the Apache Flink internals and recent improvements. We'll cover fast checkpointing, fault tolerance, resource elasticity, minimal infrastructure dependencies, industry-standard tooling, ease of deployment and declarative APIs.
After this talk you'll get a broader understanding of the operational requirements for a modern streaming application and where the current limits are.
by
David Moravek
Where is my bottleneck? Performance troubleshooting in Flink (Flink Forward)
Flink Forward San Francisco 2022.
In this talk, we will cover various topics around performance issues that can arise when running a Flink job and how to troubleshoot them. We’ll start with the basics, like understanding what the job is doing and what backpressure is. Next, we will see how to identify bottlenecks and which tools or metrics can be helpful in the process. Finally, we will also discuss potential performance issues during the checkpointing or recovery process, as well as some tips and Flink features that can speed up checkpointing and recovery times.
by
Piotr Nowojski
Using the New Apache Flink Kubernetes Operator in a Production Deployment (Flink Forward)
Flink Forward San Francisco 2022.
Running natively on Kubernetes, using the new Apache Flink Kubernetes Operator is a great way to deploy and manage Flink application and session deployments. In this presentation, we provide:
- A brief overview of Kubernetes operators and their benefits
- An introduction to the five levels of the operator maturity model
- An introduction to the newly released Apache Flink Kubernetes Operator and FlinkDeployment CRs
- Dockerfile modifications you can make to swap out UBI images and Java of the underlying Flink Operator container
- Enhancements we're making in versioning/upgradeability/stability and security
- A demo of the Apache Flink Operator in action, with a technical preview of an upcoming product using the Flink Kubernetes Operator
- Lessons learned
- Q&A
by
James Busche & Ted Chang
Flink Forward San Francisco 2022.
The Table API is one of the most actively developed components of Flink in recent times. Inspired by databases and SQL, it encapsulates concepts many developers are familiar with. It can be used with both bounded and unbounded streams in a unified way. But from afar it can be difficult to keep track of what this API is capable of and how it relates to Flink's other APIs. In this talk, we will explore the current state of Table API. We will show how it can be used as a batch processor, a changelog processor, or a streaming ETL tool with many built-in functions and operators for deduplicating, joining, and aggregating data. By comparing it to the DataStream API we will highlight differences and elaborate on when to use which API. We will demonstrate hybrid pipelines in which both APIs interact with one another and contribute their unique strengths. Finally, we will take a look at some of the most recent additions as a first step to stateful upgrades. (A small hybrid-pipeline sketch appears below.)
by
David Anderson
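A minimal sketch of the hybrid pattern mentioned in the abstract: starting in the DataStream API, aggregating with the Table API, and returning to a changelog stream. The inline data is a stand-in:

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
    import org.apache.flink.types.Row;

    import static org.apache.flink.table.api.Expressions.$;

    public class HybridPipeline {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

            // Start in the DataStream API...
            DataStream<String> users = env.fromElements("alice", "bob", "alice");

            // ...hand the stream to the Table API and aggregate declaratively
            // (an atomic type is mapped to a single column named f0)...
            Table counts = tableEnv.fromDataStream(users)
                .groupBy($("f0"))
                .select($("f0").as("user"), $("f0").count().as("visits"));

            // ...and return to the DataStream API as a changelog (updates and retractions).
            DataStream<Row> changelog = tableEnv.toChangelogStream(counts);
            changelog.print();

            env.execute("hybrid-pipeline");
        }
    }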
Flink Forward San Francisco 2022.
Based on the new Flink-Pulsar connector, we implemented Flink's Table API and Catalog to help users interact with the Pulsar cluster via Flink SQL easily. We would like to go through the design and implementation of the SQL connector in the following aspects:
1. Two different modes of using Pulsar as a metadata store
2. Data format transformation and management
3. SQL semantics support within Pulsar context
by
Sijie Guo & Neng Lu
Dynamic Rule-based Real-time Market Data Alerts (Flink Forward)
Flink Forward San Francisco 2022.
At Bloomberg, we deal with high volumes of real-time market data. Our clients expect to be notified of any anomalies in this market data, which may indicate volatile movements in the markets, notable trades, forthcoming events, or system failures. The parameters for these alerts are always evolving and our clients can update them dynamically. In this talk, we'll cover how we utilized the open source Apache Flink and Siddhi SQL projects to build a distributed, scalable, low-latency and dynamic rule-based, real-time alerting system to solve our clients' needs. We'll also cover the lessons we learned along our journey.
by
Ajay Vyasapeetam & Madhuri Jain
Processing Semantically-Ordered Streams in Financial Services (Flink Forward)
Flink Forward San Francisco 2022.
What if my data is already in order? Stream Processing has given us an elegant and powerful solution for running analytic queries and logic over high volumes of continuously arriving data. However, in both Apache Flink and Apache Beam, the notion of time-ordering is baked in at a very low level, making it difficult to express computations that are interested in a semantic, rather than time, ordering of the data. In financial services, what often matters the most about the data moving between systems is not when the data was created, but in what order, to the extent that many institutions engineer a global sequencing over all data entering and produced by their systems to achieve complete determinism. How, then, can financial institutions and others best employ Stream Processing on streams of data that are already ordered? I will cover various techniques that can make this work, as well as seek input from the community on how Flink might be improved to better support these use-cases. (One such technique is sketched below.)
by
Patrick Lucas
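One common technique for the problem the talk poses (sketched here as an illustration, not the speaker's prescribed solution) is to buffer events per key and release them strictly by sequence number:

    import org.apache.flink.api.common.state.MapState;
    import org.apache.flink.api.common.state.MapStateDescriptor;
    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
    import org.apache.flink.util.Collector;

    /**
     * Re-emits (sequenceNumber, payload) events in strict sequence order per key,
     * buffering gaps until the missing sequence numbers arrive.
     */
    public class Resequencer extends KeyedProcessFunction<String, Tuple2<Long, String>, String> {

        private transient ValueState<Long> nextSeq;        // next sequence number to emit
        private transient MapState<Long, String> buffered; // out-of-order events awaiting emission

        @Override
        public void open(Configuration parameters) {
            nextSeq = getRuntimeContext().getState(new ValueStateDescriptor<>("nextSeq", Types.LONG));
            buffered = getRuntimeContext().getMapState(
                new MapStateDescriptor<>("buffered", Types.LONG, Types.STRING));
        }

        @Override
        public void processElement(Tuple2<Long, String> event, Context ctx, Collector<String> out)
                throws Exception {
            long expected = nextSeq.value() == null ? 1L : nextSeq.value();
            buffered.put(event.f0, event.f1);

            // Drain the buffer for as long as the next expected number is present.
            while (buffered.contains(expected)) {
                out.collect(buffered.get(expected));
                buffered.remove(expected);
                expected++;
            }
            nextSeq.update(expected);
        }
    }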
Batch Processing at Scale with Flink & Iceberg (Flink Forward)
Flink Forward San Francisco 2022.
Goldman Sachs's Data Lake platform serves as the firm's centralized data platform, ingesting 140K (and growing!) batches per day of Datasets of varying shape and size. Powered by Flink and using metadata configured by platform users, ingestion applications are generated dynamically at runtime to extract, transform, and load data into centralized storage where it is then exported to warehousing solutions such as Sybase IQ, Snowflake, and Amazon Redshift. Data Latency is one of many key considerations as producers and consumers have their own commitments to satisfy. Consumers range from people/systems issuing queries, to applications using engines like Spark, Hive, and Presto to transform data into refined Datasets. Apache Iceberg allows our applications to not only benefit from consistency guarantees important when running on eventually consistent storage like S3, but also allows us the opportunity to improve our batch processing patterns with its scalability-focused features.
by
Andreas Hailu
Flink Forward San Francisco 2022.
At Flink Forward, we get to hear creative, unique use cases, often on the bleeding edge of some of the most exciting current technologies. This talk will give you a chance to open up the hood on our driven and innovative Open Source community. I will cover what our community has been working on this past year, and how this work relates to our (Ververica's) exciting new Flink engineering roadmap! I will also go through some best practices and upcoming opportunities for getting involved in this community!
by
Caito Scherr
Practical learnings from running thousands of Flink jobs (Flink Forward)
Flink Forward San Francisco 2022.
Task Managers constantly running out of memory? Flink job keeps restarting from cryptic Akka exceptions? Flink job running but doesn’t seem to be processing any records? We share practical learnings from running thousands of Flink jobs for different use-cases and take a look at common challenges we have experienced, such as out-of-memory errors, timeouts, and job stability. We will cover memory tuning, S3 and Akka configurations to address common pitfalls, and the approaches that we take on automating health monitoring and management of Flink jobs at scale.
by
Hong Teoh & Usamah Jassat
Extending Flink SQL for stream processing use cases (Flink Forward)
1. For streaming data, Flink SQL uses STREAMs for append-only queries and CHANGELOGs for upsert queries instead of tables.
2. Stateless queries on streaming data, such as projections and filters, result in new STREAMs or CHANGELOGs.
3. Stateful queries, such as aggregations, produce STREAMs or CHANGELOGs depending on whether they are windowed or not. Join queries between streaming sources also result in STREAM outputs.
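A minimal sketch of the distinction drawn above, using Flink SQL from Java: a stateless filter yields an append-only result, while an unwindowed aggregation yields a continuously updating changelog. The table definition is a stand-in:

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class StreamVsChangelog {
        public static void main(String[] args) {
            TableEnvironment tableEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());

            // A hypothetical source table backed by an unbounded stream.
            tableEnv.executeSql(
                "CREATE TABLE orders (user_id STRING, amount DOUBLE) WITH ('connector' = 'datagen')");

            // Stateless filter: append-only result (a STREAM in the talk's terms).
            // Note: each print() blocks on the unbounded stream; run one at a time.
            tableEnv.executeSql("SELECT user_id, amount FROM orders WHERE amount > 100").print();

            // Unwindowed aggregation: result rows are continuously updated (a CHANGELOG).
            tableEnv.executeSql("SELECT user_id, SUM(amount) FROM orders GROUP BY user_id").print();
        }
    }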
The top 3 challenges running multi-tenant Flink at scale (Flink Forward)
Apache Flink is the foundation for Decodable's real-time SaaS data platform. Flink runs critical data processing jobs with strong security requirements. In addition, Decodable has to scale to thousands of tenants, power various use cases, provide an intuitive user experience and maintain cost-efficiency. We've learned a lot of lessons while building and maintaining the platform. In this talk, I'll share the top 3 toughest challenges building and operating this platform with Flink, and how we solved them.
Using Queryable State for Fun and Profit (Flink Forward)
Flink Forward San Francisco 2022.
A particular feature in our system relies on a streaming 90-minute trailing window of 1-minute samples - implemented as a lookaside cache - to speed up a particular query, allowing our customers to rapidly see an overview of their estate. Across our entire customer base, there is a substantial amount of data flowing into this cache - ~1,000,000 entries/second, with the entire cache requiring ~600GB of RAM. The current implementation is simplistic but expensive. In this talk I describe a replacement implementation as a stateful streaming Flink application leveraging Queryable State. This Flink application reduces the net cost by ~90%. In this session, the implementation is described in detail, including windowing considerations, a sliding-window state buffer that avoids the sliding window replication penalty, and a comparison of queryable state and Redis queries. The talk concludes with a frank discussion of when this distinctive approach is, and is not, appropriate. (A client-side query sketch appears below.)
by
Ron Crocker
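For orientation, here is a rough sketch of the client side of Queryable State; the host, job id, state name, and key are hypothetical, and the job must expose the state (e.g. via asQueryableState) for this to work:

    import java.util.concurrent.CompletableFuture;
    import org.apache.flink.api.common.JobID;
    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.queryablestate.client.QueryableStateClient;

    public class CacheQuery {
        public static void main(String[] args) throws Exception {
            // Connects to the queryable state proxy of a running Flink cluster (default port 9069).
            QueryableStateClient client = new QueryableStateClient("taskmanager-host", 9069);

            ValueStateDescriptor<Double> descriptor =
                new ValueStateDescriptor<>("latest-sample", Types.DOUBLE);

            // Job id of the running streaming application (hypothetical).
            JobID jobId = JobID.fromHexString("d5a8e1b4c3f2a1b0d5a8e1b4c3f2a1b0");

            CompletableFuture<ValueState<Double>> result = client.getKvState(
                jobId, "samples-cache", "host-42", BasicTypeInfo.STRING_TYPE_INFO, descriptor);

            System.out.println("value: " + result.get().value());
            client.shutdownAndWait();
        }
    }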
It's your unstructured data: How to get your GenAI app to production (and spe... (Zilliz)
So you've successfully built a GenAI app POC for your company -- now comes the hard part: bringing it to production. Aparavi addresses the challenges of AI projects while safeguarding data privacy and PII. Our Service for RAG helps AI developers and data scientists scale their app from thousands to millions of users using corporate unstructured data. Aparavi's AI Data Loader cleans, prepares, and then loads only the relevant unstructured data for each AI project/app, enabling you to operationalize the creation of GenAI apps easily and accurately while giving you the time to focus on what you really want to do - building a great AI application with useful and relevant context. All within your environment and never having to share private corporate data with anyone - not even Aparavi.
"Hands-on development experience using wasm Blazor", Furdak Vladyslav.pptxFwdays
I will share my personal experience of full-time development on wasm Blazor
What difficulties our team faced: life hacks with Blazor app routing, whether it is necessary to write JavaScript, which technology stack and architectural patterns we chose
What conclusions we made and what mistakes we committed
Demystifying Neural Networks And Building Cybersecurity Applications (Priyanka Aash)
In today's rapidly evolving technological landscape, Artificial Neural Networks (ANNs) have emerged as a cornerstone of artificial intelligence, revolutionizing various fields including cybersecurity. Inspired by the intricacies of the human brain, ANNs have a rich history and a complex structure that enables them to learn and make decisions. This blog aims to unravel the mysteries of neural networks, explore their mathematical foundations, and demonstrate their practical applications, particularly in building robust malware detection systems using Convolutional Neural Networks (CNNs).
Garbage In, Garbage Out: Why poor data curation is killing your AI models (an... (Zilliz)
Enterprises have traditionally prioritized data quantity, assuming more is better for AI performance. However, a new reality is setting in: high-quality data, not just volume, is the key. This shift exposes a critical gap – many organizations struggle to understand their existing data and lack effective curation strategies and tools. This talk dives into these data challenges and explores the methods of automating data curation.
Choosing the Best Outlook OST to PST Converter: Key Features and Considerations (webbyacad software)
When looking for a good software utility to convert Outlook OST files to PST format, it is important to find one that is easy to use and has useful features. WebbyAcad OST to PST Converter Tool is a great choice because it is simple to use for anyone, whether you are tech-savvy or not. It can smoothly change your files to PST while keeping all your data safe and secure. Plus, it can handle large amounts of data and convert multiple files at once, which can save you a lot of time. It even comes with 24*7 technical support assistance and a free trial, so you can try it out before making a decision. Whether you need to recover, move, or back up your data, Webbyacad OST to PST Converter is a reliable option that gives you all the support you need to manage your Outlook data effectively.
UiPath Community Day Amsterdam: Code, Collaborate, Connect (UiPathCommunity)
Welcome to our third live UiPath Community Day Amsterdam! Come join us for a half-day of networking and UiPath Platform deep-dives, for devs and non-devs alike, in the middle of summer ☀.
📕 Agenda:
12:30 Welcome Coffee/Light Lunch ☕
13:00 Event opening speech
Ebert Knol, Managing Partner, Tacstone Technology
Jonathan Smith, UiPath MVP, RPA Lead, Ciphix
Cristina Vidu, Senior Marketing Manager, UiPath Community EMEA
Dion Mes, Principal Sales Engineer, UiPath
13:15 ASML: RPA as Tactical Automation
Tactical robotic process automation for solving short-term challenges, while establishing standard and re-usable interfaces that fit IT's long-term goals and objectives.
Yannic Suurmeijer, System Architect, ASML
13:30 PostNL: an insight into RPA at PostNL
Showcasing the solutions our automations have provided, the challenges we’ve faced, and the best practices we’ve developed to support our logistics operations.
Leonard Renne, RPA Developer, PostNL
13:45 Break (30')
14:15 Breakout Sessions: Round 1
Modern Document Understanding in the cloud platform: AI-driven UiPath Document Understanding
Mike Bos, Senior Automation Developer, Tacstone Technology
Process Orchestration: scale up and have your Robots work in harmony
Jon Smith, UiPath MVP, RPA Lead, Ciphix
UiPath Integration Service: connect applications, leverage prebuilt connectors, and set up customer connectors
Johans Brink, CTO, MvR digital workforce
15:00 Breakout Sessions: Round 2
Automation, and GenAI: practical use cases for value generation
Thomas Janssen, UiPath MVP, Senior Automation Developer, Automation Heroes
Human in the Loop/Action Center
Dion Mes, Principal Sales Engineer @UiPath
Improving development with coded workflows
Idris Janszen, Technical Consultant, Ilionx
15:45 End remarks
16:00 Community fun games, sharing knowledge, drinks, and bites 🍻
How UiPath Discovery Suite supports identification of Agentic Process Automat... (DianaGray10)
📚 Understand the basics of the new persona-based, LLM-powered Agentic Process Automation and discover how existing UiPath Discovery Suite products like Communication Mining, Process Mining, and Task Mining can be leveraged to identify APA candidates.
Topics Covered:
💡 Idea Behind APA: Explore the innovative concept of Agentic Process Automation and its significance in modern workflows.
🔄 How APA is Different from RPA: Learn the key differences between Agentic Process Automation and Robotic Process Automation.
🚀 Discover the Advantages of APA: Uncover the unique benefits of implementing APA in your organization.
🔍 Identifying APA Candidates with UiPath Discovery Products: See how UiPath's Communication Mining, Process Mining, and Task Mining tools can help pinpoint potential APA candidates.
🔮 Discussion on Expected Future Impacts: Engage in a discussion on the potential future impacts of APA on various industries and business processes.
Enhance your knowledge on the forefront of automation technology and stay ahead with Agentic Process Automation. 🧠💼✨
Speakers:
Arun Kumar Asokan, Delivery Director (US) @ qBotica and UiPath MVP
Naveen Chatlapalli, Solution Architect @ Ashling Partners and UiPath MVP
"Making .NET Application Even Faster", Sergey Teplyakov.pptxFwdays
In this talk we're going to explore the performance improvement lifecycle, starting with setting the performance goals, using profilers to figure out the bottlenecks, making a fix, and validating that the fix works by benchmarking it. The talk will be useful for novice and seasoned .NET developers and architects interested in making their applications fast and understanding how things work under the hood.
The Zaitechno Handheld Raman Spectrometer is a powerful and portable tool for rapid, non-destructive chemical analysis. It utilizes Raman spectroscopy, a technique that analyzes the vibrational fingerprint of molecules to identify their chemical composition. This handheld instrument allows for on-site analysis of materials, making it ideal for a variety of applications, including:
Material identification: Identify unknown materials, minerals, and contaminants.
Quality control: Ensure the quality and consistency of raw materials and finished products.
Pharmaceutical analysis: Verify the identity and purity of pharmaceutical compounds.
Food safety testing: Detect contaminants and adulterants in food products.
Field analysis: Analyze materials in the field, such as during environmental monitoring or forensic investigations.
The Zaitechno Handheld Raman Spectrometer is easy to use and features a user-friendly interface. It is compact and lightweight, making it ideal for field applications. With its rapid analysis capabilities, the Zaitechno Handheld Raman Spectrometer can help you improve efficiency and productivity in your research or quality control workflows.
Generative AI technology is a fascinating field that focuses on creating comp... (Nohoax Kanont)
Generative AI technology is a fascinating field that focuses on creating computer models capable of generating new, original content. It leverages the power of large language models, neural networks, and machine learning to produce content that can mimic human creativity. This technology has seen a surge in innovation and adoption since the introduction of ChatGPT in 2022, leading to significant productivity benefits across various industries. With its ability to generate text, images, video, and audio, generative AI is transforming how we interact with technology and the types of tasks that can be automated.
Redefining Cybersecurity with AI CapabilitiesPriyanka Aash
In this comprehensive overview of Cisco's latest innovations in cybersecurity, the focus is squarely on resilience and adaptation in the face of evolving threats. The discussion covers the imperative of tackling Mal information, the increasing sophistication of insider attacks, and the expanding attack surfaces in a hybrid work environment. Emphasizing a shift towards integrated platforms over fragmented tools, Cisco introduces its Security Cloud, designed to provide end-to-end visibility and robust protection across user interactions, cloud environments, and breaches. AI emerges as a pivotal tool, from enhancing user experiences to predicting and defending against cyber threats. The blog underscores Cisco's commitment to simplifying security stacks while ensuring efficacy and economic feasibility, making a compelling case for their platform approach in safeguarding digital landscapes.
DefCamp_2016_Chemerkin_Yury-publish.pdf - Presentation by Yury Chemerkin at DefCamp 2016 discussing mobile app vulnerabilities, data protection issues, and analysis of security levels across different types of mobile applications.
2. An API that gets out of your way
It’s so easy, we’ve embedded a bunch of examples right here. Copy some of these requests into your terminal and check out what happens.
With wrappers in Ruby, PHP, Python and more, you can get started in minutes. Learn More ➤
3. As complexity grew…
Started out with
We had a problem, so we thought to use …
Then we had a ProblemFactory
4. As data volume grew…
Database scalability is a complicated topic…
Started out with
Had to make sure it was web scale
Distributed transactions
Change Data Capture
6. Squirreling Away $640 Billion
Flink Forward - San Francisco 2022
Jeff Chao
Staff Engineer / Tech Lead for Change Data Capture Infrastructure at Stripe
How Stripe Leverages Flink for Change Data Capture
7. Agenda
Squirreling Away $640 Billion: How Stripe Leverages Flink for Change Data Capture
1 CDC at Stripe
2 Aggregating Change Events
3 How it Started, How it Ended
Change Data Capture (CDC) is widely used at Stripe to capture data changes from databases without critically impacting database reliability and scalability. CDC powers many critical financial use cases at Stripe such as the Stripe Dashboard, Stripe Search, Sigma, and Financial Reporting.
From idea to production, things may seem straightforward at first, but the details matter. We detail our journey of how we leveraged Flink for Change Data Capture at Stripe in order to uphold the highest data quality standards. Freshness, Coverage, and Correctness SLOs are paramount to the success of platforms and applications running on top of our CDC infrastructure.
Change Event Streams are ubiquitous across Stripe given the vast number of applications and employees generating datasets worldwide. Change Event Streams are independent from one another, which leads to the typical challenges in distributed systems. One of the major use cases revolves around aggregating individual change events of a database transaction to support Stripe’s payments infrastructure.
8. Agenda
1 CDC at Stripe
2 Aggregating Change Events
3 How it Started, How it Ended
15. Building a Platform
Interoperable: build a highly leveraged platform which makes working with Change Events interoperable with other systems within the organization.
Abstract Away Internals: abstract away database internals such as sharding topology and ensure a datastore-agnostic transport.
Operational Excellence: minimal toil as we scale the number of datasets; clean separation between infrastructure and user issues; great operator experiences; reduced control plane and data plane blast radius; good operator tooling, developer experience, and processes.
16. Agenda
1 CDC at Stripe
2 Aggregating Change Events
3 How it Started, How it Ended
17. Why?
Product teams working with payments data use transactions
Arbitrary number of tables in a database transaction
They should be able to get transactions back out from the CDC path
They shouldn’t have to become stream processing experts
34. What is an Aggregated Change Event?
{
  "ts_utc": 1659375300000,
  "data": [
    {
      "operation": "CREATE",
      "transaction": { "id": "txn1" },
      "before": null,
      "after": { ... }
    },
    {
      "operation": "UPDATE",
      "transaction": { "id": "txn1" },
      "before": { ... },
      "after": { ... }
    }
  ]
}
● One transaction with two events having the same transaction ID.
● Events may arrive from an arbitrary number of tables.
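For illustration only, the payload above could be modeled in Scala roughly as follows. The field names and types are assumptions inferred from the JSON shape (the global/source position fields come from the reference JSON on the trigger slide further down), not Stripe's actual schema:

// Hypothetical Scala model of an aggregated change event; names are illustrative.
case class TransactionRef(id: String, globalPosition: Int, sourcePosition: Int)

case class ChangeEvent(
  operation: String,           // "CREATE" | "UPDATE" | "DELETE"
  transaction: TransactionRef, // shared transaction ID; used later as the aggregation key
  before: Option[String],      // row image before the change (None for CREATE)
  after: Option[String]        // row image after the change (None for DELETE)
)

case class AggregatedChangeEvent(
  tsUtc: Long,            // transaction commit timestamp, epoch millis
  data: Seq[ChangeEvent]  // every change event of one database transaction
)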
37. Join
Joins elements of the same key within the same window.
● Produces pairwise elements
[Timeline diagram: a Change Events stream (Event 1, Event 2, Event 3) and a Transaction Metadata Events stream (BEGIN/COMMIT markers). The join produces pairwise outputs: (Event 1, BEGIN), (Event 1, COMMIT), (Event 2, BEGIN), (Event 2, COMMIT), (Event 3, BEGIN), (Event 3, COMMIT).]
38. Union
Unions multiple streams of the same type into a single stream.
● Requires streams of the same type
[Timeline diagram: the same two streams. No output: a union of the Change Events and Transaction Metadata Events streams won’t compile because they are of different types.]
39. Connect
Unions multiple streams, potentially of different types.
● Similar to Union
[Timeline diagram: the same two streams. The connected stream carries elements from both sides, interleaved as they arrive: Event 1, BEGIN, Event 2, COMMIT, Event 3, …]
40. What Do We Need?
Support for streams of different types
Support for flexible stream combination semantics
Don’t need pairwise outputs
41. Flink Job Definition
val mainStream =
  transactionMetadataEventStream    // uid and name omitted.
    .connect(changeEventStream)     // Union different types.
44. Either
Wraps an event containing one of two types, either from the left or right stream.
● Out-of-box
● No concept of keys
[Timeline diagram: the same two streams. Each output is wrapped as an Either with only one side set, e.g. Either.left = (event), Either.right = null for one stream, and Either.left = null, Either.right = (marker) for the other.]
45. Custom
Wraps an event containing one of two types, either from the left or right stream, and a common key among both events.
● Small and simple code addition
● Need to extract keys
[Timeline diagram: the same two streams. Each output is wrapped with the transaction ID as a key, e.g. WrappedEvent.key = txn-1, WrappedEvent.left = (event), WrappedEvent.right = null, or WrappedEvent.key = txn-1, WrappedEvent.left = null, WrappedEvent.right = (marker).]
46. What Do We Need?
Wrap elements of a connected stream
Be able to identify keys to support aggregations later
47. Flink Job Definition
val mainStream =
  transactionMetadataEventStream       // uid and name omitted.
    .connect(changeEventStream)        // Union different types.
    .flatMap(new WrappedEventFunction) // Like Either type, but with extra fields.
    .keyBy(_.key)                      // Group events with the same transaction ID.
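WrappedEventFunction itself isn't shown in the deck. A minimal sketch of what such a function could look like, assuming the case classes sketched earlier plus a metadata event carrying the marker and total event count (all names here are hypothetical):

import org.apache.flink.streaming.api.functions.co.CoFlatMapFunction
import org.apache.flink.util.Collector

// Assumed shape of a transaction metadata event (BEGIN/COMMIT markers).
case class TransactionMetadataEvent(id: String, marker: String, totalEvents: Int)

// Like Either, but with a common key (the transaction ID) extracted from both sides.
case class WrappedEvent(
  key: String,
  left: Option[TransactionMetadataEvent], // set when the element came from the metadata stream
  right: Option[ChangeEvent]              // set when the element came from a change event stream
)

// A CoFlatMapFunction receives elements from both sides of a connected stream.
class WrappedEventFunction
    extends CoFlatMapFunction[TransactionMetadataEvent, ChangeEvent, WrappedEvent] {

  override def flatMap1(meta: TransactionMetadataEvent, out: Collector[WrappedEvent]): Unit =
    out.collect(WrappedEvent(meta.id, Some(meta), None))

  override def flatMap2(change: ChangeEvent, out: Collector[WrappedEvent]): Unit =
    out.collect(WrappedEvent(change.transaction.id, None, Some(change)))
}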
49. Aggregation Characteristics
Arbitrary number of Change Event Streams
One Transaction Metadata Event Stream
Change Events must have the same transaction IDs
Handle late-arriving or duplicate Change Events and Transaction Metadata Events
Don’t result in infinite state growth
51. Tumbling Windows
Assigns elements to windows of a fixed size.
● Windows don’t overlap
[Timeline diagram: Change Events (Event 1–3) and Transaction Metadata Events (BEGIN/COMMIT) assigned to fixed-size, non-overlapping windows.]
52–56. Tumbling Windows (build-up)
● Late-arriving events? Add delay (see the sketch below).
● Large delay? Trade-off: Freshness vs Correctness.
● Not quite right…
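In vanilla Flink terms, "add delay" usually means tolerating out-of-orderness in the watermark and/or keeping windows open longer with allowedLateness. A generic sketch, not the deck's code; the toy stream, types, and durations are made up:

import java.time.Duration
import org.apache.flink.api.common.eventtime.{SerializableTimestampAssigner, WatermarkStrategy}
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows
import org.apache.flink.streaming.api.windowing.time.Time

object TumblingWithDelaySketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    // Toy (key, eventTimeMillis) elements standing in for change events.
    val events = env
      .fromElements(("txn1", 1000L), ("txn1", 61000L))
      .assignTimestampsAndWatermarks(
        WatermarkStrategy
          .forBoundedOutOfOrderness[(String, Long)](Duration.ofSeconds(30)) // the "delay"
          .withTimestampAssigner(new SerializableTimestampAssigner[(String, Long)] {
            override def extractTimestamp(e: (String, Long), ts: Long): Long = e._2
          }))

    events
      .keyBy(_._1)
      .window(TumblingEventTimeWindows.of(Time.minutes(1)))
      .allowedLateness(Time.minutes(5)) // bigger delay: better correctness, worse freshness
      .reduce((a, b) => if (a._2 >= b._2) a else b)
      .print()

    env.execute("tumbling-with-delay-sketch")
  }
}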
57–58. Sliding Windows
Assigns elements to windows of a fixed size, but with a slide interval.
● Almost like a tumbling window, but with windows overlapping
● Late-arriving events? Same as tumbling windows.
● Slide interval? Explosion of windows.
● Not quite right…
59–66. Session Windows (build-up)
Assigns elements that are seen relatively close to each other.
● Arbitrarily-sized windows; no fixed start and end
● Windows don’t overlap
● Windows close based on a defined gap of inactivity
● Session gap too small? Incomplete aggregates.
● Session gap too big? Trade-off: Freshness vs Correctness.
● Not quite right…
67. Global Windows
Assigns elements to a single window.
● Only a single window per key
● Window never closes
[Timeline diagram: all events for a key land in one never-closing window.]
68. Global Windows (cont.)
● Outputs never get evaluated and materialized
● Needs more…
69. Global Windows + Custom Stateful Trigger
Assign elements to a Global Window and add a custom stateful trigger.
● Flexibly define open/close conditions for non-overlapping windows
● Reasonably handle late-arriving events
● Avoid infinite state growth and reduce likelihood of incomplete aggregates
70. What Makes an Aggregation Complete?
BEGIN transaction marker seen
COMMIT transaction marker seen
All Change Events of the transaction seen
All Change Events are globally and locally ordered
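As a sketch, these criteria can be checked with a begin/commit flag, the COMMIT marker's total event count, and a bitmap of the global positions seen (per the reference JSON on the next slide). Illustrative only, not Stripe's code:

import java.util.BitSet

// Bit i is set iff the change event with global position i + 1 has been seen.
def aggregationComplete(
    beginSeen: Boolean,
    commitSeen: Boolean,
    seenPositions: BitSet,
    totalEvents: Int): Boolean =
  beginSeen && commitSeen &&
    seenPositions.cardinality == totalEvents &&   // every event seen exactly once
    seenPositions.nextClearBit(0) >= totalEvents  // and with no gaps in 1..N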
71. Custom Stateful Trigger: TransactionBoundaryTrigger
if transaction metadata event:
  if begin transaction marker:
    update begin marker state
  else:
    update commit marker state
    update bitmap state using commit marker’s total event count
  set timeout state and register event time timer
else:
  update bitmap state with change event’s global position
  set timeout state and register event time timer
if should trigger(begin, commit, total events):
  clear window
  TriggerResult.FIRE_AND_PURGE
else:
  TriggerResult.CONTINUE

Reference:
// ChangeEvent#transaction
{
  "id": "transaction-id",
  "global_position": 1,
  "source_position": 1
}
// TransactionMetadataEvent
{
  "id": "transaction-id",
  "ts_utc": 1659375300000,
  "marker": "COMMIT",
  "total_events": 3,
  "per_source_event_counts": [{ ... }]
}
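A Flink Trigger skeleton corresponding to that pseudocode might look like the following. This is a minimal sketch assuming the WrappedEvent and completeness check from the earlier sketches, with simplified state and timeout handling; the real TransactionBoundaryTrigger isn't public:

import org.apache.flink.api.common.state.ValueStateDescriptor
import org.apache.flink.streaming.api.windowing.triggers.{Trigger, TriggerResult}
import org.apache.flink.streaming.api.windowing.windows.GlobalWindow

class TransactionBoundaryTrigger(timeoutMs: Long)
    extends Trigger[WrappedEvent, GlobalWindow] {

  private val beginSeen  = new ValueStateDescriptor("beginSeen", classOf[java.lang.Boolean])
  private val commitSeen = new ValueStateDescriptor("commitSeen", classOf[java.lang.Boolean])
  private val total      = new ValueStateDescriptor("totalEvents", classOf[java.lang.Integer])
  private val seen       = new ValueStateDescriptor("seenPositions", classOf[java.util.BitSet])

  override def onElement(e: WrappedEvent, ts: Long, w: GlobalWindow,
                         ctx: Trigger.TriggerContext): TriggerResult = {
    e.left match {
      case Some(meta) if meta.marker == "BEGIN" =>
        ctx.getPartitionedState(beginSeen).update(true)
      case Some(meta) => // COMMIT marker carries the transaction's total event count.
        ctx.getPartitionedState(commitSeen).update(true)
        ctx.getPartitionedState(total).update(meta.totalEvents)
      case None =>       // Change event: record its global position in the bitmap.
        val bits = Option(ctx.getPartitionedState(seen).value()).getOrElse(new java.util.BitSet())
        bits.set(e.right.get.transaction.globalPosition - 1)
        ctx.getPartitionedState(seen).update(bits)
    }
    // Timeout so a stuck transaction eventually fires (possibly incomplete).
    ctx.registerEventTimeTimer(ctx.getCurrentWatermark + timeoutMs)

    val begin  = Option(ctx.getPartitionedState(beginSeen).value()).exists(_.booleanValue)
    val commit = Option(ctx.getPartitionedState(commitSeen).value()).exists(_.booleanValue)
    val n      = Option(ctx.getPartitionedState(total).value()).map(_.intValue).getOrElse(-1)
    val bits   = Option(ctx.getPartitionedState(seen).value()).getOrElse(new java.util.BitSet())
    if (begin && commit && n > 0 && bits.cardinality == n && bits.nextClearBit(0) >= n) {
      clear(w, ctx)
      TriggerResult.FIRE_AND_PURGE
    } else TriggerResult.CONTINUE
  }

  override def onEventTime(time: Long, w: GlobalWindow,
                           ctx: Trigger.TriggerContext): TriggerResult =
    TriggerResult.FIRE_AND_PURGE // Timeout hit: emit whatever we have.

  override def onProcessingTime(time: Long, w: GlobalWindow,
                                ctx: Trigger.TriggerContext): TriggerResult =
    TriggerResult.CONTINUE

  override def clear(w: GlobalWindow, ctx: Trigger.TriggerContext): Unit = {
    ctx.getPartitionedState(beginSeen).clear()
    ctx.getPartitionedState(commitSeen).clear()
    ctx.getPartitionedState(total).clear()
    ctx.getPartitionedState(seen).clear()
  }
}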
72. Flink Job Definition
val mainStream =
  transactionMetadataEventStream       // uid and name omitted.
    .connect(changeEventStream)        // Union different types.
    .flatMap(new WrappedEventFunction) // Like Either type, but with extra fields.
    .keyBy(_.key)                      // Group events with the same transaction ID.
    .window(GlobalWindows.create)
    .trigger(new TransactionBoundaryTrigger(...)) // Flexible windowing semantics.
    .process(new KeyedProcessor(...))
74. Flink Job Definition
val mainStream =
  transactionMetadataEventStream       // uid and name omitted.
    .connect(changeEventStream)        // Union different types.
    .flatMap(new WrappedEventFunction) // Like Either type, but with extra fields.
    .keyBy(_.key)                      // Group events with the same transaction ID.
    .window(GlobalWindows.create)
    .trigger(new TransactionBoundaryTrigger(...)) // Flexible windowing semantics.
    .process(new KeyedProcessor(...))

mainStream       // Side output to DLQ.
  .getSideOutput(...)
  .addSink(...)

mainStream       // Output aggregated change events.
  .addSink(...)
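The elisions above are in the original. For reference, side outputs in the DataStream API are driven by an OutputTag; a generic sketch of the DLQ wiring, where the tag name, element types, and routing condition are all made up:

import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.functions.ProcessFunction
import org.apache.flink.util.Collector

object DlqTags {
  // Tag identifying the dead-letter side output.
  val dlq: OutputTag[String] = OutputTag[String]("dlq")
}

// Routes bad records to the DLQ side output, good ones downstream.
class RoutingFunction extends ProcessFunction[String, String] {
  override def processElement(value: String,
                              ctx: ProcessFunction[String, String]#Context,
                              out: Collector[String]): Unit =
    if (value.startsWith("bad:")) ctx.output(DlqTags.dlq, value)
    else out.collect(value)
}

// Usage: val main = stream.process(new RoutingFunction)
//        main.getSideOutput(DlqTags.dlq).addSink(dlqSink) // dead letters
//        main.addSink(outputSink)                         // normal output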
75. Agenda
1 CDC at Stripe
2 Aggregating Change Events
3 How it Started, How it Ended
76. From Idea to Production
Coverage
Platform
State
80. Observations
Infinite keys due to continuous stream of new transactions
Using a Global Window; possible windows not closing properly
No trigger timeouts firing
No watermarks being generated
82. Fix
Fixed an upstream issue where transaction IDs were getting mixed up
Reduce parallelism on Source Sub Tasks for all streams
Make sure parallelism ≤ ∑ Topic Partitions (see the sketch below)
Generally, check with SplitEnumerator classes
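A sketch of the parallelism rule; the partition counts are the ones from the example a few slides down, and everything else is illustrative:

import org.apache.flink.streaming.api.scala._

object SourceParallelismSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    // charges = 2, audits = 1, disputes = 1 partitions, per the later example.
    val totalPartitions = 2 + 1 + 1
    // Any source subtask beyond the partition count never receives data and never
    // advances its watermark, which stalls event time for the whole pipeline.
    env.setParallelism(math.min(env.getParallelism, totalPartitions))
    // ... build the job as in the Flink Job Definition slides ...
  }
}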
85. Observations
State size still growing, but slower
Event time timers firing, sometimes
Watermarks are being generated, but not for all sub tasks
86. New Observations
[Diagram: Source Sub Tasks consuming Transaction Metadata Events from charges (partitions = 2), audits (partitions = 1), and disputes (partitions = 1); one of them is a low-volume stream.]
87. Possible Fix
Switch from event time to processing time
Less precise
Could cause premature trigger firing, resulting in incomplete aggregates
88. Actual Fix
Add idleness property on sources (see the sketch below)
Can still use event time
More precise
Not perfect; can still result in incomplete aggregates in edge cases
That’s the reality of streaming
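The idleness property exists in the stock WatermarkStrategy API. A sketch; the durations are made up, and WrappedEvent is the hypothetical type from the earlier sketches:

import java.time.Duration
import org.apache.flink.api.common.eventtime.WatermarkStrategy

// If a low-volume source goes quiet for a minute, mark it idle so it stops
// holding back the combined (minimum) watermark downstream.
val strategy: WatermarkStrategy[WrappedEvent] =
  WatermarkStrategy
    .forBoundedOutOfOrderness[WrappedEvent](Duration.ofSeconds(30))
    .withIdleness(Duration.ofMinutes(1))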
92. Observations
Don’t want to redeploy every time a new dataset (Kafka Topic) is added
Blows away Freshness SLO’s error budget
Poor developer onboarding experience
93. Fix
Instead of Kafka Topic List Subscriber, use Regex Subscriber
Subscribe to all topics (for a keyspace) by default
Control plane (external) service produces an event to Broadcast Stream
On broadcast element, use Broadcast State to keep onboarded datasets in state
On element, check Broadcast State and filter for onboarded datasets
(See the sketch below.)
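A sketch of both halves using the stock Kafka connector and broadcast-state APIs; the topic pattern, descriptor name, and the (dataset, payload) element shape are assumptions:

import java.util.regex.Pattern
import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.api.common.state.MapStateDescriptor
import org.apache.flink.connector.kafka.source.KafkaSource
import org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction
import org.apache.flink.util.Collector

object OnboardingSketch {
  // Regex subscriber: new topics matching the pattern are picked up without a redeploy.
  val source: KafkaSource[String] = KafkaSource.builder[String]()
    .setBootstrapServers("kafka:9092")
    .setTopicPattern(Pattern.compile("cdc\\..*")) // all topics for a keyspace
    .setValueOnlyDeserializer(new SimpleStringSchema())
    .build()

  // Broadcast state holding the set of onboarded datasets, fed by the control plane.
  val onboarded =
    new MapStateDescriptor("onboardedDatasets", classOf[String], classOf[java.lang.Boolean])
}

// Data elements modeled as (dataset, payload); only onboarded datasets pass through.
class OnboardingFilter
    extends BroadcastProcessFunction[(String, String), String, (String, String)] {

  override def processElement(
      e: (String, String),
      ctx: BroadcastProcessFunction[(String, String), String, (String, String)]#ReadOnlyContext,
      out: Collector[(String, String)]): Unit =
    if (ctx.getBroadcastState(OnboardingSketch.onboarded).contains(e._1)) out.collect(e)

  override def processBroadcastElement(
      dataset: String,
      ctx: BroadcastProcessFunction[(String, String), String, (String, String)]#Context,
      out: Collector[(String, String)]): Unit =
    ctx.getBroadcastState(OnboardingSketch.onboarded).put(dataset, true)
}

// Usage: dataStream.connect(controlStream.broadcast(OnboardingSketch.onboarded))
//                  .process(new OnboardingFilter)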
97. Observations
Incomplete aggregates still happening, but not frequently
Kafka by default is at-least-once delivery
Many independent streams operating at different speeds
98. Fix
Move incomplete aggregate measurement out of the Flink Job and into a system downstream
New system needs to dedupe events… for all time?
Storage will be expensive. Trade-off between confidence and cost-efficiency: KV store or bloom filter (see the sketch below)
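For the bloom-filter end of that trade-off, a toy sketch using Guava's BloomFilter: no false negatives, a tunable false-positive rate, and far cheaper than an exact KV store, though with no real answer to "for all time". Sizing here is illustrative:

import java.nio.charset.StandardCharsets
import com.google.common.hash.{BloomFilter, Funnels}

object DedupeSketch {
  // ~100M expected event IDs at a 0.1% false-positive rate.
  private val seen: BloomFilter[String] = BloomFilter.create(
    Funnels.stringFunnel(StandardCharsets.UTF_8), 100000000L, 0.001)

  /** True the first time an event ID is observed; a rare false "already seen" is possible. */
  def firstTime(eventId: String): Boolean =
    if (seen.mightContain(eventId)) false
    else { seen.put(eventId); true }
}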
101. Agenda
1 CDC at Stripe
2 Aggregating Change Events
3 How it Started, How it Ended
102. Wrap Up
Squirreling Away $640 Billion: How Stripe Leverages Flink for Change Data Capture
Change Data Capture (CDC) is widely used at Stripe to improve database reliability and scalability
Flink is a critical component in Stripe’s CDC infrastructure that allows us to work with financial streaming data with high data quality guarantees
Aggregating Change Events is relatively straightforward, but the details matter
Speaker notes:
At what scale? $640B in annual payment volume. Challenging… Many products, many apps and services, many datasets. Across many databases of different types (Mongo, MySQL). Multi-region; databases have many shards, which are split as volume grows.
Watermarks are per partition, not per key. Perhaps an upstream issue; nonetheless, it could have been surfaced by testing late events. An operator’s watermark is the minimum across its parallel inputs.
Keys can go to the same partition; one key could be late while another is not. The watermark will progress and the timeout will fire, producing an incomplete aggregate. When the late key’s events arrive, they are treated as an incomplete aggregate again.
Connect with the broadcast stream: processElement -> check broadcast state; processBroadcastElement -> update state.
Union or join: streams are independent, and any one stream can have duplicates. A duplicate will result in an incomplete aggregate for that key; it won’t unless all streams have the same number of duplicates for that key, which is unlikely.
Imagine an aggregate was just completed for a key. Then a duplicate arrives and the event sits in state until timed out.