This document provides a summary and analysis of a performance evaluation comparing the big data processing engine Flink to other engines like Spark, Tez, and MapReduce. The key points are:
- Flink completes a 3.2TB TeraSort benchmark faster than Spark, Tez, and MapReduce due to its pipelined execution model which allows more overlap between stages compared to the other engines.
- While Tez and Spark attempt to overlap stages, in practice they do not due to the way tasks are scheduled and launched. MapReduce shows some overlap but is still slower.
- Flink causes fewer disk accesses during shuffling by transferring data directly from memory to memory instead of writing to disk, as the other engines do.
Iceberg: A modern table format for big data (Strata NY 2018) - Ryan Blue
Hive tables are an integral part of the big data ecosystem, but the simple directory-based design that made them ubiquitous is increasingly problematic. Netflix uses tables backed by S3 that, like other object stores, don’t fit this directory-based model: listings are much slower, renames are not atomic, and results are eventually consistent. Even tables in HDFS are problematic at scale, and reliable query behavior requires readers to acquire locks and wait.
Owen O’Malley and Ryan Blue offer an overview of Iceberg, a new open source project that defines a table layout addressing the challenges of current Hive tables, with properties specifically designed for cloud object stores such as S3. Iceberg is an Apache-licensed open source project that specifies a portable table format and standardizes many important features, including:
* All reads use snapshot isolation without locking.
* No directory listings are required for query planning.
* Files can be added, removed, or replaced atomically.
* Full schema evolution supports changes in the table over time.
* Partitioning evolution enables changes to the physical layout without breaking existing queries.
* Data files are stored as Avro, ORC, or Parquet.
* Support for Spark, Pig, and Presto.
Spark Shuffle Deep Dive (Explained In Depth) - How Shuffle Works in Spark - Bo Yang
The slides explain how shuffle works in Spark and help readers understand Spark internals in more detail. They show how the major classes are implemented, including ShuffleManager (SortShuffleManager), ShuffleWriter (SortShuffleWriter, BypassMergeSortShuffleWriter, UnsafeShuffleWriter), and ShuffleReader (BlockStoreShuffleReader).
Join operations in Apache Spark are often the biggest source of performance problems, and even full-blown exceptions, in Spark. After this talk, you will understand the two most basic methods Spark employs for joining DataFrames, down to the level of how Spark distributes the data within the cluster. You’ll also find out how to work around common errors and even handle the trickiest corner cases we’ve encountered! After this talk, you should be able to write performant joins in Spark SQL that scale and are zippy fast!
This session will cover different ways of joining tables in Apache Spark.
Speaker: Vida Ha
This talk was originally presented at Spark Summit East 2017.
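The broadcast hash join, one of the basic DataFrame join strategies the talk covers, can be illustrated outside Spark. The sketch below is plain Python, not Spark code; all function and field names are invented for illustration. The idea is the same as Spark's broadcast join: ship the small table everywhere, build a hash table from it once, and stream the large side against it with no shuffle of the large side.

```python
def broadcast_hash_join(large, small, key_large, key_small):
    """Sketch of a broadcast hash join (inner-join semantics)."""
    # Build phase: hash the (broadcast) small side by its join key.
    table = {}
    for row in small:
        table.setdefault(row[key_small], []).append(row)
    # Probe phase: each large-side row looks up matches locally,
    # so the large side never needs to be repartitioned.
    for row in large:
        for match in table.get(row[key_large], []):
            yield {**row, **match}

# Hypothetical data for illustration:
orders = [{"user_id": 1, "amount": 30}, {"user_id": 2, "amount": 50}]
users = [{"user_id": 1, "name": "ada"}]
joined = list(broadcast_hash_join(orders, users, "user_id", "user_id"))
```

The other basic strategy, sort-merge join, instead shuffles both sides by the join key so matching keys meet on the same partition, which is why it scales to two large tables but costs a full shuffle.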
Evening out the uneven: dealing with skew in Flink (Flink Forward)
Flink Forward San Francisco 2022.
When running Flink jobs, skew is a common problem that results in wasted resources and limited scalability. In the past years, we have helped our customers and users solve various skew-related issues in their Flink jobs or clusters. In this talk, we will present the different types of skew that users often run into: data skew, key skew, event time skew, state skew, and scheduling skew, and discuss solutions for each of them. We hope this will serve as a guideline to help you reduce skew in your Flink environment.
by Jun Qin & Karl Friedrich
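A common remedy for the data and key skew described above, in Flink as in other engines, is key salting: append a salt to a hot key so its records spread across several parallel subtasks, pre-aggregate per salted key, then merge. This is a minimal pure-Python sketch of the routing idea only (names and numbers are invented for illustration, not from the talk):

```python
def salted_partition(key, record_seq, num_salts, num_partitions):
    """Route a record of a hot key: derive a salt from the record so the
    key's records spread over up to num_salts partitions instead of one.
    A per-salted-key pre-aggregation then runs before the final merge."""
    salt = record_seq % num_salts
    return hash((key, salt)) % num_partitions

# Without salting, all 1000 records of the hot key would hash to a
# single partition; with 8 salts they spread over up to 8 partitions.
hot_partitions = {salted_partition("hot-key", i, 8, 32) for i in range(1000)}
```

The trade-off is a second aggregation step and slightly more state, in exchange for removing the single overloaded subtask.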
The Rise of ZStandard: Apache Spark/Parquet/ORC/Avro (Databricks)
Zstandard is a fast compression algorithm which you can use in Apache Spark in various ways. In this talk, I briefly summarize the evolution of Apache Spark in this area, four main use cases, their benefits, and the next steps:
1) ZStandard can optimize Spark local disk IO by compressing shuffle files significantly. This is very useful in K8s environments. It’s beneficial not only when you use `emptyDir` with `memory` medium, but also it maximizes OS cache benefit when you use shared SSDs or container local storage. In Spark 3.2, SPARK-34390 takes advantage of ZStandard buffer pool feature and its performance gain is impressive, too.
2) Event log compression is another area to save your storage cost on the cloud storage like S3 and to improve the usability. SPARK-34503 officially switched the default event log compression codec from LZ4 to Zstandard.
3) Zstandard data file compression can give you more benefits when you use ORC/Parquet files as your input and output. Apache ORC 1.6 already supports Zstandard, and Apache Spark enables it via SPARK-33978. The upcoming Parquet 1.12 will support Zstandard compression.
4) Last, but not least, since Apache Spark 3.0, Zstandard is used to serialize/deserialize MapStatus data instead of Gzip.
There are more community works to utilize Zstandard to improve Spark. For example, Apache Avro community also supports Zstandard and SPARK-34479 aims to support Zstandard in Spark’s avro file format in Spark 3.2.0.
Designing ETL Pipelines with Structured Streaming and Delta Lake—How to Archi... (Databricks)
Structured Streaming has proven to be the best platform for building distributed stream processing applications. Its unified SQL/Dataset/DataFrame APIs and Spark’s built-in functions make it easy for developers to express complex computations. Delta Lake, on the other hand, is the best way to store structured data, because it is an open-source storage layer that brings ACID transactions to Apache Spark and big data workloads. Together, these can make it very easy to build pipelines in many common scenarios. However, expressing the business logic is only part of the larger problem of building end-to-end streaming pipelines that interact with a complex ecosystem of storage systems and workloads. It is important for the developer to truly understand the business problem that needs to be solved. Apache Spark, being a unified analytics engine doing both batch and stream processing, often provides multiple ways to solve the same problem. So understanding the requirements carefully helps you architect a pipeline that solves your business needs in the most resource-efficient manner.
In this talk, I am going to examine a number of common streaming design patterns in the context of the following questions.
WHAT are you trying to consume? What are you trying to produce? What is the final output that the business wants? What are your throughput and latency requirements?
WHY do you really have those requirements? Would solving the requirements of the individual pipeline actually solve your end-to-end business requirements?
HOW are you going to architect the solution? And how much are you willing to pay for it?
Clarity in understanding the ‘what and why’ of any problem automatically brings much clarity on ‘how’ to architect it using Structured Streaming and, in many cases, Delta Lake.
Performant Streaming in Production: Preventing Common Pitfalls when Productio... (Databricks)
Running a stream in a development environment is relatively easy. However, some topics can cause serious issues in production when they are not addressed properly.
Deploying Flink on Kubernetes - David Anderson (Ververica)
Kubernetes has rapidly established itself as the de facto standard for orchestrating containerized infrastructures. And with the recent completion of the refactoring of Flink's deployment and process model known as FLIP-6, Kubernetes has become a natural choice for Flink deployments. In this talk we will walk through how to get Flink running on Kubernetes.
Webinar: Deep Dive on Apache Flink State - Seth Wiesman (Ververica)
Apache Flink is a world-class stateful stream processor that presents a huge variety of optional features and configuration choices to the user. Determining the optimal choice for any production environment and use case can be challenging. In this talk, we will explore and discuss the universe of Flink configuration with respect to state and state backends.
We will start with a closer look under the hood, at core data structures and algorithms, to build the foundation for understanding the impact of tuning parameters and the costs-benefit-tradeoffs that come with certain features and options. In particular, we will focus on state backend choices (Heap vs RocksDB), tuning checkpointing (incremental checkpoints, ...) and recovery (local recovery), serializers and Apache Flink's new state migration capabilities.
Join is one of the most important and critical SQL operations in most data warehouses. It is essential when we want to get insights from multiple input datasets. Over the last year, we’ve added a series of join optimizations internally at Facebook, and we recently started to contribute them back to upstream open source.
Apache Spark presentation at HasGeek Fifth Elephant
https://fifthelephant.talkfunnel.com/2015/15-processing-large-data-with-apache-spark
Covering Big Data Overview, Spark Overview, Spark Internals and its supported libraries
Building a SIMD Supported Vectorized Native Engine for Spark SQLDatabricks
Spark SQL works very well with structured row-based data. Vectorized readers and writers for Parquet/ORC can make I/O much faster. It also uses WholeStageCodeGen to improve performance through Java JIT-compiled code. However, Java JIT usually does not work very well at utilizing the latest SIMD instructions under complicated queries. Apache Arrow provides a columnar in-memory layout and SIMD-optimized kernels, as well as Gandiva, an LLVM-based SQL engine. These native libraries can accelerate Spark SQL by reducing CPU usage for both I/O and execution.
The Parquet Format and Performance Optimization Opportunities (Databricks)
The Parquet format is one of the most widely used columnar storage formats in the Spark ecosystem. Given that I/O is expensive and that the storage layer is the entry point for any query execution, understanding the intricacies of your storage format is important for optimizing your workloads.
As an introduction, we will provide context around the format, covering the basics of structured data formats and the underlying physical data storage model alternatives (row-wise, columnar and hybrid). Given this context, we will dive deeper into specifics of the Parquet format: representation on disk, physical data organization (row-groups, column-chunks and pages) and encoding schemes. Now equipped with sufficient background knowledge, we will discuss several performance optimization opportunities with respect to the format: dictionary encoding, page compression, predicate pushdown (min/max skipping), dictionary filtering and partitioning schemes. We will learn how to combat the evil that is ‘many small files’, and will discuss the open-source Delta Lake format in relation to this and Parquet in general.
This talk serves both as an approachable refresher on columnar storage as well as a guide on how to leverage the Parquet format for speeding up analytical workloads in Spark using tangible tips and tricks.
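The min/max predicate pushdown the talk mentions is easy to see in miniature. Below is a pure-Python sketch (not Parquet library code; all names are invented for illustration) of how per-row-group statistics, as stored in a Parquet footer, let a reader skip whole row groups for an equality predicate:

```python
def build_stats(row_groups):
    """Per-row-group (min, max) statistics, analogous to what Parquet
    keeps in its file footer for each column chunk."""
    return [(min(g), max(g)) for g in row_groups]

def row_groups_to_read(stats, predicate_value):
    """Predicate pushdown for `value == predicate_value`: skip any
    row group whose [min, max] range cannot contain the value."""
    return [i for i, (lo, hi) in enumerate(stats)
            if lo <= predicate_value <= hi]

# Three hypothetical row groups of a sorted column:
groups = [[1, 3, 5], [10, 12, 14], [6, 8, 9]]
stats = build_stats(groups)
```

This is also why the talk stresses sorting: if values for a column are scattered across row groups, every [min, max] range overlaps the predicate and nothing can be skipped.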
The document discusses tuning Spark parameters to optimize performance. It describes how to control Spark's resource usage through parameters like num-executors, executor-cores, and executor-memory. Advanced parameters like spark.shuffle.memoryFraction and spark.reducer.maxSizeInFlight are also covered. Dynamic allocation allows scaling resources up and down based on workload. Tips provided include tuning memory usage, choosing serialization and storage levels, setting parallelism, and avoiding operations like groupByKey. An example recommends tuning the collaborative filtering algorithm in the RW project, reducing runtime from 27 minutes to under 7 minutes.
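As a rough illustration of how executor-cores and executor-memory interact with node capacity, here is a small back-of-the-envelope helper. This is pure Python written for this summary, not from the talk; the 10% figure approximates Spark's default off-heap overhead per executor (spark.executor.memoryOverhead):

```python
def executors_per_node(node_cores, node_mem_gb, executor_cores,
                       executor_mem_gb, overhead_frac=0.10):
    """How many executors of a given size fit on one node, leaving
    room for the per-executor off-heap overhead Spark adds on top
    of the JVM heap."""
    mem_per_executor = executor_mem_gb * (1 + overhead_frac)
    by_cores = node_cores // executor_cores          # core-bound limit
    by_mem = int(node_mem_gb // mem_per_executor)    # memory-bound limit
    return min(by_cores, by_mem)

# A 16-core, 64 GB node with 4-core / 16 GB executors: cores would
# allow 4 executors, but memory (16 GB * 1.1 each) allows only 3.
```

Whichever resource binds first determines num-executors per node, which is why the two parameters have to be tuned together rather than independently.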
The common use cases of Spark SQL include ad hoc analysis, logical warehousing, query federation, and ETL processing. Spark SQL also powers the other Spark libraries, including Structured Streaming for stream processing, MLlib for machine learning, and GraphFrames for graph-parallel computation. To boost the speed of your Spark applications, you can perform optimization efforts on the queries before deploying them to production systems. Spark query plans and Spark UIs give you insight into the performance of your queries. This talk discloses how to read and tune the query plans for enhanced performance. It will also cover the major related features in the recent and upcoming releases of Apache Spark.
The columnar roadmap: Apache Parquet and Apache Arrow (DataWorks Summit)
The Hadoop ecosystem has standardized on columnar formats—Apache Parquet for on-disk storage and Apache Arrow for in-memory. With this trend, deep integration with columnar formats is a key differentiator for big data technologies. Vertical integration from storage to execution greatly improves the latency of accessing data by pushing projections and filters to the storage layer, reducing time spent in IO reading from disk, as well as CPU time spent decompressing and decoding. Standards like Arrow and Parquet make this integration even more valuable as data can now cross system boundaries without incurring costly translation. Cross-system programming using languages such as Spark, Python, or SQL can become as fast as native internal performance.
In this talk we’ll explain how Parquet is improving at the storage level, with metadata and statistics that will facilitate more optimizations in query engines in the future. We’ll detail how the new vectorized reader from Parquet to Arrow enables much faster reads by removing abstractions as well as several future improvements. We will also discuss how standard Arrow-based APIs pave the way to breaking the silos of big data. One example is Arrow-based universal function libraries that can be written in any language (Java, Scala, C++, Python, R, ...) and will be usable in any big data system (Spark, Impala, Presto, Drill). Another is a standard data access API with projection and predicate push downs, which will greatly simplify data access optimizations across the board.
Speaker: Julien Le Dem, Principal Engineer, WeWork
How to build a streaming Lakehouse with Flink, Kafka, and Hudi (Flink Forward)
Flink Forward San Francisco 2022.
With a real-time processing engine like Flink and a transactional storage layer like Hudi, it has never been easier to build end-to-end low-latency data platforms connecting sources like Kafka to data lake storage. Come learn how to blend Lakehouse architectural patterns with real-time processing pipelines with Flink and Hudi. We will dive deep on how Flink can leverage the newest features of Hudi like multi-modal indexing that dramatically improves query and write performance, data skipping that reduces the query latency by 10x for large datasets, and many more innovations unique to Flink and Hudi.
by Ethan Guo & Kyle Weller
Parquet performance tuning: the missing guide - Ryan Blue
Parquet performance tuning focuses on optimizing Parquet reads by leveraging columnar organization, encoding, and filtering techniques. Statistics and dictionary filtering can eliminate unnecessary data reads by filtering at the row group and page levels. However, these optimizations require columns to be sorted and fully dictionary encoded within files. Increasing dictionary size thresholds and decreasing row group sizes can help avoid dictionary encoding fallback and improve filtering effectiveness. Future work may include new encodings, compression algorithms like Brotli, and page-level filtering in the Parquet format.
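Dictionary filtering, mentioned above, falls out naturally from how dictionary encoding works. The sketch below is a pure-Python illustration written for this summary (not Parquet implementation code): a fully dictionary-encoded column chunk stores each distinct value once, so a reader can test a predicate against the dictionary alone and skip the chunk without decoding any rows.

```python
def dictionary_encode(values):
    """Encode a column chunk as (dictionary, indices), analogous to
    Parquet's dictionary encoding within a column chunk."""
    dictionary = sorted(set(values))
    index = {v: i for i, v in enumerate(dictionary)}
    return dictionary, [index[v] for v in values]

def chunk_matches(dictionary, wanted):
    """Dictionary filtering: if the wanted value is absent from the
    chunk's dictionary, the whole chunk can be skipped."""
    return wanted in dictionary
```

This also shows why the fallback to plain encoding hurts: once a chunk exceeds the dictionary size threshold and falls back, there is no dictionary left to filter on, which is the motivation for raising that threshold.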
Data Storage Tips for Optimal Spark Performance (Vida Ha, Databricks) - Spark Summit
Vida Ha presented best practices for storing and working with data in files for optimal Spark performance. Key tips included choosing appropriate file sizes (roughly 64 MB to 1 GB), paying attention to compression splittability (gzip files are not splittable and so cannot be read in parallel, while block-compressed formats such as Snappy-compressed Parquet remain splittable), enforcing schemas for structured formats like Parquet and Avro, and reusing Hadoop libraries to read various file formats. General tips involved controlling output file size through methods like coalesce and repartition, using sc.wholeTextFiles for non-splittable formats, and processing files individually by filename.
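Controlling output file size with coalesce/repartition comes down to simple arithmetic: divide the expected output volume by the target file size. A small helper, written here as a pure-Python sketch (the 64 MB to 1 GB range comes from the talk; everything else is illustrative):

```python
def target_partitions(total_bytes, target_file_mb=256):
    """Number of output partitions so each written file lands near the
    target size; pass the result to df.repartition(n) or coalesce(n)
    before writing."""
    target = target_file_mb * 1024 * 1024
    return max(1, -(-total_bytes // target))  # ceiling division

# e.g. ~10 GiB of output at ~256 MB per file suggests 40 partitions.
```

Writing with far more partitions than this produces the "many small files" problem; far fewer produces huge unsplittable-feeling files and poor parallelism on read.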
Flink Forward Berlin 2017: Dongwon Kim - Predictive Maintenance with Apache F... (Flink Forward)
SK telecom shares our experience of using Flink in building a solution for Predictive Maintenance (PdM). Our PdM solution named metatron PdM consists of (1) a Deep Neural Network (DNN)-based prediction model for precise prediction, and (2) a Flink-based runtime system which applies the model to a sliding window on sensor data streams. Efficient handling of multi-sensor streaming data for real-time prediction of equipment condition is a critical component of our product. In this talk, we first show why we choose Flink as a core engine for our streaming use case in which we generate real-time predictions using DNNs trained with Keras on top of TensorFlow and Theano. In addition, we present a comparative study of methods to exploit learning models on JVM such as directly using Python libraries on CPython embedded in JVM, using TensorFlow Java API (including Flink TensorFlow), and making RPC calls to TensorFlow Serving. We then explain how we implement the runtime system using Flink DataStream API, especially with event time, various window mechanisms, timestamp and watermark, custom source and sink, and checkpointing. Lastly, we present how we use the official Flink Docker image for solution delivery and the Flink metric system for monitoring and management of our solution. We hope our use case sets a good example of building a DNN-based streaming solution using Flink.
Flink Forward Berlin 2017: Pramod Bhatotia, Do Le Quoc - StreamApprox: Approx... (Flink Forward)
Approximate computing aims for efficient execution of workflows where an approximate output is sufficient instead of the exact output. The idea behind approximate computing is to compute over a representative sample instead of the entire input dataset. Thus, approximate computing — based on the chosen sample size — can make a systematic trade-off between the output accuracy and computation efficiency. Unfortunately, state-of-the-art systems for approximate computing, such as BlinkDB, ApproxHadoop, primarily target batch analytics, where the input data remains unchanged during the course of sampling. Thus, they are not well-suited for stream analytics. In this talk, we will present the design of StreamApprox, a Flink-based stream analytics system for approximate computing. StreamApprox implements an online stratified reservoir sampling algorithm in Apache Flink to produce approximate output with rigorous error bounds.
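The reservoir sampling at the heart of the approach is a classic one-pass algorithm and is easy to show in miniature. This pure-Python sketch is the plain, unstratified version (StreamApprox's contribution is running it stratified per sub-stream with error bounds, which is not reproduced here):

```python
import random

def reservoir_sample(stream, k, rng=None):
    """Keep a uniform random sample of size k from a stream of unknown
    length, in one pass and O(k) memory."""
    rng = rng or random.Random()
    reservoir = []
    for n, item in enumerate(stream):
        if n < k:
            reservoir.append(item)         # fill the reservoir first
        else:
            j = rng.randrange(n + 1)       # item n kept with prob k/(n+1)
            if j < k:
                reservoir[j] = item        # evict a uniformly chosen slot
    return reservoir
```

Because memory stays fixed at k while the stream grows, the sample size directly sets the accuracy/efficiency trade-off the talk describes.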
These are the slides that supported the presentation on Apache Flink at the ApacheCon Budapest.
Apache Flink is a platform for efficient, distributed, general-purpose data processing.
Mikio Braun – Data flow vs. procedural programming (Flink Forward)
The document discusses the differences between procedural and data flow programming styles as used in Flink. Procedural programming uses variables, loops, and functions to operate on ordered data structures. Data flow programming treats data as unordered sets and uses parallel set transformations like maps, filters, and reductions. It cannot nest operations and uses broadcast variables to combine intermediate results. The document provides examples translating algorithms like centering, sums, and linear regression from procedural to data flow styles in Flink.
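The centering example translates neatly. The sketch below is pure Python written for this summary (not from the slides): the procedural version loops over an ordered list, while the data-flow version expresses the same computation as a reduce (for the sum) followed by a map, which is the shape it would take against Flink's set-transformation APIs.

```python
from functools import reduce

# Procedural style: an explicit loop mutating an ordered structure.
def center_procedural(xs):
    mean = sum(xs) / len(xs)
    out = []
    for x in xs:
        out.append(x - mean)
    return out

# Data-flow style: the same computation as set transformations,
# a reduce to aggregate the sum, then a map over every element.
def center_dataflow(xs):
    mean = reduce(lambda a, b: a + b, xs) / len(xs)
    return list(map(lambda x: x - mean, xs))
```

In real Flink the mean would travel to the map operator as a broadcast variable, since data-flow operations cannot be nested, which is exactly the point the document makes about combining intermediate results.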
Moon soo Lee – Data Science Lifecycle with Apache Flink and Apache Zeppelin (Flink Forward)
This document discusses Apache Zeppelin and Apache Flink integration. It describes how the Flink interpreter allows users to run Flink jobs within Zeppelin notebooks, accessing features like dynamic forms, angular displays, and progress monitoring. The roadmap includes improving multi-tenancy with authentication and containers, and developing Helium as a platform for packaging and distributing analytics applications on Zeppelin.
Mohamed Amine Abdessemed – Real-time Data Integration with Apache Flink & Kafka (Flink Forward)
This document summarizes a presentation about Bouygues Telecom's use of Apache Flink for real-time data integration and processing of mobile network event logs. Bouygues Telecom processes over 4 billion logs per day from their network equipment to calculate mobile quality of experience (QoE) indicators within 60 seconds for business intelligence, diagnostics and alerting. They were previously using Hadoop for batch processing but needed a real-time solution. After evaluating Apache Spark and Flink, they chose Flink for its true streaming capabilities, backpressure handling, and high performance on limited resources. Flink helped them process a day's worth of logs in under an hour from 10 Kafka partitions across 10 TaskManagers, each with only
Apache Flink Training: DataStream API Part 1 Basic (Flink Forward)
The document provides an overview of Apache Flink's DataStream API for stream processing. It discusses key concepts like stream execution environments, data types (including tuples), transformations (such as map, filter, grouping), data sources (files, sockets, collections), sinks, and fault tolerance through checkpointing. The document also contains examples of a WordCount application using the DataStream API in Java.
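The WordCount shape of the example can be sketched without a Flink cluster. Below is a pure-Python analogue written for this summary (not the document's Java code): the running counts play the role of Flink keyed state, updated once per record as the stream is consumed, with a snapshot yielded after each element.

```python
from collections import Counter

def word_count_stream(lines):
    """Incremental word count over a stream of lines; yields the
    current counts after each line, mimicking per-record state
    updates in a keyed streaming operator."""
    counts = Counter()          # stands in for Flink keyed state
    for line in lines:
        for word in line.lower().split():
            counts[word] += 1
        yield dict(counts)      # snapshot after processing the element
```

The real DataStream version adds what this sketch omits: parallel key partitioning, checkpointed state for fault tolerance, and windowing instead of an ever-growing counter.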
Maximilian Michels – Google Cloud Dataflow on Top of Apache Flink (Flink Forward)
This document discusses Google Cloud Dataflow and how it can be executed using Apache Flink. It provides an overview of Dataflow and its API, which is similar to batch and streaming concepts in Flink. It then describes how a Dataflow program is translated to an Abstract Syntax Tree (AST) and how the AST is converted to a Flink execution graph by implementing translators for specific Dataflow transforms like ParDo and Combine. Finally, it mentions the FlinkPipelineRunner that is available on GitHub to execute Dataflow pipelines using Flink.
Introduction to Apache Flink - Fast and reliable big data processing - Till Rohrmann
This presentation introduces Apache Flink, a massively parallel data processing engine which currently undergoes the incubation process at the Apache Software Foundation. Flink's programming primitives are presented, and it is shown how easily a distributed PageRank algorithm can be implemented with Flink. Intriguing features such as dedicated memory management, Hadoop compatibility, streaming, and automatic optimisation make it a unique system in the world of Big Data processing.
This document provides an overview of a presentation comparing Apache Flink and Apache Spark. The presentation aims to address marketing claims, confusing statements, and outdated information regarding Flink vs Spark. It outlines key criteria to evaluate the two platforms, such as streaming capabilities, state management, and scalability. The document then directly compares some criteria, such as their support for iterative processing and streaming engines. The presenter hopes this evaluation framework will help others assess Flink and Spark for stream processing use cases.
This document discusses stateful stream processing. It provides examples of stateful streaming applications and describes several open source stream processors, including their programming models and approaches to fault tolerance. It also examines how different systems handle state in streaming programs and discusses the tradeoffs of various approaches.
Chris Hillman – Beyond MapReduce: Scientific Data Processing in Real-time (Flink Forward)
This document discusses processing scientific mass spectrometry data in real-time using parallel and distributed computing techniques. It describes how a mass spectrometry experiment produces terabytes of data that currently takes over 24 hours to fully process. The document proposes using MapReduce and Apache Flink to parallelize the data processing across clusters to help speed it up towards real-time analysis. Initial tests show Flink can process the data 2-3 times faster than traditional Hadoop MapReduce. Finally, it discusses simulating real-time streaming of the data using Kafka and Flink Streaming to enable processing results within 10 seconds of the experiment completing.
Bouygues Telecom is a large French telecommunications company with over 14 million customers. They developed a system called LUX to analyze massive logs from network equipment to produce real-time mobile quality of experience indicators. LUX ingests 4 billion events per day from equipment logs using Apache Kafka for real-time streaming, then uses Apache Flink to calculate key performance indicators and quality of experience metrics with a latency of less than 60 seconds to enable real-time diagnostics and business intelligence.
Albert Bifet – Apache Samoa: Mining Big Data Streams with Apache Flink (Flink Forward)
1) Apache SAMOA is a platform for mining big data streams in real-time that provides algorithms, libraries and an execution framework.
2) It allows researchers to develop and compare stream mining algorithms and practitioners to easily apply state-of-the-art algorithms to problems like sentiment analysis, spam detection and recommendations.
3) The Vertical Hoeffding Tree algorithm in SAMOA provides high parallelism and accuracy for streaming decision tree learning, outperforming native Apache Flink implementations on certain datasets while being faster on others.
S. Bartoli & F. Pompermaier – A Semantic Big Data Companion (Flink Forward)
This document introduces Okkam, an Italian company that uses Apache Flink for large-scale data integration and semantic technologies. It discusses Okkam's use of Flink for domain reasoning, RDF data processing, duplicate detection, entity linkage, and telemetry analysis. The document also provides lessons learned from Okkam's Flink experiences and suggestions for improving Flink.
Tran Nam-Luc – Stale Synchronous Parallel Iterations on Flink (Flink Forward)
This document discusses two topics: 1) Stale Synchronous Parallel (SSP) iterations on Apache Flink to address stragglers, and 2) a distributed Frank-Wolfe algorithm using SSP and a parameter server. For SSP on Flink, it describes integrating an iteration control model and API to allow iterations when worker data is within a staleness threshold. For the distributed Frank-Wolfe algorithm, it applies SSP to coordinate local atom selection and global coefficient updates via a parameter server in solving LASSO regression problems.
Flink allows users to run Hadoop MapReduce jobs without changing any code by wrapping Hadoop's APIs. It supports Hadoop data types, file systems, and functions like mappers and reducers. Specifically, Flink can run a WordCount example written using Hadoop APIs without modifications by utilizing Hadoop input/output formats and mapper/reducer functions. Going forward, Flink aims to allow injecting entire MapReduce jobs as a unit into a Flink program while supporting custom Hadoop partitioners and sorters.
Apache Flink Training: DataSet API Basics (Flink Forward)
This document provides an overview of the Apache Flink DataSet API. It introduces key concepts such as batch processing, data types including tuples, transformations like map, filter, group, and reduce, joining datasets, data sources and sinks, and an example word count program in Java. The word count example demonstrates reading text data, tokenizing strings, grouping and counting words, and writing the results. The document contains slides with code snippets and explanations of Flink's DataSet API concepts and features.
K. Tzoumas & S. Ewen – Flink Forward Keynote (Flink Forward)
This document provides information about the first conference on Apache Flink. It summarizes key aspects of the Apache Flink streaming engine, including its improved DataStream API, support for event time processing, high availability, and integration of batch and streaming capabilities. The document also outlines Flink's progress towards version 1.0, which will focus on defining public APIs and backwards compatibility, and outlines future plans such as enhancing usability features on top of the DataStream API.
William Vambenepe – Google Cloud Dataflow and Flink, Stream Processing by De... (Flink Forward)
1. Google Cloud Dataflow is a fully managed service that allows users to define data processing pipelines that can run batch or streaming computations.
2. The Dataflow programming model defines pipelines as directed graphs of transformations on collections of data elements. This provides flexibility in how computations are defined across batch and streaming workloads.
3. The Dataflow service handles graph optimization, scaling of workers, and monitoring of jobs to efficiently execute user-defined pipelines on Google Cloud Platform.
Spark is a framework for large-scale data processing that improves on MapReduce. It handles batch, iterative, and streaming workloads using a directed acyclic graph (DAG) model. Spark aims for generality, low latency, fault tolerance, and simplicity. It uses an in-memory computing model with Resilient Distributed Datasets (RDDs) and a driver-executor architecture. Common Spark performance issues relate to partitioning, shuffling data between stages, task placement, and load balancing. Evaluation tools include the Spark UI, Sar, iostat, and benchmarks like SparkBench and GroupBy tests.
Migrating ETL Workflow to Apache Spark at Scale in Pinterest (Databricks)
The document summarizes Pinterest's migration of ETL workflows from Cascading and Scalding to Spark. Key points:
- Pinterest runs Spark on AWS but manages its own clusters to avoid vendor lock-in. They have multiple Spark clusters with hundreds to thousands of nodes.
- The migration plan is to move remaining workloads from Hive, Cascading/Scalding, and Hadoop streaming to SparkSQL, PySpark, and native Spark over time. An automatic migration service helps with the process.
- Technical challenges included secondary sorting, accumulators behaving differently between frameworks, and output committer issues. Performance profiling and tuning was also important.
- Results of migrating so far include
Spark improves on Hadoop MapReduce by keeping data in-memory between jobs. It reads data into resilient distributed datasets (RDDs) that can be transformed and cached in memory across nodes for faster iterative jobs. RDDs are immutable, partitioned collections distributed across a Spark cluster. Transformations define operations on RDDs, while actions trigger computation by passing data to the driver program.
This document provides an overview of Apache Spark's architectural components through the life of simple Spark jobs. It begins with a simple Spark application analyzing airline on-time arrival data, then covers Resilient Distributed Datasets (RDDs), the cluster architecture, job execution through Spark components like tasks and scheduling, and techniques for writing better Spark applications like optimizing partitioning and reducing shuffle size.
Healthcare Claim Reimbursement using Apache Spark (Databricks)
The document discusses rewriting a claims reimbursement system using Spark. It describes how Spark provides better performance, scalability and cost savings compared to the previous Oracle-based system. Key points include using Spark for ETL to load data into a Delta Lake data lake, implementing the business logic in a reusable Java library, and seeing significant increases in processing volumes and speeds compared to the prior system. Challenges and tips for adoption are also provided.
This document discusses Spark shuffle, which is an expensive operation that involves data partitioning, serialization/deserialization, compression, and disk I/O. It provides an overview of how shuffle works in Spark and the history of optimizations like sort-based shuffle and an external shuffle service. Key concepts discussed include shuffle writers, readers, and the pluggable block transfer service that handles data transfer. The document also covers shuffle-related configuration options and potential future work.
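The map-side half of the shuffle described above reduces to bucketing records by the hash of their key. This pure-Python sketch (written for this summary; real shuffle writers also sort, spill, serialize, and compress) shows the core partitioning step that guarantees all values for one key reach the same reducer:

```python
def shuffle_write(records, num_reducers):
    """Map-side shuffle write: bucket each (key, value) record by
    hash(key) % num_reducers, so every record with the same key
    lands in the same per-reducer block."""
    buckets = [[] for _ in range(num_reducers)]
    for key, value in records:
        buckets[hash(key) % num_reducers].append((key, value))
    return buckets
```

Everything expensive about shuffle, serialization, compression, disk spills, and network transfer, happens around this step: each bucket is written to a block that the matching reducer later fetches through the block transfer service.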
From HDFS to S3: Migrate Pinterest Apache Spark Clusters (Databricks)
The document discusses Pinterest migrating their Apache Spark clusters from HDFS to S3 storage. Some key points:
1) Migrating to S3 provided significantly better performance due to the higher IOPS of modern EC2 instances compared to their older HDFS nodes. Jobs saw 25-35% improvements on average.
2) S3 is eventually consistent while HDFS is strongly consistent, so they implemented the S3Committer to handle output consistency issues during job failures.
3) Metadata operations like file moves were very slow in S3, so they optimized jobs to reduce unnecessary moves using techniques like multipart uploads to S3.
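The point about avoiding file moves can be made concrete with a toy object store in plain Python (`ToyObjectStore` is an illustrative stand-in, not Pinterest's code): S3 has no atomic rename, so a "move" is a server-side copy plus a delete, meaning a commit done via temp-file-then-rename transfers every byte twice, while writing directly to the final key (e.g. via a multipart upload) transfers it once.

```python
# Toy object store illustrating why the jobs were restructured to avoid
# moves: on S3 a "rename" is a full copy plus a delete of the source.

class ToyObjectStore:
    def __init__(self):
        self.objects = {}
        self.bytes_transferred = 0

    def put(self, key, data):
        self.objects[key] = data
        self.bytes_transferred += len(data)

    def move(self, src, dst):
        """What a 'rename' costs on S3: copy every byte, then delete."""
        data = self.objects.pop(src)
        self.put(dst, data)

payload = b"x" * 1000

# Commit via temp key + rename: the data is paid for twice.
renamed = ToyObjectStore()
renamed.put("tmp/part-0", payload)
renamed.move("tmp/part-0", "out/part-0")

# Commit by writing straight to the final key: paid for once.
direct = ToyObjectStore()
direct.put("out/part-0", payload)

print(renamed.bytes_transferred, direct.bytes_transferred)  # 2000 1000
```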
This document summarizes a presentation about Netflix's big data platform and Spark. The key points are:
1. Netflix uses Apache Spark on YARN and Mesos clusters to process batch and streaming data from sources like Cassandra and Kafka.
2. Netflix has contributed improvements to Spark's dynamic resource allocation, predicate pushdown, and support for S3 filesystems.
3. A use case showed Spark outperforming Pig for an iterative job that duplicated and aggregated data in multiple steps.
Apache Spark: The Next Gen toolset for Big Data Processing (prajods)
The Spark project from Apache (spark.apache.org) is the next generation of Big Data processing systems. It uses a new architecture and in-memory processing for orders-of-magnitude improvements in performance. Some would call it the successor to the Hadoop set of tools. Hadoop is a batch-mode Big Data processor that depends on disk-based files. Spark improves on this and supports real-time and interactive processing in addition to batch processing.
Table of contents:
1. The Big Data triangle
2. Hadoop stack and its limitations
3. Spark: An Overview
3.a. Spark Streaming
3.b. GraphX: Graph processing
3.c. MLlib: Machine Learning
4. Performance characteristics of Spark
This document discusses Netflix's use of Spark on Yarn for ETL workloads. Some key points:
- Netflix runs Spark on Yarn across 3000 EC2 nodes to process large amounts of streaming data from over 100 million daily users.
- Technical challenges included optimizing performance for S3, dynamic resource allocation, and Parquet read/write. Improvements led to up to 18x faster job completion times.
- Production Spark applications include recommender systems that analyze user behavior and personalize content across billions of profiles and titles.
700 Updatable Queries Per Second: Spark as a Real-Time Web Service (Evan Chan)
700 Updatable Queries Per Second: Spark as a Real-Time Web Service. Find out how to use Apache Spark with FiloDB for low-latency queries - something you never thought possible with Spark. Scale it down, not just scale it up!
700 Queries Per Second with Updates: Spark As A Real-Time Web Service (Spark Summit)
This document discusses using Apache Spark to enable low-latency web queries through a persistent Spark context. It introduces FiloDB, a distributed, versioned, columnar analytics database built on Spark that allows for fast, updatable queries through efficient in-memory columnar storage and filtering. The document demonstrates running over 700 SQL queries per second on a dataset of 15 million NYC taxi records loaded into FiloDB through caching of SQL parsing and use of Spark's collectAsync to enable asynchronous query execution.
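The two throughput techniques credited above (caching the expensive SQL parsing step and executing queries asynchronously, as Spark's `collectAsync` does) can be sketched in plain Python; `parse_sql` and `run_query` are hypothetical stand-ins for illustration, not FiloDB or Spark APIs.

```python
# Sketch of the two techniques: memoize the parse of repeated query
# strings, and hand execution to a thread pool so callers are not blocked.
from concurrent.futures import ThreadPoolExecutor
from functools import lru_cache

@lru_cache(maxsize=1024)
def parse_sql(query: str):
    """Stand-in for an expensive parse/plan step; cached per query string."""
    return tuple(query.lower().split())

def run_query(query: str, data):
    plan = parse_sql(query)  # cache hit after the first call
    if plan[:2] == ("select", "count"):
        return len(data)
    raise ValueError("unsupported query in this sketch")

pool = ThreadPoolExecutor(max_workers=8)  # analogous to async collect
future = pool.submit(run_query, "SELECT count", list(range(15)))
print(future.result())  # 15
```

With repeated query shapes, the parse cache turns most of the per-request cost into a dictionary lookup, which is what makes hundreds of queries per second plausible.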
The document discusses network performance profiling of Hadoop jobs. It presents results from running two common Hadoop benchmarks - Terasort and Ranked Inverted Index - on different Amazon EC2 instance configurations. The results show that the shuffle phase accounts for a significant portion (25-29%) of total job runtime. They aim to reproduce existing findings that network performance is a key bottleneck for shuffle-intensive Hadoop jobs. Some questions are also raised about inconsistencies in reported network bandwidth capabilities for EC2.
Radical Speed for SQL Queries on Databricks: Photon Under the Hood (Databricks)
Join this session to hear from the Photon product and engineering team talk about the latest developments with the project.
As organizations embrace data-driven decision-making, it has become imperative for them to invest in a platform that can quickly ingest and analyze massive amounts and types of data. With their data lakes, organizations can store all their data assets in cheap cloud object storage. But data lakes alone lack robust data management and governance capabilities. Fortunately, Delta Lake brings ACID transactions to your data lakes – making them more reliable while retaining the open access and low storage cost you are used to.
Using Delta Lake as its foundation, the Databricks Lakehouse platform delivers a simplified and performant experience with first-class support for all your workloads, including SQL, data engineering, data science & machine learning. With a broad set of enhancements in data access and filtering, query optimization and scheduling, as well as query execution, the Lakehouse achieves state-of-the-art performance to meet the increasing demands of data applications. In this session, we will dive into Photon, a key component responsible for efficient query execution.
Photon was first introduced at Spark and AI Summit 2020 and is written from the ground up in C++ to take advantage of modern hardware. It uses the latest techniques in vectorized query processing to capitalize on data- and instruction-level parallelism in CPUs, enhancing performance on real-world data and applications — all natively on your data lake. Photon is fully compatible with the Apache Spark™ DataFrame and SQL APIs to ensure workloads run seamlessly without code changes. Come join us to learn more about how Photon can radically speed up your queries on Databricks.
Scala-like distributed collections - dumping time-series data with Apache Spark (Demi Ben-Ari)
Spark RDDs are almost identical to Scala collections, just distributed: all of the transformations and actions are derived from the Scala collections API.
As Martin Odersky mentioned, “Spark - The Ultimate Scala Collections” is the right way to look at RDDs. But with that great distributed power comes a great many data problems: at first you’ll start tackling the concept of partitioning, then the actual data becomes the next thing to worry about.
In the talk we’ll go through an overview on Spark's architecture, and see how similar RDDs are to the Scala collections API. We'll then shift to the world of problems that you’ll be facing when using Spark for processing a vast volume of time-series data with multiple data stores (S3, MongoDB, Apache Cassandra, MySQL).
When you start tackling many scale and performance problems, many questions arise:
> How to handle missing data?
> Should the system handle both serving and backend processes, or should we separate them out?
> Which solution is cheaper?
> How do we get the best performance for money spent?
In the talk we will tell the tale of all of the transformations we’ve made to our data and review the multiple data persistency layers... and I’ll try my best NOT to answer the question “which persistency layer is the best?” but I do promise to share our pains and lessons learned!
S3, Cassandra or Outer Space? Dumping Time Series Data using Spark - Demi Be... (Codemotion)
The document discusses time series data processing using Apache Spark. It begins with an introduction of the speaker and an overview of Spark. It then describes the structure and flow of maritime location data. The challenges of processing this streaming time series data are discussed, including delays in data arrival that can cause holes in computations. The document outlines the evolution of their solution from using MongoDB to using Apache Spark with Cassandra and S3 for improved performance and scalability. It concludes with lessons about understanding data characteristics and choosing the right persistence layers.
Presentations from the Cloudera Impala meetup on Aug 20 2013 (Cloudera, Inc.)
Presentations from the Cloudera Impala meetup on Aug 20 2013:
- Nong Li on Parquet+Impala and UDF support
- Henry Robinson on performance tuning for Impala
In these slides we analyze why the aggregate data models change the way data is stored and manipulated. We introduce MapReduce and its open source implementation Hadoop. We consider how MapReduce jobs are written and executed by Hadoop.
Finally we introduce spark using a docker image and we show how to use anonymous function in spark.
The topics of the next slides will be
- Spark Shell (Scala, Python)
- Shark Shell
- Data Frames
- Spark Streaming
- Code Examples: Data Processing and Machine Learning
This presentation aims to cover Apache Spark Performance and Tuning Takeaways by focusing Data Structures, Persistency, Partitioning, Event Sourcing on Transformations and Checkpointing.
Similar to Dongwon Kim – A Comparative Performance Evaluation of Flink (20)
Building a fully managed stream processing platform on Flink at scale for Lin... (Flink Forward)
Apache Flink is a distributed stream processing framework that allows users to process and analyze data in real-time. At LinkedIn, we developed a fully managed stream processing platform on Flink running on K8s to power hundreds of stream processing pipelines in production. This platform is the backbone for other infra systems like Search, Espresso (internal document store) and feature management etc. We provide a rich authoring and testing environment which allows users to create, test, and deploy their streaming jobs in a self-serve fashion within minutes. Users can focus on their business logic, leaving the Flink platform to take care of management aspects such as split deployment, resource provisioning, auto-scaling, job monitoring, alerting, failure recovery and much more. In this talk, we will introduce the overall platform architecture, highlight the unique value propositions that it brings to stream processing at LinkedIn and share the experiences and lessons we have learned.
“Alexa, be quiet!”: End-to-end near-real time model building and evaluation i... (Flink Forward)
Flink Forward San Francisco 2022.
To improve Amazon Alexa experiences and support machine learning inference at scale, we built an automated end-to-end solution for incremental model building or fine-tuning machine learning models through continuous learning, continual learning, and/or semi-supervised active learning. Customer privacy is our top concern at Alexa, and as we build solutions, we face unique challenges when operating at scale such as supporting multiple applications with tens of thousands of transactions per second with several dependencies including near-real time inference endpoints at low latencies. Apache Flink helps us transform and discover metrics in near-real time in our solution. In this talk, we will cover the challenges that we faced, how we scale the infrastructure to meet the needs of ML teams across Alexa, and go into how we enable specific use cases that use Apache Flink on Amazon Kinesis Data Analytics to improve Alexa experiences to delight our customers while preserving their privacy.
by
Aansh Shah
Introducing BinarySortedMultiMap - A new Flink state primitive to boost your ... (Flink Forward)
Flink Forward San Francisco 2022.
Probably everyone who has written stateful Apache Flink applications has used one of the fault-tolerant keyed state primitives ValueState, ListState, and MapState. With RocksDB, however, retrieving and updating items comes at an increased cost that you should be aware of. Sometimes, these costs may not be avoidable with the current API, e.g., for efficient event-time stream-sorting or streaming joins where you need to iterate one or two buffered streams in the right order. With FLIP-220, we are introducing a new state primitive: BinarySortedMultiMapState. This new form of state enables you to (a) efficiently store lists of values for a user-provided key, and (b) iterate keyed state in a well-defined sort order. Both features can be backed efficiently by RocksDB with a 2x performance improvement over the current workarounds. This talk will go into the details of the new API and its implementation, present how to use it in your application, and talk about the process of getting it into Flink.
by
Nico Kruber
Introducing the Apache Flink Kubernetes Operator (Flink Forward)
Flink Forward San Francisco 2022.
The Apache Flink Kubernetes Operator provides a consistent approach to manage Flink applications automatically, without any human interaction, by extending the Kubernetes API. Given the increasing adoption of Kubernetes-based Flink deployments, the community has been working on a Kubernetes-native solution as part of Flink that can benefit from the rich experience of community members and ultimately make Flink easier to adopt. In this talk we give a technical introduction to the Flink Kubernetes Operator and demonstrate the core features and use cases through in-depth examples.
by
Thomas Weise
Flink Forward San Francisco 2022.
Resource Elasticity is a frequently requested feature in Apache Flink: Users want to be able to easily adjust their clusters to changing workloads for resource efficiency and cost saving reasons. In Flink 1.13, the initial implementation of Reactive Mode was introduced, later releases added more improvements to make the feature production ready. In this talk, we’ll explain scenarios to deploy Reactive Mode to various environments to achieve autoscaling and resource elasticity. We’ll discuss the constraints to consider when planning to use this feature, and also potential improvements from the Flink roadmap. For those interested in the internals of Flink, we’ll also briefly explain how the feature is implemented, and if time permits, conclude with a short demo.
by
Robert Metzger
Dynamically Scaling Data Streams across Multiple Kafka Clusters with Zero Fli... (Flink Forward)
Flink Forward San Francisco 2022.
Flink consumers read from Kafka as a scalable, high throughput, and low latency data source. However, there are challenges in scaling out data streams where migration and multiple Kafka clusters are required. Thus, we introduced a new Kafka source to read sharded data across multiple Kafka clusters in a way that conforms well with elastic, dynamic, and reliable infrastructure. In this presentation, we will present the source design and how the solution increases application availability while reducing maintenance toil. Furthermore, we will describe how we extended the existing KafkaSource to provide mechanisms to read logical streams located on multiple clusters, to dynamically adapt to infrastructure changes, and to perform transparent cluster migrations and failover.
by
Mason Chen
One sink to rule them all: Introducing the new Async Sink (Flink Forward)
Flink Forward San Francisco 2022.
Next time you want to integrate with a new destination for a demo, concept or production application, the Async Sink framework will bootstrap development, allowing you to move quickly without compromise. In Flink 1.15 we introduced the Async Sink base (FLIP-171), with the goal to encapsulate common logic and allow developers to focus on the key integration code. The new framework handles things like request batching, buffering records, applying backpressure, retry strategies, and at least once semantics. It allows you to focus on your business logic, rather than spending time integrating with your downstream consumers. During the session we will dive deep into the internals to uncover how it works, why it was designed this way, and how to use it. We will code up a new sink from scratch and demonstrate how to quickly push data to a destination. At the end of this talk you will be ready to start implementing your own Flink sink using the new Async Sink framework.
by
Steffen Hausmann & Danny Cranmer
Flink powered stream processing platform at Pinterest (Flink Forward)
Flink Forward San Francisco 2022.
Pinterest is a visual discovery engine that serves over 433MM users. Stream processing allows us to unlock value from realtime data for pinners. At Pinterest, we adopted Flink as the unified stream processing engine. In this talk, we will share our journey in building a stream processing platform with Flink and how we onboarded critical use cases to the platform. Pinterest has supported 90+ near-realtime streaming applications. We will cover the problem statement, how we evaluated potential solutions, and our decision to build the framework.
by
Rainie Li & Kanchi Masalia
Flink Forward San Francisco 2022.
This talk will take you on the long journey of Apache Flink into the cloud-native era. It started all the way from where Hadoop and YARN were the standard way of deploying and operating data applications.
We're going to deep dive into the cloud-native set of principles and how they map to the Apache Flink internals and recent improvements. We'll cover fast checkpointing, fault tolerance, resource elasticity, minimal infrastructure dependencies, industry-standard tooling, ease of deployment and declarative APIs.
After this talk you'll get a broader understanding of the operational requirements for a modern streaming application and where the current limits are.
by
David Moravek
Where is my bottleneck? Performance troubleshooting in Flink (Flink Forward)
Flink Forward San Francisco 2022.
In this talk, we will cover various topics around performance issues that can arise when running a Flink job and how to troubleshoot them. We'll start with the basics, like understanding what the job is doing and what backpressure is. Next, we will see how to identify bottlenecks and which tools or metrics can be helpful in the process. Finally, we will also discuss potential performance issues during the checkpointing or recovery process, as well as some tips and Flink features that can speed up checkpointing and recovery times.
by
Piotr Nowojski
Using the New Apache Flink Kubernetes Operator in a Production Deployment (Flink Forward)
Flink Forward San Francisco 2022.
Running natively on Kubernetes, using the new Apache Flink Kubernetes Operator is a great way to deploy and manage Flink application and session deployments. In this presentation, we provide:
- A brief overview of Kubernetes operators and their benefits
- An introduction to the five levels of the operator maturity model
- An introduction to the newly released Apache Flink Kubernetes Operator and FlinkDeployment CRs
- Dockerfile modifications you can make to swap out UBI images and Java of the underlying Flink Operator container
- Enhancements we're making in versioning/upgradeability/stability and security
- A demo of the Apache Flink Operator in action, with a technical preview of an upcoming product using the Flink Kubernetes Operator
- Lessons learned
- Q&A
by
James Busche & Ted Chang
Flink Forward San Francisco 2022.
The Table API is one of the most actively developed components of Flink in recent time. Inspired by databases and SQL, it encapsulates concepts many developers are familiar with. It can be used with both bounded and unbounded streams in a unified way. But from afar it can be difficult to keep track of what this API is capable of and how it relates to Flink's other APIs. In this talk, we will explore the current state of Table API. We will show how it can be used as a batch processor, a changelog processor, or a streaming ETL tool with many built-in functions and operators for deduplicating, joining, and aggregating data. By comparing it to the DataStream API we will highlight differences and elaborate on when to use which API. We will demonstrate hybrid pipelines in which both APIs interact with one another and contribute their unique strengths. Finally, we will take a look at some of the most recent additions as a first step to stateful upgrades.
by
David Anderson
Flink Forward San Francisco 2022.
Based on the new Flink-Pulsar connector, we implemented Flink's TableAPI and Catalog to help users to interact with the Pulsar cluster via Flink SQL easily. We would like to go through the design and implementation of the SQL connector in the following aspects:
1. Two different modes of use Pulsar as a metadata store
2. Data format transformation and management
3. SQL semantics support within Pulsar context
by
Sijie Guo & Neng Lu
Dynamic Rule-based Real-time Market Data Alerts (Flink Forward)
Flink Forward San Francisco 2022.
At Bloomberg, we deal with high volumes of real-time market data. Our clients expect to be notified of any anomalies in this market data, which may indicate volatile movements in the markets, notable trades, forthcoming events, or system failures. The parameters for these alerts are always evolving and our clients can update them dynamically. In this talk, we'll cover how we utilized the open source Apache Flink and Siddhi SQL projects to build a distributed, scalable, low-latency and dynamic rule-based, real-time alerting system to solve our clients' needs. We'll also cover the lessons we learned along our journey.
by
Ajay Vyasapeetam & Madhuri Jain
Exactly-Once Financial Data Processing at Scale with Flink and Pinot (Flink Forward)
Flink Forward San Francisco 2022.
At Stripe we have created a complete end to end exactly-once processing pipeline to process financial data at scale, by combining the exactly-once power from Flink, Kafka, and Pinot together. The pipeline provides exactly-once guarantee, end-to-end latency within a minute, deduplication against hundreds of billions of keys, and sub-second query latency against the whole dataset with trillion level rows. In this session we will discuss the technical challenges of designing, optimizing, and operating the whole pipeline, including Flink, Kafka, and Pinot. We will also share our lessons learned and the benefits gained from exactly-once processing.
by
Xiang Zhang & Pratyush Sharma & Xiaoman Dong
Processing Semantically-Ordered Streams in Financial Services (Flink Forward)
Flink Forward San Francisco 2022.
What if my data is already in order? Stream Processing has given us an elegant and powerful solution for running analytic queries and logic over high volumes of continuously arriving data. However, in both Apache Flink and Apache Beam, the notion of time-ordering is baked in at a very low level, making it difficult to express computations that are interested in a semantic-, rather than time-ordering of the data. In financial services, what often matters the most about the data moving between systems is not when the data was created, but in what order, to the extent that many institutions engineer a global sequencing over all data entering and produced by their systems to achieve complete determinism. How, then, can financial institutions and others best employ Stream Processing on streams of data that are already ordered? I will cover various techniques that can make this work, as well as seek input from the community on how Flink might be improved to better support these use-cases.
by
Patrick Lucas
Tame the small files problem and optimize data layout for streaming ingestion... (Flink Forward)
Flink Forward San Francisco 2022.
In modern data platform architectures, stream processing engines such as Apache Flink are used to ingest continuous streams of data into data lakes such as Apache Iceberg. Streaming ingestion to Iceberg tables can suffer from two problems: (1) a small-files problem that can hurt read performance, and (2) poor data clustering that can make file pruning less effective. To address those two problems, we propose adding a shuffling stage to the Flink Iceberg streaming writer. The shuffling stage can intelligently group data via bin packing or range partition. This can reduce the number of concurrent files that every task writes. It can also improve data clustering. In this talk, we will explain the motivations in detail and dive into the design of the shuffling stage. We will also share evaluation results that demonstrate the effectiveness of smart shuffling.
by
Gang Ye & Steven Wu
Batch Processing at Scale with Flink & Iceberg (Flink Forward)
Flink Forward San Francisco 2022.
Goldman Sachs's Data Lake platform serves as the firm's centralized data platform, ingesting 140K (and growing!) batches per day of Datasets of varying shape and size. Powered by Flink and using metadata configured by platform users, ingestion applications are generated dynamically at runtime to extract, transform, and load data into centralized storage where it is then exported to warehousing solutions such as Sybase IQ, Snowflake, and Amazon Redshift. Data Latency is one of many key considerations as producers and consumers have their own commitments to satisfy. Consumers range from people/systems issuing queries, to applications using engines like Spark, Hive, and Presto to transform data into refined Datasets. Apache Iceberg allows our applications to not only benefit from consistency guarantees important when running on eventually consistent storage like S3, but also allows us the opportunity to improve our batch processing patterns with its scalability-focused features.
by
Andreas Hailu
Flink Forward San Francisco 2022.
At Flink Forward, we get to hear creative, unique use cases, often on the bleeding edge of some of the most exciting current technologies. This talk will give you a chance to open up the hood on our driven and innovative open source community. I will cover what our community has been working on this past year, and how this work relates to our (Ververica's) exciting new Flink engineering roadmap! I will also go through some best practices and upcoming opportunities for getting involved in this community!
by
Caito Scherr
Practical learnings from running thousands of Flink jobs (Flink Forward)
Flink Forward San Francisco 2022.
Task Managers constantly running out of memory? Flink job keeps restarting from cryptic Akka exceptions? Flink job running but doesn't seem to be processing any records? We share practical learnings from running thousands of Flink jobs for different use cases and take a look at common challenges we have experienced, such as out-of-memory errors, timeouts, and job stability. We will cover memory tuning, S3 and Akka configurations to address common pitfalls, and the approaches we take to automating health monitoring and management of Flink jobs at scale.
by
Hong Teoh & Usamah Jassat
This view offers valuable insights for security professionals, policymakers, and industry leaders across various sectors, highlighting the importance of robust data protection measures and collaborative international efforts to safeguard sensitive biological information. The analysis serves as a crucial resource for understanding the complex dynamics at the intersection of biotechnology and security, providing actionable recommendations to enhance biosecurity in an digital and interconnected world.
The evolving landscape of biology and biotechnology, significantly influenced by advancements in computer science, engineering, and data science, is reshaping our understanding and manipulation of biological systems. The integration of these disciplines has led to the development of fields such as computational biology and synthetic biology, which utilize computational power and engineering principles to solve complex biological problems and innovate new biotechnological applications. This interdisciplinary approach has not only accelerated research and development but also introduced new capabilities such as gene editing and biomanufact
Keynote : Presentation on SASE TechnologyPriyanka Aash
Secure Access Service Edge (SASE) solutions are revolutionizing enterprise networks by integrating SD-WAN with comprehensive security services. Traditionally, enterprises managed multiple point solutions for network and security needs, leading to complexity and resource-intensive operations. SASE, as defined by Gartner, consolidates these functions into a unified cloud-based service, offering SD-WAN capabilities alongside advanced security features like secure web gateways, CASB, and remote browser isolation. This convergence not only simplifies management but also enhances security posture and application performance across global networks and cloud environments. Discover how adopting SASE can streamline operations and fortify your enterprise's digital transformation strategy.
2. About Me
• Postdoctoral researcher @ POSTECH
• Research interest
• Design and implementation of distributed systems
• Performance optimization of big data processing engines
• Doctoral thesis
• MR2: Fault Tolerant MapReduce with the Push Model
• Personal blog
• http://eastcirclek.blogspot.kr
• Why I’m here
3. Outline
• TeraSort for various engines
• Experimental setup
• Results & analysis
• What else for better performance?
• Conclusion
4. TeraSort
• Hadoop MapReduce program for the annual terabyte sort competition
• TeraSort is essentially distributed sort (DS)
[Diagram: two nodes each read records from disk, locally sort them, shuffle them by key range, locally sort the received records, and write them back to disk, producing a total order across the nodes.]
Typical DS phases: read → local sort → shuffling → local sort → write
5. TeraSort for MapReduce
• Included in Hadoop distributions, together with TeraGen & TeraValidate
• Identity map & reduce functions
• Range partitioner built on sampling
• To guarantee a total order & to prevent partition skew
• Sampling computes boundary points within a few seconds
• DS phases: read → local sort → shuffling → local sort → write
• Map task: read → map → sort; Reduce task: shuffling → sort → reduce → write
[Diagram: boundary points split the record range into partitions 1, 2, …, r.]
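As a sketch of how such a sampling-based range partitioner works (illustrative Python, not the Hadoop implementation; the key distribution, sample size, and partition count are arbitrary assumptions):

```python
import bisect
import random

def compute_boundaries(sample, num_partitions):
    """Pick num_partitions-1 boundary keys from a sorted sample so that
    each partition covers roughly the same share of the key range."""
    sample = sorted(sample)
    step = len(sample) / num_partitions
    return [sample[int(step * i)] for i in range(1, num_partitions)]

def partition_of(key, boundaries):
    """Binary-search the boundary list: partition i holds the keys
    between boundary i-1 and boundary i."""
    return bisect.bisect_right(boundaries, key)

# Sample a fraction of the input, derive boundary points, then route keys.
random.seed(0)
keys = [random.randrange(10**6) for _ in range(100_000)]
boundaries = compute_boundaries(random.sample(keys, 1000), num_partitions=4)
parts = [[] for _ in range(4)]
for k in keys:
    parts[partition_of(k, boundaries)].append(k)

# Concatenating the individually sorted partitions yields a total order.
assert sorted(keys) == [k for p in parts for k in sorted(p)]
```

Because every key in partition i is no greater than any key in partition i+1, no cross-partition merge is needed afterwards, which is what makes the total order (and the sort benchmark's validation) work.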
6. TeraSort for Tez
• Tez can execute TeraSort for MapReduce w/o any modification
• mapreduce.framework.name = yarn-tez
• Tez DAG plan of TeraSort for MapReduce
[Diagram: the initialmap vertex (map tasks: read → map → sort) feeds the finalreduce vertex (reduce tasks: shuffling → sort → reduce → write), from input data to output data.]
7. TeraSort for Spark & Flink
• My source code in GitHub:
• https://github.com/eastcirclek/terasort
• Sampling-based range partitioner from TeraSort for MapReduce
• Visit my personal blog for a detailed explanation
• http://eastcirclek.blogspot.kr
8. TeraSort for Spark
• Code: two RDDs
• RDD1 (newAPIHadoopFile): a new RDD to read from HDFS; # partitions = # blocks
• RDD2 (repartitionAndSortWithinPartitions): repartitions the parent RDD based on the user-specified partitioner and writes output to HDFS
• Stage 0 runs Shuffle-Map Tasks (read → sort); Stage 1 runs Result Tasks (shuffling → sort → write)
• DS phases: read → local sort → shuffling → local sort → write
9. TeraSort for Flink
• Code: a pipeline consisting of four operators
• DataSource: creates a dataset to read tuples from HDFS
• Partition: partitions tuples
• SortPartition: sorts tuples of each partition
• DataSink: writes output to HDFS
• No map-side sorting due to pipelined execution
• DS phases: read → shuffling → local sort → write
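The pipelined execution above can be pictured with generators: records flow downstream as soon as they are produced, and only SortPartition has to buffer its input. This is a single-process Python analogy, not Flink's actual runtime:

```python
import bisect

def data_source(records):
    # Emit records one at a time; nothing is materialized up front.
    for r in records:
        yield r

def partition(stream, boundaries):
    # Tag each record with its target partition as it streams past.
    for r in stream:
        yield bisect.bisect_right(boundaries, r), r

def sort_partition(stream, num_partitions):
    # The only blocking operator: each partition is buffered and sorted.
    buckets = [[] for _ in range(num_partitions)]
    for p, r in stream:
        buckets[p].append(r)
    for b in buckets:
        yield from sorted(b)

def data_sink(stream):
    # Collect the final output (standing in for a write to HDFS).
    return list(stream)

records = [5, 1, 9, 3, 7, 2, 8]
out = data_sink(sort_partition(partition(data_source(records), [5]), 2))
assert out == sorted(records)
```

The upstream operators never wait for a stage boundary, which mirrors why Flink's DataSource and Partition work overlaps with shuffling in the graphs later in the deck.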
10. Importance of TeraSort
• Suitable for measuring the pure performance of big data engines
• No data transformation (like map, filter) with user-defined logic
• Basic facilities of each engine are used
• “Winning the sort benchmark” is a great means of PR
11. Outline
• TeraSort for various engines
• Experimental setup
• Machine specification
• Node configuration
• Results & analysis
• What else for better performance?
• Conclusion
12. Machine specification (42 identical machines)
DELL PowerEdge R610
• CPU: two Intel Xeon X5650 processors (12 cores total)
• Memory: 24GB total
• Disk: 6 disks * 500GB/disk
• Network: 10 Gigabit Ethernet

              My machine              Spark team
Processor     Intel Xeon X5650        Intel Xeon E5-2670
              (Q1, 2010)              (Q1, 2012)
Cores         6 * 2 processors        8 * 4 processors
Memory        24GB                    244GB
Disks         6 HDDs                  8 SSDs

Results can be different on newer machines.
14. Outline
• TeraSort for various engines
• Experimental setup
• Results & analysis
• Flink is faster than other engines due to its pipelined execution
• What else for better performance?
• Conclusion
15. How to read a swimlane graph & throughput graphs
• Swimlane graph: tasks (y-axis) vs. time since job starts in seconds (x-axis)
• Each line: duration of each task
• Different patterns for different stages
• In the example: 6 waves of 1st-stage tasks, 1 wave of 2nd-stage tasks, and the two stages are hardly overlapped
• Throughput graphs: cluster network throughput (in/out) & cluster disk throughput (read/write)
• In the example: no network traffic during the 1st stage
16. Result of sorting 80GB/node (3.2TB)

             MapReduce          Tez-0.7.0*   Spark-1.5.1*   Flink-0.9.1
             in Hadoop-2.7.1
Time (sec)   2157               1887         2171           1480

* Map output compression turned on for Spark and Tez

• Flink is the fastest due to its pipelined execution (1 DataSource → 2 Partition → 3 SortPartition → 4 DataSink)
• Tez and Spark do not overlap 1st and 2nd stages
• MapReduce is slow despite overlapping stages
[Swimlane graphs: MapReduce shows overlapping 1st/2nd stages; Tez and Spark run the two stages back to back; Flink runs all four operators pipelined.]
17. Tez and Spark do not overlap 1st and 2nd stages
[Throughput graphs for Tez and Spark:
(1) the 2nd stage starts but stays idle, with no network traffic;
(2) the output of the 1st stage is sent only afterwards;
(3) disk write to HDFS occurs after shuffling is done.]
[Throughput graphs for Flink (1 DataSource, 2 Partition, 3 SortPartition, 4 DataSink):
(1) network traffic occurs from the start;
(2) write to HDFS occurs right after shuffling is done.]
18. Tez does not overlap 1st and 2nd stages
• Tez has parameters to control the degree of overlap
• tez.shuffle-vertex-manager.min-src-fraction : 0.2
• tez.shuffle-vertex-manager.max-src-fraction : 0.4
• However, 2nd-stage tasks are scheduled early but launched late
19. Spark does not overlap 1st and 2nd stages
• Spark cannot execute multiple stages simultaneously
• Also mentioned in the following VLDB paper (2015):
  "Spark doesn't support the overlap between shuffle write and read stages. … Spark may want to support this overlap in the future to improve performance."
• Experimental results of this paper:
• Spark is faster than MapReduce for WordCount, K-means, PageRank.
• MapReduce is faster than Spark for Sort.
20. MapReduce is slow despite overlapping stages
• mapreduce.job.reduce.slowstart.completedMaps : [0.0, 1.0]
• 0.05 (overlapping, default): 2157 sec
• 0.95 (no overlapping): 2385 sec
• Overlapping yields only a 10% improvement
• Wang proposes to overlap Spark stages to achieve better utilization
• Why do Spark & MapReduce improve by just 10%?
[Swimlane graphs: with slowstart 0.05 the 1st and 2nd stages overlap; with 0.95 they run back to back.]
21. Data transfer between tasks of different stages
• Traditional pull model (used in MapReduce, Spark, Tez)
• (1) The producer task writes its output file, containing partitions P1 … Pn, to disk
• (2) Each consumer task requests its partition from a shuffle server
• (3) The shuffle server reads the partition from disk and sends it
• Extra disk accesses & simultaneous disk accesses
• Shuffling affects the performance of producers, which is why overlapping leads to only a 10% improvement
• Pipelined data transfer (used in Flink)
• Data transfer from memory to memory
• Flink causes fewer disk accesses during shuffling
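A toy accounting of the two models (illustrative Python; real engines additionally spill, batch, and use the page cache, so the counts are schematic only):

```python
def pull_model(num_partitions):
    """Pull model: producer writes to disk, shuffle server re-reads on request."""
    disk_ops = 0
    disk = {}
    # (1) The producer writes every partition of its output file to disk.
    for p in range(num_partitions):
        disk[p] = f"data-{p}"
        disk_ops += 1            # disk write by the producer
    # (2)+(3) The shuffle server re-reads each partition per consumer request.
    received = []
    for p in range(num_partitions):
        received.append(disk[p])
        disk_ops += 1            # disk read by the shuffle server
    return received, disk_ops

def push_model(num_partitions):
    """Pipelined transfer: each partition goes memory to memory."""
    received = [f"data-{p}" for p in range(num_partitions)]
    return received, 0           # no shuffle-related disk access

pulled, pull_ops = pull_model(4)
pushed, push_ops = push_model(4)
assert pulled == pushed          # both models deliver the same data
assert pull_ops == 8 and push_ops == 0
```

The delivered data is identical; the difference is purely the extra disk traffic on the producer's node, which in the pull model competes with the producer's own reads and writes.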
22. Flink causes fewer disk accesses during shuffling

                        MapReduce   Flink   diff.
Total disk write (TB)      9.9       6.5     3.4
Total disk read  (TB)      8.1       6.9     1.2

• The difference comes from shuffling
• The read difference is smaller because shuffled data are sometimes read from the page cache
[Cluster disk throughput graphs for MapReduce and Flink: the total amount of disk read/write equals the area of the blue/green region.]
23. Result of TeraSort with various data sizes

node data size (GB)   Flink   Spark   MapReduce    Tez
        10              157     387        259      277
        20              350     652        555      729
        40              741    1135       1085     1709
        80             1480    2171       2157     1887
       160             3127    4927       4796     3950

* Map output compression turned on for Spark and Tez
• The 80GB/node row is what we've seen so far
[Chart: time (seconds, log scale) vs. node data size (GB) for all four engines.]
24. Result of HashJoin
• 10 slave nodes
• org.apache.tez.examples.JoinDataGen
• Small dataset: 256MB
• Large dataset: 240GB (24GB/node)
• Result: Tez-0.7.0 770 sec, Spark-1.5.1 1538 sec, Flink-0.9.1 378 sec
• Flink is ~2x faster than Tez and ~4x faster than Spark
• Visit my blog for details
* No map output compression for both Spark and Tez, unlike in TeraSort
25. Result of HashJoin with swimlane & throughput graphs
• Flink pipeline: 1 DataSource, 2 DataSource, 3 Join, 4 DataSink; the 2nd and 3rd operators overlap
• Tez and Spark show idle periods between stages
[Cluster network and disk throughput graphs, annotated with total traffic volumes: 0.24 TB, 0.41 TB, 0.60 TB, 0.84 TB, 0.68 TB, 0.74 TB.]
26. Flink’s shortcoming
• No support for map output compression
• Small data blocks are pipelined between operators
• Job-level fault tolerance
• Shuffle data are not materialized
• Low disk throughput during the post-shuffling phase
27. Low disk throughput during the post-shuffling phase
• Possible reason: sorting records from small files
• Concurrent disk access to small files → too many disk seeks → low disk throughput
• Other engines merge records from larger files than Flink
• "Eager pipelining moves some of the sorting work from the mapper to the reducer"
  (from MapReduce Online, NSDI 2010)
[Disk throughput graphs for Flink, Tez, and MapReduce during the post-shuffling phase.]
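The post-shuffling sort can be sketched as a k-way merge over sorted runs. The output is identical either way; the slide's point is that many small runs mean many files read concurrently, hence many seeks on HDDs (illustrative Python, not Flink's merger):

```python
import heapq

def merge_runs(runs):
    """k-way merge of sorted runs, as a post-shuffle sorter would do."""
    return list(heapq.merge(*runs))

records = list(range(100))
# Pipelining leaves many small sorted runs on the reduce side...
many_small = [sorted(records[i::20]) for i in range(20)]
# ...while map-side sorting ships fewer, larger runs.
few_large = [sorted(records[i::4]) for i in range(4)]

assert merge_runs(many_small) == merge_runs(few_large) == sorted(records)
# Same result, but merging 20 runs keeps 5x as many files open at once,
# which on spinning disks translates into many more seeks.
```

This is why "eager pipelining moves some of the sorting work from the mapper to the reducer": the work the mapper skipped reappears as a wider, seek-heavy merge.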
28. Outline
• TeraSort for various engines
• Experimental setup
• Results & analysis
• What else for better performance?
• Conclusion
29. MR2 – another MapReduce engine
• PhD thesis
• MR2: Fault Tolerant MapReduce with the Push Model
• developed for 3 years
• Provide the user interface of Hadoop MapReduce
• No DAG support
• No in-memory computation
• No iterative-computation
• Characteristics
• Push model + Fault tolerance
• Techniques to boost HDD throughput
• Prefetching for mappers
• Preloading for reducers
30. MR2 pipeline
• 7 types of components with memory buffers
1. Mappers & reducers : to apply user-defined functions
2. Prefetcher & preloader : to eliminate concurrent disk access
3. Sender & receiver & merger : to implement MR2's push model
• Various buffers : to pass data between components w/o disk IOs
• Minimum disk access: 2 disk reads (R1, R2) & 2 disk writes (W1, W2)
• +1 disk write (W3) for fault tolerance
31. Prefetcher & Mappers
• Prefetcher loads data for multiple mappers
• Mappers do not read input from disks
[Diagram, 2 mappers on a node: in Hadoop MapReduce, Mapper1 and Mapper2 each read their own blocks (Blk1, Blk2, …), so disk throughput and CPU utilization fluctuate; in MR2, the prefetcher reads Blk1–Blk4 ahead of the mappers, keeping disk throughput and CPU utilization high.]
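A minimal sketch of the prefetcher idea, assuming a single background thread filling a bounded in-memory buffer (illustrative Python, not MR2's implementation):

```python
import queue
import threading

def prefetcher(blocks, buffer):
    """Read input blocks ahead of the mappers into a bounded buffer.
    In MR2 only this component touches the disk for input."""
    for blk in blocks:
        buffer.put(blk)      # blocks when the buffer is full (bounded read-ahead)
    buffer.put(None)         # end-of-input marker

blocks = [f"Blk{i}" for i in range(1, 5)]
buf = queue.Queue(maxsize=2)  # read at most 2 blocks ahead
threading.Thread(target=prefetcher, args=(blocks, buf), daemon=True).start()

# Mappers consume blocks from memory and never issue their own disk reads,
# so there is only ever one sequential reader per disk.
processed = []
while (blk := buf.get()) is not None:
    processed.append(blk)

assert processed == blocks
```

Serializing the reads through one component is what removes the concurrent-access pattern (two mappers seeking against each other on the same disk) shown on the Hadoop side of the diagram.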
32. Push-model in MR2
• Node-to-node network connection for pushing data
• To reduce # network connections
• Data transfer from memory buffer (similar to Flink's pipelined execution)
• Mappers store spills in the send buffer
• Spills are pushed to reducer sides by the sender
• MR2 does local sorting before pushing data (similar to Spark)
• Fault tolerance (can be turned on/off)
• Input ranges of each spill are known to the master so that lost spills can be reproduced
• Spills are stored on disk for fast recovery (extra disk write)
33. Receiver & merger & preloader & reducer
• Merger produces a file from different partition data
• It sorts each partition's data and then interleaves the partitions
• Preloader preloads each group into the reduce buffer (in the receiver's managed memory)
• Preloader loads each group with 1 disk access for 4 partitions
• Reducers do not read data directly from disks
• MR2 can eliminate concurrent disk reads from reducers thanks to the preloader
[Diagram: chunks of partitions P1–P4 interleaved on disk into groups.]
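The interleaved layout can be sketched as follows, assuming fixed-size chunks and one group per row of partitions (illustrative Python, not MR2's on-disk format):

```python
def merger_layout(partitions):
    """Merger output: chunks of P1..Pn interleaved, so one contiguous
    group on disk holds the next chunk of every partition."""
    return [list(chunks) for chunks in zip(*partitions)]

def preload(layout):
    """One disk access per group refills the buffers of ALL partitions,
    instead of one seek-heavy read per partition."""
    buffers = [[] for _ in range(len(layout[0]))]
    disk_accesses = 0
    for group in layout:
        disk_accesses += 1               # one sequential read per group
        for p, chunk in enumerate(group):
            buffers[p].append(chunk)
    return buffers, disk_accesses

# 4 partitions, 3 chunks each.
partitions = [[f"P{p}c{c}" for c in range(3)] for p in range(1, 5)]
layout = merger_layout(partitions)
buffers, accesses = preload(layout)

assert buffers == partitions   # every partition is fully reassembled
assert accesses == 3           # 3 group reads serve 4 partitions each
```

Reading per-partition instead would cost 12 accesses here (4 partitions × 3 chunks), with a seek between each; grouping trades that for a few large sequential reads.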
34. Result of sorting 80GB/node (3.2TB) with MR2

                   MapReduce         Tez-0.7.0   Spark-1.5.1   Flink-0.9.1   MR2
                   in Hadoop-2.7.1
Time (sec)           2157              1887         2171          1480        890
MR2 speedup          2.42              2.12         2.44          1.66         -
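The speedup row follows directly from the measured times:

```python
# Measured TeraSort times (seconds) for 80GB/node, from the table above.
times = {
    "MapReduce": 2157,
    "Tez": 1887,
    "Spark": 2171,
    "Flink": 1480,
    "MR2": 890,
}

# MR2's speedup over each engine = that engine's time / MR2's time.
speedups = {e: round(t / times["MR2"], 2) for e, t in times.items() if e != "MR2"}
assert speedups == {"MapReduce": 2.42, "Tez": 2.12, "Spark": 2.44, "Flink": 1.66}
```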
35. Disk & network throughput (Flink vs MR2)
1. DataSource / Mapping
• Prefetcher is effective: MR2 shows higher disk throughput
2. Partition / Shuffling
• Records to shuffle are generated faster in MR2
3. DataSink / Reducing
• Preloader is effective: almost 2x throughput
[Cluster disk and network throughput graphs for Flink and MR2, with the three phases marked.]
37. Outline
• TeraSort for various engines
• Experimental setup
• Experimental results & analysis
• What else for better performance?
• Conclusion
38. Conclusion
• Flink's pipelined execution serves both batch and streaming processing
• Even better than other batch processing engines for TeraSort & HashJoin
• Shortcomings due to pipelined execution
• No fine-grained fault tolerance
• No map output compression
• Low disk throughput during the post-shuffling phase