A Comparative Performance
Evaluation of Flink
Dongwon Kim
POSTECH
About Me
• Postdoctoral researcher @ POSTECH
• Research interest
• Design and implementation of distributed systems
• Performance optimization of big data processing engines
• Doctoral thesis
• MR2: Fault Tolerant MapReduce with the Push Model
• Personal blog
• http://eastcirclek.blogspot.kr
• Why I’m here 
Outline
• TeraSort for various engines
• Experimental setup
• Results & analysis
• What else for better performance?
• Conclusion
TeraSort
• Hadoop MapReduce program for the annual terabyte sort competition
• TeraSort is essentially distributed sort (DS)
[Figure: two-node distributed sort example. Records a1-a4 and b1-b4 are read from disk on Node 1 and Node 2, locally sorted, shuffled between the nodes, locally sorted again, and written to disk in total order.]
Typical DS phases: read → local sort → shuffling → local sort → write
TeraSort for MapReduce
• Included in Hadoop distributions
  • With TeraGen & TeraValidate
• Identity map & reduce functions
• Range partitioner built on sampling
  • To guarantee a total order & to prevent partition skew
  • Sampling computes boundary points within a few seconds
[Figure: the map task (read → sort → map) and the reduce task (shuffling → sort → reduce → write) laid over the DS phases; sampled boundary points split the record range into Partition 1 … Partition r]
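For illustration only, here is a minimal sketch of how such a sampling-based range partitioner can be built; the function names and the linear boundary scan are placeholders, not the actual TeraSort classes:

```scala
// Hypothetical sketch: derive r-1 boundary points from a key sample,
// then map each key to the partition whose range contains it.
def boundaryPoints(sampledKeys: Seq[String], numPartitions: Int): Array[String] = {
  val sorted = sampledKeys.sorted.toArray
  // r-1 evenly spaced sample keys become the partition boundaries
  (1 until numPartitions)
    .map(i => sorted(i * sorted.length / numPartitions))
    .toArray
}

def partitionOf(key: String, boundaries: Array[String]): Int = {
  val i = boundaries.indexWhere(key < _) // first boundary greater than the key
  if (i < 0) boundaries.length else i    // keys beyond the last boundary go to the last partition
}
```

Every key then lands in a partition whose index reflects its position in the global key range, which is what yields a total order across partitions.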
TeraSort for Tez
• Tez can execute TeraSort for MapReduce w/o any modification
  • mapreduce.framework.name = yarn-tez
• Tez DAG plan of TeraSort for MapReduce:
[Figure: an initialmap vertex (map tasks: read → sort → map) feeding a finalreduce vertex (reduce tasks: shuffling → sort → reduce → write), from input data to output data]
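Switching engines requires no code change; as a sketch, the same setting can also be applied programmatically on the job's Hadoop configuration (assuming the Tez runtime is installed on the cluster):

```scala
import org.apache.hadoop.conf.Configuration

val conf = new Configuration()
// run the unmodified MapReduce TeraSort on Tez instead of the MR runtime
conf.set("mapreduce.framework.name", "yarn-tez")
```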
TeraSort for Spark & Flink
• My source code on GitHub:
  • https://github.com/eastcirclek/terasort
• Reuses the sampling-based range partitioner from TeraSort for MapReduce
• Visit my personal blog for a detailed explanation:
  • http://eastcirclek.blogspot.kr
TeraSort for Spark
• Code: two RDDs (see the sketch below)
[Figure: Stage 0 runs shuffle-map tasks (read → sort) for the RDD created with newAPIHadoopFile; Stage 1 runs result tasks (shuffling → sort → write) for repartitionAndSortWithinPartitions. The first RDD reads from HDFS with as many partitions as blocks; the second repartitions the parent RDD with the user-specified partitioner and writes output to HDFS. DS phases: read → local sort → shuffling → local sort → write]
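A minimal sketch of the two-RDD plan, assuming that TeraInputFormat, TeraOutputFormat, Key, Value, and rangePartitioner stand in for the repo's classes and that an implicit Ordering exists for the key type:

```scala
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("TeraSort for Spark"))

// RDD1: read from HDFS, one partition per block
val input = sc.newAPIHadoopFile[Key, Value, TeraInputFormat](inputPath)

// RDD2: repartition by the sampling-based range partitioner;
// records are sorted within each partition during the shuffle
val sorted = input.repartitionAndSortWithinPartitions(rangePartitioner)

// write the totally ordered output back to HDFS
sorted.saveAsNewAPIHadoopFile[TeraOutputFormat](outputPath)
```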
TeraSort for Flink
• Code: a pipeline consisting of four operators (see the sketch below)
  • DataSource: create a dataset to read tuples from HDFS
  • Partition: partition tuples
  • SortPartition: sort tuples of each partition
  • DataSink: write output to HDFS
• No map-side sorting, due to pipelined execution
• DS phases: read → shuffling → local sort → write
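A minimal sketch of the four-operator plan (Flink DataSet API, Scala); TeraInputFormat, TeraOutputFormat, and rangePartitioner are placeholders for the custom formats and the sampling-based partitioner in the repo:

```scala
import org.apache.flink.api.scala._
import org.apache.flink.api.common.operators.Order

val env = ExecutionEnvironment.getExecutionEnvironment

env.createInput(new TeraInputFormat(inputPath))  // 1 DataSource: read tuples from HDFS
   .partitionCustom(rangePartitioner, 0)         // 2 Partition: shuffle by key range
   .sortPartition(0, Order.ASCENDING)            // 3 SortPartition: sort each partition
   .output(new TeraOutputFormat(outputPath))     // 4 DataSink: write output to HDFS

env.execute("TeraSort for Flink")
```

Because the four operators form one pipeline, records start shuffling while the sources are still reading, which is why there is no map-side sorting.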
Importance of TeraSort
• Suitable for measuring the pure performance of big data engines
• No data transformation (like map, filter) with user-defined logic
• Basic facilities of each engine are used
• “Winning the sort benchmark” is a great means of PR
Outline
• TeraSort for various engines
• Experimental setup
• Machine specification
• Node configuration
• Results & analysis
• What else for better performance?
• Conclusion
Machine specification (42 identical machines)
• DELL PowerEdge R610
  • CPU: two X5650 processors (12 cores total)
  • Memory: 24GB total
  • Disk: 6 disks × 500GB/disk
  • Network: 10 Gigabit Ethernet

              My machine                     Spark team
Processor     Intel Xeon X5650 (Q1, 2010)    Intel Xeon E5-2670 (Q1, 2012)
Cores         6 × 2 processors               8 × 4 processors
Memory        24GB                           244GB
Disks         6 HDDs                         8 SSDs

• Results can be different on newer machines
Node configuration (24GB on each node)
• 2GB on every node for daemons: NodeManager (1GB) + DataNode (1GB)
• MapReduce-2.7.1 & Tez-0.7.0: the NodeManager hosts the ShuffleService; up to 12 simultaneous 1GB tasks (MapTasks/ReduceTasks) per node
• Spark-1.5.1: one Executor (12GB) per node with its internal memory layout and various managers; 12 task slots served by a thread pool; plus a Driver (1GB)
• Flink-0.9.1: one TaskManager (12GB) per node with its internal memory layout and various managers; 12 task slots served by task threads; plus a JobManager (1GB)
• At most 12 simultaneous tasks per node in every engine
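As an illustration of the Spark side of this layout, roughly equivalent settings can be expressed through SparkConf (a sketch only; the slides do not show the exact submission flags used on YARN):

```scala
import org.apache.spark.SparkConf

// one 12GB Executor with 12 task slots per node, plus a 1GB Driver (sketch)
val conf = new SparkConf()
  .set("spark.executor.memory", "12g")
  .set("spark.executor.cores", "12")
  .set("spark.driver.memory", "1g")
```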
Outline
• TeraSort for various engines
• Experimental setup
• Results & analysis
• Flink is faster than other engines due to its pipelined execution
• What else for better performance?
• Conclusion
How to read a swimlane graph & throughput graphs
• Swimlane graph: each line is the duration of one task, plotted against time since job start (seconds); different line patterns mark different stages. The example shows 6 waves of 1st-stage tasks, 1 wave of 2nd-stage tasks, and that the two stages are hardly overlapped.
• Throughput graphs: cluster network throughput (in/out) and cluster disk throughput (read/write) over the same time axis; e.g., no network traffic during the 1st stage.
Result of sorting 80GB/node (3.2TB)
• Flink is the fastest due to its pipelined execution
• Tez and Spark do not overlap 1st and 2nd stages
• MapReduce is slow despite overlapping stages

             MapReduce in Hadoop-2.7.1   Tez-0.7.0*   Spark-1.5.1*   Flink-0.9.1
Time (sec)           2157                   1887          2171           1480

* Map output compression turned on for Spark and Tez

[Swimlane graphs: MapReduce overlaps its two stages; Tez and Spark run the 2nd stage strictly after the 1st; Flink runs a single pipeline of 1 DataSource, 2 Partition, 3 SortPartition, 4 DataSink]
Tez and Spark do not overlap 1st and 2nd stages
[Throughput graphs for Tez and Spark: (1) the 2nd stage starts, (2) output of the 1st stage is sent over the network only then, and (3) disk write to HDFS occurs only after shuffling is done; both engines show an idle period between the stages.]
[Throughput graphs for Flink (1 DataSource, 2 Partition, 3 SortPartition, 4 DataSink): (1) network traffic occurs from the start, and (2) write to HDFS occurs right after shuffling is done.]
Tez does not overlap 1st and 2nd stages
• Tez has parameters to control the degree of overlap:
  • tez.shuffle-vertex-manager.min-src-fraction : 0.2
  • tez.shuffle-vertex-manager.max-src-fraction : 0.4
• However, the 2nd stage is scheduled early but launched late
[Swimlane graph marking when 2nd-stage tasks are scheduled vs. launched]
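As a sketch, the two knobs can be set like any other Hadoop/Tez property on the job configuration, with the values used here:

```scala
import org.apache.hadoop.conf.Configuration

val conf = new Configuration()
// begin scheduling 2nd-stage tasks once 20% of 1st-stage tasks finish...
conf.setDouble("tez.shuffle-vertex-manager.min-src-fraction", 0.2)
// ...and have all of them scheduled once 40% finish
conf.setDouble("tez.shuffle-vertex-manager.max-src-fraction", 0.4)
```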
Spark does not overlap 1st and 2nd stages
• Spark cannot execute multiple stages simultaneously
• Also mentioned in the following VLDB paper (2015):
  "Spark doesn't support the overlap between shuffle write and read stages. … Spark may want to support this overlap in the future to improve performance."
• Experimental results of that paper:
  • Spark is faster than MapReduce for WordCount, K-means, PageRank
  • MapReduce is faster than Spark for Sort
MapReduce is slow despite overlapping stages
• mapreduce.job.reduce.slowstart.completedMaps : [0.0, 1.0]
  • 0.05 (overlapping, the default): 2157 sec
  • 0.95 (no overlapping): 2385 sec
  • Overlapping brings only a 10% improvement
• Wang proposes to overlap Spark stages as well, to achieve better utilization
• Why do Spark & MapReduce improve by just 10%?
[Swimlane graphs of the 1st and 2nd stages under both settings]
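A sketch of the two settings compared above, again via the Hadoop job configuration:

```scala
import org.apache.hadoop.conf.Configuration

val conf = new Configuration()
// default: reducers start once 5% of map tasks complete (stages overlap)
conf.setDouble("mapreduce.job.reduce.slowstart.completedMaps", 0.05)
// no overlapping: delay reducers until 95% of map tasks complete
// conf.setDouble("mapreduce.job.reduce.slowstart.completedMaps", 0.95)
```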
Data transfer between tasks of different stages
• Traditional pull model (used in MapReduce, Spark, Tez)
  • The producer task (1) writes its output file, with partitions P1 … Pn, to disk; each consumer task (2) requests its partition from a shuffle server, which (3) sends it
  • Extra disk access & simultaneous disk access
  • Shuffling affects the performance of producers
  • Leads to only a 10% improvement from overlapping
• Pipelined data transfer (used in Flink)
  • Data transfer from memory to memory
  • Flink causes fewer disk accesses during shuffling
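To make the contrast concrete, here is a conceptual sketch of the two models in miniature; this is not any engine's real code, and all names are illustrative:

```scala
import java.io.ByteArrayOutputStream

type PartitionId = Int

// Pull model: the producer materializes every partition (1: write output to
// disk); a shuffle server later serves partition p when consumer task p
// requests it (2, 3), adding disk reads that compete with the producers.
def producePull(records: Iterator[(PartitionId, Array[Byte])],
                numPartitions: Int): Array[ByteArrayOutputStream] = {
  val spills = Array.fill(numPartitions)(new ByteArrayOutputStream()) // stand-in for spill files
  records.foreach { case (p, bytes) => spills(p).write(bytes) }
  spills // handed over to the shuffle server
}

// Pipelined transfer: each buffer is forwarded to its consumer as soon as it
// is produced, memory to memory, without materializing the shuffle data.
def producePipelined(records: Iterator[(PartitionId, Array[Byte])],
                     send: (PartitionId, Array[Byte]) => Unit): Unit =
  records.foreach { case (p, bytes) => send(p, bytes) }
```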
Flink causes fewer disk accesses during shuffling

                        MapReduce   Flink   diff.
Total disk write (TB)      9.9       6.5     3.4
Total disk read (TB)       8.1       6.9     1.2

• The difference comes from shuffling
• Shuffled data are sometimes read from the page cache
[Cluster disk throughput graphs for MapReduce and Flink; the total amount of disk read/write equals the area of the blue/green region]
Result of TeraSort with various data sizes

node data size (GB)   Flink   Spark   MapReduce    Tez
        10             157     387       259       277
        20             350     652       555       729
        40             741    1135      1085      1709
        80            1480    2171      2157      1887
       160            3127    4927      4796      3950

Time in seconds; the 80GB/node row is what we've seen so far.
[Chart: time (seconds, log scale) vs. node data size]
* Map output compression turned on for Spark and Tez
Result of HashJoin
• 10 slave nodes
• Datasets generated with org.apache.tez.examples.JoinDataGen
  • Small dataset: 256MB
  • Large dataset: 240GB (24GB/node)
• Result: Flink is ~2x faster than Tez and ~4x faster than Spark (visit my blog for details)

             Tez-0.7.0   Spark-1.5.1   Flink-0.9.1
Time (sec)      770          1538           378

* No map output compression for Spark and Tez, unlike in TeraSort
Result of HashJoin with swimlane & throughput graphs
[Swimlane graphs: Flink's plan is 1 DataSource, 2 DataSource, 3 Join, 4 DataSink, with the 2nd and 3rd operators overlapping; the other engines show idle periods. Throughput graphs: cluster network throughput (in/out) and cluster disk throughput (read/write), annotated with total volumes of 0.24TB / 0.41TB / 0.60TB / 0.84TB / 0.68TB / 0.74TB.]
Flink’s shortcomings
• No support for map output compression
  • Small data blocks are pipelined between operators
• Job-level fault tolerance only
  • Shuffle data are not materialized
• Low disk throughput during the post-shuffling phase
Low disk throughput during the post-shuffling phase
• Possible reason: sorting records from small files
  • Concurrent disk access to small files → too many disk seeks → low disk throughput
  • Other engines merge records from larger files than Flink does
• "Eager pipelining moves some of the sorting work from the mapper to the reducer" (from MapReduce Online, NSDI 2010)
[Post-shuffling disk throughput graphs for Flink, Tez, and MapReduce]
Outline
• TeraSort for various engines
• Experimental setup
• Results & analysis
• What else for better performance?
• Conclusion
MR2 – another MapReduce engine
• PhD thesis: MR2: Fault Tolerant MapReduce with the Push Model
  • Developed for 3 years
• Provides the user interface of Hadoop MapReduce
  • No DAG support
  • No in-memory computation
  • No iterative computation
• Characteristics
  • Push model + fault tolerance
  • Techniques to boost HDD throughput
    • Prefetching for mappers
    • Preloading for reducers
MR2 pipeline
• 7 types of components with memory buffers
  1. Mappers & reducers: apply user-defined functions
  2. Prefetcher & preloader: eliminate concurrent disk access
  3. Sender & receiver & merger: implement MR2’s push model
• Various buffers: pass data between components w/o disk IOs
• Minimum disk access (2 disk reads & 2 disk writes: R1, W1, R2, W2)
  • +1 disk write (W3) for fault tolerance
Prefetcher & Mappers
• Prefetcher loads data for multiple mappers
• Mappers do not read input from disks
[Figure: with Hadoop MapReduce, the 2 mappers on a node read Blk1 and Blk2 from disk themselves; with MR2, the Prefetcher sequentially loads Blk1, Blk2, Blk3, Blk4, … on their behalf, giving higher disk throughput and CPU utilization over time]
Push-model in MR2
• Node-to-node network connection for pushing data
  • To reduce the number of network connections
• Data transfer from memory buffers (similar to Flink’s pipelined execution)
  • Mappers store spills in a send buffer; MR2 does local sorting before pushing data (similar to Spark)
  • Spills are pushed to the reducer side by the sender
• Fault tolerance (can be turned on/off)
  • Input ranges of each spill are known to the master so spills can be reproduced
  • Spills are also stored on disk for fast recovery (extra disk write)
Receiver & merger & preloader & reducer
• Merger produces a file from different partition data
  • Sorts each partition’s data and then interleaves the partitions
• Preloader preloads each group into the reduce buffer
  • 1 disk access for 4 partitions
• Reducers do not read data directly from disks
• MR2 eliminates concurrent disk reads by reducers thanks to the preloader
[Figure: partitions P1-P4 in the receiver’s managed memory are merged and interleaved on disk; the preloader then loads one group (P1-P4) at a time]
Result of sorting 80GB/node (3.2TB) with MR2

                   MapReduce in Hadoop-2.7.1  Tez-0.7.0  Spark-1.5.1  Flink-0.9.1  MR2
Time (sec)                 2157                 1887        2171         1480       890
MR2 speedup                2.42                 2.12        2.44         1.66        -
over other engines
Disk & network throughput (Flink vs. MR2)
1. DataSource / Mapping
  • Prefetcher is effective; MR2 shows higher disk throughput
2. Partition / Shuffling
  • Records to shuffle are generated faster in MR2
3. DataSink / Reducing
  • Preloader is effective; almost 2x throughput
[Cluster disk and network throughput graphs for Flink and MR2, annotated with the three phases]
PUMA (PUrdue MApreduce benchmark suite)
• Experimental results using 10 nodes
[Chart: PUMA benchmark results]
Outline
• TeraSort for various engines
• Experimental setup
• Experimental results & analysis
• What else for better performance?
• Conclusion
Conclusion
• Flink’s pipelined execution serves both batch and stream processing
• Flink even beats the other batch processing engines on TeraSort & HashJoin
• Shortcomings due to pipelined execution:
  • No fine-grained fault tolerance
  • No map output compression
  • Low disk throughput during the post-shuffling phase
Thank you!
Any questions?

More Related Content

What's hot

Iceberg: A modern table format for big data (Strata NY 2018)
Iceberg: A modern table format for big data (Strata NY 2018)Iceberg: A modern table format for big data (Strata NY 2018)
Iceberg: A modern table format for big data (Strata NY 2018)
Ryan Blue
 
Spark Shuffle Deep Dive (Explained In Depth) - How Shuffle Works in Spark
Spark Shuffle Deep Dive (Explained In Depth) - How Shuffle Works in SparkSpark Shuffle Deep Dive (Explained In Depth) - How Shuffle Works in Spark
Spark Shuffle Deep Dive (Explained In Depth) - How Shuffle Works in Spark
Bo Yang
 
Optimizing Apache Spark SQL Joins
Optimizing Apache Spark SQL JoinsOptimizing Apache Spark SQL Joins
Optimizing Apache Spark SQL Joins
Databricks
 
Evening out the uneven: dealing with skew in Flink
Evening out the uneven: dealing with skew in FlinkEvening out the uneven: dealing with skew in Flink
Evening out the uneven: dealing with skew in Flink
Flink Forward
 
The Rise of ZStandard: Apache Spark/Parquet/ORC/Avro
The Rise of ZStandard: Apache Spark/Parquet/ORC/AvroThe Rise of ZStandard: Apache Spark/Parquet/ORC/Avro
The Rise of ZStandard: Apache Spark/Parquet/ORC/Avro
Databricks
 
Designing ETL Pipelines with Structured Streaming and Delta Lake—How to Archi...
Designing ETL Pipelines with Structured Streaming and Delta Lake—How to Archi...Designing ETL Pipelines with Structured Streaming and Delta Lake—How to Archi...
Designing ETL Pipelines with Structured Streaming and Delta Lake—How to Archi...
Databricks
 
Speed Up Uber's Presto with Alluxio
Speed Up Uber's Presto with AlluxioSpeed Up Uber's Presto with Alluxio
Speed Up Uber's Presto with Alluxio
Alluxio, Inc.
 
Performant Streaming in Production: Preventing Common Pitfalls when Productio...
Performant Streaming in Production: Preventing Common Pitfalls when Productio...Performant Streaming in Production: Preventing Common Pitfalls when Productio...
Performant Streaming in Production: Preventing Common Pitfalls when Productio...
Databricks
 
Deploying Flink on Kubernetes - David Anderson
 Deploying Flink on Kubernetes - David Anderson Deploying Flink on Kubernetes - David Anderson
Deploying Flink on Kubernetes - David Anderson
Ververica
 
Webinar: Deep Dive on Apache Flink State - Seth Wiesman
Webinar: Deep Dive on Apache Flink State - Seth WiesmanWebinar: Deep Dive on Apache Flink State - Seth Wiesman
Webinar: Deep Dive on Apache Flink State - Seth Wiesman
Ververica
 
Spark SQL Join Improvement at Facebook
Spark SQL Join Improvement at FacebookSpark SQL Join Improvement at Facebook
Spark SQL Join Improvement at Facebook
Databricks
 
Processing Large Data with Apache Spark -- HasGeek
Processing Large Data with Apache Spark -- HasGeekProcessing Large Data with Apache Spark -- HasGeek
Processing Large Data with Apache Spark -- HasGeek
Venkata Naga Ravi
 
Building a SIMD Supported Vectorized Native Engine for Spark SQL
Building a SIMD Supported Vectorized Native Engine for Spark SQLBuilding a SIMD Supported Vectorized Native Engine for Spark SQL
Building a SIMD Supported Vectorized Native Engine for Spark SQL
Databricks
 
The Parquet Format and Performance Optimization Opportunities
The Parquet Format and Performance Optimization OpportunitiesThe Parquet Format and Performance Optimization Opportunities
The Parquet Format and Performance Optimization Opportunities
Databricks
 
Spark tuning
Spark tuningSpark tuning
Understanding Query Plans and Spark UIs
Understanding Query Plans and Spark UIsUnderstanding Query Plans and Spark UIs
Understanding Query Plans and Spark UIs
Databricks
 
The columnar roadmap: Apache Parquet and Apache Arrow
The columnar roadmap: Apache Parquet and Apache ArrowThe columnar roadmap: Apache Parquet and Apache Arrow
The columnar roadmap: Apache Parquet and Apache Arrow
DataWorks Summit
 
How to build a streaming Lakehouse with Flink, Kafka, and Hudi
How to build a streaming Lakehouse with Flink, Kafka, and HudiHow to build a streaming Lakehouse with Flink, Kafka, and Hudi
How to build a streaming Lakehouse with Flink, Kafka, and Hudi
Flink Forward
 
Parquet performance tuning: the missing guide
Parquet performance tuning: the missing guideParquet performance tuning: the missing guide
Parquet performance tuning: the missing guide
Ryan Blue
 
Data Storage Tips for Optimal Spark Performance-(Vida Ha, Databricks)
Data Storage Tips for Optimal Spark Performance-(Vida Ha, Databricks)Data Storage Tips for Optimal Spark Performance-(Vida Ha, Databricks)
Data Storage Tips for Optimal Spark Performance-(Vida Ha, Databricks)
Spark Summit
 

What's hot (20)

Iceberg: A modern table format for big data (Strata NY 2018)
Iceberg: A modern table format for big data (Strata NY 2018)Iceberg: A modern table format for big data (Strata NY 2018)
Iceberg: A modern table format for big data (Strata NY 2018)
 
Spark Shuffle Deep Dive (Explained In Depth) - How Shuffle Works in Spark
Spark Shuffle Deep Dive (Explained In Depth) - How Shuffle Works in SparkSpark Shuffle Deep Dive (Explained In Depth) - How Shuffle Works in Spark
Spark Shuffle Deep Dive (Explained In Depth) - How Shuffle Works in Spark
 
Optimizing Apache Spark SQL Joins
Optimizing Apache Spark SQL JoinsOptimizing Apache Spark SQL Joins
Optimizing Apache Spark SQL Joins
 
Evening out the uneven: dealing with skew in Flink
Evening out the uneven: dealing with skew in FlinkEvening out the uneven: dealing with skew in Flink
Evening out the uneven: dealing with skew in Flink
 
The Rise of ZStandard: Apache Spark/Parquet/ORC/Avro
The Rise of ZStandard: Apache Spark/Parquet/ORC/AvroThe Rise of ZStandard: Apache Spark/Parquet/ORC/Avro
The Rise of ZStandard: Apache Spark/Parquet/ORC/Avro
 
Designing ETL Pipelines with Structured Streaming and Delta Lake—How to Archi...
Designing ETL Pipelines with Structured Streaming and Delta Lake—How to Archi...Designing ETL Pipelines with Structured Streaming and Delta Lake—How to Archi...
Designing ETL Pipelines with Structured Streaming and Delta Lake—How to Archi...
 
Speed Up Uber's Presto with Alluxio
Speed Up Uber's Presto with AlluxioSpeed Up Uber's Presto with Alluxio
Speed Up Uber's Presto with Alluxio
 
Performant Streaming in Production: Preventing Common Pitfalls when Productio...
Performant Streaming in Production: Preventing Common Pitfalls when Productio...Performant Streaming in Production: Preventing Common Pitfalls when Productio...
Performant Streaming in Production: Preventing Common Pitfalls when Productio...
 
Deploying Flink on Kubernetes - David Anderson
 Deploying Flink on Kubernetes - David Anderson Deploying Flink on Kubernetes - David Anderson
Deploying Flink on Kubernetes - David Anderson
 
Webinar: Deep Dive on Apache Flink State - Seth Wiesman
Webinar: Deep Dive on Apache Flink State - Seth WiesmanWebinar: Deep Dive on Apache Flink State - Seth Wiesman
Webinar: Deep Dive on Apache Flink State - Seth Wiesman
 
Spark SQL Join Improvement at Facebook
Spark SQL Join Improvement at FacebookSpark SQL Join Improvement at Facebook
Spark SQL Join Improvement at Facebook
 
Processing Large Data with Apache Spark -- HasGeek
Processing Large Data with Apache Spark -- HasGeekProcessing Large Data with Apache Spark -- HasGeek
Processing Large Data with Apache Spark -- HasGeek
 
Building a SIMD Supported Vectorized Native Engine for Spark SQL
Building a SIMD Supported Vectorized Native Engine for Spark SQLBuilding a SIMD Supported Vectorized Native Engine for Spark SQL
Building a SIMD Supported Vectorized Native Engine for Spark SQL
 
The Parquet Format and Performance Optimization Opportunities
The Parquet Format and Performance Optimization OpportunitiesThe Parquet Format and Performance Optimization Opportunities
The Parquet Format and Performance Optimization Opportunities
 
Spark tuning
Spark tuningSpark tuning
Spark tuning
 
Understanding Query Plans and Spark UIs
Understanding Query Plans and Spark UIsUnderstanding Query Plans and Spark UIs
Understanding Query Plans and Spark UIs
 
The columnar roadmap: Apache Parquet and Apache Arrow
The columnar roadmap: Apache Parquet and Apache ArrowThe columnar roadmap: Apache Parquet and Apache Arrow
The columnar roadmap: Apache Parquet and Apache Arrow
 
How to build a streaming Lakehouse with Flink, Kafka, and Hudi
How to build a streaming Lakehouse with Flink, Kafka, and HudiHow to build a streaming Lakehouse with Flink, Kafka, and Hudi
How to build a streaming Lakehouse with Flink, Kafka, and Hudi
 
Parquet performance tuning: the missing guide
Parquet performance tuning: the missing guideParquet performance tuning: the missing guide
Parquet performance tuning: the missing guide
 
Data Storage Tips for Optimal Spark Performance-(Vida Ha, Databricks)
Data Storage Tips for Optimal Spark Performance-(Vida Ha, Databricks)Data Storage Tips for Optimal Spark Performance-(Vida Ha, Databricks)
Data Storage Tips for Optimal Spark Performance-(Vida Ha, Databricks)
 

Viewers also liked

Flink Forward Berlin 2017: Dongwon Kim - Predictive Maintenance with Apache F...
Flink Forward Berlin 2017: Dongwon Kim - Predictive Maintenance with Apache F...Flink Forward Berlin 2017: Dongwon Kim - Predictive Maintenance with Apache F...
Flink Forward Berlin 2017: Dongwon Kim - Predictive Maintenance with Apache F...
Flink Forward
 
Flink Forward Berlin 2017: Pramod Bhatotia, Do Le Quoc - StreamApprox: Approx...
Flink Forward Berlin 2017: Pramod Bhatotia, Do Le Quoc - StreamApprox: Approx...Flink Forward Berlin 2017: Pramod Bhatotia, Do Le Quoc - StreamApprox: Approx...
Flink Forward Berlin 2017: Pramod Bhatotia, Do Le Quoc - StreamApprox: Approx...
Flink Forward
 
Flink Apachecon Presentation
Flink Apachecon PresentationFlink Apachecon Presentation
Flink Apachecon Presentation
Gyula Fóra
 
Mikio Braun – Data flow vs. procedural programming
Mikio Braun – Data flow vs. procedural programming Mikio Braun – Data flow vs. procedural programming
Mikio Braun – Data flow vs. procedural programming
Flink Forward
 
Moon soo Lee – Data Science Lifecycle with Apache Flink and Apache Zeppelin
Moon soo Lee – Data Science Lifecycle with Apache Flink and Apache ZeppelinMoon soo Lee – Data Science Lifecycle with Apache Flink and Apache Zeppelin
Moon soo Lee – Data Science Lifecycle with Apache Flink and Apache Zeppelin
Flink Forward
 
Mohamed Amine Abdessemed – Real-time Data Integration with Apache Flink & Kafka
Mohamed Amine Abdessemed – Real-time Data Integration with Apache Flink & KafkaMohamed Amine Abdessemed – Real-time Data Integration with Apache Flink & Kafka
Mohamed Amine Abdessemed – Real-time Data Integration with Apache Flink & Kafka
Flink Forward
 
Apache Flink Training: DataStream API Part 1 Basic
 Apache Flink Training: DataStream API Part 1 Basic Apache Flink Training: DataStream API Part 1 Basic
Apache Flink Training: DataStream API Part 1 Basic
Flink Forward
 
Maximilian Michels – Google Cloud Dataflow on Top of Apache Flink
Maximilian Michels – Google Cloud Dataflow on Top of Apache FlinkMaximilian Michels – Google Cloud Dataflow on Top of Apache Flink
Maximilian Michels – Google Cloud Dataflow on Top of Apache Flink
Flink Forward
 
Introduction to Apache Flink - Fast and reliable big data processing
Introduction to Apache Flink - Fast and reliable big data processingIntroduction to Apache Flink - Fast and reliable big data processing
Introduction to Apache Flink - Fast and reliable big data processing
Till Rohrmann
 
Slim Baltagi – Flink vs. Spark
Slim Baltagi – Flink vs. SparkSlim Baltagi – Flink vs. Spark
Slim Baltagi – Flink vs. Spark
Flink Forward
 
Marton Balassi – Stateful Stream Processing
Marton Balassi – Stateful Stream ProcessingMarton Balassi – Stateful Stream Processing
Marton Balassi – Stateful Stream Processing
Flink Forward
 
Chris Hillman – Beyond Mapreduce Scientific Data Processing in Real-time
Chris Hillman – Beyond Mapreduce Scientific Data Processing in Real-timeChris Hillman – Beyond Mapreduce Scientific Data Processing in Real-time
Chris Hillman – Beyond Mapreduce Scientific Data Processing in Real-time
Flink Forward
 
Flink Case Study: Bouygues Telecom
Flink Case Study: Bouygues TelecomFlink Case Study: Bouygues Telecom
Flink Case Study: Bouygues Telecom
Flink Forward
 
Albert Bifet – Apache Samoa: Mining Big Data Streams with Apache Flink
Albert Bifet – Apache Samoa: Mining Big Data Streams with Apache FlinkAlbert Bifet – Apache Samoa: Mining Big Data Streams with Apache Flink
Albert Bifet – Apache Samoa: Mining Big Data Streams with Apache Flink
Flink Forward
 
S. Bartoli & F. Pompermaier – A Semantic Big Data Companion
S. Bartoli & F. Pompermaier – A Semantic Big Data CompanionS. Bartoli & F. Pompermaier – A Semantic Big Data Companion
S. Bartoli & F. Pompermaier – A Semantic Big Data Companion
Flink Forward
 
Tran Nam-Luc – Stale Synchronous Parallel Iterations on Flink
Tran Nam-Luc – Stale Synchronous Parallel Iterations on FlinkTran Nam-Luc – Stale Synchronous Parallel Iterations on Flink
Tran Nam-Luc – Stale Synchronous Parallel Iterations on Flink
Flink Forward
 
Apache Flink - Hadoop MapReduce Compatibility
Apache Flink - Hadoop MapReduce CompatibilityApache Flink - Hadoop MapReduce Compatibility
Apache Flink - Hadoop MapReduce Compatibility
Fabian Hueske
 
Apache Flink Training: DataSet API Basics
Apache Flink Training: DataSet API BasicsApache Flink Training: DataSet API Basics
Apache Flink Training: DataSet API Basics
Flink Forward
 
K. Tzoumas & S. Ewen – Flink Forward Keynote
K. Tzoumas & S. Ewen – Flink Forward KeynoteK. Tzoumas & S. Ewen – Flink Forward Keynote
K. Tzoumas & S. Ewen – Flink Forward Keynote
Flink Forward
 
William Vambenepe – Google Cloud Dataflow and Flink , Stream Processing by De...
William Vambenepe – Google Cloud Dataflow and Flink , Stream Processing by De...William Vambenepe – Google Cloud Dataflow and Flink , Stream Processing by De...
William Vambenepe – Google Cloud Dataflow and Flink , Stream Processing by De...
Flink Forward
 

Viewers also liked (20)

Flink Forward Berlin 2017: Dongwon Kim - Predictive Maintenance with Apache F...
Flink Forward Berlin 2017: Dongwon Kim - Predictive Maintenance with Apache F...Flink Forward Berlin 2017: Dongwon Kim - Predictive Maintenance with Apache F...
Flink Forward Berlin 2017: Dongwon Kim - Predictive Maintenance with Apache F...
 
Flink Forward Berlin 2017: Pramod Bhatotia, Do Le Quoc - StreamApprox: Approx...
Flink Forward Berlin 2017: Pramod Bhatotia, Do Le Quoc - StreamApprox: Approx...Flink Forward Berlin 2017: Pramod Bhatotia, Do Le Quoc - StreamApprox: Approx...
Flink Forward Berlin 2017: Pramod Bhatotia, Do Le Quoc - StreamApprox: Approx...
 
Flink Apachecon Presentation
Flink Apachecon PresentationFlink Apachecon Presentation
Flink Apachecon Presentation
 
Mikio Braun – Data flow vs. procedural programming
Mikio Braun – Data flow vs. procedural programming Mikio Braun – Data flow vs. procedural programming
Mikio Braun – Data flow vs. procedural programming
 
Moon soo Lee – Data Science Lifecycle with Apache Flink and Apache Zeppelin
Moon soo Lee – Data Science Lifecycle with Apache Flink and Apache ZeppelinMoon soo Lee – Data Science Lifecycle with Apache Flink and Apache Zeppelin
Moon soo Lee – Data Science Lifecycle with Apache Flink and Apache Zeppelin
 
Mohamed Amine Abdessemed – Real-time Data Integration with Apache Flink & Kafka
Mohamed Amine Abdessemed – Real-time Data Integration with Apache Flink & KafkaMohamed Amine Abdessemed – Real-time Data Integration with Apache Flink & Kafka
Mohamed Amine Abdessemed – Real-time Data Integration with Apache Flink & Kafka
 
Apache Flink Training: DataStream API Part 1 Basic
 Apache Flink Training: DataStream API Part 1 Basic Apache Flink Training: DataStream API Part 1 Basic
Apache Flink Training: DataStream API Part 1 Basic
 
Maximilian Michels – Google Cloud Dataflow on Top of Apache Flink
Maximilian Michels – Google Cloud Dataflow on Top of Apache FlinkMaximilian Michels – Google Cloud Dataflow on Top of Apache Flink
Maximilian Michels – Google Cloud Dataflow on Top of Apache Flink
 
Introduction to Apache Flink - Fast and reliable big data processing
Introduction to Apache Flink - Fast and reliable big data processingIntroduction to Apache Flink - Fast and reliable big data processing
Introduction to Apache Flink - Fast and reliable big data processing
 
Slim Baltagi – Flink vs. Spark
Slim Baltagi – Flink vs. SparkSlim Baltagi – Flink vs. Spark
Slim Baltagi – Flink vs. Spark
 
Marton Balassi – Stateful Stream Processing
Marton Balassi – Stateful Stream ProcessingMarton Balassi – Stateful Stream Processing
Marton Balassi – Stateful Stream Processing
 
Chris Hillman – Beyond Mapreduce Scientific Data Processing in Real-time
Chris Hillman – Beyond Mapreduce Scientific Data Processing in Real-timeChris Hillman – Beyond Mapreduce Scientific Data Processing in Real-time
Chris Hillman – Beyond Mapreduce Scientific Data Processing in Real-time
 
Flink Case Study: Bouygues Telecom
Flink Case Study: Bouygues TelecomFlink Case Study: Bouygues Telecom
Flink Case Study: Bouygues Telecom
 
Albert Bifet – Apache Samoa: Mining Big Data Streams with Apache Flink
Albert Bifet – Apache Samoa: Mining Big Data Streams with Apache FlinkAlbert Bifet – Apache Samoa: Mining Big Data Streams with Apache Flink
Albert Bifet – Apache Samoa: Mining Big Data Streams with Apache Flink
 
S. Bartoli & F. Pompermaier – A Semantic Big Data Companion
S. Bartoli & F. Pompermaier – A Semantic Big Data CompanionS. Bartoli & F. Pompermaier – A Semantic Big Data Companion
S. Bartoli & F. Pompermaier – A Semantic Big Data Companion
 
Tran Nam-Luc – Stale Synchronous Parallel Iterations on Flink
Tran Nam-Luc – Stale Synchronous Parallel Iterations on FlinkTran Nam-Luc – Stale Synchronous Parallel Iterations on Flink
Tran Nam-Luc – Stale Synchronous Parallel Iterations on Flink
 
Apache Flink - Hadoop MapReduce Compatibility
Apache Flink - Hadoop MapReduce CompatibilityApache Flink - Hadoop MapReduce Compatibility
Apache Flink - Hadoop MapReduce Compatibility
 
Apache Flink Training: DataSet API Basics
Apache Flink Training: DataSet API BasicsApache Flink Training: DataSet API Basics
Apache Flink Training: DataSet API Basics
 
K. Tzoumas & S. Ewen – Flink Forward Keynote
K. Tzoumas & S. Ewen – Flink Forward KeynoteK. Tzoumas & S. Ewen – Flink Forward Keynote
K. Tzoumas & S. Ewen – Flink Forward Keynote
 
William Vambenepe – Google Cloud Dataflow and Flink , Stream Processing by De...
William Vambenepe – Google Cloud Dataflow and Flink , Stream Processing by De...William Vambenepe – Google Cloud Dataflow and Flink , Stream Processing by De...
William Vambenepe – Google Cloud Dataflow and Flink , Stream Processing by De...
 

Similar to Dongwon Kim – A Comparative Performance Evaluation of Flink

Spark Overview and Performance Issues
Spark Overview and Performance IssuesSpark Overview and Performance Issues
Spark Overview and Performance Issues
Antonios Katsarakis
 
Migrating ETL Workflow to Apache Spark at Scale in Pinterest
Migrating ETL Workflow to Apache Spark at Scale in PinterestMigrating ETL Workflow to Apache Spark at Scale in Pinterest
Migrating ETL Workflow to Apache Spark at Scale in Pinterest
Databricks
 
Spark architechure.pptx
Spark architechure.pptxSpark architechure.pptx
Spark architechure.pptx
SaiSriMadhuriYatam
 
Apache Spark: What's under the hood
Apache Spark: What's under the hoodApache Spark: What's under the hood
Apache Spark: What's under the hood
Adarsh Pannu
 
Healthcare Claim Reimbursement using Apache Spark
Healthcare Claim Reimbursement using Apache SparkHealthcare Claim Reimbursement using Apache Spark
Healthcare Claim Reimbursement using Apache Spark
Databricks
 
Spark shuffle introduction
Spark shuffle introductionSpark shuffle introduction
Spark shuffle introduction
colorant
 
From HDFS to S3: Migrate Pinterest Apache Spark Clusters
From HDFS to S3: Migrate Pinterest Apache Spark ClustersFrom HDFS to S3: Migrate Pinterest Apache Spark Clusters
From HDFS to S3: Migrate Pinterest Apache Spark Clusters
Databricks
 
[262] netflix 빅데이터 플랫폼
[262] netflix 빅데이터 플랫폼[262] netflix 빅데이터 플랫폼
[262] netflix 빅데이터 플랫폼
NAVER D2
 
Apache Spark: The Next Gen toolset for Big Data Processing
Apache Spark: The Next Gen toolset for Big Data ProcessingApache Spark: The Next Gen toolset for Big Data Processing
Apache Spark: The Next Gen toolset for Big Data Processing
prajods
 
Spark on Yarn @ Netflix
Spark on Yarn @ NetflixSpark on Yarn @ Netflix
Spark on Yarn @ Netflix
Nezih Yigitbasi
 
Producing Spark on YARN for ETL
Producing Spark on YARN for ETLProducing Spark on YARN for ETL
Producing Spark on YARN for ETL
DataWorks Summit/Hadoop Summit
 
700 Updatable Queries Per Second: Spark as a Real-Time Web Service
700 Updatable Queries Per Second: Spark as a Real-Time Web Service700 Updatable Queries Per Second: Spark as a Real-Time Web Service
700 Updatable Queries Per Second: Spark as a Real-Time Web Service
Evan Chan
 
700 Queries Per Second with Updates: Spark As A Real-Time Web Service
700 Queries Per Second with Updates: Spark As A Real-Time Web Service700 Queries Per Second with Updates: Spark As A Real-Time Web Service
700 Queries Per Second with Updates: Spark As A Real-Time Web Service
Spark Summit
 
Hadoop Network Performance profile
Hadoop Network Performance profileHadoop Network Performance profile
Hadoop Network Performance profile
pramodbiligiri
 
Radical Speed for SQL Queries on Databricks: Photon Under the Hood
Radical Speed for SQL Queries on Databricks: Photon Under the HoodRadical Speed for SQL Queries on Databricks: Photon Under the Hood
Radical Speed for SQL Queries on Databricks: Photon Under the Hood
Databricks
 
Scala like distributed collections - dumping time-series data with apache spark
Scala like distributed collections - dumping time-series data with apache sparkScala like distributed collections - dumping time-series data with apache spark
Scala like distributed collections - dumping time-series data with apache spark
Demi Ben-Ari
 
S3, Cassandra or Outer Space? Dumping Time Series Data using Spark - Demi Be...
S3, Cassandra or Outer Space? Dumping Time Series Data using Spark  - Demi Be...S3, Cassandra or Outer Space? Dumping Time Series Data using Spark  - Demi Be...
S3, Cassandra or Outer Space? Dumping Time Series Data using Spark - Demi Be...
Codemotion
 
Presentations from the Cloudera Impala meetup on Aug 20 2013
Presentations from the Cloudera Impala meetup on Aug 20 2013Presentations from the Cloudera Impala meetup on Aug 20 2013
Presentations from the Cloudera Impala meetup on Aug 20 2013
Cloudera, Inc.
 
11. From Hadoop to Spark 1:2
11. From Hadoop to Spark 1:211. From Hadoop to Spark 1:2
11. From Hadoop to Spark 1:2
Fabio Fumarola
 
Apache Spark Best Practices Meetup Talk
Apache Spark Best Practices Meetup TalkApache Spark Best Practices Meetup Talk
Apache Spark Best Practices Meetup Talk
Eren Avşaroğulları
 

Similar to Dongwon Kim – A Comparative Performance Evaluation of Flink (20)

Spark Overview and Performance Issues
Spark Overview and Performance IssuesSpark Overview and Performance Issues
Spark Overview and Performance Issues
 
Migrating ETL Workflow to Apache Spark at Scale in Pinterest
Migrating ETL Workflow to Apache Spark at Scale in PinterestMigrating ETL Workflow to Apache Spark at Scale in Pinterest
Migrating ETL Workflow to Apache Spark at Scale in Pinterest
 
Spark architechure.pptx
Spark architechure.pptxSpark architechure.pptx
Spark architechure.pptx
 
Apache Spark: What's under the hood
Apache Spark: What's under the hoodApache Spark: What's under the hood
Apache Spark: What's under the hood
 
Healthcare Claim Reimbursement using Apache Spark
Healthcare Claim Reimbursement using Apache SparkHealthcare Claim Reimbursement using Apache Spark
Healthcare Claim Reimbursement using Apache Spark
 
Spark shuffle introduction
Spark shuffle introductionSpark shuffle introduction
Spark shuffle introduction
 
From HDFS to S3: Migrate Pinterest Apache Spark Clusters
From HDFS to S3: Migrate Pinterest Apache Spark ClustersFrom HDFS to S3: Migrate Pinterest Apache Spark Clusters
From HDFS to S3: Migrate Pinterest Apache Spark Clusters
 
[262] netflix 빅데이터 플랫폼
[262] netflix 빅데이터 플랫폼[262] netflix 빅데이터 플랫폼
[262] netflix 빅데이터 플랫폼
 
Apache Spark: The Next Gen toolset for Big Data Processing
Apache Spark: The Next Gen toolset for Big Data ProcessingApache Spark: The Next Gen toolset for Big Data Processing
Apache Spark: The Next Gen toolset for Big Data Processing
 
Spark on Yarn @ Netflix
Spark on Yarn @ NetflixSpark on Yarn @ Netflix
Spark on Yarn @ Netflix
 
Producing Spark on YARN for ETL
Producing Spark on YARN for ETLProducing Spark on YARN for ETL
Producing Spark on YARN for ETL
 
700 Updatable Queries Per Second: Spark as a Real-Time Web Service
700 Updatable Queries Per Second: Spark as a Real-Time Web Service700 Updatable Queries Per Second: Spark as a Real-Time Web Service
700 Updatable Queries Per Second: Spark as a Real-Time Web Service
 
700 Queries Per Second with Updates: Spark As A Real-Time Web Service
700 Queries Per Second with Updates: Spark As A Real-Time Web Service700 Queries Per Second with Updates: Spark As A Real-Time Web Service
700 Queries Per Second with Updates: Spark As A Real-Time Web Service
 
Hadoop Network Performance profile
Hadoop Network Performance profileHadoop Network Performance profile
Hadoop Network Performance profile
 
Radical Speed for SQL Queries on Databricks: Photon Under the Hood
Radical Speed for SQL Queries on Databricks: Photon Under the HoodRadical Speed for SQL Queries on Databricks: Photon Under the Hood
Radical Speed for SQL Queries on Databricks: Photon Under the Hood
 
Scala like distributed collections - dumping time-series data with apache spark
Scala like distributed collections - dumping time-series data with apache sparkScala like distributed collections - dumping time-series data with apache spark
Scala like distributed collections - dumping time-series data with apache spark
 
S3, Cassandra or Outer Space? Dumping Time Series Data using Spark - Demi Be...
S3, Cassandra or Outer Space? Dumping Time Series Data using Spark  - Demi Be...S3, Cassandra or Outer Space? Dumping Time Series Data using Spark  - Demi Be...
S3, Cassandra or Outer Space? Dumping Time Series Data using Spark - Demi Be...
 
Presentations from the Cloudera Impala meetup on Aug 20 2013
Presentations from the Cloudera Impala meetup on Aug 20 2013Presentations from the Cloudera Impala meetup on Aug 20 2013
Presentations from the Cloudera Impala meetup on Aug 20 2013
 
11. From Hadoop to Spark 1:2
11. From Hadoop to Spark 1:211. From Hadoop to Spark 1:2
11. From Hadoop to Spark 1:2
 
Apache Spark Best Practices Meetup Talk
Apache Spark Best Practices Meetup TalkApache Spark Best Practices Meetup Talk
Apache Spark Best Practices Meetup Talk
 

More from Flink Forward

Building a fully managed stream processing platform on Flink at scale for Lin...
Building a fully managed stream processing platform on Flink at scale for Lin...Building a fully managed stream processing platform on Flink at scale for Lin...
Building a fully managed stream processing platform on Flink at scale for Lin...
Flink Forward
 
“Alexa, be quiet!”: End-to-end near-real time model building and evaluation i...
“Alexa, be quiet!”: End-to-end near-real time model building and evaluation i...“Alexa, be quiet!”: End-to-end near-real time model building and evaluation i...
“Alexa, be quiet!”: End-to-end near-real time model building and evaluation i...
Flink Forward
 
Introducing BinarySortedMultiMap - A new Flink state primitive to boost your ...
Introducing BinarySortedMultiMap - A new Flink state primitive to boost your ...Introducing BinarySortedMultiMap - A new Flink state primitive to boost your ...
Introducing BinarySortedMultiMap - A new Flink state primitive to boost your ...
Flink Forward
 
Introducing the Apache Flink Kubernetes Operator
Introducing the Apache Flink Kubernetes OperatorIntroducing the Apache Flink Kubernetes Operator
Introducing the Apache Flink Kubernetes Operator
Flink Forward
 
Autoscaling Flink with Reactive Mode
Autoscaling Flink with Reactive ModeAutoscaling Flink with Reactive Mode
Autoscaling Flink with Reactive Mode
Flink Forward
 
Dynamically Scaling Data Streams across Multiple Kafka Clusters with Zero Fli...
Dynamically Scaling Data Streams across Multiple Kafka Clusters with Zero Fli...Dynamically Scaling Data Streams across Multiple Kafka Clusters with Zero Fli...
Dynamically Scaling Data Streams across Multiple Kafka Clusters with Zero Fli...
Flink Forward
 
One sink to rule them all: Introducing the new Async Sink
One sink to rule them all: Introducing the new Async SinkOne sink to rule them all: Introducing the new Async Sink
One sink to rule them all: Introducing the new Async Sink
Flink Forward
 
Flink powered stream processing platform at Pinterest
Flink powered stream processing platform at PinterestFlink powered stream processing platform at Pinterest
Flink powered stream processing platform at Pinterest
Flink Forward
 
Apache Flink in the Cloud-Native Era
Apache Flink in the Cloud-Native EraApache Flink in the Cloud-Native Era
Apache Flink in the Cloud-Native Era
Flink Forward
 
Where is my bottleneck? Performance troubleshooting in Flink
Where is my bottleneck? Performance troubleshooting in FlinkWhere is my bottleneck? Performance troubleshooting in Flink
Where is my bottleneck? Performance troubleshooting in Flink
Flink Forward
 
Using the New Apache Flink Kubernetes Operator in a Production Deployment
Using the New Apache Flink Kubernetes Operator in a Production DeploymentUsing the New Apache Flink Kubernetes Operator in a Production Deployment
Using the New Apache Flink Kubernetes Operator in a Production Deployment
Flink Forward
 
The Current State of Table API in 2022
The Current State of Table API in 2022The Current State of Table API in 2022
The Current State of Table API in 2022
Flink Forward
 
Flink SQL on Pulsar made easy
Flink SQL on Pulsar made easyFlink SQL on Pulsar made easy
Flink SQL on Pulsar made easy
Flink Forward
 
Dynamic Rule-based Real-time Market Data Alerts
Dynamic Rule-based Real-time Market Data AlertsDynamic Rule-based Real-time Market Data Alerts
Dynamic Rule-based Real-time Market Data Alerts
Flink Forward
 
Exactly-Once Financial Data Processing at Scale with Flink and Pinot
Exactly-Once Financial Data Processing at Scale with Flink and PinotExactly-Once Financial Data Processing at Scale with Flink and Pinot
Exactly-Once Financial Data Processing at Scale with Flink and Pinot
Flink Forward
 
Processing Semantically-Ordered Streams in Financial Services
Processing Semantically-Ordered Streams in Financial ServicesProcessing Semantically-Ordered Streams in Financial Services
Processing Semantically-Ordered Streams in Financial Services
Flink Forward
 
Tame the small files problem and optimize data layout for streaming ingestion...
Tame the small files problem and optimize data layout for streaming ingestion...Tame the small files problem and optimize data layout for streaming ingestion...
Tame the small files problem and optimize data layout for streaming ingestion...
Flink Forward
 
Batch Processing at Scale with Flink & Iceberg
Batch Processing at Scale with Flink & IcebergBatch Processing at Scale with Flink & Iceberg
Batch Processing at Scale with Flink & Iceberg
Flink Forward
 
Welcome to the Flink Community!
Welcome to the Flink Community!Welcome to the Flink Community!
Welcome to the Flink Community!
Flink Forward
 
Practical learnings from running thousands of Flink jobs
Practical learnings from running thousands of Flink jobsPractical learnings from running thousands of Flink jobs
Practical learnings from running thousands of Flink jobs
Flink Forward
 

More from Flink Forward (20)

Building a fully managed stream processing platform on Flink at scale for Lin...
Building a fully managed stream processing platform on Flink at scale for Lin...Building a fully managed stream processing platform on Flink at scale for Lin...
Building a fully managed stream processing platform on Flink at scale for Lin...
 
“Alexa, be quiet!”: End-to-end near-real time model building and evaluation i...
“Alexa, be quiet!”: End-to-end near-real time model building and evaluation i...“Alexa, be quiet!”: End-to-end near-real time model building and evaluation i...
“Alexa, be quiet!”: End-to-end near-real time model building and evaluation i...
 
Introducing BinarySortedMultiMap - A new Flink state primitive to boost your ...
Introducing BinarySortedMultiMap - A new Flink state primitive to boost your ...Introducing BinarySortedMultiMap - A new Flink state primitive to boost your ...
Introducing BinarySortedMultiMap - A new Flink state primitive to boost your ...
 
Introducing the Apache Flink Kubernetes Operator
Introducing the Apache Flink Kubernetes OperatorIntroducing the Apache Flink Kubernetes Operator
Introducing the Apache Flink Kubernetes Operator
 
Autoscaling Flink with Reactive Mode
Autoscaling Flink with Reactive ModeAutoscaling Flink with Reactive Mode
Autoscaling Flink with Reactive Mode
 
Dynamically Scaling Data Streams across Multiple Kafka Clusters with Zero Fli...
Dynamically Scaling Data Streams across Multiple Kafka Clusters with Zero Fli...Dynamically Scaling Data Streams across Multiple Kafka Clusters with Zero Fli...
Dynamically Scaling Data Streams across Multiple Kafka Clusters with Zero Fli...
 
One sink to rule them all: Introducing the new Async Sink
One sink to rule them all: Introducing the new Async SinkOne sink to rule them all: Introducing the new Async Sink
One sink to rule them all: Introducing the new Async Sink
 
Flink powered stream processing platform at Pinterest
Flink powered stream processing platform at PinterestFlink powered stream processing platform at Pinterest
Flink powered stream processing platform at Pinterest
 
Apache Flink in the Cloud-Native Era
Apache Flink in the Cloud-Native EraApache Flink in the Cloud-Native Era
Apache Flink in the Cloud-Native Era
 
Where is my bottleneck? Performance troubleshooting in Flink
Where is my bottleneck? Performance troubleshooting in FlinkWhere is my bottleneck? Performance troubleshooting in Flink
Where is my bottleneck? Performance troubleshooting in Flink
 
Using the New Apache Flink Kubernetes Operator in a Production Deployment
Using the New Apache Flink Kubernetes Operator in a Production DeploymentUsing the New Apache Flink Kubernetes Operator in a Production Deployment
Using the New Apache Flink Kubernetes Operator in a Production Deployment
 
The Current State of Table API in 2022
The Current State of Table API in 2022The Current State of Table API in 2022
The Current State of Table API in 2022
 
Flink SQL on Pulsar made easy
Flink SQL on Pulsar made easyFlink SQL on Pulsar made easy
Flink SQL on Pulsar made easy
 
Dynamic Rule-based Real-time Market Data Alerts
Dynamic Rule-based Real-time Market Data AlertsDynamic Rule-based Real-time Market Data Alerts
Dynamic Rule-based Real-time Market Data Alerts
 
Exactly-Once Financial Data Processing at Scale with Flink and Pinot
Exactly-Once Financial Data Processing at Scale with Flink and PinotExactly-Once Financial Data Processing at Scale with Flink and Pinot
Exactly-Once Financial Data Processing at Scale with Flink and Pinot
 
Processing Semantically-Ordered Streams in Financial Services
Processing Semantically-Ordered Streams in Financial ServicesProcessing Semantically-Ordered Streams in Financial Services
Processing Semantically-Ordered Streams in Financial Services
 
Tame the small files problem and optimize data layout for streaming ingestion...
Tame the small files problem and optimize data layout for streaming ingestion...Tame the small files problem and optimize data layout for streaming ingestion...
Tame the small files problem and optimize data layout for streaming ingestion...
 
Batch Processing at Scale with Flink & Iceberg
Batch Processing at Scale with Flink & IcebergBatch Processing at Scale with Flink & Iceberg
Batch Processing at Scale with Flink & Iceberg
 
Welcome to the Flink Community!
Welcome to the Flink Community!Welcome to the Flink Community!
Welcome to the Flink Community!
 
Practical learnings from running thousands of Flink jobs
Practical learnings from running thousands of Flink jobsPractical learnings from running thousands of Flink jobs
Practical learnings from running thousands of Flink jobs
 

Recently uploaded

NVIDIA at Breakthrough Discuss for Space Exploration
NVIDIA at Breakthrough Discuss for Space ExplorationNVIDIA at Breakthrough Discuss for Space Exploration
NVIDIA at Breakthrough Discuss for Space Exploration
Alison B. Lowndes
 
Perth MuleSoft Meetup July 2024
Perth MuleSoft Meetup July 2024Perth MuleSoft Meetup July 2024
Perth MuleSoft Meetup July 2024
Michael Price
 
Smart Mobility Market:Revolutionizing Transportation.pdf
Smart Mobility Market:Revolutionizing Transportation.pdfSmart Mobility Market:Revolutionizing Transportation.pdf
Smart Mobility Market:Revolutionizing Transportation.pdf
Market.us
 
Exchange, Entra ID, Conectores, RAML: Todo, a la vez, en todas partes
Exchange, Entra ID, Conectores, RAML: Todo, a la vez, en todas partesExchange, Entra ID, Conectores, RAML: Todo, a la vez, en todas partes
Exchange, Entra ID, Conectores, RAML: Todo, a la vez, en todas partes
jorgelebrato
 
FIDO Munich Seminar Blueprint for In-Vehicle Payment Standard.pptx
FIDO Munich Seminar Blueprint for In-Vehicle Payment Standard.pptxFIDO Munich Seminar Blueprint for In-Vehicle Payment Standard.pptx
FIDO Munich Seminar Blueprint for In-Vehicle Payment Standard.pptx
FIDO Alliance
 
Latest Tech Trends Series 2024 By EY India
Latest Tech Trends Series 2024 By EY IndiaLatest Tech Trends Series 2024 By EY India
Latest Tech Trends Series 2024 By EY India
EYIndia1
 
Choosing the Best Outlook OST to PST Converter: Key Features and Considerations
Choosing the Best Outlook OST to PST Converter: Key Features and ConsiderationsChoosing the Best Outlook OST to PST Converter: Key Features and Considerations
Choosing the Best Outlook OST to PST Converter: Key Features and Considerations
webbyacad software
 
UX Webinar Series: Aligning Authentication Experiences with Business Goals
UX Webinar Series: Aligning Authentication Experiences with Business GoalsUX Webinar Series: Aligning Authentication Experiences with Business Goals
UX Webinar Series: Aligning Authentication Experiences with Business Goals
FIDO Alliance
 
FIDO Munich Seminar: Strong Workforce Authn Push & Pull Factors.pptx
FIDO Munich Seminar: Strong Workforce Authn Push & Pull Factors.pptxFIDO Munich Seminar: Strong Workforce Authn Push & Pull Factors.pptx
FIDO Munich Seminar: Strong Workforce Authn Push & Pull Factors.pptx
FIDO Alliance
 
Finetuning GenAI For Hacking and Defending
Finetuning GenAI For Hacking and DefendingFinetuning GenAI For Hacking and Defending
Finetuning GenAI For Hacking and Defending
Priyanka Aash
 
Intel Unveils Core Ultra 200V Lunar chip .pdf
Intel Unveils Core Ultra 200V Lunar chip .pdfIntel Unveils Core Ultra 200V Lunar chip .pdf
Intel Unveils Core Ultra 200V Lunar chip .pdf
Tech Guru
 
Demystifying Neural Networks And Building Cybersecurity Applications
Demystifying Neural Networks And Building Cybersecurity ApplicationsDemystifying Neural Networks And Building Cybersecurity Applications
Demystifying Neural Networks And Building Cybersecurity Applications
Priyanka Aash
 
How UiPath Discovery Suite supports identification of Agentic Process Automat...
How UiPath Discovery Suite supports identification of Agentic Process Automat...How UiPath Discovery Suite supports identification of Agentic Process Automat...
How UiPath Discovery Suite supports identification of Agentic Process Automat...
DianaGray10
 
Self-Healing Test Automation Framework - Healenium
Dongwon Kim – A Comparative Performance Evaluation of Flink

  • 13. Node configuration (24 GB on each node)
    • 2 GB for daemons on every node: NodeManager (1 GB) and DataNode (1 GB)
    • MapReduce-2.7.1 / Tez-0.7.0: 13 GB for concurrently running MapTasks and ReduceTasks (1 GB each), with the ShuffleService hosted in the NodeManager
    • Spark-1.5.1: one Executor (12 GB) per node, with its internal memory layout and various managers, running 12 task slots on a thread pool; plus a Driver (1 GB)
    • Flink-0.9.1: one TaskManager (12 GB) per node, with its internal memory layout and various managers, running 12 task slots as task threads; plus a JobManager (1 GB)
    • At most 12 simultaneous tasks per node
  • 14. Outline • TeraSort for various engines • Experimental setup • Results & analysis • Flink is faster than other engines due to its pipelined execution • What else for better performance? • Conclusion
  • 15. How to read a swimlane graph & throughput graphs
    • Swimlane graph: tasks on the y-axis, time since job start (seconds) on the x-axis; each line is the duration of one task, and different stages use different line patterns
    • Example swimlane: 6 waves of 1st-stage tasks and 1 wave of 2nd-stage tasks; the two stages are hardly overlapped
    • Throughput graphs: cluster network throughput (in/out) and cluster disk throughput (disk read/disk write); in the example, no network traffic occurs during the 1st stage
  • 16. Result of sorting 80 GB/node (3.2 TB)
    • Completion times: MapReduce in Hadoop-2.7.1: 2157 sec; Tez-0.7.0: 1887 sec; Spark-1.5.1: 2171 sec; Flink-0.9.1: 1480 sec (map output compression turned on for Spark and Tez)
    • Flink (1 DataSource, 2 Partition, 3 SortPartition, 4 DataSink) is the fastest due to its pipelined execution
    • Tez and Spark do not overlap 1st and 2nd stages
    • MapReduce is slow despite overlapping stages
  • 17. Tez and Spark do not overlap 1st and 2nd stages
    • Tez & Spark throughput graphs: (1) the 2nd stage starts only after the 1st stage ends, so the network stays idle until then; (2) the output of the 1st stage is sent during shuffling; (3) disk write to HDFS occurs only after shuffling is done
    • Flink throughput graphs (1 DataSource, 2 Partition, 3 SortPartition, 4 DataSink): (1) network traffic occurs from the start; (2) write to HDFS occurs right after shuffling is done
  • 18. Tez does not overlap 1st and 2nd stages
    • Tez has parameters to control the degree of overlap (set as in the sketch below):
    • tez.shuffle-vertex-manager.min-src-fraction : 0.2
    • tez.shuffle-vertex-manager.max-src-fraction : 0.4
    • However, 2nd-stage tasks are scheduled early but launched late
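A minimal sketch of setting these two knobs programmatically; the property names are the ones quoted above, while the Scala/Hadoop-Configuration wrapping is illustrative (in practice they can equally go into tez-site.xml):

```scala
import org.apache.hadoop.conf.Configuration

// Sketch only: these fractions control when 2nd-stage (consumer) tasks are
// scheduled relative to completed 1st-stage (source) tasks.
val conf = new Configuration()
// Start scheduling consumer tasks once 20% of source tasks have finished...
conf.setFloat("tez.shuffle-vertex-manager.min-src-fraction", 0.2f)
// ...and have all of them scheduled once 40% of source tasks have finished.
conf.setFloat("tez.shuffle-vertex-manager.max-src-fraction", 0.4f)
```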
  • 19. Spark does not overlap 1st and 2nd stages
    • Spark cannot execute multiple stages simultaneously, as also mentioned in a VLDB paper (2015): “Spark doesn’t support the overlap between shuffle write and read stages. … Spark may want to support this overlap in the future to improve performance.”
    • Experimental results of that paper: Spark is faster than MapReduce for WordCount, K-means, and PageRank; MapReduce is faster than Spark for Sort
  • 20. MapReduce is slow despite overlapping stages
    • mapreduce.job.reduce.slowstart.completedMaps takes a value in [0.0, 1.0] (toggled as in the sketch below): 0.95 (no overlapping) gives 2385 sec, while the default 0.05 (overlapping) gives 2157 sec, only a 10% improvement
    • Wang likewise proposes to overlap Spark stages to achieve better utilization
    • 10%??? Why do Spark & MapReduce improve by just 10%?
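A minimal sketch of toggling this knob between the two runs above; note that the documented Hadoop key is spelled all-lowercase (completedmaps):

```scala
import org.apache.hadoop.conf.Configuration

// Fraction of map tasks that must complete before reducers may be launched.
val conf = new Configuration()
conf.setFloat("mapreduce.job.reduce.slowstart.completedmaps", 0.05f) // default: reducers start early (overlap)
// conf.setFloat("mapreduce.job.reduce.slowstart.completedmaps", 0.95f) // no overlapping: 2157 sec -> 2385 sec here
```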
  • 21. Data transfer between tasks of different stages
    • Traditional pull model (used in MapReduce, Spark, Tez): (1) the producer task writes its output file, holding partitions P1…Pn, to disk; (2) each consumer task requests its partition from a shuffle server; (3) the shuffle server sends the partition
    • The pull model incurs extra disk access and simultaneous disk access, and shuffling affects the performance of producers; this is what limits overlapping to the mere 10% improvement above
    • Pipelined data transfer (used in Flink): data moves from memory to memory, so Flink causes fewer disk accesses during shuffling
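To make the contrast concrete, here is a toy Scala model of the two transfer styles; it is not any engine's actual shuffle code, and the object/method names are illustrative (a Vector stands in for the producer's output file, blocking queues for the network channels):

```scala
import java.util.concurrent.LinkedBlockingQueue

object ShuffleModels {
  // Pull model: the producer materializes all partitions before any
  // consumer can fetch them, costing an extra disk write + read.
  def pullModel(records: Seq[(Int, String)], numPartitions: Int): Seq[Seq[String]] = {
    val outputFile = Array.fill(numPartitions)(Vector.empty[String])
    for ((p, rec) <- records) outputFile(p) :+= rec  // (1) write output to "disk"
    outputFile.toSeq                                 // (2)+(3) consumers request & receive Pi later
  }

  // Pipelined (push) model: each record goes straight into the consumer's
  // in-memory buffer as it is produced; in a real engine the consumer
  // thread drains its queue while the producer is still running.
  def pushModel(records: Seq[(Int, String)], numPartitions: Int): Seq[Seq[String]] = {
    val channels = Array.fill(numPartitions)(new LinkedBlockingQueue[String]())
    for ((p, rec) <- records) channels(p).put(rec)   // memory-to-memory transfer
    channels.toSeq.map(q => Iterator.continually(q.poll()).takeWhile(_ != null).toSeq)
  }
}
```

In the pull model every record is written and re-read just to cross the stage boundary; in the push model it crosses in memory, which is where the disk savings on the next slide come from.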
  • 22. Flink causes fewer disk accesses during shuffling
    • Total amount of disk read/write (equal to the area under the cluster disk throughput curves):
                                 MapReduce   Flink   diff.
        Total disk write (TB)          9.9     6.5     3.4
        Total disk read (TB)           8.1     6.9     1.2
    • The difference comes from shuffling; the read-side difference is smaller because shuffled data are sometimes read from the page cache
  • 23. Result of TeraSort with various data sizes (times in seconds; map output compression turned on for Spark and Tez)
        node data size (GB)   Flink   Spark   MapReduce    Tez
        10                      157     387         259    277
        20                      350     652         555    729
        40                      741    1135        1085   1709
        80                     1480    2171        2157   1887
        160                    3127    4927        4796   3950
    • The 80 GB/node row is what we’ve seen so far; the graph is on a log scale
  • 24. Result of HashJoin
    • 10 slave nodes; input generated with org.apache.tez.examples.JoinDataGen
    • Small dataset: 256 MB; large dataset: 240 GB (24 GB/node)
    • Result: Tez-0.7.0: 770 sec; Spark-1.5.1: 1538 sec; Flink-0.9.1: 378 sec, so Flink is ~2x faster than Tez and ~4x faster than Spark (no map output compression for either Spark or Tez, unlike in TeraSort)
    • Visit my blog for details
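For orientation, the pattern being benchmarked is a hash join: build an in-memory table on the small side, then stream the large side past it. A generic sketch follows; each engine's actual implementation differs, and all names and types here are illustrative:

```scala
// Build phase: the small (256 MB) side must fit in memory; here one entry
// per key, as with a key-unique dimension table.
// Probe phase: a single pass over the large (240 GB) side, no sorting needed.
def hashJoin[K, A, B](small: Iterator[(K, A)], large: Iterator[(K, B)]): Iterator[(K, (A, B))] = {
  val buildSide: Map[K, A] = small.toMap
  large.flatMap { case (k, b) => buildSide.get(k).map(a => (k, (a, b))) }
}

// Usage:
// hashJoin(Iterator("x" -> 1), Iterator("x" -> "l", "y" -> "r")).toList
//   == List(("x", (1, "l")))
```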
  • 25. Result of HashJoin with swimlane & throughput graphs
    • Flink swimlane (1 DataSource, 2 DataSource, 3 Join, 4 DataSink): the 2nd and 3rd operators overlap, while the other engines’ swimlanes show idle periods between stages
    • [Cluster network and disk throughput graphs, annotated with total volumes of 0.24 TB, 0.41 TB, 0.60 TB, 0.84 TB, 0.68 TB, and 0.74 TB]
  • 26. Flink’s shortcomings
    • No support for map output compression, since small data blocks are pipelined between operators
    • Job-level fault tolerance only, since shuffle data are not materialized
    • Low disk throughput during the post-shuffling phase
  • 27. Low disk throughput during the post-shuffling phase
    • Possible reason: sorting records from small files; concurrent disk access to small files causes too many disk seeks and thus low disk throughput
    • The other engines merge records from larger files than Flink does
    • “Eager pipelining moves some of the sorting work from the mapper to the reducer” (from MapReduce Online, NSDI 2010)
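What this phase essentially does is a k-way merge of sorted runs, sketched below under the assumption that each run (file) is exposed as a sorted iterator; the function name is illustrative. The more and smaller the runs, the more often consecutive heap pops switch between files, which on hard disks turns into seeks:

```scala
import scala.collection.mutable

// k-way merge of sorted runs via a min-heap (Scala's PriorityQueue is a
// max-heap, hence the reversed ordering). Each pop may switch to a
// different run; with many small runs that means a disk seek per switch.
def mergeSortedRuns(runs: Seq[Iterator[Long]]): Iterator[Long] = new Iterator[Long] {
  private val its = runs.toArray
  private val heap =
    mutable.PriorityQueue.empty[(Long, Int)](Ordering.by[(Long, Int), Long](_._1).reverse)
  for ((it, i) <- its.zipWithIndex if it.hasNext) heap.enqueue((it.next(), i))

  def hasNext: Boolean = heap.nonEmpty
  def next(): Long = {
    val (v, i) = heap.dequeue()                          // smallest head among all runs
    if (its(i).hasNext) heap.enqueue((its(i).next(), i)) // refill from the same run
    v
  }
}

// mergeSortedRuns(Seq(Iterator(1L, 4L), Iterator(2L, 3L))).toList == List(1L, 2L, 3L, 4L)
```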
  • 28. Outline • TeraSort for various engines • Experimental setup • Results & analysis • What else for better performance? • Conclusion
  • 29. MR2 – another MapReduce engine
    • From my PhD thesis (MR2: Fault Tolerant MapReduce with the Push Model), developed for 3 years
    • Provides the user interface of Hadoop MapReduce: no DAG support, no in-memory computation, no iterative computation
    • Characteristics: the push model plus fault tolerance, and techniques to boost HDD throughput (prefetching for mappers, preloading for reducers)
  • 30. MR2 pipeline
    • 7 types of components with memory buffers: mappers & reducers apply user-defined functions; the prefetcher & preloader eliminate concurrent disk access; the sender & receiver & merger implement MR2’s push model
    • Various buffers pass data between components without disk I/O
    • Minimum disk access: 2 disk reads & 2 disk writes, plus 1 extra disk write for fault tolerance
  • 31. Prefetcher & mappers
    • The prefetcher loads data for multiple mappers, so mappers do not read input from disks
    • In Hadoop MapReduce, 2 mappers on a node read their blocks (Blk1, Blk2) from disk concurrently; in MR2, the prefetcher reads Blk1 through Blk4 sequentially while the 2 mappers consume each block from memory, giving higher disk throughput and CPU utilization (see the sketch below)
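A minimal sketch of the prefetching idea, not MR2's actual code (the function name, queue capacity, and timeouts are assumptions): one thread owns the disk and reads blocks sequentially into a bounded buffer, while mapper threads take whole blocks from memory, so map CPU work overlaps disk I/O and two mappers never seek against each other:

```scala
import java.util.concurrent.{ArrayBlockingQueue, TimeUnit}

def runWithPrefetch(blocks: Iterator[Array[Byte]], numMappers: Int)(map: Array[Byte] => Unit): Unit = {
  val buffer = new ArrayBlockingQueue[Array[Byte]](4) // small read-ahead window
  @volatile var done = false

  // The prefetcher is the only reader of the disk: purely sequential access.
  val prefetcher = new Thread(() => {
    blocks.foreach(buffer.put)
    done = true
  })
  prefetcher.start()

  // Mappers consume whole blocks from memory, never touching the disk.
  val mappers = (1 to numMappers).map { _ =>
    new Thread(() => {
      var blk = buffer.poll(100, TimeUnit.MILLISECONDS)
      while (blk != null || !done) {      // keep draining until the prefetcher finishes
        if (blk != null) map(blk)         // apply the user-defined map function
        blk = buffer.poll(100, TimeUnit.MILLISECONDS)
      }
    })
  }
  mappers.foreach(_.start())
  (prefetcher +: mappers).foreach(_.join())
}
```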
  • 32. Push model in MR2
    • Node-to-node network connections for pushing data, to reduce the number of network connections
    • Data transfer from memory buffers: mappers store spills in a send buffer, and the sender pushes the spills to the reducer side (similar to Flink’s pipelined execution, except that MR2 locally sorts data before pushing it, similar to Spark)
    • Fault tolerance (can be turned on/off): the input ranges of each spill are known to the master so lost spills can be reproduced, and spills are stored on disk for fast recovery (the extra disk write)
  • 33. Receiver & merger & preloader & reducer (the receiver’s managed memory)
    • The merger produces one file from different partitions’ data: it sorts each partition’s data and then interleaves the partitions into groups
    • The preloader preloads each group into the reduce buffer (1 disk access serves 4 partitions), so reducers do not read data directly from disks
    • Thanks to the preloader, MR2 eliminates concurrent disk reads by reducers
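A sketch of that interleaved layout under simplifying assumptions (integer records, one group per spill; the Group type and function names are illustrative): each group concatenates a sorted chunk of every partition, so loading one group is a single sequential read that serves all reducers at once:

```scala
// chunks(p) holds the sorted records of partition p within this group.
case class Group(chunks: Vector[Vector[Int]])

// Merger: sort each partition's data, then interleave the partitions.
def buildGroups(spills: Seq[Map[Int, Seq[Int]]], numPartitions: Int): Seq[Group] =
  spills.map { spill =>
    Group(Vector.tabulate(numPartitions)(p => spill.getOrElse(p, Seq.empty).sorted.toVector))
  }

// Preloader: one "disk access" per group feeds every partition's reduce buffer.
def preload(g: Group, reduceBuffer: Array[Vector[Int]]): Unit =
  for (p <- reduceBuffer.indices) reduceBuffer(p) ++= g.chunks(p)
```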
  • 34. Result of sorting 80 GB/node (3.2 TB) with MR2
        Engine                        Time (sec)   MR2 speedup
        MapReduce in Hadoop-2.7.1           2157          2.42
        Tez-0.7.0                           1887          2.12
        Spark-1.5.1                         2171          2.44
        Flink-0.9.1                         1480          1.66
        MR2                                  890             -
  • 35. Disk & network throughput (Flink vs. MR2)
    • 1. DataSource / Mapping: the prefetcher is effective, and MR2 shows higher disk throughput
    • 2. Partition / Shuffling: records to shuffle are generated faster in MR2
    • 3. DataSink / Reducing: the preloader is effective, with almost 2x throughput
  • 36. PUMA (PUrdue MApreduce benchmarks suite)
    • Experimental results using 10 nodes
  • 37. Outline • TeraSort for various engines • Experimental setup • Experimental results & analysis • What else for better performance? • Conclusion
  • 38. Conclusion
    • Flink’s pipelined execution serves both batch and streaming processing, and for TeraSort & HashJoin it is even better than the other batch processing engines
    • Shortcomings due to pipelined execution: no fine-grained fault tolerance, no map output compression, low disk throughput during the post-shuffling phase