Page1
Hive: Loading Data
June 2015
Version 2.0
Ben Leonhardi
Page2
Agenda
• Introduction
• ORC files
• Partitioning vs. Predicate Pushdown
• Loading data
• Dynamic Partitioning
• Bucketing
• Optimize Sort Dynamic Partitioning
• Manual Distribution
• Miscellaneous
• Sorting and Predicate pushdown
• Debugging
• Bloom Filters
Page3
Introduction
• Effectively storing data in Hive
• Reducing IO
• Partitioning
• ORC files with predicate pushdown
• Partitioned tables
• Static partition loading
– One partition is loaded at a time
– Good for continuous operation
– Not suitable for initial loads
• Dynamic partition loading
– Data is distributed between partitions dynamically
• Data Sorting for better predicate pushdown
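A minimal sketch of the static partition loading described above; the staging table SALES_STAGE and its columns are assumptions, not from the deck:
CREATE TABLE ORC_SALES
( CLIENTID INT, DT DATE, REV DOUBLE, PROFIT DOUBLE, COMMENT STRING )
PARTITIONED BY ( COUNTRY STRING )
STORED AS ORC;
-- static load: the partition value is given explicitly, one partition per statement
INSERT INTO TABLE ORC_SALES PARTITION ( COUNTRY = 'EN' )
SELECT CLIENTID, DT, REV, PROFIT, COMMENT FROM SALES_STAGE WHERE COUNTRY = 'EN';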
Page4
ORCFile – Columnar Storage for Hive
Columnar format enables high compression and high performance.
• ORC is an optimized, compressed, columnar storage format
• Only needed columns are read
• Blocks of data can be skipped using indexes and predicate pushdown
Page5
Partitioning Hive
• Hive tables can be value partitioned
– Each partition is associated with a folder in HDFS
– All partitions have an entry in the Hive Catalog
– The Hive optimizer will parse the query for filter conditions and skip unneeded partitions
• Usage consideration
– Too many partitions can lead to bad performance in the Hive Catalog and Optimizer
– No range partitioning / no continuous values
– Normally date partitioned by data load
• /apps/hive/warehouse
• cust.db
• customers
• sales
• day=20150801
• day=20150802
• day=20150803
• …
Warehouse folder in HDFS
Hive Databases have folders ending in .db
Unpartitioned tables have a single folder.
Partitioned tables have a subfolder for each partition.
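A sketch of the day-partitioned sales table behind the folder layout above; the column list and the staging query are assumptions:
CREATE TABLE cust.sales
( clientid INT, rev DOUBLE, profit DOUBLE )
PARTITIONED BY ( day STRING )
STORED AS ORC;
-- each loaded day becomes one subfolder, e.g. .../cust.db/sales/day=20150801
INSERT INTO TABLE cust.sales PARTITION ( day = '20150801' )
SELECT clientid, rev, profit FROM sales_stage WHERE load_day = '20150801';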
Page6
Predicate Pushdown
• ORC ( and other storage formats ) support predicate pushdown
– Query filters are pushed down into the storage handler
– Blocks of data can be skipped without reading them from HDFS based on ORC index
SELECT SUM (PROFIT) FROM SALES WHERE DAY = 03
Data (DAY, CUST, PROFIT), stored in three blocks:
Block 1: 01 Klaus 35 | 01 Max 30 | 01 John 20
Block 2: 02 John 34 | 03 Max 10 | 04 Klaus 20
Block 3: 04 Max 45 | 05 Mark 20
Index per block (DAY_MIN, DAY_MAX, PROFIT_MIN, PROFIT_MAX):
Block 1: 01 01 20 35
Block 2: 02 04 10 34
Block 3: 04 05 20 45
Only Block 2 can contain rows with DAY = 03.
Blocks 1 and 3 can be skipped
Page7
Partitioning vs. Predicate Pushdown
• Both reduce the data that needs to be read
• Partitioning works at split generation, no need to start containers
• Predicate pushdown is applied during file reads
• Partitioning is applied in the split generation/optimizer
• Impact on Optimizer and HCatalog for large number of partitions
• Thousands of partitions will result in performance problems
• Predicate Pushdown needs to read the file footers
• Containers are allocated even though they may finish very quickly
• No overhead in Optimizer/Catalog
• The newest Hive release ( 1.2 ) can apply PPD at split generation time ( see the example below )
• hive.exec.orc.split.strategy=BI, means never read footers (& fire jobs fast)
• hive.exec.orc.split.strategy=ETL, always read footers and split as fine as you want
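A small sketch of switching the split strategy per session; the table and query are illustrative:
-- many small files, fast job start: do not read footers at split generation
set hive.exec.orc.split.strategy=BI;
SELECT COUNT(*) FROM ORC_SALES WHERE COUNTRY = 'EN';
-- few large files: read footers and split as finely as needed
set hive.exec.orc.split.strategy=ETL;
SELECT COUNT(*) FROM ORC_SALES WHERE COUNTRY = 'EN';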
Page8
Partitioning and Predicate Pushdown
SELECT * FROM TABLE WHERE COUNTRY = 'EN' AND DATE = 2015
Diagram: the table is partitioned on COUNTRY, so only the folder for partition EN is read; partition DE is skipped entirely at split generation.
Within partition EN, ORC BLK1 holds years 2008-2013, BLK2 holds 2013-2015, and BLK3 holds only 2015.
ORC files keep index information on their content, so blocks whose min/max range cannot contain 2015 ( BLK1 ) are skipped based on the index.
Page9
Agenda
• Introduction
• ORC files
• Partitioning vs. Predicate Pushdown
• Loading data
• Dynamic Partitioning
• Bucketing
• Optimize Sort Dynamic Partitioning
• Manual Distribution
• Miscellaneous
• Sorting and Predicate pushdown
• Debugging
• Bloom Filters
Page10
Loading Data with Dynamic Partitioning
CREATE TABLE ORC_SALES
( CLIENTID INT, DT DATE, REV DOUBLE, PROFIT DOUBLE, COMMENT STRING )
PARTITIONED BY ( COUNTRY STRING )
STORED AS ORC;
INSERT INTO TABLE ORC_SALES PARTITION (COUNTRY) SELECT * FROM DEL_SALES;
• Dynamic partitioning could create millions of partitions for bad partition keys
• Parameters exist that restrict the creation of dynamic partitions
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode = nonstrict;
set hive.exec.max.dynamic.partitions.pernode=100000;
set hive.exec.max.dynamic.partitions=100000;
set hive.exec.max.created.files=100000;
Most of these settings are already enabled with good values in HDP 2.2+
Dynamic partition columns need to be the last columns in your dataset
Change the order in the SELECT list if necessary ( see the sketch below )
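As a sketch of the column-order note above, assuming DEL_SALES stores COUNTRY as its first column:
-- the dynamic partition column COUNTRY is moved to the end of the SELECT list
INSERT INTO TABLE ORC_SALES PARTITION (COUNTRY)
SELECT CLIENTID, DT, REV, PROFIT, COMMENT, COUNTRY FROM DEL_SALES;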
Page11
Dynamic Partition Loading
• One file per Reducer/Mapper
• Standard Load will use Map tasks to write data. One map task per input block/split
Diagram: five input blocks ( Block1-Block5 ) are read by five map tasks ( Map1-Map5 ); each map task writes its own small file ( b1-b5 ) into every partition folder it sees ( DE, EN, FR, SP ), so 5 maps x 4 partitions = 20 small files.
Page12
Small files
• A large number of writers combined with a large number of partitions results in small files
• Files with 1-10 blocks of data are more efficient for HDFS
• ORC compression is not very efficient on small files
• The ORC Writer keeps one Writer object open for each partition it encounters
• RAM is needed for one stripe in every open file / column
• Too many writers result in small stripes ( down to 5000 rows )
• If you run into memory problems you can increase the task RAM or increase the ORC
memory pool percentage
set hive.tez.java.opts="-Xmx3400m";
set hive.tez.container.size = 4096;
set hive.exec.orc.memory.pool = 1.0;
Page13
Loading Data Using Distribution
• For a large number of partitions, load data through reducers
• One or more reducers are associated with a partition through data distribution
• Beware of hash conflicts ( two partitions being mapped to the same reducer by the hash function )
Diagram: three map tasks read mixed rows such as ( EN, 2015 ), ( DE, 2015 ), ( EN, 2014 ), … and hash the partition key of each row ( HASH(EN) -> 1, HASH(DE) -> 0 ).
All EN rows are sent to reducer Red1, which writes partition EN; all DE rows are sent to reducer Red0, which writes partition DE.
Page14
Bucketing
• Hive tables can be bucketed using the CLUSTERED BY keyword
– One file/reducer per bucket
– Buckets can be sorted
– Additional advantages like bucket joins and sampling
• By default there is one reducer for each bucket across all partitions
– Performance problems for large loads with dynamic partitioning
– ORC Writer memory issues
• Enforce Bucketing and Sorting in Hive
set hive.enforce.sorting=true;
set hive.enforce.bucketing=true;
Page15
Bucketing Example
CREATE TABLE ORC_SALES
( CLIENTID INT, DT DATE, REV DOUBLE, PROFIT DOUBLE, COMMENT STRING )
PARTITIONED BY ( COUNTRY STRING )
CLUSTERED BY ( DT ) SORTED BY ( DT ) INTO 31 BUCKETS
STORED AS ORC;
INSERT INTO TABLE ORC_SALES PARTITION (COUNTRY) SELECT * FROM DEL_SALES;
Diagram: each partition ( DE, EN, FR ) holds one bucket file per DT bucket ( D1, D2, D3, D4, … ).
One reducer per bucket ( Red DT1, Red DT2, Red DT3, … ) writes that bucket across all partitions.
Page16
Optimized Dynamic Sorted Partitioning
• Enable optimized sorted partitioning to fix small file creation
– Creates one reducer for each partition AND bucket
– If you have 5 partitions with 4 buckets you will have 20 reducers
• Hash conflicts mean that you can still have reducers handling more than one file
– Data is sorted by partition/bucket key
– ORCWriter closes files after encountering new keys
- only one open file at a time
- reduced memory needs
• Can be enabled with
set hive.optimize.sort.dynamic.partition=true;
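A minimal sketch of a dynamically partitioned load with the setting above switched on ( table names as in the earlier example ):
set hive.optimize.sort.dynamic.partition=true;
-- rows are shuffled and sorted by the partition key; one reducer writes each partition's file
INSERT INTO TABLE ORC_SALES PARTITION (COUNTRY)
SELECT * FROM DEL_SALES;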
Page17
Optimized Dynamic Sorted Partitioning
• Optimized sorted partitioning creates one reducer per partition * bucket
Diagram: five input blocks are read by five map tasks; rows are shuffled so that one reducer per partition ( Red1-Red4 ) writes a single output file ( Out1-Out4 ) for its partition ( DE, EN, FR, SP ).
Hash conflicts can happen even though there is one reducer for each partition.
• This is the reason data is sorted
• The reducer can close the ORC writer after each key
Page18
Miscellaneous
• A small number of partitions can lead to slow loads
• The solution is bucketing, which increases the number of reducers
• This can also help with predicate pushdown
• Partition by country, bucket by client id, for example ( see the sketch below )
• On a big system you may have to increase the max. number of reducers
set hive.exec.reducers.max=1000;
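A sketch of the partition-by-country, bucket-by-client-id layout mentioned above; the bucket count of 32 is an assumption:
CREATE TABLE ORC_SALES
( CLIENTID INT, DT DATE, REV DOUBLE, PROFIT DOUBLE, COMMENT STRING )
PARTITIONED BY ( COUNTRY STRING )
CLUSTERED BY ( CLIENTID ) SORTED BY ( CLIENTID ) INTO 32 BUCKETS
STORED AS ORC;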
Page19
Manual Distribution
• Fine grained control over distribution may be needed
• DISTRIBUTE BY keyword allows control over the distribution algorithm
• For example, DISTRIBUTE BY GENDER will split the data stream into two sub-streams
• Does not define the number of reducers
– Specify a fitting number with
set mapred.reduce.tasks=2
• For dynamic partitioning include the partition key in the distribution
• Any additional subkeys result in multiple files per partition folder ( not unlike bucketing )
• For fast load try to maximize number of reducers in cluster
Page20
Distribute By
SET MAPRED.REDUCE.TASKS = 8;
INSERT INTO TABLE ORC_SALES PARTITION ( COUNTRY ) SELECT * FROM DEL_SALES
DISTRIBUTE BY COUNTRY, GENDER;
Diagram: four map tasks read the input blocks; rows are distributed by ( COUNTRY, GENDER ) to eight reducers ( Red1-Red8 ), one per key combination ( DE/M, DE/F, EN/M, EN/F, FR/M, FR/F, SP/M, SP/F ), each writing one file into its partition folder ( DE, EN, FR, SP ).
A hash conflict can still map two key combinations to the same reducer.
The number of reducers and the number of distribution keys do not have to be identical, but matching them is good practice
If you run into hash conflicts, changing the distribution key may help ( for example M/F -> 0/1, as sketched below )
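A sketch of remapping the distribution key as suggested above; GENDER is assumed to be a column of DEL_SALES, and the CASE expression is just one way to map M/F to 0/1:
SET mapred.reduce.tasks = 8;
-- map M/F to 0/1 so the combined key spreads more evenly over the 8 reducers
INSERT INTO TABLE ORC_SALES PARTITION ( COUNTRY )
SELECT CLIENTID, DT, REV, PROFIT, COMMENT, COUNTRY FROM DEL_SALES
DISTRIBUTE BY COUNTRY, CASE WHEN GENDER = 'M' THEN 0 ELSE 1 END;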
Page21
Agenda
• Introduction
• ORC files
• Partitioning vs. Predicate Pushdown
• Loading data
• Dynamic Partitioning
• Bucketing
• Optimize Sort Dynamic Partitioning
• Manual Distribution
• Miscellaneous
• Sorting and Predicate pushdown
• Debugging
• Bloom Filters
Page22
SORT BY for Predicate Pushdown ( PPD )
• ORC can skip stripes ( and 10k sub-blocks ) of data based on ORC footers
• Data can be skipped based on min/max values and bloom filters
• In warehouse environments data is normally sorted by date
• For initial loads or other predicates data can be sorted during load
• Two ways to sort data: ORDER BY ( global sort, slow ) and SORT BY ( sort per reducer )
– You want SORT BY for PPD: it is faster, and cross-file sorting does not help PPD
• Can be combined with Distribution, Partitioning, Bucketing to optimize the effect ( see the combined sketch below )
CREATE TABLE ORC_SALES
( CLIENTID INT, DT DATE, REV DOUBLE, PROFIT DOUBLE, COMMENT STRING )
STORED AS ORC;
INSERT INTO TABLE ORC_SALES SELECT * FROM DEL_SALES SORT BY DT;
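A sketch combining distribution with SORT BY as the bullet above suggests ( table names as in the example, the reducer count is an assumption ):
set mapred.reduce.tasks=8;
INSERT INTO TABLE ORC_SALES
SELECT * FROM DEL_SALES
DISTRIBUTE BY CLIENTID
SORT BY DT;
-- each reducer writes one file whose rows are sorted by DT, keeping min/max indexes tight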
Page23
Sorting when Inserting into Table
Diagram: partition DE holds files DE 1 and DE 2, partition EN holds files EN 1 and EN 2; within each file the rows are sorted by date ( e.g. DE 1 covers 2015-01 … 2015-05, EN 2 covers 2015-01 … 2015-05 ).
SELECT * FROM DATA_ORC WHERE dt = '2015-02'
Files are divided into stripes of x MB and blocks of 10000 rows
Only the blocks whose min/max range covers 2015-02 have to be read
This requires sorting
Page24
Checking Results
• Use hive --orcfiledump to check results in ORC files
hive --orcfiledump /apps/hive/warehouse/table/dt=3/00001_0
… Compression: ZLIB …
Stripe Statistics:
Stripe 1:
Column 0: count: 145000
Column 1: min: 1 max: 145000
…
Stripe 2:
Column 0: count: 144000
Column 1: min: 145001 max: 289000
…
Check the number of stripes and the number of rows
- small stripes ( 5000 rows ) indicate a memory problem during load
Data should be sorted on your predicate columns
Page25
Bloom Filters
• New feature in Hive 1.2
• A hash index bitmap of values in a column
• If the bit for hash(value) is 0, no row in the stripe can contain your value
• If the bit for hash(value) is 1, it is possible that the stripe contains your value
• Hive can skip stripes without needing to sort the data
• Helpful because sorting by multiple columns is hard
CREATE TABLE ORC_SALES ( ID INT, Client INT, DT INT… )
STORED AS ORC TBLPROPERTIES
("orc.bloom.filter.columns"="Client,DT");
The parameter needs a case-sensitive, comma-separated list of columns
Page26
Bloom Filters
• Bloom Filters are good
• If you have multiple predicate columns
• If your predicate columns are not suitable for sorting ( URLs, hash values, … )
• If you cannot sort the data ( daily ingestion, filter by clientid )
• Bloom Filters are bad
• If every stripe contains your value
– low cardinality fields like country
– Events that happen regularly ( client buys something daily )
• Check if you successfully created a bloom filter index with orcfiledump
hive --orcfiledump --rowindex 3,4,5 /apps/hive/…
You only see bloom filter indexes if you specify the columns you want to see
Page27
Verify ORC indexes
• Switch on additional information like row counts going in/out of tasks
set hive.tez.print.exec.summary=true;
• Run query with/without Predicate Pushdown to compare row counts:
set hive.optimize.index.filter=false;
// run query
set hive.optimize.index.filter=true;
// run query
// compare results
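A concrete sketch of the comparison; the query and table are illustrative:
set hive.tez.print.exec.summary=true;
set hive.optimize.index.filter=false;
SELECT COUNT(*) FROM ORC_SALES WHERE DT = '2015-02-01';
set hive.optimize.index.filter=true;
SELECT COUNT(*) FROM ORC_SALES WHERE DT = '2015-02-01';
-- with PPD on, the row count going into the map tasks should drop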
Page28
Summary
• Partitioning and Predicate Pushdown can greatly enhance query performance
• Predicate Pushdown enhances Partitioning, it does not replace it
• Too many partitions lead to performance problems
• Dynamic Partition loading can lead to problems
• Normally Optimized Dynamic Sorted Partitioning solves these problems
• Sometimes manual distribution can be beneficial
• Carefully design your table layout and data loading
• Sorting is critical for effective predicate pushdown
• If sorting is not an option, bloom filters can be a solution
• Verify data layout with orcfiledump and debug information