This presentation describes how to efficiently load data into Hive. I cover partitioning, predicate pushdown, ORC file optimization, and different loading schemes.
3. Introduction
• Effectively storing data in Hive
• Reducing IO
• Partitioning
• ORC files with predicate pushdown
• Partitioned tables
• Static partition loading
– One partition is loaded at a time
– Good for continuous operation
– Not suitable for initial loads
• Dynamic partition loading
– Data is distributed among the partitions dynamically (both loading schemes are sketched below)
• Data Sorting for better predicate pushdown
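A minimal sketch of the two loading schemes (table and column names are illustrative, not taken from this deck):
-- Static partition loading: the target partition is named explicitly
INSERT OVERWRITE TABLE SALES PARTITION (`day`='20150801')
SELECT CLIENTID, REV, PROFIT FROM STAGING_SALES WHERE LOAD_DAY = '20150801';
-- Dynamic partition loading: Hive derives the partition value from the last SELECT column
SET hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE SALES PARTITION (`day`)
SELECT CLIENTID, REV, PROFIT, LOAD_DAY AS `day` FROM STAGING_SALES;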
4. ORCFile – Columnar Storage for Hive
Columnar format enables high compression and high performance.
• ORC is an optimized, compressed, columnar storage format
• Only needed columns are read
• Blocks of data can be skipped using indexes and predicate pushdown
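As a sketch (table name and columns are illustrative), a table is stored as ORC by adding STORED AS ORC, and a query that touches only some columns reads only those column streams:
CREATE TABLE SALES_ORC
( CLIENTID INT, DT DATE, PROFIT DOUBLE )
STORED AS ORC
TBLPROPERTIES ("orc.compress"="ZLIB");
-- Only the DT and PROFIT column streams are read
SELECT DT, SUM(PROFIT) FROM SALES_ORC GROUP BY DT;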
5. Partitioning Hive
• Hive tables can be value partitioned
– Each partition is associated with a folder in HDFS
– All partitions have an entry in the Hive Catalog
– The Hive optimizer will parse the query for filter conditions and skip unneeded partitions
• Usage considerations
– Too many partitions can lead to bad performance in the Hive Catalog and Optimizer
– No range partitioning / no continuous values
– Normally date partitioned by data load
Warehouse folder layout in HDFS:
• /apps/hive/warehouse (the warehouse folder in HDFS)
  • cust.db (Hive databases have folders ending in .db)
    • customers (unpartitioned tables have a single folder)
    • sales (partitioned tables have a subfolder for each partition)
      • day=20150801
      • day=20150802
      • day=20150803
      • …
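A sketch of how this layout maps to DDL and partition pruning (column names and types are assumed; only the day partition key comes from the layout above):
CREATE TABLE CUST.SALES
( CLIENTID INT, REV DOUBLE, PROFIT DOUBLE )
PARTITIONED BY ( `day` STRING )
STORED AS ORC;
-- Only the folder day=20150802 is read; all other partitions are skipped at planning time
SELECT SUM(PROFIT) FROM CUST.SALES WHERE `day` = '20150802';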
6. Predicate Pushdown
• ORC ( and other storage formats ) support predicate pushdown
– Query filters are pushed down into the storage handler
– Blocks of data can be skipped without reading them from HDFS based on ORC index
SELECT SUM (PROFIT) FROM SALES WHERE DAY = 03
Sales data in one ORC file (three blocks):
DAY  CUST   PROFIT
01   Klaus  35
01   Max    30
01   John   20
02   John   34
03   Max    10
04   Klaus  20
04   Max    45
05   Mark   20

ORC index with min/max values per block:
DAY_MIN  DAY_MAX  PROFIT_MIN  PROFIT_MAX
01       01       20          35
02       04       10          34
04       05       20          45

Only Block 2 can contain rows with DAY = 03.
Blocks 1 and 3 can be skipped.
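Predicate pushdown into ORC is controlled by settings like the following (shown as a reminder; the defaults depend on the Hive version):
set hive.optimize.ppd=true;          -- push filter predicates down to the storage handler
set hive.optimize.index.filter=true; -- use the ORC min/max index to skip blocks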
7. Partitioning vs. Predicate Pushdown
• Both reduce the data that needs to be read
• Partitioning is applied at split generation time in the optimizer
– Unneeded partitions never start containers
– Impact on the Optimizer and HCatalog for large numbers of partitions
– Thousands of partitions will result in performance problems
• Predicate pushdown is applied during file reads
– The file footers need to be read
– Containers are allocated even though they may finish very quickly
– No overhead in the Optimizer/Catalog
• Newer Hive builds (1.2) can apply PPD at split generation time
– hive.exec.orc.split.strategy=BI: never read footers (fire jobs fast)
– hive.exec.orc.split.strategy=ETL: always read footers and split as fine as you want
8. Partitioning and Predicate Pushdown
SELECT * FROM TABLE WHERE COUNTRY = 'EN' AND DATE = 2015
[Diagram] The table is partitioned on COUNTRY, so only the folder for partition EN is read and partition DE is skipped entirely. Within partition EN, the ORC files keep index information on their content, so blocks whose year range cannot contain 2015 are skipped and only the matching blocks are read by the map tasks.
10. Loading Data with Dynamic Partitioning
CREATE TABLE ORC_SALES
( CLIENTID INT, DT DATE, REV DOUBLE, PROFIT DOUBLE, COMMENT STRING )
PARTITIONED BY ( COUNTRY STRING )
STORED AS ORC;
INSERT INTO TABLE ORC_SALES PARTITION (COUNTRY) SELECT * FROM DEL_SALES;
• Dynamic partitioning could create millions of partitions for bad partition keys
• Parameters exist that restrict the creation of dynamic partitions
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode = nonstrict;
set hive.exec.max.dynamic.partitions.pernode=100000;
set hive.exec.max.dynamic.partitions=100000;
set hive.exec.max.created.files=100000;
Most of these settings already have good values in HDP 2.2+.
Dynamic partition columns need to be the last columns in your dataset; change the order in the SELECT list if necessary (see the sketch below).
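A sketch of such a reordered SELECT, assuming DEL_SALES carries the same columns plus COUNTRY; the partition column is listed last:
INSERT INTO TABLE ORC_SALES PARTITION (COUNTRY)
SELECT CLIENTID, DT, REV, PROFIT, COMMENT, COUNTRY FROM DEL_SALES;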
11. Dynamic Partition Loading
• One file per Reducer/Mapper
• Standard load will use map tasks to write data. One map task per input block/split
[Diagram] Five input blocks are read by five map tasks (Map1–Map5); each map task writes its own file (b1–b5) into every partition it encounters (DE, EN, FR, SP), producing 20 small files.
12. Small files
• A large number of writers combined with a large number of partitions results in small files
• Files with 1-10 blocks of data are more efficient for HDFS
• ORC compression is not very efficient on small files
• The ORC Writer keeps one Writer object open for each partition it encounters
• RAM is needed for one stripe in every open file / column
• Too many writers result in small stripes ( down to 5000 rows )
• If you run into memory problems you can increase the task RAM or the ORC memory pool percentage
set hive.tez.java.opts="-Xmx3400m";
set hive.tez.container.size = 4096;
set hive.exec.orc.memory.pool = 1.0;
13. Loading Data Using Distribution
• For a large number of partitions, load data through reducers (see the sketch below)
• One or more reducers are associated with a partition through data distribution
• Beware of hash conflicts ( two partitions being mapped to the same reducer by the hash function )
[Diagram] Each map task hashes the partition key of every row (e.g. HASH(DE) -> 0, HASH(EN) -> 1) and sends the row to the corresponding reducer; reducer 0 collects all DE rows and writes partition DE, reducer 1 collects all EN rows and writes partition EN, so each partition is written by a single reducer.
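A minimal sketch of such a reducer-based load, using the ORC_SALES and DEL_SALES tables from the earlier slides; distributing on the partition key sends all rows of a country to the same reducer, which then writes that partition's files:
INSERT INTO TABLE ORC_SALES PARTITION (COUNTRY)
SELECT CLIENTID, DT, REV, PROFIT, COMMENT, COUNTRY FROM DEL_SALES
DISTRIBUTE BY COUNTRY;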
14. Bucketing
• Hive tables can be bucketed using the CLUSTERED BY keyword
– One file/reducer per bucket
– Buckets can be sorted
– Additional advantages like bucket joins and sampling
• By default there is one reducer for each bucket across all partitions
– Performance problems for large loads with dynamic partitioning
– ORC Writer memory issues
• Enforce Bucketing and Sorting in Hive
set hive.enforce.sorting=true;
set hive.enforce.bucketing=true;
15. Bucketing Example
CREATE TABLE ORC_SALES
( CLIENTID INT, DT DATE, REV DOUBLE, PROFIT DOUBLE, COMMENT STRING )
PARTITIONED BY ( COUNTRY STRING )
CLUSTERED BY ( DT ) SORTED BY ( DT ) INTO 31 BUCKETS
STORED AS ORC;
INSERT INTO TABLE ORC_SALES PARTITION (COUNTRY) SELECT * FROM DEL_SALES;
[Diagram] One reducer per DT bucket (Red DT1, Red DT2, Red DT3, …) writes its bucket file (D1, D2, D3, D4, …) into each country partition (DE, EN, FR).
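One of the bucketing advantages mentioned on the previous slide is sampling; as a sketch, a single bucket of the table defined above can be read without scanning the rest:
-- Reads only bucket 1 of the 31 DT buckets
SELECT * FROM ORC_SALES TABLESAMPLE ( BUCKET 1 OUT OF 31 ON DT )
WHERE COUNTRY = 'DE';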
16. Optimized Dynamic Sorted Partitioning
• Enable optimized sorted partitioning to fix small file creation
– Creates one reducer for each partition AND bucket
– If you have 5 partitions with 4 buckets you will have 20 reducers
• Hash conflicts mean that you can still have reducers handling more than one file
– Data is sorted by partition/bucket key
– ORCWriter closes files after encountering new keys
- only one open file at a time
- reduced memory needs
• Can be enabled with
set hive.optimize.sort.dynamic.partition=true;
17. Optimized Dynamic Sorted Partitioning
• Optimized sorted partitioning creates one reducer per partition * bucket
[Diagram] Five map tasks (one per input block) shuffle their rows to one reducer per partition (Red1–Red4), and each reducer writes a single output file per partition (Out1–Out4).
• Hash conflicts can happen even though there is one reducer for each partition
– This is the reason the data is sorted
– The reducer can close the ORC writer after each key
18. Miscellaneous
• A small number of partitions can lead to slow loads
• The solution is bucketing, which increases the number of reducers
• This can also help predicate pushdown
• Partition by country, bucket by client id, for example
• On a big system you may have to increase the max. number of reducers
set hive.exec.reducers.max=1000;
19. Manual Distribution
• Fine-grained control over distribution may be needed
• The DISTRIBUTE BY keyword allows control over the distribution algorithm
• For example DISTRIBUTE BY GENDER will split the data stream into two sub-streams
• Does not define the number of reducers
– Specify a fitting number with
set mapred.reduce.tasks=2;
• For dynamic partitioning include the partition key in the distribution
• Any additional subkeys result in multiple files per partition folder ( not unlike bucketing )
• For fast loads try to maximize the number of reducers in the cluster
20. Distribute By
SET MAPRED.REDUCE.TASKS = 8;
INSERT INTO ORC_SALES PARTITION (COUNTRY) SELECT * FROM DEL_SALES
DISTRIBUTE BY COUNTRY, GENDER;
[Diagram] Four map tasks read the input blocks and distribute each row by (COUNTRY, GENDER) to eight reducers, so each reducer writes the files for one country/gender combination (DE M, DE F, EN M, EN F, FR M, FR F, SP M, SP F) into the corresponding partition; a hash conflict can still map two combinations to the same reducer.
The number of reducers and the number of distinct distribution keys do not have to be identical, but matching them is good practice.
If you run into hash conflicts, changing the distribution key may help, for example mapping M/F to 0/1 as sketched below.
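A sketch of that change of distribution key (GENDER is assumed to be a column of DEL_SALES, as in the statement above):
INSERT INTO ORC_SALES PARTITION (COUNTRY)
SELECT * FROM DEL_SALES
DISTRIBUTE BY COUNTRY, CASE WHEN GENDER = 'M' THEN 0 ELSE 1 END;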
22. SORT BY for Predicate Pushdown ( PPD )
• ORC can skip stripes ( and 10k sub-blocks ) of data based on ORC footers
• Data can be skipped based on min/max values and bloom filters
• In warehouse environments data is normally sorted by date
• For initial loads or other predicates data can be sorted during load
• Two ways to sort data: ORDER BY ( global sort, slow ) and SORT BY ( sort per reducer )
– Use SORT BY for PPD: it is faster, and cross-file sorting does not help PPD anyway
• Can be combined with Distribution, Partitioning and Bucketing to optimize the effect (see the sketch below)
CREATE TABLE ORC_SALES
( CLIENTID INT, DT DATE, REV DOUBLE, PROFIT DOUBLE, COMMENT STRING )
STORED AS ORC;
INSERT INTO TABLE ORC_SALES SELECT * FROM DEL_SALES SORT BY DT;
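A sketch of combining distribution with SORT BY for the partitioned table from the earlier slides: each reducer receives one country and writes files that are sorted by DT.
INSERT INTO TABLE ORC_SALES PARTITION (COUNTRY)
SELECT CLIENTID, DT, REV, PROFIT, COMMENT, COUNTRY FROM DEL_SALES
DISTRIBUTE BY COUNTRY SORT BY DT;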
23. Sorting when Inserting into Table
SELECT * FROM DATA_ORC WHERE dt = 2015-02
[Diagram] Each partition (DE, EN) holds the files written by the map tasks (DE 1, DE 2, EN 1, EN 2), and within each file the rows are sorted by dt. Files are divided into stripes of x MB and blocks of 10000 rows; only the blocks whose min/max range covers 2015-02 have to be read. This requires sorting.
24. Checking Results
• Use hive --orcfiledump to check results in ORC files
hive --orcfiledump /apps/hive/warehouse/table/dt=3/00001_0
… Compression: ZLIB …
Stripe Statistics:
Stripe 1:
Column 0: count: 145000
Column 1: min: 1 max: 145000
…
Stripe 2:
Column 0: count: 144000
Column 1: min: 145001 max: 289000
…
Check the number of stripes and the number of rows: small stripes (5000 rows) indicate a memory problem during load.
Data should be sorted on your predicate columns.
25. Bloom Filters
• New feature in Hive 1.2
• A hashed bitmap index of the values in a column
• If the bit for hash(value) is 0, no row in the stripe can contain your value
• If the bit for hash(value) is 1, it is possible that the stripe contains your value
• Hive can skip stripes without the need to sort the data
• Useful when it is hard to sort by multiple columns
CREATE TABLE ORC_SALES ( ID INT, Client INT, DT INT … )
STORED AS ORC TBLPROPERTIES
("orc.bloom.filter.columns"="Client,DT");
The parameter needs a case-sensitive, comma-separated list of columns
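A query like the following (the client id is illustrative) can then skip every stripe whose bloom filter rules out the value, without the table being sorted by Client:
SELECT * FROM ORC_SALES WHERE Client = 4711;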
26. Bloom Filters
• Bloom Filters are good
– If you have multiple predicate columns
– If your predicate columns are not suitable for sorting ( URLs, hash values, … )
– If you cannot sort the data ( daily ingestion, filter by clientid )
• Bloom Filters are bad
– If every stripe contains your value
- Low cardinality fields like country
- Events that happen regularly ( a client buys something daily )
• Check if you successfully created a bloom filter index with orcfiledump
hive --orcfiledump --rowindex 3,4,5 /apps/hive/…
You only see bloom filter indexes if you specify the columns you want to see
27. Verify ORC indexes
• Switch on additional information like row counts going in/out of Tasks
SET HIVE.TEZ.PRINT.EXEC.SUMMARY = TRUE;
• Run query with/without Predicate Pushdown to compare row counts:
set hive.optimize.index.filter=false;
// run query
set hive.optimize.index.filter=true;
// run query
// compare results
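A sketch of such a comparison run against the ORC_SALES table from the earlier slides (the literal date is illustrative); with the index filter enabled, the record counts entering the tasks in the execution summary should drop:
SET HIVE.TEZ.PRINT.EXEC.SUMMARY = TRUE;
set hive.optimize.index.filter=false;
SELECT COUNT(*) FROM ORC_SALES WHERE DT = '2015-02-01';
set hive.optimize.index.filter=true;
SELECT COUNT(*) FROM ORC_SALES WHERE DT = '2015-02-01';
-- Compare the input record counts of the two runs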
28. Summary
• Partitioning and Predicate Pushdown can greatly enhance query performance
• Predicate Pushdown enhances Partitioning; it does not replace it
• Too many partitions lead to performance problems
• Dynamic Partition loading can lead to problems
• Normally Optimized Dynamic Sorted Partitioning solves these problems
• Sometimes manual distribution can be beneficial
• Carefully design your table layout and data loading
• Sorting is critical for effective predicate pushdown
• If sorting is not an option, bloom filters can be a solution
• Verify data layout with orcfiledump and debug information