Apache CarbonData & Spark Meetup
Apache Spark™ is a unified analytics engine for large-scale data processing.
CarbonData is a high-performance data solution that supports various data analytic scenarios, including BI analysis, ad-hoc SQL query, fast filter lookup on detail records, streaming analytics, and so on. CarbonData has been deployed in many enterprise production environments; in one of the largest deployments it supports queries on a single table with 3 PB of data (more than 5 trillion records) with response times under 3 seconds!
Transactional operations in Apache Hive: present and future (DataWorks Summit)
Apache Hive is an enterprise data warehouse built on top of Hadoop. Hive supports insert, update, delete, and merge SQL operations with transactional semantics, and read operations that run at snapshot isolation. The well-defined semantics of these operations in the face of failure and concurrency are critical to building robust applications on top of Apache Hive. In the past there were many preconditions to enabling these features, which meant giving up other functionality. The need to make these tradeoffs is rapidly being eliminated.
This talk will describe the intended use cases, architecture of the implementation, recent improvements and new features built for Hive 3.0. For example, bucketing transactional tables, while supported, is no longer required. Performance overhead of using transactional tables is nearly eliminated relative to identical non-transactional tables. We’ll also cover the Streaming Ingest API, which allows writing batches of events into a Hive table without using SQL.
Speaker
Eugene Koifman, Hortonworks, Principal Software Engineer
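The snapshot-isolation reads described above can be illustrated with a toy multi-version store. This is a conceptual sketch only, not Hive's actual ACID implementation (Hive tracks transactions with write IDs in the metastore and delta files); all names here are hypothetical:

```python
# Toy multi-version store illustrating snapshot-isolation reads:
# a reader pinned to a snapshot never sees later commits.
class MVStore:
    def __init__(self):
        self.versions = {}   # key -> list of (commit_ts, value)
        self.ts = 0          # logical clock

    def commit(self, writes):
        """Atomically commit a dict of writes at the next timestamp."""
        self.ts += 1
        for key, value in writes.items():
            self.versions.setdefault(key, []).append((self.ts, value))
        return self.ts

    def snapshot_read(self, key, snapshot_ts):
        """Return the latest value committed at or before snapshot_ts."""
        for commit_ts, value in reversed(self.versions.get(key, [])):
            if commit_ts <= snapshot_ts:
                return value
        return None

store = MVStore()
store.commit({"row1": "a"})            # commits at ts=1
snap = store.ts                        # a reader takes its snapshot here
store.commit({"row1": "b"})            # ts=2, invisible to that snapshot
assert store.snapshot_read("row1", snap) == "a"
assert store.snapshot_read("row1", store.ts) == "b"
```

The key property shown is that concurrent writers never disturb an in-flight reader, which is what lets Hive run reads at snapshot isolation alongside transactional writes.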
The document summarizes Apache Phoenix and its past, present, and future as a SQL interface for HBase. It describes Phoenix's architecture and key features like secondary indexes, joins, aggregations, and transactions. Recent releases added functional indexes, the Phoenix Query Server, and initial transaction support. Future plans include improvements to local indexes, integration with Calcite and Hive, and adding JSON and other SQL features. The document aims to provide an overview of Phoenix's capabilities and roadmap for building a full-featured SQL layer over HBase.
This document discusses enabling Apache Zeppelin and Spark for data science in the enterprise. It outlines current issues with Zeppelin and Spark integration including secure data access, multi-tenancy, and fault tolerance. It then describes how using Livy Server as a session management service solves these issues by providing secure, isolated sessions for each user. The document concludes by covering near term improvements like session management and long term goals like controlled sharing and model deployment.
Performance Update: When Apache ORC Met Apache Spark (DataWorks Summit)
Apache Spark 1.4 introduced support for Apache ORC. However, initially it did not take advantage of the full power of ORC. For instance, it was slow because ORC vectorization was not used, and predicate push-down was not supported on DATE types. Recently the Apache Spark community has started to use the latest Apache ORC, which includes new enhancements to address these limitations. In this talk, we show the result of integrating the latest Apache ORC and Apache Spark. We will also review the latest enhancements and roadmap.
Speakers:
Owen O'Malley, Co-founder & Technical Fellow, Hortonworks
Dongjoon Hyun, Staff Software Engineer, Hortonworks
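The predicate push-down mentioned above rests on a simple idea: ORC keeps min/max statistics per stripe, so a reader can skip whole stripes that cannot match the filter. A minimal sketch of that pruning logic (hypothetical data layout, not the actual ORC/Spark reader code):

```python
# Conceptual stripe pruning: keep only stripes whose [min, max] range
# can overlap the filter range [lo, hi].
def prune_stripes(stripes, lo, hi):
    return [s for s in stripes if not (s["max"] < lo or s["min"] > hi)]

stripes = [
    {"id": 0, "min": 1,   "max": 100},
    {"id": 1, "min": 101, "max": 200},
    {"id": 2, "min": 201, "max": 300},
]
# A filter like WHERE col BETWEEN 150 AND 160 needs only stripe 1.
kept = prune_stripes(stripes, 150, 160)
assert [s["id"] for s in kept] == [1]
```

Vectorization is the complementary optimization: instead of interpreting one row at a time, the reader decodes a whole batch of column values per call, which is what the new native path enables.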
Present and future of unified, portable and efficient data processing with Ap... (DataWorks Summit)
The world of big data involves an ever-changing field of players. Much as SQL stands as a lingua franca for declarative data analysis, Apache Beam aims to provide a portable standard for expressing robust, out-of-order data processing pipelines in a variety of languages across a variety of platforms. In a way, Apache Beam is a glue that can connect the big data ecosystem together; it enables users to "run any data processing pipeline anywhere."
This talk will briefly cover the capabilities of the Beam model for data processing and discuss its architecture, including the portability model. We’ll focus on the present state of the community and the current status of the Beam ecosystem. We’ll cover the state of the art in data processing and discuss where Beam is going next, including completion of the portability framework and the Streaming SQL. Finally, we’ll discuss areas of improvement and how anybody can join us on the path of creating the glue that interconnects the big data ecosystem.
Speaker
Davor Bonaci, V.P. of Apache Beam; Founder/CEO at Operiant
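The "robust, out-of-order data processing" at the heart of the Beam model can be sketched in a few lines: events carry their own event times, get assigned to windows, and a window is emitted only once the watermark passes its end. This is a conceptual illustration with made-up data, not the Beam API:

```python
# Toy event-time tumbling windows with a watermark, Beam-model style.
def tumbling_windows(events, window_size, watermark):
    """events: (event_time, value) pairs, possibly out of order."""
    windows = {}
    for event_time, value in events:
        start = (event_time // window_size) * window_size
        windows.setdefault(start, []).append(value)
    # Emit only windows the watermark has closed.
    return {start: sorted(vals)
            for start, vals in windows.items()
            if start + window_size <= watermark}

events = [(3, "a"), (12, "b"), (7, "c"), (11, "d")]  # "c" arrives late
done = tumbling_windows(events, window_size=10, watermark=10)
assert done == {0: ["a", "c"]}   # [0,10) is complete; [10,20) still open
```

The point of the model is that this grouping logic is expressed once and runs unchanged on any supported engine, which is what the portability framework makes possible.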
Apache Ambari is used by thousands of Hadoop Operators to manage the deployment, lifecycle, and automation of DevOps for Hadoop ecosystem projects. The Ambari engineering team will talk about improvements being made to the automation, metrics, logging, upgrade, and other core frameworks within Ambari as the project is being re-imagined.
Starting out, Apache Ambari installed a handful of Apache Hadoop ecosystem projects, on a few operating systems, and helped with the most basic Hadoop operational tasks. Today, the product manages over 20 different services, runs on multiple major operating systems and versions, and automates many of the most challenging Hadoop operational tasks in the most secure customer environments.
As part of this talk, the engineering team will walk you through what we've learned, the challenges we've overcome, and how the Apache Ambari community has changed the product to handle them. The future is fast approaching, and with it comes new on-premise and cloud deployment architectures. See how Apache Ambari is being re-imagined to handle these new challenges.
Speaker
Paul Codding, Product Management Director, Hortonworks
Oliver Szabo, Senior Software Engineer, Hortonworks
With the rise of the Internet of Things (IoT) and low-latency analytics, streaming data becomes ever more important. Surprisingly, one of the most promising approaches for processing streaming data is SQL. In this presentation, Julian Hyde shows how to build streaming SQL analytics that deliver results with low latency, adapt to network changes, and play nicely with BI tools and stored data. He also describes how Apache Calcite optimizes streaming queries, and the ongoing collaborations between Calcite and the Storm, Flink and Samza projects.
This talk was given by Julian Hyde at the Apache Big Data conference, Vancouver, on 2016/05/09.
Apache Spark 2.0 set the architectural foundations of structure in Spark, unified high-level APIs, structured streaming, and the underlying performant components like Catalyst Optimizer and Tungsten Engine. Since then the Spark community has continued to build new features and fix numerous issues in releases Spark 2.1 and 2.2.
Apache Spark 2.3 & 2.4 have made similar strides too. In this talk, we want to highlight some of the new features and enhancements, such as:
• Apache Spark and Kubernetes
• Native Vectorized ORC and SQL Cache Readers
• Pandas UDFs for PySpark
• Continuous Stream Processing
• Barrier Execution
• Avro/Image Data Source
• Higher-order Functions
Speaker: Robert Hryniewicz, AI Evangelist, Hortonworks
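Among the features listed above, the win from Pandas UDFs comes from a change in calling convention: instead of crossing the JVM/Python boundary once per row, Spark ships whole Arrow batches to a function that processes them in one call. A pure-Python sketch of the two conventions (not the PySpark API itself):

```python
# Row-at-a-time: one Python function call per row (the old UDF path).
def row_at_a_time(rows, f):
    return [f(r) for r in rows]

# Batched: one call per chunk of rows (the Pandas UDF idea).
def batched(rows, f_batch, batch_size=4):
    out = []
    for i in range(0, len(rows), batch_size):
        out.extend(f_batch(rows[i:i + batch_size]))
    return out

rows = list(range(10))
assert row_at_a_time(rows, lambda x: x + 1) == batched(
    rows, lambda batch: [x + 1 for x in batch])
```

With real Pandas UDFs the per-batch function receives a pandas Series, so it can also use vectorized NumPy operations, which is where the reported 3x to 100x speedups come from.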
The document summarizes the results of a study that evaluated the performance of different Platform-as-a-Service offerings for running SQL on Hadoop workloads. The study tested Amazon EMR, Google Cloud DataProc, Microsoft Azure HDInsight, and Rackspace Cloud Big Data using the TPC-H benchmark at various data sizes up to 1 terabyte. It found that at 1TB, lower-end systems had poorer performance. In general, HDInsight running on D4 instances and Rackspace Cloud Big Data on dedicated hardware had the best scalability and execution times. The study provides insights into the performance, scalability, and price-performance of running SQL on Hadoop in the cloud.
LLAP (Live Long and Process) is the newest query acceleration engine for Hive 2.0, which entered GA in 2017. LLAP brings a new set of trade-offs and optimizations that allow for efficient and secure multi-user BI systems on the cloud. In this talk, we discuss the specifics of building a modern BI engine within those boundaries, designed to be fast and cost-effective on the public cloud. The LLAP cache focuses on speeding up common BI query patterns on the cloud while avoiding most of the operational overhead of maintaining a caching layer: the cache is automatically coherent, performs intelligent eviction, and supports custom file formats from text to ORC. We also explore combining the cache with a transactional storage layer that supports online UPDATEs and DELETEs without full data reloads. LLAP by itself, as a relational data layer, extends the same caching and security advantages to any other data processing framework. We give an overview of such a hybrid system, where both Hive and Spark use LLAP to provide SQL query acceleration on the cloud with new, improved concurrent query support and production-ready tools and UI.
Speaker
Sergey Shelukin, Member of Technical Staff, Hortonworks
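The cache-with-eviction idea behind LLAP's data cache can be sketched with a simple LRU policy. This is only an illustration of the pattern; LLAP's actual eviction policy is more sophisticated, and the key names below are hypothetical:

```python
# Minimal LRU cache: recently touched entries survive, cold ones are evicted.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)    # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        self.entries[key] = value
        self.entries.move_to_end(key)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)   # evict least recently used

cache = LRUCache(2)
cache.put("stripe:0", b"...")
cache.put("stripe:1", b"...")
cache.get("stripe:0")                    # touch stripe 0
cache.put("stripe:2", b"...")            # evicts stripe 1, the coldest
assert cache.get("stripe:1") is None
assert cache.get("stripe:0") is not None
```

Coherence in LLAP is the harder part the talk covers: cached data must be invalidated when the underlying transactional storage changes, which a standalone LRU does not address.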
This talk will give an overview of two exciting releases: Apache HBase 2.0 and Phoenix 5.0. HBase provides a NoSQL column store on Hadoop for random, real-time read/write workloads. Phoenix provides SQL on top of HBase. HBase 2.0 contains a large number of features that were a long time in development, including rewritten region assignment, performance improvements (RPC, rewritten write pipeline, etc.), async clients and WAL, a C++ client, off-heaping of the memstore and other buffers, shading of dependencies, and many other fixes and stability improvements. We will go into detail on some of the most important improvements in the release, as well as the implications for users in terms of API and upgrade paths. Phoenix 5.0 is the next big Phoenix release because of its integration with HBase 2.0 and its many performance improvements in support of secondary indexes. It has many important new features such as encoded columns and Kafka and Hive integration. This session will also describe the use cases that HBase and Phoenix are a good architectural fit for.
Speaker: Alan Gates, Co-Founder, Hortonworks
Demand for cloud is through the roof. Cloud is turbocharging the enterprise IT landscape with agility and flexibility, and discussions of cloud architecture now dominate enterprise IT. Cloud enables many ephemeral, on-demand use cases, which is a game-changing opportunity for analytic workloads. But all of this comes with the challenge of running enterprise workloads in the cloud securely and with ease.
In this session, we will take you through Cloudbreak as a solution to simplify provisioning and managing enterprise workloads while providing an open and common experience for deploying workloads across clouds. We will discuss the challenges (and opportunities) to run enterprise workloads in the cloud and will go through how the latest from Cloudbreak enables enterprises to easily and securely run big data workloads. This includes deep-dive discussion on autoscaling, Ambari Blueprints, recipes, custom images, and enabling Kerberos -- which are all key capabilities for Enterprise deployments.
As a last topic we will discuss how we deployed and operate Cloudbreak as a Service internally which enables rapid cluster deployment for prototyping and testing purposes.
Speakers
Peter Darvasi, Cloudbreak Partner Engineer, Hortonworks
Richard Doktorics, Staff Engineer, Hortonworks
This document discusses Microsoft's use of Apache YARN for scale-out resource management. It describes how YARN is used to manage vast amounts of data and compute resources across many different applications and workloads. The document outlines some limitations of YARN and Microsoft's contributions to address those limitations, including Rayon for improved scheduling, Mercury and Yaq for distributed scheduling, and work on federation to scale YARN across multiple clusters. It provides details on the implementation and evaluation of these contributions through papers, JIRAs, and integration into Apache Hadoop releases.
Omid: scalable and highly available transaction processing for Apache Phoenix (DataWorks Summit)
Apache Phoenix is an OLTP and operational analytics engine for Hadoop. To ensure correctness of operations, Phoenix requires a transaction processor that guarantees all data accesses satisfy the ACID properties. Traditionally, Apache Phoenix has used the Apache Tephra transaction processing technology. Recently, we introduced into Phoenix support for Apache Omid, an open source transaction processor for HBase that is used at Yahoo at large scale.
A single Omid instance sustains hundreds of thousands of transactions per second and provides high availability at zero cost for mainstream processing. Omid, as well as Tephra, are now configurable choices for the Phoenix transaction processing backend, enabled by the newly introduced Transaction Abstraction Layer (TAL) API. The integration requires introducing many new features and operations to Omid and will become generally available in early 2018.
In this talk, we walk through the challenges of the project, focusing on the new use cases introduced by Phoenix and how we address them in Omid.
Speaker
Ohad Shacham, Yahoo Research, Oath, Senior Research Scientist
James Taylor
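The core of an Omid-style design is a centralized manager that hands out timestamps and detects write-write conflicts at commit time. A toy sketch of that protocol (conceptual only; the real Omid persists commit metadata in HBase and handles failover, which this omits):

```python
# Toy centralized transaction manager: a transaction aborts if any key in
# its write set was committed by another transaction after it began.
class TransactionManager:
    def __init__(self):
        self.clock = 0
        self.last_commit = {}     # key -> commit timestamp

    def begin(self):
        self.clock += 1
        return self.clock         # start timestamp

    def try_commit(self, start_ts, write_set):
        # Write-write conflict: someone committed one of our keys later.
        if any(self.last_commit.get(k, 0) > start_ts for k in write_set):
            return None           # abort
        self.clock += 1
        for k in write_set:
            self.last_commit[k] = self.clock
        return self.clock         # commit timestamp

tm = TransactionManager()
t1 = tm.begin()
t2 = tm.begin()
assert tm.try_commit(t1, {"row1"}) is not None   # t1 commits first
assert tm.try_commit(t2, {"row1"}) is None       # t2 conflicts, aborts
```

Because all conflict decisions go through one lightweight component, throughput is bounded only by how fast it can assign timestamps, which is how a single instance can sustain hundreds of thousands of transactions per second.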
Speed Up Your Queries with Hive LLAP Engine on Hadoop or in the Cloud (Gluent)
Hive was the first popular SQL layer built on Hadoop and has long been known as a heavyweight SQL engine suitable mainly for long-running batch jobs. This has greatly changed since Hive was announced to the world over 8 years ago. Hortonworks and the open source community have evolved Apache Hive into a fast, dynamic SQL on Hadoop engine capable of running highly concurrent query workloads over large datasets with sub-second response time.
The latest Hortonworks and Azure HDInsight platform versions fully support Hive with LLAP execution engine for production use. In this webinar, we will go through the architecture of Hive + LLAP engine and explain how it differs from previous Hive versions. We will then dive deeper and show how features like query vectorization and LLAP columnar caching bring further automatic performance improvements.
In the end, we will show how Gluent brings these new performance benefits to traditional enterprise database platforms via transparent data virtualization, allowing even your largest databases to benefit from all this without changing any application code. Join this webinar to learn about significant improvements in modern Hive architecture and how Gluent and Hive LLAP on Hortonworks or Azure HDInsight platforms can accelerate cloud migrations and greatly improve hybrid query performance!
ORC files were originally introduced in Hive, but have now migrated to an independent Apache project. This has sped up the development of ORC and simplified integrating ORC into other projects, such as Hadoop, Spark, Presto, and Nifi. There are also many new tools that are built on top of ORC, such as Hive’s ACID transactions and LLAP, which provides incredibly fast reads for your hot data. LLAP also provides strong security guarantees that allow each user to only see the rows and columns that they have permission for.
This talk will discuss the details of the ORC and Parquet formats and what the relevant tradeoffs are. In particular, it will discuss how to format your data and the options to use to maximize your read performance. In particular, we’ll discuss when and how to use ORC’s schema evolution, bloom filters, and predicate push down. It will also show you how to use the tools to translate ORC files into human-readable formats, such as JSON, and display the rich metadata from the file including the type in the file and min, max, and count for each column.
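ORC's bloom filters, mentioned above, let a reader skip row groups for point lookups: a few hash probes per value, no false negatives, and a tunable false-positive rate. A minimal self-contained sketch of the data structure (not ORC's actual implementation, which uses Murmur3 and per-row-group filter streams):

```python
# Minimal Bloom filter: set a few hashed bit positions per value; a lookup
# can answer "definitely absent" or "possibly present".
import hashlib

class BloomFilter:
    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0

    def _positions(self, value):
        for seed in range(self.num_hashes):
            digest = hashlib.sha256(f"{seed}:{value}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, value):
        for pos in self._positions(value):
            self.bits |= 1 << pos

    def might_contain(self, value):
        return all(self.bits & (1 << pos) for pos in self._positions(value))

bf = BloomFilter()
for v in ["alice", "bob"]:
    bf.add(v)
assert bf.might_contain("alice")
assert bf.might_contain("bob")
# Lookups of absent values almost always miss cleanly (false positives
# are possible but rare at this bit/item ratio).
```

In ORC this complements the min/max statistics: min/max prunes range predicates, while bloom filters prune equality predicates whose value happens to fall inside a stripe's range.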
This document discusses improvements to ORC support in Apache Spark 2.3. It describes previous issues with ORC performance and compatibility in Spark. The current approach in Spark 2.3 introduces a new native ORC file format that provides significantly better performance compared to the previous Hive ORC implementation. It allows configuring the ORC implementation and reader type. The document also demonstrates ORC usage in Spark and PySpark. Benchmark results show the native ORC reader provides up to 15x faster performance for scans and predicate pushdown. Future work items are discussed to further improve ORC support in Spark.
Sharing metadata across the data lake and streamsDataWorks Summit
The document discusses sharing metadata across data lakes and streams. It proposes unifying the Hive Metastore (HMS) and Schema Registry so that batch and streaming systems can see each other's metadata. This would reduce the number of separate metadata systems administrators need to maintain. The document also describes making the HMS standalone so it is not required to install Hive, enabling other systems like Spark and Impala to use HMS independently. Finally, it provides use cases where streaming applications need access to batch data in Hive tables and vice versa.
The document discusses Microsoft's Azure IoT platform for connecting, managing, and analyzing Internet of Things devices and data. It provides an overview of the key components of Azure IoT including Azure IoT Hub for device connectivity and management, analytics services like Azure Machine Learning and Stream Analytics, and connectivity to other Azure services. It also highlights aspects of Azure IoT like its open ecosystem, support for open standards, and global infrastructure running on Microsoft's Azure cloud.
Apache Spark 2.0 set the architectural foundations of structure in Spark, unified high-level APIs, structured streaming, and the underlying performant components like Catalyst Optimizer and Tungsten Engine. Since then the Spark community has continued to build new features and fix numerous issues in releases Spark 2.1 and 2.2.
Continuing forward in that spirit, the upcoming release of Apache Spark 2.3 has made similar strides too, introducing new features and resolving over 1300 JIRA issues. In this talk, we want to share with the community some salient aspects of soon-to-be-released Spark 2.3 features:
• New deployment mode: Kubernetes scheduler backend
• PySpark performance and enhancements
• New structured streaming execution engine: continuous processing
• Data source v2 APIs for both structured streaming and Spark SQL
• ML on structured streaming
• Image reader
• Stable codegen engine
• Spark History Server V2
• Native ORC support
• Vectorized ORC and SQL cache readers
• Stream-stream Join
• UDF enhancements
• Various SQL enhancements
Speakers
Xiao Li, Software Engineer, Databricks
Wenchen Fan, Software Engineer, Databricks
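Of the features listed above, stream-stream join is easy to misread as a batch join; the key mechanism is that each side buffers recent rows in state and every arriving row probes the other side's buffer. A conceptual sketch with made-up events (Spark's implementation additionally uses watermarks to bound the state, which this omits):

```python
# Toy stream-stream inner join over one interleaved arrival sequence.
def stream_stream_join(events, key_fn):
    """events: ordered ('L'|'R', row) pairs from the two input streams."""
    state = {"L": [], "R": []}
    joined = []
    for side, row in events:
        other = "R" if side == "L" else "L"
        state[side].append(row)                  # buffer this row
        for match in state[other]:               # probe the other side
            if key_fn(match) == key_fn(row):
                pair = (row, match) if side == "L" else (match, row)
                joined.append(pair)
    return joined

events = [("L", {"id": 1, "x": "a"}),
          ("R", {"id": 2, "y": "b"}),
          ("R", {"id": 1, "y": "c"})]   # matches the buffered left row
out = stream_stream_join(events, key_fn=lambda r: r["id"])
assert out == [({"id": 1, "x": "a"}, {"id": 1, "y": "c"})]
```

Without a watermark the buffers grow forever, which is why event-time bounds on both inputs are required for this join in production streaming systems.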
Deep Dive into Spark SQL with Advanced Performance Tuning with Xiao Li & Wenc... (Databricks)
Spark SQL is a highly scalable and efficient relational processing engine with easy-to-use APIs and mid-query fault tolerance. It is a core module of Apache Spark. Spark SQL can process, integrate and analyze data from diverse data sources (e.g., Hive, Cassandra, Kafka and Oracle) and file formats (e.g., Parquet, ORC, CSV, and JSON). This talk will dive into the technical details of Spark SQL spanning the entire lifecycle of a query execution. The audience will get a deeper understanding of Spark SQL and understand how to tune Spark SQL performance.
Overview of Apache Spark 2.3: What’s New? with Sameer Agarwal (Databricks)
Apache Spark 2.0 set the architectural foundations of Structure in Spark, Unified high-level APIs, Structured Streaming, and the underlying performant components like Catalyst Optimizer and Tungsten Engine. Since then the Spark community contributors have continued to build new features and fix numerous issues in releases Spark 2.1 and 2.2.
Continuing forward in that spirit, Apache Spark 2.3 has made similar strides too, introducing new features and resolving over 1300 JIRA issues. In this talk, we want to share with the community some salient aspects of Spark 2.3 features:
Kubernetes Scheduler Backend
PySpark Performance and Enhancements
Continuous Structured Streaming Processing
DataSource v2 APIs
Spark History Server Performance Enhancements
The document summarizes the major new features in Apache Spark 2.3, including continuous processing for low-latency streaming, Spark running on Kubernetes, improved PySpark performance using Pandas UDFs, machine learning capabilities on streaming data, and image reading support. Some key updates are continuous processing for streaming with latency of ~1 ms and at-least-once semantics, Spark's ability to run natively on Kubernetes clusters, and Pandas UDFs in PySpark providing a 3x to 100x performance boost over row-at-a-time UDFs. The speaker is the Spark 2.3 release manager and discusses these topics at the Spark Summit on June 6, 2018.
This document discusses real time analytics using Spark and Spark Streaming. It provides an introduction to Spark and highlights limitations of Hadoop for real-time analytics. It then describes Spark's advantages like in-memory processing and rich APIs. The document discusses Spark Streaming and the Spark Cassandra Connector. It also introduces DataStax Enterprise which integrates Spark, Cassandra and Solr to allow real-time analytics without separate clusters. Examples of streaming use cases and demos are provided.
This document summarizes new features in Apache Spark 2.3, including continuous processing mode for structured streaming, stream-stream joins, running Spark applications on Kubernetes, improved PySpark performance through vectorized UDFs and Pandas integration, and Databricks Delta for reliability and performance in data lakes. The author, an Apache Spark committer and PMC member, provides overviews and code examples of these features.
Web Scale Reasoning and the LarKC Project (Saltlux Inc.)
The LarKC project aims to build an integrated pluggable platform for large-scale reasoning. It supports parallelization, distribution, and remote execution. The LarKC platform provides a lightweight core that gives standardized interfaces for combining plug-in components, while the real work is done in the plug-ins. There are three types of LarKC users: those building plug-ins, configuring workflows, and using workflows.
What is Apache Kafka and What is an Event Streaming Platform? (Confluent)
Speaker: Gabriel Schenker, Lead Curriculum Developer, Confluent
Streaming platforms have emerged as a popular, new trend, but what exactly is a streaming platform? Part messaging system, part Hadoop made fast, part fast ETL and scalable data integration. With Apache Kafka® at the core, event streaming platforms offer an entirely new perspective on managing the flow of data. This talk will explain what an event streaming platform such as Apache Kafka is and some of the use cases and design patterns around its use—including several examples of where it is solving real business problems. New developments in this area such as KSQL will also be discussed.
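The abstraction that makes Kafka "part messaging system, part fast ETL" is a partitioned, append-only log that decoupled consumers read at their own offsets. A few lines of pure Python sketch that core idea (conceptual only, not the Kafka client API; the record contents are invented):

```python
# Toy append-only log: producers append, each consumer tracks its own offset.
class Log:
    def __init__(self):
        self.records = []

    def append(self, record):
        self.records.append(record)
        return len(self.records) - 1      # the record's offset

    def read(self, offset, max_records=10):
        return self.records[offset:offset + max_records]

log = Log()
log.append({"event": "page_view", "user": 1})
log.append({"event": "purchase", "user": 1})

# Two independent consumers hold different positions in the same log.
offsets = {"analytics": 0, "billing": 1}
assert len(log.read(offsets["analytics"])) == 2
assert log.read(offsets["billing"])[0]["event"] == "purchase"
```

Because the log retains records rather than deleting them on delivery, a new consumer (or KSQL query) can replay history from offset 0, which is what distinguishes an event streaming platform from a traditional message queue.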
Spark Summit EU talk by Miklos Christine: Paddling Up the Stream (Spark Summit)
This document provides lessons learned from using Apache Spark Streaming. It discusses key architecture decisions when using Spark Streaming vs Structured Streaming. It also outlines the top 5 support issues encountered, including type mismatches, errors finding leader offsets, issues with toDF functions, non-serializable tasks, and efficiently pushing JSON records. It provides solutions and references for each issue.
This introductory workshop is aimed at data analysts and data engineers new to Apache Spark and shows them how to analyze big data with Spark SQL and DataFrames.
In this partly instructor-led, partly self-paced workshop, we will cover Spark concepts and you’ll do labs for Spark SQL and DataFrames in Databricks Community Edition.
Toward the end, you’ll get a glimpse into newly minted Databricks Developer Certification for Apache Spark: what to expect & how to prepare for it.
* Apache Spark Basics & Architecture
* Spark SQL
* DataFrames
* Brief Overview of Databricks Certified Developer for Apache Spark
Apache® Spark™ 1.6 presented by Databricks co-founder Patrick Wendell (Databricks)
In this webcast, Patrick Wendell from Databricks will be speaking about Apache Spark's new 1.6 release.
Spark 1.6 will include (but is not limited to) a type-safe API called Dataset on top of DataFrames that leverages all the work in Project Tungsten for more robust and efficient execution (including memory management, code generation, and query optimization) [SPARK-9999], adaptive query execution [SPARK-9850], and unified memory management by consolidating cache and execution memory [SPARK-10000].
What's New in Apache Spark 2.3 & Why Should You Care (Databricks)
The Apache Spark 2.3 release marks a big step forward in speed, unification, and API support.
This talk will quickly walk through what’s new and how you can benefit from the upcoming improvements:
* Continuous Processing in Structured Streaming.
* PySpark support for vectorization, giving Python developers the ability to run native Python code fast.
* Native Kubernetes support, marrying the best of container orchestration and distributed data processing.
Incorta allows users to create materialized views (MVs) using Spark. It provides functions to read data from Incorta tables and save Spark DataFrames as MVs. The document discusses Spark integration with Incorta, including installing and configuring Spark, and creating the first MV using Spark Python APIs. It demonstrates reading data from Incorta and saving a DataFrame as a new MV.
The document summarizes a presentation given at Spark Summit 2016 in San Francisco. It discusses Apache Spark, noting that it is an open-source cluster computing framework that is 100x faster than Hadoop for large-scale data processing. It then discusses how a large video game company uses Spark SQL for data exploration and reporting, Spark Streaming for network performance monitoring, and Spark MLlib for building a recommendation system. These allow the company to gain insights from over 500 billion daily data points collected from their 67 million active players.
Streaming Big Data with Spark, Kafka, Cassandra, Akka & Scala (from webinar) (Helena Edelson)
This document provides an overview of streaming big data with Spark, Kafka, Cassandra, Akka, and Scala. It discusses delivering meaning in near-real time at high velocity and an overview of Spark Streaming, Kafka and Akka. It also covers Cassandra and the Spark Cassandra Connector as well as integration in big data applications. The presentation is given by Helena Edelson, a Spark Cassandra Connector committer and Akka contributor who is a Scala and big data conference speaker working as a senior software engineer at DataStax.
Monitor Apache Spark 3 on Kubernetes using Metrics and Plugins (Databricks)
This talk will cover some practical aspects of Apache Spark monitoring, focusing on measuring Apache Spark running on cloud environments, and aiming to empower Apache Spark users with data-driven performance troubleshooting. Apache Spark metrics allow extracting important information on Apache Spark’s internal execution. In addition, Apache Spark 3 has introduced an improved plugin interface extending the metrics collection to third-party APIs. This is particularly useful when running Apache Spark on cloud environments as it allows measuring OS and container metrics like CPU usage, I/O, memory usage, network throughput, and also measuring metrics related to cloud filesystems access. Participants will learn how to make use of this type of instrumentation to build and run an Apache Spark performance dashboard, which complements the existing Spark WebUI for advanced monitoring and performance troubleshooting.
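The counter/gauge pattern underlying the metrics system described above can be sketched in a few lines. These names are hypothetical illustrations of the pattern, not the actual Spark metrics or plugin API:

```python
# Minimal metrics registry: monotonically increasing counters plus gauges
# sampled lazily when a snapshot is taken.
import time

class MetricsRegistry:
    def __init__(self):
        self.counters = {}
        self.gauges = {}          # name -> zero-arg callable sampled on read

    def inc(self, name, delta=1):
        self.counters[name] = self.counters.get(name, 0) + delta

    def register_gauge(self, name, fn):
        self.gauges[name] = fn

    def snapshot(self):
        sample = dict(self.counters)
        sample.update({name: fn() for name, fn in self.gauges.items()})
        return sample

registry = MetricsRegistry()
registry.inc("bytes_read", 4096)
registry.inc("bytes_read", 4096)
registry.register_gauge("clock_positive", lambda: time.time() > 0)
snap = registry.snapshot()
assert snap["bytes_read"] == 8192
```

A Spark 3 plugin essentially registers such sources on the driver and executors, so OS- and cloud-level measurements appear alongside Spark's own metrics in one dashboard.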
Teaching Apache Spark: Demonstrations on the Databricks Cloud Platform (Yao Yao)
Yao Yao Mooyoung Lee
https://github.com/yaowser/learn-spark/tree/master/Final%20project
https://www.youtube.com/watch?v=IVMbSDS4q3A
https://www.academia.edu/35646386/Teaching_Apache_Spark_Demonstrations_on_the_Databricks_Cloud_Platform
https://www.slideshare.net/YaoYao44/teaching-apache-spark-demonstrations-on-the-databricks-cloud-platform-86063070/
Apache Spark is a fast, general engine for big data processing, with libraries for SQL, streaming, and advanced analytics.
Cloud Computing, Structured Streaming, Unified Analytics Integration, End-to-End Applications
Delivering Meaning In Near-Real Time At High Velocity In Massive Scale with A... (Helena Edelson)
Streaming Big Data: Delivering Meaning In Near-Real Time At High Velocity At Massive Scale with Apache Spark, Apache Kafka, Apache Cassandra, Akka and the Spark Cassandra Connector. Why this pairing of technologies and How easy it is to implement. Example application: https://github.com/killrweather/killrweather
Jumpstart on Apache Spark 2.2 on Databricks (Databricks)
In this introductory part lecture and part hands-on workshop, you’ll learn how to apply some of these new APIs using Databricks Community Edition. In particular, we will cover the following areas:
Agenda:
• Overview of Spark Fundamentals & Architecture
• What’s new in Spark 2.x
• Unified APIs: SparkSessions, SQL, DataFrames, Datasets
• Introduction to DataFrames, Datasets and Spark SQL
• Introduction to Structured Streaming Concepts
• Four Hands On Labs
You will use Databricks Community Edition, which will give you unlimited free access to a ~6 GB Spark 2.x local mode cluster. And in the process, you will learn how to create a cluster, navigate in Databricks, explore a couple of datasets, perform transformations and ETL, save your data as tables and parquet files, read from these sources, and analyze datasets using DataFrames/Datasets API and Spark SQL.
Level: Beginner to intermediate, not for advanced Spark users.
Prerequisite: You will need a laptop with the Chrome or Firefox browser installed and at least 8 GB of memory. Introductory or basic knowledge of Scala or Python is required, since the Notebooks will be in Scala; Python is optional.
Bio:
Jules S. Damji is an Apache Spark Community Evangelist with Databricks. He is a hands-on developer with over 15 years of experience and has worked at leading companies, such as Sun Microsystems, Netscape, LoudCloud/Opsware, VeriSign, Scalix, and ProQuest, building large-scale distributed systems. Before joining Databricks, he was a Developer Advocate at Hortonworks.
UiPath Community Day Amsterdam: Code, Collaborate, ConnectUiPathCommunity
Welcome to our third live UiPath Community Day Amsterdam! Come join us for a half-day of networking and UiPath Platform deep-dives, for devs and non-devs alike, in the middle of summer ☀.
📕 Agenda:
12:30 Welcome Coffee/Light Lunch ☕
13:00 Event opening speech
Ebert Knol, Managing Partner, Tacstone Technology
Jonathan Smith, UiPath MVP, RPA Lead, Ciphix
Cristina Vidu, Senior Marketing Manager, UiPath Community EMEA
Dion Mes, Principal Sales Engineer, UiPath
13:15 ASML: RPA as Tactical Automation
Tactical robotic process automation for solving short-term challenges, while establishing standard and re-usable interfaces that fit IT's long-term goals and objectives.
Yannic Suurmeijer, System Architect, ASML
13:30 PostNL: an insight into RPA at PostNL
Showcasing the solutions our automations have provided, the challenges we’ve faced, and the best practices we’ve developed to support our logistics operations.
Leonard Renne, RPA Developer, PostNL
13:45 Break (30')
14:15 Breakout Sessions: Round 1
Modern Document Understanding in the cloud platform: AI-driven UiPath Document Understanding
Mike Bos, Senior Automation Developer, Tacstone Technology
Process Orchestration: scale up and have your Robots work in harmony
Jon Smith, UiPath MVP, RPA Lead, Ciphix
UiPath Integration Service: connect applications, leverage prebuilt connectors, and set up customer connectors
Johans Brink, CTO, MvR digital workforce
15:00 Breakout Sessions: Round 2
Automation, and GenAI: practical use cases for value generation
Thomas Janssen, UiPath MVP, Senior Automation Developer, Automation Heroes
Human in the Loop/Action Center
Dion Mes, Principal Sales Engineer @UiPath
Improving development with coded workflows
Idris Janszen, Technical Consultant, Ilionx
15:45 End remarks
16:00 Community fun games, sharing knowledge, drinks, and bites 🍻
Top 12 AI Technology Trends For 2024.pdfMarrie Morris
Technology has become an irreplaceable component of our daily lives. The role of AI in technology revolutionizes our lives for the betterment of the future. In this article, we will learn about the top 12 AI technology trends for 2024.
"Hands-on development experience using wasm Blazor", Furdak Vladyslav.pptxFwdays
I will share my personal experience of full-time development on wasm Blazor
What difficulties our team faced: life hacks with Blazor app routing, whether it is necessary to write JavaScript, which technology stack and architectural patterns we chose
What conclusions we made and what mistakes we committed
Discovery Series - Zero to Hero - Task Mining Session 1DianaGray10
This session is focused on providing you with an introduction to task mining. We will go over different types of task mining and provide you with a real-world demo on each type of task mining in detail.
Keynote : Presentation on SASE TechnologyPriyanka Aash
Secure Access Service Edge (SASE) solutions are revolutionizing enterprise networks by integrating SD-WAN with comprehensive security services. Traditionally, enterprises managed multiple point solutions for network and security needs, leading to complexity and resource-intensive operations. SASE, as defined by Gartner, consolidates these functions into a unified cloud-based service, offering SD-WAN capabilities alongside advanced security features like secure web gateways, CASB, and remote browser isolation. This convergence not only simplifies management but also enhances security posture and application performance across global networks and cloud environments. Discover how adopting SASE can streamline operations and fortify your enterprise's digital transformation strategy.
TrustArc Webinar - Innovating with TRUSTe Responsible AI CertificationTrustArc
In a landmark year marked by significant AI advancements, it’s vital to prioritize transparency, accountability, and respect for privacy rights with your AI innovation.
Learn how to navigate the shifting AI landscape with our innovative solution TRUSTe Responsible AI Certification, the first AI certification designed for data protection and privacy. Crafted by a team with 10,000+ privacy certifications issued, this framework integrated industry standards and laws for responsible AI governance.
This webinar will review:
- How compliance can play a role in the development and deployment of AI systems
- How to model trust and transparency across products and services
- How to save time and work smarter in understanding regulatory obligations, including AI
- How to operationalize and deploy AI governance best practices in your organization
It's your unstructured data: How to get your GenAI app to production (and spe...Zilliz
So you've successfully built a GenAI app POC for your company -- now comes the hard part: bringing it to production. Aparavi addresses the challenges of AI projects while addressing data privacy and PII. Our Service for RAG helps AI developers and data scientists to scale their app to 1000s to millions of users using corporate unstructured data. Aparavi’s AI Data Loader cleans, prepares and then loads only the relevant unstructured data for each AI project/app, enabling you to operationalize the creation of GenAI apps easily and accurately while giving you the time to focus on what you really want to do - building a great AI application with useful and relevant context. All within your environment and never having to share private corporate data with anyone - not even Aparavi.
"Making .NET Application Even Faster", Sergey Teplyakov.pptxFwdays
In this talk we're going to explore performance improvement lifecycle, starting with setting the performance goals, using profilers to figure out the bottle necks, making a fix and validating that the fix works by benchmarking it. The talk will be useful for novice and seasoned .NET developers and architects interested in making their application fast and understanding how things work under the hood.
2. About Me
• Software Engineer at Databricks
• Apache Spark Committer and PMC Member
• Previously, IBM Master Inventor
• Spark SQL, Database Replication, Information Integration
• Ph.D. from the University of Florida
• GitHub: gatorsmile
3. Databricks Unified Analytics Platform
DATABRICKS WORKSPACE: Notebooks, Jobs, APIs, Models, Dashboards (end-to-end ML lifecycle)
DATABRICKS RUNTIME: Databricks Delta, ML Frameworks (reliable & scalable, simple & integrated)
DATABRICKS CLOUD SERVICE
4. Databricks Customers Across Industries
Financial Services, Healthcare & Pharma, Media & Entertainment, Technology, Public Sector, Retail & CPG, Consumer Services, Energy & Industrial IoT, Marketing & AdTech, Data & Analytics Services
6. Major Features in Spark 2.4
• Higher-order Functions
• Structured Streaming
• Built-in Source Improvements
• Spark on Kubernetes
• PySpark Improvements
• Native Avro Support
• Image Source
• Barrier Execution
• Scala 2.12
• Various SQL Features
8. Apache Spark: The First Unified Analytics Engine
Runtime: Delta
Spark Core Engine: Big Data Processing (ETL + SQL + Streaming), Machine Learning (MLlib + SparkR)
Uniquely combines Data & AI technologies
9. The cross?
Big data ecosystem: Map/Reduce, RDD, Project Tungsten, DataFrame-based APIs, 50+ Data Sources, Python/Java/R interfaces, Structured Streaming, Continuous Processing, ML Pipelines API, Pandas UDF, CaffeOnSpark, TensorFlowOnSpark, TensorFrames
AI/ML ecosystem: scikit-learn, pandas/numpy/scipy, LIBLINEAR, R, glmnet, xgboost, GraphLab, Caffe/PyTorch/MXNet, TensorFlow, Keras, Distributed TensorFlow, Horovod, tf.data, tf.transform, TF XLA
10. Project Hydrogen: Spark + AI
Adds gang scheduling to Apache Spark, embedding a distributed DL job as a Spark stage to simplify the distributed training workflow. [SPARK-24374]
• Launch all the tasks in a stage at the same time
• Provide enough information and tooling to embed distributed DL workloads
• Introduce a new fault-tolerance mechanism: when any task fails in the middle, Spark aborts all the tasks and restarts the stage
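The restart semantics can be illustrated with a toy, pure-Python simulation (this is not the Spark API; `run_barrier_stage` and `flaky_task` are hypothetical names): every task in the stage runs in the same attempt, and a single failure aborts and restarts the whole group together.

```python
def run_barrier_stage(tasks, max_attempts=3):
    """Toy model of barrier ("gang") scheduling: all tasks in the stage
    start together, and if any one fails, the whole stage is aborted
    and restarted from scratch."""
    for attempt in range(1, max_attempts + 1):
        results = []
        try:
            # All tasks are launched as one group for this attempt.
            for task in tasks:
                results.append(task(attempt))
            return results  # every task in the stage succeeded
        except RuntimeError:
            continue  # abort the stage and retry all tasks together
    raise RuntimeError("stage failed after %d attempts" % max_attempts)

def flaky_task(attempt):
    # Fails on the first attempt, succeeds afterwards.
    if attempt == 1:
        raise RuntimeError("task lost")
    return "ok"

print(run_barrier_stage([flaky_task, lambda attempt: "ok"]))  # ['ok', 'ok']
```

The point of the sketch is the all-or-nothing retry: unlike regular Spark stages, no individual task is retried on its own.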
14. Pandas UDFs
Spark 2.3 introduced vectorized Pandas UDFs that use Pandas to process data, with faster data serialization and execution through vectorized formats.
New in Spark 2.4: Grouped Aggregate Pandas UDFs [SPARK-22274] [SPARK-22239]
• The UDF takes a pandas.Series and returns a scalar
• returnType must be a primitive data type
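As a rough pure-Python sketch of what a grouped aggregate UDF computes (the real API registers the function via PySpark's `pandas_udf`; `grouped_agg` and the column names below are illustrative), each group's values are collected into one series and reduced to a scalar:

```python
def grouped_agg(rows, key, udf):
    """Apply a series -> scalar aggregate function per group, mirroring
    the semantics of a grouped aggregate Pandas UDF."""
    groups = {}
    for row in rows:
        groups.setdefault(row[key], []).append(row["v"])
    # One scalar result per group key.
    return {k: udf(series) for k, series in groups.items()}

rows = [
    {"id": 1, "v": 1.0},
    {"id": 1, "v": 2.0},
    {"id": 2, "v": 10.0},
]
mean = lambda s: sum(s) / len(s)  # the series -> scalar UDF
print(grouped_agg(rows, "id", mean))  # {1: 1.5, 2: 10.0}
```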
17. Other Notable Features
[SPARK-24396] Add Structured Streaming ForeachWriter for Python
[SPARK-23030] Use Arrow stream format for creating from and collecting Pandas DataFrames
[SPARK-24624] Support mixture of Python UDFs and Scalar Pandas UDFs
[SPARK-23874] Upgrade Apache Arrow to 0.10.0
• Allows adding BinaryType support [ARROW-2141]
[SPARK-25004] Add spark.executor.pyspark.memory limit
19. Flexible Streaming Sink
[SPARK-24565] Exposing the output rows of each micro-batch as a DataFrame
foreachBatch(f: Dataset[T] => Unit)
• Scala/Java/Python APIs in DataStreamWriter
• Reuse existing batch data sources
• Write to multiple locations
• Apply additional DataFrame operations
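The foreachBatch contract can be sketched in plain Python (illustrative only; the real API is `DataStreamWriter.foreachBatch`, and `run_stream`/`handler` below are made-up names): the user function receives each micro-batch plus its id and may write it anywhere, including to multiple sinks, after extra transformations.

```python
def run_stream(micro_batches, handler):
    """Drive the user handler once per micro-batch, mimicking the
    foreachBatch(f) contract: the handler gets the batch plus its id."""
    for batch_id, batch in enumerate(micro_batches):
        handler(batch, batch_id)

# Write every micro-batch to two "locations" (here, plain lists).
sink_a, sink_b = [], []

def handler(batch, batch_id):
    sink_a.extend(batch)                       # reuse an existing batch sink
    sink_b.extend(r for r in batch if r > 0)   # apply extra operations first

run_stream([[1, -2], [3]], handler)
print(sink_a, sink_b)  # [1, -2, 3] [1, 3]
```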
22. Structured Streaming
[SPARK-24662] Support for the LIMIT operator for streams in Append and Complete output modes
[SPARK-24763] Remove redundant key data from value in streaming aggregation
[SPARK-24156] Faster generation of output results and/or state cleanup with stateful operations (mapGroupsWithState, stream-stream join, streaming aggregation, streaming dropDuplicates) when there is no data in the input stream
[SPARK-24730] Support for choosing either the min or max watermark when there are multiple input streams in a query
23. Kafka Client 2.0.0
[SPARK-18057] Upgraded Kafka client version from 0.10.0.1 to 2.0.0
[SPARK-25005] Support "kafka.isolation.level" to read only committed records from Kafka topics that are written using a transactional producer
25. AVRO
• Apache Avro (https://avro.apache.org): a data serialization format, widely used in the Spark and Hadoop ecosystem, especially for Kafka-based data pipelines
• Spark-Avro package (https://github.com/databricks/spark-avro): lets Spark SQL read and write Avro data
• Inlining the Spark-Avro package [SPARK-24768]: a better experience for first-time users of Spark SQL and Structured Streaming, expected to further improve the adoption of Structured Streaming
26. AVRO
[SPARK-24811] from_avro/to_avro functions to read and write Avro data within a DataFrame instead of just files.
Example:
1. Decode the Avro data into a struct
2. Filter by column `favorite_color`
3. Encode the column `name` in Avro format
27. AVRO Performance
[SPARK-24800] Refactor Avro Serializer and Deserializer
Runtime comparison (lower is better): the external library converts AVRO Data -> Row -> InternalRow in both directions, while the native reader goes AVRO Data -> InternalRow directly, making it 2x faster.
Notebook: https://dbricks.co/AvroPerf
28. AVRO Logical Types
Avro upgrade from 1.7.7 to 1.8 [SPARK-24771]
Logical type support:
• Date [SPARK-24772]
• Decimal [SPARK-24774]
• Timestamp [SPARK-24773]
Options:
• compression
• ignoreExtension
• recordNamespace
• recordName
• avroSchema
Blog: Apache Avro as a Built-in Data Source in Apache Spark 2.4. https://t.co/jks7j27PxJ
30. Image schema data source
[SPARK-22666] Spark datasource for image format
• Partition discovery [new]
• Loading recursively from directory [new]
• dropImageFailures option
• Path wildcard matching
32.
• 30+ PB data
• 5,000 tables
• 20,000 temporary tables
• 50,000 jobs per day (Analytic Database)
• 15,000 jobs (SQL)
• 1,800+ nodes
33. Parquet
Update from 1.8.2 to 1.10.0 [SPARK-23972]
• PARQUET-1025: Support new min-max statistics in parquet-mr
• PARQUET-225: INT64 support for delta encoding
• PARQUET-1142: Enable parquet.filter.dictionary.enabled by default
Predicate pushdown:
• STRING [SPARK-23972] (20x faster)
• Decimal [SPARK-24549]
• Timestamp [SPARK-24718]
• Date [SPARK-23727]
• Byte/Short [SPARK-24706]
• StringStartsWith [SPARK-24638]
• IN [SPARK-17091]
34. ORC
The native vectorized ORC reader is GA!
• Native ORC reader is on by default [SPARK-23456]
• Update ORC from 1.4.1 to 1.5.2 [SPARK-24576]
• Turn on ORC filter push-down by default [SPARK-21783]
• Use the native ORC reader to read Hive serde tables by default [SPARK-22279]
• Avoid creating a reader for all ORC files [SPARK-25126]
35. CSV
• Option samplingRatio for schema inference [SPARK-23846]
• Option enforceSchema for throwing an exception when the user-specified schema doesn't match the CSV header [SPARK-23786]
• Option encoding for specifying the encoding of outputs [SPARK-19018]
Performance:
• Parsing only required columns in the CSV parser [SPARK-24244]
• Speed up count() for JSON and CSV [SPARK-24959]
• Better performance by switching to uniVocity 2.7.3 [SPARK-24945]
36. JSON
• Option encoding for specifying the encoding of inputs and outputs [SPARK-23723]
• Option dropFieldIfAllNull for ignoring columns of all-null values or empty array/struct during JSON schema inference [SPARK-23772]
• Option lineSep for defining the line separator to be used for parsing [SPARK-23765]
• Speed up count() for JSON and CSV [SPARK-24959]
37. JDBC
• Option queryTimeout for the number of seconds the driver will wait for a Statement object to execute [SPARK-23856]
• Option query for specifying the query to read from JDBC [SPARK-24423]
• Option pushDownFilters for specifying whether filter pushdown is allowed [SPARK-24288]
• Auto-correction of partition column names [SPARK-24327]
• Support Date/Timestamp in a JDBC partition column when reading in parallel from multiple workers [SPARK-22814]
• Add cascadeTruncate option to the JDBC datasource [SPARK-22880]
39. Higher-order Functions
Transformations on complex objects like arrays, maps and structures inside of columns.
tbl_nested
|-- key: long (nullable = false)
|-- values: array (nullable = false)
| |-- element: long (containsNull = false)
Why not a UDF? Expensive data serialization.
40. Higher-order Functions
tbl_nested
|-- key: long (nullable = false)
|-- values: array (nullable = false)
| |-- element: long (containsNull = false)
1) Check for element existence
SELECT EXISTS(values, e -> e > 30) AS v
FROM tbl_nested;
2) Transform an array
SELECT TRANSFORM(values, e -> e * e) AS v
FROM tbl_nested;
41. Higher-order Functions
3) Filter an array
SELECT FILTER(values, e -> e > 30) AS v
FROM tbl_nested;
4) Aggregate an array
SELECT REDUCE(values, 0, (value, acc) -> value + acc) AS sum
FROM tbl_nested;
Ref Databricks Blog: http://dbricks.co/2rUKQ1A
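A minimal pure-Python sketch of what these four SQL higher-order functions compute on a single `values` array (plain Python, not the Spark API; the sample data is made up):

```python
from functools import reduce

values = [10, 20, 40, 50]

# EXISTS(values, e -> e > 30)
v_exists = any(e > 30 for e in values)
# TRANSFORM(values, e -> e * e)
v_transform = [e * e for e in values]
# FILTER(values, e -> e > 30)
v_filter = [e for e in values if e > 30]
# REDUCE(values, 0, (value, acc) -> value + acc)
v_sum = reduce(lambda acc, value: acc + value, values, 0)

print(v_exists, v_transform, v_filter, v_sum)
# True [100, 400, 1600, 2500] [40, 50] 120
```

Inside Spark these run directly on the array column's internal representation, which is what avoids the per-row serialization cost a UDF would pay.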
42. Built-in Functions
[SPARK-23899] New or extended built-in functions for ArrayTypes and MapTypes
• 26 functions for ArrayTypes: transform, filter, reduce, array_distinct, array_intersect, array_union, array_except, array_join, array_max, array_min, ...
• 3 functions for MapTypes: map_from_arrays, map_from_entries, map_concat
Blog: Introducing New Built-in and Higher-Order Functions for Complex Data Types in Apache Spark 2.4. https://t.co/p1TRRtabJJ
44. When there are many columns/functions
[SPARK-16406] Analyzer: Improve performance of LogicalPlan.resolve
Adds an indexing structure to resolve(...) in order to find potential matches quicker.
[SPARK-23963] Properly handle large numbers of columns in queries on text-based Hive tables
Turns a list into an array, making a Hive table scan 10 times faster when there are a lot of columns.
[SPARK-23486] Analyzer: Cache the function name from the external catalog for lookupFunctions
45. Optimizer/Planner
[SPARK-23803] Support Bucket Pruning
Prune buckets that cannot satisfy equal-to predicates, to reduce the number of files to scan.
[SPARK-24802] Optimization Rule Exclusion
Disable a list of optimization rules in the optimizer.
[SPARK-4502] Nested schema pruning for Parquet tables
Column pruning on nested fields.
More: [SPARK-24339] [SPARK-23877] [SPARK-23957] [SPARK-25212] ...
46. SQL API Enhancement
[SPARK-24940] Coalesce and Repartition Hint for SQL Queries
INSERT OVERWRITE TABLE targetTable
SELECT /*+ REPARTITION(10) */ *
FROM sourceTable
[SPARK-19602] Support column resolution of fully qualified column names (3-part names, i.e., $DBNAME.$TABLENAME.$COLUMNNAME)
SELECT * FROM db1.t3
WHERE c1 IN (SELECT db1.t4.c2 FROM db1.t4 WHERE db1.t4.c3 = db1.t3.c2)
More: INTERSECT ALL, EXCEPT ALL, Pivot, GROUPING SET, precedence rules for set operations, and more [SPARK-21274] [SPARK-24035] [SPARK-24424] [SPARK-24966]
Blog: SQL Pivot: Converting Rows to Columns in Apache Spark 2.4. https://t.co/AgGKOcl2N4
47. Other Notable Changes in SQL
[SPARK-24596] Non-cascading Cache Invalidation
• Non-cascading mode for temporary views and Dataset.unpersist()
• Cascading mode for the rest
[SPARK-23880] Do not trigger any job for caching data
[SPARK-23510][SPARK-24312] Support Hive 2.2 and Hive 2.3 metastores
[SPARK-23711] Add fallback generator for UnsafeProjection
[SPARK-24626] Parallelize location size calculation in the Analyze Table command
48. Other Notable Features in Core
[SPARK-23243] Fix RDD.repartition() data correctness issue
[SPARK-24296] Support replicating blocks larger than 2 GB
[SPARK-24307] Support sending messages over 2GB from memory
50. Native Spark App in K8S
New Spark scheduler backend
• PySpark support [SPARK-23984]
• SparkR support [SPARK-24433]
• Client-mode support [SPARK-23146]
• Support for mounting K8S volumes [SPARK-23529]
Blog: What's New for Apache Spark on Kubernetes in the Upcoming Apache Spark 2.4 Release. https://t.co/uUpdUj2Z4B
52. Scala 2.12 Beta Support
[SPARK-14220] Build Spark against Scala 2.12
All the tests PASS! https://dbricks.co/Scala212Jenkins
57.
Nike: Enabling Data Scientists to bring their Models to Market
Facebook: Vectorized Query Execution in Apache Spark at Facebook
Tencent: Large-scale Malicious Domain Detection with Spark AI
IBM: In-memory storage Evolution in Apache Spark
Capital One: Apache Spark and Sights at Speed: Streaming, Feature management and Execution
Apple: Making Nested Columns as First Citizen in Apache Spark SQL
eBay: Managing Apache Spark workload and automatic optimizing
Google: Validating Spark ML Jobs
HP: Apache Spark for Cyber Security in big company
Microsoft: Apache Spark Serving: Unifying Batch, Streaming and RESTful Serving
ABSA Group: A Mainframe Data Source for Spark SQL and Streaming
Facebook: An efficient Facebook-scale shuffle service
IBM: Make your PySpark Data Fly with Arrow!
Facebook: Distributed Scheduling Framework for Apache Spark
Zynga: Automating Predictive Modeling at Zynga with PySpark
World Bank: Using Crowdsourced Images to Create Image Recognition Models and NLP to Augment Global Trade Indicators
JD.com: Optimizing Performance and Computing Resource
Microsoft: Azure Databricks with R: Deep Dive
ICL: Cooperative Task Execution for Apache Spark
Airbnb: Apache Spark at Airbnb
Netflix: Migrating to Apache Spark at Netflix
Microsoft: Infrastructure for Deep Learning in Apache Spark
Intel: Game playing using AI on Apache Spark
Facebook: Scaling Apache Spark @ Facebook
Lyft: Scaling Apache Spark on K8S at Lyft
Uber: Using Spark MLlib Models in a Production Training and Serving Platform
Apple: Bridging the gap between Datasets and DataFrames
Salesforce: The Rule of 10,000 Spark Jobs
Target: Lessons in Linear Algebra at Scale with Apache Spark
Workday: Lessons Learned Using Apache Spark