Wangda Tan and Mayank Bansal presented on YARN Node Labels. Node labels allow grouping nodes with similar hardware or software profiles and partitioning a cluster, so that applications can request nodes with specific resources and so that different organizations and workloads can share one cluster. Node partitions were added in Hadoop 2.6 to allow exclusive or non-exclusive access to labeled nodes. eBay and other companies use node labels to separate machine learning, licensed software, and organizational workloads. Future work includes adding node constraints, supporting node labels in the FairScheduler, and adding support in other Apache projects like Tez and Oozie.
Impala Architecture presentation at the Toronto Hadoop User Group, January 2014, by Mark Grover.
Event details:
http://www.meetup.com/TorontoHUG/events/150328602/
The document discusses architectural considerations for implementing clickstream analytics using Hadoop. It covers choices for data storage layers like HDFS vs HBase, data modeling including file formats and partitioning, data ingestion methods like Flume and Sqoop, available processing engines like MapReduce, Hive, Spark and Impala, and the need to sessionize clickstream data to analyze metrics like bounce rates and attribution.
Tez is the next-generation Hadoop query processing framework, written on top of YARN. Computation topologies in higher-level languages like Pig/Hive can be naturally expressed in the new graph dataflow model exposed by Tez. Multi-stage queries can be expressed as a single Tez job, resulting in lower latency for short queries and improved throughput for large-scale queries. MapReduce has been the workhorse for Hadoop, but its monolithic structure has made innovation slower. YARN separates resource management from application logic and thus enables the creation of Tez, a more flexible and generic framework for data processing, for the benefit of the entire Hadoop query ecosystem.
The document is a presentation about using Hadoop for analytic workloads. It discusses how Hadoop has traditionally been used for batch processing but can now also be used for interactive queries and business intelligence workloads using tools like Impala, Parquet, and HDFS. It summarizes performance tests showing Impala can outperform MapReduce for queries and scales linearly with additional nodes. The presentation argues Hadoop provides an effective solution for certain data warehouse workloads while maintaining flexibility, ease of scaling, and cost effectiveness.
Slides for a presentation on Cloudera Impala I gave at the DC/NOVA Java Users Group on 7/9/2013. It is a slightly updated set of slides from the ones I uploaded a few months ago on 4/19/2013. It covers version 1.0.1 and also includes some new slides on Hortonworks' Stinger Initiative.
Using Familiar BI Tools and Hadoop to Analyze Enterprise Networks – DataWorks Summit
This document discusses using Apache Drill and business intelligence (BI) tools to analyze network data stored in Hadoop. It provides examples of querying network packet captures and APIs directly using SQL without needing to transform or structure the data first. This allows gaining insights into issues like dropped sensor readings by analyzing packets alongside other data sources. The document concludes that SQL-on-Hadoop technologies allow network analysis to be done in a BI context more quickly than traditional specialized tools.
At the StampedeCon 2015 Big Data Conference: YARN enables Hadoop to move beyond just pure batch processing. With that, multiple workloads and tenants must now be able to share a single infrastructure for data processing. Features of the Capacity Scheduler enable resource sharing among multiple tenants in a fair manner, with elastic queues to maximize utilization. This talk focuses on the Capacity Scheduler features that enable multi-tenancy and on how resource sharing can be rebalanced using features like preemption.
Hortonworks Technical Workshop - Operational Best Practices Workshop – Hortonworks
The Hortonworks Data Platform is a key component of the Modern Data Architecture. Organizations rely on HDP for mission-critical business functions and expect the system to be constantly available and performant. In this session we will cover operational best practices for administering the Hortonworks Data Platform, including initial setup and ongoing maintenance.
A brave new world in mutable big data relational storage (Strata NYC 2017) – Todd Lipcon
The ever-increasing interest in running fast analytic scans on constantly updating data is stretching the capabilities of HDFS and NoSQL storage. Users want the fast online updates and serving of real-time data that NoSQL offers, as well as the fast scans, analytics, and processing of HDFS. Additionally, users are demanding that big data storage systems integrate natively with their existing BI and analytic technology investments, which typically use SQL as the standard query language of choice. This demand has led big data back to a familiar friend: relationally structured data storage systems.
Todd Lipcon explores the advantages of relational storage and reviews new developments, including Google Cloud Spanner and Apache Kudu, which provide a scalable relational solution for users who have too much data for a legacy high-performance analytic system. Todd explains how to address use cases that fall between HDFS and NoSQL with technologies like Apache Kudu or Google Cloud Spanner and how the combination of relational data models, SQL query support, and native API-based access enables the next generation of big data applications. Along the way, he also covers suggested architectures, the performance characteristics of Kudu and Spanner, and the deployment flexibility each option provides.
Cloudera Impala: A Modern SQL Engine for Hadoop – Cloudera, Inc.
Cloudera Impala is a modern SQL query engine for Apache Hadoop that provides high performance for both analytical and transactional workloads. It runs directly within Hadoop clusters, reading common Hadoop file formats and communicating with Hadoop storage systems. Impala uses a C++ implementation and runtime code generation for high performance compared to other Hadoop SQL query engines like Hive that use Java and MapReduce.
Presentations from the Cloudera Impala meetup on Aug 20 2013 (Cloudera, Inc.):
- Nong Li on Parquet+Impala and UDF support
- Henry Robinson on performance tuning for Impala
This document discusses modern data architecture and Apache Hadoop's role within it. It presents WANdisco and its Non-Stop Hadoop solution, which extends HDFS across multiple data centers to provide 100% uptime for Hadoop deployments. Non-Stop Hadoop uses WANdisco's patented distributed coordination engine to synchronize HDFS metadata across sites separated by wide area networks, enabling continuous availability of HDFS data and global HDFS deployments.
The document discusses Impala, a SQL query engine for Hadoop. It was created to enable low-latency queries on Hadoop data by using a new execution engine instead of MapReduce. Impala aims to provide high performance SQL queries on HDFS, HBase and other Hadoop data. It runs as a distributed service and queries are distributed to nodes and executed in parallel. The document covers Impala's architecture, query execution process, and its planner which partitions queries for efficient execution.
DeathStar: Easy, Dynamic, Multi-Tenant HBase via YARN – DataWorks Summit
DeathStar is a system that runs HBase on YARN to provide easy, dynamic multi-tenant HBase clusters via YARN. It allows different applications to run HBase in separate application-specific clusters on a shared HDFS and YARN infrastructure. This provides strict isolation between applications and enables dynamic scaling of clusters as needed. Some key benefits are improved cluster utilization, easier capacity planning and configuration, and the ability to start new clusters on demand without lengthy provisioning times.
Flexible and Real-Time Stream Processing with Apache Flink – DataWorks Summit
This document provides an overview of stream processing with Apache Flink. It discusses the rise of stream processing and how it enables low-latency applications and real-time analysis. It then describes Flink's stream processing capabilities, including pipelining of data, fault tolerance through checkpointing and recovery, and integration with batch processing. The document also summarizes Flink's programming model, state management, and roadmap for further development.
Cloudera Impala: The Open Source, Distributed SQL Query Engine for Big Data. The Cloudera Impala project is pioneering the next generation of Hadoop capabilities: the convergence of fast SQL queries with the capacity, scalability, and flexibility of an Apache Hadoop cluster. With Impala, the Hadoop ecosystem now has an open-source codebase that helps users query data stored in Hadoop-based enterprise data hubs in real time, using familiar SQL syntax.
This talk will begin with an overview of the challenges organizations face as they collect and process more data than ever before, followed by an overview of Impala from the user's perspective and a dive into Impala's architecture. It concludes with stories of how Cloudera's customers are using Impala and the benefits they see.
Apache Tez - Accelerating Hadoop Data Processing – hitesh1892
Apache Tez - A New Chapter in Hadoop Data Processing. Talk at Hadoop Summit, San Jose, 2014, by Bikas Saha and Hitesh Shah.
Apache Tez is a modern data processing engine designed for YARN on Hadoop 2. Tez aims to provide high performance and efficiency out of the box, across the spectrum of low latency queries and heavy-weight batch processing.
Data is the fuel for the idea economy, and being data-driven is essential for businesses to be competitive. HPE works with all the Hadoop partners to deliver packaged solutions that help organizations become data-driven. Join us in this session and you'll hear about HPE's enterprise-grade Hadoop solution, which encompasses the following:
-Infrastructure – Two industrialized solutions optimized for Hadoop: a standard solution with co-located storage and compute, and an elastic solution that lets you scale storage and compute independently to enable data sharing and prevent Hadoop cluster sprawl.
-Software – A choice of all popular Hadoop distributions and Hadoop ecosystem components like Spark and more, plus a comprehensive utility to manage your Hadoop cluster infrastructure.
-Services – HPE's data center experts have designed some of the largest Hadoop clusters in the world and can help you design the right Hadoop infrastructure to avoid performance issues and future-proof yourself against Hadoop cluster sprawl.
-Add-on solutions – Hadoop needs more to fill in the gaps. HPE partners with the right ecosystem partners to bring you solutions such as industrial-grade SQL on Hadoop with Vertica, data encryption with SecureData, the SAP ecosystem with SAP HANA Vora, multitenancy with BlueData, and object storage with Scality, among others.
Embrace Sparsity At Web Scale: Apache Spark MLlib Algorithms Optimization For... – Jen Aman
This document discusses optimizations made to Apache Spark MLlib algorithms to better support sparse data at large scale. It describes how KMeans, linear methods, and other ML algorithms were modified to use sparse vector representations to reduce memory usage and improve performance when working with sparse data, including optimizations made for clustering large, high-dimensional datasets. The optimizations allow these algorithms to be applied to much larger sparse datasets and high-dimensional problems than was previously possible with MLlib.
Spark and Deep Learning frameworks with distributed workloads – S N
The increasing complexity of learning algorithms and deep neural networks, combined with size of data and parameters, has made it challenging to exploit existing large-scale data processing pipelines for training and inference.
Approaches are outlined for preprocessing, training, inference, and deployment across datasets that leverage Spark, its extended ecosystem of libraries, and deep learning frameworks.
Cloudera Impala - Las Vegas Big Data Meetup Nov 5th 2014 – cdmaxime
Maxime Dumas gives a presentation on Cloudera Impala, which provides fast SQL query capability for Apache Hadoop. Impala allows interactive queries on Hadoop data in seconds rather than minutes by using a native MPP query engine instead of MapReduce. It offers benefits like SQL support, performance of 3-4x (and up to 90x) faster than MapReduce, and the flexibility to query existing Hadoop data without needing to migrate or duplicate it. The latest release, Impala 2.0, includes new features like window functions, subqueries, and spilling joins and aggregations to disk when memory is exhausted.
Deep learning has become widespread as frameworks such as TensorFlow and PyTorch have made it easy to onboard machine learning applications. However, while it is easy to start developing with these frameworks on your local developer machine, scaling up a model to run on a cluster and train on huge datasets is still challenging. Code and dependencies have to be copied to every machine and defining the cluster configurations is tedious and error-prone. In addition, troubleshooting errors and aggregating logs is difficult. Ad-hoc solutions also lack resource guarantees, isolation from other jobs, and fault tolerance.
To solve these problems and make scaling deep learning easy, we have made several enhancements to Hadoop and built an open-source deep learning platform called TonY. In this talk, Anthony and Keqiu will discuss new Hadoop features useful for deep learning, such as GPU resource support, and deep dive into TonY, which lets you run deep learning programs natively on Hadoop. We will discuss TonY's architecture and how it allows users to manage their deep learning jobs, acting as a portal from which to launch notebooks, monitor jobs, and visualize training results.
Impala is an open-source SQL query engine for Hadoop that is designed for performance. It utilizes standard Hadoop components like HDFS, HBase, and YARN. Impala allows users to issue SQL queries against data stored in HDFS and HBase and returns results very quickly. It exposes industry-standard interfaces that allow business intelligence tools to connect. Impala has added many new features in recent versions like analytic functions, subqueries, and support for joining and aggregating data that can spill to disk.
Scaling Deep Learning on Hadoop at LinkedIn – Anthony Hsu
Describes LinkedIn's journey in building a training orchestrator, TonY, for doing deep learning on Hadoop. For more details about TonY, visit https://github.com/linkedin/tony.
Apache Tajo is a big data warehouse system that runs on Hadoop. It supports SQL standards and features powerful distributed processing, advanced query optimization, and the ability to handle long-running queries (hours) and interactive analysis queries (100 milliseconds). Tajo uses a master-slave architecture with a TajoMaster managing metadata and slave TajoWorkers running query tasks in a distributed fashion.
Challenges of Building a First Class SQL-on-Hadoop Engine (Nicolas Morales):
Why and what is Big SQL 3.0?
Overview of the challenges
How we solved (some of) them
Architecture and interaction with Hadoop
Query rewrite
Query optimization
Future challenges
Challenges of Implementing an Advanced SQL Engine on Hadoop – DataWorks Summit
Big SQL 3.0 is IBM's SQL engine for Hadoop that addresses challenges of building a first class SQL engine on Hadoop. It uses a modern MPP shared-nothing architecture and is architected from the ground up for low latency and high throughput. Key challenges included data placement on Hadoop, reading and writing Hadoop file formats, query optimization with limited statistics, and resource management with a shared Hadoop cluster. The architecture utilizes existing SQL query rewrite and optimization capabilities while introducing new capabilities for statistics, constraints, and pushdown to Hadoop file formats and data sources.
A brief introduction to YARN: how and why it came into existence and how it fits together with this thing called Hadoop.
Focus given to architecture, availability, resource management and scheduling, migration from MR1 to MR2, job history and logging, interfaces, and applications.
This document provides tips and best practices for optimizing Apache Spark performance and resource allocation. It discusses:
- The components of Spark including executors, drivers, and tasks
- Configuring Spark on YARN and dynamic resource allocation
- Optimizing memory usage, avoiding data skew, and reducing serialization costs
- Best practices for Spark Streaming around microbatching, fault tolerance, and performance
- Recommendations for running Spark on cloud object stores like S3
The document outlines topics covered in "The Impala Cookbook" published by Cloudera. It discusses physical and schema design best practices for Impala, including recommendations for data types, partition design, file formats, and block size. It also covers estimating and managing Impala's memory usage, and how to identify the cause when queries exceed memory limits.
Using Big Data techniques to query and store OpenStreetMap data. Stephen Knox... – huguk
This talk will describe his research into using Hadoop to query and manage big geographic datasets, specifically OpenStreetMap (OSM). OSM is an "open-source" map of the world, growing at a fast rate and currently around 5TB of data. The talk will introduce OSM, detail some aspects of the research, and also discuss his experiences using the SpatialHadoop stack on Azure and Google Cloud.
This document discusses various MySQL performance metrics that are important to measure from within the database, operating system, and application. It outlines key InnoDB internal structures like the buffer pool and log system. Specific metrics that provide insight into buffer pool usage, page churn, and log writes are highlighted. Optimizing the working set size and ensuring sufficient free space in the log files are important factors for performance.
Low Latency Polyglot Model Scoring using Apache Apex – Apache Apex
This document discusses challenges in building low-latency machine learning applications and how Apache Apex can help address them. It introduces Apache Apex as a distributed streaming engine and describes how it allows embedding models from frameworks like R, Python, H2O through custom operators. It provides various data and model scoring patterns in Apex like dynamic resource allocation, checkpointing, exactly-once processing to meet SLAs. The document also demonstrates techniques like canary deployment, dormant models, model ensembles through logical overlays on the Apex DAG.
Hadoop is a software framework that allows for distributed processing of large data sets across clusters of computers. It uses MapReduce and HDFS to parallelize tasks, distribute data storage, and provide fault tolerance. Applications of Hadoop include log analysis, data mining, and machine learning using large datasets at companies like Yahoo!, Facebook, and The New York Times.
Big Data and Hadoop in Cloud - Leveraging Amazon EMR – Vijay Rayapati
This document discusses big data, Hadoop, and using Hadoop in the cloud via Amazon EMR. It provides an overview of big data and what Hadoop is, explains how Hadoop works and how it can help store and process large datasets. It then discusses how Amazon EMR can be used to deploy Hadoop clusters in the cloud without having to manage the underlying infrastructure, and provides instructions on setting up and using EMR. Finally, it discusses debugging, profiling, and performance tuning Hadoop jobs and EMR clusters.
Hadoop is a software framework that allows for distributed processing of large data sets across clusters of computers. It uses MapReduce as a programming model and HDFS for storage. MapReduce divides applications into parallelizable map and reduce tasks that process key-value pairs across large datasets in a reliable and fault-tolerant manner. HDFS stores multiple replicas of data blocks for reliability and allows processing of data in parallel on nodes where the data is located. Hadoop can reliably store and process petabytes of data on thousands of low-cost commodity hardware nodes.
2. About us
Wangda Tan
• Last 5+ years in the big data field: Hadoop, Open-MPI, etc.
• Now: Apache Hadoop Committer @Hortonworks, all in YARN; currently spending most of his time on resource scheduling enhancements.
• Past: Pivotal (PHD team, brought OpenMPI/GraphLab to YARN); Alibaba (ODPS team, a platform for distributed data mining)
Mayank Bansal
• Hadoop Architect @ eBay
• Apache Hadoop Committer
• Apache Oozie PMC member and Committer
• Current: Leading Hadoop core development for YARN and MapReduce @ eBay
• Past: Worked on schedulers / resource managers
4. Overview – Background
• Resources are managed by a hierarchy of queues.
• One queue can hold multiple applications.
• A container is the result of resource scheduling: a bundle of resources that can run process(es).
5. Overview – How to manage your workload by queues
• By organization: Marketing/Finance queues
• By workload: Interactive/Batch queues
• Hybrid: Finance-batch / Marketing-realtime queues (a minimal config sketch follows below)
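As a quick illustration, a minimal sketch of an organization-based layout in the same Capacity Scheduler property style used later on slide 14 (the queue names and the 60/40 split are hypothetical):
# Hypothetical two-organization queue layout
yarn.scheduler.capacity.root.queues=marketing,finance
yarn.scheduler.capacity.root.marketing.capacity=60
yarn.scheduler.capacity.root.finance.capacity=40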
6. Problems
• No way to specify particular resources on nodes (e.g. nodes with GPUs / SSDs)
• No way for an application to request nodes with specific resources
• Unable to partition a cluster based on organizations/workloads
7. What is a Node Label?
• A way to group nodes with a similar profile:
• Hardware
• Software
• Organization
• Workloads
• A way for an app to specify where to run in a cluster
8. Node Labels
• Types of node labels:
• Node partition (since 2.6)
• Node constraints (WIP)
• Node partition: one node belongs to exactly one partition; related to resource planning
• Node constraints: one node can be assigned multiple constraints; not related to resource planning
9. Understand by example (1)
• A real-world example of why node partitions are needed:
• Company-X has a big cluster; each of the Engineering/Marketing/Sales teams has a 33% share of the cluster.
[Diagram: one YARN RM managing the cluster, shared Engineering 33% / Marketing 33% / Sales 33%]
10. Understand by example (2)
• The Engineering and Marketing teams need GPU-equipped servers to do some visualization work, so they spent equal amounts of money to buy machines with GPUs.
• They want to share the new nodes 50:50.
• The Sales team spent $0 on these new nodes, so it cannot run anything on them at all.
[Diagram: the new GPU nodes, shared Engineering 50% / Marketing 50%]
11. Understand by example (3)
• Here the problem comes:
• If you create a separate YARN cluster, the ops team will be unhappy.
• If you add these new nodes to the original cluster, you cannot guarantee the engineering/marketing teams get preference on them.
[Diagram: YARN RM with the existing nodes plus the new nodes, placement undecided]
12. Understand by example (4)
• Node partitions solve this problem:
• Add a GPU partition, managed by the same YARN RM. The admin can specify different percentage shares in different partitions.
[Diagram: "Default" partition with Engineering 33% / Marketing 33% / Sales 33%; "GPU" partition with Engineering 50% / Marketing 50%]
13. Understand by example (5)
• Understanding non-exclusive node partitions:
• In the previous example, the "GPU" partition can be used only by the engineering and marketing teams.
• This is bad for resource utilization.
• The admin can define that if the "GPU" partition has idle resources, the sales queue can use it. But when engineering/marketing come back, resources allocated to the sales queue will be preempted.
• (Available since Hadoop 2.8; a CLI sketch follows below)
[Diagram: "Default" partition, guaranteed Engineering 33% / Marketing 33% / Sales 33%; "GPU" partition, guaranteed Engineering 50% / Marketing 50%, Sales 0% but able to use it when idle]
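A minimal sketch of how such a non-exclusive partition might be defined from the command line (Hadoop 2.8+ rmadmin syntax; the GPU label name comes from this example):
# Create the GPU partition; exclusive=false lets other queues use idle capacity
yarn rmadmin -addToClusterNodeLabels "GPU(exclusive=false)"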
14. Understand by example (6)
• Configuration for the above example (Capacity Scheduler):
# Original configuration, without node partitions
yarn.scheduler.capacity.root.queues=engineering,marketing,sales
yarn.scheduler.capacity.root.engineering.capacity=33
yarn.scheduler.capacity.root.marketing.capacity=33
yarn.scheduler.capacity.root.sales.capacity=33
# Queue ACLs for node partitions
yarn.scheduler.capacity.root.engineering.accessible-node-labels=GPU
yarn.scheduler.capacity.root.marketing.accessible-node-labels=GPU
# Capacities for node partitions
yarn.scheduler.capacity.root.engineering.accessible-node-labels.GPU.capacity=50
yarn.scheduler.capacity.root.marketing.accessible-node-labels.GPU.capacity=50
# (Optional) Applications running in this queue will run in the GPU partition by default
yarn.scheduler.capacity.root.engineering.default-node-label-expression=GPU
15. Understand by example (7)
• Queue capacities can differ per partition across the whole hierarchy (a config sketch follows below):
[Diagram: Without node partitions – Company (100%) → R&D (50%), Sales (50%); R&D → QE (20%), Dev (80%). With node partitions – the Default partition keeps that hierarchy, while the GPU partition has Company (100%) → R&D (100%), Sales (0%); R&D → QE (50%), Dev (50%).]
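A hedged sketch of how the GPU-partition side of this hierarchy might be expressed (hypothetical queue names rnd/sales/qe/dev; same property style as slide 14):
# Hierarchy: root -> rnd (qe, dev) and sales
yarn.scheduler.capacity.root.queues=rnd,sales
yarn.scheduler.capacity.root.rnd.queues=qe,dev
# GPU-partition capacities: R&D gets everything, Sales nothing
yarn.scheduler.capacity.root.rnd.accessible-node-labels=GPU
yarn.scheduler.capacity.root.rnd.accessible-node-labels.GPU.capacity=100
yarn.scheduler.capacity.root.sales.accessible-node-labels.GPU.capacity=0
yarn.scheduler.capacity.root.rnd.qe.accessible-node-labels.GPU.capacity=50
yarn.scheduler.capacity.root.rnd.dev.accessible-node-labels.GPU.capacity=50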
16. Architecture
• Central piece: NodeLabelsManager
• Stores labels and their attributes
• Stores node-to-labels mappings
• It can be read/written by:
• CLI and REST API (which we call centralized configuration; a CLI sketch follows below)
• OR the NM can retrieve the labels on its node and send them to the RM (we call it distributed configuration)
• The scheduler uses the node labels manager to make decisions: it receives resource requests from the AM and returns allocated containers to the AM.
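A minimal sketch of the centralized-configuration CLI (hostnames are placeholders; the GPU label is from the running example):
# Map two nodes into the GPU partition via the RM admin CLI
yarn rmadmin -replaceLabelsOnNode "host1.example.com=GPU host2.example.com=GPU"
# Inspect the labels known to the cluster
yarn cluster --list-node-labels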
17. Case study (1) – uses node label
• Use node labels to create isolated environments for batch/interactive/low-latency workloads.
• Deploy YARN containers onto compute nodes that are optimized and accelerated for each workload:
• Using RDMA-enabled nodes to accelerate shuffle.
• Using powerful CPU nodes to accelerate compression.
• It is possible to DOUBLE THE DENSITY of today's traditional Hadoop cluster with substantially better price/performance.
• Create a converged system that allows Hadoop / Vertica / Spark and other stacks to share a common pool of data.
19. Case study (3) – eBay cluster uses node labels
• Separate machine learning workloads from regular workloads
• Use node labels to confine licensed software to certain machines
• Enable GPU workloads
• Separate organizational workloads
20. Case study (4) – Slider use cases
• HBase region servers run on nodes with SSDs (non-exclusive).
• The HBase master gets its nodes exclusively to itself.
• MapReduce jobs run on the other nodes, and they can use idle resources of the region server nodes.
[Diagram: Slider launches the HBase Master into an exclusive partition and the Region Servers into a non-exclusive partition; a user-submitted MR AM runs its tasks in the Default partition and on idle region server nodes]
21. Status – Done parts of Node Labels
• Exclusive / non-exclusive node partition support in Capacity Scheduler (√)
• User-limits
• Preemption
• Both now respect node partitions!
• Centralized configuration via CLI/REST API (√)
• Distributed configuration in the Node Manager's config/script (√; a property sketch follows below)
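A hedged sketch of the distributed-configuration alternative (property names as found in Hadoop 2.8; treat the exact keys as an assumption to verify against your version):
# yarn-site.xml on the RM: let NMs report their own partition
yarn.node-labels.enabled=true
yarn.node-labels.configuration-type=distributed
# yarn-site.xml on each GPU node: the partition this NM reports
yarn.nodemanager.node-labels.provider=config
yarn.nodemanager.node-labels.provider.configured-node-partition=GPU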
23. Status – Other Apache projects supporting node labels
• The following projects already support node labels (a client-side sketch follows below):
• Spark (SPARK-6470)
• MapReduce (MAPREDUCE-6304)
• Slider (SLIDER-81)
• Several other applications, via Slider
• Ambari (AMBARI-10063)
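As a hedged illustration of what that support looks like from the client side (property names per MAPREDUCE-6304 and SPARK-6470; jar names are placeholders, and availability depends on your Hadoop/Spark versions):
# MapReduce: run the job's containers in the GPU partition
hadoop jar hadoop-mapreduce-examples.jar pi -Dmapreduce.job.node-label-expression=GPU 10 1000
# Spark on YARN: place the AM and executors in the GPU partition
spark-submit --master yarn \
  --conf spark.yarn.am.nodeLabelExpression=GPU \
  --conf spark.yarn.executor.nodeLabelExpression=GPU \
  app.jar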
24. Future of Node Label
• Support constraints (YARN-3409)
• Orthogonal to partitions: they describe attributes of a node's hardware/software, used only for affinity.
• Some examples of constraints: glibc version, JDK version, type of CPU (x86_64/i686), physical or virtualized.
• With this, an application can ask for resources such as: glibc.version >= 2.20 && JDK.version >= 8u20 && x86_64
• Support node labels in FairScheduler (YARN-2497)
• Support in more projects: Tez, Oozie, …
In short: a node partition splits a big cluster into several smaller sub-clusters according to hardware / usage, and each partition can have different capacities across the queue hierarchy.