The document discusses big data and distributed computing. It explains that big data refers to large volumes of unstructured and semi-structured data that are too big for traditional databases to handle. Distributed computing uses multiple computers connected via a network to process large datasets in parallel. Hadoop is an open-source framework for distributed computing that uses MapReduce and HDFS for parallel processing and storage across clusters. HDFS stores data redundantly across nodes for fault tolerance.
What is HDFS | Hadoop Distributed File System | Edureka
( Hadoop Training: https://www.edureka.co/hadoop )
This What is HDFS PPT will help you understand the Hadoop Distributed File System and its features, along with a practical demonstration. In this What is HDFS PPT, we will cover:
1. What is DFS and Why Do We Need It?
2. What is HDFS?
3. HDFS Architecture
4. HDFS Replication Factor
5. HDFS Commands Demonstration on a Production Hadoop Cluster
Check our complete Hadoop playlist here: https://goo.gl/hzUO0m
The document provides an introduction and overview of MongoDB, including what NoSQL is, the different types of NoSQL databases, when to use MongoDB, its key features like scalability and flexibility, how to install and use basic commands like creating databases and collections, and references for further learning.
Introduction To Hadoop | What Is Hadoop And Big Data | Hadoop Tutorial For Beginners | Simplilearn
This presentation about Hadoop will help you learn the basics of Hadoop and its components. First, you will see what Big Data is and the significant challenges in it. Then, you will understand how Hadoop solved those challenges. You will have a glance at the history of Hadoop, what Hadoop is, the different companies using Hadoop, the applications of Hadoop in different companies, etc. Finally, you will learn the three essential components of Hadoop – HDFS, MapReduce, and YARN, along with their architecture. Now, let us get started with Introduction to Hadoop.
Below topics are explained in this Hadoop presentation:
1. Big Data and its challenges
2. Hadoop as a solution
3. History of Hadoop
4. What is Hadoop
5. Applications of Hadoop
6. Components of Hadoop
7. Hadoop Distributed File System
8. Hadoop MapReduce
9. Hadoop YARN
What is this Big Data Hadoop training course about?
The Big Data Hadoop and Spark developer course has been designed to impart in-depth knowledge of Big Data processing using Hadoop and Spark. The course is packed with real-life projects and case studies to be executed in the CloudLab.
What are the course objectives?
This course will enable you to:
1. Understand the different components of Hadoop ecosystem such as Hadoop 2.7, Yarn, MapReduce, Pig, Hive, Impala, HBase, Sqoop, Flume, and Apache Spark
2. Understand Hadoop Distributed File System (HDFS) and YARN as well as their architecture, and learn how to work with them for storage and resource management
3. Understand MapReduce and its characteristics, and assimilate some advanced MapReduce concepts
4. Get an overview of Sqoop and Flume and describe how to ingest data using them
5. Create database and tables in Hive and Impala, understand HBase, and use Hive and Impala for partitioning
6. Understand different types of file formats, Avro Schema, using Avro with Hive and Sqoop, and schema evolution
7. Understand Flume, Flume architecture, sources, flume sinks, channels, and flume configurations
8. Understand HBase, its architecture, data storage, and working with HBase. You will also understand the difference between HBase and RDBMS
9. Gain a working knowledge of Pig and its components
10. Do functional programming in Spark
11. Understand Resilient Distributed Datasets (RDDs) in detail
12. Implement and build Spark applications
13. Gain an in-depth understanding of parallel processing in Spark and Spark RDD optimization techniques
14. Understand the common use-cases of Spark and the various interactive algorithms
15. Learn Spark SQL, and creating, transforming, and querying DataFrames
Learn more at https://www.simplilearn.com/big-data-and-analytics/introduction-to-big-data-and-hadoop-certification-training.
This presentation about HBase will help you understand what HBase is, what the applications of HBase are, how HBase differs from an RDBMS, what HBase storage is, and what the architectural components of HBase are; at the end, we will also look at some of the HBase commands using a demo. HBase is an essential part of the Hadoop ecosystem. It is a column-oriented database management system derived from Google’s NoSQL database Bigtable that runs on top of HDFS. After watching this video, you will know how to store and process large datasets using HBase. Now, let us get started and understand HBase and what it is used for.
Below topics are explained in this HBase presentation:
1. What is HBase?
2. HBase Use Case
3. Applications of HBase
4. HBase vs RDBMS
5. HBase Storage
6. HBase Architectural Components
What is this Big Data Hadoop training course about?
Simplilearn’s Big Data Hadoop training course lets you master the concepts of the Hadoop framework and prepares you for Cloudera’s CCA175 Big Data certification. The Big Data Hadoop and Spark developer course has been designed to impart in-depth knowledge of Big Data processing using Hadoop and Spark. The course is packed with real-life projects and case studies to be executed in the CloudLab.
Learn more at https://www.simplilearn.com/big-data-and-analytics/big-data-and-hadoop-training
Hadoop MapReduce is an open-source framework for distributed processing of large datasets across clusters of computers. It allows parallel processing of large datasets by dividing the work across nodes. The framework handles scheduling, fault tolerance, and distribution of work. MapReduce consists of two main phases: the map phase, where the data is processed as key-value pairs, and the reduce phase, where the outputs of the map phase are aggregated. It provides an easy programming model for developers to write distributed applications for large-scale processing of structured and unstructured data.
The Hadoop Distributed File System (HDFS) is the primary data storage system used by Hadoop applications. It employs a Master and Slave architecture with a NameNode that manages metadata and DataNodes that store data blocks. The NameNode tracks locations of data blocks and regulates access to files, while DataNodes store file blocks and manage read/write operations as directed by the NameNode. HDFS provides high-performance, scalable access to data across large Hadoop clusters.
Delta Lake OSS: Create reliable and performant Data Lake by Quentin Ambard | Paris Data Engineers!
Delta Lake is an open source framework that lives on top of Parquet in your data lake to provide reliability and performance. It was open-sourced by Databricks this year and is gaining traction to become the de facto format for data lakes.
We’ll see everything Delta Lake can do for your data, with ACID transactions, DDL operations, schema enforcement, batch and streaming support, and more!
What Is Apache Spark? | Introduction To Apache Spark | Apache Spark Tutorial ... | Simplilearn
This presentation about Apache Spark covers all the basics that a beginner needs to know to get started with Spark. It covers the history of Apache Spark, what Spark is, and the difference between Hadoop and Spark. You will learn the different components in Spark and how Spark works with the help of its architecture. You will understand the different cluster managers on which Spark can run. Finally, you will see the various applications of Spark and a use case on Conviva. Now, let's get started with what Apache Spark is.
Below topics are explained in this Spark presentation:
1. History of Spark
2. What is Spark
3. Hadoop vs Spark
4. Components of Apache Spark
5. Spark architecture
6. Applications of Spark
7. Spark use case
What are the course objectives?
Simplilearn’s Apache Spark and Scala certification training is designed to:
1. Advance your expertise in the Big Data Hadoop Ecosystem
2. Help you master essential Apache Spark skills, such as Spark Streaming, Spark SQL, machine learning programming, GraphX programming, and Spark shell scripting
3. Help you land a Hadoop developer job requiring Apache Spark expertise by giving you a real-life industry project coupled with 30 demos
What skills will you learn?
By completing this Apache Spark and Scala course you will be able to:
1. Understand the limitations of MapReduce and the role of Spark in overcoming these limitations
2. Understand the fundamentals of the Scala programming language and its features
3. Explain and master the process of installing Spark as a standalone cluster
4. Develop expertise in using Resilient Distributed Datasets (RDD) for creating applications in Spark
5. Master Structured Query Language (SQL) using SparkSQL
6. Gain a thorough understanding of Spark streaming features
7. Master and describe the features of Spark ML programming and GraphX programming
Who should take this Scala course?
1. Professionals aspiring for a career in the field of real-time big data analytics
2. Analytics professionals
3. Research professionals
4. IT developers and testers
5. Data scientists
6. BI and reporting professionals
7. Students who wish to gain a thorough understanding of Apache Spark
Learn more at https://www.simplilearn.com/big-data-and-analytics/apache-spark-scala-certification-training
This presentation describes how to efficiently load data into Hive. I cover partitioning, predicate pushdown, ORC file optimization and different loading schemes
This document outlines Apache Flume, a distributed system for collecting large amounts of log data from various sources and transporting it to a centralized data store such as Hadoop. It describes the key components of Flume including agents, sources, sinks and flows. It explains how Flume provides reliable, scalable, extensible and manageable log aggregation capabilities through its node-based architecture and horizontal scalability. An example use case of using Flume for near real-time log aggregation is also briefly mentioned.
Graph databases use graph structures to represent and store data, with nodes connected by edges. They are well-suited for interconnected data. Unlike relational databases, graph databases allow for flexible schemas and querying of relationships. Common uses of graph databases include social networks, knowledge graphs, and recommender systems.
Apache Hive is a data warehouse infrastructure tool built on Hadoop that allows users to query and analyze large datasets stored in Hadoop using SQL. It works by translating SQL queries into MapReduce jobs that process the data. Hive provides a metastore to store metadata about the schema and HDFS location of tables, and uses a query language called HiveQL that is similar to SQL. It allows users to run analytics on large datasets without needing to write MapReduce code directly.
Apache Spark is an in-memory data processing solution that can work with existing data sources like HDFS and can make use of your existing computation infrastructure like YARN or Mesos. This talk covers a basic introduction to Apache Spark and its various components, such as MLlib, Shark, and GraphX, with a few examples.
This document provides an overview of big data and Hadoop. It discusses why Hadoop is useful for extremely large datasets that are difficult to manage in relational databases. It then summarizes what Hadoop is, including its core components like HDFS, MapReduce, HBase, Pig, Hive, Chukwa, and ZooKeeper. The document also outlines Hadoop's design principles and provides examples of how some of its components like MapReduce and Hive work.
The document summarizes Apache Hadoop, an open-source software framework for distributed storage and processing of large datasets across clusters of computers. It describes the key components of Hadoop including the Hadoop Distributed File System (HDFS) which stores data reliably across commodity hardware, and the MapReduce programming model which allows distributed processing of large datasets in parallel. The document provides an overview of HDFS architecture, data flow, fault tolerance, and other aspects to enable reliable storage and access of very large files across clusters.
Hadoop is the popular open source implementation of MapReduce, a powerful tool designed for deep analysis and transformation of very large data sets. Hadoop enables you to explore complex data, using custom analyses tailored to your information and questions. Hadoop is the system that allows unstructured data to be distributed across hundreds or thousands of machines forming shared-nothing clusters, and the execution of Map/Reduce routines to run on the data in that cluster. Hadoop has its own filesystem which replicates data to multiple nodes to ensure that if one node holding data goes down, there are at least 2 other nodes from which to retrieve that piece of information. This protects the data availability from node failure, something which is critical when there are many nodes in a cluster (aka RAID at a server level).
What is Hadoop? The data are stored in a relational database on your desktop computer and this desktop computer has no problem handling this load. Then your company starts growing very quickly, and that data grows to 10GB, and then 100GB, and you start to reach the limits of your current desktop computer. So you scale up by investing in a larger computer, and you are then OK for a few more months. When your data grows to 10TB, and then 100TB, you are fast approaching the limits of that computer. Moreover, you are now asked to feed your application with unstructured data coming from sources like Facebook, Twitter, RFID readers, sensors, and so on. Your management wants to derive information from both the relational data and the unstructured data, and wants this information as soon as possible. What should you do? Hadoop may be the answer!
Hadoop is an open source project of the Apache Foundation. It is a framework written in Java, originally developed by Doug Cutting, who named it after his son's toy elephant. Hadoop uses Google's MapReduce and Google File System technologies as its foundation. It is optimized to handle massive quantities of data, which could be structured, unstructured or semi-structured, using commodity hardware, that is, relatively inexpensive computers. This massive parallel processing is done with great performance. However, it is a batch operation handling massive quantities of data, so the response time is not immediate. As of Hadoop version 0.20.2, updates are not possible, but appends will be possible starting in version 0.21. Hadoop replicates its data across different computers, so that if one goes down, the data are processed on one of the replicated computers.
Hadoop is not suitable for OnLine Transaction Processing workloads where data are randomly accessed on structured data like a relational database. Hadoop is also not suitable for OnLine Analytical Processing or Decision Support System workloads where data are sequentially accessed on structured data like a relational database, to generate reports that provide business intelligence. Hadoop is used for Big Data. It complements OnLine Transaction Processing and OnLine Analytical Processing.
This document provides an overview of Hadoop architecture. It discusses how Hadoop uses MapReduce and HDFS to process and store large datasets reliably across commodity hardware. MapReduce allows distributed processing of data through mapping and reducing functions. HDFS provides a distributed file system that stores data reliably in blocks across nodes. The document outlines components like the NameNode, DataNodes and how Hadoop handles failures transparently at scale.
Apache Hive is a data warehouse software that allows querying and managing large datasets stored in Hadoop's HDFS. It provides tools for easy extract, transform, and load of data. Hive supports a SQL-like language called HiveQL and big data analytics using MapReduce. Data in Hive is organized into databases, tables, partitions, and buckets. Hive supports various data types, operators, and functions for data analysis. Some advantages of Hive include its ability to handle large datasets using Hadoop's reliability and performance. However, Hive does not support all SQL features and transactions.
Here is how you can solve this problem using MapReduce and Unix commands:
Map step:
grep -o 'Blue\|Green' input.txt | wc -l > output
This uses grep to search the input file for the strings "Blue" or "Green" and print only the matches. The matches are piped to wc which counts the lines (matches).
Reduce step:
cat output
This isn't really needed as there is only one mapper; cat simply prints the contents of the output file, which holds the combined count of Blue and Green matches.
So MapReduce has been simulated using grep for the map and cat for the reduce functionality. The key aspects are that grep extracts the relevant data (the map step) and wc/cat aggregate it into a final count (the reduce step).
MapReduce: Distributed Computing for Machine Learning | butest
This document summarizes research using the MapReduce framework for machine learning tasks on modest compute clusters. It benchmarks MapReduce performance on search and sort tasks using an 80-node cluster. It finds that MapReduce is suitable for basic operations on large datasets but has complications for more complex machine learning. It also discusses classes of machine learning algorithms that can be addressed in MapReduce, including single-pass, iterative, and query-based learning techniques.
Hadoop is a framework for distributed storage and processing of large datasets across clusters of commodity hardware. It uses HDFS for fault-tolerant storage and MapReduce as a programming model for distributed computing. HDFS stores data across clusters of machines and replicates it for reliability. MapReduce allows processing of large datasets in parallel by splitting work into independent tasks. Hadoop provides reliable and scalable storage and analysis of very large amounts of data.
This document describes the MapReduce programming model for processing large datasets in a distributed manner. MapReduce allows users to write map and reduce functions that are automatically parallelized and run across large clusters. The input data is split and the map tasks run in parallel, producing intermediate key-value pairs. These are shuffled and input to the reduce tasks, which produce the final output. The system handles failures, scheduling and parallelization transparently, making it easy for programmers to write distributed applications.
The document discusses combiners and partitioners in MapReduce frameworks. It explains that combiners allow for local aggregation of map output key-value pairs before shuffling to reducers. This can significantly reduce the amount of data transferred between maps and reduces. For a combiner to be effective, the reduce operation must be commutative and associative so the local aggregations can be merged. The document provides examples of operations like sum() and max() that qualify for use as combiners. It also discusses factors like serialization overhead that should be considered when deciding whether a combiner will provide benefits for a given job.
This document provides an overview of Hadoop and MapReduce. It discusses how Hadoop uses HDFS for distributed storage and replication of data blocks across commodity servers. It also explains how MapReduce allows for massively parallel processing of large datasets by splitting jobs into mappers and reducers. Mappers process data blocks in parallel and generate intermediate key-value pairs, which are then sorted and grouped by the reducers to produce the final results.
This document provides an overview of Big Data and Hadoop. It defines Big Data as large volumes of structured, semi-structured, and unstructured data that is too large to process using traditional databases and software. It provides examples of the large amounts of data generated daily by organizations. Hadoop is presented as a framework for distributed storage and processing of large datasets across clusters of commodity hardware. Key components of Hadoop including HDFS for distributed storage and fault tolerance, and MapReduce for distributed processing, are described at a high level. Common use cases for Hadoop by large companies are also mentioned.
This document provides an introduction and overview of Hadoop, an open-source framework for distributed storage and processing of large datasets across clusters of computers. It discusses how Hadoop uses MapReduce and HDFS to parallelize workloads and store data redundantly across nodes to solve issues around hardware failure and combining results. Key aspects covered include how HDFS distributes and replicates data, how MapReduce isolates processing into mapping and reducing functions to abstract communication, and how Hadoop moves computation to the data to improve performance.
This document provides an overview of MapReduce concepts including:
1. It describes the anatomy of MapReduce including the map and reduce phases, intermediate data, and final outputs.
2. It explains key MapReduce terminology like jobs, tasks, task attempts, and the roles of the master and slave nodes.
3. It discusses MapReduce data types, input formats, record readers, partitioning, sorting, and output formats.
This document provides an overview of MapReduce, a programming model developed by Google for processing and generating large datasets in a distributed computing environment. It describes how MapReduce abstracts away the complexities of parallelization, fault tolerance, and load balancing to allow developers to focus on the problem logic. Examples are given showing how MapReduce can be used for tasks like word counting in documents and joining datasets. Implementation details and usage statistics from Google demonstrate how MapReduce has scaled to process exabytes of data across thousands of machines.
Big data refers to large volumes of unstructured or semi-structured data that is difficult to process using traditional databases and analysis tools. The amount of data generated daily is growing exponentially due to factors like increased internet usage and data collection by organizations. Hadoop is an open-source framework for distributed storage and processing of large datasets across clusters of commodity hardware. It uses HDFS for reliable storage and MapReduce as a programming model to process data in parallel across nodes.
Hadoop is a framework for distributed storage and processing of large datasets across clusters of computers. It addresses problems like hardware failure and combining data after analysis. The core components are HDFS for distributed storage and MapReduce for distributed processing. HDFS stores data as blocks across nodes and handles replication for reliability. The Namenode manages the file system namespace and metadata, while Datanodes store and retrieve blocks. Hadoop supports reliable analysis of large datasets in a distributed manner through its scalable architecture.
This document discusses Hadoop and its core components HDFS and MapReduce. It provides an overview of how Hadoop addresses the challenges of big data by allowing distributed processing of large datasets across clusters of computers. Key points include: Hadoop uses HDFS for distributed storage and MapReduce for distributed processing; HDFS works on a master-slave model with a Namenode and Datanodes; MapReduce utilizes a map and reduce programming model to parallelize tasks. Fault tolerance is built into Hadoop to prevent single points of failure.
This document provides an overview of Hadoop and how it addresses the challenges of big data. It discusses how Hadoop uses a distributed file system (HDFS) and MapReduce programming model to allow processing of large datasets across clusters of computers. Key aspects summarized include how HDFS works using namenodes and datanodes, how MapReduce leverages mappers and reducers to parallelize processing, and how Hadoop provides fault tolerance.
There is a growing trend of applications that need to handle huge amounts of data. However, analysing such data is a very difficult problem today. Several techniques can be considered for this: technologies like Grid Computing, Volunteer Computing, and RDBMSs are potential candidates, and Hadoop, a tool still in its growth phase, can also handle such data. We survey all of these techniques to find a suitable way to manage and work with Big Data.
Hadoop is a software framework that allows for distributed processing of large data sets across clusters of computers. It uses MapReduce and HDFS to parallelize tasks, distribute data storage, and provide fault tolerance. Applications of Hadoop include log analysis, data mining, and machine learning using large datasets at companies like Yahoo!, Facebook, and The New York Times.
Hadoop is a software framework that allows for distributed processing of large data sets across clusters of computers. It uses MapReduce as a programming model and HDFS for storage. MapReduce divides applications into parallelizable map and reduce tasks that process key-value pairs across large datasets in a reliable and fault-tolerant manner. HDFS stores multiple replicas of data blocks for reliability and allows processing of data in parallel on nodes where the data is located. Hadoop can reliably store and process petabytes of data on thousands of low-cost commodity hardware nodes.
This document provides an introduction to big data and Hadoop. It discusses how the volume of data being generated is growing rapidly and exceeding the capabilities of traditional databases. Hadoop is presented as a solution for distributed storage and processing of large datasets across clusters of commodity hardware. Key aspects of Hadoop covered include MapReduce for parallel processing, the Hadoop Distributed File System (HDFS) for reliable storage, and how data is replicated across nodes for fault tolerance.
Hadoop is a software framework that allows for distributed processing of large data sets across clusters of computers. It includes MapReduce for distributed computing, HDFS for storage, and runs efficiently on large clusters by distributing data and processing across nodes. Example applications include log analysis, machine learning, and sorting 1TB of data in under a minute. It is fault-tolerant, scalable, and designed for processing vast amounts of data in a reliable and cost-effective manner.
This document provides an overview of MapReduce and Apache Hadoop. It discusses the history and components of Hadoop, including HDFS and MapReduce. It then walks through an example MapReduce job, the WordCount algorithm, to illustrate how MapReduce works. The WordCount example counts the frequency of words in documents by having mappers emit <word, 1> pairs and reducers sum the counts for each word.
This presentation provides information about Hadoop: what Hadoop is and how it overcomes the disadvantages of a traditional distributed system. An example MapReduce program is also shown.
This document provides an overview of Hadoop and its core components. Hadoop is an open-source software framework for distributed storage and processing of large datasets across clusters of computers. It uses MapReduce as its programming model and the Hadoop Distributed File System (HDFS) for storage. HDFS stores data redundantly across nodes for reliability. The core subprojects of Hadoop include MapReduce, HDFS, Hive, HBase, and others.
1. The document discusses the evolution of computing from mainframes to smaller commodity servers and PCs. It then introduces cloud computing as an emerging technology that is changing the technology landscape, with examples like Google File System and Amazon S3.
2. It discusses the need for large data processing due to increasing amounts of data from sources like the stock exchange, Facebook, genealogy sites, and scientific experiments.
3. Hadoop is introduced as a framework for distributed computing and reliable shared storage and analysis of large datasets using its Hadoop Distributed File System (HDFS) for storage and MapReduce for analysis.
This document discusses a Hadoop Job Runner UI Tool that was created to make running Hadoop jobs easier. It allows users to browse input data locally, copy the data and job class to HDFS, run the job, and display results without using command lines. The tool simplifies tasks like distributing data and code, executing jobs, and retrieving output. Background information on Hadoop, MapReduce, and distributed computing environments is also provided.
This presentation will give you Information about :
1. Configuring HDFS
2. Interacting With HDFS
3. HDFS Permissions and Security
4. Additional HDFS Tasks
5. HDFS Overview and Architecture
6. HDFS Installation
7. Hadoop File System Shell
8. File System Java API
This document discusses big data and Hadoop. It defines big data as large data sets that cannot be processed by traditional software tools within a reasonable time frame due to the volume and variety of data. It then describes the three V's of big data - volume, velocity, and variety. The document provides examples of sources of big data and discusses how Hadoop, an open-source software framework, can be used to manage and analyze big data through its core components - HDFS for storage and MapReduce for processing. Finally, it provides a high-level overview of how MapReduce works.
Pointer to a pointer in C is used to store the address of another pointer. A pointer to a pointer, also known as a double pointer, allows the value of normal pointers to be changed or variable-sized 2D arrays to be created. Double pointers occupy the same amount of memory as regular pointers. They are declared with an additional asterisk before the pointer variable name and initialized by storing the address of a normal pointer variable. Pointer to pointers have various applications including dynamic memory allocation of multidimensional arrays and storing multilevel data.
1. Auxiliary memory refers to storage technologies like magnetic tapes and disks that are slower to access than main memory but can store much more data at a lower cost.
2. Common examples are magnetic tapes, which use magnetic strips to store data sequentially, and magnetic disks, which store data on spinning platters in concentric tracks and sectors that can be randomly accessed.
3. Cache memory is a small amount of very fast memory located close to the CPU that stores frequently used data from main memory to improve access speed. It works by checking for data matches before accessing slower main memory.
MongoDB is a document-oriented database that works with collections and documents. It supports hierarchical data structures and has built-in Python drivers. Python code can connect to a MongoDB database using PyMongo, the native Python library for MongoDB. Documents are inserted into and retrieved from MongoDB collections using methods like insert_one(), find(), update_one(), and delete_many(). Documents can be filtered and sorted before being returned.
Pointers in C programming store the address of other variables or memory locations. Pointers allow accessing and manipulating the data stored at those memory addresses. Pointers are useful for accessing arrays, dynamically allocating memory, and passing arguments by reference. Pointer variables must be declared with a data type and dereferenced using the * operator. Pointers can be initialized by assigning the address of another variable using the & operator. Pointer arithmetic allows incrementing or decrementing pointers to access successive memory locations.
The Certainty Factor Theory uses numeric values between -1 and 1 to represent the likelihood or certainty of statements or hypotheses being true based on evidence. It was developed for artificial intelligence systems to represent uncertain or incomplete information. The Certainty Factor can be calculated based on the Measure of Belief and Measure of Disbelief of hypotheses given evidence, and formulas are provided to combine Certainty Factors from multiple pieces of evidence. However, the theory has limitations, such as difficulty accurately assigning certainty values and the limited numeric range. The Dempster-Shafer Theory was introduced to address some of the limitations of probability theory. It defines a mass function over all subsets of a set of possible conclusions to represent degrees of belief, and uses belief
Magnetic disks and magnetic tapes are common examples of auxiliary memory. Magnetic disks use circular plates coated with magnetized material to store bits along concentric circles called tracks. Magnetic tapes use strips of plastic coated with magnetic material to record bits along multiple tracks. Both provide non-volatile, high-capacity storage but are slower to access than primary memory.
Random forest is an ensemble machine learning algorithm that combines multiple decision trees to improve predictive accuracy. It works by constructing many decision trees during training and outputting the class that is the mode of the classes of the individual trees. Random forest can be used for both classification and regression problems and provides high accuracy even with large datasets.
K-Means clustering is an unsupervised learning algorithm that groups unlabeled data points into K number of clusters based on their similarity. It works by first randomly selecting K cluster centers, known as centroids. It then assigns each data point to the closest centroid, forming K clusters. It then recalculates the position of the centroids and reassigns data points in an iterative process, until the centroids are stable or the maximum number of iterations is reached. The optimal number of clusters K is determined using the elbow method by plotting the within-cluster sum of squares (WCSS) against the number of clusters K.
Pandas is a popular Python library used for working with labeled/relational data and time series data. It provides data structures like Series and DataFrames. Series are one-dimensional arrays that can hold data of any type. DataFrames are two-dimensional structures like tables, with labeled rows and columns. DataFrames can be created from lists, dictionaries, or CSV/Excel files. Columns and rows can be accessed, selected, and manipulated. The values of Series can be reshaped into different dimensions.
MongoDB is a document-oriented database that works with collections and documents. It supports hierarchical data structures and has built-in Python drivers. Python code can connect to a MongoDB database using the PyMongo library, import the MongoClient to connect to a database, then insert, find, update, and delete documents in collections.
The document provides information on how to connect Python to MySQL and perform various operations like creating databases and tables, inserting, updating, deleting and fetching data. It explains how to install the required Python MySQL connector library and connect to a MySQL server from Python. It then demonstrates commands to create databases and tables, insert, update and delete data, and fetch data using where, order by and limit clauses. It also shows how to drop tables and databases and alter table structures.
This document provides an overview of Python programming concepts across 5 units. Unit 1 introduces Python installation, data types, variables, expressions, statements and functions. It covers integers, floats, Booleans, strings, lists and the basics of writing Python code. Unit 2 discusses control flow statements like conditionals and loops. Unit 3 covers functions, strings, arrays and lists in more detail. Unit 4 focuses on lists, tuples, dictionaries and their methods. Unit 5 discusses files, exceptions, modules and packages in Python.
The document contains Python code for performing various operations on binary files including reading, writing, appending, searching, updating, and deleting records from a binary file. The code uses the pickle module to serialize and deserialize Python objects to and from the binary file which stores student records with roll number, name, and marks fields. Functions are defined to handle each operation and call each other to demonstrate the full binary file processing functionality.
Data science involves analyzing data to extract meaningful insights. It uses principles from fields like mathematics, statistics, and computer science. Data scientists analyze large amounts of data to answer questions about what happened, why it happened, and what will happen. This helps generate meaning from data. There are different types of data analysis including descriptive analysis, which looks at past data, diagnostic analysis, which finds causes of past events, and predictive analysis, which forecasts future trends. The data analysis process involves specifying requirements, collecting and cleaning data, analyzing it, interpreting results, and reporting findings. Tools like SAS, Excel, R and Python are used for these tasks.
The document discusses various ways to convert between JSON and XML formats in Python. It describes using the json and xmltodict modules to serialize and deserialize between the two formats. Methods like json.loads(), json.dumps(), xmltodict.unparse() are used to convert between Python dictionaries and JSON/XML strings or files. Both string conversions and file conversions are demonstrated.
This document discusses how to use the Beautiful Soup library in Python to parse HTML files and extract tag values. It covers installing Beautiful Soup, reading HTML files, extracting values from specific tags, extracting values from all instances of a tag, creating HTML files in Python, and viewing HTML files and web pages. Functions of the glob module like iglob(), glob(), and escape() are also discussed for filename pattern matching.
A Python regular expression is a sequence of metacharacters that define a search pattern used to find or find and replace strings. Common regex functions include search(), findall(), split(), sub(), and functions of match objects like start(), span(), and string(). Regular expressions can match characters, quantifiers, word boundaries, and more.
This document discusses different data representation methods used in digital computers including data types, number systems, complements, fixed point representation, floating point representation, and overflow detection. It describes how binary, octal, decimal, and hexadecimal numbers are represented. It also explains signed magnitude representation, 1's complement, and 2's complement representations for negative integers. Floating point representation following the IEEE 754 standard is also summarized.
2. Introduction
Big Data:
•Big data is a term used to describe the voluminous amount of unstructured
and semi-structured data a company creates.
•Data that would take too much time and cost too much money to load into
a relational database for analysis.
• Big data doesn't refer to any specific quantity; the term is often used when
speaking about petabytes and exabytes of data.
3. • The New York Stock Exchange generates about one terabyte of new trade data per day.
• Facebook hosts approximately 10 billion photos, taking up one petabyte of storage.
• Ancestry.com, the genealogy site, stores around 2.5 petabytes of data.
• The Internet Archive stores around 2 petabytes of data, and is growing at a rate of 20
terabytes per month.
• The Large Hadron Collider near Geneva, Switzerland, produces about 15 petabytes of
data per year.
4. What Caused The Problem?
Year | Standard Hard Drive Size (MB)
1990 | 1,370
2010 | 1,000,000

Year | Data Transfer Rate (MB/s)
1990 | 4.4
2010 | 100
5. So What Is The Problem?
The transfer speed is around 100 MB/s
A standard disk is 1 Terabyte
Time to read the entire disk = 10,000 seconds, or about 3 hours!
Increasing processing power may not be as helpful because
• Network bandwidth is now more of a limiting factor
• Physical limits of processor chips have been reached
6. So What do We Do?
•The obvious solution is that we use
multiple processors to solve the same
problem by fragmenting it into pieces.
•Imagine if we had 100 drives, each
holding one hundredth of the data.
Working in parallel, we could read the
data in under two minutes.
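The arithmetic behind these two slides can be checked directly. Below is a small Python sketch using the figures quoted above (a 1 TB disk and a transfer rate of roughly 100 MB/s); the 100-drive case assumes a perfectly even split of the data and ignores any coordination overhead.

# Rough check of the read-time arithmetic from the slides.
DISK_SIZE_MB = 1_000_000        # 1 TB expressed in MB
TRANSFER_RATE_MB_S = 100        # ~100 MB/s per drive

# One drive reading the whole disk sequentially.
single = DISK_SIZE_MB / TRANSFER_RATE_MB_S
print(f"1 drive   : {single:,.0f} s (~{single / 3600:.1f} hours)")

# 100 drives, each holding one hundredth of the data, reading in parallel.
drives = 100
parallel = (DISK_SIZE_MB / drives) / TRANSFER_RATE_MB_S
print(f"{drives} drives: {parallel:,.0f} s (~{parallel / 60:.1f} minutes)")

Running this prints about 10,000 seconds (roughly 3 hours) for the single drive and about 100 seconds (under two minutes) for 100 drives working in parallel, matching the numbers on the slides.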
7. Distributed Computing Vs
Parallelization
Parallelization: multiple processors or CPUs
in a single machine
Distributed Computing: multiple computers
connected via a network
8. Examples
Cray-2 was a four-processor ECL
vector supercomputer made by
Cray Research starting in 1985
9. Distributed Computing
The key issues involved in this Solution:
Hardware failure
Combine the data after analysis
Network Associated Problems
10. What Can We Do With A Distributed
Computer System?
IBM Deep Blue
Multiplying Large Matrices
Simulating several hundreds of characters (The Lord of the Rings)
Index the Web (Google)
Simulating an internet size network for
network experiments
11. Problems In Distributed Computing
• Hardware Failure:
As soon as we start using many pieces of
hardware, the chance that one will fail is fairly
high.
• Combine the data after analysis:
Most analysis tasks need to be able to combine
the data in some way; data read from one
disk may need to be combined with the data
from any of the other 99 disks.
12. To The Rescue!
Apache Hadoop is a framework for running applications on
large clusters built of commodity hardware.
A common way of avoiding data loss is through replication:
redundant copies of the data are kept by the system so that in the
event of failure, there is another copy available. The Hadoop
Distributed Filesystem (HDFS), takes care of this problem.
The second problem is solved by a simple programming model:
MapReduce. Hadoop is the popular open source implementation
of MapReduce, a powerful tool designed for deep analysis and
transformation of very large data sets.
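As a toy illustration of the replication idea only (this is not HDFS's actual block-placement policy, which also takes racks and node load into account), the sketch below simply assigns each block to three distinct nodes, so that the failure of any single node still leaves two copies of every block:

import random

def place_block(block_id, nodes, replication=3):
    # Toy placement: choose `replication` distinct nodes for one block.
    return random.sample(nodes, replication)

nodes = [f"node{i}" for i in range(1, 11)]
placements = {blk: place_block(blk, nodes) for blk in ("blk_0001", "blk_0002", "blk_0003")}
print(placements)
# Whichever single node fails, at least two replicas of each block survive.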
13. What Else is Hadoop?
A reliable shared storage and analysis system.
There are other subprojects of Hadoop that provide complementary
services, or build on the core to add higher-level abstractions. The various
subprojects of Hadoop include:
1. Core
2. Avro
3. Pig
4. HBase
5. Zookeeper
6. Hive
7. Chukwa
14. Hadoop Approach to Distributed
Computing
The theoretical 1000-CPU machine would cost a very large amount of
money, far more than 1,000 single-CPU machines.
Hadoop will tie these smaller and more reasonably priced machines together
into a single cost-effective compute cluster.
Hadoop provides a simplified programming model which allows the user to
quickly write and test distributed systems, and its efficient, automatic
distribution of data and work across machines in turn utilizes the
underlying parallelism of the CPU cores.
16. MapReduce
Hadoop limits the amount of communication which can be performed by the
processes, as each individual record is processed by a task in isolation from the others.
By restricting the communication between nodes, Hadoop makes the distributed system
much more reliable. Individual node failures can be worked around by restarting tasks
on other machines.
The other workers continue to operate as though nothing went wrong, leaving the
challenging aspects of partially restarting the program to the underlying Hadoop layer.
Map: (in_key, in_value) -> list(out_key, intermediate_value)
Reduce: (out_key, list(intermediate_value)) -> list(out_value)
17. What is MapReduce?
MapReduce is a programming model
Programs written in this functional style are automatically parallelized and
executed on a large cluster of commodity machines
MapReduce is an associated implementation for processing and generating
large data sets.
18. The Programming Model Of MapReduce
Map, written by the user, takes an input pair and produces a set of intermediate
key/value pairs. The MapReduce library groups together all intermediate values
associated with the same intermediate key I and passes them to the Reduce
function.
19. The Reduce function, also written by the user, accepts an intermediate key I and a set of values
for that key. It merges together these values to form a possibly smaller set of values
20. This abstraction allows us to handle lists of values that are too large to fit in memory.
Example (word count):

map(String key, String value):
  // key: document name
  // value: document contents
  for each word w in value:
    EmitIntermediate(w, "1");

reduce(String key, Iterator values):
  // key: a word
  // values: a list of counts
  int result = 0;
  for each v in values:
    result += ParseInt(v);
  Emit(AsString(result));
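For readers who prefer something runnable, here is a plain-Python sketch of the same word-count logic. It only simulates the model in memory; a real Hadoop job would implement this as Mapper and Reducer classes in Java or via Hadoop Streaming (described later), and the dictionary-based grouping below stands in for the framework's shuffle and sort.

from collections import defaultdict

def map_fn(doc_name, contents):
    # Like the map() pseudocode: emit (word, 1) for every word.
    for word in contents.split():
        yield word, 1

def reduce_fn(word, counts):
    # Like the reduce() pseudocode: sum the counts for one word.
    return word, sum(counts)

documents = {
    "doc1": "the quick brown fox",
    "doc2": "the lazy dog and the fox",
}

grouped = defaultdict(list)            # the "shuffle": group values by key
for name, text in documents.items():
    for key, value in map_fn(name, text):
        grouped[key].append(value)

results = dict(reduce_fn(k, v) for k, v in sorted(grouped.items()))
print(results)   # e.g. {'and': 1, 'brown': 1, ..., 'fox': 2, 'the': 3}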
21. Orientation of Nodes
Data Locality Optimization:
The computer nodes and the storage nodes are the same. The Map-Reduce
framework and the Distributed File System run on the same set of nodes. This
configuration allows the framework to effectively schedule tasks on the nodes where
data is already present, resulting in very high aggregate bandwidth across the
cluster.
If this is not possible: The computation is done by another processor on the same
rack.
“Moving Computation is Cheaper than Moving Data”
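A minimal sketch of this scheduling preference, under the simplifying assumption that we only know which nodes hold a replica of each split and which nodes currently have a free task slot (the real Hadoop scheduler also considers racks, as the slide notes, and much more):

def choose_node(replica_nodes, free_nodes):
    # Prefer a node that already holds the data; otherwise fall back to any free node.
    for node in replica_nodes:
        if node in free_nodes:
            return node, "data-local"
    return next(iter(free_nodes)), "remote"

free_nodes = {"node1", "node3", "node4"}
split_locations = {
    "split0": ["node2", "node3"],   # a replica lives on a free node -> data-local task
    "split1": ["node5", "node6"],   # no replica on a free node -> data must move
}
for split, replicas in split_locations.items():
    node, kind = choose_node(replicas, free_nodes)
    print(split, "->", node, f"({kind})")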
22. How MapReduce Works
A Map-Reduce job usually splits the input data-set into independent chunks which are
processed by the map tasks in a completely parallel manner.
The framework sorts the outputs of the maps, which are then input to the reduce tasks.
Typically both the input and the output of the job are stored in a file-system. The
framework takes care of scheduling tasks, monitoring them and re-executes the failed
tasks.
A MapReduce job is a unit of work that the client wants to be performed: it consists of
the input data, the MapReduce program, and configuration information. Hadoop runs
the job by dividing it into tasks, of which there are two types: map tasks and reduce
tasks
23. Fault Tolerance
There are two types of nodes that control the job execution process: tasktrackers and
jobtrackers
The jobtracker coordinates all the jobs run on the system by scheduling tasks to run on
tasktrackers.
Tasktrackers run tasks and send progress reports to the jobtracker, which keeps a record
of the overall progress of each job.
If a task fails, the jobtracker can reschedule it on a different tasktracker.
25. Input Splits
Input splits: Hadoop divides the input to a MapReduce job into fixed-size
pieces called input splits, or just splits. Hadoop creates one map task for each
split, which runs the user-defined map function for each record in the split.
The quality of the load balancing increases as the splits become more fine-
grained.
BUT if splits are too small, then the overhead of managing the splits and of map
task creation begins to dominate the total job execution time. For most jobs, a
good split size tends to be the size of an HDFS block, 64 MB by default.
WHY?
Map tasks write their output to local disk, not to HDFS. Map output is
intermediate output: it’s processed by reduce tasks to produce the final output,
and once the job is complete the map output can be thrown away. So storing it
in HDFS, with replication, would be a waste of time. It is also possible that the
node running the map task fails before the map output has been consumed by
the reduce task.
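To make the trade-off concrete, the short sketch below computes how many map tasks a job would create for one input file at different split sizes, assuming (as the slide does) one map task per split and a default split size equal to the 64 MB HDFS block size. The file size and split sizes are made-up illustration values.

import math

def num_map_tasks(file_size_mb, split_size_mb):
    # One map task per input split; the last split may be smaller than the rest.
    return math.ceil(file_size_mb / split_size_mb)

file_size_mb = 10_240                  # a hypothetical 10 GB input file
for split_mb in (64, 128, 1):          # default block size, a larger split, a tiny split
    print(f"{split_mb:>4} MB splits -> {num_map_tasks(file_size_mb, split_mb):>6} map tasks")

With 64 MB splits the job runs 160 map tasks; with 1 MB splits it would need 10,240, which is where the overhead of managing splits and creating map tasks starts to dominate.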
26. Input to Reduce Tasks
Reduce tasks don’t have the advantage of
data locality—the input to a single reduce
task is normally the output from all mappers.
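Which reducer receives which keys is decided by a partition function applied to every intermediate key on the map side; Hadoop's default behaviour amounts to hashing the key modulo the number of reduce tasks. A minimal sketch of that idea (using Python's built-in hash rather than Hadoop's actual HashPartitioner):

def partition(key, num_reducers):
    # Essentially hash(key) mod numReduceTasks, as in the default partitioner.
    return hash(key) % num_reducers

num_reducers = 3
for key in ("apple", "banana", "cherry", "apple"):
    print(key, "-> reducer", partition(key, num_reducers))
# Every map task applies the same function, so all values for a given key
# end up at the same reduce task, no matter which mapper produced them.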
30. •Many MapReduce jobs are limited by the bandwidth available on the cluster.
•In order to minimize the data transferred between the map and reduce tasks, combiner
functions are introduced.
•Hadoop allows the user to specify a combiner function to be run on the map output—the
combiner function’s output forms the input to the reduce function.
•Combiner functions can help cut down the amount of data shuffled between the maps and
the reduces.
Combiner Functions
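A small sketch of why a combiner helps for something like word count, where the reduce operation (summing) is commutative and associative: each map task pre-aggregates its own output locally, so far fewer (word, count) pairs have to be shuffled across the network to the reducers. This is an in-memory illustration of the idea, not Hadoop's Combiner API.

from collections import Counter

def map_output(text):
    # Raw map output: one (word, 1) pair per word.
    return [(word, 1) for word in text.split()]

def combine(pairs):
    # Local, map-side aggregation using the same logic as the reducer (a sum).
    combined = Counter()
    for word, count in pairs:
        combined[word] += count
    return list(combined.items())

text = "to be or not to be"
raw = map_output(text)
print(len(raw), "pairs shuffled without a combiner")      # 6
print(len(combine(raw)), "pairs shuffled with a combiner")  # 4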
31. •Hadoop provides an API to MapReduce that allows you to
write your map and reduce functions in languages other than
Java.
•Hadoop Streaming uses Unix standard streams as the
interface between Hadoop and your program, so you can use
any language that can read standard input and write to
standard output to write your MapReduce program.
Hadoop Streaming:
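Since any language that can read standard input and write standard output will do, a word-count job could be written as the following pair of Python scripts (the file names mapper.py and reducer.py are just illustrative). The reducer relies on the framework having sorted the map output by key, which is what lets it detect key boundaries in a single pass. The pair can be tried locally with cat input.txt | python3 mapper.py | sort | python3 reducer.py, where sort stands in for Hadoop's shuffle; the exact hadoop-streaming jar invocation depends on your installation.

#!/usr/bin/env python3
# mapper.py - emit "word<TAB>1" for every word on standard input.
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")

#!/usr/bin/env python3
# reducer.py - input arrives sorted by key, so counts for a word are adjacent.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t", 1)
    if word != current_word:
        if current_word is not None:
            print(f"{current_word}\t{current_count}")
        current_word, current_count = word, 0
    current_count += int(count)
if current_word is not None:
    print(f"{current_word}\t{current_count}")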
32. •Hadoop Pipes is the name of the C++ interface to Hadoop MapReduce.
•Unlike Streaming, which uses standard input and output to communicate with
the map and reduce code, Pipes uses sockets as the channel over which the
tasktracker communicates with the process running the C++ map or reduce
function. JNI is not used.
Hadoop Pipes:
33. Filesystems that manage the storage across a network of machines are called
distributed filesystems.
Hadoop comes with a distributed filesystem called HDFS, which stands for
Hadoop Distributed Filesystem.
HDFS, the Hadoop Distributed File System, is a distributed file system
designed to hold very large amounts of data (terabytes or even petabytes),
and provide high-throughput access to this information.
HADOOP DISTRIBUTED
FILESYSTEM (HDFS)
34. Problems In Distributed File Systems
Making distributed filesystems is more complex than regular disk filesystems. This
is because the data is spanned over multiple nodes, so all the complications of
network programming kick in.
•Hardware Failure
An HDFS instance may consist of hundreds or thousands of server machines, each storing
part of the file system’s data. The fact that there are a huge number of components and that
each component has a non-trivial probability of failure means that some component of HDFS
is always non-functional. Therefore, detection of faults and quick, automatic recovery from
them is a core architectural goal of HDFS.
•Large Data Sets
Applications that run on HDFS have large data sets. A typical file in HDFS is gigabytes to
terabytes in size. Thus, HDFS is tuned to support large files. It should provide high
aggregate data bandwidth and scale to hundreds of nodes in a single cluster. It should
support tens of millions of files in a single instance.
35. Goals of HDFS
Streaming Data Access
Applications that run on HDFS need streaming access to their data sets. They are
not general purpose applications that typically run on general purpose file systems.
HDFS is designed more for batch processing rather than interactive use by users.
The emphasis is on high throughput of data access rather than low latency of data
access. POSIX imposes many hard requirements that are not needed for
applications that are targeted for HDFS. POSIX semantics in a few key areas has
been traded to increase data throughput rates.
Simple Coherency Model
HDFS applications need a write-once-read-many access model for files. A file once
created, written, and closed need not be changed. This assumption simplifies data
coherency issues and enables high throughput data access. A Map/Reduce
application or a web crawler application fits perfectly with this model. There is a plan
to support appending-writes to files in the future.
36. “Moving Computation is Cheaper than Moving Data”
A computation requested by an application is much more efficient if
it is executed near the data it operates on. This is especially true when
the size of the data set is huge. This minimizes network congestion
and increases the overall throughput of the system. The assumption is
that it is often better to migrate the computation closer to where the
data is located rather than moving the data to where the application is
running. HDFS provides interfaces for applications to move
themselves closer to where the data is located.
Portability Across Heterogeneous Hardware and Software Platforms
HDFS has been designed to be easily portable from
one platform to another. This facilitates widespread adoption
of HDFS as a platform of choice for a large set of
applications.
37. Design of HDFS
Very large files
Files that are hundreds of megabytes, gigabytes, or terabytes in size. There
are Hadoop clusters running today that store petabytes of data.
Streaming data access
HDFS is built around the idea that the most efficient data processing pattern
is a write-once, read-many-times pattern.
A dataset is typically generated or copied from source, then various
analyses are performed on that dataset over time. Each analysis will involve
a large proportion of the dataset, so the time to read the whole dataset is
more important than the latency in reading the first record.
38. Low-latency data access
Applications that require low-latency access to data, in the tens
of milliseconds
range, will not work well with HDFS. Remember that HDFS is
optimized for delivering a high throughput of data, and this may
be at the expense of latency. HBase (Chapter 12 of Hadoop: The
Definitive Guide) is currently a better choice for low-latency access.
Multiple writers, arbitrary file modifications
Files in HDFS may be written to by a single writer. Writes are
always made at the end of the file. There is no support for
multiple writers, or for modifications at arbitrary offsets in the
file. (These might be supported in the future, but they are likely
to be relatively inefficient.)
39. • Lots of small files
Since the namenode holds filesystem metadata in memory, the limit to
the number of files in a filesystem is governed by the amount of
memory on the namenode. As a rule of thumb, each file, directory, and
block takes about 150 bytes. So, for example, if you had one million
files, each taking one block, you would need at least 300 MB of
memory. While storing millions of files is feasible, billions is beyond the
capability of current hardware.
40. Commodity hardware
Hadoop doesn’t require expensive, highly reliable hardware to run on.
It’s designed to run on clusters of commodity hardware for which the
chance of node failure across the cluster is high, at least for large
clusters. HDFS is designed to carry on working without a noticeable
interruption to the user in the face of such failure. It is also worth
examining the applications for which using HDFS does not work so
well. While this may change in the future, the areas where HDFS is
not a good fit today are the ones described above: low-latency data
access, lots of small files, and multiple writers making arbitrary
file modifications.
42. Block Abstraction
Blocks:
• A block is the minimum amount of data that can be read or
written.
• 64 MB by default.
• Files in HDFS are broken into block-sized chunks, which are
stored as independent units.
• HDFS blocks are large compared to disk blocks, and the
reason is to minimize the cost of seeks. By making a block
large enough, the time to transfer the data from the disk can be
made to be significantly larger than the time to seek to the start
of the block. Thus the time to transfer a large file made of
multiple blocks operates at the disk transfer rate.
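A quick way to see the block size actually applied to a file is to query the FileSystem API. The following is a minimal sketch; it assumes the file path is passed on the command line and that the Hadoop configuration files are on the classpath.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ShowBlockSize {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();   // picks up core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);
        FileStatus status = fs.getFileStatus(new Path(args[0]));
        System.out.println("File length:     " + status.getLen() + " bytes");
        System.out.println("Block size used: " + status.getBlockSize() + " bytes");
      }
    }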
43. Benefits of Block Abstraction
A file can be larger than any single disk in the network. There’s
nothing that requires the blocks from a file to be stored on the
same disk, so they can take advantage of any of the disks in
the cluster.
Making the unit of abstraction a block rather than a file
simplifies the storage subsystem.
Blocks provide fault tolerance and availability. To insure against
corrupted blocks and disk and machine failure, each block is
replicated to a small number of physically separate machines
(typically three). If a block becomes unavailable, a copy can be
read from another location in a way that is transparent to the
client.
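One way to observe this replication from a client is to ask the namenode where each block of a file lives. This is a minimal sketch using the FileSystem API; the command-line path is just an example.

    import java.util.Arrays;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ShowBlockLocations {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        FileStatus status = fs.getFileStatus(new Path(args[0]));
        // Ask for the locations of every block in the file.
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
        for (int i = 0; i < blocks.length; i++) {
          // Each block reports the hosts holding a replica (typically three).
          System.out.println("Block " + i + " on hosts: " + Arrays.toString(blocks[i].getHosts()));
        }
      }
    }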
44. Hadoop Archives
HDFS stores small files inefficiently, since each file is stored in
a block, and block metadata is held in memory by the
namenode. Thus, a large number of small files can eat up a lot
of memory on the namenode.
Hadoop Archives, or HAR files, are a file archiving facility that
packs files into HDFS blocks more efficiently, thereby reducing
namenode memory usage while still allowing transparent
access to files.
Hadoop Archives can be used as input to MapReduce.
45. Limitations of Archiving
There is currently no support for archive compression, although
the files that go into the archive can be compressed.
Archives are immutable once they have been created. To add or
remove files, you must recreate the archive.
46. Namenodes and Datanodes
An HDFS cluster has two types of node operating in a master-
worker pattern: a namenode (the master) and a number of
datanodes (workers).
The namenode manages the filesystem namespace. It
maintains the filesystem tree and the metadata for all the files
and directories in the tree.
Datanodes are the workhorses of the filesystem. They store
and retrieve blocks when they are told to (by clients or the
namenode), and they report back to the namenode periodically
with lists of blocks that they are storing.
47. Without the namenode, the filesystem cannot
be used. In fact, if the machine running the
namenode were obliterated, all the files on
the filesystem would be lost since there
would be no way of knowing how to
reconstruct the files from the blocks on the
datanodes.
It is important to make the namenode resilient to failure, and
Hadoop provides two mechanisms for this:
1. The first is to back up the files that make up the persistent state of the
filesystem metadata. Hadoop can be configured so that the
namenode writes its persistent state to multiple filesystems.
2. Another solution is to run a secondary namenode. The
secondary namenode usually runs on a separate physical
machine, since it requires plenty of CPU and as much memory
as the namenode to perform the merge. It keeps a copy of the
merged namespace image, which can be used in the event of
the namenode failing.
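As an illustration of the first mechanism, the usual approach is to list more than one directory (for example a local disk plus a remote NFS mount) for the property in hdfs-site.xml that holds the namenode's persistent state; in the Hadoop 1.x releases this material describes, that property is dfs.name.dir (dfs.namenode.name.dir in later releases). Shown here as name = value shorthand rather than the XML property syntax, with purely hypothetical paths:

    dfs.name.dir = /disk1/hdfs/name,/remote/nfs/hdfs/name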
49. File System Namespace
HDFS supports a traditional hierarchical file organization. A user or an
application can create and remove files, move a file from one directory
to another, rename a file, create directories and store files inside these
directories.
HDFS does not yet implement user quotas or access permissions.
HDFS does not support hard links or soft links. However, the HDFS
architecture does not preclude implementing these features.
The Namenode maintains the file system namespace. Any change to
the file system namespace or its properties is recorded by the
Namenode. An application can specify the number of replicas of a file
that should be maintained by HDFS. The number of copies of a file is
called the replication factor of that file. This information is stored by the
Namenode.
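A small sketch of these namespace operations, and of setting a per-file replication factor, through the FileSystem API; all paths and the factor of 2 below are made-up examples.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class NamespaceExample {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        fs.mkdirs(new Path("/user/demo/raw"));                   // create a directory
        fs.rename(new Path("/user/demo/raw"),
                  new Path("/user/demo/input"));                 // move/rename within the namespace

        // Ask the namenode to keep 2 replicas of this (hypothetical) file instead of the default.
        fs.setReplication(new Path("/user/demo/input/data.txt"), (short) 2);
      }
    }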
50. Data Replication
The blocks of a file are replicated for fault tolerance.
The NameNode makes all decisions regarding replication of
blocks. It periodically receives a Heartbeat and a Blockreport
from each of the DataNodes in the cluster. Receipt of a
Heartbeat implies that the DataNode is functioning properly.
A Blockreport contains a list of all blocks on a DataNode.
When the replication factor is three, HDFS’s placement policy
is to put one replica on one node in the local rack, another on a
different node in the local rack, and the last on a different node
in a different rack.
51. Bibliography
1. Hadoop: The Definitive Guide, O’Reilly / Yahoo! Press, 2009
2. MapReduce: Simplified Data Processing on Large Clusters,
Jeffrey Dean and Sanjay Ghemawat
3. Ranking and Semi-supervised Classification on Large Scale
Graphs Using Map-Reduce, Delip Rao, David Yarowsky, Dept.
of Computer Science, Johns Hopkins University
4. Improving MapReduce Performance in Heterogeneous
Environments, Matei Zaharia, Andy Konwinski, Anthony D.
Joseph, Randy Katz, Ion Stoica, University of California,
Berkeley
5. MapReduce in a Week By Hannah Tang, Albert Wong, Aaron
Kimball, Winter 2007
Editor's Notes
(Note, however, that small files do not take up any more disk space than is required to store the raw contents of the file. For example, a 1 MB file stored with a block size of 128 MB uses 1 MB of disk space, not 128 MB.)