
Hadoop MapReduce: Knowing the Basics


MapReduce can mean two different things:

- Programming model: If you can rewrite your algorithms as maps and reduces, and your problem can be broken up into small pieces solvable in parallel, then MapReduce may be a suitable distributed approach for solving problems over your large datasets. See [1] for a sample MapReduce application; a minimal sketch is also shown below.
- Software framework: A framework such as Hadoop MapReduce breaks large data up into smaller parallelizable chunks and handles the scheduling.
- Alternative: Apache Tez can process certain workloads more efficiently than MapReduce.
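
To make the programming model concrete, below is a minimal sketch of the map side of the classic word count example, written against the org.apache.hadoop.mapreduce API; the class and field names are our own illustration rather than something prescribed by this article.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Map step of word count: emit a <word, 1> pair for every token in an input line.
    public class WordCountMapper extends Mapper<Object, Text, Text, IntWritable> {

      private static final IntWritable ONE = new IntWritable(1);
      private final Text word = new Text();

      @Override
      public void map(Object key, Text value, Context context)
          throws IOException, InterruptedException {
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
          word.set(itr.nextToken());
          context.write(word, ONE);
        }
      }
    }

Each call to map receives one record of the input split and emits intermediate key/value pairs; the framework groups these pairs by key before handing them to the reduces.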

In this article, we will look at the two Hadoop MapReduce frameworks provided by Apache Hadoop:

- MapReduce 1 (MR1)
- YARN (MR2)

We will introduce Hadoop MapReduce using MR1 and then briefly cover MR2.

MapReduce Job

A MapReduce job usually splits the input data-set into independent chunks which are processed by the map tasks in a completely parallel manner. The framework sorts the outputs of the maps, which are then input to the reduce tasks. Typically both the input and the output of the job are stored in a file system (e.g., HDFS on Apache Hadoop). The framework takes care of scheduling tasks, monitoring them, and re-executing failed tasks.

(input) <k1, v1> -> map -> <k2, v2> -> combine -> <k2, v2> -> reduce -> <k3, v3> (output)
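
Continuing the word count sketch from above, the reduce (and, optionally, combine) step in this flow sums the <word, 1> pairs produced by the maps. The class below is again our own illustration; because addition is associative and commutative, the same class can also be registered as the combiner.

    import java.io.IOException;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    // Reduce step of word count: sum all counts emitted for each word.
    public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

      private final IntWritable result = new IntWritable();

      @Override
      public void reduce(Text key, Iterable<IntWritable> values, Context context)
          throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
          sum += val.get();
        }
        result.set(sum);
        context.write(key, result);   // final <word, total count> pair
      }
    }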

A MapReduce job is useful for batch processing on terabytes or petabytes of data stored in Apache Hadoop and has the following characteristics: [6]

- You cannot control the order in which the maps or reduces are run.
- For maximum parallelism, the maps and reduces should not depend on data generated in the same MapReduce job (i.e., they should be stateless).
- A database with an index will always be faster than a MapReduce job on unindexed data.
- Reduce operations do not take place until all maps are complete (or have failed and been skipped).
- The general assumption is that the output of reduce is smaller than the input to map: a large data source is used to generate smaller final values.

Hadoop MapReduce Framework

It's easy to execute MapReduce applications on Apache Hadoop. Beyond simplicity, scalability, and performance, the Hadoop MapReduce framework also provides additional benefits such as:

- Failure and recovery
- Minimal data motion

Failure & Recovery

The framework takes care of failures. It is designed to detect and handle failures at the application layer, delivering a highly available service on top of a cluster of servers, each of which may be prone to failures.

If a server with one copy of the data is unavailable, another server has a copy of the same key/value pair, which can be used to solve the same sub-task. The JobTracker (see below) keeps track of it all.


[Diagram: the MapReduce framework and HDFS running on the same set of nodes]

Minimal data motion

Typically the compute nodes and the storage nodes are the same, that is, the MapReduce framework and the Hadoop Distributed File System are running on the same set of nodes (see the above diagram). This configuration allows the framework to effectively schedule tasks on the nodes where data is already present, resulting in lower network I/O and very high aggregate bandwidth across the cluster.

JobTracker & TaskTracker

JobTracker

Applications using the MapReduce framework are required to:

- Specify the job configuration
- Specify the input/output locations
- Supply the map, combine, and reduce functions

The Hadoop job client then submits the job (jar/executable, etc.) and configuration to the JobTracker; a minimal driver sketch is shown below.
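
For illustration, a minimal driver that performs these steps for the word count example might look like the sketch below. It uses the newer org.apache.hadoop.mapreduce API (Job) rather than the original MR1 JobConf/JobClient classes, and the class names are our own.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    // Driver: specifies the job configuration, the input/output locations,
    // and the map/combine/reduce classes, then submits the job to the cluster.
    public class WordCountDriver {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCountDriver.class);

        // Supply the map, combine, and reduce functions.
        job.setMapperClass(WordCountMapper.class);
        job.setCombinerClass(WordCountReducer.class);
        job.setReducerClass(WordCountReducer.class);

        // Declare the output key/value types.
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // Specify the input/output locations (typically HDFS paths).
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // Submit the job and wait for it to complete.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

The packaged jar would then typically be submitted with the hadoop jar command, which sends the jar and configuration to the cluster as described above.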

In MR1, the JobTracker manages and monitors both the cluster resources and MapReduce job and task scheduling, which makes the JobTracker a single point of failure (SPOF) in a cluster. It also prevents the cluster from being scaled efficiently.

TaskTracker

The JobTracker first determines the number of splits from the input path and selects TaskTrackers based on their network proximity to the data sources; it then sends the task requests to those selected TaskTrackers.

Each TaskTracker spawns separate JVM processes to do the actual work; this ensures that a process failure does not take down the TaskTracker itself. The TaskTracker monitors these spawned processes, capturing their output and exit codes. When a process finishes, successfully or not, the TaskTracker notifies the JobTracker.

The TaskTrackers also send periodic heartbeat messages to the JobTracker to reassure it that they are still alive. These messages also inform the JobTracker of the number of available task slots, so the JobTracker can stay up to date on where in the cluster work can be delegated.

YARN (MR2)

To overcome the drawbacks of MR1, YARN (or MR2) was introduced and provides for: [9]

- Better scalability: the JobTracker functionality is split between the ResourceManager and the ApplicationMaster, so Hadoop can be run on much larger clusters under MR2.
- Better cluster utilization: the resource capacity configured for each node may be used by both map and reduce tasks.
- Non-MapReduce workloads: the cluster is no longer limited to running MapReduce applications.
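
As a rough sketch of how a client selects the MR2 runtime (the property name is a standard Hadoop configuration key, but the surrounding class is our own illustration), the execution framework is chosen via mapreduce.framework.name, which is usually set cluster-wide in mapred-site.xml rather than in application code:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    // Illustrative snippet: direct a MapReduce job to the YARN (MR2) runtime.
    public class YarnFrameworkExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // "yarn" selects the MR2 runtime; "local" runs the job in a single JVM for testing.
        conf.set("mapreduce.framework.name", "yarn");
        Job job = Job.getInstance(conf, "job on yarn");
        // ... set the mapper/reducer classes and input/output paths as in the driver above ...
      }
    }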
