Hadoop Streaming with Node.js and other frameworks

Hadoop Streaming is a utility included with any Hadoop distribution that allows any executable program that can read from standard input and write to standard output to be used as the Mapper or Reducer of a MapReduce job.

Why use Hadoop Streaming?

There are 2 main reasons:

1. It lets you write MapReduce applications in languages other than Java, so if you are not comfortable with Java this might be an option.
2. Most importantly, it opens MapReduce execution to the large number of libraries, written by really smart people, that are only available in other languages, such as Python with pip, Node.js with npm, and Ruby with gems. Instead of porting that code to Java, you can simply run the libraries in the environment they were designed for.

Hadoop Streaming basics

The simplest example of Hadoop Streaming uses already available command line tools such as cat and wc. In this example, we are going to use a slightly modified version of Ulysses by James Joyce as the input file:

hdfs dfs -put 4300-0.txt /tmp/ulyses.txt
yarn jar path/to/hadoop-streaming.jar -input /tmp/ulysses.txt -output /tmp/ulyses -mapper /bin/cat -reducer /usr/bin/wc

This creates the output file /tmp/ulyses/part-00000 holding the wc output.

As the wc help explains, these are the newline, word and byte counts of the input file.
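If you want to look at the result directly, you can read it back from HDFS:

#Print the job output; wc reports newlines, words and bytes, in that order
hdfs dfs -cat /tmp/ulyses/part-00000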

The requirements for this command are really simple:

-input inputFilesOrDir: the input files or directory
-output OutputDir: the directory where the results are written
-mapper /bin/cat: the mapper executable to call
-reducer /usr/bin/wc: the reducer executable to call

By default, each line in the input files counts as a record, and each line of mapper output is read as a key/value pair separated by a TAB. You can modify this behavior by specifying the -inputformat and -outputformat command line options to use custom InputFormat and OutputFormat Java classes, respectively.
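For example, spelling out the defaults explicitly should be equivalent to omitting the two options (the output directory here is just a placeholder):

yarn jar path/to/hadoop-streaming.jar \
-inputformat org.apache.hadoop.mapred.TextInputFormat \
-outputformat org.apache.hadoop.mapred.TextOutputFormat \
-input /tmp/ulyses.txt -output /tmp/ulyses-formats \
-mapper /bin/cat -reducer /usr/bin/wc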

Hadoop Streaming using custom scripts

There is also the command line option -files, which takes a comma-separated list of files and folders and makes them available in the working directory of the tasks running the job.

You can exploit this option to pass custom scripts written in JavaScript, Ruby, Python, etc. Below is a really simple example passing a mapper.js and a reducer.js to Hadoop Streaming.

The mapper:

#mapper.js
var stdin = process.openStdin();
var stdout = process.stdout;
var input = '';
stdin.setEncoding('utf8');
stdin.on('data', function(data) {
  if (data) {
    input += data;
    //Consume every complete line buffered so far; after each match
    //RegExp.leftContext is the line and RegExp.rightContext the remainder
    while (input.match(/\r?\n/)) {
      input = RegExp.rightContext;
      proc(RegExp.leftContext);
    }
  }
});
stdin.on('end', function() {
  if (input) {
    proc(input);
  }
});
//Emit the ninth word of each line with a count of 1; lines with fewer than
//nine words emit the string "undefined" (visible in the results further down)
function proc(line) {
  var words = line.split(' ');
  stdout.write(words[8] + ',1\n');
}

The reducer:

#reducer.js
var stdin = process.openStdin();
var stdout = process.stdout;
var counter = {};
var input = '';
stdin.setEncoding('utf8');
stdin.on('data', function(data) {
  if (data) {
    input += data;
    while (input.match(/\r?\n/)) {
      input = RegExp.rightContext;
      proc(RegExp.leftContext);
    }
  }
});
stdin.on('end', function() {
  if (input) proc(input);
  //Print the aggregated counts once all input has been consumed
  for (var k in counter) {
    stdout.write(k + ':' + counter[k] + '\n');
  }
});
//Accumulate the count for each word emitted by the mapper
function proc(line) {
  var words = line.split(',');
  var word = words[0];
  var count = parseInt(words[1], 10);
  if (!counter[word]) counter[word] = count;
  else counter[word] += count;
}

The command:

yarn jar path/to/hadoop-streaming.jar -files mapper.js,reducer.js -input /tmp/ulyses.txt -output /tmp/ulyses-node -mapper 'node mapper.js' -reducer 'node reducer.js'
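Before submitting the job, you can sanity check the two scripts locally with a plain shell pipeline, assuming Node.js is installed on your workstation; sort stands in for the shuffle that Hadoop performs between the map and reduce phases:

#Local dry run of mapper.js and reducer.js
cat 4300-0.txt | node mapper.js | sort | node reducer.js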

But you might run into problems if even a single node does not have the correct runtime installed. This is not easy to solve if the cluster has a significant number of nodes, or if you don't have privileged access to the filesystem, both common situations in cloud environments.

You can solve this by getting a binary distribution of Node.js and passing the binary as one of the files.

#Get the binary distribution
wget https://nodejs.org/dist/v4.4.7/node-v4.4.7-linux-x64.tar.xz
#Extract the content to a familiar location
mkdir -p node
tar -xvJf node-v4.4.7-linux-x64.tar.xz -C ./node --strip-components=1
#Pass the node executable as one of the files
yarn jar path/to/hadoop-streaming.jar \
-files mapper.js,reducer.js,node/bin/node \
-input /tmp/ulyses.txt -output /tmp/ulyses-node -mapper 'node mapper.js' -reducer 'node reducer.js'

These are the last few lines of the result of that command when we sort by count:

he:175
his:193
undefined:215
to:282
in:283
a:307
and:380
of:516
the:740

Using scripts with additional requirements

Although this makes Node.js available on every task node, it only provides the core modules of Node.js, the ones that come precompiled into the executable. If you are using Node.js, it's probably because you want to use some of its third-party packages.

There are two recommended options, depending on the number of extra packages that you are going to use:

1. Install the packages one by one using npm install <package>.
2. List the packages as dependencies in a package.json file and install them all with npm install.

The only requirement is that you must install all the packages locally in the ./node_modules directory.
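Either way, the commands would look something like this (the lodash dependency is only an illustrative placeholder):

#Option 1: install a package directly; it lands in ./node_modules
npm install lodash
#Option 2: install everything listed under "dependencies" in package.json
npm install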

Now, instead of passing each file one by one, we are going to create a single package to distribute to each of the nodes:

#Create the package containing the node executable,
#the node modules, and the mapper and reducer scripts
tar -cvzf package.tar.gz node/bin/node node_modules mapper.js reducer.js
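You can list the archive to double-check its layout, which is also the layout the init scripts below rely on; note that the node binary keeps its node/bin/ prefix inside the archive:

#List the package contents
tar -tzf package.tar.gz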

You also need a couple of scripts to extract the package on each task node:

1. For mapper_init.sh use the following script:

#!/bin/sh
#Extract the package
tar -xvzf package.tar.gz
#Execute the mapper with the included node binary
./node/bin/node mapper.js

2. For reducer_init.sh use the following script:

#!/bin/sh
#Extract the package only if it is not already there
if [ ! -f node/bin/node ]; then
  tar -xvzf package.tar.gz
fi
#Execute the reducer with the included node binary
./node/bin/node reducer.js

The only thing left to do is execute the streaming job passing the package and the two scripts with the command:

yarn jar path/to/hadoop-streaming.jar \
-files package.tar.gz,mapper_init.sh,reducer_init.sh \
-input inputFilesOrDir -output OutputDir \
-mapper 'sh mapper_init.sh' -reducer 'sh reducer_init.sh'

Using HDFS to distribute the environment

Now, let's put all the necessary files in HDFS and access them from there. The -files and -archives options can use hdfs://... addresses as well. We will also tell Node.js where to find its modules by using the -cmdenv option to set the NODE_PATH variable.

Note: the -archives option automatically unpacks the archives it ships to each task node, so when using it the tar step in the init scripts is no longer needed.
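Putting this together, a minimal sketch of such a job could look like the following, assuming the package has been uploaded to /tmp on HDFS; the #package fragment names the link to the unpacked archive, and the output directory is just a placeholder:

#Upload the package to HDFS
hdfs dfs -put package.tar.gz /tmp
#Run the job, letting -archives unpack the package on each task node
yarn jar path/to/hadoop-streaming.jar \
-archives hdfs:///tmp/package.tar.gz#package \
-cmdenv NODE_PATH=./package/node_modules \
-input /tmp/ulyses.txt -output /tmp/ulyses-hdfs \
-mapper 'package/node/bin/node package/mapper.js' \
-reducer 'package/node/bin/node package/reducer.js'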
