
Hadoop Word Count Program in Scala


You must have seen the Hadoop word count program in Java, Python, or C/C++, but probably not in Scala. So let's learn how to build a word count program in Scala.

Submitting a job written in Scala to Hadoop is not that easy, because Hadoop's MapReduce API is Java-based: it does not understand the functional aspects of Scala, so we have to program against the Java API from Scala.

To write a word count program in Scala, we need to follow these steps:

1. Create a Scala project with sbt, using the sbt version of your choice.
2. Add the hadoop-core dependency to build.sbt (linked here).
3. Create a Scala object, say WordCount, with a main method.
4. Inside that object, create a class, say Map, that extends MapReduceBase and mixes in the Mapper interface.
5. Provide a body for the map function.
6. Create another class, say Reduce, that extends MapReduceBase and mixes in the Reducer interface.
7. Provide a body for the reduce function.
8. Provide the necessary job configuration in the main method of the Scala object.
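The first two steps amount to a minimal build.sbt. This is only a sketch: the project name and the Scala and hadoop-core version numbers are assumptions, so substitute whatever matches your setup.

```scala
// build.sbt -- minimal sketch; the name and version numbers below are
// placeholders, pick the ones that match your environment and cluster.
name := "wordcount-scala"

version := "0.1"

scalaVersion := "2.11.8"

// hadoop-core provides the old org.apache.hadoop.mapred API used below
libraryDependencies += "org.apache.hadoop" % "hadoop-core" % "1.2.1"
```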

Here is an example word count program written in Scala:

import java.io.IOException
import java.util._
import org.apache.hadoop.fs.Path
import org.apache.hadoop.conf._
import org.apache.hadoop.io._
import org.apache.hadoop.mapred._
import org.apache.hadoop.util._

object WordCount {

  class Map extends MapReduceBase with Mapper[LongWritable, Text, Text, IntWritable] {
    private final val one = new IntWritable(1)
    private val word = new Text()

    @throws[IOException]
    def map(key: LongWritable, value: Text, output: OutputCollector[Text, IntWritable], reporter: Reporter) {
      val line: String = value.toString
      line.split(" ").foreach { token =>
        word.set(token)
        output.collect(word, one)
      }
    }
  }

  class Reduce extends MapReduceBase with Reducer[Text, IntWritable, Text, IntWritable] {
    @throws[IOException]
    def reduce(key: Text, values: Iterator[IntWritable], output: OutputCollector[Text, IntWritable], reporter: Reporter) {
      import scala.collection.JavaConversions._
      val sum = values.toList.reduce((valueOne, valueTwo) => new IntWritable(valueOne.get() + valueTwo.get()))
      output.collect(key, new IntWritable(sum.get()))
    }
  }

  @throws[Exception]
  def main(args: Array[String]) {
    val conf: JobConf = new JobConf(this.getClass)
    conf.setJobName("WordCountScala")
    conf.setOutputKeyClass(classOf[Text])
    conf.setOutputValueClass(classOf[IntWritable])
    conf.setMapperClass(classOf[Map])
    conf.setCombinerClass(classOf[Reduce])
    conf.setReducerClass(classOf[Reduce])
    conf.setInputFormat(classOf[TextInputFormat])
    conf.setOutputFormat(classOf[TextOutputFormat[Text, IntWritable]])
    FileInputFormat.setInputPaths(conf, new Path(args(0)))
    FileOutputFormat.setOutputPath(conf, new Path(args(1)))
    JobClient.runJob(conf)
  }
}
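The Map and Reduce classes above implement the classic split-then-sum pattern. As a sanity check, the same logic can be sketched in a few lines of plain Scala without Hadoop; WordCountLocal and its count method are names invented here for illustration, not part of the job above.

```scala
// Plain-Scala sketch of the same map/reduce logic:
// split each line into tokens (the "map" step),
// group identical tokens (the "shuffle" step),
// and sum the occurrences per token (the "reduce" step).
object WordCountLocal {
  def count(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(_.split(" "))                    // map: emit one token per word
      .filter(_.nonEmpty)                       // drop empty tokens from blank lines
      .groupBy(identity)                        // shuffle: group occurrences by word
      .map { case (w, occ) => (w, occ.size) }   // reduce: count per word
}
```

Running the real job over a file containing the line "a b a" should produce the same counts this sketch returns.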

So far we have written the program in Scala; now we need to submit this program as a job to Hadoop. To do that, we follow these steps:

1. Add the sbt-assembly plugin to project/plugins.sbt (linked here).
2. Open a terminal and change directory to the root of the project.
3. In the terminal, run: sbt clean compile assembly
   This command builds the jar under the target/scala-<version> folder of the project.
4. Create a directory in HDFS with the following command:
   $HADOOP_HOME/bin/hadoop fs -mkdir input_dir
5. Put some data into the newly created HDFS directory:
   $HADOOP_HOME/bin/hadoop fs -put sample.txt input_dir
6. Now submit the job to Hadoop:
   $HADOOP_HOME/bin/hadoop jar jar_name.jar input_dir output_dir

The jar in the last command is the same one that sbt-assembly built under the target/scala-<version> directory of the project.

You can see the output with the following command:

$HADOOP_HOME/bin/hadoop fs -ls output_dir/

If you encounter any problem building the jar with the sbt clean compile assembly command, you may need to add a mergeStrategy to build.sbt; you can find related information here.
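These build failures are usually deduplication errors: Hadoop's transitive dependencies ship jars with overlapping files (typically under META-INF), and sbt-assembly refuses to pick one silently. A commonly used merge strategy looks like the sketch below; it uses the 0.14.x-style sbt-assembly keys, which are an assumption here, so adapt it to the plugin version you installed.

```scala
// build.sbt -- sketch of a merge strategy for sbt-assembly (0.14.x-style keys):
// discard conflicting META-INF entries, keep the first copy of anything else.
assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case _                             => MergeStrategy.first
}
```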

Resources:

You can find the hadoop-core dependency here. You can find the sbt-assembly plugin here. You can find this project here.