Spark Cassandra Connector On Spark-Shell

[Published 2017 | Author: 红领巾]
Hi all! In this blog we will see how to run Spark code against Cassandra from the spark shell. This is very handy for testing and learning, when we want to execute code interactively rather than through an IDE.

Here we will use Spark version 1.6.2, which you can download from the Apache Spark downloads page.

You will also need the matching Spark Cassandra connector jar:

spark-cassandra-connector_2.10-1.6.2.jar

The connector jar is published by DataStax and can be downloaded from Maven Central or Spark Packages.

So let's begin:

Step 1) Create a test table in your Cassandra cluster (I am using Cassandra version 3.0.10).

CREATE TABLE test_smack.movies_by_actor (
    actor text,
    release_year int,
    movie_id uuid,
    genres set&lt;text&gt;,
    rating float,
    title text,
    PRIMARY KEY (actor, release_year, movie_id)
) WITH CLUSTERING ORDER BY (release_year DESC, movie_id ASC);

Insert some test data:

INSERT INTO movies_by_actor (actor, release_year, movie_id, genres, rating, title) VALUES ('Johnny Depp', 2010, now(), {'Drama', 'Thriller'}, 7.5, 'The Tourist');

INSERT INTO movies_by_actor (actor, release_year, movie_id, genres, rating, title) VALUES ('Johnny Depp', 2011, now(), {'Animated', 'Comedy'}, 8.5, 'Rango');

INSERT INTO movies_by_actor (actor, release_year, movie_id, genres, rating, title) VALUES ('Johnny Depp', 2012, now(), {'Crime', 'Dark Comedy'}, 6.5, 'Dark Shadows');

INSERT INTO movies_by_actor (actor, release_year, movie_id, genres, rating, title) VALUES ('Johnny Depp', 2013, now(), {'Adventurous', 'Thriller'}, 9.5, 'Transcendence');

INSERT INTO movies_by_actor (actor, release_year, movie_id, genres, rating, title) VALUES ('Johnny Depp', 2013, now(), {'Adventurous', 'Thriller'}, 6.5, 'The Lone Ranger');

INSERT INTO movies_by_actor (actor, release_year, movie_id, genres, title) VALUES ('Johnny Depp', 2014, now(), {'thriller'}, 'Black Mass');
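To confirm the rows landed before moving on, you can run a quick SELECT in cqlsh (a sanity check, assuming the keyspace and table from Step 1):

```sql
-- List the inserted movies for the partition we will query from Spark
SELECT release_year, title, rating
FROM test_smack.movies_by_actor
WHERE actor = 'Johnny Depp';
```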

Step 2) Go to the directory where you extracted the Spark binaries (e.g. ~/Desktop/spark-1.6.2-bin-hadoop2.6/bin) and start the spark shell, including the connector jar we downloaded above:

$ sudo ./spark-shell --jars /PATH_TO_YOUR_CASSANDRA_CONNECTOR/spark-cassandra-connector_2.10-1.6.2.jar
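Alternatively, if the machine has network access, spark-shell can resolve the connector for you via --packages instead of a local jar (a sketch; the coordinate below is the Spark Packages release matching Spark 1.6.2 and Scala 2.10):

```shell
# Resolves the connector and its dependencies from Spark Packages at startup
./spark-shell --packages datastax:spark-cassandra-connector:1.6.2-s_2.10
```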

Step 3) When you start the spark shell, Spark creates a default SparkContext named 'sc'. We stop it and create a new one configured to connect to Cassandra:

sc.stop

import com.datastax.spark.connector._
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

// localhost is the address of a Cassandra contact point (node)
val conf = new SparkConf(true).set("spark.cassandra.connection.host", "localhost")

val sc = new SparkContext(conf)
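A quick way to verify the connection before writing real queries is to count the rows in the table from Step 1 (a sketch; the count will fail fast if Cassandra is unreachable or the keyspace name differs):

```scala
// Counts all rows in test_smack.movies_by_actor via the connector
val rows = sc.cassandraTable("test_smack", "movies_by_actor").count
println(s"rows in movies_by_actor: $rows")
```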

Step 4) That's all; now you can query your DB and play with the results. For example, here we count the number of Johnny Depp movies per year:

sc.cassandraTable("test_smack", "movies_by_actor").select("release_year").as((year: Int) => (year, 1)).reduceByKey(_ + _).collect.foreach(println)

Output :

(2010,1)

(2012,1)

(2013,2)

(2011,1)
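The connector can also write results back to Cassandra. As a sketch, the per-year counts above could be persisted with saveToCassandra (the movies_per_year table here is hypothetical and would need to be created first, e.g. with columns year int PRIMARY KEY, count int):

```scala
// Build the per-year counts as an RDD of (year, count) pairs
val counts = sc.cassandraTable("test_smack", "movies_by_actor")
  .select("release_year")
  .as((year: Int) => (year, 1))
  .reduceByKey(_ + _)

// Write each pair into the (hypothetical) test_smack.movies_per_year table;
// SomeColumns maps the tuple elements onto the named columns
counts.saveToCassandra("test_smack", "movies_per_year", SomeColumns("year", "count"))
```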

