Dear Vinay,

Hope you are doing well.

Thank you for raising a request; we are glad to assist you.

Please find the code below. First, the build definition (build.sbt):

name := "SparkCassandra"
version := "1.0"
scalaVersion := "2.10.4"
libraryDependencies += "org.apache.spark" % "spark-core_2.10" % "1.5.2"
libraryDependencies += "com.datastax.spark" % "spark-cassandra-connector_2.10" % "1.5.2-M1"
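
As a side note, if you later launch this application with spark-submit instead of sbt run, a common adjustment (sketched here as an assumption about your deployment, not something required for the local run below) is to mark spark-core as "provided", since the cluster already supplies the Spark classes:

```scala
// build.sbt (sketch) -- "provided" keeps spark-core off your application jar
// when the cluster classpath supplies it. Leave the Cassandra connector at
// the default (compile) scope so it ships with your jar.
libraryDependencies += "org.apache.spark" % "spark-core_2.10" % "1.5.2" % "provided"
```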


import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._

object SparkCassandra extends App {

  val conf = new SparkConf()
    .setMaster("local[*]")
    .setAppName("SparkCassandra")
    // set this to the address of your Cassandra node
    .set("spark.cassandra.connection.host", "192.168.30.154")
  val sc = new SparkContext(conf)
  // load the table from the keyspace as an RDD of CassandraRow
  val rdd = sc.cassandraTable("demo", "users")
  // collect() would pull the entire table to the driver (here, our machine),
  // which may crash it on large tables. take(100) fetches only the first
  // 100 rows (our demo table is small, so this is plenty).
  val first100 = rdd.take(100)
  first100.foreach(println)

  sc.stop()
}
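
In case it is useful later, here is a short follow-on sketch showing how you might read individual columns from the returned CassandraRow objects and write an RDD back to Cassandra with the same connector. The column names (user_id, name) and the sample data are assumptions for illustration; replace them with the actual columns of your demo.users table.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._

object SparkCassandraColumns extends App {

  val conf = new SparkConf()
    .setMaster("local[*]")
    .setAppName("SparkCassandraColumns")
    .set("spark.cassandra.connection.host", "192.168.30.154")
  val sc = new SparkContext(conf)

  // Read single columns instead of whole rows; "user_id" and "name"
  // are hypothetical column names -- adjust them to your schema.
  val names = sc.cassandraTable("demo", "users")
    .map(row => (row.getInt("user_id"), row.getString("name")))
  names.take(10).foreach(println)

  // Write a small RDD back to the same table; the tuple fields must
  // line up with the columns listed in SomeColumns.
  val updates = sc.parallelize(Seq((101, "new_user")))
  updates.saveToCassandra("demo", "users", SomeColumns("user_id", "name"))

  sc.stop()
}
```

Note that saveToCassandra performs an upsert: rows with an existing primary key are overwritten.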


Let us know if you have any further concerns about this.

We will be glad to assist you.


Please note: if you are not happy with the response on this ticket, please escalate it to escalations@edureka.in.
We assure you that we will get back to you within 24 hours.
 
213431