Hope you are doing well.

If you make any changes in Eclipse, they won't change the code in the editor; but if you change the code in the editor, the changes will be picked up by Eclipse.

If you want to make any changes to the code, you have to do it from the terminal.

Command for editing the Scala code: sudo gedit src/main/scala/code.scala

Once you make the changes, they will be applied to the code in Eclipse.

Let us know if you have any further concerns; we will be glad to assist you.

Please note: if you are not happy with the response on this ticket, please escalate it to escalations@edureka.in.
We assure you that we will get back to you within 24 hours.
 


Regards,
Kamini at Edureka
edureka! Support Team
On Tue, 12 Apr at 8:04 AM , Ranganathan <tellranga@gmail.com> wrote:
Your Spark example worked. Thanks a lot.
But I do have some observations:

Observation 1)
I got one exception initially:
akka.actor.ActorNotFound: Actor not found for: ActorSelection[Anchor(akka.tcp://sparkMaster@localhost.localdomain:7077/), Path(/user/Master)]

So I changed the code from
val conf = new SparkConf().setAppName("simple application").setMaster("spark://localhost.localdomain:7077")
to
val conf = new SparkConf().setAppName("simple application").setMaster("local")

Observation 2)
For any further changes to hello.scala: if I make them in Eclipse, the file does not get saved.
So I gave chmod 777 to this file.

Is there any justification for observations 1 and 2?



On 10 April 2016 at 05:00, Big Data and Hadoop at Edureka <hadoop@edureka.co> wrote:
Hello Ranganathan,

Hope you are doing well.

Please check the attachment for build.sbt.
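The attachment itself is not reproduced in this thread, but a minimal build.sbt along these lines should match it, assuming Spark 1.5.2 on Scala 2.10 (both inferred from the spark-submit command and the jar name hello_2.10-1.0.jar used later in this message):

```scala
// Minimal build.sbt sketch -- versions are assumptions inferred from the
// spark-submit command (/spark-1.5.2/...) and the jar name (hello_2.10-1.0.jar).
name := "hello"

version := "1.0"

scalaVersion := "2.10.6"

// spark-core provides SparkContext and SparkConf used in the example below
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.5.2"
```

Since the jar is launched with spark-submit (which supplies Spark on the classpath), the dependency could also be marked "provided"; for a plain sbt package build either form works.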

For spark submit:

Code:

import org.apache.spark.SparkContext
import org.apache.spark.SparkConf

object hello extends App {
  val logfile = "file:///home/edureka/Desktop/file2"
  val conf = new SparkConf().setAppName("simple application").setMaster("spark://localhost.localdomain:7077")
  val sc = new SparkContext(conf)
  val logdata = sc.textFile(logfile)
  // Count the lines containing "a" and the lines containing "b"
  val na = logdata.filter(line => line.contains("a")).count()
  val nb = logdata.filter(line => line.contains("b")).count()
  println(na)
  println(nb)
  sc.stop()
}

After running sbt package, you can run spark-submit.

Command:
/spark-1.5.2/bin/spark-submit --class hello --master spark://localhost.localdomain:7077 --deploy-mode client --executor-cores 1 myProject/yaaa/target/scala-2.10/hello_2.10-1.0.jar
Let us know if you face any further issues; we will be glad to assist you.

Regards,
Kamini at Edureka
edureka! Support Team
On Sat, 9 Apr at 5:40 PM , Ranganathan <tellranga@gmail.com> wrote:
Hi,
As discussed can you please send the following,
1)dependency file for spark example and execute from eclipse
2)instructions on how to execute spark example from terminal

On 8 April 2016 at 07:31, Big Data and Hadoop at Edureka <hadoop@edureka.co> wrote:
Hello Ranganathan,

Hope you are doing great.

You can contact us anytime on the below numbers.
Please let us know if you have any other concerns.

 


Regards,
Kamini at Edureka
edureka! Support Team
--
Regards,
R.Ranganathan.
tellranga@gmail.com
(+1)9018330991
