Dear Rehan,

Hope you are doing well.

As the error message suggests — Input path does not exist: hdfs://localhost:8020/ — the input path you passed to the job does not exist in HDFS. In your command you passed "/" as the input path; please make sure the input file is present in HDFS at the path you specify before running the job.
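As a sketch of how to verify and fix this from the command line (the paths /word_in and input.txt below are example names, not taken from your setup — substitute your own):

```shell
# List what is currently in HDFS to confirm whether your input exists
hdfs dfs -ls /

# Create an input directory in HDFS (example path)
hdfs dfs -mkdir -p /word_in

# Copy a local input file into that HDFS directory (example file name)
hdfs dfs -put input.txt /word_in

# Re-run the job, pointing the first argument at the input directory;
# the output directory must not already exist
hadoop jar wordcount.jar /word_in /word_out
```

After the job finishes, you can inspect the result with `hdfs dfs -cat /word_out/part-r-00000`.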

Hope this resolves your query.
If you have any further questions, please let us know.
I am trying to run the following WordCount program to test Hadoop and am getting the error below.
Could you please help me identify what is causing it and how to fix it?
[edureka@localhost spark-vs-hadoop 2]$ hadoop jar wordcount.jar / /word_out
16/04/14 18:56:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/04/14 18:56:37 INFO client.RMProxy: Connecting to ResourceManager at /
16/04/14 18:56:37 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
16/04/14 18:56:38 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/edureka/.staging/job_1460560441316_0005
16/04/14 18:56:38 ERROR security.UserGroupInformation: PriviledgedActionException as:edureka (auth:SIMPLE) cause:org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://localhost:8020/
Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://localhost:8020/
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(
at org.apache.hadoop.mapreduce.Job$
at org.apache.hadoop.mapreduce.Job$
at Method)
at org.apache.hadoop.mapreduce.Job.submit(
at org.apache.hadoop.mapreduce.Job.waitForCompletion(
at in.edureka.mapreduce.WordCount.main(
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(
at sun.reflect.DelegatingMethodAccessorImpl.invoke(
at java.lang.reflect.Method.invoke(
at org.apache.hadoop.util.RunJar.main(