Dear Venkat,

Hope you are doing great.

Please have a look at the details below for your reference:

The AvroSerde allows users to read or write Avro data as Hive tables. Key features of the AvroSerde:

- Infers the schema of the Hive table from the Avro schema. Starting in Hive 0.14, the Avro schema can instead be inferred from the Hive table schema (see the STORED AS AVRO sketch after the example below).

- Reads all Avro files within a table against a specified schema, taking advantage of Avro's backwards compatibility.

- Supports arbitrarily nested schemas.

- Translates all Avro data types into equivalent Hive types. Most types map exactly, but some Avro types don't exist in Hive and are automatically converted by the AvroSerde.
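
For example, the following creates a table backed by the AvroSerde using an inline schema (avro.schema.literal):
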
hive> create table NEW_TABLE
    > row format serde 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
    > stored as inputformat 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
    > outputformat 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
    > tblproperties ('avro.schema.literal'='{
    >    "name": "my_record",
    >    "type": "record",
    >    "fields": [
    >       {"name":"boolean1", "type":"boolean"},
    >       {"name":"int1", "type":"int"},
    >       {"name":"long1", "type":"long"},
    >       {"name":"float1", "type":"float"},
    >       {"name":"double1", "type":"double"},
    >       {"name":"string1", "type":"string"},
    >       {"name": "nullable_int", "type": ["int", "null"]]}');
OK
Time taken: 6.372 seconds
hive>
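
Starting with Hive 0.14 you can skip the schema literal entirely and let the Avro schema be derived from the Hive table definition. A minimal sketch, assuming Hive 0.14+ (the table and column names are just placeholders):

CREATE TABLE new_table_avro (
  boolean1 BOOLEAN,
  int1     INT,
  string1  STRING)
STORED AS AVRO;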


ORC File in Hive:

LOAD DATA just copies the files into the table's storage directory; Hive does not do any transformation while loading data into tables.
So, in this case, the input file /home/user/test_details.txt would need to already be in ORC format if you were loading it directly into an ORC table.

A possible workaround is to create a temporary table STORED AS TEXTFILE, LOAD DATA into it, and then copy the data from that table into the ORC table.

Here is an example:

CREATE TABLE test_details_txt( visit_id INT, store_id SMALLINT) STORED AS TEXTFILE;
CREATE TABLE test_details_orc( visit_id INT, store_id SMALLINT) STORED AS ORC;

-- Load into the text table
LOAD DATA LOCAL INPATH '/home/user/test_details.txt' INTO TABLE test_details_txt;

-- Copy to the ORC table
INSERT INTO TABLE test_details_orc SELECT * FROM test_details_txt;
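
Once the copy finishes, the text staging table is no longer needed and can be dropped:

DROP TABLE test_details_txt;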

Parquet File in Hive:

Parquet is installed automatically along with the relevant Hadoop components (such as Hive), and the necessary libraries are automatically placed in the classpath for all of them. On Cloudera (CDH) installations, copies of the libraries are in /usr/lib/parquet or inside the parcels in /lib/parquet.

The Parquet file format incorporates several features that make it highly suited to data warehouse-style operations:
- Columnar storage layout. A query can examine and perform calculations on all values for a column while reading only a small fraction of the data from a data file or table.

- Large file size. The layout of Parquet data files is optimized for queries that process large volumes of data, with individual files in the multi-megabyte or even gigabyte range.
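
For completeness, a minimal sketch of a Parquet-backed table (Hive 0.13 and later support STORED AS PARQUET natively; the table, column, and source-table names below are just placeholders):

CREATE TABLE test_details_parquet( visit_id INT, store_id SMALLINT) STORED AS PARQUET;

-- Populate it with a normal INSERT ... SELECT, just as in the ORC example
INSERT INTO TABLE test_details_parquet SELECT * FROM some_existing_table;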

LZO Compression in Hive:

The hadoop-lzo package provides fast compression while still allowing the data to be queried through Hive.
If you are only interested in compression, and have Hadoop and Hive configured appropriately, you can even mix compressed and uncompressed data in separate partitions of the same Hive table (see the partition sketch after the table definition below).

A normal table definition will work:
CREATE EXTERNAL TABLE edu(
columnA string,
columnB string )
PARTITIONED BY (date string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY "\t"
LOCATION '/path/to/hive/tables/edu';
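
As a rough sketch of the mixed-partition idea (the dates and directory paths are made up; one partition's directory holds plain text files, the other holds .lzo files):

ALTER TABLE edu ADD PARTITION (date = '2015-01-01')
LOCATION '/path/to/hive/tables/edu/2015-01-01';   -- uncompressed text files

ALTER TABLE edu ADD PARTITION (date = '2015-01-02')
LOCATION '/path/to/hive/tables/edu/2015-01-02';   -- LZO-compressed files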

One big advantage of LZO, though, is that it can be made splittable for map/reduce jobs. This is done by creating an index of the LZO file with the LzoIndexer tool of the hadoop-lzo project.
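
A rough sketch of running the indexer (the jar location varies by installation, and the file path is made up):

hadoop jar /usr/lib/hadoop/lib/hadoop-lzo.jar com.hadoop.compression.lzo.LzoIndexer \
    /path/to/hive/tables/edu/2015-01-02/data.lzo

This writes a data.lzo.index file next to the original, which is what allows map/reduce jobs to split the file.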

To actually use the index, you will need to use a special input format for your Hive table:

CREATE EXTERNAL TABLE edu(
columnA string,
columnB string )
PARTITIONED BY (date string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY "\t"
STORED AS INPUTFORMAT "com.hadoop.mapred.DeprecatedLzoTextInputFormat"
OUTPUTFORMAT "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"
LOCATION '/path/to/hive/tables/edu';

Hope this helps you out.