Loading data into Hive Table

We can load data into a Hive table in three ways. Two of them are Hive DML operations, and the third is using an HDFS command. If the data lives in an RDBMS such as Oracle, MySQL, DB2 or SQL Server, we can import it using the Sqoop tool, but that part is not discussed here.

To practice the commands below, create a table called employee with the following data:

eno,ename,salary,dno

11,Balu,100000,15
12,Radha,120000,25
13,Nitya,150000,15
14,Sai Nirupam,120000,35
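
If you do not already have this table, a minimal sketch of the DDL could look like below (the column types are assumptions based on the sample data; the table can then be populated using any of the methods described in this post):

create table employee (eno int, ename string, salary int, dno int)
row format delimited fields terminated by ',';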



1. Using Insert Command

We can load data into a table using the INSERT command in two ways: one is using the VALUES clause and the other is using queries.

     1.1 Using Values
Using the VALUES clause, we can append more rows of data into an existing table.
For example, we can add an extra row 15,Bala,150000,35 to the employee table above, like below:

Insert into table employee values (15,'Bala',150000,35);

After this, you can run a SELECT command to see the newly added row.
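
For example, a quick check could be:

select * from employee;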

     1.2 Using Queries



You can also load query output into a table. For example, assume you have an emp table; from it, you can load data into the employee table like below:

Insert into table employee Select * from emp where dno=45;

After this too, you can fire a SELECT query to see the loaded rows.

2. Using Load


You can load data into a Hive table using the LOAD statement in two ways:
one is from the local file system to the Hive table, and the other is from HDFS to the Hive table.

  2.1 From LFS to Hive Table

Assume we have data like below in a local file called /data/empnew.csv:
15,Bala,150000,35
Now we can use the LOAD statement like below:

Load data local inpath '/data/empnew.csv' into table emp;

2.2 From HDFS to Hive Table

If we do not use the LOCAL keyword, the path is assumed to be an HDFS path.

Load data inpath '/data/empnew.csv' into table emp;
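
Note that for this variant the file /data/empnew.csv must already exist in HDFS. If it is still only on the local file system, you could first copy it up with a plain HDFS command (a small sketch, assuming you want the same path in HDFS):

hadoop fs -put /data/empnew.csv /data/empnew.csv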

After these statements, you can fire a SELECT query to see the loaded rows in the table.

3. Using HDFS command


Assume you have data in a local file; you can simply upload it into the table's directory using HDFS commands.

Run the DESCRIBE command to get the location of the table, like below:

describe formatted employee;

It will display the location of the table. Assume you got the location as /data/employee; you can upload data into the table by using one of the commands below:

hadoop fs -put /path/to/localfile /data/employee

hadoop fs -copyFromLocal /path/to/localfile /data/employee

hadoop fs -moveFromLocal /path/to/localfile /data/employee








Managed table and External table in Hive

There are two types of tables in Hive: one is the managed table and the second is the external table.
The difference is, when you drop a table, if it is a managed table Hive deletes both data and metadata; if it is an external table Hive deletes only the metadata.
Now we will learn a few things about these two.

1. Table Creation

By default, a table is a managed table.
If you want to create an external table, you use the EXTERNAL keyword.

For example, assume you have an emp.csv file under the directory /data/employee.

To create a managed table, we use the normal syntax like below:

create table managedemp(col1 datatype,col2 datatype, ....) row format delimited fields terminated by 'delimiter character'
location '/data/employee'

But to create an external table, we use the EXTERNAL keyword like below:

create external table externalemp(col1 datatype,col2 datatype, ....) row format delimited fields terminated by 'delimiter character'
location '/data/employee'
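
As a concrete sketch for the emp.csv data assumed above (the column names and types are assumptions, reusing the employee schema from earlier in this post):

create external table externalemp (eno int, ename string, salary int, dno int)
row format delimited fields terminated by ','
location '/data/employee';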

2. Differentiation

How do you check whether an existing table is a managed or an external table?

To check that, we use the DESCRIBE command like below:

describe formatted tablename;

It displays the complete metadata of the table. You will see one row called Table Type, which will display either MANAGED_TABLE or EXTERNAL_TABLE.

For example, if it is a managed table, you will see

Table Type:             MANAGED_TABLE

If it is an external table, you will see

Table Type:             EXTERNAL_TABLE


3. Drop

As I already said, if you drop a managed table, both data and metadata will be deleted.
If you drop an external table, only the metadata is deleted; an external table is a way to protect data against accidental drop commands.

You can check this by the process below.

Use the describe formatted tablename command; it gives the location details like below:

Location: hdfs://namenodeip:portno/data/employee

After dropping the table, if you run the command
hadoop fs -ls hdfs://namenodeip:portno/data/employee
you should get "No such file or directory" in the case of a managed table,
or the contents of that directory in the case of an external table.
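
As a minimal sketch of this check for the managed table created earlier (using the location from the creation examples above):

describe formatted managedemp;
drop table managedemp;
hadoop fs -ls /data/employee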

One last note: try to use external tables in your projects, and once you drop a table, do not forget to remove its directory if you do not need the data anymore.

Word Count In Hive


In this post I am going to discuss how to write a word count program in Hive.

Assume we have data in our table like below

This is a Hadoop Post
and Hadoop is a big data technology

and we want to generate word counts like below:

a 2
and 1
big 1
data 1
Hadoop 2
is 2
Post 1
technology 1
This 1

Now we will learn how to write a program for the same.
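
For these examples, assume the sentences are stored in a Hive table with a single string column. A minimal sketch of creating and loading such a table (the table name texttable and the column name sentence match the queries below; the file path is an assumption):

create table texttable (sentence string);
load data local inpath '/data/sentences.txt' into table texttable;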


1. Convert sentences into words

The data we have is in sentences; first we have to convert it into words, using space as the delimiter. For this we use the split function of Hive.

split(sentence, ' ')


2. Convert the column into rows

Now we have an array of strings like this:
[This, is, a, Hadoop, Post]
but we have to convert it into multiple rows like below:

This
is
a
Hadoop
Post

That is, we have to convert every line of data into multiple rows. For this Hive has a function called explode, which is also called a table-generating function.

SELECT explode(split(sentence, ' ')) AS word FROM texttable

and we use the above output as an intermediate table by wrapping the query as a subquery with an alias:

(SELECT explode(split(sentence, ' ')) AS word FROM texttable) tempTable

After the second step, you should get output like below:

a
a
and
big
data
Hadoop
Hadoop
is
is
Post
technology
This


3. Apply group by


After the second step, it is straightforward: we have to apply GROUP BY to count word occurrences.

select word, count(1) as count from
(SELECT explode(split(sentence, ' ')) AS word FROM texttable) tempTable
group by word;
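
If you also want the output in a deterministic order, you can add an ORDER BY clause (note that Hive sorts strings case-sensitively, so the exact order may differ slightly from the listing above):

select word, count(1) as count from
(SELECT explode(split(sentence, ' ')) AS word FROM texttable) tempTable
group by word
order by word;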

Cascading for your next hadoop project



Cascading is a platform for developing data applications on Hadoop. It can process all types of data: structured, unstructured and semi-structured. It can be used for most business analytics requirements. It is written in Java on top of MapReduce, and it also has variants supporting Python, Ruby, Clojure and Scala.
In this article, I would like to share a few benefits of using Cascading in your big data projects.




1. No need to think in terms of keys and values


The biggest problem with using MapReduce is having to think in terms of keys and values on top of the business logic.
MapReduce is a very low-level API; I feel that, most of the time, developing data applications with raw MapReduce is like studying mechanical engineering just to learn to drive. That is the reason MapReduce-based tools like Hive and Pig are widely adopted, and for the same reason Cascading can be used: you need not think in terms of the key-value programming paradigm and can focus on business logic.



2. Pure Java


When we use MapReduce tools like Hive or Pig, building complex business logic again means depending on UDFs, which require a programming language such as Java or Python. So rather than using Hive plus Java, or Pig plus Java, for your project, you can depend on a single tool like Cascading and write your entire code in one programming language, Java.


3. Rapid application development


In MapReduce, you write a separate program for the mapper, a separate program for the reducer, and a driver program, so you end up with more lines of code.
In Cascading, you write only the business logic, so you have far fewer lines of code. Since it also has built-in functions, you can develop data applications rapidly. MapReduce has no concept of built-in analytical functions, so you end up writing a lot of code.



4. Customizable


Though it is built on top of MapReduce, it allows you to customize the API as per your requirements.


5. Easy integration


We have many technologies in the big data space, like Hadoop, Hive, Sqoop, Oozie, Cassandra, HBase, Solr, Elasticsearch, Teradata, Splunk, and RDBMS systems like Oracle, MySQL and Postgres. Fortunately, Cascading makes integration with all of them easy.




6. Proven in production


It is being used by many companies including Twitter.



7. Very good documentation


Cascading provides good documentation in the form of tutorials and a user guide.
You can easily start learning it; it might not take more than a week before you can start your own application.





8. Testable code



Last but not least, with Hive or Pig you may not be able to test your code easily, but Cascading is well suited to test-driven development.
You can confidently deliver quality applications using Cascading.

With all these benefits, I think you can easily consider Cascading for your next Hadoop project.


Learning Redis on Windows

Redis (REmote DIctionary Server) is a key-value store and also a kind of NoSQL database. There are many key-value data stores, and Redis is one of the most widely used. It is also known as a data structure server because it supports many data structures like sets, lists, stacks and queues. I explored it to implement a real-time analytics project along with Apache Storm, and you can find other use cases of Redis here. In this tutorial I would like to share how to learn and practice Redis on Windows.


1. Download Redis from here and install it by running the .exe file.


Remember, this is not an official version; it may only be useful for practicing on Windows.


2. Start the Redis server



C:\Program Files\Redis>redis-server.exe conf/redis.conf


3. Start redis-cli



C:\Program Files\Redis>redis-cli.exe
redis 127.0.0.1:6379>


4. Practice Redis like below


redis 127.0.0.1:6379> set name "balu"
OK
redis 127.0.0.1:6379> get name
"balu"
redis 127.0.0.1:6379>
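
Since Redis is a data structure server, you can also try a few of the structure commands, for example on lists and sets (the key names here are made up for illustration):

redis 127.0.0.1:6379> lpush mylist "a"
(integer) 1
redis 127.0.0.1:6379> lpush mylist "b"
(integer) 2
redis 127.0.0.1:6379> lrange mylist 0 -1
1) "b"
2) "a"
redis 127.0.0.1:6379> sadd myset "x" "y"
(integer) 2
redis 127.0.0.1:6379> smembers myset
1) "x"
2) "y"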


For more Redis commands, visit the Redis website.
You can also practice Redis here without installing it.
Enjoy practicing Redis on Windows.