
Hadoop ecosystem books to read

Even though Hadoop is a ten-year-old technology, you will still find relatively few resources to learn it. There are different reasons for that. One of them is that Hadoop is a rapidly changing technology and many people may not have tried all of its features; at times it has also not been ready for enterprise use cases.

In this article, I would like to put a list of Hadoop ecosystem books in one place.

Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die




This is a very good book on predictive analytics for beginners. It helps you build an understanding of predictive analytics.

Hadoop: The Definitive Guide:




The first and foremost book everybody should read on Hadoop is Hadoop: The Definitive Guide. It covers not only HDFS and MapReduce but also MapReduce abstractions like Cascading, Hive and Pig. Apart from that, it covers both the development and administration aspects of the technology, and the latest edition covers the newest features, including Spark. It also covers the certification syllabus for both Hortonworks and Cloudera. The book is well organised, with quality content and examples.
However, if you want a complete understanding of a specific tool, you may have to check another book on that tool. This book covers all features of HDFS and MapReduce but only the core features of the other ecosystem tools.


Hadoop in Practice:


This book covers in-depth topics of HDFS and MapReduce with very good coding examples.
Apart from HDFS and MapReduce, it also covers SQL tools like Hive, Impala and Spark SQL.


Hadoop Operations:



This is a very good book on administration operations. It covers installation and configuration of Hadoop daemons. Operating system and network details are also covered as part of the cluster planning topics. It is a small book but covers quality content on administration.
For administration, you may want to refer to this book along with Hadoop: The Definitive Guide.


Ecosystem tools:



This is a very good book on Apache Hive. It covers almost all topics of Hive. The best part is that it explains even the most difficult features of Hive in an understandable way. If you want to master UDFs and UDAFs, you can depend on it.


This small book covers Apache Pig. The author has very good experience with Apache Pig. However, the editorial work is not done properly; there is scope for improvement in this book.


Cascading is one of the most useful tools in the Hadoop ecosystem, and it has very good documentation on its home page. To learn more about practical applications and the different analytical capabilities of the Cascading framework, this book is very useful.


If you want to know why and where an RDBMS is not relevant in big data applications, and how HBase addresses the problems of big data, this book covers it well. It is useful for both development and administration of HBase.

Below are other books available on utility tools like Sqoop, Oozie and Flume. I have not read these books, and we do not have many alternatives for them as of now.

Apache Sqoop Cookbook :

Apache Oozie: The Workflow Scheduler for Hadoop

Using Flume: Flexible, Scalable, and Reliable Data Streaming

Security Books :

The following are some books on Hadoop security.


Hadoop Security by Ben Spivey

This book gives a good theoretical explanation of Hadoop security.

Kerberos is a very important service for network security in the Hadoop ecosystem.

The following is one of the best books for learning Kerberos.

Kerberos: The Definitive Guide






Prerequisites for learning Apache Hadoop

Apache Hadoop is a framework that is used for processing large data sets on commodity hardware.
It has two core modules, HDFS and MapReduce. HDFS is used for data storage and MapReduce is used for data processing. Hadoop has become the de facto standard for processing large data sets.
As it is widely used in companies nowadays, everybody is trying to learn Apache Hadoop.
In this article, I will discuss the prerequisites for learning Hadoop.

The Hadoop ecosystem has many tools and is growing fast day by day. The main components are HDFS and MapReduce, and both are written in Java. Many technologies are built on top of MapReduce, for example Apache Hive, Apache Pig, Cascading and Crunch, and these are also developed in Java. Apache HBase is a NoSQL database. Apart from these, we also have smaller tools like Apache Sqoop and Apache Oozie; these too are written in Java.

The following skill set is required for learning Apache Hadoop.

Programming Language:


Most of the technologies in the Hadoop ecosystem are written in Java. Hadoop also supports several other languages: we can use AWK and sed scripts through Hadoop Streaming, C/C++ through Hadoop Pipes, and Python for data processing, again via Streaming.
Java is the most widely used language in Hadoop, Python is also used often, and Scala has gained ground since the success of Spark.
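For example, a map function written in Java for the classic word count looks roughly like the sketch below. This is only a minimal illustration; the class name and the tokenising logic are my own choices, not something prescribed by Hadoop.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        // The input key is the byte offset of the line; the value is the line itself.
        for (String token : line.toString().split("\\s+")) {
            if (!token.isEmpty()) {
                word.set(token);
                context.write(word, ONE);   // emit (word, 1) pairs for the reducer
            }
        }
    }
}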


SQL:


The Apache Hive query language is almost the same as ANSI SQL. Apache Pig has many operations similar to SQL; for example, ORDER BY, GROUP BY and joins are also available in Apache Pig. Similar operations are available in Cascading as well, though there they are Java classes. HBase too has some commands that resemble SQL commands. Not only Hadoop ecosystem tools but also many other big data tools provide a SQL interface so that people can learn them easily; Cassandra's query language (CQL), for instance, is very close to SQL.
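As a rough illustration of how close HiveQL is to SQL, the sketch below runs a HiveQL query from Java over JDBC. The host name, port, database, table and column names are placeholder assumptions; substitute the details of your own HiveServer2 instance.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveSqlExample {
    public static void main(String[] args) throws Exception {
        // HiveQL reads almost exactly like ANSI SQL.
        String query = "SELECT customer_id, COUNT(*) AS cnt "
                     + "FROM orders GROUP BY customer_id ORDER BY cnt DESC";
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://hive-host:10000/default", "hive-user", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(query)) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
            }
        }
    }
}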


Operating System:


You need to have good operating system skills. Most of the time, Unix-based operating systems are used in production, so if you know any Unix-based OS, your day-to-day life will become easier. If you also know shell scripting, you can achieve good productivity.

Others:


Apache Sqoop has simple commands, so one can learn it easily. Apache Oozie applications are written using XML files. Almost every technology comes with a REST API, and some REST APIs return JSON output. As all of these tools are built for parallel computing, it is better to have an understanding of different parallel computing technologies. Last but not least, one needs good debugging and troubleshooting skills to resolve day-to-day issues; otherwise you may spend several days on a single problem.
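To make the REST and JSON point concrete, the sketch below calls the WebHDFS LISTSTATUS operation from Java and prints the JSON it returns. The NameNode host name, port and directory path are placeholder assumptions for your own cluster.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class WebHdfsListing {
    public static void main(String[] args) throws Exception {
        // WebHDFS exposes HDFS operations over HTTP; LISTSTATUS returns JSON.
        URL url = new URL("http://namenode-host:50070/webhdfs/v1/user/hadoop?op=LISTSTATUS");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);   // JSON describing the directory contents
            }
        }
    }
}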
Feel free to contact me if you have any other questions.




Introduction to Apache Knox

As you already know, the Hadoop ecosystem has many tools, such as HDFS, Hive, Oozie and Falcon. All these tools provide a REST API so that other tools can communicate with them, and every tool has a hostname and port number as part of its REST API URL. From a security point of view, it is not good practice to expose internal hostnames and port numbers, because somebody might use them to attack the cluster.


To address this problem, we have a security tool called Apache Knox. Apache Knox is a REST API based gateway that provides perimeter security for Hadoop services.








Apache Knox hides the REST API URLs of all Hadoop services from external Hadoop clients. Clients only use the REST API provided by Apache Knox, and Knox delegates their requests to the corresponding Hadoop services. Before delegating a client request, Knox applies all of the security services configured on the cluster.
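As a rough sketch of what this looks like from the client side, the example below lists an HDFS directory over WebHDFS, but through the Knox gateway URL rather than the NameNode's own address. The gateway host, port, topology name ("default"), path and the demo LDAP credentials are assumptions; substitute the values of your own Knox deployment, and note that the gateway's TLS certificate must be trusted by the client JVM.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class KnoxWebHdfsClient {
    public static void main(String[] args) throws Exception {
        // Only the gateway URL is visible to the client; the NameNode's
        // internal host name and port stay hidden behind Knox.
        URL url = new URL("https://knox-host:8443/gateway/default/webhdfs/v1/tmp?op=LISTSTATUS");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        String credentials = Base64.getEncoder()
                .encodeToString("guest:guest-password".getBytes("UTF-8"));
        conn.setRequestProperty("Authorization", "Basic " + credentials);   // demo LDAP user (assumed)
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}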


Below are some more important points about Apache Knox.
  • A demo LDAP server is available with Apache Knox by default.
  • Kerberos is optional for Apache Knox, but it can easily be integrated with Knox.
  • External clients need not remember the REST API URLs of all the Hadoop services.
  • It provides an audit log.
  • It provides authorization, including service-level authorization.






Difference between Apache Hive and Apache Pig

MapReduce follows a key-value programming model. It has two core stages, Map and Reduce, and both take key-value pairs as input and produce key-value pairs as output. To write MapReduce applications, we need to know a programming language such as Java.
A MapReduce application has a map program, a reduce program and a driver program that runs the map and reduce programs. We need to create a jar containing these programs to process the data.
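As a minimal sketch of what such a driver program looks like, the example below wires the stock identity Mapper and Reducer classes into a Job, only so the sketch stays self-contained; a real application would plug in its own map and reduce classes, and the class and job names here are my own.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class IdentityJobDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "identity-job");
        job.setJarByClass(IdentityJobDriver.class);
        job.setMapperClass(Mapper.class);      // identity map: passes key-value pairs through
        job.setReducerClass(Reducer.class);    // identity reduce
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Packaged into a jar, a driver like this is launched with the hadoop jar command.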

MapReduce has a lengthy development cycle and may not be suitable for situations like ad-hoc querying. That is one of the reasons there are so many abstractions available on top of MapReduce, for example Cascading, Apache Crunch, Apache Hive and Apache Pig. All of these hide the key-value complexity from the developer. We will now discuss the differences between Apache Hive and Apache Pig.



Apache Hive vs Apache Pig






Types of Data they support


Apache Hive :  

Hive is a scalable data warehouse on top of Apache Hadoop. As data is stored in tables, it mainly supports structured data; processing semi-structured data is difficult and processing unstructured data is very difficult.

Apache Pig :

Pig is a platform for processing large data sets. Its language is called Pig Latin, and Pig Latin can process structured, semi-structured and unstructured data.



Programming model


Apache Hive: The Hive query language is a declarative language, and it is not easy to express complex business logic in it.

Apache Pig: Pig Latin is an imperative, data-flow style language, so you can write complex business logic more easily.
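To make the contrast concrete, the sketch below drives a few Pig Latin statements through Pig's Java API (PigServer), building the result step by step, while a comment shows the single declarative Hive statement that would express the same aggregation. The input file, field names and the Hive query itself are illustrative assumptions.

import org.apache.pig.ExecType;
import org.apache.pig.PigServer;

public class PigVsHiveStyle {
    public static void main(String[] args) throws Exception {
        // Hive (declarative): the whole job is one statement, e.g.
        //   SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id;
        // Pig Latin (imperative data flow): the same logic is a sequence of named steps.
        PigServer pig = new PigServer(ExecType.LOCAL);
        pig.registerQuery("orders = LOAD 'orders.csv' USING PigStorage(',') "
                + "AS (customer_id:chararray, amount:double);");
        pig.registerQuery("grouped = GROUP orders BY customer_id;");
        pig.registerQuery("totals = FOREACH grouped GENERATE group, SUM(orders.amount);");
        pig.store("totals", "customer_totals");
    }
}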


Integration


Apache Hive: Hive has a component called HCatalog that provides a cross-platform schema layer.
It also has a REST API called WebHCat, so you can integrate other tools with Apache Hive.
Teradata and Aster Data have already been integrated with Apache Hive, and even Pig can read and write Hive tables through HCatalog.

Apache Pig: It does not have any such feature, because it is a processing platform, not a storage platform.



Debugging


Apache Hive: We can debug Hive queries, but it is not that easy.

Apache Pig: Pig Latin is a data flow language designed with debugging in mind; diagnostic operators such as DESCRIBE, EXPLAIN and ILLUSTRATE make it easy to debug Pig Latin scripts.


Learning


Both can be learned easily. Hive is almost the same as SQL, and Pig Latin also looks somewhat like SQL.

One can quickly learn Hive and start writing queries to process data.

Industry Adoption


Apache Hive : It is more widely used in the industry than Apache Pig. 


Adhoc Querying


Both can be used for ad-hoc querying, but Hive is more suitable than Pig when the data is structured.


Complex Business logic


If you have to develop applications with a lot of business complexity, it is better to use Apache Pig than Hive.

For the same reason, Pig is more widely used than Hive in research applications.

Let me know if you want to compare these two for any other use-case.







Error Categories in Apache Pig


When you are working with Apache Pig, you might see error codes along with an error description.
I would like to discuss the categories of those error codes so that it becomes easier to make progress on error resolution.

Apache Pig categorizes error codes into four groups: INPUT, BUG, USER ENVIRONMENT and REMOTE ENVIRONMENT.


Error codes from 100 to 1999 fall in the INPUT group. For example, error code 1000 is thrown if the Pig Latin script cannot be parsed, and error code 1005 is thrown when we try to describe a relation that does not have an input schema.

Error codes from 2000 to 2999 fall under the BUG group; all of these are runtime errors. For example, error code 2009 is thrown when a copy operation fails.

Error codes from 3000 to 4999 fall under the USER ENVIRONMENT group. For example, error code 4002 is thrown when the program fails to read data from a file because there is a problem in the user environment.

Error codes from 5000 to 6999 fall under the REMOTE ENVIRONMENT group. For example, error code 6002 is thrown when the cluster runs out of memory.



Hadoop ecosystem research papers


As you know, the Hadoop ecosystem has many tools, and nearly every tool is an implementation of a research paper. Of course, many of these research papers were written by Google employees. I would like to put most of these papers in one place in this article.



As you already know, Hadoop has two core modules, HDFS and MapReduce. These two are open-source implementations of Google's GFS and MapReduce.


Below are those two papers.


1. GFS (The Google File System).

2. MapReduce: Simplified Data Processing on Large Clusters.

Apache Hive is a data warehouse created on top of Hadoop. It is an implementation of the paper Hive: A Petabyte Scale Data Warehouse Using Hadoop.

Apache Pig is a platform for analyzing large data sets using the data flow language Pig Latin. It is an implementation of the paper Pig Latin: A Not-So-Foreign Language for Data Processing.

Apache HBase is an open source implementation of Google's BigTable paper.

Apache Spark is an implementation of the paper Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing.

Apache Tez is an implementation of the paper Apache Tez: A Unifying Framework for Modeling and Building Data Processing Applications.

Apache Crunch is an implementation of Google's FlumeJava.

Apache ZooKeeper is an implementation of the paper ZooKeeper: Wait-free Coordination for Internet-scale Systems.

YARN is an implementation of the paper Apache Hadoop YARN: Yet Another Resource Negotiator.

Apache Storm is an implementation of the paper Storm @Twitter.


Hope these papers are useful to you.





Intermediate data spill in MapReduce



As we know, MapReduce has two stages: Map and Reduce. The map stage is responsible for filtering and preparing the data, and the reduce stage is responsible for aggregation and join operations. Map output is written to disk, and this operation is called spilling.
In this article, we discuss the important things that happen during data spilling after the map stage.


Map output is first written to a memory buffer whose size is decided by the io.sort.mb property; by default it is 100 MB.

When the buffer reaches a certain threshold, the buffered data starts spilling to disk. This threshold is decided by io.sort.spill.percent.

Before the data is written to disk, it is divided into partitions, one for each reducer.

Within each partition, an in-memory sort is performed by key.

If a combiner function is specified, it runs on the sorted data before each spill is written to disk. During the final merge, the combiner runs again only if there are at least a minimum number of spill files; that minimum is decided by min.num.spills.for.combine, which is 3 by default.

Once the map task has finished, the spill files are merged into a single, sorted output file. The number of spill files merged at a time is decided by io.sort.factor, which is 10 by default.
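As a rough sketch, these properties can be tuned from a job's driver code as shown below. The values are only examples, and the property names are the classic ones used above; newer Hadoop releases expose the same settings under names such as mapreduce.task.io.sort.mb and mapreduce.map.sort.spill.percent.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class SpillTuningExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.setInt("io.sort.mb", 200);                  // in-memory sort buffer: 200 MB instead of the default 100 MB
        conf.setFloat("io.sort.spill.percent", 0.85f);   // start spilling when the buffer is 85% full
        conf.setInt("min.num.spills.for.combine", 3);    // run the combiner during the merge only if at least 3 spill files exist
        conf.setInt("io.sort.factor", 20);               // merge up to 20 spill files at a time
        Job job = Job.getInstance(conf, "spill-tuning-example");
        // ... set the mapper, reducer, input and output paths as usual, then submit the job.
    }
}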

Below is a picture that depicts the flow; I hope it helps you understand it better.

Data Flow while spilling map output