Experience with full life-cycle Hadoop implementations.
Hands-on experience with the Hadoop stack (e.g., MapReduce, Sqoop, Pig, Hive, HBase, Flume).
Hands-on experience productionizing Hadoop applications (e.g., administration, configuration management, monitoring, debugging, and performance tuning).
Experience with very high-volume column-store data warehouses such as HBase and Cassandra.
Must be flexible and motivated to work in a fast-paced environment.
MS/BS in Computer Science or equivalent experience preferred.
Passion for new technologies.
Experience building Web 2.0 applications with intensive large-scale data processing, including capturing, modeling, storing, and analyzing data.
Excellent problem-solving skills; proven teamwork and communication skills.
Passion for distributed systems and operating systems, with some hands-on experience.