Our mission. As the world's number 1 job site, our mission is to help people get jobs. We need talented, passionate people working together to make this happen. We are looking to grow our teams with people who share our energy and enthusiasm for creating the best experience for job seekers.
The team. We are builders, we are integrators. Tech Services creates and optimizes solutions for a rapidly growing business on a global scale. We work with distributed infrastructure, petabytes of data, and billions of transactions, with no limitations on your creativity. You don't have to wait for an architect or manager to tell you what you can work on - you decide the priorities. With tech hubs in Seattle, San Francisco, Austin, Tokyo, and Hyderabad, we are improving people's lives all around the world, one job at a time.
Your job. Within the Big Data environment, you have a deep understanding of the complex nature of Hadoop. You feel as comfortable building simple data flows as you do digging into Hadoop source code to understand the subtle and obscure problems that can arise in this environment. You work with people, technical and non-technical alike, to understand their Big Data needs and to help them understand what they're really trying to achieve. You're a company resource, providing best practices, guidelines, and feedback on internal tools working with Hadoop. You have your finger on the pulse of the cluster, understanding when it's not working right and diving in to diagnose the problem before it becomes systemic.
You have a cool head under pressure. When a technical fire occurs, you understand that putting it out should never cause collateral damage. When you start a fire (as everyone inevitably does), you take responsibility for it and work with the team to figure out the right way to put it out. You believe blaming is a waste of time: when something goes wrong, you figure out why it happened and how to prevent it from happening again. Better yet, you look for what went right in the first place and improve upon it.
Your responsibilities.
- The design, care, and feeding of our multi-petabyte Big Data environments built upon technologies in the Hadoop ecosystem
- Day-to-day troubleshooting of problems and performance issues in our clusters
- Investigate and characterize non-trivial performance issues in various environments
- Work with Systems and Network Engineers to evaluate new and different types of hardware to improve performance or capacity
- Work with developers to evaluate their Hadoop use cases, provide feedback and design guidance
- Work simultaneously on multiple projects competing for your time and understand how to prioritize accordingly
- Be part of the On-call Rotation for your areas of responsibility and expertise
Your qualifications.
- Bachelor's degree in Computer Science or equivalent
- Intimate and extensive knowledge of Linux administration and engineering. We use CentOS/Red Hat Enterprise Linux (RHEL); you should, too
- Experience designing, implementing, and administering large (200+ node), highly available Hadoop clusters secured with Kerberos, preferably using the Cloudera Hadoop distribution.
- In-depth knowledge of capacity planning, management, and troubleshooting for HDFS, YARN/MapReduce, and HBase.
- An advanced background with common automation tools such as Puppet
- An advanced background with a higher-level scripting language, such as Perl, Python, or Ruby.
- Experience with monitoring tools used in the Hadoop ecosystem, such as Nagios, Cloudera Manager, or Ambari.
- Experience with Pepperdata a plus.
- Knowledge of Impala and Spark a plus.
- Cloudera Certified Administrator for Apache Hadoop (CCAH) certification a plus.
- Active membership in or contribution to open source Apache Hadoop projects a plus.