Our mission. As the world’s number 1 job site, our mission is to help people get jobs. We need talented, passionate people working together to make this happen. We are looking to grow our teams with people who share our energy and enthusiasm for creating the best experience for job seekers.
The team. We are builders, we are integrators. Tech Services creates and optimizes solutions for a rapidly growing business on a global scale. We work with distributed infrastructure, petabytes of data, and billions of transactions with no limitations on your creativity. You don’t have to wait for some architect or manager to tell you what you can work on - you decide the priorities. With tech hubs in Seattle, San Francisco, Austin, Tokyo, and Hyderabad, we are improving people’s lives all around the world, one job at a time.
Within the Big Data environment, you have a deep understanding of the complex nature of Hadoop. You feel as comfortable building simple data flows as you do digging into Hadoop source code to understand the subtle and obscure problems that can arise in this environment. You work with people, technical and non-technical alike, to understand their Big Data needs and to help them understand what they’re really trying to achieve. You’re a company resource, providing best practices, guidelines, and feedback on internal tools working with Hadoop. You have your finger on the pulse of the cluster, understanding when it’s not working right and diving in to diagnose the problem before it becomes systemic.
You have a cool head under pressure. When a technical fire occurs, you understand that putting it out should always avoid collateral damage. When you cause a fire (as everyone inevitably does), you take responsibility for it and work with the team to figure out the right way to put that fire out. You believe blaming is a waste of time; when something goes wrong, you figure out why it happened and how to prevent it from happening again. Better yet, you look for what went right in the first place and improve upon it.
- The design, care, and feeding of our multi-petabyte Big Data environments built upon technologies in the Hadoop Ecosystem
- Day-to-day troubleshooting of problems and performance issues in our clusters
- Investigate and characterize non-trivial performance issues in various environments
- Work with Systems and Network Engineers to evaluate new and different types of hardware to improve performance or capacity
- Deep understanding of system architecture and ability to validate system configurations from hardware layer to Hadoop Application layer
- Work closely with developers, engineering, and operations teams on key deliverables; evaluate their Hadoop use cases and provide feedback and design guidance
- Work simultaneously on multiple projects competing for your time and understand how to prioritize accordingly
- Be part of the On-call Rotation
- Willingness to mentor and teach people around you
As a member of this team, you seek out feedback on your designs and ideas and provide the same to others.
You constantly ask 'What am I missing?' and 'How will this NOT work?' You don't shy away from what you don't know; you readily admit that you don't know everything, and use every resource available to learn what you need to know.
- Bachelor's degree in Computer Science or a closely related technical field and 5+ years of Hadoop administration experience
- Intimate and extensive knowledge of Linux administration and engineering; we use CentOS/Red Hat Enterprise Linux (RHEL), and you should too
- Experience running systems on bare metal, on private cloud, in the public cloud, or in hybrid environments, using platforms such as AWS, OpenStack, and GCP
- Experience designing, implementing, and administering large (200-1,000 nodes), highly available Hadoop clusters secured with Kerberos, preferably using the Cloudera Hadoop distribution
- In-depth knowledge of capacity planning, management, and troubleshooting for HDFS, YARN/MapReduce, Hive, Presto, Spark, and HBase
- Understanding of system capacity and bottlenecks, and of the basics of memory, CPU, OS, storage, and networking
- Advanced background with common automation tools such as Puppet
- Advanced background with a higher-level scripting language, such as Python or Ruby
- Experience with monitoring tools used in the Hadoop ecosystem, such as Nagios and Cloudera Manager
- Experience with modern data pipelines, data streaming, and real-time analytics using tools such as Apache Kafka, Spark Streaming, Elasticsearch, or similar
- Experience with configuration management and orchestration tools (e.g. Chef, Puppet, Ansible, BOSH, Terraform) is a plus
- Experience with containerization and related technologies (e.g. Docker, Kubernetes) is a plus
Indeed provides a variety of benefits that help us focus on our mission of helping people get jobs.
View our bounty of perks: http://indeedhi.re/IndeedBenefits
Indeed is proud to be an equal opportunity employer, seeking to create a welcoming and diverse environment.
All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.