Job ID: TT-1012
Job Title: Hadoop System Administrator (Senior)
Location: Annapolis Junction, MD
Clearance: Secret
Travel: None
Description:

This position is responsible for implementing, managing, and administering the overall Hadoop infrastructure: adding and removing nodes using cluster monitoring tools such as Ganglia, Nagios, or Cloudera Manager; configuring NameNode high availability; and keeping track of all running Hadoop jobs. Works closely with the database, network, analytics, and application teams to ensure that all big data applications are readily available and performing as expected. Has a clear knowledge of Hadoop cluster configuration and is able to implement both personal and shared datasets using HDFS, Hive, and other cluster services. Acts as the point of contact (POC) for engagement with the vendor (currently Cloudera). Recommends system sizing and scaling based on current workloads. Works with DBAs to back up the system.
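
For illustration of the monitoring and NameNode high-availability duties: routine health checks on an HA Hadoop cluster are commonly run with the stock HDFS command-line tools. A minimal sketch follows; the NameNode service IDs nn1 and nn2 are placeholders and would come from the cluster's hdfs-site.xml.

  # Report which NameNode is currently active and which is standby
  hdfs haadmin -getServiceState nn1
  hdfs haadmin -getServiceState nn2

  # Summarize cluster capacity and DataNode liveness
  hdfs dfsadmin -report

  # Check HDFS health (missing, corrupt, or under-replicated blocks)
  hdfs fsck /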

***** MUST HAVE AN ACTIVE SECRET CLEARANCE *****

• Implement and support the enterprise Hadoop environment, covering design, capacity planning, cluster setup, performance tuning, monitoring, structure planning, scaling, and administration
• Work closely with infrastructure, network, database, business intelligence, and application teams to ensure business applications are readily available and performing within agreed-upon service levels
• Implement Hadoop ecosystem components such as YARN, MapReduce, HDFS, HBase, ZooKeeper, Pig, and Hive
• Own storage, performance tuning, and volume management for Hadoop clusters and MapReduce routines
• Monitor Hadoop cluster connectivity, job performance, and capacity; manage and review Hadoop log files; manage and monitor the cluster file system
• Support data modeling, design, and implementation; software installation and configuration; database backup and recovery; and database connectivity and security
• Implement and support enterprise security standards on the Hadoop cluster
• Set up new Hadoop users, including setting up and testing HDFS, Hive, Pig, and MapReduce access for each new user (see the sketch after this list)
• Manage system backups and disaster recovery (DR) plans
• Install operating system and Hadoop updates, patches, and version upgrades as required
• Carry out development activities as required by Big Data project specifications, working with Python, Apache Spark, and/or Java/Scala
• Consult with internal and external users to identify and document user objectives
• Provide root-cause analysis for recurring or critical problems
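
To make the user-onboarding item concrete: a minimal sketch of provisioning and smoke-testing HDFS access for a new user, in a standard UNIX shell. The username alice, its group, and the 750 permission mode are illustrative assumptions; the real procedure depends on the cluster's security model (for example, Kerberos principals and Sentry/Ranger policies on a Cloudera stack).

  # Create the new user's HDFS home directory and hand over ownership
  hdfs dfs -mkdir -p /user/alice
  hdfs dfs -chown alice:alice /user/alice
  hdfs dfs -chmod 750 /user/alice

  # Smoke-test HDFS access as the new user
  sudo -u alice hdfs dfs -put /etc/hosts /user/alice/hosts.txt
  sudo -u alice hdfs dfs -ls /user/alice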

Experience Required:

Minimum Requirements:
• A 4-year degree in computer science, information technology, or a related field, or equivalent experience
• 5+ years of experience in a Big Data role, with 2+ years of experience in a Hadoop environment
• 3+ years of additional experience with data warehousing and business intelligence tools, techniques, and technology (strongly desired)
• 5+ years of RHEL systems administration experience, or equivalent experience
• Strong knowledge of and experience with computer hardware, including servers, networking, SAN technologies, and I/O subsystems
• Extensive experience with a range of backup/restore and disaster recovery solutions
• Excellent knowledge of Hadoop architecture and system internals, including updates and features in new versions
• Solid experience with Hadoop system configuration and setup, along with its utilities and tool sets
• Strong analytical and troubleshooting skills
• Strong communication, organizational, and time-management skills
• Experience with RHEL 6.5 or later, including performance monitoring and tuning tools, installation options, problem determination and recovery, and security
• Experience writing and debugging shell scripts in a standard UNIX shell
• Must be available on call 24x7x365 and respond in a timely manner

Education/Certification Requirement:
A 4-year degree or equivalent in Computer Science, Information Systems, Engineering, Business, or another related scientific or technical discipline, with 7-10 or more years of related professional experience. (Education may be substituted for years of experience.)

Clearance: Active DoD Secret and DHS suitability
