Acceldata is creating the data observability space. We make it possible for data-driven enterprises to effectively monitor, discover, and validate data pipelines at petabyte scale. Our customers include a Fortune 500 company, one of Asia's largest telecom companies, and a unicorn fintech startup. We are lean, hungry, customer-obsessed, and growing fast. Our Solutions team values productivity, integrity, and pragmatism. We provide a flexible, remote-friendly work environment.
We are building software that can provide insights into companies' data operations and allows them to focus on delivering data reliably with speed and effectiveness. Join us in building an industry-leading data operations platform that focuses on optimizing modern data lakes for both on-premise and cloud environments.
- Our site reliability engineers work on improving the availability, scalability, performance, and reliability of enterprise production services for our products as well as our customers' data lake environments.
- You will use your expertise to improve the reliability and performance of Hadoop data lake clusters and data management services. Like our products, our SREs are expected to be platform- and vendor-agnostic when implementing, stabilizing, and tuning Hadoop ecosystems.
- You will provide implementation guidance, best-practices frameworks, and technical thought leadership to our customers for their Hadoop data lake implementation and migration initiatives.
- You need to be 100% hands-on and, as required, test, monitor, administer, and operate multiple data lake clusters across data centers.
- Troubleshoot issues across the entire stack - hardware, software, application, and network.
- Dive into problems with an eye to both immediate remediations as well as the follow-through changes and automation that will prevent future occurrences.
- Must demonstrate exceptional troubleshooting and strong architectural skills, and be able to describe solutions clearly and effectively in both verbal and written formats.
- Customer-focused, self-driven, and motivated, with a strong work ethic and a passion for problem-solving.
- 4+ years of experience designing, implementing, tuning, and managing services in distributed, enterprise-scale on-premise and public/private cloud environments.
- Familiarity with infrastructure management and operations lifecycle concepts and ecosystem.
- Hadoop cluster design, implementation, management, and performance tuning experience with HDFS, YARN, Hive/Impala, Spark, Kerberos, and related Hadoop technologies is a must.
- Must have strong SQL/HQL query troubleshooting and tuning skills on Hive/HBase.
- Must have strong capacity planning experience for Hadoop ecosystems/data lakes.
- Good to have hands-on experience with Kafka, Ranger/Sentry, NiFi, Ambari, Cloudera Manager, and HBase.
- Good to have data modeling, data engineering, and data security experience within the Hadoop ecosystem.
- Good to have deep JVM/Java debugging and tuning skills.