Specialist Support Engineer: DataOps – Absa Bank
Job Summary
Work as part of an integrated (run & build) tribe in lower-complexity environments to provide enterprise-wide application support across multiple stakeholder groups by maintaining and optimising enterprise-grade applications (tech products and services).
Job Description
Data Engineering is responsible for the central data platform that receives and distributes data across the bank. This is a multi-platform environment and leverages a blend of custom, commercial and open-source tools to manage and support thousands of critical data-related jobs. These jobs are supported and updated in line with changes across the landscape to avoid disruption to downstream data consumers.
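To make the "receives and distributes data" description concrete, here is a minimal, hypothetical sketch of the kind of batch job such a platform runs: extract records from an upstream source, transform them, and hand clean results to downstream consumers. All names and the CSV-style input format are invented for illustration; they are not Absa's actual stack.

```python
# Hypothetical extract-transform-load stages for a batch data job.
# Illustrative only; real platform jobs would read from and write to
# actual storage systems (HDFS, Kafka topics, databases, etc.).

def extract(source):
    """Pull raw rows from an upstream source (here, just an iterable)."""
    yield from source

def transform(rows):
    """Normalise each row; skip malformed records so downstream
    data consumers are not disrupted by bad input."""
    for row in rows:
        try:
            account, amount = row.split(",")
            yield {"account": account.strip(), "amount": float(amount)}
        except ValueError:
            continue  # quarantine/skip rows that do not parse

def load(records):
    """Deliver clean records downstream (here, collect into a list)."""
    return list(records)

raw = ["ACC-1, 100.50", "ACC-2, 75.00", "garbage-row"]
result = load(transform(extract(raw)))
# result holds the two clean records; the malformed row is dropped
```

The point of the skip-and-continue design is the one the description makes: a bad record should not halt the pipeline and disrupt downstream consumers.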
Responsibilities
- Manage an assigned team through day-to-day support tasks
- Oversee the team's development plans and provide mentorship
- Provide guidance and peer review
- Support pipelines end to end
- Build and deploy enhancements and new developments or new data pipelines
- Identify and drive optimisation opportunities across the environment
- Manage the handover of new applications ensuring that required standards and practices are met
- Improve recovery time in the event of production failures
- Test prototypes and oversee handover to the Data Operations teams
- Attend and contribute to regular team and User meetings
- Write the actual code for Hadoop applications
- Support high-speed querying
Job Experience & Skills Required:
- 3+ years' experience working in a Big Data environment, optimising and building data pipelines, architectures and data sets with e.g. Java, Scala, Python, Hadoop, Apache Spark and Kafka
- Minimum of one year's experience with the Scala programming language
- Minimum of one year's experience managing a team
- Cross domain knowledge
- Familiarity with the Hadoop ecosystem, its components and its core concepts
- Solid working experience in Big Data development using SQL or Python
- Experience in Big Data development using Spark
- Experience in Hadoop, HDFS and MapReduce
- Experience in database design, development and data modelling
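Several of the bullets above reference Hadoop, HDFS and MapReduce. The core MapReduce idea can be shown in a few lines of plain Python: map each input to (key, value) pairs, shuffle the pairs by key, then reduce each group. This is a teaching sketch of the model only, not Hadoop API code.

```python
from collections import defaultdict

def map_phase(lines):
    """Map: emit a (word, 1) pair for every word in every line."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle: group all values under their key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: combine each key's values into one result."""
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle(map_phase(["Spark and Hadoop", "spark jobs"])))
# counts["spark"] == 2 because the map keys are lower-cased
```

In a real Hadoop cluster the map and reduce phases run in parallel across HDFS blocks, and the framework performs the shuffle; the data flow, however, is exactly this.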
The following additional knowledge, skills and attributes are preferred:
- Good knowledge of back-end programming, specifically Java
- Experience developing in a Linux environment, including its basic commands
- Understanding of Cloud technologies and migration techniques
- Understanding of data streaming and the intersection of batch and real time data
- Ability to write reliable, manageable, and high-performance code
- Should have basic knowledge of SQL, database structures, principles, and theories
- Knowledge of workflow/schedulers
- Strong collaboration and communication skills
- Strong analytical and problem-solving skills
- Experience in Quality Assurance
- Experience in Stakeholder Management
- Experience in Testing
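The "basic knowledge of SQL" bullet above can be illustrated with Python's built-in sqlite3 module. The table and column names here are invented for the example and do not reflect the bank's actual systems; the GROUP BY aggregation is the kind of query a data pipeline might use to validate its output.

```python
import sqlite3

# In-memory database: nothing is written to disk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE txn (account TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO txn VALUES (?, ?)",
    [("ACC-1", 100.0), ("ACC-1", 50.0), ("ACC-2", 25.0)],
)

# Aggregate total amount per account.
rows = conn.execute(
    "SELECT account, SUM(amount) FROM txn GROUP BY account ORDER BY account"
).fetchall()
# rows == [("ACC-1", 150.0), ("ACC-2", 25.0)]
conn.close()
```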
How to Apply
🚨 Before You Apply for This Job. Need Help With Your CV?
This job will attract 1000+ applicants.
Many qualified professionals miss out on shortlisting and interviews — not because they lack experience, but because their CV doesn't clearly show how they fit this specific job.
🎯 Want to get an interview fast? Customize your CV specifically for this job.
Using the same CV for every application will not get you interviews.
Email your CV today to our Client Service Manager, Rose, at cvwriting@corporatestaffing.co.ke
Subject: CV Review & Upgrade.
Rose and our recruiters will review your CV and show you exactly how to improve it for the job you are targeting.
Using an AI-generated CV but not getting interviews? Have it reviewed by our recruiters today.

