Big Data Support Engineer Job, Absa Bank
IT Jobs, Absa Bank
Job Summary
Data Engineering is responsible for the central data platform that receives and distributes data across the bank. This is a multi-platform environment that leverages a blend of custom, commercial and open-source tools to manage and support thousands of critical data-related jobs. These jobs are supported and updated in line with changes across the landscape to avoid disruption to downstream data consumers.
Job Description
- In this role, you will be part of the Data Operations team, which is responsible for supporting all applications in the Hadoop ecosystem. The role extends to maintaining dataset changes and carrying out optimisation activities across all applications, including new development. You will therefore need a grasp of basic programming to manage Big Data and to transfer data into Hadoop.
Qualifications
Education:
- Bachelor’s degree in Computer Science, Information Systems or a related field.
Job Experience & Skills Required:
- 2+ years’ experience working in a Big Data environment, optimising and building big data pipelines, architectures and data sets using e.g. Java, Scala, Python, Hadoop, Apache Spark and Kafka
- Familiarity with Hadoop ecosystem and its components
- Good knowledge of core Hadoop concepts
- Solid working experience in Big Data development using SQL or Python
- Experience in Big Data development using Spark
- Experience in Hadoop, HDFS and MapReduce
- Experience in database design, development and data modelling
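To illustrate the MapReduce concepts the role calls for, here is a minimal sketch of the map, shuffle and reduce phases as a word count in plain Python. This is an illustration of the programming model only, not Hadoop's actual API; the function names are ours:

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit (word, 1) pairs, as a Hadoop mapper would
    for line in lines:
        for word in line.lower().split():
            yield word, 1

def shuffle_phase(pairs):
    # Shuffle: group values by key before reduction
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reduce: sum the counts for each word
    return {word: sum(counts) for word, counts in grouped.items()}

lines = ["big data big pipelines", "data pipelines"]
counts = reduce_phase(shuffle_phase(map_phase(lines)))
print(counts)  # {'big': 2, 'data': 2, 'pipelines': 2}
```

In real Hadoop jobs the shuffle is handled by the framework between the mapper and reducer tasks; the point here is only the per-key grouping that makes the final sum possible.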
The following additional knowledge, skills and attributes are preferred:
- Good knowledge of back-end programming, specifically Java
- Experience with development in a Linux environment and its basic commands
- Ability to write reliable, manageable, and high-performance code
- Basic knowledge of SQL and database structures, principles, and theories
- Knowledge of workflow schedulers
- Strong collaboration and communication skills
- Strong analytical and problem-solving skills
Responsibilities
- Support pipelines end to end
- Build enhancements and new developments
- Build and deploy new data pipelines
- Identify optimisation opportunities
- Improve recovery time in the event of production failures
- Test prototypes and oversee handover to the Data Operations teams
- Attend and contribute to regular team and User meetings
- Carry out the coding and programming of Hadoop applications
- Support high-speed querying
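One common technique behind the "improve recovery time" responsibility is automatic retry with backoff, so transient production failures recover without waiting for an on-call engineer. A minimal sketch in plain Python follows; the function name, parameters and defaults are illustrative, not part of any specific scheduler's API:

```python
import time

def run_with_retries(job, max_attempts=3, base_delay=1.0):
    """Re-run a failing pipeline job with exponential backoff.

    `job` is any zero-argument callable. Names and defaults here
    are illustrative only.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return job()
        except Exception:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            # Back off: 1x, 2x, 4x ... the base delay
            time.sleep(base_delay * 2 ** (attempt - 1))

# Example: a job that fails twice before succeeding
state = {"calls": 0}
def flaky_job():
    state["calls"] += 1
    if state["calls"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = run_with_retries(flaky_job, base_delay=0.01)
print(result, state["calls"])  # ok 3
```

Production schedulers such as Oozie or Airflow provide retry settings like this out of the box; the sketch just shows the underlying idea.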
How to Apply
🚨 Before You Apply for This Job: Need Help With Your CV?
This job will attract 1000+ applicants.
Many qualified professionals miss out on shortlisting and interviews — not because they lack experience, but because their CV doesn’t clearly show how they fit this specific job.
🎯 Want to get an interview fast? Customize your CV specifically for this job.
Using the same CV for every application will not get you interviews.
Email your CV today to our Client Service Manager, Rose, at cvwriting@corporatestaffing.co.ke
Subject: CV Review & Upgrade.
Rose and our recruiters will review your CV and show you exactly how to improve it for the job you are targeting.
Using an AI-generated CV but not getting interviews? Get it reviewed by our recruiters today.

