
Shining Hope for Communities (SHOFCO) is an internationally recognized grassroots organization that unlocks the potential of urban slum dwellers to lead hopeful and fulfilling lives.
SHOFCO disrupts survival mode by providing critical services including health care, clean water, education, and economic empowerment; and linking these efforts to a community-led advocacy platform. SHOFCO currently impacts over 2.4 million people across 17 urban slums in Kenya and is the largest employer in Kibera. SHOFCO is a trusted name and service provider in Kibera with a 10-year track record. For more information, please visit www.shofco.org.

  • The Data Engineer at SHOFCO is responsible for designing, building, and maintaining the data architecture, pipelines, and systems that support SHOFCO’s data-driven initiatives. The role collaborates closely with cross-functional teams, including the Monitoring, Evaluation and Learning (MEL) data analytics team, software engineers, and program managers, to ensure that data is collected, processed, and made accessible for meaningful insights and informed decision-making. It demands a solid understanding of data engineering best practices and data modelling, and proficiency with a variety of data tools and technologies.
  • Key Responsibilities
  • Design and implement scalable and robust data architectures to support SHOFCO’s data needs, considering both current requirements and future scalability.
  • Evaluate and choose appropriate technologies for data storage, processing, and analytics, such as data warehouses, data lakes and distributed computing frameworks.
  • Develop, maintain, and optimize ETL (Extract, Transform, Load) processes to extract data from various sources, transform it into usable formats, and load it into the appropriate data repositories.
  • Collaborate with cross-functional teams to understand data requirements and ensure smooth data integration across different systems and platforms.
  • Implement data quality checks, data validation, and data cleansing processes to ensure the accuracy, consistency and reliability of the data.
  • Establish and enforce data governance policies, standards and best practices to maintain data integrity and security.
  • Build and maintain data pipelines that enable the efficient movement of data from source to destination, using tools and frameworks like Apache Spark, Apache Airflow, or similar technologies.
  • Monitor pipeline performance, troubleshoot issues, and ensure optimal data flow and processing.
  • Continuously optimize data processing and storage systems to improve performance, scalability, and efficiency.
  • Identify and address bottlenecks, optimize queries, and fine-tune database systems as needed.
  • Collaborate with Data Scientists, Analysts, and other stakeholders to understand data requirements and ensure that the data infrastructure meets their needs.
  • Stay updated with the latest trends and technologies in the data engineering field, and assess their potential impact on SHOFCO’s data ecosystem.
  • Propose and implement innovative solutions to leverage new technologies and improve data engineering practices.
  • Work closely with the IT team to ensure proper integration of data solutions with existing systems and infrastructure.
  • Collaborate with external partners, vendors, and stakeholders on data integration projects as needed.
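As an illustration of the ETL responsibilities above, a minimal extract-transform-load pipeline might look like the following Python sketch. The field names, records, and SQLite target are hypothetical and purely for demonstration; in practice the source would be an API or survey-collection system and the destination a data warehouse, with orchestration handled by a tool such as Apache Airflow.

```python
import sqlite3

def extract(rows):
    """Extract: pull raw records from a source (here, an in-memory list
    standing in for an API or survey-collection system)."""
    return list(rows)

def transform(records):
    """Transform: normalise field names and drop incomplete records."""
    return [
        {"site": r["site"].strip().title(), "beneficiaries": int(r["count"])}
        for r in records
        if r.get("site") and r.get("count") is not None
    ]

def load(records, conn):
    """Load: write cleaned records into the target store and return the row count."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS impact (site TEXT, beneficiaries INTEGER)"
    )
    conn.executemany(
        "INSERT INTO impact (site, beneficiaries) VALUES (:site, :beneficiaries)",
        records,
    )
    return conn.execute("SELECT COUNT(*) FROM impact").fetchone()[0]

# Example run against an in-memory SQLite database.
raw = [
    {"site": " kibera ", "count": "1200"},
    {"site": "Mathare", "count": "800"},
    {"site": "", "count": "50"},  # incomplete -> dropped in transform
]
conn = sqlite3.connect(":memory:")
loaded = load(transform(extract(raw)), conn)
print(loaded)  # 2 rows survive the quality filter
```

The same extract/transform/load split carries over directly to an Airflow DAG, where each stage becomes a task and the orchestrator handles scheduling and retries.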
  • Academic Qualifications
  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
  • Proficiency in programming languages such as Python, Java, or Scala for building data pipelines and data manipulation.
  • Proficiency in working with SQL and relational databases.
  • 3+ years of experience designing and maintaining data pipelines.
  • Proficiency working with third-party APIs.
  • Proficiency with data modelling techniques.
  • Experience with version control and code management in Git.
  • Strong foundation in mathematical analysis, especially statistics and probability.
  • Knowledge of good software engineering principles.
  • Professional Qualifications
  • Proven experience (3+ years) as a Data Engineer or similar role, working with large-scale data pipelines and architectures.
  • Strong experience with data warehousing concepts and technologies (e.g., SQL, NoSQL databases, data lakes).
  • Familiarity with cloud platforms such as AWS, Azure, or GCP and their data services (e.g., AWS Redshift, Google BigQuery).
  • Hands-on experience with data pipeline orchestration tools (e.g., Apache Airflow, Luigi).
  • Knowledge of data modelling, schema design, and data governance best practices.
  • Familiarity with containerization and orchestration technologies (Docker, Kubernetes) for deploying and managing data applications.
  • Previous exposure to data cataloging and metadata management tools.
  • Knowledge of machine learning workflows and how data engineering supports machine learning pipelines is a plus.
  • Demonstrated ability to manage and prioritize multiple projects and tasks in a dynamic environment.
  • Strong analytical and problem-solving abilities, with a keen attention to detail.
  • Strong communication skills, with the ability to work collaboratively within cross-functional teams.
  • Develop solutions for real-time data processing and streaming, enabling timely insights and analytics from live data sources.
  • Implement technologies like Apache Kafka or similar tools to capture and process real-time data events.
  • Work closely with the IT team to ensure data protection, privacy, and compliance with relevant data regulations (Kenya’s Data Protection Act, the GDPR, HIPAA, etc.).
  • Implement data encryption, access controls, and other security measures to safeguard sensitive information.
  • Design and implement automated testing frameworks to validate the accuracy and quality of data transformations and ETL processes.
  • Set up monitoring and alerting systems to proactively detect and address data pipeline issues.
  • Experience in the non-profit sector or social impact organizations is a plus.
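The data-quality and automated-testing duties listed above amount to running validation rules over each batch before it is loaded. A minimal sketch, with illustrative rules and hypothetical field names rather than SHOFCO's actual checks, might look like:

```python
def run_quality_checks(records):
    """Apply simple validation rules to a batch of records and return a
    list of (rule, failing_record) pairs; an empty list means the batch
    passed and is safe to load."""
    failures = []
    seen_ids = set()
    for r in records:
        # Rule 1: record IDs must be unique within the batch.
        if r.get("id") in seen_ids:
            failures.append(("duplicate_id", r))
        seen_ids.add(r.get("id"))
        # Rule 2: age must be an integer in a plausible range.
        if not isinstance(r.get("age"), int) or not (0 <= r["age"] <= 120):
            failures.append(("age_out_of_range", r))
        # Rule 3: every record must name a site.
        if not r.get("site"):
            failures.append(("missing_site", r))
    return failures

batch = [
    {"id": 1, "age": 34, "site": "Kibera"},
    {"id": 2, "age": -5, "site": "Mathare"},  # fails the age rule
    {"id": 2, "age": 20, "site": "Kibera"},   # duplicate id
]
issues = run_quality_checks(batch)
print([rule for rule, _ in issues])  # ['age_out_of_range', 'duplicate_id']
```

In a production pipeline these checks would run as an automated test stage, with failures routed to the monitoring and alerting systems described above instead of being printed.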
  • Other Requirements (unique/job-specific)
  • Any professional certification on data management or cloud services is a plus.
  • Technical Competencies:
  • Data Integration and ETL
  • Database Management
  • Data Modelling
  • Big Data Technologies
  • Data Warehousing
  • Data Pipeline Orchestration
  • Real-time Data Processing
  • Version Control
  • Cloud Platforms
  • Data Security and Compliance
  • Data Governance
  • Data Visualization
  • Automated Testing
  • Performance Tuning
  • API Integration
  • Machine Learning Infrastructure
  • Behavioural Competencies/Attributes:
  • Analytical Thinking
  • Adaptability
  • Attention to Detail
  • Problem-Solving
  • Communication
  • Ownership and Accountability
  • Innovation
  • Ethical and Social Responsibility
  • Time Management
  • Continuous Learning
  • Cultural Sensitivity
  • Risk Management
  • Interdisciplinary Collaboration

Interested applicants should send their applications, together with a detailed CV, to recruitment@shininghopeforcommunities.org, quoting their current and expected salaries. The subject line should clearly indicate the position being applied for. Applications without this information will not be considered.