Data Engineer

GoTyme ZA (South Africa) | Cape Town | Part-time

Overall Purpose of the Role:

As an Intermediate Data Engineer at GoTyme, you will contribute to our mission of making banking simple, accessible, and affordable by building trusted, scalable, and secure data solutions that enable better decision-making, innovation, and operational excellence across the business.

You will design, develop, and maintain modern data pipelines and data products that transform raw data from a variety of internal and external sources into clean, protected, high-quality, and auditable datasets fit for analytics, reporting, operational use cases, and AI-driven initiatives.

Working within GoTyme’s cloud-native data ecosystem, you will help ensure that data is reliable, well-governed, reusable, and aligned to business needs.

This role is suited to a hands-on data engineer with solid experience in modern data platforms and a strong foundation in software engineering and data engineering best practices. You will work closely with Analytics Engineers, Data Analysts, Data Scientists, Product teams, and business stakeholders to deliver fit-for-purpose data solutions that support both day-to-day operations and longer-term strategic outcomes.

As an intermediate member of the team, you will be expected to contribute independently to the design and delivery of pipelines and data products, support the continuous improvement of platform standards and practices, and actively contribute to a culture of quality, ownership, and collaboration.

Requirements

Experience and Skills Required:

  • Bachelor’s degree in Computer Science, Information Systems, Engineering, Mathematics, or a related field, or equivalent practical experience.
  • 3–5 years’ experience designing, building, and maintaining ETL/ELT pipelines in a modern cloud data environment.
  • Strong proficiency in Python and SQL.
  • Hands-on experience with Apache Spark / PySpark and Databricks in production environments.
  • Good working knowledge of AWS, particularly S3 and cloud-native data architecture patterns.
  • Solid understanding of scalable data pipeline design, orchestration, and distributed data processing.
  • Experience working with structured and semi-structured data formats, including Delta, Parquet, JSON, CSV, XML, and text-based files.
  • Working knowledge of Spark Structured Streaming or similar real-time / near real-time data processing patterns.
  • Good understanding of data modelling, data transformation, and the principles of building reusable, fit-for-purpose data products.
  • Experience implementing data quality checks, monitoring, lineage, metadata, and documentation practices.
  • Familiarity with Git-based version control, CI/CD, code reviews, and modern software engineering practices.
  • Exposure to dbt, workflow automation, and modern data tooling would be advantageous.
  • Understanding of data security, privacy, retention, and auditability requirements, preferably within a regulated environment.
  • Experience integrating data from APIs, web services, and multiple operational source systems.
  • Exposure to business intelligence and analytics use cases, including support for reporting, KPI measurement, and self-service analytics.
  • Banking, fintech, or other regulated industry experience would be advantageous.
  • Strong analytical and problem-solving skills, with the ability to identify issues, optimise performance, and improve reliability.
  • Strong communication and collaboration skills, with the ability to work effectively across technical and non-technical teams.
  • A proactive, delivery-focused mindset with a willingness to learn, adapt, and embrace modern technologies, automation, and best practices.
  • AI and data analytics proficiency is essential.

Responsibilities:

  • Design, develop, test, deploy, and monitor scalable batch and streaming data pipelines on Databricks and AWS.
  • Build robust and maintainable data solutions using PySpark, Python, and SQL to ingest, transform, and serve data from a wide range of source systems.
  • Develop trusted, reusable, and well-structured data products that support analytics, reporting, operational decision-making, and AI use cases.
  • Curate and model high-quality datasets for downstream consumption by analysts, analytics engineers, data scientists, and business stakeholders.
  • Implement and enhance data quality checks, validation frameworks, monitoring, and alerting to ensure data accuracy, completeness, timeliness, and reliability.
  • Contribute to data observability, lineage, metadata, and documentation practices.
  • Optimise jobs, code, and platform usage for performance, scalability, maintainability, and cost efficiency.
  • Apply engineering best practices across development, testing, deployment, and operational support processes.
  • Ensure all data engineering solutions comply with GoTyme’s security, privacy, retention, and governance standards.
  • Collaborate closely with cross-functional teams to understand business requirements and translate them into scalable, reliable, and high-impact data solutions.
  • Support self-service analytics by providing reliable, well-documented, and easy-to-consume datasets.
  • Identify opportunities to automate manual processes and improve internal ways of working through better engineering practices and tooling.
  • Participate in peer reviews, knowledge sharing, and team initiatives that strengthen engineering standards and delivery quality.
  • Stay up to date with developments in data engineering, analytics, and AI, and apply relevant innovations to strengthen GoTyme’s data capabilities.