Schedule

Week 1 - Foundational Python

Week 2 - Structuring Data Pipelines

Week 3 - Ingesting and Validating Data

Week 4 - Data Processing with Pandas

Week 5 - Docker and CI/CD for Data

Week 6

Week 7

Week 8 - Break

Week 9

Week 10

Week 11

Week 12

Week 13

Week 14

Week 15 - Break

Week 16

Overview

Welcome to the Data Track!

This program is designed to take you from the foundations of Python programming into the world of Data Engineering. Over 16 weeks, you will learn how to build, deploy, and orchestrate modern data pipelines using industry-standard tools and cloud platforms. You'll move from simple data manipulation to complex ETL/ELT processes, database modeling, and professional dashboarding. This track is hands-on and intense - you'll be building real systems that process data from end to end.

Why Data Engineering?

Data is the fuel of modern organizations, but it's useless if it's messy, disconnected, or slow. Data Engineers are the architects who build the systems that collect, clean, and deliver this data. We chose this track because it's a high-demand field with a focus on building robust, scalable infrastructure. You'll learn to work with Python, SQL, Docker, and Azure - skills that are essential in any modern tech stack. If you enjoy problem-solving at scale and building things that "just work", this is the track for you.

💡

The skills you develop here, such as systems thinking, automation, and an understanding of data lifecycles, are highly valuable across the entire tech industry, from software development to specialized data science roles.

In this track, you will learn

  1. Advanced Python for data pipelines (OOP, Pydantic, Pytest) - a small taste is sketched right after this list
  2. Professional data ingestion and transformation (Pandas, Parquet)
  3. Cloud infrastructure and containerization (Azure, Docker, CI/CD)
  4. Data modeling and SQL for Analytics (dbt, Azure SQL)
  5. Orchestration and Big Data concepts (Airflow, Spark, Kafka)
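
To give you a small taste of item 1, here is a minimal, hypothetical sketch of the kind of validation work you'll do early in the track: a Pydantic model (v2 API) that coerces clean records and rejects broken ones before they enter a pipeline. The `Order` model and its fields are invented for illustration, not part of the course material.

```python
from pydantic import BaseModel, ValidationError, field_validator


class Order(BaseModel):
    """A hypothetical record we validate before it enters a pipeline."""
    order_id: int
    customer: str
    amount_eur: float

    @field_validator("amount_eur")
    @classmethod
    def amount_must_be_positive(cls, value: float) -> float:
        if value <= 0:
            raise ValueError("amount_eur must be positive")
        return value


raw_records = [
    {"order_id": "1001", "customer": "Ada", "amount_eur": "19.99"},  # strings coerced OK
    {"order_id": "oops", "customer": "Bob", "amount_eur": -5},       # rejected
]

for raw in raw_records:
    try:
        print("valid:", Order(**raw))
    except ValidationError as err:
        print("invalid record skipped:", err.error_count(), "error(s)")
```

Notice that validation happens at the boundary of the system: bad data is caught and reported the moment it arrives, instead of corrupting everything downstream.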

After completing this track, you will be able to

  1. Build, read, and understand data pipelines

    Apply software engineering principles to build robust, scalable data ingestion and transformation systems.

  2. Use cloud-native data tools effectively

    Work confidently with Azure, Docker, dbt, and modern orchestration tools to manage data workflows.

  3. Understand modern data architecture

    Explain and use fundamental concepts such as ETL vs ELT, data modeling (Star Schema, OBT), and distributed computing - a toy ETL vs ELT contrast is sketched after this list.

  4. Collaborate in a data-driven environment

    Work in teams, follow version-control workflows, and communicate technical decisions clearly to both developers and stakeholders.

  5. Work productively and critically with AI tools

    Use AI tools to support learning and development, while understanding their limitations, validating outputs, and maintaining code quality.
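
As a preview of outcome 3, here is a toy contrast between ETL and ELT in Python, with an in-memory SQLite database standing in for a real warehouse (in the track itself you'll use Azure SQL and dbt). The table and column names are invented for illustration.

```python
import sqlite3

import pandas as pd

raw = pd.DataFrame(
    {"customer": ["Ada", "Bob", "Ada"], "amount_eur": [19.99, 5.00, 12.50]}
)

# ETL: Transform in Python *before* loading anywhere.
etl_result = raw.groupby("customer", as_index=False)["amount_eur"].sum()
print(etl_result)

# ELT: Load the untouched raw data first...
con = sqlite3.connect(":memory:")
raw.to_sql("raw_orders", con, index=False)

# ...then Transform inside the database with SQL, the way dbt models do.
elt_result = pd.read_sql(
    "SELECT customer, SUM(amount_eur) AS amount_eur "
    "FROM raw_orders GROUP BY customer",
    con,
)
print(elt_result)
```

Both approaches produce the same totals; the difference is *where* the transformation runs, which drives many of the architectural decisions you'll study in this track.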

Ready? Let’s begin with Week 1 - Foundational Python


CC BY-NC-SA 4.0

*https://hackyourfuture.net/*
