Week 7 - Mid-Track Project

Project Guidelines and Requirements

This chapter lists exactly what your project must include to pass, what goes beyond the minimum, and how the assessment works. Use the checklists to track your progress.

Minimum requirements

Your project must include all of the following. Missing any item means the project is incomplete.

- Data pipeline
- Containerization
- Testing
- CI/CD
- Azure deployment

<aside> ⚠️ If Azure resources are unavailable (Postgres unreachable, ACR quota exceeded, Container Apps environment missing), contact your teacher immediately. Do not wait until Day 4 to discover a shared infrastructure issue.

</aside>

- Documentation
- Git workflow
- Cleanup

<aside> ⌨️ Hands on: Before you start coding, copy the minimum requirements checklist above into a file or issue tracker. Check off each item as you complete it. This prevents the "I thought I was done but forgot CI" moment on the last day.

</aside>
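One way to do this is a plain markdown task list, built from the minimum requirements above (the filename is just a suggestion):

```markdown
<!-- checklist.md -->
# Week 7 minimum requirements
- [ ] Data pipeline
- [ ] Containerization
- [ ] Testing
- [ ] CI/CD
- [ ] Azure deployment
- [ ] Documentation
- [ ] Git workflow
- [ ] Cleanup
```

GitHub renders `- [ ]` items as checkboxes, so the same list also works as an issue description.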

Stretch goals

These are optional but demonstrate deeper understanding.

Project structure

Use a clear layout. Here is a recommended structure:

week7-project/
├── src/
│   ├── pipeline.py        # Main pipeline logic
│   ├── models.py          # Pydantic validation models
│   └── storage.py         # Database/blob storage functions
├── tests/
│   └── test_models.py     # Pydantic model tests
├── .github/
│   └── workflows/
│       └── ci.yml         # Linting, formatting, and tests
├── Dockerfile
├── requirements.txt
├── .env.example
├── README.md
└── AI_ASSIST.md

A starter template with this structure is available in assets/starter-template/. See Chapter 1 for instructions on how to get it.

The template gives you working boilerplate so you can focus on your own logic.

Start by calling your API locally and replacing the fetch_data() stub. Everything else can stay in place until the pipeline works end to end.
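As a sketch of what replacing the stub might look like: Open-Meteo's forecast endpoint returns hourly values as parallel lists, which are easier to validate and store once reshaped into one dict per record. The coordinates, variable names, and the `reshape` helper below are illustrative, not part of the template.

```python
import requests

API_URL = "https://api.open-meteo.com/v1/forecast"

def fetch_data(latitude: float, longitude: float) -> list[dict]:
    """Fetch one day of hourly temperatures and return a list of records."""
    response = requests.get(
        API_URL,
        params={
            "latitude": latitude,
            "longitude": longitude,
            "hourly": "temperature_2m",
            "forecast_days": 1,
        },
        timeout=10,
    )
    response.raise_for_status()
    return reshape(response.json())

def reshape(payload: dict) -> list[dict]:
    """Zip Open-Meteo's parallel 'time' and 'temperature_2m' lists into records."""
    hourly = payload["hourly"]
    return [
        {"timestamp": ts, "temperature_c": temp}
        for ts, temp in zip(hourly["time"], hourly["temperature_2m"])
    ]
```

Keeping `reshape()` separate from the HTTP call means you can unit test the transformation with a hardcoded payload, without hitting the network.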

<aside> 💡 You do not have to use this exact layout, but your project should be organized enough that someone else can find and understand each component.

</aside>
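The layout above lists `src/models.py` for Pydantic validation models and `tests/test_models.py` for their tests. A minimal sketch of that pairing, assuming an hourly weather record with made-up field names:

```python
# Hypothetical src/models.py: one validated record per hourly data point.
from pydantic import BaseModel, ValidationError

class WeatherRecord(BaseModel):
    timestamp: str        # ISO 8601 timestamp string from the API
    temperature_c: float  # air temperature in degrees Celsius
    humidity_pct: int     # relative humidity, 0-100

# Valid input parses cleanly into a typed object
record = WeatherRecord(timestamp="2026-03-30T10:00", temperature_c=7.5, humidity_pct=81)

# Invalid input raises ValidationError instead of silently storing bad data
try:
    WeatherRecord(timestamp="2026-03-30T10:00", temperature_c="not a number", humidity_pct=81)
except ValidationError:
    print("rejected invalid record")
```

Your tests can then assert both directions: good data produces the expected fields, and bad data raises `ValidationError`.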

What a complete run looks like

Here is what the output of a successful pipeline run looks like, from local to Azure:

Local run:

$ docker run --env-file .env my-pipeline
2026-03-30 10:00:01 INFO Pipeline starting
2026-03-30 10:00:02 INFO Fetched 24 records from Open-Meteo API
2026-03-30 10:00:02 INFO Validated 24 / 24 records
2026-03-30 10:00:03 INFO Inserted 24 rows into Postgres
2026-03-30 10:00:03 INFO Uploaded raw data to blob: pipeline/2026-03-30_100003.json
2026-03-30 10:00:03 INFO Pipeline finished: 24 records stored

Azure verification:

$ az containerapp job execution list --name weather-job --resource-group rg-hyf-data --output table
Name              Status     StartTime
----------------  ---------  -------------------
weather-job-abc1  Succeeded  2026-03-30T10:05:00

$ psql "$POSTGRES_URL" -c "SELECT COUNT(*) FROM your_table_name;"  # replace with your table name
 count
-------
    24

$ az storage blob list --account-name hyfstoragedev --container-name raw --prefix pipeline/ --output table
Name                                  Last Modified
------------------------------------  -------------------
pipeline/2026-03-30_100003.json       2026-03-30T10:00:03

Your output will have different numbers and names, but the pattern is the same: logs show counts, Postgres has rows, blob storage has files.

Assessment

The project is evaluated through a 15-20 minute technical interview on Day 5. It has four parts:

| Part | Duration | What is evaluated |
| --- | --- | --- |
| Technical questions | 5-7 min | Can you discuss the concepts behind your project? (APIs, Pydantic, Docker, Azure) |
| Project demo | 5-7 min | Does your deployment work? Can you show the evidence? |
| Code discussion | 5-7 min | Can you explain why you made specific code decisions? |
| Code comprehension | 5 min | Can you read unfamiliar pipeline code, find a bug, and suggest improvements? |

Pass threshold: A grade of 6.0 or higher (out of 10), with no part scored 0.

The interview tests understanding, not memorization. Honest answers about trade-offs and limitations score higher than polished answers that avoid uncertainty. See Chapter 3: Gotchas & Pitfalls section 6 for specific preparation advice.

Submission

  1. Ensure all your PRs are merged into main.
  2. Your final main branch is the submission.
  3. Include in the final PR description: a link to your Container App Job execution (screenshot or CLI output) and a short summary of what the pipeline does.

<aside> ⚠️ Delete your Container App Job after the teacher has evaluated your project. Do not leave jobs running. See [Week 6 Chapter 6](../Week 6/week_6__6_cost_awareness.md) for why this matters.

</aside>
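Assuming the job and resource group names from the examples above (yours will differ), the cleanup is a single Azure CLI command:

```shell
# Delete the Container App Job once the teacher has evaluated the project.
# --yes skips the confirmation prompt; replace the names with your own.
az containerapp job delete --name weather-job --resource-group rg-hyf-data --yes
```

Run the `az containerapp job execution list` command from the verification section afterwards; it should report that the job no longer exists.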

Before submitting, review your work against the minimum requirements checklist above. A common mistake is submitting a working pipeline but forgetting CI, documentation, or cleanup.

<aside> 💡 Using AI to help: Ask an LLM to review your Dockerfile or README.md for common mistakes before submitting. Prompt: "Review this Dockerfile for a Python data pipeline and point out any issues." Always verify the suggestions yourself. (⚠️ Ensure no PII or sensitive company data is included!)

</aside>

Extra reading


The HackYourFuture curriculum is licensed under CC BY-NC-SA 4.0 (https://hackyourfuture.net/).


Built with ❤️ by the HackYourFuture community · Thank you, contributors

Found a mistake or have a suggestion? Let us know in the feedback form.