DE, AE, and AI Tech Interview Assessment Plan
Purpose: Develop and standardize technical interview assessments for Data Engineering (DE) and Analytics Engineering (AE), and align them with the existing AI assessment. All three live as publicly accessible repositories and integrate with Stage 3 (role-specific) of the interview process.
Based on discussion in the morning data standup (Thu/Fri) with Demi, Awaish, and Uttam.
Context
- Stage 3 of the interview process is the role-specific interview (Interview & Hiring Decision System): “Core competence and real-world judgment,” with optional Interview Exercises for skill validation.
- Existing exercises in Notion: Data Analyst Case Study (slide deck), Webflow Developer Interview. Recruitment Resources also describe a technical flow: “Fork our ‘Interview’ repo, read issue, create commit, create PR, review, push, run — run duckdb in repo.” (A sketch of that final run step follows this list.)
- AI assessment (existing, public): brainforge-ai/Product-Safety-Compliance-AI-Challenge — AI Tech Challenge: build an AI-powered system (REST API or optional UI) that ingests PDFs/images/CSVs, extracts ingredients, compares to a forbidden list, and returns Accept/Reject. ~5 hours; fork → branch → PR + Loom (5–10 min). Evaluation: Data Handling, AI/ML Reasoning, Code Quality, System Design, Presentation, Completeness. Used for the AI track (Sam, Pranav).
- DE repo: brainforge-ai/data-engineering-assessment (Awaish’s work). Data track.
- AE: No assessment exists today; a new public Analytics Engineer assessment will be created for the Data track.
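The final “run duckdb in repo” step is the part most likely to trip candidates up, so the DE/AE READMEs should show it concretely. A minimal sketch of what that step could look like, assuming the repo ships sample CSVs under a data/ directory (the path and file name are illustrative, not from any existing repo):

```python
# Minimal sketch of the "run duckdb in repo" step; data/orders.csv is a
# hypothetical sample file, not a path from any existing repo.
# Requires: pip install duckdb
import duckdb

con = duckdb.connect()  # in-memory database; no setup needed
# DuckDB queries CSV files in place, so candidates can inspect the
# sample data with a single statement.
print(con.sql("SELECT * FROM 'data/orders.csv' LIMIT 5").fetchall())
```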
Part 1: DE assessment — assess and develop existing repo
Goal: Turn the current DE assessment into a clear, consistent, and publicly usable Stage 3 exercise.
1.1 Audit current DE repo (with Awaish/Demi)
- Inventory contents: README, instructions, tasks/issues, sample data or code, any automation (e.g. DuckDB run, tests).
- Map to Stage 3 criteria: Role-specific outcomes, functional depth (pipelines, warehouse, tooling), problem-solving in a realistic scenario, learning velocity and quality bar.
- Compare to existing pattern: Align with the “fork → issue → commit → PR → review → run” flow; use the AI challenge README as a template.
- Gaps to document: Missing rubric, time expectations, candidate instructions, or run/validation steps (a sketch of such a validation step follows below).
See DE assessment audit checklist for the detailed audit to run against the repo.
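One concrete gap worth closing during the audit is a scripted validation step, so every reviewer checks submissions the same way. A rough sketch, assuming the candidate’s pipeline writes to a local DuckDB file; the file, table, and column names here are invented for illustration:

```python
# Hypothetical smoke check for a DE submission. The warehouse file and
# the fact_orders table/columns are assumptions, not the repo's schema.
import duckdb

con = duckdb.connect("warehouse.duckdb")  # where the pipeline writes output

# Minimal bar: the fact table exists, is non-empty, and has no null keys.
n_rows = con.sql("SELECT count(*) FROM fact_orders").fetchone()[0]
n_null_keys = con.sql(
    "SELECT count(*) FROM fact_orders WHERE order_id IS NULL"
).fetchone()[0]

assert n_rows > 0, "fact_orders is empty: pipeline did not load data"
assert n_null_keys == 0, "fact_orders contains null order_id values"
print(f"OK: {n_rows} rows, no null keys")
```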
1.2 Develop and standardize
- Candidate-facing README: Match the structure of the AI challenge README: overview, objective, requirements, time limit (e.g. ~5 hours), submission (fork → branch → PR), and an evaluation criteria table tailored to DE.
- Explicit rubric: Tie tasks to the Stage 3 scorecard dimensions (e.g. 1 / 3 / 5 anchors; an example row follows this list); link to Interview Scorecard and Rubrics for the Data track.
- Public-safe cleanup: Remove internal references and ensure no secrets; add a short “For Brainforge candidates” or “How we use this” section if useful.
- Notion/process: Add (or update) an entry in the Interview Exercises database for “Data Engineering Assessment” with link to the public repo and when to use it (e.g. after 2nd interview, Data track).
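For reference, one way a rubric row could look; the dimension and anchor wording below are placeholders to be replaced with the Data track’s actual scorecard language:

| Dimension | 1 (below bar) | 3 (meets bar) | 5 (exceptional) |
|---|---|---|---|
| Pipeline design | Hard-coded, single script, not rerunnable | Clear ingestion/transform stages, rerunnable end to end | Idempotent, configurable, handles malformed input gracefully |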
1.3 Handoff and maintenance
- Owner: Awaish is the content owner; document in the README or Notion who to contact for changes.
- Review cadence: Optional (e.g. once per quarter or after each hire) to refresh scenarios or difficulty.
Part 2: AE assessment — create from scratch
Goal: A new, public Analytics Engineer assessment repo that fits Stage 3 and is distinct from the DE assessment.
2.1 Define scope and format
- Skills in scope (typical AE): SQL, dbt (or similar transformation framework), data modeling (staging → intermediate → marts), testing and documentation, light orchestration or BI-layer thinking.
- Format: Repo-based (recommended): fork → issue(s) with tasks → candidate implements in a branch → PR + run (e.g. `dbt build` or DuckDB). A sketch of the expected model layering follows this list.
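To make the expected layering concrete, here is a sketch of staging → mart models expressed as DuckDB SQL run from Python (in dbt each layer would be its own model file; the intermediate layer is omitted for brevity). All table, column, and seed-file names are illustrative:

```python
# Illustrative staging -> mart layering for the AE exercise; in dbt these
# would be separate .sql model files. seeds/raw_orders.csv and all column
# names are invented for this sketch.
import duckdb

con = duckdb.connect()

# Staging: rename and typecast raw columns, one view per source table.
con.sql("""
    CREATE VIEW stg_orders AS
    SELECT id AS order_id,
           customer AS customer_id,
           CAST(amount AS DOUBLE) AS amount
    FROM 'seeds/raw_orders.csv'
""")

# Mart: a business-ready table answering the exercise's question.
con.sql("""
    CREATE TABLE mart_revenue_by_customer AS
    SELECT customer_id,
           sum(amount) AS total_revenue,
           count(*) AS order_count
    FROM stg_orders
    GROUP BY customer_id
""")
print(con.sql("SELECT * FROM mart_revenue_by_customer LIMIT 5").fetchall())
```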
2.2 Design and implement
- Scenario: One or two business questions requiring staging → intermediate → mart models, tests, and brief documentation.
- Repository: brainforge-ai/analytics-engineering-assessment, with a README aligned with the AI/DE pattern, seed data, a skeleton project, tasks (issue or ISSUE_TASKS.md), and an optional run script (sketched after this list).
- Notion: Add “Analytics Engineer Assessment” to Interview Exercises with link and track/stage. Assign an owner (e.g. Demi or Awaish).
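The optional run script could double as lightweight tests mirroring dbt’s built-in `unique` and `not_null` checks, giving reviewers a pass/fail signal without installing dbt. A sketch, reusing the hypothetical mart from the previous example (the database file name is likewise invented):

```python
# Hypothetical run script for the AE repo: applies checks equivalent to
# dbt's `unique` and `not_null` tests on the mart. Database and table
# names continue the invented example above.
import duckdb

con = duckdb.connect("analytics.duckdb")  # built by the candidate's models

dupes = con.sql("""
    SELECT customer_id FROM mart_revenue_by_customer
    GROUP BY customer_id HAVING count(*) > 1
""").fetchall()
n_nulls = con.sql(
    "SELECT count(*) FROM mart_revenue_by_customer WHERE customer_id IS NULL"
).fetchone()[0]

assert not dupes, f"duplicate customer_id values: {dupes[:5]}"
assert n_nulls == 0, "null customer_id values in mart"
print("All checks passed")
```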
Part 3: AI assessment — use as reference and align
- Use Product-Safety-Compliance-AI-Challenge as the reference pattern for DE and AE READMEs and submission instructions.
- Ensure “AI Tech Challenge” is in the Interview Exercises database with “AI track, Stage 3.”
- Cross-consistency: shared elements (purpose, time, submission, evaluation table, contact); optional Loom for DE/AE; track ownership (AI = Sam/Pranav; Data = Awaish/Demi).
Part 4: Public and process consistency
- All three repos (AI, DE, AE) public; READMEs state they are used by Brainforge for candidate assessments.
- Recruitment Resources and Notion Interview Exercises list all three with links and “when to send which” by role/track.
- No overlap: AI = AI/NLP, APIs, unstructured data; DE = pipelines, warehouse, ingestion; AE = modeling, SQL, dbt, tests.
Suggested order of work
- AI: Confirm Notion and Recruitment Resources list the Product-Safety-Compliance-AI-Challenge; use as template.
- DE: Audit repo with Awaish/Demi → add README and rubric (same style as AI) → public-safe cleanup → update Notion.
- AE: Agree scenario and repo format → create repo with README/rubric → add to Notion and Recruitment Resources.
- Ongoing: Document “when to use which” (AI vs DE vs AE vs Data Analyst Case Study) in Notion or Recruitment handoff.
Summary
| Assessment | Repo | Track | Owner |
|---|---|---|---|
| AI Tech Challenge | Product-Safety-Compliance-AI-Challenge | AI | Sam, Pranav |
| Data Engineering | data-engineering-assessment | Data | Awaish |
| Analytics Engineer | brainforge-ai/analytics-engineering-assessment (to create) | Data | Demi / Awaish |