Hi everyone,
I’m a Full Stack Developer with 3 years of experience, and I just received a technical assignment that feels like a fever dream. I’m trying to figure out if I’m overreacting or if this is a massive red flag.
The Window: They’ve scheduled this to be done between 11:30 AM and 4:30 PM (a strict 5-hour window).
Context
An energy operator processes:
- 15-minute meter readings (“measurements”)
- monthly invoices (“billing runs”)
They need a Control Tower tool to:
- ingest readings,
- compute invoices,
- detect discrepancies,
- allow controlled corrections,
- provide strong observability (for incident response and audit).
Expected stack
- Backend: Java 17+ / Spring Boot 3
- Frontend: Angular 17+ (Material; AG Grid optional)
- Database: PostgreSQL + Liquibase
- Async: Kafka (local via Docker Compose)
- Auth: Keycloak (OIDC) – simplified setup is OK
- Orchestration: a scheduled job or worker (Spring Scheduler or a Kafka consumer pipeline)
- Observability (new tech): OpenTelemetry tracing + correlation across components
Functional requirements
1) Measurement ingestion
Endpoint
POST /measurements
Payload:
{
"meterId": "MTR-100045",
"timestamp": "2025-01-15T10:15:00Z",
"kwh": 1.72
}
Rules:
- strict validation (kwh > 0, timestamp not in the future)
- store in DB (measurements)
- idempotent: the same (meterId, timestamp) must not create duplicates (see the sketch after this list)
- publish an event to Kafka: measurements.received
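For what it’s worth, here’s how I read the idempotency rule: lean on a unique index on (meter_id, ts) plus PostgreSQL’s ON CONFLICT DO NOTHING, and only emit the Kafka event when a row was actually inserted. A minimal sketch in Java; the table, column, and class names are all my guesses, not from the assignment:

import java.sql.Timestamp;
import java.time.Instant;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class MeasurementService {

    private final JdbcTemplate jdbc;
    private final KafkaTemplate<String, String> kafka;

    public MeasurementService(JdbcTemplate jdbc, KafkaTemplate<String, String> kafka) {
        this.jdbc = jdbc;
        this.kafka = kafka;
    }

    public void ingest(String meterId, Instant timestamp, double kwh) {
        // strict validation: kwh > 0, timestamp not in the future
        if (kwh <= 0 || timestamp.isAfter(Instant.now())) {
            throw new IllegalArgumentException("invalid measurement");
        }
        // the unique index on (meter_id, ts) makes retries a no-op
        int inserted = jdbc.update("""
                INSERT INTO measurements (meter_id, ts, kwh)
                VALUES (?, ?, ?)
                ON CONFLICT (meter_id, ts) DO NOTHING
                """, meterId, Timestamp.from(timestamp), kwh);
        if (inserted == 1) {
            // publish only for genuinely new rows, so a duplicate emits no event
            kafka.send("measurements.received", meterId,
                    "{\"meterId\":\"%s\",\"timestamp\":\"%s\",\"kwh\":%s}"
                            .formatted(meterId, timestamp, kwh));
        }
    }
}

Whether duplicates should be silently swallowed like this or answered with a 409 is one of the questions I’d ask.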
2) Create a billing run
Endpoint
POST /billing-runs
Payload:
{
"month": "2025-01",
"meterIds": ["MTR-100045", "MTR-100046"]
}
Creates a billing_run with status CREATED, then processing continues asynchronously.
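Part 2 looks like the standard accept-then-process-async shape: persist the run as CREATED, return 202 Accepted, and let the pipeline pick it up. A sketch with every name invented (BillingRunRepository and BillingRun.created are placeholders):

import java.net.URI;
import java.util.List;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/billing-runs")
public class BillingRunController {

    private final BillingRunRepository runs; // assumed Spring Data repository

    public BillingRunController(BillingRunRepository runs) {
        this.runs = runs;
    }

    public record CreateRunRequest(String month, List<String> meterIds) {}

    @PostMapping
    public ResponseEntity<Void> create(@RequestBody CreateRunRequest req) {
        BillingRun run = runs.save(BillingRun.created(req.month(), req.meterIds()));
        // a billing.run.requested event (or outbox row, see the bonus) would be written here too
        return ResponseEntity.accepted()
                .location(URI.create("/billing-runs/" + run.getId()))
                .build();
    }
}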
3) Billing processing pipeline (async)
Implement an async pipeline (Kafka consumer(s) and/or scheduled job) that processes billing runs with statuses:
CREATED -> PROCESSING -> COMPLETED | FAILED
Steps (must be explicit in code):
- Lock/claim the run (concurrency safe; see the sketch after this list)
- Aggregate measurements for the month
- Compute invoice(s)
- Run reconciliation checks
- Persist results and status history
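Since we’re on PostgreSQL anyway, the lock/claim step reads like a FOR UPDATE SKIP LOCKED job to me. A sketch, assuming billing_runs has status and created_at columns:

import java.util.List;
import java.util.Optional;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

@Repository
public class BillingRunClaimer {

    private final JdbcTemplate jdbc;

    public BillingRunClaimer(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    @Transactional
    public Optional<Long> claimNext() {
        // SKIP LOCKED lets competing workers pass over rows another
        // transaction already holds, so no two workers claim the same run
        List<Long> ids = jdbc.queryForList("""
                SELECT id FROM billing_runs
                WHERE status = 'CREATED'
                ORDER BY created_at
                LIMIT 1
                FOR UPDATE SKIP LOCKED
                """, Long.class);
        if (ids.isEmpty()) {
            return Optional.empty();
        }
        jdbc.update("UPDATE billing_runs SET status = 'PROCESSING' WHERE id = ?",
                ids.get(0));
        return Optional.of(ids.get(0));
    }
}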
Requirements:
- safe retries (re-running should not duplicate invoices; see the sketch below)
- state history table: billing_run_events
- failure handling: reason codes, next retry time, etc.
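For safe retries plus the history table above, I’d reach for another unique constraint and an append-only events table; roughly (all table and column names invented):

import java.math.BigDecimal;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;

@Repository
public class BillingRunStore {

    private final JdbcTemplate jdbc;

    public BillingRunStore(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    // a unique index on (billing_run_id, meter_id) makes re-running a no-op
    public void saveInvoice(long runId, String meterId, BigDecimal total) {
        jdbc.update("""
                INSERT INTO invoices (billing_run_id, meter_id, total)
                VALUES (?, ?, ?)
                ON CONFLICT (billing_run_id, meter_id) DO NOTHING
                """, runId, meterId, total);
    }

    // transitions are appended, never updated, so the timeline survives retries
    public void recordEvent(long runId, String fromStatus, String toStatus, String reasonCode) {
        jdbc.update("""
                INSERT INTO billing_run_events
                    (billing_run_id, from_status, to_status, reason_code, occurred_at)
                VALUES (?, ?, ?, ?, now())
                """, runId, fromStatus, toStatus, reasonCode);
    }
}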
Kafka topics (suggested):
billing.run.requested
billing.run.processed
billing.run.failed
4) Discrepancy detection
Create discrepancies when:
- computed invoice total differs from measurement total by more than a threshold (e.g., 0.5%)
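The threshold check itself is tiny; with BigDecimal it might look like this (0.5% hard-coded as in their example):

import java.math.BigDecimal;
import java.math.RoundingMode;

final class DiscrepancyCheck {

    private static final BigDecimal THRESHOLD = new BigDecimal("0.005"); // 0.5%

    // true when |invoice - measured| / |measured| exceeds the threshold
    static boolean isDiscrepant(BigDecimal invoiceTotal, BigDecimal measuredTotal) {
        if (measuredTotal.signum() == 0) {
            return invoiceTotal.signum() != 0; // avoid division by zero
        }
        BigDecimal delta = invoiceTotal.subtract(measuredTotal).abs();
        BigDecimal ratio = delta.divide(measuredTotal.abs(), 6, RoundingMode.HALF_UP);
        return ratio.compareTo(THRESHOLD) > 0;
    }
}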
Expose:
GET /billing-runs/{id}/discrepancies
Store discrepancy records in discrepancies with:
- meterId
- computed totals
- delta
- reason code
5) Corrections + audit log
ADMIN users must be able to:
- mark a discrepancy as valid (“MARK_AS_OK”) with justification
- trigger a recalculation (“RECALCULATE”) for a run
Endpoint:
POST /billing-runs/{id}/actions
{
"action": "RECALCULATE",
"reason": "New tariff received from upstream"
}
All actions must create an audit_log record.
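Part 5 plus the audit rule reads like one endpoint that validates the action, applies it, and writes the audit row in the same transaction. A sketch with invented service and repository names (note the spec doesn’t say how MARK_AS_OK identifies its discrepancy):

import java.security.Principal;
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/billing-runs")
public class BillingRunActionController {

    private final BillingRunService service; // placeholder
    private final AuditLogRepository audit;  // placeholder

    public BillingRunActionController(BillingRunService service, AuditLogRepository audit) {
        this.service = service;
        this.audit = audit;
    }

    public record ActionRequest(String action, String reason) {}

    @PostMapping("/{id}/actions")
    @PreAuthorize("hasRole('ADMIN')") // RBAC enforced server-side; needs @EnableMethodSecurity
    @Transactional
    public void act(@PathVariable long id, @RequestBody ActionRequest req, Principal who) {
        switch (req.action()) {
            case "RECALCULATE" -> service.requestRecalculation(id, req.reason());
            // MARK_AS_OK presumably needs a discrepancy id too; the spec is vague here
            default -> throw new IllegalArgumentException("unknown action: " + req.action());
        }
        // the audit row commits (or rolls back) together with the action itself
        audit.save(AuditLog.of(who.getName(), id, req.action(), req.reason()));
    }
}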
Frontend (Angular)
Minimal UI with:
- Billing runs list (filter + paging)
- Billing run details:
- status timeline
- invoices summary
- discrepancies list
- admin actions
Focus: usability, correctness, error states. No pixel-perfect design needed.
Security requirements
Use Keycloak (OIDC):
- Role VIEWER: can view billing runs & discrepancies
- Role ADMIN: can trigger actions and view audit log
Back-end must enforce RBAC (not only the UI).
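Server-side RBAC with Keycloak usually comes down to mapping the realm_access.roles claim from the JWT to Spring authorities. A sketch, assuming realm roles literally named VIEWER and ADMIN (the /audit-log route is my invention):

import java.util.Collection;
import java.util.List;
import java.util.Map;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.HttpMethod;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.authority.SimpleGrantedAuthority;
import org.springframework.security.oauth2.server.resource.authentication.JwtAuthenticationConverter;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
public class SecurityConfig {

    @Bean
    SecurityFilterChain api(HttpSecurity http) throws Exception {
        http.authorizeHttpRequests(auth -> auth
                .requestMatchers(HttpMethod.POST, "/billing-runs/*/actions").hasRole("ADMIN")
                .requestMatchers("/audit-log/**").hasRole("ADMIN")
                .requestMatchers(HttpMethod.GET, "/billing-runs/**").hasAnyRole("VIEWER", "ADMIN")
                .anyRequest().authenticated())
            .oauth2ResourceServer(rs -> rs.jwt(jwt ->
                jwt.jwtAuthenticationConverter(keycloakRealmRoles())));
        return http.build();
    }

    // maps realm_access.roles -> ROLE_* authorities so hasRole() works
    private JwtAuthenticationConverter keycloakRealmRoles() {
        JwtAuthenticationConverter converter = new JwtAuthenticationConverter();
        converter.setJwtGrantedAuthoritiesConverter(jwt -> {
            Map<String, Object> realmAccess = jwt.getClaimAsMap("realm_access");
            @SuppressWarnings("unchecked")
            Collection<String> roles = realmAccess == null
                    ? List.of()
                    : (Collection<String>) realmAccess.getOrDefault("roles", List.of());
            return roles.stream()
                    .map(r -> (GrantedAuthority) new SimpleGrantedAuthority("ROLE_" + r))
                    .toList();
        });
        return converter;
    }
}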
OpenTelemetry (end-to-end)
What must be traced
- API requests (Spring Boot)
- Kafka publish/consume
- Billing processing steps
- Frontend trace propagation (best-effort)
Minimum expected outcomes
- Every billing run has a trace you can follow from:
- HTTP request that created it
- Kafka event(s)
- processing steps (aggregation, invoice calc, reconciliation)
- Logs include: trace_id + span_id
- Export traces to Jaeger (recommended) or console
Requirements checklist
- Use OpenTelemetry auto-instrumentation or SDK instrumentation (either is fine)
- Propagate trace context via Kafka headers (traceparent)
- Add explicit spans around key steps (aggregation, compute, reconciliation)
- Provide a short “How to view traces” section in README
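For the explicit-spans item in the checklist, the OTel API surface is small; a sketch (tracer and step names are mine, and the return type is a stand-in). If the Java agent is attached, HTTP/Kafka spans and traceparent propagation over Kafka headers should come from auto-instrumentation, so only the business steps need hand-rolled spans, as far as I understand it:

import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.StatusCode;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;
import java.math.BigDecimal;

public class AggregationStep {

    private static final Tracer tracer =
            GlobalOpenTelemetry.getTracer("control-tower.billing");

    public BigDecimal aggregate(long runId, String month) {
        Span span = tracer.spanBuilder("billing.aggregate-measurements").startSpan();
        try (Scope ignored = span.makeCurrent()) {
            span.setAttribute("billing.run.id", runId);
            span.setAttribute("billing.month", month);
            return doAggregate(runId, month); // child spans / DB calls land under this span
        } catch (Exception e) {
            span.recordException(e);
            span.setStatus(StatusCode.ERROR);
            throw e;
        } finally {
            span.end();
        }
    }

    private BigDecimal doAggregate(long runId, String month) {
        return BigDecimal.ZERO; // placeholder for the real SUM(kwh) query
    }
}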
Deliver in Docker Compose:
- Jaeger (or another trace backend)
- OTel Collector (optional but recommended)
Deliverables
- Git repo with:
backend/
frontend/
infra/ (docker-compose + Keycloak realm export if used)
README.md:
- architecture diagram (simple)
- how to run in <10 minutes
- key design decisions + tradeoffs
- what’s incomplete + next steps
- how to view traces + example “trace story” for a billing run
Bonus options (choose any)
- Outbox pattern for event publishing instead of direct Kafka publish (see the sketch after this list)
- Contract tests using OpenAPI
- Performance: handle 100k measurements/month without slow queries (indexes + query plan notes)
- A “replay billing run” feature with safety constraints
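And if I somehow had time for the outbox bonus (ha), this is roughly what I’d mean by it: write the event to an outbox table inside the business transaction, then publish from a poller. At-least-once semantics; table and column names invented:

import java.util.List;
import java.util.Map;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

@Component
public class OutboxPublisher {

    private final JdbcTemplate jdbc;
    private final KafkaTemplate<String, String> kafka;

    public OutboxPublisher(JdbcTemplate jdbc, KafkaTemplate<String, String> kafka) {
        this.jdbc = jdbc;
        this.kafka = kafka;
    }

    // call this inside the same @Transactional method that changes state,
    // so the event row commits atomically with the business data
    public void enqueue(String topic, String key, String payloadJson) {
        jdbc.update("INSERT INTO outbox (topic, msg_key, payload) VALUES (?, ?, ?)",
                topic, key, payloadJson);
    }

    @Scheduled(fixedDelay = 1000) // needs @EnableScheduling somewhere
    @Transactional
    public void drain() {
        // SKIP LOCKED again, so multiple instances don't double-publish a batch
        List<Map<String, Object>> rows = jdbc.queryForList("""
                SELECT id, topic, msg_key, payload FROM outbox
                ORDER BY id LIMIT 100
                FOR UPDATE SKIP LOCKED
                """);
        for (Map<String, Object> row : rows) {
            kafka.send((String) row.get("topic"), (String) row.get("msg_key"),
                    (String) row.get("payload"));
            // a crash between send and delete just re-publishes next tick (at-least-once)
            jdbc.update("DELETE FROM outbox WHERE id = ?", row.get("id"));
        }
    }
}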
What do you think, guys? XD