makeyourAI.work: the machine teaches the human

Week 3: Talking to Models Properly

How LLM APIs Fit Into Real Products

Models are one service in a larger system, not the whole app.

Core · 60 minutes · LLM Interface Gate

Objective

Describe how an application calls an LLM API and transforms model output into product behavior.

The lesson is public. The pressure loop lives inside the app, where submission, revision, and review happen.

Deliverable

A prompt contract and structured-output integration design.

Each lesson contributes to a week-level artifact and eventually to the shipped AI-native SaaS.


What This Is

This lesson defines the operational shape of an LLM feature: request assembly, model invocation, output validation, persistence, and observability.

Why This Matters in Production

Teams fail when they treat the model call as the product. In reality, the product lives in the orchestration around the model: retries, output checks, storage, user messaging, and fallback behavior.

Mental Model

An LLM API is an unreliable-but-useful subsystem. Design around it the way you would around any external dependency: narrow interface, explicit validation, good telemetry, graceful degradation.
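A minimal sketch of that discipline, with a narrow interface, explicit validation, and a typed error callers can catch for graceful degradation. The names here (call_provider, get_review, ReviewServiceError) are illustrative assumptions, not a provider SDK; in production call_provider would be a real HTTP request.

```python
import json

class ReviewServiceError(Exception):
    """Typed error that callers can handle with fallback behavior."""

def call_provider(prompt: str) -> str:
    # Hypothetical stand-in for the real provider call.
    return json.dumps({"score": 4, "summary": "Clear structure, weak validation."})

def get_review(prompt: str) -> dict:
    # Narrow interface: callers receive a validated dict or a typed error,
    # never raw model text.
    try:
        raw = call_provider(prompt)
        parsed = json.loads(raw)
    except (OSError, json.JSONDecodeError) as exc:
        raise ReviewServiceError("provider call or parse failed") from exc
    # Explicit validation: reject output that does not match the contract.
    if not isinstance(parsed.get("score"), int) or "summary" not in parsed:
        raise ReviewServiceError("output failed contract validation")
    return parsed
```

Because the rest of the app only ever sees the validated dict or the typed error, fallback behavior (cached result, apology message, retry) lives in one place instead of leaking model quirks everywhere.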

Deep Dive

A production LLM flow starts before the prompt and ends after the response. You gather context, enforce limits, call the provider, parse output, validate structure, persist the result, and record what happened for future debugging or evaluation. The hard part is deciding what your system will trust, reject, retry, redact, and explain.
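The lifecycle above can be sketched end to end. Everything here is a hypothetical stand-in: fake_model replaces the provider call, build_request shows context gathering with a character limit, and AUDIT_LOG stands in for a real persistence and telemetry layer.

```python
import json
import time

AUDIT_LOG = []  # stand-in for a real persistence/telemetry layer

def fake_model(prompt: str) -> str:
    # Hypothetical provider call; assumed to return a JSON string.
    return json.dumps({"verdict": "pass"})

def build_request(context: str, user_input: str, max_chars: int = 2000) -> str:
    # Gather context and enforce limits before anything reaches the provider.
    return (context + "\n\n" + user_input)[:max_chars]

def run_llm_step(context: str, user_input: str) -> dict:
    prompt = build_request(context, user_input)
    raw = fake_model(prompt)           # model invocation
    result = json.loads(raw)           # parse output
    if "verdict" not in result:        # validate structure (simplified)
        raise ValueError("output failed validation")
    AUDIT_LOG.append({                 # persist + record for debugging/evaluation
        "ts": time.time(),
        "prompt": prompt,
        "raw_output": raw,
    })
    return result
```

Each line maps to one lifecycle stage, which makes the trust decisions explicit: the limit is enforced before the call, validation happens before anything is shown to a user, and the audit record is written whether or not the result is ever read again.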

Worked Example

A lesson review request comes from the learner, is enriched with rubric context, sent to the model, parsed into a structured schema, scored, persisted, and then shown to the learner. Every step has a distinct failure mode and a different owner.
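One way to sketch the enrich, parse, and validate steps of that flow. LessonReview, RUBRIC, and the field names are illustrative assumptions, not the course's actual schema.

```python
from dataclasses import dataclass
import json

RUBRIC = "Score the submission 0-5 for clarity and correctness."  # illustrative

@dataclass
class LessonReview:
    score: int
    feedback: str

def enrich(submission: str) -> str:
    # Enrichment step: the learner's submission plus rubric context.
    return RUBRIC + "\n\nSubmission:\n" + submission

def parse_review(raw: str) -> LessonReview:
    # Parse model output into the structured schema, then enforce the
    # rubric range so downstream scoring never sees an invalid value.
    data = json.loads(raw)
    review = LessonReview(score=int(data["score"]), feedback=str(data["feedback"]))
    if not 0 <= review.score <= 5:
        raise ValueError("score outside rubric range")
    return review
```

Splitting enrich and parse_review also splits the owners: prompt authors own enrichment, and the integration layer owns parsing and validation.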

Common Failure Modes

Common failures include trusting free-form model text as structured truth, failing to store the prompt and model versions behind each response, and having no audit path to explain why one learner received a given review.
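A sketch of the versioning fix: persist the model and prompt-template versions alongside each output so a specific review can be explained later. review_record and its fields are hypothetical; the point is what gets stored, not the exact shape.

```python
import hashlib

def review_record(prompt: str, model_id: str, prompt_version: str, output: dict) -> dict:
    # Persist enough metadata to reconstruct why this exact review was produced.
    return {
        "model_id": model_id,              # which model answered
        "prompt_version": prompt_version,  # which prompt template was in force
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output": output,
    }
```

Hashing the full prompt keeps the record small while still letting you detect that two learners saw different prompts, even under the same template version.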

References

Further reading the machine expects you to use properly.

OpenAI API Overview (official doc): grounds the model-call lifecycle in a real provider contract.

JSON Schema (official doc): useful for thinking about output validation.

OWASP Logging Cheat Sheet (official doc): ties LLM telemetry to ordinary application logging discipline.