brandonbryant12 / fitness-dashboard.html
Created January 29, 2026 16:38
Fitness Dashboard - Interactive workout visualization
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Brandon's Fitness Dashboard</title>
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
<style>
* {
margin: 0;
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "zai": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Z.AI",
      "options": {
        "baseURL": "https://your-api-endpoint/v1",
        "apiKey": "{env:ZAI_API_KEY}"
      }
    }
  }
}
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "your-provider-name": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Display Name",
      "options": {
        "baseURL": "http://localhost:11434/v1",
        "apiKey": "{env:YOUR_API_KEY}"
      }
    }
  }
}
Here is a detailed prompt you can use with an AI coding assistant (like Cursor, GitHub Copilot, or ChatGPT) to generate your full project scaffolding.
It is designed to force the AI to separate your business logic (the pipeline) from your API logic (FastAPI), which is critical for testing.
The Prompt to Copy & Paste
> I am refactoring a Python data pipeline script into a production-ready FastAPI service. I have already installed fastapi, pydantic, instructor, and openai.
> The Goal:
> Create a strictly typed, async API service that processes messages. The pipeline flow is: Input (SQS style payload) -> Pre-processing -> Azure OpenAI Analysis (via Instructor) -> Database Write (Mocked for now).
> Please generate the code for the following project structure:
> * app/core/config.py:
> * Use pydantic-settings to manage environment variables (Azure Endpoint, API Key, Deployment Name).
> * app/models/schemas.py:
brandonbryant12 / pr-review.md
Created December 8, 2025 17:36
Comprehensive PR Code Review - Claude Code Skill & Command (inspired by GitHub Copilot)
description: Comprehensive PR code review inspired by GitHub Copilot - analyzes code quality, security, performance, and best practices

arguments:

| name | description | required |
| --- | --- | --- |
| pr_number | The PR number to review (e.g., 123) | true |

Comprehensive PR Code Review

Yes, absolutely—you can get full tracing, spans, and rich data in Datadog even if everything currently just logs → CloudWatch → Datadog.
But there’s an important distinction:
• What you have now:
  • Logs from Lambda and Step Functions go to CloudWatch, then to Datadog Logs.
  • This is logging only (unless you embed trace IDs yourself).
• What you want:
  • APM tracing + spans (end-to-end per transcript / per Lambda execution).
  • Log ↔ trace correlation, service maps, latency breakdown, etc.
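To make that concrete, here is a minimal sketch of instrumenting a Node.js Lambda for APM, assuming the datadog-lambda-js wrapper and dd-trace are available and the Datadog extension/forwarder is already shipping data to Datadog; the span name transcript.process and the event shape are placeholders, not your actual code.

```typescript
// Sketch only: wraps a Lambda handler so it emits APM traces/spans alongside
// the logs already flowing to Datadog. Assumes DD_API_KEY, DD_SERVICE, DD_ENV
// are configured and DD_LOGS_INJECTION=true for log ↔ trace correlation.
import { datadog } from "datadog-lambda-js";
import tracer from "dd-trace";

export const handler = datadog(async (event: { transcriptId: string }) => {
  // Custom span around the unit of work you care about (one transcript).
  return tracer.trace(
    "transcript.process",
    { resource: event.transcriptId },
    async () => {
      // ...existing pipeline logic; logs written here get trace IDs injected
      // automatically, so Datadog can link them to this span.
      return { ok: true };
    }
  );
});
```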
Short version: use TanStack Start for the full‑stack shell + server functions, TanStack Query for talking to your existing DB APIs, and TanStack DB as your in‑browser “mini database” that keeps everything normalized, reactive, and crazy fast.
I’ll break it down into:
1. Mental model: what each piece should do
2. Suggested architecture / folder structure
3. How to wire TanStack Start ⇄ DB (with stored procedures)
4. How to wire TanStack Query + TanStack DB
5. How this helps for your call transcript grading domain
6. Specific tricks to make the app feel “instant”
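Before going through those, here is a rough sketch of the Query layer (item 4), assuming a hypothetical /api/transcripts endpoint in front of your existing DB APIs; the Transcript shape and hook name are placeholders.

```typescript
// Rough sketch of item 4: TanStack Query as the fetch/cache layer in front of
// an existing DB-backed API. Endpoint URL and Transcript fields are assumptions.
import { useQuery } from "@tanstack/react-query";

interface Transcript {
  id: string;
  agentId: string;
  grade: number | null;
}

export function useTranscripts(agentId: string) {
  return useQuery({
    queryKey: ["transcripts", agentId],
    queryFn: async (): Promise<Transcript[]> => {
      const res = await fetch(`/api/transcripts?agentId=${encodeURIComponent(agentId)}`);
      if (!res.ok) throw new Error(`Failed to load transcripts: ${res.status}`);
      return res.json();
    },
    staleTime: 30_000, // treat cached data as fresh for 30s so navigation feels instant
  });
}
```

The rows returned here are what you would then sync into your TanStack DB collections, so downstream reads stay normalized and reactive.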
Great—let’s switch to moment and update the rules exactly as you described.
Updated Rules (with moment)
• If neither startDate nor endDate is provided → window is the last N days ending today (inclusive), where N = defaultDays (default 30).
• If only startDate is provided → endDate defaults to today (inclusive).
• If only endDate is provided → startDate falls back to endDate - (defaultDays - 1) (inclusive window), with defaultDays coming from the factory options (defaults to 30).
• If both are provided → validate and normalize.
• Normalized output is UTC YYYY-MM-DD (date-only).
• All invalid inputs throw BadRequestException → maps to 400, never 500.
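A minimal sketch of those rules with moment, assuming a factory that takes defaultDays and NestJS's BadRequestException; the function and option names are placeholders.

```typescript
import { BadRequestException } from "@nestjs/common";
import moment from "moment";

const DATE_FMT = "YYYY-MM-DD";

// Factory so callers can override defaultDays; returns a pure normalizer.
export function createDateRangeNormalizer(
  { defaultDays = 30 }: { defaultDays?: number } = {}
) {
  const parse = (value: string, field: string) => {
    const m = moment.utc(value, DATE_FMT, true); // strict parse, UTC, date-only
    if (!m.isValid()) {
      throw new BadRequestException(`${field} must be a valid ${DATE_FMT} date`);
    }
    return m.startOf("day");
  };

  return function normalize(startDate?: string, endDate?: string) {
    const today = moment.utc().startOf("day");
    // Missing endDate -> today; missing startDate -> end - (defaultDays - 1), inclusive window.
    const end = endDate ? parse(endDate, "endDate") : today;
    const start = startDate
      ? parse(startDate, "startDate")
      : end.clone().subtract(defaultDays - 1, "days");
    if (start.isAfter(end)) {
      throw new BadRequestException("startDate must be on or before endDate");
    }
    return { startDate: start.format(DATE_FMT), endDate: end.format(DATE_FMT) };
  };
}
```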
Absolutely—there’s a clean, idiomatic way to (a) **re‑use start/end date validation across many DTOs** and (b) **run unit + e2e in one Jest invocation with a single coverage report**.
---
## A. Re‑usable date‑range validation for query DTOs
**Goal:** many endpoints accept `start` / `end` query params; we want:
* parse strings → `Date`
* validate each value
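One common shape for this (a sketch, not the only option) is a shared base DTO using class-transformer + class-validator that endpoint DTOs extend; the class and property names below are placeholders, and it assumes a global ValidationPipe with transform enabled.

```typescript
// Shared base DTO: start/end parsing + validation lives in one place.
// Assumes ValidationPipe({ transform: true, whitelist: true }) is registered globally.
import { Type } from "class-transformer";
import { IsDate, IsOptional, IsString } from "class-validator";

export class DateRangeQueryDto {
  @IsOptional()
  @Type(() => Date) // "2026-01-31" -> Date before validation runs
  @IsDate({ message: "start must be a valid date" })
  start?: Date;

  @IsOptional()
  @Type(() => Date)
  @IsDate({ message: "end must be a valid date" })
  end?: Date;
}

// Endpoint-specific DTOs just extend the base and add their own fields.
export class ListTranscriptsQueryDto extends DateRangeQueryDto {
  @IsOptional()
  @IsString()
  agentId?: string;
}
```

Cross-field rules (for example, start must not be after end) can then live in a single custom validator applied once to the base class rather than being repeated per endpoint.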
Below is a **repo‑level structure** that makes GitHub Copilot automatically pick up the right guidance for each part of your monorepo (APIs with NestJS and UIs with Angular), using only what GitHub documents and supports today.
---
## What to create (folder & file layout)
```
your-monorepo/
├─ APIs/
│ └─ ... (NestJS apps & libs)