@jamiekt
Last active March 13, 2026 13:58
---
name: day-summary
description: Generate a day summary capturing Claude Code conversations, GitHub activity, and Google Calendar meetings (with meeting notes), then append it to an Obsidian daily note for that day. Use this skill whenever the user asks for a daily summary, end-of-day recap, day review, or wants to log their work activity to Obsidian. Also trigger when the user says things like "wrap up my day", "what did I do today", "log today's work", "wrap up yesterday", "what did I do yesterday", or "log yesterday's work".
---

# Day Summary

Generate a detailed end-of-day summary of the user's Claude Code sessions (with cost), GitHub activity, Google Calendar meetings, and Google Workspace document activity, then append it to their Obsidian daily note.

## Overview

This skill collects five sources of activity data:

  1. Claude Code sessions from today — extracted from session JSONL files across all projects
  2. Claude Code cost — calculated from token usage in session JSONL files
  3. GitHub activity for today — commits, PRs raised, PRs reviewed, PRs merged
  4. Google Calendar — today's meetings with summaries of any attached notes or descriptions
  5. Google Workspace documents — Docs, Sheets, Slides, and Forms that the user edited or viewed today

It then formats a summary and appends it to the Obsidian daily note using the Obsidian CLI.

## Determining the target date

The user may ask to summarise today, yesterday, or a specific date. All data-gathering steps and the Obsidian append must use the same target date. Default to today if no date is specified. If the user says "yesterday", "last Friday", or gives a specific date, use that date throughout.
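
The resolution logic can be sketched as follows (the relative-term mapping here is an illustrative assumption about how phrases might be interpreted, not part of the skill itself):

```python
from datetime import date, timedelta

def resolve_target_date(phrase: str, today: date) -> str:
    """Map common phrasings to a YYYY-MM-DD target date string."""
    phrase = phrase.lower().strip()
    if phrase in ("", "today"):
        return today.isoformat()
    if phrase == "yesterday":
        return (today - timedelta(days=1)).isoformat()
    if phrase.startswith("last "):
        # e.g. "last friday": walk back to the most recent such weekday
        weekdays = ["monday", "tuesday", "wednesday", "thursday",
                    "friday", "saturday", "sunday"]
        target = weekdays.index(phrase.split()[1])
        delta = (today.weekday() - target - 1) % 7 + 1
        return (today - timedelta(days=delta)).isoformat()
    return phrase  # assume an explicit YYYY-MM-DD date was given

print(resolve_target_date("yesterday", date(2026, 3, 11)))  # 2026-03-10
```

Whatever date this resolves to is then substituted for every `TARGET_DATE` / `YYYY-MM-DD` placeholder in the steps below.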

## Step 1: Gather Claude Code Session Summaries

Session data lives under ~/.claude/projects/. Each project directory contains .jsonl session files and a sessions-index.json.

### Finding the target date's sessions

**Critical:** The sessions-index.json is often stale — the target date's sessions may NOT be in the index. You must also scan the filesystem directly for .jsonl files modified on the target date that aren't in the index. This is the primary discovery method.

Run this script to find all of the target date's sessions. Replace TARGET_DATE with the determined target date (e.g., 2026-03-11):

```bash
python3 -c "
import json, os, glob
from datetime import datetime

target_date = 'TARGET_DATE'
found = {}  # dedup by session path

# Method 1: Scan filesystem directly for .jsonl files modified on target date
# This catches sessions not yet in the index (the common case for recent sessions)
for project_dir in glob.glob(os.path.expanduser('~/.claude/projects/*')):
    if not os.path.isdir(project_dir):
        continue
    project = os.path.basename(project_dir)
    for jsonl in glob.glob(os.path.join(project_dir, '*.jsonl')):
        try:
            fmtime = os.path.getmtime(jsonl)
            if datetime.fromtimestamp(fmtime).strftime('%Y-%m-%d') == target_date:
                sid = os.path.basename(jsonl).replace('.jsonl', '')
                found[jsonl] = {'project': project, 'sessionId': sid, 'path': jsonl, 'firstPrompt': ''}
        except OSError:
            pass

# Method 2: Also check sessions-index.json for fileMtime (catches indexed sessions)
for idx_file in glob.glob(os.path.expanduser('~/.claude/projects/*/sessions-index.json')):
    project = os.path.basename(os.path.dirname(idx_file))
    with open(idx_file) as f:
        data = json.load(f)
    entries = data.get('entries', data) if isinstance(data, dict) else data
    for entry in entries:
        fp = entry.get('fullPath', '')
        mtime = entry.get('fileMtime', 0)
        # fileMtime is in milliseconds
        if mtime > 0 and datetime.fromtimestamp(mtime / 1000).strftime('%Y-%m-%d') == target_date:
            if fp not in found:
                found[fp] = {'project': project, 'sessionId': entry.get('sessionId',''), 'path': fp, 'firstPrompt': entry.get('firstPrompt','')[:150]}
            elif not found[fp]['firstPrompt']:
                found[fp]['firstPrompt'] = entry.get('firstPrompt','')[:150]

for s in found.values():
    print(json.dumps(s))
"
```

### Extracting session content

For each session found:

1. If firstPrompt was populated from the index, that may be enough for a one-line summary.
2. For sessions not in the index (most of the target date's sessions), read the first ~100 lines of the JSONL file. Look for:
   - Entries with `"type": "user"` — these contain the user's prompts in `entry["message"]`
   - The `gitBranch` field — shows which branch/feature the session was about
3. Produce a one-line summary describing what was worked on. Include the git branch name, which often contains a Jira ticket and description (e.g., `patch/NAOPS-9451-get-adl-delivery-date-from-boxes`).

**Important:**

- **Session JSONL files can be very large** (multi-MB). Do NOT read entire files — only the first ~100 lines.
- **Exclude the current session** — identify it as the most recently modified JSONL file across all project directories, and skip it. The current session is the one running this skill and should not be summarised.
- **Filter out day-summary sessions** — if the first user prompt in a session is the day-summary skill invocation (i.e., it contains "Day Summary" and "Generate a detailed end-of-day summary"), skip it. These are previous runs of this skill, not real work sessions.
- **Filter out empty/trivial sessions** — if a session has fewer than 10 lines in the JSONL file, skip it. These are sessions that were opened but no meaningful work was done.
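
The exclusion rules above can be sketched as a single filter over the discovered sessions (a sketch under the assumptions stated above — the line-count threshold and prompt markers follow this skill's rules, not an official schema):

```python
import json

def keep_session(path: str, current_session_path: str, max_lines: int = 100) -> bool:
    """Apply the three exclusion rules to one session JSONL file."""
    if path == current_session_path:   # skip the session running this skill
        return False
    lines = []
    with open(path) as f:
        for i, line in enumerate(f):
            if i >= max_lines:         # never read more than the first ~100 lines
                break
            lines.append(line)
    if len(lines) < 10:                # trivial session: opened, no real work
        return False
    for line in lines:                 # check the first user prompt
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue
        if entry.get("type") == "user":
            msg = json.dumps(entry.get("message", ""))
            # previous day-summary runs are not real work sessions
            return not ("Day Summary" in msg
                        and "Generate a detailed end-of-day summary" in msg)
    return True
```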

## Step 2: Calculate Claude Code Cost

For each session found in Step 1, calculate the cost from token usage data in the JSONL files.

### How token data is stored

Each assistant message in the JSONL has a message.usage object with:

- `input_tokens` — prompt tokens sent to the model
- `output_tokens` — completion tokens generated
- `cache_creation_input_tokens` — tokens written to the prompt cache
- `cache_read_input_tokens` — tokens read from the prompt cache

The model name is in message.model (e.g., claude-opus-4-6, claude-sonnet-4-6).

### Pricing (per million tokens)

| Model | Input | Output | Cache Write | Cache Read |
|-------|-------|--------|-------------|------------|
| claude-opus-4-6 (opus) | $15 | $75 | $18.75 | $1.50 |
| claude-sonnet-4-6 (sonnet) | $3 | $15 | $3.75 | $0.30 |
| claude-haiku-4-5 (haiku) | $0.80 | $4 | $1.00 | $0.08 |

For any unrecognized model, use Sonnet pricing as a reasonable default.
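
As a worked example of the arithmetic (prices are per million tokens, so divide by 1,000,000): a hypothetical Sonnet message with 2,000 input, 500 output, 10,000 cache-write, and 100,000 cache-read tokens costs:

```python
# Hypothetical token counts for one Sonnet assistant message
inp, out, cw, cr = 2_000, 500, 10_000, 100_000
cost = (inp * 3 + out * 15 + cw * 3.75 + cr * 0.30) / 1_000_000
print(round(cost, 4))  # 0.081
```

Note how cache reads dominate the token count but contribute little to cost — this is typical of long Claude Code sessions.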

### Extraction script

Run this to calculate cost per session. Pass in the session paths from Step 1 as arguments:

```bash
python3 -c "
import json, sys

PRICING = {
    'opus': {'input': 15, 'output': 75, 'cache_write': 18.75, 'cache_read': 1.50},
    'sonnet': {'input': 3, 'output': 15, 'cache_write': 3.75, 'cache_read': 0.30},
    'haiku': {'input': 0.80, 'output': 4, 'cache_write': 1.00, 'cache_read': 0.08},
}

def get_tier(model_name):
    m = (model_name or '').lower()
    if 'opus' in m: return 'opus'
    if 'haiku' in m: return 'haiku'
    return 'sonnet'

results = []
total_cost = 0
for path in sys.argv[1:]:
    session_cost = 0
    session_tokens = 0
    try:
        with open(path) as f:
            for line in f:
                if not line.strip():
                    continue
                entry = json.loads(line)
                if entry.get('type') != 'assistant': continue
                usage = (entry.get('message') or {}).get('usage', {})
                model = (entry.get('message') or {}).get('model', '')
                tier = get_tier(model)
                p = PRICING[tier]
                inp = usage.get('input_tokens', 0)
                out = usage.get('output_tokens', 0)
                cw = usage.get('cache_creation_input_tokens', 0)
                cr = usage.get('cache_read_input_tokens', 0)
                cost = (inp * p['input'] + out * p['output'] + cw * p['cache_write'] + cr * p['cache_read']) / 1_000_000
                session_cost += cost
                session_tokens += inp + out + cw + cr
    except (OSError, json.JSONDecodeError):
        pass
    total_cost += session_cost
    results.append({'path': path, 'cost_usd': round(session_cost, 2), 'tokens': session_tokens})
    print(json.dumps(results[-1]))

print(json.dumps({'total_cost_usd': round(total_cost, 2)}))
" SESSION_PATH_1 SESSION_PATH_2 ...
```

Replace SESSION_PATH_1 SESSION_PATH_2 ... with the actual paths from Step 1. This reads the full JSONL files — that's necessary for accurate cost data, unlike the summary step which only reads the first ~100 lines.

## Step 3: Gather GitHub Activity

The GitHub contributions GraphQL API can be delayed or return incomplete data. Use multiple approaches to ensure reliable data gathering.

### Primary: Search for PRs authored and reviewed today

Use the GitHub search API directly — gh pr list requires being inside a git repo and will fail otherwise.

```bash
# PRs authored today (created or updated)
gh api "search/issues?q=type:pr+author:jamiekt+updated:>=YYYY-MM-DD&per_page=20" 2>/dev/null

# PRs reviewed today
gh api "search/issues?q=type:pr+reviewed-by:jamiekt+updated:>=YYYY-MM-DD&per_page=20" 2>/dev/null
```

### Secondary: Get commit activity from recent repos

```bash
# Check recent repos for today's commits
gh api graphql -f query='
{
  viewer {
    contributionsCollection(from: "YYYY-MM-DDT00:00:00Z", to: "YYYY-MM-DDT23:59:59Z") {
      commitContributionsByRepository {
        repository { nameWithOwner }
        contributions(first: 10) {
          nodes { commitCount occurredAt }
        }
      }
      totalCommitContributions
      totalPullRequestReviewContributions
    }
  }
}'
```

### Supplementary: Get detailed PR info

For each PR found, use gh pr view to get the full description if the body wasn't included:

```bash
gh pr view NUMBER --repo org/repo --json title,body,state,mergedAt,additions,deletions,files
```

### Counting activity

The activity counts in the summary header MUST match what is actually listed in the detail sections below. Do not derive counts from a separate API call — count directly from the items you list. This avoids contradictions between the summary numbers and the detail sections.

- **PRs raised:** count of PRs listed under "PRs Raised" (PRs authored by the user that were created or updated today)
- **PRs merged:** count of PRs listed where mergedAt falls within today
- **PRs reviewed:** count of PRs listed under "PRs Reviewed" (PRs where the user left a review today, excluding self-authored PRs)
- **Commits:** total commits across all PRs and repos. Use the GraphQL totalCommitContributions as a starting point, but if it returns 0 while PRs clearly contain commits from today, count commits from the PR data instead (e.g., via gh api repos/ORG/REPO/pulls/NUMBER/commits).
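
Deriving the header counts from the exact items that will be listed can be sketched like this (the dict shapes are illustrative assumptions about what the API calls in this step return, including a hypothetical `commit_count` field gathered per PR):

```python
def activity_counts(prs_raised, prs_reviewed, target_date):
    """Derive summary counts directly from the items listed in the detail sections."""
    merged = [pr for pr in prs_raised
              if (pr.get("mergedAt") or "").startswith(target_date)]
    commits = sum(pr.get("commit_count", 0) for pr in prs_raised)
    return {
        "prs_raised": len(prs_raised),
        "prs_merged": len(merged),
        "prs_reviewed": len(prs_reviewed),
        "commits": commits,
    }

raised = [
    {"title": "feat: add index", "mergedAt": "2026-03-11T16:00:00Z", "commit_count": 3},
    {"title": "fix: filter", "mergedAt": None, "commit_count": 1},
]
print(activity_counts(raised, [{"title": "chore: bump deps"}], "2026-03-11"))
```

Because the counts come from the same lists that feed the detail sections, the header can never contradict what is listed below it.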

Important: Always reference repositories using the full org/repo format (e.g., hellofresh/katana-airflow). The user works across multiple organisations.

## Step 4: Gather Google Calendar Events

Fetch today's meetings from the primary Google Calendar using the gws CLI tool, filtering out non-meeting events like working locations and out-of-office.

### Fetching today's events

```bash
gws calendar events list --params '{"calendarId": "primary", "timeMin": "YYYY-MM-DDT00:00:00Z", "timeMax": "YYYY-MM-DDT23:59:59Z", "singleEvents": true, "orderBy": "startTime", "eventTypes": "default"}'
```

Replace YYYY-MM-DD with the target date. The eventTypes: "default" filter excludes workingLocation, focusTime, and outOfOffice events.

### Processing each event

First, filter out events the user declined. Check the attendees array for the entry with self: true — if its responseStatus is "declined", skip the entire event. The user didn't attend.

For each remaining event in the items array, extract:

- **Time:** from start.dateTime and end.dateTime (convert to local time)
- **Title:** from summary
- **Attendees who actually attended:** filter the attendees array to those with responseStatus: "accepted" (exclude self: true). Use displayName if available, otherwise derive the name from the email address (e.g., jamie.thomson@hellofresh.com → Jamie Thomson). People who declined or didn't respond should not be listed.
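
The declined-event filter and attendee extraction can be sketched as follows (the event shape follows the Calendar API fields named above; the name-from-email fallback is a simple heuristic):

```python
def attended_event(event: dict) -> bool:
    """Skip events the user declined (the attendee entry with self: true)."""
    for a in event.get("attendees", []):
        if a.get("self") and a.get("responseStatus") == "declined":
            return False
    return True

def attendee_names(event: dict) -> list:
    """List accepted attendees, deriving a name from the email when needed."""
    names = []
    for a in event.get("attendees", []):
        if a.get("self") or a.get("responseStatus") != "accepted":
            continue
        if a.get("displayName"):
            names.append(a["displayName"])
        else:
            # fall back to the local part of the email, e.g. jamie.thomson -> Jamie Thomson
            local = a.get("email", "").split("@")[0]
            names.append(" ".join(p.capitalize() for p in local.split(".")))
    return names

ev = {"attendees": [
    {"self": True, "responseStatus": "accepted"},
    {"email": "jamie.thomson@hellofresh.com", "responseStatus": "accepted"},
    {"email": "someone@hellofresh.com", "responseStatus": "needsAction"},
]}
print(attended_event(ev), attendee_names(ev))  # True ['Jamie Thomson']
```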

### Fetching meeting notes

Events often have attached Google Docs containing meeting notes — either manually attached agendas or auto-generated Gemini summaries. These are in the attachments array, where each attachment has a fileId, title, and mimeType.

For each attachment where mimeType is application/vnd.google-apps.document, fetch its content:

```bash
gws docs documents get --params '{"documentId": "FILE_ID", "includeTabsContent": true}'
```

Extract the text from the response (here `doc` is the parsed JSON returned by the `gws docs documents get` call):

```python
tabs = doc.get('tabs', [])
text = ''
for tab in tabs:
    body = tab.get('documentTab', {}).get('body', {}).get('content', [])
    for elem in body:
        para = elem.get('paragraph', {})
        for el in para.get('elements', []):
            tr = el.get('textRun', {})
            text += tr.get('content', '')
```

Gemini notes docs are particularly useful — they have a structured format with several sections. Extract them as follows:

  1. Summary with sub-topics — the Gemini summary starts with an overview sentence, followed by sub-topic headings (e.g., "Payment Error Threshold Review", "ADL, Marvin, and EIS Updates") each with a paragraph. Capture all of these sub-topics, not just the overview sentence — they contain the substance of what was discussed. Write a concise bullet point for each sub-topic.
  2. Decisions — look for the "Decisions" section. Each decision has a status label: ALIGNED (agreed upon), SHELVED (deferred), or NEEDS FURTHER DISCUSSION (unresolved). Capture every decision with its status label and a brief description.
  3. Suggested next steps — look for "Suggested next steps" at the end. These are formatted as [Person] Action: description. Capture all of them with the assigned owner.

Skip the "More details" section — it's a verbose per-topic transcript and too long for a summary.
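
One way to pull the decision and next-step lines out of the exported plain text (a sketch; it assumes the status labels and the "[Person] Action:" pattern appear literally in the text, which may vary between Gemini versions):

```python
import re

def extract_decisions(text: str) -> list:
    """Find lines carrying a Gemini decision status label."""
    pattern = re.compile(r"(ALIGNED|SHELVED|NEEDS FURTHER DISCUSSION)[:\s-]*(.+)")
    return [(m.group(1), m.group(2).strip()) for m in pattern.finditer(text)]

def extract_next_steps(text: str) -> list:
    """Find '[Person] Action: description' lines."""
    pattern = re.compile(r"\[([^\]]+)\]\s*Action:\s*(.+)")
    return [(m.group(1), m.group(2).strip()) for m in pattern.finditer(text)]

sample = "ALIGNED: Raise the payment error threshold to 2%\n[Jamie] Action: index the audit tables\n"
print(extract_decisions(sample))
print(extract_next_steps(sample))
```

If the patterns don't match, fall back to reading the text directly rather than trusting the regexes.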

If the doc doesn't have Gemini notes, scan the raw meeting notes for anything that reads like a decision or action item.

If the description field contains text (beyond just links), summarise that too.

### Linking to notes

Each attachment has a fileUrl field (e.g., https://docs.google.com/document/d/.../edit). Include this as a markdown link after the meeting notes summary so the user can click through to the full doc. If a meeting has multiple doc attachments, prefer linking to the Gemini notes doc (title contains "Notes by Gemini") as it has the best summary. If there's no Gemini doc, link to whichever doc attachment is available.

### Handling failures gracefully

If the Google Docs API call fails for a particular attachment (e.g., permission denied), just note the attachment title and skip it. The meeting should still appear in the summary — just without the notes content.

## Step 5: Gather Google Workspace Document Activity

Find all Google Workspace documents (Docs, Sheets, Slides, Forms) that the user viewed or edited on the target date using the Google Drive API via the gws CLI.

### Finding documents viewed on the target date

The viewedByMeTime field tracks when the user last opened a document. Query for Workspace files the user viewed on the target date:

```bash
gws drive files list --params '{"q": "viewedByMeTime > '\''YYYY-MM-DDT00:00:00'\'' and viewedByMeTime < '\''YYYY-MM-DDT23:59:59'\'' and (mimeType = '\''application/vnd.google-apps.document'\'' or mimeType = '\''application/vnd.google-apps.spreadsheet'\'' or mimeType = '\''application/vnd.google-apps.presentation'\'' or mimeType = '\''application/vnd.google-apps.form'\''))", "fields": "files(id,name,mimeType,modifiedByMeTime,viewedByMeTime,webViewLink)", "pageSize": 50}'
```

Replace YYYY-MM-DD with the target date.

Note: viewedByMeTime works in query filters but modifiedByMeTime does not. Use viewedByMeTime for the query, then classify docs client-side.

### Classifying documents: edited vs. viewed

For each file returned, check the modifiedByMeTime field:

- **Edited:** modifiedByMeTime falls on the target date — the user made changes
- **Viewed only:** modifiedByMeTime is absent or falls on a different date — the user opened it but didn't edit

Use a script like this to classify:

```bash
python3 -c "
import json, sys

target_date = 'YYYY-MM-DD'
data = json.loads(sys.stdin.read())
edited = []
viewed = []

for f in data.get('files', []):
    mbmt = f.get('modifiedByMeTime', '')
    entry = {
        'id': f.get('id', ''),
        'name': f.get('name', ''),
        'mimeType': f.get('mimeType', ''),
        'webViewLink': f.get('webViewLink', ''),
        'viewedByMeTime': f.get('viewedByMeTime', ''),
        'modifiedByMeTime': mbmt
    }
    if mbmt.startswith(target_date):
        edited.append(entry)
    else:
        viewed.append(entry)

print(json.dumps({'edited': edited, 'viewed': viewed}, indent=2))
" <<< 'PASTE_API_RESPONSE_HERE'
```

### Summarising edited documents

For documents the user edited, try to produce a brief summary of what the document is about. The approach depends on the document type:

**Google Docs** — Export the document as plain text and summarise its content in one or two sentences:

```bash
gws drive files export --params '{"fileId": "FILE_ID", "mimeType": "text/plain"}' --output /tmp/doc_FILE_ID.txt
```

Read the first ~2000 characters of the exported text. Produce a one-line summary describing what the document is about and, if apparent from the content, what kind of edits were likely made (e.g., "Meeting notes for the Q1 planning session" or "Technical design doc for the new auth service").

**Google Sheets** — Export the first sheet as CSV and read the header row plus a few data rows to understand context:

```bash
gws drive files export --params '{"fileId": "FILE_ID", "mimeType": "text/csv"}' --output /tmp/sheet_FILE_ID.csv
```

Read the first ~10 lines to understand what the spreadsheet tracks. Produce a one-line summary (e.g., "Tracking spreadsheet for ingredient spend projections with columns for delivery date, ODL values").

**Google Slides and Forms** — Just list the document name; content export for these is less useful for summarisation.

### Counting revisions (optional enrichment)

For edited documents, you can optionally check how many revisions the user made on the target date to indicate the extent of editing:

```bash
gws drive revisions list --params '{"fileId": "FILE_ID", "fields": "revisions(id,modifiedTime,lastModifyingUser(emailAddress))", "pageSize": 200}'
```

Filter revisions where modifiedTime falls on the target date and lastModifyingUser.emailAddress matches the user (jamie.thomson@hellofresh.com). The count gives a rough sense of how much editing was done — Google coalesces revisions, so even a few revisions can represent significant work.
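
Filtering the revisions response might look like this (the response shape follows the fields requested above; the email is the one stated in this step):

```python
def count_my_revisions(revisions: list, target_date: str, my_email: str) -> int:
    """Count revisions made by the user on the target date."""
    return sum(
        1 for r in revisions
        if r.get("modifiedTime", "").startswith(target_date)
        and r.get("lastModifyingUser", {}).get("emailAddress") == my_email
    )

revs = [
    {"modifiedTime": "2026-03-11T09:12:00Z",
     "lastModifyingUser": {"emailAddress": "jamie.thomson@hellofresh.com"}},
    {"modifiedTime": "2026-03-10T17:00:00Z",
     "lastModifyingUser": {"emailAddress": "jamie.thomson@hellofresh.com"}},
    {"modifiedTime": "2026-03-11T10:00:00Z",
     "lastModifyingUser": {"emailAddress": "someone.else@hellofresh.com"}},
]
print(count_my_revisions(revs, "2026-03-11", "jamie.thomson@hellofresh.com"))  # 1
```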

### Handling large result sets

If more than ~10 documents were edited, summarise rather than detail each one. Group by type and list names. Only export and summarise content for up to 5-6 edited documents to keep the summary concise and avoid excessive API calls.

For viewed-only documents, always just list the names — no content export needed.

## Step 6: Format the Summary

The summary should be informative enough that someone reading it tomorrow understands what was accomplished. Include brief descriptions of what each PR/commit does and why.

Format the output as markdown:

```markdown
## End of Day Summary

### Claude Code Sessions
- **[project-name]** (branch `[branch-name]`): [one-line summary of what was worked on]

### Claude Code Cost
| Session | Tokens | Cost |
|---------|--------|------|
| [project-name] | [N] | $[X.XX] |
| **Total** | **[N]** | **$[X.XX]** |

### Google Calendar

#### [HH:MM–HH:MM] [Meeting title] ([Notes](link-to-google-doc))
**Attended by:** [Name 1], [Name 2], [Name 3]

**Discussion:**
- **[Sub-topic 1]:** [concise summary of this topic]
- **[Sub-topic 2]:** [concise summary of this topic]
- **[Sub-topic 3]:** [concise summary of this topic]

**Decisions:**
- [ALIGNED] [Decision description]
- [SHELVED] [Decision description]
- [NEEDS FURTHER DISCUSSION] [Decision description]

**Next steps:**
- **[Owner]** [Action description]
- **[Owner]** [Action description]

### Google Workspace Documents

#### Edited
- **[Document name]** (Sheet) — [one-line summary of what the doc is about] ([link](url))
- **[Document name]** (Doc) — [one-line summary] ([link](url))

#### Viewed
- [Document name] (Sheet) ([link](url))
- [Document name] (Doc) ([link](url))

### GitHub Activity
- Commits: [N]
- PRs raised: [N]
- PRs reviewed: [N]
- PRs merged: [N]

#### PRs Raised
- **[PR title]** ([org/repo#number](url)) — [STATE]
  - [One-sentence description of what this change does and why]

#### PRs Reviewed
- **[PR title]** ([org/repo#number](url))
  - [Brief note on what the PR is about]

#### Commits
- `[short-hash]` on [org/repo]: [commit message] — [brief description of the change]

### Key Themes
- [Theme 1: e.g., "Database performance improvements"]
- [Theme 2: e.g., "Cross-team code reviews"]

### Summary
[2-3 sentence narrative of the day's work: what was the main focus, what was accomplished, what's still in progress.]
```

Example of a good PR entry:

```markdown
- **[NAOPS-9451] feat: Indexes on NKs in orders_enriched_audit** ([hellofresh/katana-airflow#6041](https://github.com/hellofresh/katana-airflow/pull/6041)) — OPEN
  - Added database indexes on natural key columns in audit tables after discovering queries on physical_box_item were taking over a minute. Indexes bring query time to under 1 second.
```

Example of a good commit entry:

```markdown
- `689c3a85` on hellofresh/katana-airflow: [NAOPS-9457] fix: Filter on order_nr — Fixed customer_order_changes() to reference the correct field after order_nr was consolidated.
```

If there are no items for a section, omit that subsection entirely.

## Step 7: Append to Obsidian Daily Note

Append the formatted summary to the correct daily note — this must match the date being summarised, not necessarily today's date. If the user asks to summarise yesterday, the summary goes to yesterday's note, etc.

**If summarising today:**

```bash
obsidian daily:append content="[the formatted markdown summary]"
```

**If summarising a different date** (e.g., yesterday or a specific date): The daily:append command always targets today's note and doesn't accept a date parameter. For other dates, use the append command with the explicit file path:

```bash
obsidian append path="DailyNotes/YYYY/YYYY-MM-DD.md" content="[the formatted markdown summary]"
```

Replace YYYY-MM-DD with the target date (e.g., DailyNotes/2026/2026-03-05.md for March 5th 2026).

**Escaping:** Wrap the content in double quotes and escape any inner double quotes. Use \n for newlines — the Obsidian CLI interprets \n as a newline.
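
A sketch of preparing the content argument (this assumes the CLI behaviour described above — `\n` becomes a newline and inner double quotes need escaping; the helper name is illustrative):

```python
def obsidian_content_arg(markdown: str) -> str:
    """Escape a markdown summary for the obsidian CLI content="..." argument."""
    return markdown.replace('"', '\\"').replace("\n", "\\n")

summary = '## End of Day Summary\n- Reviewed "katana-airflow" PRs'
print(f'obsidian daily:append content="{obsidian_content_arg(summary)}"')
```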

After appending, confirm to the user which daily note the summary was added to.

## Notes

- The user's Obsidian vault is at ~/github/jamiekt/notes
- Daily notes are at notes/DailyNotes/YYYY/YYYY-MM-DD.md (use the correct year from the target date)
- The obsidian CLI is available as just `obsidian` in the shell
- GitHub user: jamiekt
- This skill searches across ALL GitHub repos, not just one organization