Kamesh Akella (kami619)
  • Red Hat
  • Boston
  • X @kami619
kami619 / swe-benchmark-plan.md
Created February 4, 2026 14:48
Benchmark Plan

SWE-bench Experiments

Quantify the impact of AgentReady settings against the SWE-bench baseline using both SWE-agent and Claude Code.

Quick Start

# 1. Run agent on repository
agentready experiment run-agent sweagent \
 --repo-path /path/to/repo \
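
The quick-start command above is truncated in this preview. To illustrate the baseline side of the experiment, here is a minimal Python sketch that loads SWE-bench Lite task instances with the Hugging Face datasets library; the dataset and field names are taken from the public princeton-nlp/SWE-bench_Lite dataset card, and the helper itself is an assumption rather than part of the benchmark plan.

# Minimal sketch (assumption, not part of the plan above): load SWE-bench Lite
# task instances to serve as the baseline task set for the agent runs.
# Requires `pip install datasets`.
from datasets import load_dataset

def load_swebench_tasks(split: str = "test"):
    """Return (instance_id, repo, problem_statement) tuples for each task."""
    ds = load_dataset("princeton-nlp/SWE-bench_Lite", split=split)
    return [(row["instance_id"], row["repo"], row["problem_statement"]) for row in ds]

if __name__ == "__main__":
    tasks = load_swebench_tasks()
    print(f"Loaded {len(tasks)} SWE-bench Lite instances")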
kami619 / rh_gh_repo_list.txt
Created February 4, 2026 14:37
Red Hat & Open Data Hub Baseline Study
# Red Hat & Open Data Hub Baseline Study
# Curated list focused on RH product upstreams and ODH ecosystem
# Target: 50+ repositories across AI/ML, Container, Platform, and DevOps domains
# =============================================================================
# CATEGORY A: Open Data Hub (ODH) GitHub Organization - AI/ML Platform
# =============================================================================
# Expected: Silver-Gold (actively maintained, Python-heavy, ML focus)
# Core ODH Platform Components
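
The repository entries themselves are truncated in this preview. As a hedged sketch of how such a list can be assembled, the following Python snippet enumerates the repositories of the opendatahub-io GitHub organization through the public REST API; the org name and endpoint are the only specifics assumed here, and unauthenticated requests are rate-limited.

# Hedged sketch: list repositories in the opendatahub-io GitHub organization
# via the public REST API, as a starting point for curating a list like the
# one above. Requires `pip install requests`.
import requests

def list_org_repos(org: str = "opendatahub-io", per_page: int = 100):
    repos, page = [], 1
    while True:
        resp = requests.get(
            f"https://api.github.com/orgs/{org}/repos",
            params={"per_page": per_page, "page": page},
            headers={"Accept": "application/vnd.github+json"},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            break
        repos.extend(repo["full_name"] for repo in batch)
        page += 1
    return repos

if __name__ == "__main__":
    for name in list_org_repos():
        print(name)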
kami619 / gh-mcp.md
Created June 10, 2025 09:00
gh mcp server design

Integrating MCP (Model Context Protocol) servers with GitHub for test analysis involves several coordinated steps to enable AI-driven, automated workflows for CI/CD and test management. Here’s a structured overview of the process:

1. Set Up the GitHub MCP Server

  • Generate a GitHub Personal Access Token:
    Create a token with the necessary repository and workflow permissions to allow the MCP server to interact with your GitHub data[4][3].
  • Launch the MCP Server:
    The official GitHub MCP server can be started using Docker:
    docker run -i --rm -e GITHUB_PERSONAL_ACCESS_TOKEN=<your-token> ghcr.io/github/github-mcp-server
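
For programmatic use, the same container can be launched from Python over stdio; the image name and environment variable come from the command above, while the wrapper itself is a minimal sketch assuming Docker is installed and the token is exported in the environment.

# Minimal sketch: start the official GitHub MCP server (stdio transport) from
# Python, reusing the docker invocation above. Assumes Docker is available and
# GITHUB_PERSONAL_ACCESS_TOKEN is set in the environment.
import os
import subprocess

def start_github_mcp_server() -> subprocess.Popen:
    token = os.environ["GITHUB_PERSONAL_ACCESS_TOKEN"]
    return subprocess.Popen(
        [
            "docker", "run", "-i", "--rm",
            "-e", f"GITHUB_PERSONAL_ACCESS_TOKEN={token}",
            "ghcr.io/github/github-mcp-server",
        ],
        stdin=subprocess.PIPE,   # MCP clients exchange JSON-RPC over stdin/stdout
        stdout=subprocess.PIPE,
    )

if __name__ == "__main__":
    proc = start_github_mcp_server()
    print("GitHub MCP server started, PID:", proc.pid)
    proc.terminate()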
kami619 / text_classification.py
Created January 8, 2025 16:36
Improved and modularized code for text classification using an aggression detection model
import argparse
import torch
import torch.nn.functional as F
from transformers import AutoModelForSequenceClassification, AutoTokenizer
# Placeholder for speech-to-text library (e.g., SpeechRecognition)
# import speech_recognition as sr
import os
import numpy as np
import pandas as pd
from torch.utils.data import Dataset, TensorDataset, DataLoader, RandomSampler
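# --- Hedged continuation sketch ---
# The preview ends after the imports above. The function below is a minimal,
# assumed illustration of how these imports are typically combined for
# inference; MODEL_NAME is a hypothetical placeholder, not the checkpoint used
# in the full script.
MODEL_NAME = "your-org/aggression-detection-model"  # hypothetical placeholder

def classify_text(text: str, model_name: str = MODEL_NAME):
    """Return (predicted_label_index, class_probabilities) for a single text."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name)
    model.eval()
    inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = F.softmax(logits, dim=-1).squeeze(0)
    return int(torch.argmax(probs).item()), probs.tolist()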