# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
This is a GitHub Account and Organization Migration Toolkit for transferring 600+ repositories from a personal account (pythoninthegrass) to an organization (pythoninthegrass2), then swapping names so the organization takes the original account name.
Current Status: 594/600 repos successfully transferred (99% success rate). Ready for manual rename operations.
## Tech Stack
- Python 3.13+ with PEP 723 self-contained scripts (no virtualenv needed)
- uv package manager for running scripts
- GitHub CLI (gh) for API access
- python-decouple for environment configuration
- tqdm for progress bars
## Running Scripts

All scripts are PEP 723 compliant with embedded dependencies. Run them directly with uv:

```bash
# Pre-migration audit (generates inventory backup)
./scripts/00_pre_migration_audit.py
# or: uv run scripts/00_pre_migration_audit.py

# Transfer repositories (main migration script)
./scripts/01_transfer_repos.py

# Post-migration validation
./scripts/02_post_migration_validation.py

# Retry failed transfers
./scripts/03_retry_failed_transfers.py

# Delete all forks from organization (if needed)
./scripts/04_delete_forks.py
```

Force cache refresh (ignore 30-minute cache):

```bash
./scripts/00_pre_migration_audit.py --force-refresh
./scripts/01_transfer_repos.py -f
```

## Configuration

All scripts read from a `.env` file:
```bash
# Copy template
cp .env.example .env

# Edit configuration
nano .env
```

Key variables:

- `SOURCE_OWNER` - Personal account (pythoninthegrass)
- `TARGET_ORG` - Target organization (pythoninthegrass2)
- `DRY_RUN` - Test mode (true/false)
- `PILOT_MODE` - Test with small batch (true/false)
- `PILOT_REPOS` - Comma-separated repo list for pilot
- `BATCH_SIZE` - Repos per batch (default: 9)
- `MAX_CONCURRENT_TRANSFERS` - Parallel transfers (default: 3)
- `DELAY_BETWEEN_TRANSFERS` - Seconds between transfers (default: 1)
- `DELAY_BETWEEN_BATCHES` - Seconds between batches (default: 3)
- `EXCLUDE_FORKS` - Skip forks (true/false)
- `EXCLUDE_ARCHIVED` - Skip archived repos (true/false)
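The real scripts read these values from `.env` via python-decouple's `config()`. A stdlib-only sketch of the same lookup-and-cast behavior (the `env` helper here is hypothetical, standing in for `decouple.config`):

```python
import os

# Hypothetical stand-in for python-decouple's config(); the real scripts
# read from .env, this sketch uses os.environ directly.
def env(key: str, default: str, cast=str):
    raw = os.environ.get(key, default)
    if cast is bool:
        # decouple-style truthiness for string booleans
        return str(raw).strip().lower() in ("true", "1", "yes")
    return cast(raw)

os.environ["DRY_RUN"] = "true"  # demo value only

dry_run = env("DRY_RUN", "false", cast=bool)
batch_size = env("BATCH_SIZE", "9", cast=int)
max_concurrent = env("MAX_CONCURRENT_TRANSFERS", "3", cast=int)
```

Unset variables fall back to the defaults listed above, so a bare `.env` is safe to start from.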
## Useful Commands

```bash
# Check authentication
gh auth status

# List repos in source account
gh repo list pythoninthegrass --limit 100

# List repos in target org
gh repo list pythoninthegrass2 --limit 100

# Check specific repo location
gh repo view pythoninthegrass2/repo-name
```

## Migration Workflow

```
00_pre_migration_audit.py → Inventory & backup
        ↓
01_transfer_repos.py → Automated transfers
        ↓
04_delete_forks.py → Delete forks (if needed)
        ↓
[Manual Web UI Steps] → Rename account/org
        ↓
02_post_migration_validation.py → Verify success
```
## Caching

Both audit and transfer scripts implement a 30-minute cache.

Location: `.cache/repos_{owner}.json`

Benefits:

- Instant subsequent runs (no API calls)
- Shared cache between scripts
- Automatic TTL expiration after 30 minutes

Cache structure:

```json
{
  "owner": "pythoninthegrass",
  "timestamp": "2026-01-26T15:22:22.245556",
  "repositories": [
    {
      "name": "repo-name",
      "is_fork": false,
      "is_archived": false,
      "full_name": "pythoninthegrass/repo-name",
      ...
    }
  ]
}
```

Bypass the cache with the `--force-refresh` or `-f` flag.
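The TTL check the scripts perform can be sketched as follows, using the file layout above (function names `load_cache`/`save_cache` are illustrative, not the scripts' actual API):

```python
import json
import tempfile
from pathlib import Path
from datetime import datetime, timedelta

CACHE_TTL = timedelta(minutes=30)

def load_cache(path: Path):
    """Return the cached repo list if fresh, else None (caller refetches)."""
    if not path.exists():
        return None
    data = json.loads(path.read_text())
    ts = datetime.fromisoformat(data["timestamp"])
    if datetime.now() - ts > CACHE_TTL:
        return None  # expired: 30-minute TTL
    return data["repositories"]

def save_cache(path: Path, owner: str, repos: list):
    """Write the cache in the structure shown above."""
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps({
        "owner": owner,
        "timestamp": datetime.now().isoformat(),
        "repositories": repos,
    }))

# demo round-trip in a temp dir
cache_dir = Path(tempfile.mkdtemp())
cache_file = cache_dir / "repos_pythoninthegrass.json"
save_cache(cache_file, "pythoninthegrass", [{"name": "repo-name", "is_fork": False}])
repos = load_cache(cache_file)
```

`--force-refresh` simply skips `load_cache` and goes straight to the API.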
## Parallel Transfer Architecture

`01_transfer_repos.py` uses `concurrent.futures.ThreadPoolExecutor`:

- Processes repos in batches (default: 9 repos per batch)
- Runs 3 concurrent transfers within each batch
- ~3-4x faster than sequential (100 repos in ~3 minutes vs ~8 minutes)
- Rate limiting: small delays between concurrent operations
- Error handling: each thread handles failures independently

Transfer flow:

```
ThreadPoolExecutor(max_workers=3)
├── Thread 1: Transfer repo A
├── Thread 2: Transfer repo B
└── Thread 3: Transfer repo C
        ↓
Wait for batch completion
        ↓
Small delay (3 seconds)
        ↓
Next batch...
```

### HTTP 422 "Already Transferred"

Scripts detect repos already in the target org and skip them:

- Pre-transfer verification check
- Post-error verification for 422 responses
- Marked as "already_transferred" (not failures)

Status categories:

- `success` - New transfer completed
- `already_transferred` - Repo exists in target
- `error` - Transfer failed
- `timeout` - Request timed out
- `dry_run` - Dry-run mode (no actual transfer)

Failed transfers: results are stored in `results/transfer_results_YYYYMMDD_HHMMSS.csv`.
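The batched executor loop can be sketched like this, with a dry-run stub in place of the real `gh api` call (`transfer` and `run_batches` are illustrative names, and the CSV columns are abbreviated):

```python
import csv
import tempfile
from concurrent.futures import ThreadPoolExecutor, as_completed
from pathlib import Path

BATCH_SIZE = 9
MAX_WORKERS = 3

def transfer(repo: str, dry_run: bool = True) -> dict:
    """Stub: the real script shells out to `gh api`; this only models
    the status categories without touching the network."""
    if dry_run:
        return {"repo": repo, "status": "dry_run"}
    raise NotImplementedError("live transfers are out of scope for this sketch")

def run_batches(repos: list, results_csv: Path) -> list:
    results = []
    for i in range(0, len(repos), BATCH_SIZE):
        batch = repos[i:i + BATCH_SIZE]
        with ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
            futures = {pool.submit(transfer, r): r for r in batch}
            for fut in as_completed(futures):
                try:
                    results.append(fut.result())
                except Exception as exc:  # each thread fails independently
                    results.append({"repo": futures[fut], "status": "error",
                                    "detail": str(exc)})
    with results_csv.open("w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=["repo", "status"],
                                extrasaction="ignore")
        writer.writeheader()
        writer.writerows(results)
    return results

out = Path(tempfile.mkdtemp()) / "transfer_results.csv"
results = run_batches([f"repo-{n}" for n in range(12)], out)
```

Twelve repos split into a batch of 9 and a batch of 3, each drained by three workers before the next batch starts.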
## GitHub API Usage

Scripts use the GitHub GraphQL API (via `gh api graphql`) for efficient data fetching.

Query pattern:

```graphql
query($owner: String!, $cursor: String) {
  user(login: $owner) {
    repositories(first: 100, after: $cursor, ownerAffiliations: OWNER) {
      totalCount
      pageInfo {
        hasNextPage
        endCursor
      }
      nodes {
        name
        isFork
        isArchived
        hasIssuesEnabled
        hasWikiEnabled
        ...
      }
    }
  }
}
```

Pagination: automatic cursor-based pagination for 100+ repos.
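The pagination reduces to a cursor walk over `pageInfo`. In this sketch, `fetch_page` stands in for the `gh api graphql` call and returns the `repositories` object from the response:

```python
def paginate(fetch_page):
    """Cursor-based pagination over the repositories connection.
    fetch_page(cursor) must return the `repositories` object from the
    GraphQL response; in the real scripts it wraps `gh api graphql`."""
    repos, cursor = [], None
    while True:
        page = fetch_page(cursor)
        repos.extend(page["nodes"])
        if not page["pageInfo"]["hasNextPage"]:
            return repos
        cursor = page["pageInfo"]["endCursor"]

# fake two-page response standing in for the gh CLI call
pages = {
    None: {"nodes": [{"name": "a"}],
           "pageInfo": {"hasNextPage": True, "endCursor": "C1"}},
    "C1": {"nodes": [{"name": "b"}],
           "pageInfo": {"hasNextPage": False, "endCursor": None}},
}
repos = paginate(lambda cursor: pages[cursor])
```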
Repository transfers use the REST API:

```bash
gh api --method POST /repos/{owner}/{repo}/transfer \
  -f new_owner={target_org}
```

Limitations:

- Private forks cannot be transferred if the parent repo has restrictions
- Repos with pending transfers must wait 24 hours
- Name collisions (already taken) require manual resolution
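The post-error verification for 422 responses amounts to a classification step; a minimal sketch (`classify_transfer_error` is an illustrative name, not the scripts' actual API):

```python
def classify_transfer_error(http_status: int, repo_in_target: bool) -> str:
    """Map a failed transfer attempt onto the status categories used in
    the results CSV. A 422 only counts as a failure if the post-error
    check shows the repo did NOT land in the target org."""
    if http_status == 422 and repo_in_target:
        return "already_transferred"
    return "error"

# 422 but the repo is already in the target org: not a failure
status = classify_transfer_error(422, repo_in_target=True)
```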
## PEP 723 Inline Dependencies

All scripts include inline dependency specifications:

```python
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.13"
# dependencies = [
#     "python-decouple>=3.8",
#     "tqdm>=4.66.0",
# ]
# [tool.uv]
# exclude-newer = "2025-12-31T00:00:00Z"
# ///
```

This enables direct execution without a virtualenv:

```bash
./scripts/00_pre_migration_audit.py  # uv handles dependencies
```

## Pre-Migration Audit

`00_pre_migration_audit.py` captures:
- Repository metadata (name, owner, visibility)
- Settings (issues, wiki, pages enabled)
- Statistics (stars, forks, open issues)
- Timestamps (created, updated, last pushed)
- Topics and primary language
- Fork and archive status
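The capture-and-write step can be sketched as follows (the column set is abbreviated and `write_inventory` is an illustrative name; the real CSV carries the full metadata listed above):

```python
import csv
import json
import tempfile
from datetime import datetime
from pathlib import Path

def write_inventory(repos: list, out_dir: Path):
    """Write the flat CSV and full JSON backups the audit produces."""
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    csv_path = out_dir / f"repos_inventory_{stamp}.csv"
    json_path = out_dir / f"repos_full_{stamp}.json"
    with csv_path.open("w", newline="") as fh:
        writer = csv.DictWriter(
            fh,
            fieldnames=["name", "is_fork", "is_archived"],  # abbreviated
            extrasaction="ignore",  # extra JSON-only fields are dropped from CSV
        )
        writer.writeheader()
        writer.writerows(repos)
    json_path.write_text(json.dumps(repos, indent=2))  # full records
    return csv_path, json_path

backup = Path(tempfile.mkdtemp())
csv_path, json_path = write_inventory(
    [{"name": "repo-name", "is_fork": False, "is_archived": False, "stars": 1}],
    backup,
)
```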
Output formats:

- CSV: `backup/repos_inventory_YYYYMMDD_HHMMSS.csv`
- JSON: `backup/repos_full_YYYYMMDD_HHMMSS.json`
## Filtering

`01_transfer_repos.py` supports multiple filtering modes.

Pilot mode (test with specific repos):

```bash
PILOT_MODE=true
PILOT_REPOS=repo1,repo2,repo3
```

Filter by type:

```bash
EXCLUDE_FORKS=true     # Skip forked repos
EXCLUDE_ARCHIVED=true  # Skip archived repos
```

Both filters can be combined.
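Combined, the filters behave like this sketch (`select_repos` is an illustrative name):

```python
def select_repos(repos, pilot_mode=False, pilot_repos=(),
                 exclude_forks=False, exclude_archived=False):
    """Apply the pilot and fork/archive filters; all can be combined."""
    if pilot_mode:
        wanted = set(pilot_repos)
        repos = [r for r in repos if r["name"] in wanted]
    if exclude_forks:
        repos = [r for r in repos if not r["is_fork"]]
    if exclude_archived:
        repos = [r for r in repos if not r["is_archived"]]
    return repos

inventory = [
    {"name": "app",   "is_fork": False, "is_archived": False},
    {"name": "old",   "is_fork": False, "is_archived": True},
    {"name": "fork1", "is_fork": True,  "is_archived": False},
]
picked = select_repos(inventory, exclude_forks=True, exclude_archived=True)
pilot = select_repos(inventory, pilot_mode=True, pilot_repos=("old",))
```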
## Post-Migration Validation

`02_post_migration_validation.py` verifies:
- Repository count matches pre-migration audit
- Metadata preserved (stars, forks, issues)
- Access control intact
- Missing repositories identified
Uses cached audit data if available.
## Retry Logic

`03_retry_failed_transfers.py` implements exponential backoff:

- Initial delay: 5 seconds
- Max retries: 3
- Delay multiplier: 2x per retry

Retry conditions:

- Timeout errors
- Rate limit errors (429)
- Temporary network failures

Permanent failures (no retry):

- Pending transfers (24-hour cooldown)
- Not found (404)
- Permission denied (403)
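The backoff schedule (5s → 10s → 20s) and the retryable/permanent split can be sketched as follows (names are illustrative; a fake clock records the sleeps instead of waiting):

```python
import time

# error kinds that should never be retried, per the list above
PERMANENT = ("pending_transfer", "not_found", "permission_denied")

def retry_with_backoff(attempt_fn, max_retries=3, initial_delay=5,
                       multiplier=2, sleep=time.sleep):
    """attempt_fn() returns (error_kind, succeeded). Delays 5s, 10s, ...
    between attempts; permanent failures abort immediately."""
    delay = initial_delay
    for attempt in range(max_retries):
        kind, ok = attempt_fn()
        if ok:
            return "success"
        if kind in PERMANENT:
            return "permanent_failure"
        if attempt < max_retries - 1:
            sleep(delay)
            delay *= multiplier
    return "gave_up"

# demo: two timeouts, then success; sleeps are recorded, not slept
slept = []
outcomes = iter([("timeout", False), ("timeout", False), ("", True)])
status = retry_with_backoff(lambda: next(outcomes), sleep=slept.append)
```

Injecting `sleep` keeps the schedule testable without real delays.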
## Fork Deletion

`04_delete_forks.py` deletes all forked repositories.

Features:

- Parallel deletion (5 concurrent by default)
- Batch processing (20 repos per batch)
- GraphQL API for efficient fork discovery
- Confirmation prompt (must type "DELETE")
- Progress tracking with tqdm
- Dry-run mode for safety

Use case: some organizations have hundreds of forks that can block org renames or cause naming conflicts. This script bulk-deletes them.

Configuration:

```bash
MAX_CONCURRENT_DELETIONS=5  # Parallel deletions
DRY_RUN=false               # Live deletion mode
```

Performance: ~2-3 minutes for 384 forks (5 concurrent).
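The confirmation and dry-run guards can be sketched like this (the function name and return strings are illustrative):

```python
def confirm_and_delete(forks, typed: str, dry_run: bool, delete_fn):
    """Refuse unless the operator typed exactly "DELETE"; honor dry-run.
    The real script fans delete_fn out 5-wide with a ThreadPoolExecutor."""
    if typed != "DELETE":
        return "aborted"
    if dry_run:
        return f"dry_run: would delete {len(forks)} forks"
    for fork in forks:
        delete_fn(fork)
    return f"deleted {len(forks)} forks"

deleted = []
# lowercase "delete" must not pass the exact-match check
wrong = confirm_and_delete(["f1"], "delete", dry_run=False, delete_fn=deleted.append)
# dry-run: nothing is actually deleted
dry = confirm_and_delete(["f1", "f2"], "DELETE", dry_run=True, delete_fn=deleted.append)
```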
## Migration Checklist

- Run audit script to generate backup
- Review CSV output for exclusions
- Test with pilot mode (5-10 repos)
- Wait 24 hours to verify pilot
- Set `PILOT_MODE=false` and `DRY_RUN=false`
- Run transfer script (monitor progress)
- Handle failures with retry script
- Verify transfer counts
- Rename personal account: `pythoninthegrass` → `pythoninthegrass_og`
- Rename organization: `pythoninthegrass2` → `pythoninthegrass`
- Verify with `gh auth status` and API calls

See docs/MANUAL_RENAME_PROCEDURE.md for detailed rename steps.

Post-migration:

- Run validation script
- Update GitHub Actions secrets if needed
- Verify webhooks and integrations
- GitHub redirects handle URL changes for 90 days (no local clone updates needed)
## Migration Results

✅ Migration completed successfully: 2026-01-26

Final results:

- Total repos: 600
- Successfully transferred: 594 (99% success rate)
  - 463 new transfers
  - 131 already existed
- Failed: 6 repos (edge cases)
  - 3 actually succeeded despite 422 errors
  - 2 private forks (can't transfer due to GitHub restrictions)
  - 1 name collision
- Forks deleted: 384 (cleared path for org rename)

Account/org status:

- Personal account renamed: `pythoninthegrass` → `thepythoninthegrass`
- Organization renamed: `pythoninthegrass2` → `pythoninthegrass`
- All repos now under the organization with the original username

Performance:

- Repository transfers: ~9 minutes (3 concurrent)
- Fork deletion: ~2-3 minutes (5 concurrent)
- Total active time: ~3 hours
- Success rate: 99%
## Troubleshooting

Install prerequisites:

```bash
brew install gh  # macOS
# or: https://cli.github.com/
curl -LsSf https://astral.sh/uv/install.sh | sh
```

Authenticate:

```bash
gh auth login
gh auth status
```

Rate limiting — increase delays in `.env`:

```bash
DELAY_BETWEEN_TRANSFERS=5
DELAY_BETWEEN_BATCHES=30
MAX_CONCURRENT_TRANSFERS=2  # Reduce parallelism
```

"Validation Failed" can mean:

- Repo already transferred (check target org)
- Private fork with parent restrictions
- Name collision in target org
- Pending transfer (24-hour cooldown)

Check repo status:

```bash
gh repo view pythoninthegrass2/repo-name  # Exists?
gh api /repos/pythoninthegrass/repo-name --jq '{fork: .fork, private: .private}'
```

For failed transfers:

- Check `results/transfer_results_*.csv` for error details
- Try a manual transfer:
  `gh api --method POST /repos/pythoninthegrass/repo-name/transfer -f new_owner=pythoninthegrass2`
- Run the retry script: `./scripts/03_retry_failed_transfers.py`

Stale cache — clear it and force fresh data:

```bash
rm -rf .cache/
./scripts/00_pre_migration_audit.py --force-refresh
```

## Safety Features

- Dry-run mode: test without changes (`DRY_RUN=true`)
- Pilot mode: test a small batch first (`PILOT_MODE=true`)
- Confirmation prompts: prevent accidental transfers
- Rate limiting: avoids API bans
- Error recovery: retry script for transient failures
- Pre-migration backup: complete audit before changes
- Validation: verify migration success
## GitHub API Constraints

- Rate limit: 5,000 requests/hour (authenticated)
- Transfer cooldown: 24 hours between transfer attempts per repo
- Concurrent transfers: no official limit, but rate limiting applies
- Rename redirects: 90-day redirect period
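A run's API budget can be sanity-checked against the hourly limit before starting; a rough sketch (`plan_pacing` and `calls_per_repo=2` — one transfer POST plus one verification GET — are assumptions, not measured values):

```python
import math

def plan_pacing(total_repos, batch_size=9, delay_between_transfers=1,
                delay_between_batches=3, calls_per_repo=2, hourly_limit=5000):
    """Estimate API calls and built-in delay overhead for a run."""
    batches = math.ceil(total_repos / batch_size)
    api_calls = total_repos * calls_per_repo
    delay_seconds = (total_repos * delay_between_transfers
                     + (batches - 1) * delay_between_batches)
    return {
        "api_calls": api_calls,
        "batches": batches,
        "delay_seconds": delay_seconds,
        "within_hourly_limit": api_calls <= hourly_limit,
    }

plan = plan_pacing(600)  # the full migration, at default pacing
```

At the defaults, 600 repos fit comfortably inside one authenticated rate-limit window.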
What transfers with a repository:

- ✅ Repository metadata (stars, watchers, forks)
- ✅ Issues and pull requests
- ✅ Commits and history
- ✅ Branch protection rules
- ✅ Webhooks (URLs may need updates)
- ✅ Deploy keys
- ✅ Repository settings

What does not transfer:

- ❌ GitHub Actions secrets (must be re-added manually)
- ❌ Some third-party integrations (may need reconfiguration)
## Documentation

- README.md - User guide and setup instructions
- docs/MANUAL_RENAME_PROCEDURE.md - Detailed rename steps
- .env.example - Configuration template
- This file (CLAUDE.md) - AI agent guidance
## Future Enhancements

If extending this toolkit:
- Add progress persistence: Resume interrupted transfers
- Webhook updates: Automatically update webhook URLs after rename
- Batch scheduling: Spread transfers over multiple days for very large migrations
- GitHub Actions integration: Automate secret re-creation
- Monitoring dashboard: Real-time transfer progress visualization
- Rollback capability: Automate transfer reversal if needed
## Development Notes

When modifying scripts:

- Always test with `DRY_RUN=true` first
- Use pilot mode with test repos
- Verify cache behavior with `--force-refresh`
- Test error handling with intentional failures
- Check parallel transfer behavior under load

Security:

- Never commit the `.env` file (contains config)
- Review audit outputs before sharing (may expose private repo names)
- Limit access to the `backup/` and `results/` directories
- Use `.gitignore` to prevent accidental commits
- Personal access tokens continue working after rename