Skills Documentation Collection
artifacts-builder
A suite of tools for creating elaborate, multi-component claude.ai HTML artifacts using modern frontend web technologies (React, Tailwind CSS, shadcn/ui)
using-git-worktrees
Creates isolated git worktrees with smart directory selection and safety verification
iOS Simulator
Enables Claude to interact with the iOS Simulator for testing and debugging iOS apps
Playwright Browser Automation
Model-invoked Playwright automation for testing and validating web applications
software-architecture
Implements design patterns including Clean Architecture, SOLID principles, and comprehensive software design best practices
ship-learn-next
A skill that uses feedback loops to help you repeatedly iterate on what to build or learn next
Tailored Resume Generator
Analyzes job descriptions and generates tailored resumes that highlight relevant experience, skills, and achievements to maximize interview chances
1. Web Artifacts Builder
---
name: web-artifacts-builder
description: Suite of tools for creating elaborate, multi-component claude.ai HTML artifacts using modern frontend web technologies (React, Tailwind CSS, shadcn/ui). Use for complex artifacts requiring state management, routing, or shadcn/ui components - not for simple single-file HTML/JSX artifacts.
license: Complete terms in LICENSE.txt
---
Web Artifacts Builder
To build powerful frontend claude.ai artifacts, follow these steps:
- Initialize the frontend repo using `scripts/init-artifact.sh`
- Develop your artifact by editing the generated code
- Bundle all code into a single HTML file using `scripts/bundle-artifact.sh`
- Display the artifact to the user
- (Optional) Test the artifact
- (Optional) Test the artifact
Stack: React 18 + TypeScript + Vite + Parcel (bundling) + Tailwind CSS + shadcn/ui
Design & Style Guidelines
VERY IMPORTANT: To avoid what is often referred to as "AI slop", stay away from excessive centered layouts, purple gradients, uniform rounded corners, and the Inter font.
Quick Start
Step 1: Initialize Project
Run the initialization script to create a new React project:
scripts/init-artifact.sh <project-name>
cd <project-name>
This creates a fully configured project with:
- ✅ React + TypeScript (via Vite)
- ✅ Tailwind CSS 3.4.1 with shadcn/ui theming system
- ✅ Path aliases (`@/`) configured
- ✅ 40+ shadcn/ui components pre-installed
- ✅ All Radix UI dependencies included
- ✅ Parcel configured for bundling (via .parcelrc)
- ✅ Node 18+ compatibility (auto-detects and pins Vite version)
Step 2: Develop Your Artifact
To build the artifact, edit the generated files. See Common Development Tasks below for guidance.
Step 3: Bundle to Single HTML File
To bundle the React app into a single HTML artifact:
scripts/bundle-artifact.sh
This creates bundle.html - a self-contained artifact with all JavaScript, CSS, and dependencies inlined. This file can be directly shared in Claude conversations as an artifact.
Requirements: Your project must have an index.html in the root directory.
What the script does:
- Installs bundling dependencies (parcel, @parcel/config-default, parcel-resolver-tspaths, html-inline)
- Creates `.parcelrc` config with path alias support
- Builds with Parcel (no source maps)
- Inlines all assets into single HTML using html-inline
Step 4: Share Artifact with User
Finally, share the bundled HTML file in conversation with the user so they can view it as an artifact.
Step 5: Testing/Visualizing the Artifact (Optional)
Note: This is a completely optional step. Only perform if necessary or requested. To test/visualize the artifact, use available tools (including other Skills or built-in tools like Playwright or Puppeteer). In general, avoid testing the artifact upfront as it adds latency between the request and when the finished artifact can be seen. Test later, after presenting the artifact, if requested or if issues arise.
Reference
- shadcn/ui components: https://ui.shadcn.com/docs/components
2. Using Git Worktrees
---
name: using-git-worktrees
description: Use when starting feature work that needs isolation from current workspace or before executing implementation plans - creates isolated git worktrees with smart directory selection and safety verification
---
Using Git Worktrees
Overview
Git worktrees create isolated workspaces sharing the same repository, allowing work on multiple branches simultaneously without switching.
Core principle: Systematic directory selection + safety verification = reliable isolation.
Announce at start: "I'm using the using-git-worktrees skill to set up an isolated workspace."
Directory Selection Process
Follow this priority order:
1. Check Existing Directories
ls -d .worktrees 2>/dev/null # Preferred (hidden)
ls -d worktrees 2>/dev/null # Alternative
If found: Use that directory. If both exist, .worktrees wins.
2. Check CLAUDE.md
grep -i "worktree.*director" CLAUDE.md 2>/dev/null
If preference specified: Use it without asking.
3. Ask User
If no directory exists and no CLAUDE.md preference, ask the user:

No worktree directory found. Where should I create worktrees?
- .worktrees/ (project-local, hidden)
- ~/.config/superpowers/worktrees/<project>/ (global location)

Which would you prefer?
Safety Verification
For Project-Local Directories (.worktrees or worktrees)
MUST verify directory is ignored before creating worktree:
git check-ignore -q .worktrees 2>/dev/null || git check-ignore -q worktrees 2>/dev/null
If NOT ignored: Per Jesse's rule "Fix broken things immediately":
- Add appropriate line to .gitignore
- Commit the change
- Proceed with worktree creation
Why critical: Prevents accidentally committing worktree contents to repository.
For Global Directory (~/.config/superpowers/worktrees)
No .gitignore verification needed - outside project entirely.
Creation Steps
1. Detect Project Name
project=$(basename "$(git rev-parse --show-toplevel)")
2. Create Worktree
case $LOCATION in
.worktrees|worktrees) path="$LOCATION/$BRANCH_NAME" ;;
"$HOME"/.config/superpowers/worktrees/*) path="$HOME/.config/superpowers/worktrees/$project/$BRANCH_NAME" ;;
esac
# Create worktree with new branch
git worktree add "$path" -b "$BRANCH_NAME"
cd "$path"
3. Run Project Setup
Auto-detect and run appropriate setup:
# Node.js
if [ -f package.json ]; then npm install; fi
# Rust
if [ -f Cargo.toml ]; then cargo build; fi
# Python
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
if [ -f pyproject.toml ]; then poetry install; fi
# Go
if [ -f go.mod ]; then go mod download; fi
4. Verify Clean Baseline
Run tests to ensure worktree starts clean:
# Examples - use project-appropriate command
npm test
cargo test
pytest
go test ./...
If tests fail: Report failures, ask whether to proceed or investigate. If tests pass: Report ready.
5. Report Location
Worktree ready at <full-path>
Tests passing (<N> tests, 0 failures)
Ready to implement <feature-name>
Quick Reference
| Situation | Action |
|---|---|
| .worktrees/ exists | Use it (verify ignored) |
| worktrees/ exists | Use it (verify ignored) |
| Both exist | Use .worktrees/ |
| Neither exists | Check CLAUDE.md → Ask user |
| Directory not ignored | Add to .gitignore + commit |
| Tests fail during baseline | Report failures + ask |
| No package.json/Cargo.toml | Skip dependency install |
Common Mistakes
Skipping ignore verification
- Problem: Worktree contents get tracked, pollute git status
- Fix: Always use `git check-ignore` before creating a project-local worktree
Assuming directory location
- Problem: Creates inconsistency, violates project conventions
- Fix: Follow priority: existing > CLAUDE.md > ask
Proceeding with failing tests
- Problem: Can't distinguish new bugs from pre-existing issues
- Fix: Report failures, get explicit permission to proceed
Hardcoding setup commands
- Problem: Breaks on projects using different tools
- Fix: Auto-detect from project files (package.json, etc.)
Example Workflow
You: I'm using the using-git-worktrees skill to set up an isolated workspace.
[Check .worktrees/ - exists]
[Verify ignored - git check-ignore confirms .worktrees/ is ignored]
[Create worktree: git worktree add .worktrees/auth -b feature/auth]
[Run npm install]
[Run npm test - 47 passing]
Worktree ready at /Users/jesse/myproject/.worktrees/auth
Tests passing (47 tests, 0 failures)
Ready to implement auth feature
Red Flags
Never:
- Create worktree without verifying it's ignored (project-local)
- Skip baseline test verification
- Proceed with failing tests without asking
- Assume directory location when ambiguous
- Skip CLAUDE.md check
Always:
- Follow directory priority: existing > CLAUDE.md > ask
- Verify directory is ignored for project-local
- Auto-detect and run project setup
- Verify clean test baseline
Integration
Called by:
- brainstorming (Phase 4) - REQUIRED when design is approved and implementation follows
- Any skill needing isolated workspace
Pairs with:
- finishing-a-development-branch - REQUIRED for cleanup after work complete
- executing-plans or subagent-driven-development - Work happens in this worktree
3. iOS Simulator Skill
---
name: ios-simulator-skill
version: 1.3.0
description: 21 production-ready scripts for iOS app testing, building, and automation. Provides semantic UI navigation, build automation, accessibility testing, and simulator lifecycle management. Optimized for AI agents with minimal token output.
---
iOS Simulator Skill
Build, test, and automate iOS applications using accessibility-driven navigation and structured data instead of pixel coordinates.
Quick Start
# 1. Check environment
bash scripts/sim_health_check.sh
# 2. Launch app
python scripts/app_launcher.py --launch com.example.app
# 3. Map screen to see elements
python scripts/screen_mapper.py
# 4. Tap button
python scripts/navigator.py --find-text "Login" --tap
# 5. Enter text
python scripts/navigator.py --find-type TextField --enter-text "[email protected]"
All scripts support --help for detailed options and --json for machine-readable output.
21 Production Scripts
Build & Development (2 scripts)
- build_and_test.py - Build Xcode projects, run tests, parse results with progressive disclosure
- Build with live result streaming
- Parse errors and warnings from xcresult bundles
- Retrieve detailed build logs on demand
- Options: `--project`, `--scheme`, `--clean`, `--test`, `--verbose`, `--json`
- log_monitor.py - Real-time log monitoring with intelligent filtering
- Stream logs or capture by duration
- Filter by severity (error/warning/info/debug)
- Deduplicate repeated messages
- Options: `--app`, `--severity`, `--follow`, `--duration`, `--output`, `--json`
Navigation & Interaction (5 scripts)
- screen_mapper.py - Analyze current screen and list interactive elements
- Element type breakdown
- Interactive button list
- Text field status
- Options: `--verbose`, `--hints`, `--json`
- navigator.py - Find and interact with elements semantically
- Find by text (fuzzy matching)
- Find by element type
- Find by accessibility ID
- Enter text or tap elements
- Options: `--find-text`, `--find-type`, `--find-id`, `--tap`, `--enter-text`, `--json`
- gesture.py - Perform swipes, scrolls, pinches, and complex gestures
- Directional swipes (up/down/left/right)
- Multi-swipe scrolling
- Pinch zoom
- Long press
- Pull to refresh
- Options: `--swipe`, `--scroll`, `--pinch`, `--long-press`, `--refresh`, `--json`
- keyboard.py - Text input and hardware button control
- Type text (fast or slow)
- Special keys (return, delete, tab, space, arrows)
- Hardware buttons (home, lock, volume, screenshot)
- Key combinations
- Options: `--type`, `--key`, `--button`, `--slow`, `--clear`, `--dismiss`, `--json`
- app_launcher.py - App lifecycle management
- Launch apps by bundle ID
- Terminate apps
- Install/uninstall from .app bundles
- Deep link navigation
- List installed apps
- Check app state
- Options: `--launch`, `--terminate`, `--install`, `--uninstall`, `--open-url`, `--list`, `--state`, `--json`
Testing & Analysis (5 scripts)
- accessibility_audit.py - Check WCAG compliance on current screen
- Critical issues (missing labels, empty buttons, no alt text)
- Warnings (missing hints, small touch targets)
- Info (missing IDs, deep nesting)
- Options: `--verbose`, `--output`, `--json`
- visual_diff.py - Compare two screenshots for visual changes
- Pixel-by-pixel comparison
- Threshold-based pass/fail
- Generate diff images
- Options: `--threshold`, `--output`, `--details`, `--json`
- test_recorder.py - Automatically document test execution
- Capture screenshots and accessibility trees per step
- Generate markdown reports with timing data
- Options: `--test-name`, `--output`, `--verbose`, `--json`
- app_state_capture.py - Create comprehensive debugging snapshots
- Screenshot, UI hierarchy, app logs, device info
- Markdown summary for bug reports
- Options: `--app-bundle-id`, `--output`, `--log-lines`, `--json`
- sim_health_check.sh - Verify environment is properly configured
- Check macOS, Xcode, simctl, IDB, Python
- List available and booted simulators
- Verify Python packages (Pillow)
Advanced Testing & Permissions (4 scripts)
- clipboard.py - Manage simulator clipboard for paste testing
- Copy text to clipboard
- Test paste flows without manual entry
- Options: `--copy`, `--test-name`, `--expected`, `--json`
- status_bar.py - Override simulator status bar appearance
- Presets: clean (9:41, 100% battery), testing (11:11, 50%), low-battery (20%), airplane (offline)
- Custom time, network, battery, WiFi settings
- Options: `--preset`, `--time`, `--data-network`, `--battery-level`, `--clear`, `--json`
- push_notification.py - Send simulated push notifications
- Simple mode (title + body + badge)
- Custom JSON payloads
- Test notification handling and deep links
- Options: `--bundle-id`, `--title`, `--body`, `--badge`, `--payload`, `--json`
- privacy_manager.py - Grant, revoke, and reset app permissions
- 13 supported services (camera, microphone, location, contacts, photos, calendar, health, etc.)
- Batch operations (comma-separated services)
- Audit trail with test scenario tracking
- Options: `--bundle-id`, `--grant`, `--revoke`, `--reset`, `--list`, `--json`
Device Lifecycle Management (5 scripts)
- simctl_boot.py - Boot simulators with optional readiness verification
- Boot by UDID or device name
- Wait for device ready with timeout
- Batch boot operations (--all, --type)
- Performance timing
- Options: `--udid`, `--name`, `--wait-ready`, `--timeout`, `--all`, `--type`, `--json`
- simctl_shutdown.py - Gracefully shutdown simulators
- Shutdown by UDID or device name
- Optional verification of shutdown completion
- Batch shutdown operations
- Options: `--udid`, `--name`, `--verify`, `--timeout`, `--all`, `--type`, `--json`
- simctl_create.py - Create simulators dynamically
- Create by device type and iOS version
- List available device types and runtimes
- Custom device naming
- Returns UDID for CI/CD integration
- Options: `--device`, `--runtime`, `--name`, `--list-devices`, `--list-runtimes`, `--json`
- simctl_delete.py - Permanently delete simulators
- Delete by UDID or device name
- Safety confirmation by default (skip with --yes)
- Batch delete operations
- Smart deletion (--old N to keep N per device type)
- Options: `--udid`, `--name`, `--yes`, `--all`, `--type`, `--old`, `--json`
- simctl_erase.py - Factory reset simulators without deletion
- Preserve device UUID (faster than delete+create)
- Erase all, by type, or booted simulators
- Optional verification
- Options: `--udid`, `--name`, `--verify`, `--timeout`, `--all`, `--type`, `--booted`, `--json`
Common Patterns
Auto-UDID Detection: Most scripts auto-detect the booted simulator if --udid is not provided.
Device Name Resolution: Use device names (e.g., "iPhone 16 Pro") instead of UDIDs - scripts resolve automatically.
Batch Operations: Many scripts support --all for all simulators or --type iPhone for device type filtering.
Output Formats: Default is concise human-readable output. Use --json for machine-readable output in CI/CD.
Help: All scripts support --help for detailed options and examples.
Typical Workflow
- Verify environment: `bash scripts/sim_health_check.sh`
- Launch app: `python scripts/app_launcher.py --launch com.example.app`
- Analyze screen: `python scripts/screen_mapper.py`
- Interact: `python scripts/navigator.py --find-text "Button" --tap`
- Verify: `python scripts/accessibility_audit.py`
- Debug if needed: `python scripts/app_state_capture.py --app-bundle-id com.example.app`
Requirements
- macOS 12+
- Xcode Command Line Tools
- Python 3
- IDB (optional, for interactive features)
Documentation
- SKILL.md (this file) - Script reference and quick start
- README.md - Installation and examples
- CLAUDE.md - Architecture and implementation details
- references/ - Deep documentation on specific topics
- examples/ - Complete automation workflows
Key Design Principles
- Semantic Navigation: Find elements by meaning (text, type, ID), not pixel coordinates. Survives UI changes.
- Token Efficiency: Concise default output (3-5 lines) with optional verbose and JSON modes for detailed results.
- Accessibility-First: Built on standard accessibility APIs for reliability and compatibility.
- Zero Configuration: Works immediately on any macOS with Xcode. No setup required.
- Structured Data: Scripts output JSON or formatted text, not raw logs. Easy to parse and integrate.
- Auto-Learning: Build system remembers your device preference. Configuration stored per-project.
Use these scripts directly or let Claude Code invoke them automatically when your request matches the skill description.
4. Playwright Browser Automation
---
name: playwright-skill
description: Complete browser automation with Playwright. Auto-detects dev servers, writes clean test scripts to /tmp. Test pages, fill forms, take screenshots, check responsive design, validate UX, test login flows, check links, automate any browser task. Use when user wants to test websites, automate browser interactions, validate web functionality, or perform any browser-based testing.
---
Playwright Browser Automation
General-purpose browser automation skill. I'll write custom Playwright code for any automation task you request and execute it via the universal executor.
CRITICAL WORKFLOW - Follow these steps in order:
- Auto-detect dev servers - For localhost testing, ALWAYS run server detection FIRST:
  `cd $SKILL_DIR && node -e "require('./lib/helpers').detectDevServers().then(servers => console.log(JSON.stringify(servers)))"`
  - If 1 server found: Use it automatically, inform user
  - If multiple servers found: Ask user which one to test
  - If no servers found: Ask for URL or offer to help start dev server
- Write scripts to /tmp - NEVER write test files to the skill directory; always use `/tmp/playwright-test-*.js`
- Use visible browser by default - Always use `headless: false` unless user specifically requests headless mode
- Parameterize URLs - Always make URLs configurable via environment variable or constant at top of script
How It Works
- You describe what you want to test/automate
- I auto-detect running dev servers (or ask for URL if testing external site)
- I write custom Playwright code in
/tmp/playwright-test-*.js(won't clutter your project) - I execute it via:
cd $SKILL_DIR && node run.js /tmp/playwright-test-*.js - Results displayed in real-time, browser window visible for debugging
- Test files auto-cleaned from /tmp by your OS
Setup (First Time)
cd $SKILL_DIR
npm run setup
This installs Playwright and Chromium browser. Only needed once.
Execution Pattern
Step 1: Detect dev servers (for localhost testing)
cd $SKILL_DIR && node -e "require('./lib/helpers').detectDevServers().then(s => console.log(JSON.stringify(s)))"
Step 2: Write test script to /tmp with URL parameter
// /tmp/playwright-test-page.js
const { chromium } = require('playwright');
// Parameterized URL (detected or user-provided)
const TARGET_URL = 'http://localhost:3001'; // <-- Auto-detected or from user
(async () => {
const browser = await chromium.launch({ headless: false });
const page = await browser.newPage();
await page.goto(TARGET_URL);
console.log('Page loaded:', await page.title());
await page.screenshot({ path: '/tmp/screenshot.png', fullPage: true });
console.log('📸 Screenshot saved to /tmp/screenshot.png');
await browser.close();
})();
Step 3: Execute from skill directory
cd $SKILL_DIR && node run.js /tmp/playwright-test-page.js
Common Patterns
Test a Page (Multiple Viewports)
// /tmp/playwright-test-responsive.js
const { chromium } = require('playwright');
const TARGET_URL = 'http://localhost:3001'; // Auto-detected
(async () => {
const browser = await chromium.launch({ headless: false, slowMo: 100 });
const page = await browser.newPage();
// Desktop test
await page.setViewportSize({ width: 1920, height: 1080 });
await page.goto(TARGET_URL);
console.log('Desktop - Title:', await page.title());
await page.screenshot({ path: '/tmp/desktop.png', fullPage: true });
// Mobile test
await page.setViewportSize({ width: 375, height: 667 });
await page.screenshot({ path: '/tmp/mobile.png', fullPage: true });
await browser.close();
})();
Test Login Flow
// /tmp/playwright-test-login.js
const { chromium } = require('playwright');
const TARGET_URL = 'http://localhost:3001'; // Auto-detected
(async () => {
const browser = await chromium.launch({ headless: false });
const page = await browser.newPage();
await page.goto(`${TARGET_URL}/login`);
await page.fill('input[name="email"]', '[email protected]');
await page.fill('input[name="password"]', 'password123');
await page.click('button[type="submit"]');
// Wait for redirect
await page.waitForURL('**/dashboard');
console.log('✅ Login successful, redirected to dashboard');
await browser.close();
})();
Fill and Submit Form
// /tmp/playwright-test-form.js
const { chromium } = require('playwright');
const TARGET_URL = 'http://localhost:3001'; // Auto-detected
(async () => {
const browser = await chromium.launch({ headless: false, slowMo: 50 });
const page = await browser.newPage();
await page.goto(`${TARGET_URL}/contact`);
await page.fill('input[name="name"]', 'John Doe');
await page.fill('input[name="email"]', '[email protected]');
await page.fill('textarea[name="message"]', 'Test message');
await page.click('button[type="submit"]');
// Verify submission
await page.waitForSelector('.success-message');
console.log('✅ Form submitted successfully');
await browser.close();
})();
Check for Broken Links
const { chromium } = require('playwright');
(async () => {
const browser = await chromium.launch({ headless: false });
const page = await browser.newPage();
await page.goto('http://localhost:3000');
const links = await page.locator('a[href^="http"]').all();
const results = { working: 0, broken: [] };
for (const link of links) {
const href = await link.getAttribute('href');
try {
const response = await page.request.head(href);
if (response.ok()) {
results.working++;
} else {
results.broken.push({ url: href, status: response.status() });
}
} catch (e) {
results.broken.push({ url: href, error: e.message });
}
}
console.log(`✅ Working links: ${results.working}`);
console.log(`❌ Broken links: ${results.broken.length}`, results.broken);
await browser.close();
})();
Take Screenshot with Error Handling
const { chromium } = require('playwright');
(async () => {
const browser = await chromium.launch({ headless: false });
const page = await browser.newPage();
try {
await page.goto('http://localhost:3000', { waitUntil: 'networkidle', timeout: 10000, });
await page.screenshot({ path: '/tmp/screenshot.png', fullPage: true, });
console.log('📸 Screenshot saved to /tmp/screenshot.png');
} catch (error) {
console.error('❌ Error:', error.message);
} finally {
await browser.close();
}
})();
Test Responsive Design
// /tmp/playwright-test-responsive-full.js
const { chromium } = require('playwright');
const TARGET_URL = 'http://localhost:3001'; // Auto-detected
(async () => {
const browser = await chromium.launch({ headless: false });
const page = await browser.newPage();
const viewports = [
{ name: 'Desktop', width: 1920, height: 1080 },
{ name: 'Tablet', width: 768, height: 1024 },
{ name: 'Mobile', width: 375, height: 667 },
];
for (const viewport of viewports) {
console.log(
`Testing ${viewport.name} (${viewport.width}x${viewport.height})`,
);
await page.setViewportSize({ width: viewport.width, height: viewport.height, });
await page.goto(TARGET_URL);
console.log(`${viewport.name} - Title:`, await page.title());
await page.screenshot({ path: `/tmp/${viewport.name.toLowerCase()}.png`, fullPage: true });
}
console.log('✅ All viewports tested');
await browser.close();
})();
Inline Execution (Simple Tasks)
For quick one-off tasks, you can execute code inline without creating files:
cd $SKILL_DIR && node run.js "
const browser = await chromium.launch({ headless: false });
const page = await browser.newPage();
await page.goto('http://localhost:3001');
await page.screenshot({ path: '/tmp/quick-screenshot.png', fullPage: true });
console.log('Screenshot saved');
await browser.close();
"
When to use inline vs files:
- Inline: Quick one-off tasks (screenshot, check if element exists, get page title)
- Files: Complex tests, responsive design checks, anything user might want to re-run
Available Helpers
Optional utility functions in lib/helpers.js:
const helpers = require('./lib/helpers');
// Detect running dev servers (CRITICAL - use this first!)
const servers = await helpers.detectDevServers();
console.log('Found servers:', servers);
// Safe click with retry
await helpers.safeClick(page, 'button.submit', { retries: 3 });
// Safe type with clear
await helpers.safeType(page, '#username', 'testuser');
// Take timestamped screenshot
await helpers.takeScreenshot(page, 'test-result');
// Handle cookie banners
await helpers.handleCookieBanner(page);
// Extract table data
const data = await helpers.extractTableData(page, 'table.results');
See lib/helpers.js for full list.
Custom HTTP Headers
Configure custom headers for all HTTP requests via environment variables. Useful for:
- Identifying automated traffic to your backend
- Getting LLM-optimized responses (e.g., plain text errors instead of styled HTML)
- Adding authentication tokens globally
Configuration
Single header (common case):
PW_HEADER_NAME=X-Automated-By PW_HEADER_VALUE=playwright-skill \
cd $SKILL_DIR && node run.js /tmp/my-script.js
Multiple headers (JSON format):
PW_EXTRA_HEADERS='{"X-Automated-By":"playwright-skill","X-Debug":"true"}' \
cd $SKILL_DIR && node run.js /tmp/my-script.js
How It Works
Headers are automatically applied when using helpers.createContext():
const context = await helpers.createContext(browser);
const page = await context.newPage();
// All requests from this page include your custom headers
For scripts using raw Playwright API, use the injected getContextOptionsWithHeaders():
const context = await browser.newContext(
getContextOptionsWithHeaders({ viewport: { width: 1920, height: 1080 } }),
);
Advanced Usage
For comprehensive Playwright API documentation, see API_REFERENCE.md:
- Selectors & Locators best practices
- Network interception & API mocking
- Authentication & session management
- Visual regression testing
- Mobile device emulation
- Performance testing
- Debugging techniques
- CI/CD integration
Tips
- CRITICAL: Detect servers FIRST - Always run `detectDevServers()` before writing test code for localhost testing
- Custom headers - Use `PW_HEADER_NAME`/`PW_HEADER_VALUE` env vars to identify automated traffic to your backend
- Use /tmp for test files - Write to `/tmp/playwright-test-*.js`, never to the skill directory or the user's project
- Parameterize URLs - Put the detected/provided URL in a `TARGET_URL` constant at the top of every script
- DEFAULT: Visible browser - Always use `headless: false` unless user explicitly asks for headless mode
- Headless mode - Only use `headless: true` when user specifically requests "headless" or "background" execution
- Slow down: Use `slowMo: 100` to make actions visible and easier to follow
- Wait strategies: Use `waitForURL`, `waitForSelector`, `waitForLoadState` instead of fixed timeouts
- Error handling: Always use try-catch for robust automation
- Console output: Use `console.log()` to track progress and show what's happening
Troubleshooting
Playwright not installed:
cd $SKILL_DIR && npm run setup
Module not found: Ensure running from the skill directory via the `run.js` wrapper
Browser doesn't open: Check `headless: false` and ensure a display is available
Element not found: Add a wait: `await page.waitForSelector('.element', { timeout: 10000 })`
Example Usage
User: "Test if the marketing page looks good"
Claude: I'll test the marketing page across multiple viewports. Let me first detect running servers...
[Runs: detectDevServers()]
[Output: Found server on port 3001]
I found your dev server running on http://localhost:3001
[Writes custom automation script to /tmp/playwright-test-marketing.js with URL parameterized]
[Runs: cd $SKILL_DIR && node run.js /tmp/playwright-test-marketing.js]
[Shows results with screenshots from /tmp/]

User: "Check if login redirects correctly"
Claude: I'll test the login flow. First, let me check for running servers...
[Runs: detectDevServers()]
[Output: Found servers on ports 3000 and 3001]
I found 2 dev servers. Which one should I test?
- http://localhost:3000
- http://localhost:3001
User: "Use 3001"
[Writes login automation to /tmp/playwright-test-login.js]
[Runs: cd $SKILL_DIR && node run.js /tmp/playwright-test-login.js]
[Reports: ✅ Login successful, redirected to /dashboard]
Notes
- Each automation is custom-written for your specific request
- Not limited to pre-built scripts - any browser task possible
- Auto-detects running dev servers to eliminate hardcoded URLs
- Test scripts written to `/tmp` for automatic cleanup (no clutter)
- Code executes reliably with proper module resolution via `run.js`
- Progressive disclosure - API_REFERENCE.md loaded only when advanced features needed
5. Software Architecture
---
name: software-architecture
description: Guide for quality-focused software architecture. Use this skill when users want to write code, design architecture, or analyze code - in any case related to software development.
---
Software Architecture Development Skill
This skill provides guidance for quality-focused software development and architecture. It is based on Clean Architecture and Domain-Driven Design principles.
Code Style Rules
General Principles
- Early return pattern: Prefer early returns over nested conditions for better readability
- Avoid code duplication through creation of reusable functions and modules
- Decompose long components and functions (more than 80 lines of code) into multiple smaller ones. If the pieces cannot be reused elsewhere, keep them in the same file; but once a file exceeds 200 lines of code, split it into multiple files.
- Use arrow functions instead of function declarations when possible
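The early-return rule above can be illustrated with a short sketch (all names here are illustrative, not from the skill):

```typescript
// A minimal sketch of the early-return pattern; `notifyUser` and its fields are invented.
interface User {
  active: boolean;
  email?: string;
}

// Guard clauses exit as soon as a precondition fails,
// keeping the happy path at the lowest indentation level.
const notifyUser = (user: User | null): string => {
  if (!user) return "no user";
  if (!user.active) return "inactive";
  if (!user.email) return "no email";
  return `sent to ${user.email}`;
};

console.log(notifyUser({ active: true, email: "a@example.com" }));
```

The nested-conditions version of the same function would bury `sent to ...` three levels deep; each guard clause here replaces one level of nesting.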
Best Practices
Library-First Approach
- ALWAYS search for existing solutions before writing custom code
- Check npm for existing libraries that solve the problem
- Evaluate existing services/SaaS solutions
- Consider third-party APIs for common functionality
- Use libraries instead of writing your own utils or helpers. For example, use `cockatiel` instead of writing your own retry logic.
- When custom code IS justified:
- Specific business logic unique to the domain
- Performance-critical paths with special requirements
- When external dependencies would be overkill
- Security-sensitive code requiring full control
- When existing solutions don't meet requirements after thorough evaluation
Architecture and Design
- Clean Architecture & DDD Principles:
- Follow domain-driven design and ubiquitous language
- Separate domain entities from infrastructure concerns
- Keep business logic independent of frameworks
- Define use cases clearly and keep them isolated
- Naming Conventions:
  - AVOID generic names: `utils`, `helpers`, `common`, `shared`
  - USE domain-specific names: `OrderCalculator`, `UserAuthenticator`, `InvoiceGenerator`
  - Follow bounded context naming patterns
  - Each module should have a single, clear purpose
- Separation of Concerns:
- Do NOT mix business logic with UI components
- Keep database queries out of controllers
- Maintain clear boundaries between contexts
- Ensure proper separation of responsibilities
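As a sketch of the separation above, business logic can live in a pure domain function that the UI layer merely formats (names are illustrative):

```typescript
// Domain model and logic: no framework imports, trivially unit-testable.
interface OrderLine {
  unitPrice: number;
  quantity: number;
}

// Pure domain function - the business rule lives here, nowhere else.
const orderTotal = (lines: OrderLine[]): number =>
  lines.reduce((sum, line) => sum + line.unitPrice * line.quantity, 0);

// UI layer only presents what the domain computed.
const renderTotal = (lines: OrderLine[]): string =>
  `Total: $${orderTotal(lines).toFixed(2)}`;

console.log(renderTotal([{ unitPrice: 9.5, quantity: 2 }]));
```

Because `orderTotal` has no UI or database dependency, it can be tested and reused in any context (API handler, report generator, React component).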
Anti-Patterns to Avoid
- NIH (Not Invented Here) Syndrome:
- Don't build custom auth when Auth0/Supabase exists
- Don't write custom state management instead of using Redux/Zustand
- Don't create custom form validation instead of using established libraries
- Poor Architectural Choices:
- Mixing business logic with UI components
- Database queries directly in controllers
- Lack of clear separation of concerns
- Generic Naming Anti-Patterns:
  - `utils.js` with 50 unrelated functions
  - `helpers/misc.js` as a dumping ground
  - `common/shared.js` with unclear purpose
- Remember: Every line of custom code is a liability that needs maintenance, testing, and documentation
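A hypothetical before/after for the naming anti-pattern (module and function names are invented for illustration):

```typescript
// Anti-pattern: a grab-bag module hides intent.
//   utils.ts: export const calc = (xs: number[]) => ...   // calc of what?

// Better: a domain-specific module with one clear purpose.
interface Invoice {
  amounts: number[];
  taxRate: number;
}

// An InvoiceGenerator-style function: the name states the bounded context.
const invoiceTotal = (invoice: Invoice): number => {
  const subtotal = invoice.amounts.reduce((sum, a) => sum + a, 0);
  return subtotal * (1 + invoice.taxRate);
};

console.log(invoiceTotal({ amounts: [100, 60], taxRate: 0.25 }));
```

A reader searching for billing logic finds `invoiceTotal` by name; nothing of the sort is discoverable in a 50-function `utils.js`.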
Code Quality
- Proper error handling with typed catch blocks
- Break down complex logic into smaller, reusable functions
- Avoid deep nesting (max 3 levels)
- Keep functions focused and under 50 lines when possible
- Keep files focused and under 200 lines of code when possible
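Two of the guidelines above - shallow nesting and typed error handling - can be shown in one small Python sketch (the function and its validation rules are illustrative):

```python
def parse_port(raw):
    """Return a valid TCP port number, or None.

    Early returns keep the maximum nesting depth at one level,
    instead of a pyramid of nested ifs.
    """
    if raw is None:
        return None
    try:
        port = int(raw)
    except ValueError:  # catch the specific error type, never a bare except
        return None
    if not 1 <= port <= 65535:
        return None
    return port

print(parse_port("8080"))        # → 8080
print(parse_port("not-a-port"))  # → None
```

Each guard clause handles one failure mode and exits immediately, which keeps the happy path readable at the bottom of the function.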
6. Ship-Learn-Next Action Planner
--- name: ship-learn-next description: Transform learning content (like YouTube transcripts, articles, tutorials) into actionable implementation plans using the Ship-Learn-Next framework. Use when user wants to turn advice, lessons, or educational content into concrete action steps, reps, or a learning quest. allowed-tools: Read,Write ---
Ship-Learn-Next Action Planner
This skill helps transform passive learning content into actionable Ship-Learn-Next cycles - turning advice and lessons into concrete, shippable iterations.
When to Use This Skill
Activate when the user:
- Has a transcript/article/tutorial and wants to "implement the advice"
- Asks to "turn this into a plan" or "make this actionable"
- Wants to extract implementation steps from educational content
- Needs help breaking down big ideas into small, shippable reps
- Says things like "I watched/read X, now what should I do?"
Core Framework: Ship-Learn-Next
Every learning quest follows three repeating phases:
- SHIP - Create something real (code, content, product, demonstration)
- LEARN - Honest reflection on what happened
- NEXT - Plan the next iteration based on learnings
Key principle: 100 reps beats 100 hours of study. Learning = doing better, not knowing more.
How This Skill Works
Step 1: Read the Content
Read the file the user provides (transcript, article, notes):
FILE_PATH="/path/to/content.txt"
Use the Read tool to analyze the content.
Step 2: Extract Core Lessons
Identify from the content:
- Main advice/lessons: What are the key takeaways?
- Actionable principles: What can actually be practiced?
- Skills being taught: What would someone learn by doing this?
- Examples/case studies: Real implementations mentioned
Do NOT:
- Summarize everything (focus on actionable parts)
- List theory without application
- Include "nice to know" content - keep only the "need to practice" parts
Step 3: Define the Quest
Help the user frame their learning goal by asking:
- "Based on this content, what do you want to achieve in 4-8 weeks?"
- "What would success look like? (Be specific)"
- "What's something concrete you could build/create/ship?"
Example good quest: "Ship 10 cold outreach messages and get 2 responses" Example bad quest: "Learn about sales" (too vague)
Step 4: Design Rep 1 (The First Iteration)
Break down the quest into the smallest shippable version by asking:
- "What's the smallest version you could ship THIS WEEK?"
- "What do you need to learn JUST to do that?" (not everything)
- "What would 'done' look like for rep 1?"
Make it:
- Concrete and specific
- Completable in 1-7 days
- Produces real evidence/artifact
- Small enough to not be intimidating
- Big enough to learn something meaningful
Step 5: Create the Rep Plan
Structure each rep with:
Rep 1: [Specific Goal]
Ship Goal: [What you'll create/do]
Success Criteria: [How you'll know it's done]
What You'll Learn: [Specific skills/insights]
Resources Needed: [Minimal - just what's needed for THIS rep]
Timeline: [Specific deadline]
Action Steps:
- [Concrete step 1]
- [Concrete step 2]
- [Concrete step 3]
- ...

After Shipping - Reflection Questions:
- What actually happened? (Be specific)
- What worked? What didn't?
- What surprised you?
- On a scale of 1-10, how did this rep go?
- What would you do differently next time?
Step 6: Map Future Reps (2-5)
Based on the content, suggest a progression:
Rep 2: [Next level]
Builds on: What you learned in Rep 1
New challenge: One new thing to try/improve
Expected difficulty: [Easier/Same/Harder - and why]
Rep 3: [Continue progression]
...
Progression principles:
- Each rep adds ONE new element
- Increase difficulty based on success
- Reference specific lessons from the content
- Keep reps shippable (not theoretical)
Step 7: Connect to Content
For each rep, reference the source material:
- "This implements the [concept] from minute X"
- "You're practicing the [technique] mentioned in the video"
- "This tests the advice about [topic]"
But: Always emphasize DOING over studying. Point to resources only when needed for the specific rep.
Conversation Style
Direct but supportive:
- No fluff, but encouraging
- "Ship it, then we'll improve it"
- "What's the smallest version you could do this week?"
Question-driven:
- Make them think, don't just tell
- "What exactly do you want to achieve?" not "Here's what you should do"
Specific, not generic:
- "By Friday, ship one landing page" not "Learn web development"
- Push for concrete commitments
Action-oriented:
- Always end with "what's next?"
- Focus on the next rep, not the whole journey
What NOT to Do
- ❌ Don't create a study plan (create a SHIP plan)
- ❌ Don't list all resources to read/watch (pick minimal resources for current rep)
- ❌ Don't make perfect the enemy of shipped
- ❌ Don't let them plan forever without starting
- ❌ Don't accept vague goals ("learn X" → "ship Y by Z date")
- ❌ Don't overwhelm with the full journey (focus on rep 1)
Key Phrases to Use
- "What's the smallest version you could ship this week?"
- "What do you need to learn JUST to do that?"
- "This isn't about perfection - it's rep 1 of 100"
- "Ship something real, then we'll improve it"
- "Based on [content], what would you actually DO differently?"
- "Learning = doing better, not knowing more"
Example Output Structure
Your Ship-Learn-Next Quest: [Title]
Quest Overview
Goal: [What they want to achieve in 4-8 weeks]
Source: [The content that inspired this]
Core Lessons: [3-5 key actionable takeaways from content]
Rep 1: [Specific, Shippable Goal]
Ship Goal: [Concrete deliverable]
Timeline: [This week / By [date]]
Success Criteria:
- [Specific thing 1]
- [Specific thing 2]
- [Specific thing 3]

What You'll Practice (from the content):
- [Skill/concept 1 from source material]
- [Skill/concept 2 from source material]

Action Steps:
- [Concrete step]
- [Concrete step]
- [Concrete step]
- Ship it (publish/deploy/share/demonstrate)

Minimal Resources (only for this rep):
- [Link or reference - if truly needed]

After Shipping - Reflection: Answer these questions:
- What actually happened?
- What worked? What didn't?
- What surprised you?
- Rate this rep: _/10
- What's one thing to try differently next time?
Rep 2: [Next Iteration]
Builds on: Rep 1 + [what you learned]
New element: [One new challenge/skill]
Ship goal: [Next concrete deliverable]
[Similar structure...]
Rep 3-5: Future Path
Rep 3: [Brief description]
Rep 4: [Brief description]
Rep 5: [Brief description]
(Details will evolve based on what you learn in Reps 1-2)
Remember
- This is about DOING, not studying
- Aim for 100 reps over time (not perfection on rep 1)
- Each rep = Plan → Do → Reflect → Next
- You learn by shipping, not by consuming
Ready to ship Rep 1?
Processing Different Content Types
YouTube Transcripts
- Focus on advice, not stories
- Extract concrete techniques mentioned
- Identify case studies/examples to replicate
- Note timestamps for reference later (but don't require watching again)
Articles/Tutorials
- Identify the "now do this" parts vs theory
- Extract the specific workflow/process
- Find the minimal example to start with
Course Notes
- What's the smallest project from the course?
- Which modules are needed for rep 1? (ignore the rest for now)
- What can be practiced immediately?
Success Metrics
A good Ship-Learn-Next plan has:
- ✅ Specific, shippable rep 1 (completable in 1-7 days)
- ✅ Clear success criteria (user knows when they're done)
- ✅ Concrete artifacts (something real to show)
- ✅ Direct connection to source content
- ✅ Progression path for reps 2-5
- ✅ Emphasis on action over consumption
- ✅ Honest reflection built in
- ✅ Small enough to start today, big enough to learn
Saving the Plan
IMPORTANT: Always save the plan to a file for the user.
Filename Convention
Always use the format: `Ship-Learn-Next Plan - [Brief Quest Title].md`
Examples:
- `Ship-Learn-Next Plan - Build in Proven Markets.md`
- `Ship-Learn-Next Plan - Learn React.md`
- `Ship-Learn-Next Plan - Cold Email Outreach.md`
Quest title should be:
- Brief (3-6 words)
- Descriptive of the main goal
- Based on the content's core lesson/theme
What to Save
Complete plan including:
- Quest overview with goal and source
- All reps (1-5) with full details
- Action steps and reflection questions
- Timeline commitments
- Reference to source material
Format: Always save as Markdown (.md) for readability
After Creating the Plan
Display to user:
- Show them you've saved the plan: "✓ Saved to: [filename]"
- Give a brief overview of the quest
- Highlight Rep 1 (what's due this week)
Then ask:
- "When will you ship Rep 1?"
- "What's the one thing that might stop you? How will you handle it?"
- "Come back after you ship and we'll reflect + plan Rep 2"
Remember: You're not creating a curriculum. You're helping them ship something real, learn from it, and ship the next thing. Let's help them ship.
7. Tailored Resume Generator
--- name: tailored-resume-generator description: Analyzes job descriptions and generates tailored resumes that highlight relevant experience, skills, and achievements to maximize interview chances ---
Tailored Resume Generator
When to Use This Skill
- Applying for a specific job position
- Customizing your resume for different industries or roles
- Highlighting relevant experience for career transitions
- Optimizing your resume for ATS (Applicant Tracking Systems)
- Creating multiple resume versions for different job applications
- Emphasizing specific skills mentioned in job postings
What This Skill Does
- Analyzes Job Descriptions: Extracts key requirements, skills, qualifications, and keywords from job postings
- Identifies Priorities: Determines what employers value most based on the job description language and structure
- Tailors Content: Reorganizes and emphasizes relevant experience, skills, and achievements
- Optimizes Keywords: Incorporates ATS-friendly keywords naturally throughout the resume
- Formats Professionally: Creates clean, professional resume layouts suitable for various formats
- Provides Recommendations: Suggests improvements and highlights gaps to address
How to Use
Basic Usage
Provide a job description and your background information:

I'm applying for this job: [paste job description]
Here's my background:
- 5 years as software engineer at TechCorp
- Led team of 3 developers on mobile app project
- Expert in Python, JavaScript, React
- Computer Science degree from State University
With Existing Resume
Upload or paste your current resume along with the job description:

Please tailor my resume for this position:
Job Description: [paste job description]
My Current Resume: [paste resume content]
Career Transition
When changing industries or roles:

I'm transitioning from marketing to product management. Here's the job I'm applying for: [paste job description]
My transferable experience:
- 7 years in digital marketing
- Led cross-functional teams
- Managed product launches
- Data analysis and A/B testing
Example
User Request:

I need a tailored resume for this job:

Senior Data Analyst Position
Requirements:
- 5+ years experience in data analysis
- Expert in SQL, Python, and data visualization tools
- Experience with A/B testing and statistical analysis
- Strong communication skills to present findings
- Healthcare industry experience preferred
My background:
- Data Analyst at RetailCo (2019-2024)
- Created dashboards using Tableau and Power BI
- Built Python scripts for data automation
- Worked with marketing team on campaign analysis
- Business Analytics degree
- Also did 1 year internship at HealthPlus clinic
Generated Output:
JOHN DOE
Email: [email protected] | Phone: (555) 123-4567 | LinkedIn: linkedin.com/in/johndoe
PROFESSIONAL SUMMARY
Results-driven Data Analyst with 5+ years of experience leveraging SQL, Python, and advanced visualization tools to deliver actionable insights. Proven track record in statistical analysis, A/B testing, and cross-functional collaboration. Healthcare industry experience with strong ability to communicate complex data findings to stakeholders.
TECHNICAL SKILLS
- Data Analysis & Programming: SQL, Python, Statistical Analysis, A/B Testing
- Visualization Tools: Tableau, Power BI, Advanced Dashboard Development
- Data Management: Data Automation, ETL Processes, Database Optimization
- Business Intelligence: Campaign Analytics, Performance Metrics, KPI Tracking
PROFESSIONAL EXPERIENCE
Data Analyst | RetailCo | 2019 - 2024
- Designed and implemented 50+ SQL queries and Python automation scripts, reducing manual data processing time by 60%
- Conducted rigorous A/B testing and statistical analysis for marketing campaigns, improving campaign ROI by 35%
- Built interactive Tableau and Power BI dashboards for executive team, enabling data-driven decision making
- Presented analytical findings to cross-functional stakeholders including marketing, operations, and executive leadership
- Collaborated with marketing team to analyze customer behavior patterns and optimize targeting strategies
Data Analysis Intern | HealthPlus Clinic | 2018 - 2019
- Supported healthcare analytics initiatives by analyzing patient data and operational metrics
- Gained exposure to healthcare industry data management and HIPAA compliance requirements
- Assisted in creating reports for clinical staff to improve patient care workflows
EDUCATION
Bachelor of Science in Business Analytics | State University | Graduated 2018
KEY ACHIEVEMENTS
- Reduced data processing time by 60% through Python automation
- Improved marketing campaign ROI by 35% through A/B testing and optimization
- Created data visualization system now used across entire organization
Additional Recommendations:
- Consider adding any specific healthcare projects or certifications
- Quantify more achievements with metrics when possible
- If you have experience with R or advanced statistical methods, add them
- Consider mentioning any familiarity with healthcare regulations (HIPAA, etc.)
Instructions
When a user requests resume tailoring:
1. Gather Information
Job Description Analysis:
- Request the full job description if not provided
- Ask for the company name and job title
Candidate Background:
- If user provides existing resume, use it as the foundation
- If not, request:
- Work history (job titles, companies, dates, responsibilities)
- Education background
- Key skills and technical proficiencies
- Notable achievements and metrics
- Certifications or awards
- Any other relevant information
2. Analyze Job Requirements
Extract and prioritize:
- Must-have qualifications: Years of experience, required skills, education
- Key skills: Technical tools, methodologies, competencies
- Soft skills: Communication, leadership, teamwork
- Industry knowledge: Domain-specific experience
- Keywords: Repeated terms, phrases, and buzzwords for ATS optimization
- Company values: Cultural fit indicators from job description
Create a mental map of:
- Priority 1: Critical requirements (deal-breakers)
- Priority 2: Important qualifications (strongly desired)
- Priority 3: Nice-to-have skills (bonus points)
3. Map Candidate Experience to Requirements
For each job requirement:
- Identify matching experience from candidate's background
- Find transferable skills if no direct match
- Note gaps that need to be addressed or de-emphasized
- Identify unique strengths to highlight
4. Structure the Tailored Resume
Professional Summary (3-4 lines):
- Lead with years of experience in the target role/field
- Include top 3-4 required skills from job description
- Mention industry experience if relevant
- Highlight unique value proposition
Technical/Core Skills Section:
- Group skills by category matching job requirements
- List required tools and technologies first
- Use exact terminology from job description
- Only include skills you can substantiate with experience
Professional Experience:
- For each role, emphasize responsibilities and achievements aligned with job requirements
- Use action verbs: Led, Developed, Implemented, Optimized, Managed, Created, Analyzed
- Quantify achievements: Include numbers, percentages, timeframes, scale
- Reorder bullet points to prioritize most relevant experience
- Use keywords naturally from job description
- Format: [Action Verb] + [What] + [How/Why] + [Result/Impact]
Education:
- List degrees, certifications relevant to position
- Include relevant coursework if early career
- Add certifications that match job requirements
Optional Sections (if applicable):
- Certifications & Licenses
- Publications or Speaking Engagements
- Awards & Recognition
- Volunteer Work (if relevant to role)
- Projects (especially for technical roles)
5. Optimize for ATS (Applicant Tracking Systems)
- Use standard section headings (Professional Experience, Education, Skills)
- Incorporate exact keywords from job description naturally
- Avoid tables, graphics, headers/footers, or complex formatting
- Use standard fonts and bullet points
- Include both acronyms and full terms (e.g., "SQL (Structured Query Language)")
- Match job title terminology where truthful
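The keyword-matching step above can be sanity-checked mechanically. A minimal Python sketch (the keyword list is supplied by hand, and the tokenization is deliberately naive - real keyword extraction from a job description is more involved):

```python
import re

def keyword_coverage(resume, keywords):
    """Report which of the given job-description keywords appear in the
    resume text, and which are still missing."""
    # Keep letters plus + and # so tokens like "c++" or "c#" survive.
    resume_words = set(re.findall(r"[a-z+#]+", resume.lower()))
    covered = [k for k in keywords if k.lower() in resume_words]
    missing = [k for k in keywords if k.lower() not in resume_words]
    return covered, missing

resume = "Built Python automation and Tableau dashboards for campaign analysis."
covered, missing = keyword_coverage(resume, ["SQL", "Python", "Tableau"])
print(covered)  # → ['Python', 'Tableau']
print(missing)  # → ['SQL']
```

Running a check like this before submitting flags required terms (here, "SQL") that the draft never mentions, so they can be worked in truthfully where the experience supports them.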
6. Format and Present
Format Options:
- Markdown: Clean, readable, easy to copy
- Plain Text: ATS-optimized, safe for all systems
- Tips for Word/PDF: Provide formatting guidance
Resume Structure Guidelines:
- Keep to 1 page for <10 years experience, 2 pages for 10+ years
- Use consistent formatting and spacing
- Ensure contact information is prominent
- Use reverse chronological order (most recent first)
- Maintain clean, scannable layout with white space
7. Provide Strategic Recommendations
After presenting the tailored resume, offer:
Strengths Analysis:
- What makes this candidate competitive
- Unique qualifications to emphasize in cover letter or interview
Gap Analysis:
- Requirements not fully met
- Suggestions for addressing gaps (courses, projects, reframing experience)
Interview Preparation Tips:
- Key talking points aligned with resume
- Stories to prepare based on job requirements
- Questions to ask that demonstrate fit
Cover Letter Hooks:
- Suggest 2-3 opening lines for cover letter
- Key achievements to expand upon
8. Iterate and Refine
Ask if user wants to:
- Adjust emphasis or tone
- Add or remove sections
- Generate alternative versions for different roles
- Create format variations (traditional vs. modern)
- Develop role-specific versions (if applying to multiple similar positions)
9. Best Practices to Follow
Do:
- Be truthful and accurate - never fabricate experience
- Use industry-standard terminology
- Quantify achievements with specific metrics
- Tailor each resume to specific job
- Proofread for grammar and consistency
- Keep language concise and impactful
Don't:
- Include personal information (age, marital status, photo unless requested)
- Use first-person pronouns (I, me, my)
- Include references ("available upon request" is outdated)
- List every job if career is 20+ years (focus on relevant, recent experience)
- Use generic templates without customization
- Exceed 2 pages unless very senior role
10. Special Considerations
Career Changers:
- Use functional or hybrid resume format
- Emphasize transferable skills
- Create compelling narrative in summary
- Focus on relevant projects and coursework
Recent Graduates:
- Lead with education
- Include relevant coursework, projects, internships
- Emphasize leadership in student organizations
- Include GPA if 3.5+
Senior Executives:
- Lead with executive summary
- Focus on leadership and strategic impact
- Include board memberships, speaking engagements
- Emphasize revenue growth, team building, vision
Technical Roles:
- Include technical skills section prominently
- List programming languages, frameworks, tools
- Include GitHub, portfolio, or project links
- Mention methodologies (Agile, Scrum, etc.)
Creative Roles:
- Include link to portfolio
- Highlight creative achievements and campaigns
- Mention tools and software proficiencies
- Consider more creative formatting (while maintaining ATS compatibility)
Tips for Best Results
- Be specific: Provide complete job descriptions and detailed background information
- Share metrics: Include numbers, percentages, and quantifiable achievements when describing your experience
- Indicate format preference: Let the skill know if you need ATS-optimized, creative, or traditional format
- Mention constraints: Share any specific requirements (page limits, sections to include/exclude)
- Iterate: Don't hesitate to ask for revisions or alternative approaches
- Multiple applications: Generate separate tailored versions for different roles
Privacy Note
This skill processes your personal and professional information to generate tailored resumes. Always review the output before submitting to ensure accuracy and appropriateness. Remove or modify any information you prefer not to share with potential employers.


