Commit ade3d0fb01 by Aspergerli, 2026-03-09 19:18:47 +01:00
240 changed files with 12324 additions and 0 deletions

# Task Master Commands for Claude Code
Complete guide to using Task Master through Claude Code's slash commands.
## Overview
All Task Master functionality is available through the `/project:tm/` namespace with natural language support and intelligent features.
## Quick Start
```bash
# Install Task Master
/project:tm/setup/quick-install
# Initialize project
/project:tm/init/quick
# Parse requirements
/project:tm/parse-prd requirements.md
# Start working
/project:tm/next
```
## Command Structure
Commands are organized hierarchically to match Task Master's CLI:
- Main commands at `/project:tm/[command]`
- Subcommands for specific operations `/project:tm/[command]/[subcommand]`
- Natural language arguments accepted throughout
## Complete Command Reference
### Setup & Configuration
- `/project:tm/setup/install` - Full installation guide
- `/project:tm/setup/quick-install` - One-line install
- `/project:tm/init` - Initialize project
- `/project:tm/init/quick` - Quick init with -y
- `/project:tm/models` - View AI config
- `/project:tm/models/setup` - Configure AI
### Task Generation
- `/project:tm/parse-prd` - Generate from PRD
- `/project:tm/parse-prd/with-research` - Enhanced parsing
- `/project:tm/generate` - Create task files
### Task Management
- `/project:tm/list` - List with natural language filters
- `/project:tm/list/with-subtasks` - Hierarchical view
- `/project:tm/list/by-status <status>` - Filter by status
- `/project:tm/show <id>` - Task details
- `/project:tm/add-task` - Create task
- `/project:tm/update` - Update tasks
- `/project:tm/remove-task` - Delete task
### Status Management
- `/project:tm/set-status/to-pending <id>`
- `/project:tm/set-status/to-in-progress <id>`
- `/project:tm/set-status/to-done <id>`
- `/project:tm/set-status/to-review <id>`
- `/project:tm/set-status/to-deferred <id>`
- `/project:tm/set-status/to-cancelled <id>`
### Task Analysis
- `/project:tm/analyze-complexity` - AI analysis
- `/project:tm/complexity-report` - View report
- `/project:tm/expand <id>` - Break down task
- `/project:tm/expand/all` - Expand all complex tasks
### Dependencies
- `/project:tm/add-dependency` - Add dependency
- `/project:tm/remove-dependency` - Remove dependency
- `/project:tm/validate-dependencies` - Check issues
- `/project:tm/fix-dependencies` - Auto-fix
### Workflows
- `/project:tm/workflows/smart-flow` - Adaptive workflows
- `/project:tm/workflows/pipeline` - Chain commands
- `/project:tm/workflows/auto-implement` - AI implementation
### Utilities
- `/project:tm/status` - Project dashboard
- `/project:tm/next` - Next task recommendation
- `/project:tm/utils/analyze` - Project analysis
- `/project:tm/learn` - Interactive help
## Key Features
### Natural Language Support
All commands understand natural language:
```
/project:tm/list pending high priority
/project:tm/update mark 23 as done
/project:tm/add-task implement OAuth login
```
### Smart Context
Commands analyze project state and provide intelligent suggestions based on:
- Current task status
- Dependencies
- Team patterns
- Project phase
### Visual Enhancements
- Progress bars and indicators
- Status badges
- Organized displays
- Clear hierarchies
## Common Workflows
### Daily Development
```
/project:tm/workflows/smart-flow morning
/project:tm/next
/project:tm/set-status/to-in-progress <id>
/project:tm/set-status/to-done <id>
```
### Task Breakdown
```
/project:tm/show <id>
/project:tm/expand <id>
/project:tm/list/with-subtasks
```
### Sprint Planning
```
/project:tm/analyze-complexity
/project:tm/workflows/pipeline init → expand/all → status
```
## Migration from Old Commands
| Old | New |
|-----|-----|
| `/project:task-master:list` | `/project:tm/list` |
| `/project:task-master:complete` | `/project:tm/set-status/to-done` |
| `/project:workflows:auto-implement` | `/project:tm/workflows/auto-implement` |
## Tips
1. Use `/project:tm/` + Tab for command discovery
2. Natural language is supported everywhere
3. Commands provide smart defaults
4. Chain commands for automation
5. Check `/project:tm/learn` for interactive help

# Godot Card Framework Analysis
Analyze specific components of the Godot Card Framework project.
## Usage
```
/godot-analyze [component]
```
## Steps:
1. Read the GDScript files of the requested component to understand structure
2. Analyze related scene (.tscn) files if available
3. Check dependency relationships with other components
4. Review compliance with Godot 4.x best practices
5. Suggest improvements or extension possibilities
## Analysis Targets:
- Card (base card class)
- CardContainer (card container base class)
- Pile (card stack)
- Hand (player hand)
- CardManager (card manager)
- CardFactory (card factory)
- DraggableObject (draggable object)
- DropZone (drop zone)

# Godot Card Framework Export
Package the Card Framework for Godot AssetLib or export to other projects.
## Usage
```
/godot-export [type] [options]
```
## Steps:
1. Select files to export (addons/card-framework/*)
2. Exclude unnecessary files (.import, .tmp, etc.)
3. Check/create plugin.cfg file
4. Organize README and documentation files
5. Decide whether to include example projects
6. Create compressed package or prepare for AssetLib
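The packaging steps above can be sketched as a small helper. This is illustrative only: the function name, output filename, and tar format are assumptions, and `.import`/`.tmp` files are excluded as in step 2.

```bash
# Illustrative sketch of the "addon" export type; the output filename and
# tar format are assumptions. .import and .tmp files are excluded per step 2.
package_addon() {
  out="${1:-card-framework-addon.tar.gz}"
  find addons/card-framework -type f \
    ! -name '*.import' ! -name '*.tmp' -print \
    | tar -czf "$out" -T -
}
```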
## Export Types:
- assetlib: Package for Godot AssetLib
- addon: Addon for other projects
- source: Source code only
- complete: Complete package with examples
## Key Files to Include:
- addons/card-framework/ (core framework)
- example1/ (basic example)
- freecell/ (advanced example)
- README.md
- LICENSE.md

# Godot Card Game Feature Implementation
Implement new features using the Card Framework.
## Usage
```
/godot-implement [feature_name] [description]
```
## Steps:
1. Analyze existing Card Framework architecture
2. Identify components needed for the requested feature
3. Plan extension of existing classes or creation of new classes
4. Implement functionality in GDScript
5. Create/modify scene (.tscn) files if necessary
6. Update JSON card data structure if needed
7. Provide simple test code or usage examples
## Implementable Features:
- New CardContainer types (e.g., Deck, DiscardPile)
- Card effect systems
- Game-specific card rules
- Animation effects
- Sound integration
- Multiplayer functionality
- Save/load systems

# Godot Card Framework Testing
Create scenes or scripts to test Card Framework functionality.
## Usage
```
/godot-test [test_type] [component]
```
## Steps:
1. Identify key features of the component to test
2. Create test scene (.tscn) or script (.gd)
3. Compare expected behavior with actual behavior
4. Test edge cases and error conditions
5. Document test results
## Test Types:
- unit: Individual class/method testing
- integration: Inter-component interaction testing
- performance: Performance and memory usage testing
- visual: UI/animation behavior testing
- gameplay: Real gameplay scenario testing
## Testable Components:
- Card drag-and-drop
- CardContainer add/remove
- Hand reordering
- Pile stacking
- Factory card creation
- History undo/redo

# Quick Documentation Sync
Fast documentation update for development workflow - compares working directory with last commit.
## Usage
```
/quick-sync
```
## Process
### 1. Change Detection
- **Compare with HEAD**: `git diff --name-only HEAD` for working directory changes
- **Check staged files**: `git diff --cached --name-only` for staged changes
- **Focus on**: Only files in `addons/card-framework/` that affect public API
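The three detection rules can be combined into one helper. This is a minimal sketch, assuming it runs at the repository root and that the public API surface is exactly the `.gd` files under `addons/card-framework/`:

```bash
# Minimal sketch: union of working-tree and staged changes, filtered to the
# GDScript files that make up the public API (path filter is an assumption).
changed_api_files() {
  { git diff --name-only HEAD
    git diff --cached --name-only
  } | sort -u | grep '^addons/card-framework/.*\.gd$' || true
}
```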
### 2. API Quick Update
- **Conditional analysis**: Only if GDScript files changed
- **Fast scan**: `/analyze [changed-files] --focus api --persona-scribe=en --uc`
- **Incremental update**: Update only affected sections in `docs/API.md`
- **Skip**: Comprehensive regeneration and changelog
### 3. Documentation Consistency Check
- **Version validation**: Ensure version numbers are consistent across files
- **Link verification**: Quick check of internal documentation links
- **Format validation**: Basic markdown syntax verification
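A rough sketch of the version-validation idea: collect every `vX.Y.Z` string across the docs and fail when more than one distinct version appears. The file globs and version pattern are assumptions.

```bash
# Rough sketch: succeed only when at most one distinct vX.Y.Z string
# appears across the docs (file list is an assumption).
check_versions() {
  versions="$(grep -rhoE 'v[0-9]+\.[0-9]+\.[0-9]+' README.md docs/ 2>/dev/null | sort -u)"
  [ "$(printf '%s\n' "$versions" | grep -c .)" -le 1 ]
}
```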
### 4. Fast Review
- **Change summary**: Report what was updated and why
- **Warning flags**: Highlight potential issues requiring manual review
- **Recommendations**: Suggest when full `/sync-docs` is needed
## Use Cases
- **Development**: After API changes, before committing
- **PR preparation**: Quick validation before creating pull request
- **Continuous validation**: During iterative development cycles
- **Staging check**: Before pushing to shared branches
## When to Use Full Sync Instead
- Before creating version tags
- After major API changes
- When example projects are modified
- For release preparation
**Note:** This command works with uncommitted changes. For release preparation, always use `/sync-docs` with a proper version tag.

# Sync Documentation with Version Tag
Automatically synchronize all documentation files when a new version tag is created.
## Usage
```
/sync-docs [version-tag]
```
**Example:** `/sync-docs v1.1.4`
## Process Overview
This command performs a comprehensive documentation update by comparing changes between the current version and the specified tag, then regenerating all relevant documentation files using the SuperClaude framework.
## Steps
### 1. Version Analysis
- **Find previous version tag**: Use `git tag --sort=version:refname | grep -v [current-tag] | tail -1`
- **Compare versions**: `git diff [previous-tag]..[current-tag]` for changed files
- **Extract commits**: `git log --oneline [previous-tag]..[current-tag]` for changelog
- **Focus areas**: Files in `addons/card-framework/`, `example1/`, `freecell/`
- **Change categorization**: Breaking changes, new features, bug fixes, documentation
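The tag lookup and comparison range can be sketched as follows, assuming tag names sort correctly under `--sort=version:refname` (e.g. `v1.1.3` before `v1.1.4`):

```bash
# Sketch of the previous-tag lookup, assuming version-style tag names that
# sort correctly under --sort=version:refname.
prev_tag() {
  git tag --sort=version:refname | grep -vx "$1" | tail -n 1
}
# Changed files and commits for the release range:
#   git diff --name-only "$(prev_tag v1.1.4)"..v1.1.4
#   git log --oneline "$(prev_tag v1.1.4)"..v1.1.4
```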
### 2. API Documentation Update
- Use `/analyze addons/card-framework/ --focus api --persona-scribe=en --ultrathink`
- Update `docs/API.md` with latest class references, methods, and properties
- Maintain existing documentation structure and formatting style
- Preserve manual annotations and examples where applicable
### 3. Changelog Generation
- **Collect commits**: `git log --oneline [previous-tag]..[current-tag]`
- **Categorize changes**: Group by type (feat:, fix:, docs:, refactor:, etc.)
- **Generate entries**: Use `--persona-scribe=en` following Keep a Changelog format
- **Update CHANGELOG.md**: Add new version section with categorized changes
- **Include context**: Breaking changes, deprecations, migration notes
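The categorization step can be sketched as a filter over `git log --oneline` output. The section names and prefix mapping below are assumptions loosely following the Keep a Changelog format:

```bash
# Sketch only: group one-line commit messages (conventional-commit style)
# into changelog sections. Input lines look like "abc1234 feat: ...".
categorize_commits() {
  awk '
    {
      sub(/^[0-9a-f]+ /, "")           # strip the short hash
      if      ($0 ~ /^feat(\(|:)/) added = added "- " $0 "\n"
      else if ($0 ~ /^fix(\(|:)/)  fixed = fixed "- " $0 "\n"
      else if ($0 ~ /^docs(\(|:)/) docs  = docs  "- " $0 "\n"
      else                         other = other "- " $0 "\n"
    }
    END {
      if (added) printf "### Added\n%s", added
      if (fixed) printf "### Fixed\n%s", fixed
      if (docs)  printf "### Documentation\n%s", docs
      if (other) printf "### Other\n%s", other
    }'
}
# Usage: git log --oneline v1.1.3..v1.1.4 | categorize_commits
```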
### 4. README Updates
- **Main README.md**: Update version badges, feature descriptions if changed
- **example1/README.md**: Sync with any example project changes using `--persona-scribe=en`
- **freecell/README.md**: Update advanced implementation patterns using `--persona-scribe=en`
- Maintain educational tone and beginner-friendly approach for example1
- Preserve advanced framework extension focus for freecell
### 5. Documentation Index Update
- Update `docs/index.md` with any new documentation files
- Ensure all cross-references are working
- Update version information and compatibility notes
### 6. Quality Review
- Use SuperClaude Task tool for comprehensive documentation review
- Check for consistency across all updated files
- Verify markdown formatting and link integrity
- Validate version number consistency throughout all files
### 7. Git Integration
- Stage all updated documentation files
- Create commit with descriptive message following project conventions
- Tag commit appropriately if needed
## SuperClaude Configuration
**Personas Used:**
- `--persona-scribe=en` for all documentation generation
- `--persona-analyzer` for change analysis
- `--persona-qa` for final review
**Flags Applied:**
- `--ultrathink` for API analysis requiring deep understanding
- `--think-hard` for changelog generation and impact assessment
- `--uc` for token efficiency during bulk operations
- `--validate` for quality assurance steps
**MCP Integration:**
- **Context7**: For framework patterns and documentation standards
- **Sequential**: For systematic multi-step documentation updates
- **Task**: For comprehensive quality review process
## Error Handling
- Verify git tag exists before starting
- Backup existing documentation files
- Rollback on any step failure
- Report specific errors and suggested fixes
## Dependencies
- Git repository with proper version tagging
- SuperClaude framework available
- Internet connection for MCP servers
- Write access to docs/ directory
## Example Workflow
```bash
# Developer creates new tag
git tag v1.1.4
git push origin v1.1.4
# Run documentation sync
claude "/sync-docs v1.1.4"
# Review and commit changes
git add docs/ *.md **/README.md
git commit -m "docs: sync documentation for v1.1.4"
```

Add a dependency between tasks.
Arguments: $ARGUMENTS
Parse the task IDs to establish dependency relationship.
## Adding Dependencies
Creates a dependency where one task must be completed before another can start.
## Argument Parsing
Parse natural language or IDs:
- "make 5 depend on 3" → task 5 depends on task 3
- "5 needs 3" → task 5 depends on task 3
- "5 3" → task 5 depends on task 3
- "5 after 3" → task 5 depends on task 3
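Every phrasing above lists the dependent task's id first, so a sketch that simply grabs the first two numbers covers these examples; the assistant's real parsing is more flexible than this.

```bash
# Minimal sketch: extract the first two task ids from a natural-language
# request; only covers the literal patterns listed above.
parse_dep_ids() {
  printf '%s\n' "$1" | grep -oE '[0-9]+' | head -n 2 | xargs
}
# parse_dep_ids "make 5 depend on 3"  → "5 3"
# then: task-master add-dependency --id=5 --depends-on=3
```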
## Execution
```bash
task-master add-dependency --id=<task-id> --depends-on=<dependency-id>
```
## Validation
Before adding:
1. **Verify both tasks exist**
2. **Check for circular dependencies**
3. **Ensure dependency makes logical sense**
4. **Warn if creating complex chains**
## Smart Features
- Detect if dependency already exists
- Suggest related dependencies
- Show impact on task flow
- Update task priorities if needed
## Post-Addition
After adding dependency:
1. Show updated dependency graph
2. Identify any newly blocked tasks
3. Suggest task order changes
4. Update project timeline
## Example Flows
```
/project:tm/add-dependency 5 needs 3
→ Task #5 now depends on Task #3
→ Task #5 is now blocked until #3 completes
→ Suggested: Also consider if #5 needs #4
```

Add a subtask to a parent task.
Arguments: $ARGUMENTS
Parse arguments to create a new subtask or convert existing task.
## Adding Subtasks
Creates subtasks to break down complex parent tasks into manageable pieces.
## Argument Parsing
Flexible natural language:
- "add subtask to 5: implement login form"
- "break down 5 with: setup, implement, test"
- "subtask for 5: handle edge cases"
- "5: validate user input" → adds subtask to task 5
## Execution Modes
### 1. Create New Subtask
```bash
task-master add-subtask --parent=<id> --title="<title>" --description="<desc>"
```
### 2. Convert Existing Task
```bash
task-master add-subtask --parent=<id> --task-id=<existing-id>
```
## Smart Features
1. **Automatic Subtask Generation**
- If title contains "and" or commas, create multiple
- Suggest common subtask patterns
- Inherit parent's context
2. **Intelligent Defaults**
- Priority based on parent
- Appropriate time estimates
- Logical dependencies between subtasks
3. **Validation**
- Check parent task complexity
- Warn if too many subtasks
- Ensure subtask makes sense
## Creation Process
1. Parse parent task context
2. Generate subtask with ID like "5.1"
3. Set appropriate defaults
4. Link to parent task
5. Update parent's time estimate
## Example Flows
```
/project:tm/add-subtask to 5: implement user authentication
→ Created subtask #5.1: "implement user authentication"
→ Parent task #5 now has 1 subtask
→ Suggested next subtasks: tests, documentation
/project:tm/add-subtask 5: setup, implement, test
→ Created 3 subtasks:
#5.1: setup
#5.2: implement
#5.3: test
```
## Post-Creation
- Show updated task hierarchy
- Suggest logical next subtasks
- Update complexity estimates
- Recommend subtask order

Convert an existing task into a subtask.
Arguments: $ARGUMENTS
Parse parent ID and task ID to convert.
## Task Conversion
Converts an existing standalone task into a subtask of another task.
## Argument Parsing
- "move task 8 under 5"
- "make 8 a subtask of 5"
- "nest 8 in 5"
- "5 8" → make task 8 a subtask of task 5
## Execution
```bash
task-master add-subtask --parent=<parent-id> --task-id=<task-to-convert>
```
## Pre-Conversion Checks
1. **Validation**
- Both tasks exist and are valid
- No circular parent relationships
- Task isn't already a subtask
- Logical hierarchy makes sense
2. **Impact Analysis**
- Dependencies that will be affected
- Tasks that depend on converting task
- Priority alignment needed
- Status compatibility
## Conversion Process
1. Change task ID from "8" to "5.1" (next available)
2. Update all dependency references
3. Inherit parent's context where appropriate
4. Adjust priorities if needed
5. Update time estimates
## Smart Features
- Preserve task history
- Maintain dependencies
- Update all references
- Create conversion log
## Example
```
/project:tm/add-subtask/from-task 5 8
→ Converting: Task #8 becomes subtask #5.1
→ Updated: 3 dependency references
→ Parent task #5 now has 1 subtask
→ Note: Subtask inherits parent's priority
Before: #8 "Implement validation" (standalone)
After: #5.1 "Implement validation" (subtask of #5)
```
## Post-Conversion
- Show new task hierarchy
- List updated dependencies
- Verify project integrity
- Suggest related conversions

Add new tasks with intelligent parsing and context awareness.
Arguments: $ARGUMENTS
## Smart Task Addition
Parse natural language to create well-structured tasks.
### 1. **Input Understanding**
I'll intelligently parse your request:
- Natural language → Structured task
- Detect priority from keywords (urgent, ASAP, important)
- Infer dependencies from context
- Suggest complexity based on description
- Determine task type (feature, bug, refactor, test, docs)
### 2. **Smart Parsing Examples**
**"Add urgent task to fix login bug"**
→ Title: Fix login bug
→ Priority: high
→ Type: bug
→ Suggested complexity: medium
**"Create task for API documentation after task 23 is done"**
→ Title: API documentation
→ Dependencies: [23]
→ Type: documentation
→ Priority: medium
**"Need to refactor auth module - depends on 12 and 15, high complexity"**
→ Title: Refactor auth module
→ Dependencies: [12, 15]
→ Complexity: high
→ Type: refactor
### 3. **Context Enhancement**
Based on current project state:
- Suggest related existing tasks
- Warn about potential conflicts
- Recommend dependencies
- Propose subtasks if complex
### 4. **Interactive Refinement**
```yaml
Task Preview:
─────────────
Title: [Extracted title]
Priority: [Inferred priority]
Dependencies: [Detected dependencies]
Complexity: [Estimated complexity]
Suggestions:
- Similar task #34 exists, consider as dependency?
- This seems complex, break into subtasks?
- Tasks #45-47 work on same module
```
### 5. **Validation & Creation**
Before creating:
- Validate dependencies exist
- Check for duplicates
- Ensure logical ordering
- Verify task completeness
### 6. **Smart Defaults**
Intelligent defaults based on:
- Task type patterns
- Team conventions
- Historical data
- Current sprint/phase
Result: High-quality tasks from minimal input.

Analyze task complexity and generate expansion recommendations.
Arguments: $ARGUMENTS
Perform deep analysis of task complexity across the project.
## Complexity Analysis
Uses AI to analyze tasks and recommend which ones need breakdown.
## Execution Options
```bash
task-master analyze-complexity [--research] [--threshold=5]
```
## Analysis Parameters
- `--research` → Use research AI for deeper analysis
- `--threshold=5` → Only flag tasks above complexity 5
- Default: Analyze all pending tasks
## Analysis Process
### 1. **Task Evaluation**
For each task, AI evaluates:
- Technical complexity
- Time requirements
- Dependency complexity
- Risk factors
- Knowledge requirements
### 2. **Complexity Scoring**
Assigns score 1-10 based on:
- Implementation difficulty
- Integration challenges
- Testing requirements
- Unknown factors
- Technical debt risk
### 3. **Recommendations**
For complex tasks:
- Suggest expansion approach
- Recommend subtask breakdown
- Identify risk areas
- Propose mitigation strategies
## Smart Analysis Features
1. **Pattern Recognition**
- Similar task comparisons
- Historical complexity accuracy
- Team velocity consideration
- Technology stack factors
2. **Contextual Factors**
- Team expertise
- Available resources
- Timeline constraints
- Business criticality
3. **Risk Assessment**
- Technical risks
- Timeline risks
- Dependency risks
- Knowledge gaps
## Output Format
```
Task Complexity Analysis Report
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
High Complexity Tasks (>7):
📍 #5 "Implement real-time sync" - Score: 9/10
Factors: WebSocket complexity, state management, conflict resolution
Recommendation: Expand into 5-7 subtasks
Risks: Performance, data consistency
📍 #12 "Migrate database schema" - Score: 8/10
Factors: Data migration, zero downtime, rollback strategy
Recommendation: Expand into 4-5 subtasks
Risks: Data loss, downtime
Medium Complexity Tasks (5-7):
📍 #23 "Add export functionality" - Score: 6/10
Consider expansion if timeline tight
Low Complexity Tasks (<5):
✅ 15 tasks - No expansion needed
Summary:
- Expand immediately: 2 tasks
- Consider expanding: 5 tasks
- Keep as-is: 15 tasks
```
## Actionable Output
For each high-complexity task:
1. Complexity score with reasoning
2. Specific expansion suggestions
3. Risk mitigation approaches
4. Recommended subtask structure
## Integration
Results are:
- Saved to `.taskmaster/reports/complexity-analysis.md`
- Used by expand command
- Inform sprint planning
- Guide resource allocation
## Next Steps
After analysis:
```
/project:tm/expand 5 # Expand specific task
/project:tm/expand/all # Expand all recommended
/project:tm/complexity-report # View detailed report
```

Clear all subtasks from all tasks globally.
## Global Subtask Clearing
Remove all subtasks across the entire project. Use with extreme caution.
## Execution
```bash
task-master clear-subtasks --all
```
## Pre-Clear Analysis
1. **Project-Wide Summary**
```
Global Subtask Summary
━━━━━━━━━━━━━━━━━━━━
Total parent tasks: 12
Total subtasks: 47
- Completed: 15
- In-progress: 8
- Pending: 24
Work at risk: ~120 hours
```
2. **Critical Warnings**
- In-progress subtasks that will lose work
- Completed subtasks with valuable history
- Complex dependency chains
- Integration test results
## Double Confirmation
```
⚠️ DESTRUCTIVE OPERATION WARNING ⚠️
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
This will remove ALL 47 subtasks from your project
Including 8 in-progress and 15 completed subtasks
This action CANNOT be undone
Type 'CLEAR ALL SUBTASKS' to confirm:
```
## Smart Safeguards
- Require explicit confirmation phrase
- Create automatic backup
- Log all removed data
- Option to export first
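The automatic-backup safeguard might look like this. Hypothetical sketch: the `tasks.json` path and the `subtasks-YYYYMMDD.json` naming are assumptions taken from the post-clear report format.

```bash
# Hypothetical sketch of the automatic-backup safeguard; the tasks.json
# path and backup filename pattern are assumptions.
backup_tasks() {
  mkdir -p .taskmaster/backup
  cp .taskmaster/tasks.json \
    ".taskmaster/backup/subtasks-$(date +%Y%m%d).json"
}
```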
## Use Cases
Valid reasons for global clear:
- Project restructuring
- Major pivot in approach
- Starting fresh breakdown
- Switching to different task organization
## Process
1. Full project analysis
2. Create backup file
3. Show detailed impact
4. Require confirmation
5. Execute removal
6. Generate summary report
## Alternative Suggestions
Before clearing all:
- Export subtasks to file
- Clear only pending subtasks
- Clear by task category
- Archive instead of delete
## Post-Clear Report
```
Global Subtask Clear Complete
━━━━━━━━━━━━━━━━━━━━━━━━━━━
Removed: 47 subtasks from 12 tasks
Backup saved: .taskmaster/backup/subtasks-20240115.json
Parent tasks updated: 12
Time estimates adjusted: Yes
Next steps:
- Review updated task list
- Re-expand complex tasks as needed
- Check project timeline
```

Clear all subtasks from a specific task.
Arguments: $ARGUMENTS (task ID)
Remove all subtasks from a parent task at once.
## Clearing Subtasks
Bulk removal of all subtasks from a parent task.
## Execution
```bash
task-master clear-subtasks --id=<task-id>
```
## Pre-Clear Analysis
1. **Subtask Summary**
- Number of subtasks
- Completion status of each
- Work already done
- Dependencies affected
2. **Impact Assessment**
- Data that will be lost
- Dependencies to be removed
- Effect on project timeline
- Parent task implications
## Confirmation Required
```
Clear Subtasks Confirmation
━━━━━━━━━━━━━━━━━━━━━━━━━
Parent Task: #5 "Implement user authentication"
Subtasks to remove: 4
- #5.1 "Setup auth framework" (done)
- #5.2 "Create login form" (in-progress)
- #5.3 "Add validation" (pending)
- #5.4 "Write tests" (pending)
⚠️ This will permanently delete all subtask data
Continue? (y/n)
```
## Smart Features
- Option to convert to standalone tasks
- Backup task data before clearing
- Preserve completed work history
- Update parent task appropriately
## Process
1. List all subtasks for confirmation
2. Check for in-progress work
3. Remove all subtasks
4. Update parent task
5. Clean up dependencies
## Alternative Options
Suggest alternatives:
- Convert important subtasks to tasks
- Keep completed subtasks
- Archive instead of delete
- Export subtask data first
## Post-Clear
- Show updated parent task
- Recalculate time estimates
- Update task complexity
- Suggest next steps
## Example
```
/project:tm/clear-subtasks 5
→ Found 4 subtasks to remove
→ Warning: Subtask #5.2 is in-progress
→ Cleared all subtasks from task #5
→ Updated parent task estimates
→ Suggestion: Consider re-expanding with better breakdown
```

Display the task complexity analysis report.
Arguments: $ARGUMENTS
View the detailed complexity analysis generated by analyze-complexity command.
## Viewing Complexity Report
Shows comprehensive task complexity analysis with actionable insights.
## Execution
```bash
task-master complexity-report [--file=<path>]
```
## Report Location
Default: `.taskmaster/reports/complexity-analysis.md`
Custom: specify with the `--file` parameter
## Report Contents
### 1. **Executive Summary**
```
Complexity Analysis Summary
━━━━━━━━━━━━━━━━━━━━━━━━
Analysis Date: 2024-01-15
Tasks Analyzed: 32
High Complexity: 5 (16%)
Medium Complexity: 12 (37%)
Low Complexity: 15 (47%)
Critical Findings:
- 5 tasks need immediate expansion
- 3 tasks have high technical risk
- 2 tasks block critical path
```
### 2. **Detailed Task Analysis**
For each complex task:
- Complexity score breakdown
- Contributing factors
- Specific risks identified
- Expansion recommendations
- Similar completed tasks
### 3. **Risk Matrix**
Visual representation:
```
Risk vs Complexity Matrix
━━━━━━━━━━━━━━━━━━━━━━━
High Risk | #5(9) #12(8) | #23(6)
Med Risk | #34(7) | #45(5) #67(5)
Low Risk | #78(8) | [15 tasks]
| High Complex | Med Complex
```
### 4. **Recommendations**
**Immediate Actions:**
1. Expand task #5 - Critical path + high complexity
2. Expand task #12 - High risk + dependencies
3. Review task #34 - Consider splitting
**Sprint Planning:**
- Don't schedule multiple high-complexity tasks together
- Ensure expertise available for complex tasks
- Build in buffer time for unknowns
## Interactive Features
When viewing report:
1. **Quick Actions**
- Press 'e' to expand a task
- Press 'd' for task details
- Press 'r' to refresh analysis
2. **Filtering**
- View by complexity level
- Filter by risk factors
- Show only actionable items
3. **Export Options**
- Markdown format
- CSV for spreadsheets
- JSON for tools
## Report Intelligence
- Compares with historical data
- Shows complexity trends
- Identifies patterns
- Suggests process improvements
## Integration
Use report for:
- Sprint planning sessions
- Resource allocation
- Risk assessment
- Team discussions
- Client updates
## Example Usage
```
/project:tm/complexity-report
→ Opens latest analysis
/project:tm/complexity-report --file=archived/2024-01-01.md
→ View historical analysis
After viewing:
/project:tm/expand 5
→ Expand high-complexity task
```

Expand all pending tasks that need subtasks.
## Bulk Task Expansion
Intelligently expands all tasks that would benefit from breakdown.
## Execution
```bash
task-master expand --all
```
## Smart Selection
Only expands tasks that:
- Are marked as pending
- Have high complexity (>5)
- Lack existing subtasks
- Would benefit from breakdown
## Expansion Process
1. **Analysis Phase**
- Identify expansion candidates
- Group related tasks
- Plan expansion strategy
2. **Batch Processing**
- Expand tasks in logical order
- Maintain consistency
- Preserve relationships
- Optimize for parallelism
3. **Quality Control**
- Ensure subtask quality
- Avoid over-decomposition
- Maintain task coherence
- Update dependencies
## Options
- Add `force` to expand all regardless of complexity
- Add `research` for enhanced AI analysis
## Results
After bulk expansion:
- Summary of tasks expanded
- New subtask count
- Updated complexity metrics
- Suggested task order

Break down a complex task into subtasks.
Arguments: $ARGUMENTS (task ID)
## Intelligent Task Expansion
Analyzes a task and creates detailed subtasks for better manageability.
## Execution
```bash
task-master expand --id=$ARGUMENTS
```
## Expansion Process
1. **Task Analysis**
- Review task complexity
- Identify components
- Detect technical challenges
- Estimate time requirements
2. **Subtask Generation**
- Create 3-7 subtasks typically
- Each subtask 1-4 hours
- Logical implementation order
- Clear acceptance criteria
3. **Smart Breakdown**
- Setup/configuration tasks
- Core implementation
- Testing components
- Integration steps
- Documentation updates
## Enhanced Features
Based on task type:
- **Feature**: Setup → Implement → Test → Integrate
- **Bug Fix**: Reproduce → Diagnose → Fix → Verify
- **Refactor**: Analyze → Plan → Refactor → Validate
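The type-based patterns above can be sketched as a simple lookup; the type labels and the fallback pattern are assumptions.

```bash
# Hypothetical lookup for the three breakdown patterns listed above;
# labels and the default case are assumptions.
subtask_pattern() {
  case "$1" in
    feature)  echo "Setup Implement Test Integrate" ;;
    bugfix)   echo "Reproduce Diagnose Fix Verify" ;;
    refactor) echo "Analyze Plan Refactor Validate" ;;
    *)        echo "Implement Test" ;;
  esac
}
```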
## Post-Expansion
After expansion:
1. Show subtask hierarchy
2. Update time estimates
3. Suggest implementation order
4. Highlight critical path

Automatically fix dependency issues found during validation.
## Automatic Dependency Repair
Intelligently fixes common dependency problems while preserving project logic.
## Execution
```bash
task-master fix-dependencies
```
## What Gets Fixed
### 1. **Auto-Fixable Issues**
- Remove references to deleted tasks
- Break simple circular dependencies
- Remove self-dependencies
- Clean up duplicate dependencies
### 2. **Smart Resolutions**
- Reorder dependencies to maintain logic
- Suggest task merging for over-dependent tasks
- Flatten unnecessary dependency chains
- Remove redundant transitive dependencies
### 3. **Manual Review Required**
- Complex circular dependencies
- Critical path modifications
- Business logic dependencies
- High-impact changes
## Fix Process
1. **Analysis Phase**
- Run validation check
- Categorize issues by type
- Determine fix strategy
2. **Execution Phase**
- Apply automatic fixes
- Log all changes made
- Preserve task relationships
3. **Verification Phase**
- Re-validate after fixes
- Show before/after comparison
- Highlight manual fixes needed
## Smart Features
- Preserves intended task flow
- Minimal disruption approach
- Creates fix history/log
- Suggests manual interventions
## Output Example
```
Dependency Auto-Fix Report
━━━━━━━━━━━━━━━━━━━━━━━━
Fixed Automatically:
✅ Removed 2 references to deleted tasks
✅ Resolved 1 self-dependency
✅ Cleaned 3 redundant dependencies
Manual Review Needed:
⚠️ Complex circular dependency: #12 → #15 → #18 → #12
Suggestion: Make #15 not depend on #12
⚠️ Task #45 has 8 dependencies
Suggestion: Break into subtasks
Run '/project:tm/validate-dependencies' to verify fixes
```
## Safety
- Preview mode available
- Rollback capability
- Change logging
- No data loss

Generate individual task files from tasks.json.
## Task File Generation
Creates separate markdown files for each task, perfect for AI agents or documentation.
## Execution
```bash
task-master generate
```
## What It Creates
For each task, this generates a file like `task_001.txt`:
```
Task ID: 1
Title: Implement user authentication
Status: pending
Priority: high
Dependencies: []
Created: 2024-01-15
Complexity: 7
## Description
Create a secure user authentication system with login, logout, and session management.
## Details
- Use JWT tokens for session management
- Implement secure password hashing
- Add remember me functionality
- Include password reset flow
## Test Strategy
- Unit tests for auth functions
- Integration tests for login flow
- Security testing for vulnerabilities
- Performance tests for concurrent logins
## Subtasks
1.1 Setup authentication framework (pending)
1.2 Create login endpoints (pending)
1.3 Implement session management (pending)
1.4 Add password reset (pending)
```
## File Organization
Creates structure:
```
.taskmaster/
└── tasks/
├── task_001.txt
├── task_002.txt
├── task_003.txt
└── ...
```
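Conceptually, generation is a loop over `tasks.json` writing one file per task. A minimal sketch, assuming a top-level `tasks` array with `id`, `title`, and `description` fields (the real command emits far more sections):

```python
import json
from pathlib import Path

def generate_task_files(tasks_json="tasks.json", out_dir=".taskmaster/tasks"):
    """Write one task_<id>.txt file per task, zero-padded to three digits."""
    tasks = json.loads(Path(tasks_json).read_text())["tasks"]
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for t in tasks:
        lines = [
            f"Task ID: {t['id']}",
            f"Title: {t['title']}",
            f"Status: {t.get('status', 'pending')}",
            f"Dependencies: {t.get('dependencies', [])}",
            "",
            "## Description",
            t.get("description", ""),
        ]
        (out / f"task_{t['id']:03d}.txt").write_text("\n".join(lines) + "\n")
```

The zero-padded names keep the directory listing (and git diffs) in task order.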
## Smart Features
1. **Consistent Formatting**
- Standardized structure
- Clear sections
- AI-readable format
- Markdown compatible
2. **Contextual Information**
- Full task details
- Related task references
- Progress indicators
- Implementation notes
3. **Incremental Updates**
- Only regenerate changed tasks
- Preserve custom additions
- Track generation timestamp
- Version control friendly
## Use Cases
- **AI Context**: Provide task context to AI assistants
- **Documentation**: Standalone task documentation
- **Archival**: Task history preservation
- **Sharing**: Send specific tasks to team members
- **Review**: Easier task review process
## Generation Options
Based on arguments:
- Filter by status
- Include/exclude completed
- Custom templates
- Different formats
## Post-Generation
```
Task File Generation Complete
━━━━━━━━━━━━━━━━━━━━━━━━━━
Generated: 45 task files
Location: .taskmaster/tasks/
Total size: 156 KB
New files: 5
Updated files: 12
Unchanged: 28
Ready for:
- AI agent consumption
- Version control
- Team distribution
```
## Integration Benefits
- Git-trackable task history
- Easy task sharing
- AI tool compatibility
- Offline task access
- Backup redundancy

---
Show help for Task Master commands.
Arguments: $ARGUMENTS
Display help for Task Master commands. If arguments provided, show specific command help.
## Task Master Command Help
### Quick Navigation
Type `/project:tm/` and use tab completion to explore all commands.
### Command Categories
#### 🚀 Setup & Installation
- `/project:tm/setup/install` - Comprehensive installation guide
- `/project:tm/setup/quick-install` - One-line global install
#### 📋 Project Setup
- `/project:tm/init` - Initialize new project
- `/project:tm/init/quick` - Quick setup with auto-confirm
- `/project:tm/models` - View AI configuration
- `/project:tm/models/setup` - Configure AI providers
#### 🎯 Task Generation
- `/project:tm/parse-prd` - Generate tasks from PRD
- `/project:tm/parse-prd/with-research` - Enhanced parsing
- `/project:tm/generate` - Create task files
#### 📝 Task Management
- `/project:tm/list` - List tasks (natural language filters)
- `/project:tm/show <id>` - Display task details
- `/project:tm/add-task` - Create new task
- `/project:tm/update` - Update tasks naturally
- `/project:tm/next` - Get next task recommendation
#### 🔄 Status Management
- `/project:tm/set-status/to-pending <id>`
- `/project:tm/set-status/to-in-progress <id>`
- `/project:tm/set-status/to-done <id>`
- `/project:tm/set-status/to-review <id>`
- `/project:tm/set-status/to-deferred <id>`
- `/project:tm/set-status/to-cancelled <id>`
#### 🔍 Analysis & Breakdown
- `/project:tm/analyze-complexity` - Analyze task complexity
- `/project:tm/expand <id>` - Break down complex task
- `/project:tm/expand/all` - Expand all eligible tasks
#### 🔗 Dependencies
- `/project:tm/add-dependency` - Add task dependency
- `/project:tm/remove-dependency` - Remove dependency
- `/project:tm/validate-dependencies` - Check for issues
#### 🤖 Workflows
- `/project:tm/workflows/smart-flow` - Intelligent workflows
- `/project:tm/workflows/pipeline` - Command chaining
- `/project:tm/workflows/auto-implement` - Auto-implementation
#### 📊 Utilities
- `/project:tm/utils/analyze` - Project analysis
- `/project:tm/status` - Project dashboard
- `/project:tm/learn` - Interactive learning
### Natural Language Examples
```
/project:tm/list pending high priority
/project:tm/update mark all API tasks as done
/project:tm/add-task create login system with OAuth
/project:tm/show current
```
### Getting Started
1. Install: `/project:tm/setup/quick-install`
2. Initialize: `/project:tm/init/quick`
3. Learn: `/project:tm/learn start`
4. Work: `/project:tm/workflows/smart-flow`
For detailed command info: `/project:tm/help <command-name>`

---
Quick initialization with auto-confirmation.
Arguments: $ARGUMENTS
Initialize a Task Master project without prompts, accepting all defaults.
## Quick Setup
```bash
task-master init -y
```
## What It Does
1. Creates `.taskmaster/` directory structure
2. Initializes empty `tasks.json`
3. Sets up default configuration
4. Uses directory name as project name
5. Skips all confirmation prompts
## Smart Defaults
- Project name: Current directory name
- Description: "Task Master Project"
- Model config: Existing environment vars
- Task structure: Standard format
## Next Steps
After quick init:
1. Configure AI models if needed:
```
/project:tm/models/setup
```
2. Parse PRD if available:
```
/project:tm/parse-prd <file>
```
3. Or create first task:
```
/project:tm/add-task create initial setup
```
Perfect for rapid project setup!

---
Initialize a new Task Master project.
Arguments: $ARGUMENTS
Parse arguments to determine initialization preferences.
## Initialization Process
1. **Parse Arguments**
- PRD file path (if provided)
- Project name
- Auto-confirm flag (-y)
2. **Project Setup**
```bash
task-master init
```
3. **Smart Initialization**
- Detect existing project files
- Suggest project name from directory
- Check for git repository
- Verify AI provider configuration
## Configuration Options
Based on arguments:
- `quick` / `-y` → Skip confirmations
- `<file.md>` → Use as PRD after init
- `--name=<name>` → Set project name
- `--description=<desc>` → Set description
## Post-Initialization
After successful init:
1. Show project structure created
2. Verify AI models configured
3. Suggest next steps:
- Parse PRD if available
- Configure AI providers
- Set up git hooks
- Create first tasks
## Integration
If PRD file provided:
```
/project:tm/init my-prd.md
→ Automatically runs parse-prd after init
```

---
Learn about Task Master capabilities through interactive exploration.
Arguments: $ARGUMENTS
## Interactive Task Master Learning
Based on your input, I'll help you discover capabilities:
### 1. **What are you trying to do?**
If $ARGUMENTS contains:
- "start" / "begin" → Show project initialization workflows
- "manage" / "organize" → Show task management commands
- "automate" / "auto" → Show automation workflows
- "analyze" / "report" → Show analysis tools
- "fix" / "problem" → Show troubleshooting commands
- "fast" / "quick" → Show efficiency shortcuts
### 2. **Intelligent Suggestions**
Based on your project state:
**No tasks yet?**
```
You'll want to start with:
1. /project:tm/init <prd-file>
   → Creates tasks from requirements
2. /project:tm/parse-prd <file>
   → Alternative task generation
Try: /project:tm/init demo-prd.md
```
**Have tasks?**
Let me analyze what you might need...
- Many pending tasks? → Learn sprint planning
- Complex tasks? → Learn task expansion
- Daily work? → Learn workflow automation
### 3. **Command Discovery**
**By Category:**
- 📋 Task Management: list, show, add, update, complete
- 🔄 Workflows: auto-implement, sprint-plan, daily-standup
- 🛠️ Utilities: check-health, complexity-report, sync-memory
- 🔍 Analysis: validate-deps, show dependencies
**By Scenario:**
- "I want to see what to work on" → `/project:tm/next`
- "I need to break this down" → `/project:tm/expand <id>`
- "Show me everything" → `/project:tm/status`
- "Just do it for me" → `/project:tm/workflows/auto-implement`
### 4. **Power User Patterns**
**Command Chaining:**
```
/project:tm/next
/project:tm/set-status/to-in-progress <id>
/project:tm/workflows/auto-implement
```
**Smart Filters:**
```
/project:tm/list pending high
/project:tm/list blocked
/project:tm/list 1-5 tree
```
**Automation:**
```
/project:tm/workflows/pipeline init → expand-all → sprint-plan
```
### 5. **Learning Path**
Based on your experience level:
**Beginner Path:**
1. init → Create project
2. status → Understand state
3. next → Find work
4. complete → Finish task
**Intermediate Path:**
1. expand → Break down complex tasks
2. sprint-plan → Organize work
3. complexity-report → Understand difficulty
4. validate-deps → Ensure consistency
**Advanced Path:**
1. pipeline → Chain operations
2. smart-flow → Context-aware automation
3. Custom commands → Extend the system
### 6. **Try This Now**
Based on what you asked about, try:
[Specific command suggestion based on $ARGUMENTS]
Want to learn more about a specific command?
Type: /project:tm/help <command-name>

---
List tasks filtered by a specific status.
Arguments: $ARGUMENTS
Parse the status from arguments and list only tasks matching that status.
## Status Options
- `pending` - Not yet started
- `in-progress` - Currently being worked on
- `done` - Completed
- `review` - Awaiting review
- `deferred` - Postponed
- `cancelled` - Cancelled
## Execution
Based on $ARGUMENTS, run:
```bash
task-master list --status=$ARGUMENTS
```
## Enhanced Display
For the filtered results:
- Group by priority within the status
- Show time in current status
- Highlight tasks approaching deadlines
- Display blockers and dependencies
- Suggest next actions for each status group
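The grouping step can be sketched as follows. The `priority` field and its three levels come from the task format used throughout these docs; the helper itself is illustrative, not Task Master's own code:

```python
def group_by_priority(tasks):
    """Group an already status-filtered task list by priority,
    highest first, for the enhanced display."""
    order = {"high": 0, "medium": 1, "low": 2}
    groups = {"high": [], "medium": [], "low": []}
    for t in tasks:
        # Unknown/missing priority defaults to medium
        groups.setdefault(t.get("priority", "medium"), []).append(t)
    return {p: groups[p] for p in sorted(groups, key=lambda p: order.get(p, 99))}
```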
## Intelligent Insights
Based on the status filter:
- **Pending**: Show recommended start order
- **In-Progress**: Display idle time warnings
- **Done**: Show newly unblocked tasks
- **Review**: Indicate review duration
- **Deferred**: Show reactivation criteria
- **Cancelled**: Display impact analysis

---
List all tasks including their subtasks in a hierarchical view.
This command shows all tasks with their nested subtasks, providing a complete project overview.
## Execution
Run the Task Master list command with subtasks flag:
```bash
task-master list --with-subtasks
```
## Enhanced Display
I'll organize the output to show:
- Parent tasks with clear indicators
- Nested subtasks with proper indentation
- Status badges for quick scanning
- Dependencies and blockers highlighted
- Progress indicators for tasks with subtasks
## Smart Filtering
Based on the task hierarchy:
- Show completion percentage for parent tasks
- Highlight blocked subtask chains
- Group by functional areas
- Indicate critical path items
This gives you a complete tree view of your project structure.
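A parent task's completion percentage is simply the done fraction of its subtasks. An illustrative helper, assuming each task may carry a `subtasks` list whose entries have a `status` field:

```python
def completion_pct(task):
    """Percentage of a parent task's subtasks that are done.
    A task without subtasks is 100% only when itself done."""
    subs = task.get("subtasks", [])
    if not subs:
        return 100.0 if task.get("status") == "done" else 0.0
    done = sum(1 for s in subs if s.get("status") == "done")
    return round(100.0 * done / len(subs), 1)
```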

---
List tasks with intelligent argument parsing.
Parse arguments to determine filters and display options:
- Status: pending, in-progress, done, review, deferred, cancelled
- Priority: high, medium, low (or priority:high)
- Special: subtasks, tree, dependencies, blocked
- IDs: Direct numbers (e.g., "1,3,5" or "1-5")
- Complex: "pending high" = pending AND high priority
Arguments: $ARGUMENTS
Let me parse your request intelligently:
1. **Detect Filter Intent**
- If arguments contain status keywords → filter by status
- If arguments contain priority → filter by priority
- If arguments contain "subtasks" → include subtasks
- If arguments contain "tree" → hierarchical view
- If arguments contain numbers → show specific tasks
- If arguments contain "blocked" → show blocked tasks only
2. **Smart Combinations**
Examples of what I understand:
- "pending high" → pending tasks with high priority
- "done today" → tasks completed today
- "blocked" → tasks with unmet dependencies
- "1-5" → tasks 1 through 5
- "subtasks tree" → hierarchical view with subtasks
3. **Execute Appropriate Query**
Based on parsed intent, run the most specific task-master command
4. **Enhanced Display**
- Group by relevant criteria
- Show most important information first
- Use visual indicators for quick scanning
- Include relevant metrics
5. **Intelligent Suggestions**
Based on what you're viewing, suggest next actions:
- Many pending? → Suggest priority order
- Many blocked? → Show dependency resolution
- Looking at specific tasks? → Show related tasks
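The intent detection described above can be sketched as plain keyword matching over the argument string. This illustrates the parsing intent only; it is not the command's actual implementation:

```python
STATUSES = {"pending", "in-progress", "done", "review", "deferred", "cancelled"}
PRIORITIES = {"high", "medium", "low"}

def parse_list_args(arguments):
    """Split a free-form argument string into the filters described above."""
    filters = {"status": None, "priority": None, "ids": [], "flags": set()}
    for word in arguments.split():
        if word in STATUSES:
            filters["status"] = word
        elif word in PRIORITIES:
            filters["priority"] = word
        elif word in {"subtasks", "tree", "dependencies", "blocked"}:
            filters["flags"].add(word)
        elif "-" in word and word.replace("-", "").isdigit():
            start, end = map(int, word.split("-"))       # range like "1-5"
            filters["ids"].extend(range(start, end + 1))
        elif word.replace(",", "").isdigit():
            filters["ids"].extend(int(n) for n in word.split(","))
    return filters
```

Note the status check runs before the range check so that `in-progress` is not mistaken for an ID range.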

---
Run interactive setup to configure AI models.
## Interactive Model Configuration
Guides you through setting up AI providers for Task Master.
## Execution
```bash
task-master models --setup
```
## Setup Process
1. **Environment Check**
- Detect existing API keys
- Show current configuration
- Identify missing providers
2. **Provider Selection**
- Choose main provider (required)
- Select research provider (recommended)
- Configure fallback (optional)
3. **API Key Configuration**
- Prompt for missing keys
- Validate key format
- Test connectivity
- Save configuration
## Smart Recommendations
Based on your needs:
- **For best results**: Claude + Perplexity
- **Budget-conscious**: GPT-3.5 + Perplexity
- **Maximum capability**: GPT-4 + Perplexity + Claude fallback
## Configuration Storage
Keys can be stored in:
1. Environment variables (recommended)
2. `.env` file in project
3. Global `.taskmaster/config`
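Before running setup, it can help to see which keys are already exported. The variable names below are the ones referenced in the install guide; the helper itself is illustrative:

```python
import os

PROVIDERS = {
    "ANTHROPIC_API_KEY": "Claude (main)",
    "PERPLEXITY_API_KEY": "Perplexity (research)",
    "OPENAI_API_KEY": "OpenAI (fallback)",
}

def key_status(env=os.environ):
    """Report which provider API keys are present in the environment."""
    return {name: ("configured" if env.get(var) else "missing")
            for var, name in PROVIDERS.items()}
```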
## Post-Setup
After configuration:
- Test each provider
- Show usage examples
- Suggest next steps
- Verify parse-prd works

---
View current AI model configuration.
## Model Configuration Display
Shows the currently configured AI providers and models for Task Master.
## Execution
```bash
task-master models
```
## Information Displayed
1. **Main Provider**
- Model ID and name
- API key status (configured/missing)
- Usage: Primary task generation
2. **Research Provider**
- Model ID and name
- API key status
- Usage: Enhanced research mode
3. **Fallback Provider**
- Model ID and name
- API key status
- Usage: Backup when main fails
## Visual Status
```
Task Master AI Model Configuration
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Main: ✅ claude-3-5-sonnet (configured)
Research: ✅ perplexity-sonar (configured)
Fallback: ⚠️ Not configured (optional)
Available Models:
- claude-3-5-sonnet
- gpt-4-turbo
- gpt-3.5-turbo
- perplexity-sonar
```
## Next Actions
Based on configuration:
- If missing API keys → Suggest setup
- If no research model → Explain benefits
- If all configured → Show usage tips

---
Intelligently determine and prepare the next action based on comprehensive context.
This enhanced version of 'next' considers:
- Current task states
- Recent activity
- Time constraints
- Dependencies
- Your working patterns
Arguments: $ARGUMENTS
## Intelligent Next Action
### 1. **Context Gathering**
Let me analyze the current situation:
- Active tasks (in-progress)
- Recently completed tasks
- Blocked tasks
- Time since last activity
- Arguments provided: $ARGUMENTS
### 2. **Smart Decision Tree**
**If you have an in-progress task:**
- Has it been idle > 2 hours? → Suggest resuming or switching
- Near completion? → Show remaining steps
- Blocked? → Find alternative task
**If no in-progress tasks:**
- Unblocked high-priority tasks? → Start highest
- Complex tasks need breakdown? → Suggest expansion
- All tasks blocked? → Show dependency resolution
**Special arguments handling:**
- "quick" → Find task < 2 hours
- "easy" → Find low complexity task
- "important" → Find high priority regardless of complexity
- "continue" → Resume last worked task
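The decision tree reduces to a small selection function. A sketch, with illustrative field names (`complexity`, `estimate_hours`) that your `tasks.json` may or may not carry:

```python
def pick_next(tasks, arguments=""):
    """Pick the next task following the decision tree above."""
    in_progress = [t for t in tasks if t.get("status") == "in-progress"]
    if in_progress:
        return in_progress[0]                     # resume active work first

    done = {t["id"] for t in tasks if t.get("status") == "done"}
    ready = [t for t in tasks
             if t.get("status") == "pending"
             and all(d in done for d in t.get("dependencies", []))]
    if "easy" in arguments:
        ready.sort(key=lambda t: t.get("complexity", 50))
    elif "quick" in arguments:
        ready = [t for t in ready if t.get("estimate_hours", 99) < 2]
    else:  # default: highest priority among unblocked tasks
        order = {"high": 0, "medium": 1, "low": 2}
        ready.sort(key=lambda t: order.get(t.get("priority", "medium"), 1))
    return ready[0] if ready else None
```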
### 3. **Preparation Workflow**
Based on selected task:
1. Show full context and history
2. Set up development environment
3. Run relevant tests
4. Open related files
5. Show similar completed tasks
6. Estimate completion time
### 4. **Alternative Suggestions**
Always provide options:
- Primary recommendation
- Quick alternative (< 1 hour)
- Strategic option (unblocks most tasks)
- Learning option (new technology/skill)
### 5. **Workflow Integration**
Seamlessly connect to:
- `/project:tm/set-status/to-in-progress [selected]`
- `/project:tm/workflows/auto-implement`
- `/project:tm/expand` (if complex)
- `/project:tm/utils/complexity-report` (if unsure)
The goal: Zero friction from decision to implementation.

---
Parse PRD with enhanced research mode for better task generation.
Arguments: $ARGUMENTS (PRD file path)
## Research-Enhanced Parsing
Uses the research AI provider (typically Perplexity) for more comprehensive task generation with current best practices.
## Execution
```bash
task-master parse-prd --input=$ARGUMENTS --research
```
## Research Benefits
1. **Current Best Practices**
- Latest framework patterns
- Security considerations
- Performance optimizations
- Accessibility requirements
2. **Technical Deep Dive**
- Implementation approaches
- Library recommendations
- Architecture patterns
- Testing strategies
3. **Comprehensive Coverage**
- Edge cases consideration
- Error handling tasks
- Monitoring setup
- Deployment tasks
## Enhanced Output
Research mode typically:
- Generates more detailed tasks
- Includes industry standards
- Adds compliance considerations
- Suggests modern tooling
## When to Use
- New technology domains
- Complex requirements
- Regulatory compliance needed
- Best practices crucial

---
Parse a PRD document to generate tasks.
Arguments: $ARGUMENTS (PRD file path)
## Intelligent PRD Parsing
Analyzes your requirements document and generates a complete task breakdown.
## Execution
```bash
task-master parse-prd --input=$ARGUMENTS
```
## Parsing Process
1. **Document Analysis**
- Extract key requirements
- Identify technical components
- Detect dependencies
- Estimate complexity
2. **Task Generation**
- Create 10-15 tasks by default
- Include implementation tasks
- Add testing tasks
- Include documentation tasks
- Set logical dependencies
3. **Smart Enhancements**
- Group related functionality
- Set appropriate priorities
- Add acceptance criteria
- Include test strategies
## Options
Parse arguments for modifiers:
- Number after filename → `--num-tasks`
- `research` → Use research mode
- `comprehensive` → Generate more tasks
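The modifier parsing can be sketched as a translation from the argument string to CLI flags. Only `--input`, `--num-tasks`, and `--research` are assumed here, since those are the flags this guide mentions; check `task-master parse-prd --help` for the real flag set:

```python
def build_parse_prd_command(arguments):
    """Translate slash-command arguments into a task-master invocation.
    First word is the PRD path; a bare number becomes --num-tasks."""
    words = arguments.split()
    cmd = ["task-master", "parse-prd", f"--input={words[0]}"]
    for word in words[1:]:
        if word.isdigit():
            cmd.append(f"--num-tasks={word}")
        elif word == "research":
            cmd.append("--research")
    return cmd
```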
## Post-Generation
After parsing:
1. Display task summary
2. Show dependency graph
3. Suggest task expansion for complex items
4. Recommend sprint planning

---
Remove a dependency between tasks.
Arguments: $ARGUMENTS
Parse the task IDs to remove dependency relationship.
## Removing Dependencies
Removes a dependency relationship, potentially unblocking tasks.
## Argument Parsing
Parse natural language or IDs:
- "remove dependency between 5 and 3"
- "5 no longer needs 3"
- "unblock 5 from 3"
- "5 3" → remove dependency of 5 on 3
## Execution
```bash
task-master remove-dependency --id=<task-id> --depends-on=<dependency-id>
```
## Pre-Removal Checks
1. **Verify dependency exists**
2. **Check impact on task flow**
3. **Warn if it breaks logical sequence**
4. **Show what will be unblocked**
## Smart Analysis
Before removing:
- Show why dependency might have existed
- Check if removal makes tasks executable
- Verify no critical path disruption
- Suggest alternative dependencies
## Post-Removal
After removing:
1. Show updated task status
2. List newly unblocked tasks
3. Update project timeline
4. Suggest next actions
## Safety Features
- Confirm if removing critical dependency
- Show tasks that become immediately actionable
- Warn about potential issues
- Keep removal history
## Example
```
/project:tm/remove-dependency 5 from 3
→ Removed: Task #5 no longer depends on #3
→ Task #5 is now UNBLOCKED and ready to start
→ Warning: Consider if #5 still needs #2 completed first
```

---
Remove a subtask from its parent task.
Arguments: $ARGUMENTS
Parse subtask ID to remove, with option to convert to standalone task.
## Removing Subtasks
Remove a subtask and optionally convert it back to a standalone task.
## Argument Parsing
- "remove subtask 5.1"
- "delete 5.1"
- "convert 5.1 to task" → remove and convert
- "5.1 standalone" → convert to standalone
## Execution Options
### 1. Delete Subtask
```bash
task-master remove-subtask --id=<parentId.subtaskId>
```
### 2. Convert to Standalone
```bash
task-master remove-subtask --id=<parentId.subtaskId> --convert
```
## Pre-Removal Checks
1. **Validate Subtask**
- Verify subtask exists
- Check completion status
- Review dependencies
2. **Impact Analysis**
- Other subtasks that depend on it
- Parent task implications
- Data that will be lost
## Removal Process
### For Deletion:
1. Confirm if subtask has work done
2. Update parent task estimates
3. Remove subtask and its data
4. Clean up dependencies
### For Conversion:
1. Assign new standalone task ID
2. Preserve all task data
3. Update dependency references
4. Maintain task history
## Smart Features
- Warn if subtask is in-progress
- Show impact on parent task
- Preserve important data
- Update related estimates
## Example Flows
```
/project:tm/remove-subtask 5.1
→ Warning: Subtask #5.1 is in-progress
→ This will delete all subtask data
→ Parent task #5 will be updated
Confirm deletion? (y/n)
/project:tm/remove-subtask 5.1 convert
→ Converting subtask #5.1 to standalone task #89
→ Preserved: All task data and history
→ Updated: 2 dependency references
→ New task #89 is now independent
```
## Post-Removal
- Update parent task status
- Recalculate estimates
- Show updated hierarchy
- Suggest next actions

---
Remove a task permanently from the project.
Arguments: $ARGUMENTS (task ID)
Delete a task and handle all its relationships properly.
## Task Removal
Permanently removes a task while maintaining project integrity.
## Argument Parsing
- "remove task 5"
- "delete 5"
- "5" → remove task 5
- Can include "-y" for auto-confirm
## Execution
```bash
task-master remove-task --id=<id> [-y]
```
## Pre-Removal Analysis
1. **Task Details**
- Current status
- Work completed
- Time invested
- Associated data
2. **Relationship Check**
- Tasks that depend on this
- Dependencies this task has
- Subtasks that will be removed
- Blocking implications
3. **Impact Assessment**
```
Task Removal Impact
━━━━━━━━━━━━━━━━━━
Task: #5 "Implement authentication" (in-progress)
Status: 60% complete (~8 hours work)
Will affect:
- 3 tasks depend on this (will be blocked)
- Has 4 subtasks (will be deleted)
- Part of critical path
⚠️ This action cannot be undone
```
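The relationship check boils down to scanning for tasks that list the target as a dependency. An illustrative helper, assuming the `id`/`dependencies`/`subtasks` task shape used throughout:

```python
def removal_impact(tasks, target_id):
    """Summarize what removing a task would affect: direct dependents
    (left with a missing dependency) and subtasks (deleted with it)."""
    target = next(t for t in tasks if t["id"] == target_id)
    dependents = [t["id"] for t in tasks
                  if target_id in t.get("dependencies", [])]
    return {
        "dependents": dependents,
        "subtasks": len(target.get("subtasks", [])),
    }
```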
## Smart Warnings
- Warn if task is in-progress
- Show dependent tasks that will be blocked
- Highlight if part of critical path
- Note any completed work being lost
## Removal Process
1. Show comprehensive impact
2. Require confirmation (unless -y)
3. Update dependent task references
4. Remove task and subtasks
5. Clean up orphaned dependencies
6. Log removal with timestamp
## Alternative Actions
Suggest before deletion:
- Mark as cancelled instead
- Convert to documentation
- Archive task data
- Transfer work to another task
## Post-Removal
- List affected tasks
- Show broken dependencies
- Update project statistics
- Suggest dependency fixes
- Recalculate timeline
## Example Flows
```
/project:tm/remove-task 5
→ Task #5 is in-progress with 8 hours logged
→ 3 other tasks depend on this
→ Suggestion: Mark as cancelled instead?
Remove anyway? (y/n)
/project:tm/remove-task 5 -y
→ Removed: Task #5 and 4 subtasks
→ Updated: 3 task dependencies
→ Warning: Tasks #7, #8, #9 now have missing dependency
→ Run /project:tm/fix-dependencies to resolve
```
## Safety Features
- Confirmation required
- Impact preview
- Removal logging
- Suggest alternatives
- No cascade delete of dependents

---
Cancel a task permanently.
Arguments: $ARGUMENTS (task ID)
## Cancelling a Task
This status indicates a task is no longer needed and won't be completed.
## Valid Reasons for Cancellation
- Requirements changed
- Feature deprecated
- Duplicate of another task
- Strategic pivot
- Technical approach invalidated
## Pre-Cancellation Checks
1. Confirm no critical dependencies
2. Check for partial implementation
3. Verify cancellation rationale
4. Document lessons learned
## Execution
```bash
task-master set-status --id=$ARGUMENTS --status=cancelled
```
## Cancellation Impact
When cancelling:
1. **Dependency Updates**
- Notify dependent tasks
- Update project scope
- Recalculate timelines
2. **Clean-up Actions**
- Remove related branches
- Archive any work done
- Update documentation
- Close related issues
3. **Learning Capture**
- Document why cancelled
- Note what was learned
- Update estimation models
- Prevent future duplicates
## Historical Preservation
- Keep for reference
- Tag with cancellation reason
- Link to replacement if any
- Maintain audit trail

---
Defer a task for later consideration.
Arguments: $ARGUMENTS (task ID)
## Deferring a Task
This status indicates a task is valid but not currently actionable or prioritized.
## Valid Reasons for Deferral
- Waiting for external dependencies
- Reprioritized for future sprint
- Blocked by technical limitations
- Resource constraints
- Strategic timing considerations
## Execution
```bash
task-master set-status --id=$ARGUMENTS --status=deferred
```
## Deferral Management
When deferring:
1. **Document Reason**
- Capture why it's being deferred
- Set reactivation criteria
- Note any partial work completed
2. **Impact Analysis**
- Check dependent tasks
- Update project timeline
- Notify affected stakeholders
3. **Future Planning**
- Set review reminders
- Tag for specific milestone
- Preserve context for reactivation
- Link to blocking issues
## Smart Tracking
- Monitor deferral duration
- Alert when criteria met
- Prevent scope creep
- Regular review cycles

---
Mark a task as completed.
Arguments: $ARGUMENTS (task ID)
## Completing a Task
This command validates task completion and updates project state intelligently.
## Pre-Completion Checks
1. Verify test strategy was followed
2. Check if all subtasks are complete
3. Validate acceptance criteria met
4. Ensure code is committed
## Execution
```bash
task-master set-status --id=$ARGUMENTS --status=done
```
## Post-Completion Actions
1. **Update Dependencies**
- Identify newly unblocked tasks
- Update sprint progress
- Recalculate project timeline
2. **Documentation**
- Generate completion summary
- Update CLAUDE.md with learnings
- Log implementation approach
3. **Next Steps**
- Show newly available tasks
- Suggest logical next task
- Update velocity metrics
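Identifying newly unblocked tasks is a simple dependency scan; a sketch, assuming the `id`/`status`/`dependencies` task shape used throughout:

```python
def newly_unblocked(tasks, completed_id):
    """Pending tasks whose last unmet dependency was the task just
    completed - they become actionable immediately."""
    done = {t["id"] for t in tasks if t.get("status") == "done"}
    done.add(completed_id)
    return [
        t["id"] for t in tasks
        if t.get("status") == "pending"
        and completed_id in t.get("dependencies", [])
        and all(d in done for d in t.get("dependencies", []))
    ]
```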
## Celebration & Learning
- Show impact of completion
- Display unblocked work
- Recognize achievement
- Capture lessons learned

---
Start working on a task by setting its status to in-progress.
Arguments: $ARGUMENTS (task ID)
## Starting Work on Task
This command does more than just change status - it prepares your environment for productive work.
## Pre-Start Checks
1. Verify dependencies are met
2. Check if another task is already in-progress
3. Ensure task details are complete
4. Validate test strategy exists
## Execution
```bash
task-master set-status --id=$ARGUMENTS --status=in-progress
```
## Environment Setup
After setting to in-progress:
1. Create/checkout appropriate git branch
2. Open relevant documentation
3. Set up test watchers if applicable
4. Display task details and acceptance criteria
5. Show similar completed tasks for reference
## Smart Suggestions
- Estimated completion time based on complexity
- Related files from similar tasks
- Potential blockers to watch for
- Recommended first steps

---
Set a task's status to pending.
Arguments: $ARGUMENTS (task ID)
## Setting Task to Pending
This moves a task back to the pending state, useful for:
- Resetting erroneously started tasks
- Deferring work that was prematurely begun
- Reorganizing sprint priorities
## Execution
```bash
task-master set-status --id=$ARGUMENTS --status=pending
```
## Validation
Before setting to pending:
- Warn if task is currently in-progress
- Check if this will block other tasks
- Suggest documenting why it's being reset
- Preserve any work already done
## Smart Actions
After setting to pending:
- Update sprint planning if needed
- Notify about freed resources
- Suggest priority reassessment
- Log the status change with context

---
Set a task's status to review.
Arguments: $ARGUMENTS (task ID)
## Marking Task for Review
This status indicates work is complete but needs verification before final approval.
## When to Use Review Status
- Code complete but needs peer review
- Implementation done but needs testing
- Documentation written but needs proofreading
- Design complete but needs stakeholder approval
## Execution
```bash
task-master set-status --id=$ARGUMENTS --status=review
```
## Review Preparation
When setting to review:
1. **Generate Review Checklist**
- Link to PR/MR if applicable
- Highlight key changes
- Note areas needing attention
- Include test results
2. **Documentation**
- Update task with review notes
- Link relevant artifacts
- Specify reviewers if known
3. **Smart Actions**
- Create review reminders
- Track review duration
- Suggest reviewers based on expertise
- Prepare rollback plan if needed

---
Check if Task Master is installed and install it if needed.
This command helps you get Task Master set up globally on your system.
## Detection and Installation Process
1. **Check Current Installation**
```bash
# Check if task-master command exists
which task-master || echo "Task Master not found"
# Check npm global packages
npm list -g task-master-ai
```
2. **System Requirements Check**
```bash
# Verify Node.js is installed
node --version
# Verify npm is installed
npm --version
# Check Node version (need 18+, matching the troubleshooting steps below)
```
3. **Install Task Master Globally**
If not installed, run:
```bash
npm install -g task-master-ai
```
4. **Verify Installation**
```bash
# Check version
task-master --version
# Verify command is available
which task-master
```
5. **Initial Setup**
```bash
# Initialize in current directory
task-master init
```
6. **Configure AI Provider**
Ensure you have at least one AI provider API key set:
```bash
# Check current configuration
task-master models --status
# If no API keys found, guide setup
echo "You'll need at least one API key:"
echo "- ANTHROPIC_API_KEY for Claude"
echo "- OPENAI_API_KEY for GPT models"
echo "- PERPLEXITY_API_KEY for research"
echo ""
echo "Set them in your shell profile or .env file"
```
7. **Quick Test**
```bash
# Create a test PRD
echo "Build a simple hello world API" > test-prd.txt
# Try parsing it
task-master parse-prd test-prd.txt -n 3
```
## Troubleshooting
If installation fails:
**Permission Errors:**
```bash
# Try with sudo (macOS/Linux)
sudo npm install -g task-master-ai
# Or fix npm permissions
npm config set prefix ~/.npm-global
export PATH=~/.npm-global/bin:$PATH
```
**Network Issues:**
```bash
# Use different registry
npm install -g task-master-ai --registry https://registry.npmjs.org/
```
**Node Version Issues:**
```bash
# Install Node 18+ via nvm
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
nvm install 18
nvm use 18
```
## Success Confirmation
Once installed, you should see:
```
✅ Task Master v0.16.2 (or higher) installed
✅ Command 'task-master' available globally
✅ AI provider configured
✅ Ready to use slash commands!
Try: /project:tm/init your-prd.md
```
## Next Steps
After installation:
1. Run `/project:tm/utils/check-health` to verify setup
2. Configure AI providers with `/project:tm/models`
3. Start using Task Master commands!

---
Quick install Task Master globally if not already installed.
Execute this streamlined installation:
```bash
# Check and install in one command
task-master --version 2>/dev/null || npm install -g task-master-ai
# Verify installation
task-master --version
# Quick setup check
task-master models --status || echo "Note: You'll need to set up an AI provider API key"
```
If you see "command not found" after installation, you may need to:
1. Restart your terminal
2. Or add the npm global bin directory to PATH: `export PATH="$(npm config get prefix)/bin:$PATH"` (the older `npm bin -g` command was removed in npm 9)
Once installed, you can use all the Task Master commands!
Quick test: Run `/project:help` to see all available commands.


@@ -0,0 +1,82 @@
Show detailed task information with rich context and insights.
Arguments: $ARGUMENTS
## Enhanced Task Display
Parse arguments to determine what to show and how.
### 1. **Smart Task Selection**
Based on $ARGUMENTS:
- Number → Show specific task with full context
- "current" → Show active in-progress task(s)
- "next" → Show recommended next task
- "blocked" → Show all blocked tasks with reasons
- "critical" → Show critical path tasks
- Multiple IDs → Comparative view
### 2. **Contextual Information**
For each task, intelligently include:
**Core Details**
- Full task information (id, title, description, details)
- Current status with history
- Test strategy and acceptance criteria
- Priority and complexity analysis
**Relationships**
- Dependencies (what it needs)
- Dependents (what needs it)
- Parent/subtask hierarchy
- Related tasks (similar work)
**Time Intelligence**
- Created/updated timestamps
- Time in current status
- Estimated vs actual time
- Historical completion patterns
### 3. **Visual Enhancements**
```
📋 Task #45: Implement User Authentication
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Status: 🟡 in-progress (2 hours)
Priority: 🔴 High | Complexity: 73/100
Dependencies: ✅ #41, ✅ #42, ⏳ #43 (blocked)
Blocks: #46, #47, #52
Progress: ████████░░ 80% complete
Recent Activity:
- 2h ago: Status changed to in-progress
- 4h ago: Dependency #42 completed
- Yesterday: Task expanded with 3 subtasks
```
### 4. **Intelligent Insights**
Based on task analysis:
- **Risk Assessment**: Complexity vs time remaining
- **Bottleneck Analysis**: Is this blocking critical work?
- **Recommendation**: Suggested approach or concerns
- **Similar Tasks**: How others completed similar work
### 5. **Action Suggestions**
Context-aware next steps:
- If blocked → Show how to unblock
- If complex → Suggest expansion
- If in-progress → Show completion checklist
- If done → Show dependent tasks ready to start
### 6. **Multi-Task View**
When showing multiple tasks:
- Common dependencies
- Optimal completion order
- Parallel work opportunities
- Combined complexity analysis


@@ -0,0 +1,64 @@
Enhanced status command with comprehensive project insights.
Arguments: $ARGUMENTS
## Intelligent Status Overview
### 1. **Executive Summary**
Quick dashboard view:
- 🏃 Active work (in-progress tasks)
- 📊 Progress metrics (% complete, velocity)
- 🚧 Blockers and risks
- ⏱️ Time analysis (estimated vs actual)
- 🎯 Sprint/milestone progress
### 2. **Contextual Analysis**
Based on $ARGUMENTS, focus on:
- "sprint" → Current sprint progress and burndown
- "blocked" → Dependency chains and resolution paths
- "team" → Task distribution and workload
- "timeline" → Schedule adherence and projections
- "risk" → High complexity or overdue items
### 3. **Smart Insights**
**Workflow Health:**
- Idle tasks (in-progress > 24h without updates)
- Bottlenecks (multiple tasks waiting on same dependency)
- Quick wins (low complexity, high impact)
**Predictive Analytics:**
- Completion projections based on velocity
- Risk of missing deadlines
- Recommended task order for optimal flow
### 4. **Visual Intelligence**
Dynamic visualization based on data:
```
Sprint Progress: ████████░░ 80% (16/20 tasks)
Velocity Trend: ↗️ +15% this week
Blocked Tasks: 🔴 3 critical path items
Priority Distribution:
High: ████████ 8 tasks (2 blocked)
Medium: ████░░░░ 4 tasks
Low: ██░░░░░░ 2 tasks
```
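Bars like those above can be rendered with a small shell helper. This is an illustrative sketch, not Task Master output; the 10-segment width is an assumption chosen to match the examples:

```shell
# Render a 10-segment progress bar for a percentage (0-100).
bar() {
  pct=$1
  filled=$(( pct / 10 ))
  empty=$(( 10 - filled ))
  out=""
  i=0
  while [ "$i" -lt "$filled" ]; do out="${out}█"; i=$(( i + 1 )); done
  i=0
  while [ "$i" -lt "$empty" ];  do out="${out}░"; i=$(( i + 1 )); done
  printf '%s %d%%\n' "$out" "$pct"
}

bar 80   # → ████████░░ 80%
```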
### 5. **Actionable Recommendations**
Based on analysis:
1. **Immediate actions** (unblock critical path)
2. **Today's focus** (optimal task sequence)
3. **Process improvements** (recurring patterns)
4. **Resource needs** (skills, time, dependencies)
### 6. **Historical Context**
Compare to previous periods:
- Velocity changes
- Pattern recognition
- Improvement areas
- Success patterns to repeat


@@ -0,0 +1,117 @@
Export tasks to README.md with professional formatting.
Arguments: $ARGUMENTS
Generate a well-formatted README with current task information.
## README Synchronization
Creates or updates README.md with beautifully formatted task information.
## Argument Parsing
Optional filters:
- "pending" → Only pending tasks
- "with-subtasks" → Include subtask details
- "by-priority" → Group by priority
- "sprint" → Current sprint only
## Execution
```bash
task-master sync-readme [--with-subtasks] [--status=<status>]
```
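One way to map the optional filters onto the flags above, sketched in shell. Only `--with-subtasks` and `--status` come from the execution line; the mapping itself is illustrative, since the command interprets arguments with AI rather than fixed rules:

```shell
# Translate natural-language filter words into sync-readme flags.
ARGUMENTS="pending with-subtasks"
FLAGS=""
case "$ARGUMENTS" in *with-subtasks*) FLAGS="$FLAGS --with-subtasks" ;; esac
case "$ARGUMENTS" in *pending*)       FLAGS="$FLAGS --status=pending" ;; esac
echo "task-master sync-readme$FLAGS"
# → task-master sync-readme --with-subtasks --status=pending
```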
## README Generation
### 1. **Project Header**
```markdown
# Project Name
## 📋 Task Progress
Last Updated: 2024-01-15 10:30 AM
### Summary
- Total Tasks: 45
- Completed: 15 (33%)
- In Progress: 5 (11%)
- Pending: 25 (56%)
```
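The summary counts can be derived from `tasks.json` with ordinary shell tools. The sketch below assumes the file keeps one task per line with a `"status"` field (illustrative sample data; a real implementation should use a JSON parser such as jq):

```shell
# Sample tasks.json in the assumed shape.
cat > /tmp/tasks-sample.json <<'EOF'
{"tasks": [
  {"id": 1, "status": "done"},
  {"id": 2, "status": "pending"},
  {"id": 3, "status": "done"}
]}
EOF
# Line-based counting only works for this one-task-per-line layout.
TOTAL=$(grep -c '"status"' /tmp/tasks-sample.json)
DONE=$(grep -c '"status": "done"' /tmp/tasks-sample.json)
echo "Total Tasks: $TOTAL | Completed: $DONE ($(( DONE * 100 / TOTAL ))%)"
# → Total Tasks: 3 | Completed: 2 (66%)
```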
### 2. **Task Sections**
Organized by status or priority:
- Progress indicators
- Task descriptions
- Dependencies noted
- Time estimates
### 3. **Visual Elements**
- Progress bars
- Status badges
- Priority indicators
- Completion checkmarks
## Smart Features
1. **Intelligent Grouping**
- By feature area
- By sprint/milestone
- By assigned developer
- By priority
2. **Progress Tracking**
- Overall completion
- Sprint velocity
- Burndown indication
- Time tracking
3. **Formatting Options**
- GitHub-flavored markdown
- Task checkboxes
- Collapsible sections
- Table format available
## Example Output
```markdown
## 🚀 Current Sprint
### In Progress
- [ ] 🔄 #5 **Implement user authentication** (60% complete)
- Dependencies: API design (#3 ✅)
- Subtasks: 4 (2 completed)
- Est: 8h / Spent: 5h
### Pending (High Priority)
- [ ] #8 **Create dashboard UI**
- Blocked by: #5
- Complexity: High
- Est: 12h
```
## Customization
Based on arguments:
- Include/exclude sections
- Detail level control
- Custom grouping
- Filter by criteria
## Post-Sync
After generation:
1. Show diff preview
2. Backup existing README
3. Write new content
4. Commit reminder
5. Update timestamp
## Integration
Works well with:
- Git workflows
- CI/CD pipelines
- Project documentation
- Team updates
- Client reports


@@ -0,0 +1,146 @@
# Task Master Command Reference
Comprehensive command structure for Task Master integration with Claude Code.
## Command Organization
Commands are organized hierarchically to match Task Master's CLI structure while providing enhanced Claude Code integration.
## Project Setup & Configuration
### `/project:tm/init`
- `init-project` - Initialize new project (handles PRD files intelligently)
- `init-project-quick` - Quick setup with auto-confirmation (-y flag)
### `/project:tm/models`
- `view-models` - View current AI model configuration
- `setup-models` - Interactive model configuration
- `set-main` - Set primary generation model
- `set-research` - Set research model
- `set-fallback` - Set fallback model
## Task Generation
### `/project:tm/parse-prd`
- `parse-prd` - Generate tasks from PRD document
- `parse-prd-with-research` - Enhanced parsing with research mode
### `/project:tm/generate`
- `generate-tasks` - Create individual task files from tasks.json
## Task Management
### `/project:tm/list`
- `list-tasks` - Smart listing with natural language filters
- `list-tasks-with-subtasks` - Include subtasks in hierarchical view
- `list-tasks-by-status` - Filter by specific status
### `/project:tm/set-status`
- `to-pending` - Reset task to pending
- `to-in-progress` - Start working on task
- `to-done` - Mark task complete
- `to-review` - Submit for review
- `to-deferred` - Defer task
- `to-cancelled` - Cancel task
### `/project:tm/sync-readme`
- `sync-readme` - Export tasks to README.md with formatting
### `/project:tm/update`
- `update-task` - Update tasks with natural language
- `update-tasks-from-id` - Update multiple tasks from a starting point
- `update-single-task` - Update specific task
### `/project:tm/add-task`
- `add-task` - Add new task with AI assistance
### `/project:tm/remove-task`
- `remove-task` - Remove task with confirmation
## Subtask Management
### `/project:tm/add-subtask`
- `add-subtask` - Add new subtask to parent
- `convert-task-to-subtask` - Convert existing task to subtask
### `/project:tm/remove-subtask`
- `remove-subtask` - Remove subtask (with optional conversion)
### `/project:tm/clear-subtasks`
- `clear-subtasks` - Clear subtasks from specific task
- `clear-all-subtasks` - Clear all subtasks globally
## Task Analysis & Breakdown
### `/project:tm/analyze-complexity`
- `analyze-complexity` - Analyze and generate expansion recommendations
### `/project:tm/complexity-report`
- `complexity-report` - Display complexity analysis report
### `/project:tm/expand`
- `expand-task` - Break down specific task
- `expand-all-tasks` - Expand all eligible tasks
- `with-research` - Enhanced expansion
## Task Navigation
### `/project:tm/next`
- `next-task` - Intelligent next task recommendation
### `/project:tm/show`
- `show-task` - Display detailed task information
### `/project:tm/status`
- `project-status` - Comprehensive project dashboard
## Dependency Management
### `/project:tm/add-dependency`
- `add-dependency` - Add task dependency
### `/project:tm/remove-dependency`
- `remove-dependency` - Remove task dependency
### `/project:tm/validate-dependencies`
- `validate-dependencies` - Check for dependency issues
### `/project:tm/fix-dependencies`
- `fix-dependencies` - Automatically fix dependency problems
## Workflows & Automation
### `/project:tm/workflows`
- `smart-workflow` - Context-aware intelligent workflow execution
- `command-pipeline` - Chain multiple commands together
- `auto-implement-tasks` - Advanced auto-implementation with code generation
## Utilities
### `/project:tm/utils`
- `analyze-project` - Deep project analysis and insights
### `/project:tm/setup`
- `install-taskmaster` - Comprehensive installation guide
- `quick-install-taskmaster` - One-line global installation
## Usage Patterns
### Natural Language
Most commands accept natural language arguments:
```
/project:tm/add-task create user authentication system
/project:tm/update mark all API tasks as high priority
/project:tm/list show blocked tasks
```
### ID-Based Commands
Commands requiring IDs intelligently parse from $ARGUMENTS:
```
/project:tm/show 45
/project:tm/expand 23
/project:tm/set-status/to-done 67
```
### Smart Defaults
Commands provide intelligent defaults and suggestions based on context.


@@ -0,0 +1,119 @@
Update a single specific task with new information.
Arguments: $ARGUMENTS
Parse task ID and update details.
## Single Task Update
Precisely update one task with AI assistance to maintain consistency.
## Argument Parsing
Natural language updates:
- "5: add caching requirement"
- "update 5 to include error handling"
- "task 5 needs rate limiting"
- "5 change priority to high"
## Execution
```bash
task-master update-task --id=<id> --prompt="<context>"
```
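The "5: add caching requirement" form can be split into the `--id` and `--prompt` values with standard tools. A minimal sketch; the real command understands looser phrasings via AI:

```shell
# Parse "<id>: <context>" into an ID and a prompt.
ARGS="5: add caching requirement"
ID=$(printf '%s' "$ARGS" | grep -oE '^[0-9]+')
PROMPT=$(printf '%s' "$ARGS" | sed 's/^[0-9]*:[[:space:]]*//')
echo "task-master update-task --id=$ID --prompt=\"$PROMPT\""
# → task-master update-task --id=5 --prompt="add caching requirement"
```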
## Update Types
### 1. **Content Updates**
- Enhance description
- Add requirements
- Clarify details
- Update acceptance criteria
### 2. **Metadata Updates**
- Change priority
- Adjust time estimates
- Update complexity
- Modify dependencies
### 3. **Strategic Updates**
- Revise approach
- Change test strategy
- Update implementation notes
- Adjust subtask needs
## AI-Powered Updates
The AI:
1. **Understands Context**
- Reads current task state
- Identifies update intent
- Maintains consistency
- Preserves important info
2. **Applies Changes**
- Updates relevant fields
- Keeps style consistent
- Adds without removing
- Enhances clarity
3. **Validates Results**
- Checks coherence
- Verifies completeness
- Maintains relationships
- Suggests related updates
## Example Updates
```
/project:tm/update/single 5: add rate limiting
→ Updating Task #5: "Implement API endpoints"
Current: Basic CRUD endpoints
Adding: Rate limiting requirements
Updated sections:
✓ Description: Added rate limiting mention
✓ Details: Added specific limits (100/min)
✓ Test Strategy: Added rate limit tests
✓ Complexity: Increased from 5 to 6
✓ Time Estimate: Increased by 2 hours
Suggestion: Also update task #6 (API Gateway) for consistency?
```
## Smart Features
1. **Incremental Updates**
- Adds without overwriting
- Preserves work history
- Tracks what changed
- Shows diff view
2. **Consistency Checks**
- Related task alignment
- Subtask compatibility
- Dependency validity
- Timeline impact
3. **Update History**
- Timestamp changes
- Track who/what updated
- Reason for update
- Previous versions
## Field-Specific Updates
Quick syntax for specific fields:
- "5 priority:high" → Update priority only
- "5 add-time:4h" → Add to time estimate
- "5 status:review" → Change status
- "5 depends:3,4" → Add dependencies
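The `<id> <field>:<value>` quick syntax splits cleanly with parameter expansion. Illustrative only; these shortcuts are interpreted by the command, not by the CLI itself:

```shell
# Split "5 priority:high" into id, field key, and value.
ARGS="5 priority:high"
ID=${ARGS%% *}      # text before the first space
FIELD=${ARGS#* }    # text after the first space
KEY=${FIELD%%:*}    # field name before the colon
VAL=${FIELD#*:}     # value after the colon
echo "id=$ID key=$KEY value=$VAL"
# → id=5 key=priority value=high
```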
## Post-Update
- Show updated task
- Highlight changes
- Check related tasks
- Update suggestions
- Timeline adjustments


@@ -0,0 +1,72 @@
Update tasks with intelligent field detection and bulk operations.
Arguments: $ARGUMENTS
## Intelligent Task Updates
Parse arguments to determine update intent and execute smartly.
### 1. **Natural Language Processing**
Understand update requests like:
- "mark 23 as done" → Update status to done
- "increase priority of 45" → Set priority to high
- "add dependency on 12 to task 34" → Add dependency
- "tasks 20-25 need review" → Bulk status update
- "all API tasks high priority" → Pattern-based update
### 2. **Smart Field Detection**
Automatically detect what to update:
- Status keywords: done, complete, start, pause, review
- Priority changes: urgent, high, low, deprioritize
- Dependency updates: depends on, blocks, after
- Assignment: assign to, owner, responsible
- Time: estimate, spent, deadline
### 3. **Bulk Operations**
Support for multiple task updates:
```
Examples:
- "complete tasks 12, 15, 18"
- "all pending auth tasks to in-progress"
- "increase priority for tasks blocking 45"
- "defer all documentation tasks"
```
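ID lists like "12, 15, 18" and ranges like "20-25" can be expanded mechanically, as in this sketch (the real command resolves such phrases with AI rather than fixed parsing):

```shell
# Expand a comma-separated list of IDs and ID ranges, one ID per line.
expand_ids() {
  printf '%s\n' "$1" | tr ',' '\n' | while IFS= read -r part; do
    part=$(printf '%s' "$part" | tr -d ' ')
    case "$part" in
      *-*) seq "${part%-*}" "${part#*-}" ;;  # range, e.g. 20-25
      '')  ;;                                 # skip empty segments
      *)   printf '%s\n' "$part" ;;           # single ID
    esac
  done
}

expand_ids "12, 15, 20-22"
# → 12 15 20 21 22 (one per line)
```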
### 4. **Contextual Validation**
Before updating, check:
- Status transitions are valid
- Dependencies don't create cycles
- Priority changes make sense
- Bulk updates won't break project flow
Show preview:
```
Update Preview:
─────────────────
Tasks to update: #23, #24, #25
Change: status → in-progress
Impact: Will unblock tasks #30, #31
Warning: Task #24 has unmet dependencies
```
### 5. **Smart Suggestions**
Based on update:
- Completing task? → Show newly unblocked tasks
- Changing priority? → Show impact on sprint
- Adding dependency? → Check for conflicts
- Bulk update? → Show summary of changes
### 6. **Workflow Integration**
After updates:
- Auto-update dependent task states
- Trigger status recalculation
- Update sprint/milestone progress
- Log changes with context
Result: Flexible, intelligent task updates with safety checks.


@@ -0,0 +1,108 @@
Update multiple tasks starting from a specific ID.
Arguments: $ARGUMENTS
Parse starting task ID and update context.
## Bulk Task Updates
Update multiple related tasks based on new requirements or context changes.
## Argument Parsing
- "from 5: add security requirements"
- "5 onwards: update API endpoints"
- "starting at 5: change to use new framework"
## Execution
```bash
task-master update --from=<id> --prompt="<context>"
```
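Extracting the starting ID and the context from a phrase like "from 5: add security requirements" can be sketched as follows (illustrative; the command accepts looser phrasings than this parse handles):

```shell
# Pull the first number as the starting ID, the rest as the prompt.
ARGS="from 5: add security requirements"
FROM=$(printf '%s' "$ARGS" | grep -oE '[0-9]+' | head -n 1)
CONTEXT=${ARGS#*: }
echo "task-master update --from=$FROM --prompt=\"$CONTEXT\""
# → task-master update --from=5 --prompt="add security requirements"
```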
## Update Process
### 1. **Task Selection**
Starting from specified ID:
- Include the task itself
- Include all dependent tasks
- Include related subtasks
- Smart boundary detection
### 2. **Context Application**
AI analyzes the update context and:
- Identifies what needs changing
- Maintains consistency
- Preserves completed work
- Updates related information
### 3. **Intelligent Updates**
- Modify descriptions appropriately
- Update test strategies
- Adjust time estimates
- Revise dependencies if needed
## Smart Features
1. **Scope Detection**
- Find natural task groupings
- Identify related features
- Stop at logical boundaries
- Avoid over-updating
2. **Consistency Maintenance**
- Keep naming conventions
- Preserve relationships
- Update cross-references
- Maintain task flow
3. **Change Preview**
```
Bulk Update Preview
━━━━━━━━━━━━━━━━━━
Starting from: Task #5
Tasks to update: 8 tasks + 12 subtasks
Context: "add security requirements"
Changes will include:
- Add security sections to descriptions
- Update test strategies for security
- Add security-related subtasks where needed
- Adjust time estimates (+20% average)
Continue? (y/n)
```
## Example Updates
```
/project:tm/update/from-id 5: change database to PostgreSQL
→ Analyzing impact starting from task #5
→ Found 6 related tasks to update
→ Updates will maintain consistency
→ Preview changes? (y/n)
Applied updates:
✓ Task #5: Updated connection logic references
✓ Task #6: Changed migration approach
✓ Task #7: Updated query syntax notes
✓ Task #8: Revised testing strategy
✓ Task #9: Updated deployment steps
✓ Task #12: Changed backup procedures
```
## Safety Features
- Preview all changes
- Selective confirmation
- Rollback capability
- Change logging
- Validation checks
## Post-Update
- Summary of changes
- Consistency verification
- Suggest review tasks
- Update timeline if needed


@@ -0,0 +1,97 @@
Advanced project analysis with actionable insights and recommendations.
Arguments: $ARGUMENTS
## Comprehensive Project Analysis
Multi-dimensional analysis based on requested focus area.
### 1. **Analysis Modes**
Based on $ARGUMENTS:
- "velocity" → Sprint velocity and trends
- "quality" → Code quality metrics
- "risk" → Risk assessment and mitigation
- "dependencies" → Dependency graph analysis
- "team" → Workload and skill distribution
- "architecture" → System design coherence
- Default → Full spectrum analysis
### 2. **Velocity Analytics**
```
📊 Velocity Analysis
━━━━━━━━━━━━━━━━━━━
Current Sprint: 24 points/week ↗️ +20%
Rolling Average: 20 points/week
Efficiency: 85% (17/20 tasks on time)
Bottlenecks Detected:
- Code review delays (avg 4h wait)
- Test environment availability
- Dependency on external team
Recommendations:
1. Implement parallel review process
2. Add staging environment
3. Mock external dependencies
```
### 3. **Risk Assessment**
**Technical Risks**
- High complexity tasks without backup assignee
- Single points of failure in architecture
- Insufficient test coverage in critical paths
- Technical debt accumulation rate
**Project Risks**
- Critical path dependencies
- Resource availability gaps
- Deadline feasibility analysis
- Scope creep indicators
### 4. **Dependency Intelligence**
Visual dependency analysis:
```
Critical Path:
#12 → #15 → #23 → #45 → #50 (20 days)
↘ #24 → #46 ↗
Optimization: Parallelize #15 and #24
Time Saved: 3 days
```
### 5. **Quality Metrics**
**Code Quality**
- Test coverage trends
- Complexity scores
- Technical debt ratio
- Review feedback patterns
**Process Quality**
- Rework frequency
- Bug introduction rate
- Time to resolution
- Knowledge distribution
### 6. **Predictive Insights**
Based on patterns:
- Completion probability by deadline
- Resource needs projection
- Risk materialization likelihood
- Suggested interventions
### 7. **Executive Dashboard**
High-level summary with:
- Health score (0-100)
- Top 3 risks
- Top 3 opportunities
- Recommended actions
- Success probability
Result: Data-driven decisions with clear action paths.


@@ -0,0 +1,71 @@
Validate all task dependencies for issues.
## Dependency Validation
Comprehensive check for dependency problems across the entire project.
## Execution
```bash
task-master validate-dependencies
```
## Validation Checks
1. **Circular Dependencies**
- A depends on B, B depends on A
- Complex circular chains
- Self-dependencies
2. **Missing Dependencies**
- References to non-existent tasks
- Deleted task references
- Invalid task IDs
3. **Logical Issues**
- Completed tasks depending on pending
- Cancelled tasks in dependency chains
- Impossible sequences
4. **Complexity Warnings**
- Over-complex dependency chains
- Too many dependencies per task
- Bottleneck tasks
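Circular-dependency detection, the first check above, can be done with `tsort` from GNU coreutils: feed it one "prerequisite task" pair per line and it fails when the graph contains a loop. A sketch with hypothetical task IDs:

```shell
# 1 must precede 2, 2 precedes 3, 3 precedes 1 — a cycle.
printf '%s\n' "1 2" "2 3" "3 1" > /tmp/deps.txt
if ! tsort /tmp/deps.txt >/dev/null 2>&1; then
  echo "circular dependency detected"
fi
# → circular dependency detected
```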
## Smart Analysis
The validation provides:
- Visual dependency graph
- Critical path analysis
- Bottleneck identification
- Suggested optimizations
## Report Format
```
Dependency Validation Report
━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ No circular dependencies found
⚠️ 2 warnings found:
- Task #23 has 7 dependencies (consider breaking down)
- Task #45 blocks 5 other tasks (potential bottleneck)
❌ 1 error found:
- Task #67 depends on deleted task #66
Critical Path: #1 → #5 → #23 → #45 → #50 (15 days)
```
## Actionable Output
For each issue found:
- Clear description
- Impact assessment
- Suggested fix
- Command to resolve
## Next Steps
After validation:
- Run `/project:tm/fix-dependencies` to auto-fix
- Manually adjust problematic dependencies
- Rerun to verify fixes


@@ -0,0 +1,97 @@
Enhanced auto-implementation with intelligent code generation and testing.
Arguments: $ARGUMENTS
## Intelligent Auto-Implementation
Advanced implementation with context awareness and quality checks.
### 1. **Pre-Implementation Analysis**
Before starting:
- Analyze task complexity and requirements
- Check codebase patterns and conventions
- Identify similar completed tasks
- Assess test coverage needs
- Detect potential risks
### 2. **Smart Implementation Strategy**
Based on task type and context:
**Feature Tasks**
1. Research existing patterns
2. Design component architecture
3. Implement with tests
4. Integrate with system
5. Update documentation
**Bug Fix Tasks**
1. Reproduce issue
2. Identify root cause
3. Implement minimal fix
4. Add regression tests
5. Verify side effects
**Refactoring Tasks**
1. Analyze current structure
2. Plan incremental changes
3. Maintain test coverage
4. Refactor step-by-step
5. Verify behavior unchanged
### 3. **Code Intelligence**
**Pattern Recognition**
- Learn from existing code
- Follow team conventions
- Use preferred libraries
- Match style guidelines
**Test-Driven Approach**
- Write tests first when possible
- Ensure comprehensive coverage
- Include edge cases
- Performance considerations
### 4. **Progressive Implementation**
Step-by-step with validation:
```
Step 1/5: Setting up component structure ✓
Step 2/5: Implementing core logic ✓
Step 3/5: Adding error handling ⚡ (in progress)
Step 4/5: Writing tests ⏳
Step 5/5: Integration testing ⏳
Current: Adding try-catch blocks and validation...
```
### 5. **Quality Assurance**
Automated checks:
- Linting and formatting
- Test execution
- Type checking
- Dependency validation
- Performance analysis
### 6. **Smart Recovery**
If issues arise:
- Diagnostic analysis
- Suggestion generation
- Fallback strategies
- Manual intervention points
- Learning from failures
### 7. **Post-Implementation**
After completion:
- Generate PR description
- Update documentation
- Log lessons learned
- Suggest follow-up tasks
- Update task relationships
Result: High-quality, production-ready implementations.


@@ -0,0 +1,77 @@
Execute a pipeline of commands based on a specification.
Arguments: $ARGUMENTS
## Command Pipeline Execution
Parse pipeline specification from arguments. Supported formats:
### Simple Pipeline
`init → expand-all → sprint-plan`
### Conditional Pipeline
`status → if:pending>10 → sprint-plan → else → next`
### Iterative Pipeline
`for:pending-tasks → expand → complexity-check`
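A simple linear spec can be split into its steps with standard tools, as in this sketch; the conditional and iterative forms need a real parser rather than a field split:

```shell
# Split a linear pipeline spec on the " → " separator.
SPEC="init → expand-all → sprint-plan"
printf '%s\n' "$SPEC" | awk -F' → ' '{ for (i = 1; i <= NF; i++) print $i }'
# → prints init, expand-all, sprint-plan on separate lines
```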
### Smart Pipeline Patterns
**1. Project Setup Pipeline**
```
init [prd] →
expand-all →
complexity-report →
sprint-plan →
show first-sprint
```
**2. Daily Work Pipeline**
```
standup →
if:in-progress → continue →
else → next → start
```
**3. Task Completion Pipeline**
```
complete [id] →
git-commit →
if:blocked-tasks-freed → show-freed →
next
```
**4. Quality Check Pipeline**
```
list in-progress →
for:each → check-idle-time →
if:idle>1day → prompt-update
```
### Pipeline Features
**Variables**
- Store results: `status → $count=pending-count`
- Use in conditions: `if:$count>10`
- Pass between commands: `expand $high-priority-tasks`
**Error Handling**
- On failure: `try:complete → catch:show-blockers`
- Skip on error: `optional:test-run`
- Retry logic: `retry:3:commit`
**Parallel Execution**
- Parallel branches: `[analyze | test | lint]`
- Join results: `parallel → join:report`
### Execution Flow
1. Parse pipeline specification
2. Validate command sequence
3. Execute with state passing
4. Handle conditions and loops
5. Aggregate results
6. Show summary
This enables complex workflows like:
`parse-prd → expand-all → filter:complex>70 → assign:senior → sprint-plan:weighted`


@@ -0,0 +1,55 @@
Execute an intelligent workflow based on current project state and recent commands.
This command analyzes:
1. Recent commands you've run
2. Current project state
3. Time of day / day of week
4. Your working patterns
Arguments: $ARGUMENTS
## Intelligent Workflow Selection
Based on context, I'll determine the best workflow:
### Context Analysis
- Previous command executed
- Current task states
- Unfinished work from last session
- Your typical patterns
### Smart Execution
If last command was:
- `status` → Likely starting work → Run daily standup
- `complete` → Task finished → Find next task
- `list pending` → Planning → Suggest sprint planning
- `expand` → Breaking down work → Show complexity analysis
- `init` → New project → Show onboarding workflow
If no recent commands:
- Morning? → Daily standup workflow
- Many pending tasks? → Sprint planning
- Tasks blocked? → Dependency resolution
- Friday? → Weekly review
### Workflow Composition
I'll chain appropriate commands:
1. Analyze current state
2. Execute primary workflow
3. Suggest follow-up actions
4. Prepare environment for coding
### Learning Mode
This command learns from your patterns:
- Track command sequences
- Note time preferences
- Remember common workflows
- Adapt to your style
Example flows detected:
- Morning: standup → next → start
- After lunch: status → continue task
- End of day: complete → commit → status


@@ -0,0 +1,144 @@
# Version Helper Commands
Utility commands for version comparison and analysis in documentation sync workflows.
## Git Commands for Version Analysis
### Find Previous Version Tag
```bash
# Get all version tags sorted by version number
CURRENT_TAG="v1.1.4"
PREVIOUS_TAG=$(git tag --sort=version:refname | grep -E '^v[0-9]+\.[0-9]+\.[0-9]+$' | grep -Fxv "$CURRENT_TAG" | tail -1)
echo "Comparing $PREVIOUS_TAG → $CURRENT_TAG"
```
### Compare Versions
```bash
# Get changed files between versions
git diff --name-only $PREVIOUS_TAG..$CURRENT_TAG
# Focus on framework files
git diff --name-only $PREVIOUS_TAG..$CURRENT_TAG -- addons/card-framework/
# Get commit messages for changelog
git log --oneline $PREVIOUS_TAG..$CURRENT_TAG
# Get detailed changes for specific files
git diff $PREVIOUS_TAG..$CURRENT_TAG -- addons/card-framework/
```
### Change Categorization
```bash
# Categorize commits by conventional commit types
git log --oneline $PREVIOUS_TAG..$CURRENT_TAG | grep -E '^[a-f0-9]+ feat:' # New features
git log --oneline $PREVIOUS_TAG..$CURRENT_TAG | grep -E '^[a-f0-9]+ fix:' # Bug fixes
git log --oneline $PREVIOUS_TAG..$CURRENT_TAG | grep -E '^[a-f0-9]+ docs:' # Documentation
git log --oneline $PREVIOUS_TAG..$CURRENT_TAG | grep -E '^[a-f0-9]+ refactor:' # Refactoring
git log --oneline $PREVIOUS_TAG..$CURRENT_TAG | grep -E '^[a-f0-9]+ test:' # Tests
```
### Quick Sync Change Detection
```bash
# Working directory changes
git diff --name-only HEAD
# Staged changes
git diff --cached --name-only
# Focus on API-affecting files
git diff --name-only HEAD -- addons/card-framework/ | grep '\.gd$'
# Check if any GDScript files changed
if git diff --name-only HEAD -- addons/card-framework/ | grep -q '\.gd$'; then
echo "API files changed - documentation update needed"
fi
```
## Claude Commands Integration
### Full Sync Implementation
```bash
#!/bin/bash
# Implementation for /sync-docs command
CURRENT_TAG="$1"
PREVIOUS_TAG=$(git tag --sort=version:refname | grep -E '^v[0-9]+\.[0-9]+\.[0-9]+$' | grep -Fxv "$CURRENT_TAG" | tail -1)
echo "📊 Analyzing changes from $PREVIOUS_TAG to $CURRENT_TAG"
# Get changed files
CHANGED_FILES=$(git diff --name-only $PREVIOUS_TAG..$CURRENT_TAG -- addons/card-framework/)
# Get commit messages for changelog
COMMITS=$(git log --oneline $PREVIOUS_TAG..$CURRENT_TAG)
# Pass to Claude for analysis
claude "/analyze addons/card-framework/ --focus api --persona-scribe=en --ultrathink --context '$CHANGED_FILES' --changelog-commits '$COMMITS'"
```
### Quick Sync Implementation
```bash
#!/bin/bash
# Implementation for /quick-sync command
# Check for changes
WORKING_CHANGES=$(git diff --name-only HEAD -- addons/card-framework/ | grep '\.gd$' || true)
STAGED_CHANGES=$(git diff --cached --name-only -- addons/card-framework/ | grep '\.gd$' || true)
if [ -z "$WORKING_CHANGES" ] && [ -z "$STAGED_CHANGES" ]; then
echo "✅ No API changes detected"
exit 0
fi
echo "📝 Updating documentation for changed files:"
echo "$WORKING_CHANGES"
echo "$STAGED_CHANGES"
# Quick analysis of changed files only
claude "/analyze $WORKING_CHANGES $STAGED_CHANGES --focus api --persona-scribe=en --uc --incremental-update docs/API.md"
```
## Version Tag Best Practices
### Semantic Versioning
- `v1.0.0` - Major version (breaking changes)
- `v1.1.0` - Minor version (new features, backward compatible)
- `v1.1.1` - Patch version (bug fixes)
### Tagging Workflow
```bash
# Create and push tag
git tag v1.1.4 -m "Release version 1.1.4"
git push origin v1.1.4
# List all tags
git tag -l --sort=version:refname
# Get latest tag
git describe --tags --abbrev=0
# Check if tag exists
git rev-parse --verify "refs/tags/v1.1.4" >/dev/null 2>&1
```
## Error Handling
### Common Issues
1. **No previous tag found**: Handle initial release case
2. **Invalid tag format**: Validate semantic versioning
3. **Empty diff**: Handle no changes between versions
4. **Tag doesn't exist**: Verify tag before processing
### Fallback Strategies
```bash
# If no previous tag, use initial commit
PREVIOUS_TAG=${PREVIOUS_TAG:-$(git rev-list --max-parents=0 HEAD)}
# If current tag doesn't exist, use HEAD
CURRENT_TAG=${CURRENT_TAG:-"HEAD"}
# Validate tag format
if [[ ! "$CURRENT_TAG" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo "Warning: Tag format should be vX.Y.Z"
fi
```

.claude/settings.json Normal file

@@ -0,0 +1,29 @@
{
"allowedTools": [
"Read",
"Write",
"Edit",
"MultiEdit",
"Bash",
"Grep",
"Glob",
"LS",
"TodoWrite",
"Task",
"WebSearch",
"WebFetch",
"mcp__task-master-ai__*"
],
"rules": [
"Always follow GDScript syntax and Godot 4.x conventions when working on Godot projects",
"Maintain the existing Card Framework architecture and naming conventions",
"Ensure compatibility with JsonCardFactory when modifying JSON card data structures",
"Inherit from the base CardContainer class when implementing new CardContainer types",
"Be careful with node structure and signal connections when modifying scene (.tscn) files"
],
"contextFiles": [
"README.md",
"addons/card-framework/**/*.gd",
"project.godot"
]
}

.editorconfig Normal file

@@ -0,0 +1,4 @@
root = true
[*]
charset = utf-8

.gitattributes vendored Normal file

@@ -0,0 +1,2 @@
# Normalize EOL for all files that Git considers text files.
* text=auto eol=lf

.gitignore vendored Normal file

@@ -0,0 +1,3 @@
# Godot 4+ specific ignores
.godot/
/android/

.mcp.template.json Normal file

@@ -0,0 +1,17 @@
{
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "your_anthropic_api_key_here",
"PERPLEXITY_API_KEY": "your_perplexity_api_key_here",
"OPENAI_API_KEY": "your_openai_api_key_here",
"GOOGLE_API_KEY": "your_google_api_key_here",
"XAI_API_KEY": "your_xai_api_key_here",
"OPENROUTER_API_KEY": "your_openrouter_api_key_here",
"MISTRAL_API_KEY": "your_mistral_api_key_here"
}
}
}
}

417
.taskmaster/CLAUDE.md Normal file
View File

@@ -0,0 +1,417 @@
# Task Master AI - Agent Integration Guide
## Essential Commands
### Core Workflow Commands
```bash
# Project Setup
task-master init # Initialize Task Master in current project
task-master parse-prd .taskmaster/docs/prd.txt # Generate tasks from PRD document
task-master models --setup # Configure AI models interactively
# Daily Development Workflow
task-master list # Show all tasks with status
task-master next # Get next available task to work on
task-master show <id> # View detailed task information (e.g., task-master show 1.2)
task-master set-status --id=<id> --status=done # Mark task complete
# Task Management
task-master add-task --prompt="description" --research # Add new task with AI assistance
task-master expand --id=<id> --research --force # Break task into subtasks
task-master update-task --id=<id> --prompt="changes" # Update specific task
task-master update --from=<id> --prompt="changes" # Update multiple tasks from ID onwards
task-master update-subtask --id=<id> --prompt="notes" # Add implementation notes to subtask
# Analysis & Planning
task-master analyze-complexity --research # Analyze task complexity
task-master complexity-report # View complexity analysis
task-master expand --all --research # Expand all eligible tasks
# Dependencies & Organization
task-master add-dependency --id=<id> --depends-on=<id> # Add task dependency
task-master move --from=<id> --to=<id> # Reorganize task hierarchy
task-master validate-dependencies # Check for dependency issues
task-master generate # Update task markdown files (usually auto-called)
```
## Key Files & Project Structure
### Core Files
- `.taskmaster/tasks/tasks.json` - Main task data file (auto-managed)
- `.taskmaster/config.json` - AI model configuration (use `task-master models` to modify)
- `.taskmaster/docs/prd.txt` - Product Requirements Document for parsing
- `.taskmaster/tasks/*.txt` - Individual task files (auto-generated from tasks.json)
- `.env` - API keys for CLI usage
### Claude Code Integration Files
- `CLAUDE.md` - Auto-loaded context for Claude Code (this file)
- `.claude/settings.json` - Claude Code tool allowlist and preferences
- `.claude/commands/` - Custom slash commands for repeated workflows
- `.mcp.json` - MCP server configuration (project-specific)
### Directory Structure
```
project/
├── .taskmaster/
│ ├── tasks/ # Task files directory
│ │ ├── tasks.json # Main task database
│ │ ├── task-1.md # Individual task files
│ │ └── task-2.md
│ ├── docs/ # Documentation directory
│   │   └── prd.txt          # Product requirements
│ ├── reports/ # Analysis reports directory
│ │ └── task-complexity-report.json
│ ├── templates/ # Template files
│ │ └── example_prd.txt # Example PRD template
│ └── config.json # AI models & settings
├── .claude/
│ ├── settings.json # Claude Code configuration
│ └── commands/ # Custom slash commands
├── .env # API keys
├── .mcp.json # MCP configuration
└── CLAUDE.md # This file - auto-loaded by Claude Code
```
## MCP Integration
Task Master provides an MCP server that Claude Code can connect to. Configure in `.mcp.json`:
```json
{
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "your_key_here",
"PERPLEXITY_API_KEY": "your_key_here",
"OPENAI_API_KEY": "OPENAI_API_KEY_HERE",
"GOOGLE_API_KEY": "GOOGLE_API_KEY_HERE",
"XAI_API_KEY": "XAI_API_KEY_HERE",
"OPENROUTER_API_KEY": "OPENROUTER_API_KEY_HERE",
"MISTRAL_API_KEY": "MISTRAL_API_KEY_HERE",
"AZURE_OPENAI_API_KEY": "AZURE_OPENAI_API_KEY_HERE",
"OLLAMA_API_KEY": "OLLAMA_API_KEY_HERE"
}
}
}
}
```
### Essential MCP Tools
```javascript
help; // = shows available taskmaster commands
// Project setup
initialize_project; // = task-master init
parse_prd; // = task-master parse-prd
// Daily workflow
get_tasks; // = task-master list
next_task; // = task-master next
get_task; // = task-master show <id>
set_task_status; // = task-master set-status
// Task management
add_task; // = task-master add-task
expand_task; // = task-master expand
update_task; // = task-master update-task
update_subtask; // = task-master update-subtask
update; // = task-master update
// Analysis
analyze_project_complexity; // = task-master analyze-complexity
complexity_report; // = task-master complexity-report
```
## Claude Code Workflow Integration
### Standard Development Workflow
#### 1. Project Initialization
```bash
# Initialize Task Master
task-master init
# Create or obtain PRD, then parse it
task-master parse-prd .taskmaster/docs/prd.txt
# Analyze complexity and expand tasks
task-master analyze-complexity --research
task-master expand --all --research
```
If tasks already exist, an additional PRD (containing only new information!) can be parsed with `parse-prd --append`. This adds the generated tasks to the existing list.
#### 2. Daily Development Loop
```bash
# Start each session
task-master next # Find next available task
task-master show <id> # Review task details
# During implementation, log code context into the tasks and subtasks
task-master update-subtask --id=<id> --prompt="implementation notes..."
# Complete tasks
task-master set-status --id=<id> --status=done
```
#### 3. Multi-Claude Workflows
For complex projects, use multiple Claude Code sessions:
```bash
# Terminal 1: Main implementation
cd project && claude
# Terminal 2: Testing and validation
cd project-test-worktree && claude
# Terminal 3: Documentation updates
cd project-docs-worktree && claude
```
### Custom Slash Commands
Create `.claude/commands/taskmaster-next.md`:
```markdown
Find the next available Task Master task and show its details.
Steps:
1. Run `task-master next` to get the next task
2. If a task is available, run `task-master show <id>` for full details
3. Provide a summary of what needs to be implemented
4. Suggest the first implementation step
```
Create `.claude/commands/taskmaster-complete.md`:
```markdown
Complete a Task Master task: $ARGUMENTS
Steps:
1. Review the current task with `task-master show $ARGUMENTS`
2. Verify all implementation is complete
3. Run any tests related to this task
4. Mark as complete: `task-master set-status --id=$ARGUMENTS --status=done`
5. Show the next available task with `task-master next`
```
## Tool Allowlist Recommendations
Add to `.claude/settings.json`:
```json
{
"allowedTools": [
"Edit",
"Bash(task-master *)",
"Bash(git commit:*)",
"Bash(git add:*)",
"Bash(npm run *)",
"mcp__task_master_notion_ai__*"
]
}
```
## Configuration & Setup
### API Keys Required
At least **one** of these API keys must be configured:
- `ANTHROPIC_API_KEY` (Claude models) - **Recommended**
- `PERPLEXITY_API_KEY` (Research features) - **Highly recommended**
- `OPENAI_API_KEY` (GPT models)
- `GOOGLE_API_KEY` (Gemini models)
- `MISTRAL_API_KEY` (Mistral models)
- `OPENROUTER_API_KEY` (Multiple models)
- `XAI_API_KEY` (Grok models)
An API key is required for any provider used across any of the 3 roles defined in the `models` command.
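Because the CLI reads these keys from `.env`, a small pre-flight check can catch a missing configuration before any AI command fails. A minimal sketch (the `check_env_keys` helper is illustrative, not part of Task Master):

```shell
# Pre-flight: verify at least one supported API key is set in .env
# (key names taken from the list above).
check_env_keys() {
  local env_file="${1:-.env}" key found=0
  for key in ANTHROPIC_API_KEY PERPLEXITY_API_KEY OPENAI_API_KEY \
             GOOGLE_API_KEY MISTRAL_API_KEY OPENROUTER_API_KEY XAI_API_KEY; do
    if grep -Eq "^${key}=.+" "$env_file" 2>/dev/null; then
      echo "found: $key"
      found=1
    fi
  done
  [ "$found" -eq 1 ] || { echo "no API keys configured in $env_file" >&2; return 1; }
}
```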
### Model Configuration
```bash
# Interactive setup (recommended)
task-master models --setup
# Set specific models
task-master models --set-main claude-3-5-sonnet-20241022
task-master models --set-research perplexity-llama-3.1-sonar-large-128k-online
task-master models --set-fallback gpt-4o-mini
```
## Task Structure & IDs
### Task ID Format
- Main tasks: `1`, `2`, `3`, etc.
- Subtasks: `1.1`, `1.2`, `2.1`, etc.
- Sub-subtasks: `1.1.1`, `1.1.2`, etc.
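Because IDs are plain dot-separated strings, shell parameter expansion is enough to derive a task's parent when scripting around the CLI. A sketch (the `parent_id` helper is illustrative, not a Task Master command):

```shell
# Derive the parent of a dot-separated task ID; top-level tasks have no parent,
# so they are returned unchanged.
parent_id() {
  local id="$1"
  case "$id" in
    *.*) printf '%s\n' "${id%.*}" ;;  # strip the trailing ".N" segment
    *)   printf '%s\n' "$id" ;;       # already a top-level task
  esac
}

parent_id 1.1.2   # -> 1.1
parent_id 2.3     # -> 2
parent_id 7       # -> 7
```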
### Task Status Values
- `pending` - Ready to work on
- `in-progress` - Currently being worked on
- `done` - Completed and verified
- `deferred` - Postponed
- `cancelled` - No longer needed
- `blocked` - Waiting on external factors
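When wrapping `set-status` in scripts, it can help to guard the argument against the values above before calling the CLI. A sketch (the `is_valid_status` helper is illustrative):

```shell
# Guard a status argument against the documented values before calling
# `task-master set-status`.
is_valid_status() {
  case "$1" in
    pending|in-progress|done|deferred|cancelled|blocked) return 0 ;;
    *) return 1 ;;
  esac
}

if is_valid_status "in-progress"; then
  echo "ok to run: task-master set-status --id=1.2 --status=in-progress"
fi
```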
### Task Fields
```json
{
"id": "1.2",
"title": "Implement user authentication",
"description": "Set up JWT-based auth system",
"status": "pending",
"priority": "high",
"dependencies": ["1.1"],
"details": "Use bcrypt for hashing, JWT for tokens...",
"testStrategy": "Unit tests for auth functions, integration tests for login flow",
"subtasks": []
}
```
## Claude Code Best Practices with Task Master
### Context Management
- Use `/clear` between different tasks to maintain focus
- This CLAUDE.md file is automatically loaded for context
- Use `task-master show <id>` to pull specific task context when needed
### Iterative Implementation
1. `task-master show <subtask-id>` - Understand requirements
2. Explore codebase and plan implementation
3. `task-master update-subtask --id=<id> --prompt="detailed plan"` - Log plan
4. `task-master set-status --id=<id> --status=in-progress` - Start work
5. Implement code following logged plan
6. `task-master update-subtask --id=<id> --prompt="what worked/didn't work"` - Log progress
7. `task-master set-status --id=<id> --status=done` - Complete task
### Complex Workflows with Checklists
For large migrations or multi-step processes:
1. Create a markdown PRD file describing the new changes: `touch task-migration-checklist.md` (PRDs can be `.txt` or `.md`)
2. Parse the new PRD with `task-master parse-prd --append` (also available via MCP)
3. Expand the newly generated tasks into subtasks. Consider running `analyze-complexity` with the correct `--from` and `--to` IDs (the new IDs) to identify the ideal number of subtasks for each task, then expand them.
4. Work through items systematically, checking them off as completed
5. Use `task-master update-subtask` to log progress on each task/subtask, and update or research them before/during implementation if you get stuck
### Git Integration
Task Master works well with `gh` CLI:
```bash
# Create PR for completed task
gh pr create --title "Complete task 1.2: User authentication" --body "Implements JWT auth system as specified in task 1.2"
# Reference task in commits
git commit -m "feat: implement JWT auth (task 1.2)"
```
### Parallel Development with Git Worktrees
```bash
# Create worktrees for parallel task development
git worktree add ../project-auth feature/auth-system
git worktree add ../project-api feature/api-refactor
# Run Claude Code in each worktree
cd ../project-auth && claude # Terminal 1: Auth work
cd ../project-api && claude # Terminal 2: API work
```
## Troubleshooting
### AI Commands Failing
```bash
# Check API keys are configured
cat .env # For CLI usage
# Verify model configuration
task-master models
# Test with different model
task-master models --set-fallback gpt-4o-mini
```
### MCP Connection Issues
- Check `.mcp.json` configuration
- Verify Node.js installation
- Use `--mcp-debug` flag when starting Claude Code
- Use CLI as fallback if MCP unavailable
### Task File Sync Issues
```bash
# Regenerate task files from tasks.json
task-master generate
# Fix dependency issues
task-master fix-dependencies
```
DO NOT RE-INITIALIZE. That will not do anything beyond re-adding the same Taskmaster core files.
## Important Notes
### AI-Powered Operations
These commands make AI calls and may take up to a minute:
- `parse_prd` / `task-master parse-prd`
- `analyze_project_complexity` / `task-master analyze-complexity`
- `expand_task` / `task-master expand`
- `expand_all` / `task-master expand --all`
- `add_task` / `task-master add-task`
- `update` / `task-master update`
- `update_task` / `task-master update-task`
- `update_subtask` / `task-master update-subtask`
### File Management
- Never manually edit `tasks.json` - use commands instead
- Never manually edit `.taskmaster/config.json` - use `task-master models`
- Task markdown files in `tasks/` are auto-generated
- Run `task-master generate` after manual changes to tasks.json
### Claude Code Session Management
- Use `/clear` frequently to maintain focused context
- Create custom slash commands for repeated Task Master workflows
- Configure tool allowlist to streamline permissions
- Use headless mode for automation: `claude -p "task-master next"`
### Multi-Task Updates
- Use `update --from=<id>` to update multiple future tasks
- Use `update-task --id=<id>` for single task updates
- Use `update-subtask --id=<id>` for implementation logging
### Research Mode
- Add `--research` flag for research-based AI enhancement
- Requires a research model API key like Perplexity (`PERPLEXITY_API_KEY`) in environment
- Provides more informed task creation and updates
- Recommended for complex technical tasks
---
_This guide ensures Claude Code has immediate access to Task Master's essential functionality for agentic development workflows._

View File

@@ -0,0 +1,33 @@
{
"models": {
"main": {
"provider": "anthropic",
"modelId": "claude-3-5-sonnet-20241022",
"maxTokens": 64000,
"temperature": 0.2
},
"research": {
"provider": "perplexity",
"modelId": "llama-3.1-sonar-large-128k-online",
"maxTokens": 65536,
"temperature": 0.1
},
"fallback": {
"provider": "openai",
"modelId": "gpt-4o-mini",
"maxTokens": 64000,
"temperature": 0.2
}
},
"global": {
"logLevel": "info",
"debug": false,
"defaultNumTasks": 10,
"defaultSubtasks": 5,
"defaultPriority": "medium",
"projectName": "CardFramework",
"responseLanguage": "English",
"defaultTag": "master"
},
"claudeCode": {}
}

View File

@@ -0,0 +1,47 @@
<context>
# Overview
[Provide a high-level overview of your product here. Explain what problem it solves, who it's for, and why it's valuable.]
# Core Features
[List and describe the main features of your product. For each feature, include:
- What it does
- Why it's important
- How it works at a high level]
# User Experience
[Describe the user journey and experience. Include:
- User personas
- Key user flows
- UI/UX considerations]
</context>
<PRD>
# Technical Architecture
[Outline the technical implementation details:
- System components
- Data models
- APIs and integrations
- Infrastructure requirements]
# Development Roadmap
[Break down the development process into phases:
- MVP requirements
- Future enhancements
- Do not think about timelines whatsoever -- all that matters is scope and detailing exactly what needs to be built in each phase so it can later be cut up into tasks]
# Logical Dependency Chain
[Define the logical order of development:
- Which features need to be built first (foundation)
- Getting as quickly as possible to a usable/visible front end that works
- Properly pacing and scoping each feature so it is atomic but can also be built upon and improved as development approaches]
# Risks and Mitigations
[Identify potential risks and how they'll be addressed:
- Technical challenges
- Figuring out the MVP that we can build upon
- Resource constraints]
# Appendix
[Include any additional information:
- Research findings
- Technical specifications]
</PRD>

236
CLAUDE.md Normal file
View File

@@ -0,0 +1,236 @@
# Card Framework - Claude Code Project Guide
## Project Overview
**Card Framework** is a professional-grade Godot 4.x addon for creating 2D card games. This lightweight, extensible toolkit supports various card game genres from classic Solitaire to complex TCGs and deck-building roguelikes.
### Key Characteristics
- **Target Engine**: Godot 4.4.1
- **Architecture**: Modular addon with factory patterns and inheritance hierarchy
- **License**: Open source with CC0 assets
- **Status**: Production-ready (v1.1.3) with comprehensive examples
## Architecture Overview
### Core Components
```
CardManager (Root orchestrator)
├── CardFactory (Abstract) → JsonCardFactory (Concrete)
├── CardContainer (Abstract) → Pile/Hand (Specialized containers)
├── Card (extends DraggableObject)
└── DropZone (Interaction handling)
```
### Design Patterns in Use
- **Factory Pattern**: Flexible card creation via CardFactory/JsonCardFactory
- **Template Method**: CardContainer with overridable methods for game-specific logic
- **Observer Pattern**: Event-driven card movement and interaction callbacks
- **Strategy Pattern**: Pluggable drag-and-drop via DraggableObject inheritance
### File Structure
- `addons/card-framework/` - Core framework code
- `example1/` - Basic demonstration project
- `freecell/` - Complete FreeCell game implementation
- `project.godot` - Godot 4.4+ project configuration
## Development Guidelines
### Code Standards
1. **GDScript Best Practices**
- Use strong typing: `func create_card(name: String) -> Card`
- Follow naming conventions: `card_container`, `front_face_texture`
- Document public APIs with `##` comments
- Use `@export` for designer-configurable properties
2. **Godot 4.x Compliance**
- Use `class_name` declarations for reusable classes
- Prefer `@onready` for node references
- Use signals for decoupled communication
- Leverage Resource system for configuration (Curve resources)
3. **Framework Architecture Rules**
- Inherit from CardContainer for new container types
- Extend CardFactory for custom card creation logic
- Use CardManager as the central orchestrator
- Maintain JSON compatibility for card data when using JsonCardFactory
### Extension Patterns
#### Creating Custom Card Containers
```gdscript
class_name MyCustomContainer
extends CardContainer
func check_card_can_be_dropped(cards: Array) -> bool:
# Implement game-specific rules
return true
func add_card(card: Card, index: int = -1) -> void:
# Custom card placement logic
super.add_card(card, index)
```
#### Extending Card Properties
```gdscript
class_name GameCard
extends Card
@export var power: int
@export var cost: int
@export var effect: String
func _ready():
super._ready()
# Initialize custom properties from card_info
```
## Claude Code Usage Patterns
### Quick Commands for Development
#### Analysis and Exploration
```bash
# Analyze specific components
/godot-analyze Card
/godot-analyze CardContainer
/godot-analyze "drag and drop system"
# Review architecture
/analyze addons/card-framework/ --focus architecture
```
#### Implementation Tasks
```bash
# Add new features
/godot-implement "deck shuffling animation"
/godot-implement "card effect system"
# Create custom containers
/godot-implement "discard pile with auto-organize"
```
#### Testing and Validation
```bash
# Create tests
/godot-test unit Card
/godot-test integration "hand reordering"
/godot-test performance "large deck handling"
```
### Development Workflow
#### 1. Understanding Existing Code
- Start with `/godot-analyze [component]` to understand structure
- Use `/analyze` for deeper architectural investigation
- Read example implementations in `freecell/` for complex patterns
#### 2. Planning New Features
- Create task breakdown using TodoWrite
- Consider compatibility with existing CardContainer interface
- Plan JSON schema changes if extending card properties
#### 3. Implementation Best Practices
- Always extend base classes rather than modifying core framework
- Test with both `example1` and `freecell` projects
- Maintain backwards compatibility with existing JSON card data
#### 4. Quality Assurance
- Run both example scenes to verify functionality
- Check performance with large card collections
- Validate proper cleanup and memory management
## Key Configuration Areas
### CardManager Setup
- `card_size`: Default dimensions for all cards
- `card_factory_scene`: Factory responsible for card creation
- `debug_mode`: Enable visual debugging for drop zones
### CardFactory Configuration
- `card_asset_dir`: Location of card image assets
- `card_info_dir`: Directory containing JSON card definitions
- `back_image`: Default card back texture
### JSON Card Schema
```json
{
"name": "card_identifier",
"front_image": "texture_filename.png",
"suit": "optional_game_data",
"value": "additional_properties"
}
```
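Since JsonCardFactory expects one JSON file per card in `card_info_dir`, a quick syntax pass over the directory can catch malformed definitions before running the game. A sketch (the helper is illustrative; `python3 -m json.tool` merely stands in as a validator, while the real loading happens inside Godot):

```shell
# Sanity-check that every card definition in a card_info_dir parses as JSON.
validate_card_json() {
  local dir="$1" f status=0
  for f in "$dir"/*.json; do
    [ -e "$f" ] || continue  # empty directory: nothing to check
    if python3 -m json.tool "$f" > /dev/null 2>&1; then
      echo "ok: $f"
    else
      echo "invalid JSON: $f" >&2
      status=1
    fi
  done
  return "$status"
}
```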
## Common Implementation Patterns
### Card Movement and Animation
- Use `card.move(target_position, rotation)` for programmatic movement
- Leverage `moving_speed` property for consistent animation timing
- Handle movement completion via `on_card_move_done()` callbacks
### Game Rules Implementation
- Override `check_card_can_be_dropped()` in custom containers
- Use `move_cards()` with history tracking for undo/redo support
- Implement game state validation in container logic
### Performance Optimization
- Preload card data using `factory.preload_card_data()`
- Limit visual card display with `max_stack_display` in Pile containers
- Use `debug_mode` to identify performance bottlenecks
## Integration Points
### Asset Pipeline
- Card images in `card_asset_dir` (typically PNG format)
- JSON metadata in `card_info_dir` matching image filenames
- Support for Kenney.nl asset packs (included in examples)
### Scene Structure
- CardManager as root node in card-enabled scenes
- CardContainers as children of CardManager
- Cards instantiated dynamically via factory pattern
### Extensibility Hooks
- Virtual methods in CardContainer for custom behavior
- Card property extensions via inheritance
- Factory pattern for alternative card creation strategies
## Task Master AI Integration
This project includes Task Master AI for advanced project management:
```bash
# Initialize task tracking
task-master init
# Create tasks from project requirements
task-master parse-prd .taskmaster/docs/prd.txt
# Track development progress
task-master next # Get next task
task-master show <id> # View task details
task-master set-status --id=<id> --status=done
```
See `.taskmaster/CLAUDE.md` for detailed Task Master workflows.
## Troubleshooting Guide
### Common Issues
- **Cards not appearing**: Check `card_asset_dir` path and file naming
- **JSON loading errors**: Verify JSON syntax and required fields
- **Drag-and-drop issues**: Ensure CardContainer has `enable_drop_zone = true`
- **Performance problems**: Use `debug_mode` to visualize sensor areas
### Debug Tools
- Enable `debug_mode` in CardManager for visual debugging
- Use Godot's remote inspector for runtime state examination
- Check console output for framework-specific error messages
---
*This project demonstrates professional Godot addon development with comprehensive documentation, clean architecture, and production-ready examples. It serves as an excellent foundation for 2D card game development.*
## Task Master AI Instructions
**Import Task Master's development workflow commands and guidelines, treat as if import is in the main CLAUDE.md file.**
@./.taskmaster/CLAUDE.md

9
LICENSE.md Normal file
View File

@@ -0,0 +1,9 @@
MIT License
Copyright (c) 2025 Hyunjoon Park
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

82
README.md Normal file
View File

@@ -0,0 +1,82 @@
# Card Framework
[![Version](https://img.shields.io/badge/version-1.2.3-blue.svg)](https://github.com/hyunjoon/card-framework)
[![Godot](https://img.shields.io/badge/Godot-4.4+-green.svg)](https://godotengine.org/)
[![License](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE.md)
[![Platform](https://img.shields.io/badge/platform-cross--platform-lightgrey.svg)]()
**Professional-grade Godot 4.x addon** for building 2D card games. Create **Solitaire**, **TCG**, or **deck-building roguelikes** with flexible card handling and drag-and-drop interactions.
![Example1 Screenshot](addons/card-framework/screenshots/example1.png) ![Freecell Screenshot](addons/card-framework/screenshots/freecell.png)
## Key Features
- **Drag & Drop System** - Intuitive card interactions with built-in validation
- **Flexible Containers** - `Pile` (stacks), `Hand` (fanned layouts), custom containers
- **JSON Card Data** - Define cards with metadata, images, and custom properties
- **Production Ready** - Complete FreeCell implementation included
- **Extensible Architecture** - Factory patterns, inheritance hierarchy, event system
## Installation
- **From AssetLib:** Search "Card Framework" in Godot's AssetLib tab
- **Manual:** Copy contents to `res://addons/card-framework`
## Quick Start
1. **Add CardManager** - Instance `card-framework/card_manager.tscn` in your scene
2. **Configure Factory** - Assign `JsonCardFactory` to `card_factory_scene`
3. **Set Directories** - Point `card_asset_dir` to images, `card_info_dir` to JSON files
4. **Add Containers** - Create `Pile` or `Hand` nodes as children of CardManager
### Basic Card JSON
```json
{
"name": "club_2",
"front_image": "cardClubs2.png",
"suit": "club",
"value": "2"
}
```
## Core Architecture
- **CardManager** - Root orchestrator managing factories, containers, and move history
- **Card** - Individual card nodes with animations, face states, interaction properties
- **CardContainer** - Base class for `Pile` (stacks) and `Hand` (fanned layouts)
- **CardFactory** - Creates cards from JSON data, supports custom implementations
## Sample Projects
- **`example1/`** - Basic demonstration with different container types
- **`freecell/`** - Complete game with custom rules, statistics, seed generation
Run: `res://example1/example1.tscn` or `res://freecell/scenes/menu/menu.tscn`
## Customization
- **Custom Containers** - Extend `CardContainer`, override `check_card_can_be_dropped()`
- **Custom Cards** - Extend `Card` class for game-specific properties
- **Custom Factories** - Extend `CardFactory` for database/procedural card creation
## Documentation
- **[Getting Started Guide](docs/GETTING_STARTED.md)** - Complete setup and configuration
- **[API Reference](docs/API.md)** - Full class documentation and method reference
- **[Changelog](docs/CHANGELOG.md)** - Version history and upgrade guide
- **[Documentation Index](docs/index.md)** - Complete documentation overview
## Contributing
1. Fork repository
2. Create feature branch
3. Commit with clear messages
4. Open pull request with problem description
## License & Credits
- **Framework**: Open source
- **Card Assets**: [Kenney.nl](https://kenney.nl/assets/boardgame-pack) (CC0 License)
- **Version**: 1.2.3 (Godot 4.4+ compatible)
**Thanks to:** [Kenney.nl](https://kenney.nl/assets/boardgame-pack), [InsideOut-Andrew](https://github.com/insideout-andrew/simple-card-pile-ui), [Rosetta Code FreeCell](https://rosettacode.org/wiki/Deal_cards_for_FreeCell)

View File

@@ -0,0 +1,135 @@
## A card object that represents a single playing card with drag-and-drop functionality.
##
## The Card class extends DraggableObject to provide interactive card behavior including
## hover effects, drag operations, and visual state management. Cards can display
## different faces (front/back) and integrate with the card framework's container system.
##
## Key Features:
## - Visual state management (front/back face display)
## - Drag-and-drop interaction with state machine
## - Integration with CardContainer for organized card management
## - Hover animation and visual feedback
##
## Usage:
## [codeblock]
## var card = card_factory.create_card("ace_spades", target_container)
## card.show_front = true
## card.move(target_position, 0)
## [/codeblock]
class_name Card
extends DraggableObject
# Static counters for global card state tracking
static var hovering_card_count: int = 0
static var holding_card_count: int = 0
## The name of the card.
@export var card_name: String
## The size of the card.
@export var card_size: Vector2 = CardFrameworkSettings.LAYOUT_DEFAULT_CARD_SIZE
## The texture for the front face of the card.
@export var front_image: Texture2D
## The texture for the back face of the card.
@export var back_image: Texture2D
## Whether the front face of the card is shown.
## If true, the front face is visible; otherwise, the back face is visible.
@export var show_front: bool = true:
set(value):
if value:
front_face_texture.visible = true
back_face_texture.visible = false
else:
front_face_texture.visible = false
back_face_texture.visible = true
# Card data and container reference
var card_info: Dictionary
var card_container: CardContainer
@onready var front_face_texture: TextureRect = $FrontFace/TextureRect
@onready var back_face_texture: TextureRect = $BackFace/TextureRect
func _ready() -> void:
super._ready()
front_face_texture.size = card_size
back_face_texture.size = card_size
if front_image:
front_face_texture.texture = front_image
if back_image:
back_face_texture.texture = back_image
pivot_offset = card_size / 2
func _on_move_done() -> void:
card_container.on_card_move_done(self)
## Sets the front and back face textures for this card.
##
## @param front_face: The texture to use for the front face
## @param back_face: The texture to use for the back face
func set_faces(front_face: Texture2D, back_face: Texture2D) -> void:
front_face_texture.texture = front_face
back_face_texture.texture = back_face
## Returns the card to its original position with smooth animation.
func return_card() -> void:
super.return_to_original()
# Override state entry to add card-specific logic
func _enter_state(state: DraggableState, from_state: DraggableState) -> void:
super._enter_state(state, from_state)
match state:
DraggableState.HOVERING:
hovering_card_count += 1
DraggableState.HOLDING:
holding_card_count += 1
if card_container:
card_container.hold_card(self)
# Override state exit to add card-specific logic
func _exit_state(state: DraggableState) -> void:
match state:
DraggableState.HOVERING:
hovering_card_count -= 1
DraggableState.HOLDING:
holding_card_count -= 1
super._exit_state(state)
## Legacy compatibility method for holding state.
## @deprecated Use state machine transitions instead
func set_holding() -> void:
if card_container:
card_container.hold_card(self)
## Returns a string representation of this card.
func get_string() -> String:
return card_name
## Checks if this card can start hovering based on global card state.
## Prevents multiple cards from hovering simultaneously.
func _can_start_hovering() -> bool:
return hovering_card_count == 0 and holding_card_count == 0
## Handles mouse press events with container notification.
func _handle_mouse_pressed() -> void:
card_container.on_card_pressed(self)
super._handle_mouse_pressed()
## Handles mouse release events and releases held cards.
func _handle_mouse_released() -> void:
super._handle_mouse_released()
if card_container:
card_container.release_holding_cards()

View File

@@ -0,0 +1 @@
uid://dtpomjc0u41g

View File

@@ -0,0 +1,38 @@
[gd_scene load_steps=2 format=3 uid="uid://brjlo8xing83p"]
[ext_resource type="Script" uid="uid://dtpomjc0u41g" path="res://addons/card-framework/card.gd" id="1_6ohl5"]
[node name="Card" type="Control"]
layout_mode = 3
anchors_preset = 0
script = ExtResource("1_6ohl5")
card_name = null
card_size = null
show_front = null
moving_speed = null
can_be_interacted_with = null
hover_distance = null
[node name="FrontFace" type="Control" parent="."]
layout_mode = 1
anchors_preset = 0
offset_right = 40.0
offset_bottom = 40.0
mouse_filter = 1
[node name="TextureRect" type="TextureRect" parent="FrontFace"]
layout_mode = 1
offset_right = 150.0
offset_bottom = 210.0
[node name="BackFace" type="Control" parent="."]
layout_mode = 1
anchors_preset = 0
offset_right = 40.0
offset_bottom = 40.0
mouse_filter = 1
[node name="TextureRect" type="TextureRect" parent="BackFace"]
layout_mode = 1
offset_right = 150.0
offset_bottom = 210.0

View File

@@ -0,0 +1,384 @@
## Abstract base class for all card containers in the card framework.
##
## CardContainer provides the foundational functionality for managing collections of cards,
## including drag-and-drop operations, position management, and container interactions.
## All specialized containers (Hand, Pile, etc.) extend this class.
##
## Key Features:
## - Card collection management with position tracking
## - Drag-and-drop integration with DropZone system
## - History tracking for undo/redo operations
## - Extensible layout system through virtual methods
## - Visual debugging support for development
##
## Virtual Methods to Override:
## - _card_can_be_added(): Define container-specific rules
## - _update_target_positions(): Implement container layout logic
## - on_card_move_done(): Handle post-movement processing
##
## Usage:
## [codeblock]
## class_name MyContainer
## extends CardContainer
##
## func _card_can_be_added(cards: Array) -> bool:
## return cards.size() == 1 # Only allow single cards
## [/codeblock]
class_name CardContainer
extends Control
# Static counter for unique container identification
static var next_id: int = 0
@export_group("drop_zone")
## Enables or disables the drop zone functionality.
@export var enable_drop_zone := true
@export_subgroup("Sensor")
## The size of the sensor. If not set, it will follow the size of the card.
@export var sensor_size: Vector2
## The position of the sensor.
@export var sensor_position: Vector2
## The texture used for the sensor.
@export var sensor_texture: Texture
## Determines whether the sensor is visible or not.
## Because the sensor moves to follow container state, enable visibility only for debugging.
@export var sensor_visibility := false
# Container identification and management
var unique_id: int
var drop_zone_scene = preload("drop_zone.tscn")
var drop_zone: DropZone = null
# Card collection and state
var _held_cards: Array[Card] = []
var _holding_cards: Array[Card] = []
# Scene references
var cards_node: Control
var card_manager: CardManager
var debug_mode := false
func _init() -> void:
unique_id = next_id
next_id += 1
func _ready() -> void:
# Check if 'Cards' node already exists
if has_node("Cards"):
cards_node = $Cards
else:
cards_node = Control.new()
cards_node.name = "Cards"
cards_node.mouse_filter = Control.MOUSE_FILTER_PASS
add_child(cards_node)
var parent = get_parent()
if parent is CardManager:
card_manager = parent
else:
push_error("CardContainer should be under the CardManager")
return
card_manager._add_card_container(unique_id, self)
if enable_drop_zone:
drop_zone = drop_zone_scene.instantiate()
add_child(drop_zone)
drop_zone.init(self, [CardManager.CARD_ACCEPT_TYPE])
# If sensor_size is not set, it will follow the card size.
if sensor_size == Vector2(0, 0):
sensor_size = card_manager.card_size
drop_zone.set_sensor(sensor_size, sensor_position, sensor_texture, sensor_visibility)
if debug_mode:
drop_zone.sensor_outline.visible = true
else:
drop_zone.sensor_outline.visible = false
func _exit_tree() -> void:
if card_manager != null:
card_manager._delete_card_container(unique_id)
## Adds a card to this container at the specified index.
## @param card: The card to add
## @param index: Position to insert (-1 for end)
func add_card(card: Card, index: int = -1) -> void:
if index == -1:
_assign_card_to_container(card)
else:
_insert_card_to_container(card, index)
_move_object(card, cards_node, index)
## Removes a card from this container.
## @param card: The card to remove
## @returns: True if card was removed, false if not found
func remove_card(card: Card) -> bool:
var index = _held_cards.find(card)
if index != -1:
_held_cards.remove_at(index)
else:
return false
update_card_ui()
return true
## Returns the number of contained cards
func get_card_count() -> int:
return _held_cards.size()
## Checks if this container contains the specified card.
func has_card(card: Card) -> bool:
return _held_cards.has(card)
## Removes all cards from this container.
func clear_cards() -> void:
for card in _held_cards:
_remove_object(card)
_held_cards.clear()
update_card_ui()
## Checks if the specified cards can be dropped into this container.
## Override _card_can_be_added() in subclasses for custom rules.
func check_card_can_be_dropped(cards: Array) -> bool:
if not enable_drop_zone:
return false
if drop_zone == null:
return false
if not drop_zone.accept_types.has(CardManager.CARD_ACCEPT_TYPE):
return false
if not drop_zone.check_mouse_is_in_drop_zone():
return false
return _card_can_be_added(cards)
## Returns the drop zone partition index under the mouse, or -1 if none.
func get_partition_index() -> int:
var vertical_index = drop_zone.get_vertical_layers()
if vertical_index != -1:
return vertical_index
var horizontal_index = drop_zone.get_horizontal_layers()
if horizontal_index != -1:
return horizontal_index
return -1
## Shuffles the cards in this container using Fisher-Yates algorithm.
func shuffle() -> void:
_fisher_yates_shuffle(_held_cards)
for i in range(_held_cards.size()):
var card = _held_cards[i]
cards_node.move_child(card, i)
update_card_ui()
## Moves cards to this container with optional history tracking.
## @param cards: Array of cards to move
## @param index: Target position (-1 for end)
## @param with_history: Whether to record for undo
## @returns: True if move was successful
func move_cards(cards: Array, index: int = -1, with_history: bool = true) -> bool:
if not _card_can_be_added(cards):
return false
# XXX: If the card is already in the container, we don't add it into the history.
if not cards.all(func(card): return _held_cards.has(card)) and with_history:
card_manager._add_history(self, cards)
_move_cards(cards, index)
return true
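# Illustrative sketch (not part of the framework): moving a card between two
# hypothetical containers (`hand`, `foundation`) with history enabled, then
# reverting the move through the manager.
#
#     var picked: Array = [hand._held_cards[0]]
#     if foundation.move_cards(picked):   # recorded in the manager's history
#         card_manager.undo()             # returns the card to `hand`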
## Restores cards to their original positions with index precision.
## @param cards: Cards to restore
## @param from_indices: Original indices for precise positioning
func undo(cards: Array, from_indices: Array = []) -> void:
# Validate input parameters
if not from_indices.is_empty() and cards.size() != from_indices.size():
push_error("Mismatched cards and indices arrays in undo operation!")
# Fallback to basic undo
_move_cards(cards)
return
# Fallback: add to end if no index info available
if from_indices.is_empty():
_move_cards(cards)
return
# Validate all indices are valid
for i in range(from_indices.size()):
if from_indices[i] < 0:
push_error("Invalid index found during undo: %d" % from_indices[i])
# Fallback to basic undo
_move_cards(cards)
return
# Check if indices are consecutive (bulk move scenario)
var sorted_indices = from_indices.duplicate()
sorted_indices.sort()
var is_consecutive = true
for i in range(1, sorted_indices.size()):
if sorted_indices[i] != sorted_indices[i-1] + 1:
is_consecutive = false
break
if is_consecutive and sorted_indices.size() > 1:
# Bulk consecutive restore: maintain original relative order
var lowest_index = sorted_indices[0]
# Sort cards by their original indices to maintain proper order
var card_index_pairs = []
for i in range(cards.size()):
card_index_pairs.append({"card": cards[i], "index": from_indices[i]})
# Sort by index ascending to maintain original order
card_index_pairs.sort_custom(func(a, b): return a.index < b.index)
# Insert all cards starting from the lowest index
for i in range(card_index_pairs.size()):
var target_index = min(lowest_index + i, _held_cards.size())
_move_cards([card_index_pairs[i].card], target_index)
else:
# Non-consecutive indices: restore individually (original logic)
var card_index_pairs = []
for i in range(cards.size()):
card_index_pairs.append({"card": cards[i], "index": from_indices[i], "original_order": i})
# Sort by index descending, then by original order ascending for stable sorting
card_index_pairs.sort_custom(func(a, b):
if a.index == b.index:
return a.original_order < b.original_order
return a.index > b.index
)
# Restore each card to its original index
for pair in card_index_pairs:
var target_index = min(pair.index, _held_cards.size()) # Clamp to valid range
_move_cards([pair.card], target_index)
## Registers a card from this container as currently being held (dragged).
func hold_card(card: Card) -> void:
if _held_cards.has(card):
_holding_cards.append(card)
## Releases all held cards back to IDLE and notifies the card manager of the drop.
func release_holding_cards() -> void:
if _holding_cards.is_empty():
return
for card in _holding_cards:
# Transition from HOLDING to IDLE state
card.change_state(DraggableObject.DraggableState.IDLE)
var copied_holding_cards = _holding_cards.duplicate()
if card_manager != null:
card_manager._on_drag_dropped(copied_holding_cards)
_holding_cards.clear()
func get_string() -> String:
return "card_container: %d" % unique_id
## Virtual callback invoked after a card's movement animation completes.
func on_card_move_done(_card: Card) -> void:
pass
## Virtual callback invoked when a card in this container is pressed.
func on_card_pressed(_card: Card) -> void:
pass
func _assign_card_to_container(card: Card) -> void:
if card.card_container != self:
card.card_container = self
if not _held_cards.has(card):
_held_cards.append(card)
update_card_ui()
func _insert_card_to_container(card: Card, index: int) -> void:
if card.card_container != self:
card.card_container = self
if not _held_cards.has(card):
if index < 0:
index = 0
elif index > _held_cards.size():
index = _held_cards.size()
_held_cards.insert(index, card)
update_card_ui()
func _move_to_card_container(card: Card, index: int = -1) -> void:
if card.card_container != null:
card.card_container.remove_card(card)
add_card(card, index)
func _fisher_yates_shuffle(array: Array) -> void:
for i in range(array.size() - 1, 0, -1):
var j = randi() % (i + 1)
var temp = array[i]
array[i] = array[j]
array[j] = temp
func _move_cards(cards: Array, index: int = -1) -> void:
var cur_index = index
for i in range(cards.size() - 1, -1, -1):
var card = cards[i]
if cur_index == -1:
_move_to_card_container(card)
else:
_move_to_card_container(card, cur_index)
cur_index += 1
## Virtual method: override to define container-specific acceptance rules.
func _card_can_be_added(_cards: Array) -> bool:
return true
## Updates the visual positions of all cards in this container.
## Call this after modifying card positions or container properties.
func update_card_ui() -> void:
_update_target_z_index()
_update_target_positions()
## Virtual method: override to assign z-index values to held cards.
func _update_target_z_index() -> void:
pass
## Virtual method: override to compute target positions for held cards.
func _update_target_positions() -> void:
pass
func _move_object(target: Node, to: Node, index: int = -1) -> void:
if target.get_parent() == to:
# If already the same parent, just change the order with move_child
if index != -1:
to.move_child(target, index)
else:
# If index is -1, move to the last position
to.move_child(target, to.get_child_count() - 1)
return
var global_pos = target.global_position
if target.get_parent() != null:
target.get_parent().remove_child(target)
if index != -1:
to.add_child(target)
to.move_child(target, index)
else:
to.add_child(target)
target.global_position = global_pos
func _remove_object(target: Node) -> void:
var parent = target.get_parent()
if parent != null:
parent.remove_child(target)
target.queue_free()

View File

@@ -0,0 +1 @@
uid://de8yhmalsa0pm

View File

@@ -0,0 +1,62 @@
@tool
## Abstract base class for card creation factories using the Factory design pattern.
##
## CardFactory defines the interface for creating cards in the card framework.
## Concrete implementations like JsonCardFactory provide specific card creation
## logic while maintaining consistent behavior across different card types and
## data sources.
##
## Design Pattern: Factory Method
## This abstract factory allows the card framework to create cards without
## knowing the specific implementation details. Different factory types can
## support various data sources (JSON files, databases, hardcoded data, etc.).
##
## Key Responsibilities:
## - Define card creation interface for consistent behavior
## - Manage card data caching for performance optimization
## - Provide card size configuration for uniform scaling
## - Support preloading mechanisms for reduced runtime I/O
##
## Subclass Implementation Requirements:
## - Override create_card() to implement specific card creation logic
## - Override preload_card_data() to implement data initialization
## - Use preloaded_cards dictionary for caching when appropriate
##
## Usage:
## [codeblock]
## class_name MyCardFactory
## extends CardFactory
##
## func create_card(card_name: String, target: CardContainer) -> Card:
## # Implementation-specific card creation
## return my_card_instance
## [/codeblock]
class_name CardFactory
extends Node
# Core factory data and configuration
## Dictionary cache for storing preloaded card data to improve performance
## Key: card identifier (String), Value: card data (typically Dictionary)
var preloaded_cards = {}
## Default size for cards created by this factory
## Applied to all created cards unless overridden
var card_size: Vector2
## Virtual method for creating a card instance and adding it to a container.
## Must be implemented by concrete factory subclasses to provide specific
## card creation logic based on the factory's data source and requirements.
## @param card_name: Identifier for the card to create
## @param target: CardContainer where the created card will be added
## @returns: Created Card instance or null if creation failed
func create_card(card_name: String, target: CardContainer) -> Card:
return null
## Virtual method for preloading card data into the factory's cache.
## Concrete implementations should override this to load card definitions
## from their respective data sources (files, databases, etc.) into the
## preloaded_cards dictionary for faster card creation during gameplay.
func preload_card_data() -> void:
pass
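# Illustrative sketch (not part of the framework): a minimal concrete factory
# that serves hardcoded data from the preloaded_cards cache. The class name
# and data layout shown here are hypothetical.
#
#     class_name StaticCardFactory
#     extends CardFactory
#
#     func preload_card_data() -> void:
#         preloaded_cards["ace_of_spades"] = {"suit": "spades", "value": 1}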

View File

@@ -0,0 +1 @@
uid://3lsrv6tjfdc5

View File

@@ -0,0 +1,8 @@
[gd_scene load_steps=3 format=3 uid="uid://7qcsutlss3oj"]
[ext_resource type="Script" uid="uid://8n36yadkvxai" path="res://addons/card-framework/json_card_factory.gd" id="1_jlwb4"]
[ext_resource type="PackedScene" uid="uid://brjlo8xing83p" path="res://addons/card-framework/card.tscn" id="2_1mca4"]
[node name="CardFactory" type="Node"]
script = ExtResource("1_jlwb4")
default_card_scene = ExtResource("2_1mca4")

View File

@@ -0,0 +1,56 @@
## Card Framework configuration constants class.
##
## This class provides centralized constant values for all Card Framework components
## without requiring Autoload. All values are defined as constants to ensure
## consistent behavior across the framework.
##
## Usage:
## [codeblock]
## # Reference constants directly
## var speed = CardFrameworkSettings.ANIMATION_MOVE_SPEED
## var z_offset = CardFrameworkSettings.VISUAL_DRAG_Z_OFFSET
## [/codeblock]
class_name CardFrameworkSettings
extends RefCounted
# Animation Constants
## Speed of card movement animations in pixels per second
const ANIMATION_MOVE_SPEED: float = 2000.0
## Duration of hover animations in seconds
const ANIMATION_HOVER_DURATION: float = 0.10
## Scale multiplier applied during hover effects
const ANIMATION_HOVER_SCALE: float = 1.1
## Rotation in degrees applied during hover effects
const ANIMATION_HOVER_ROTATION: float = 0.0
# Physics & Interaction Constants
## Distance threshold for hover detection in pixels
const PHYSICS_HOVER_DISTANCE: float = 10.0
## Distance cards move up during hover in pixels
const PHYSICS_CARD_HOVER_DISTANCE: float = 30.0
# Visual Layout Constants
## Z-index offset applied to cards during drag operations
const VISUAL_DRAG_Z_OFFSET: int = 1000
## Z-index for pile cards to ensure proper layering
const VISUAL_PILE_Z_INDEX: int = 3000
## Z-index for drop zone sensors (below everything)
const VISUAL_SENSOR_Z_INDEX: int = -1000
## Z-index for debug outlines (above UI)
const VISUAL_OUTLINE_Z_INDEX: int = 1200
# Container Layout Constants
## Default card size used throughout the framework
const LAYOUT_DEFAULT_CARD_SIZE: Vector2 = Vector2(150, 210)
## Distance between stacked cards in piles
const LAYOUT_STACK_GAP: int = 8
## Maximum cards to display in stack before hiding
const LAYOUT_MAX_STACK_DISPLAY: int = 6
## Maximum number of cards in hand containers
const LAYOUT_MAX_HAND_SIZE: int = 10
## Maximum pixel spread for hand arrangements
const LAYOUT_MAX_HAND_SPREAD: int = 700
# Color Constants for Debugging
## Color used for sensor outlines and debug indicators
const DEBUG_OUTLINE_COLOR: Color = Color(1, 0, 0, 1)

View File

@@ -0,0 +1 @@
uid://c308ongyuejma

View File

@@ -0,0 +1,167 @@
@tool
## Central orchestrator for the card framework system.
##
## CardManager coordinates all card-related operations including drag-and-drop,
## history management, and container registration. It serves as the root node
## for card game scenes and manages the lifecycle of cards and containers.
##
## Key Responsibilities:
## - Card factory management and initialization
## - Container registration and coordination
## - Drag-and-drop event handling and routing
## - History tracking for undo/redo operations
## - Debug mode and visual debugging support
##
## Setup Requirements:
## - Must be the parent of all CardContainer instances
## - Requires card_factory_scene to be assigned in inspector
## - Configure card_size to match your card assets
##
## Usage:
## [codeblock]
## # In scene setup
## CardManager (root)
## ├── Hand (CardContainer)
## ├── Foundation (CardContainer)
## └── Deck (CardContainer)
## [/codeblock]
class_name CardManager
extends Control
# Constants
const CARD_ACCEPT_TYPE = "card"
## Default size for all cards in the game
@export var card_size := CardFrameworkSettings.LAYOUT_DEFAULT_CARD_SIZE
## Scene containing the card factory implementation
@export var card_factory_scene: PackedScene
## Enables visual debugging for drop zones and interactions
@export var debug_mode := false
# Core system components
var card_factory: CardFactory
var card_container_dict: Dictionary = {}
var history: Array[HistoryElement] = []
func _init() -> void:
if Engine.is_editor_hint():
return
func _ready() -> void:
if not _pre_process_exported_variables():
return
if Engine.is_editor_hint():
return
card_factory.card_size = card_size
card_factory.preload_card_data()
## Undoes the last card movement operation.
## Restores cards to their previous positions using stored history.
func undo() -> void:
if history.is_empty():
return
var last = history.pop_back()
if last.from != null:
last.from.undo(last.cards, last.from_indices)
## Clears all history entries, preventing further undo operations.
func reset_history() -> void:
history.clear()
func _add_card_container(id: int, card_container: CardContainer) -> void:
card_container_dict[id] = card_container
card_container.debug_mode = debug_mode
func _delete_card_container(id: int) -> void:
card_container_dict.erase(id)
# Handles dropped cards by finding suitable container
func _on_drag_dropped(cards: Array) -> void:
if cards.is_empty():
return
# Store original mouse_filter states and temporarily disable input during drop processing
var original_mouse_filters = {}
for card in cards:
original_mouse_filters[card] = card.mouse_filter
card.mouse_filter = Control.MOUSE_FILTER_IGNORE
# Find first container that accepts the cards
for key in card_container_dict.keys():
var card_container = card_container_dict[key]
var result = card_container.check_card_can_be_dropped(cards)
if result:
var index = card_container.get_partition_index()
# Restore mouse_filter before move_cards (DraggableObject will manage it from here)
for card in cards:
card.mouse_filter = original_mouse_filters[card]
card_container.move_cards(cards, index)
return
for card in cards:
# Restore mouse_filter before return_card (DraggableObject will manage it from here)
card.mouse_filter = original_mouse_filters[card]
card.return_card()
func _add_history(to: CardContainer, cards: Array) -> void:
var from = null
var from_indices = []
# Record indices FIRST, before any movement operations
for i in range(cards.size()):
var c = cards[i]
var current = c.card_container
if i == 0:
from = current
else:
if from != current:
push_error("All cards must be from the same container!")
return
# Record index immediately to avoid race conditions
if from != null:
var original_index = from._held_cards.find(c)
if original_index == -1:
push_error("Card not found in source container during history recording!")
return
from_indices.append(original_index)
var history_element = HistoryElement.new()
history_element.from = from
history_element.to = to
history_element.cards = cards
history_element.from_indices = from_indices
history.append(history_element)
func _is_valid_directory(path: String) -> bool:
var dir = DirAccess.open(path)
return dir != null
func _pre_process_exported_variables() -> bool:
if card_factory_scene == null:
push_error("CardFactory is not assigned! Please set it in the CardManager Inspector.")
return false
var factory_instance = card_factory_scene.instantiate() as CardFactory
if factory_instance == null:
push_error("Failed to instantiate CardFactory! The assigned scene's root node is not a CardFactory.")
return false
add_child(factory_instance)
card_factory = factory_instance
return true
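# Illustrative sketch (not part of the framework): wiring undo to an input
# action from a game scene. Assumes an "undo" action exists in the project's
# Input Map and the CardManager is a child node named "CardManager".
#
#     func _unhandled_input(event: InputEvent) -> void:
#         if event.is_action_pressed("undo"):
#             $CardManager.undo()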

View File

@@ -0,0 +1 @@
uid://clqgq1n7v0ar

View File

@@ -0,0 +1,10 @@
[gd_scene load_steps=3 format=3 uid="uid://c7u8hryloq7hy"]
[ext_resource type="Script" uid="uid://clqgq1n7v0ar" path="res://addons/card-framework/card_manager.gd" id="1_cp2xm"]
[ext_resource type="PackedScene" uid="uid://7qcsutlss3oj" path="res://addons/card-framework/card_factory.tscn" id="2_57jpu"]
[node name="CardManager" type="Control"]
layout_mode = 3
anchors_preset = 0
script = ExtResource("1_cp2xm")
card_factory_scene = ExtResource("2_57jpu")

View File

@@ -0,0 +1,391 @@
## A draggable object that supports mouse interaction with state-based animation system.
##
## This class provides a robust state machine for handling mouse interactions including
## hover effects, drag operations, and programmatic movement using Tween animations.
## All interactive cards and objects extend this base class to inherit consistent
## drag-and-drop behavior.
##
## Key Features:
## - State machine with safe transitions (IDLE → HOVERING → HOLDING → MOVING)
## - Tween-based animations for smooth hover effects and movement
## - Mouse interaction handling with proper event management
## - Z-index management for visual layering during interactions
## - Extensible design with virtual methods for customization
##
## State Transitions:
## - IDLE: Default state, ready for interaction
## - HOVERING: Mouse over with visual feedback (scale, rotation, position)
## - HOLDING: Active drag state following mouse movement
## - MOVING: Programmatic movement ignoring user input
##
## Usage:
## [codeblock]
## class_name MyDraggable
## extends DraggableObject
##
## func _can_start_hovering() -> bool:
## return my_custom_condition
## [/codeblock]
class_name DraggableObject
extends Control
# Enums
## Enumeration of possible interaction states for the draggable object.
enum DraggableState {
IDLE, ## Default state - no interaction
HOVERING, ## Mouse over state - visual feedback
HOLDING, ## Dragging state - follows mouse
MOVING ## Programmatic move state - ignores input
}
## The speed at which the object moves, in pixels per second.
@export var moving_speed: float = CardFrameworkSettings.ANIMATION_MOVE_SPEED
## Whether the object can be interacted with.
@export var can_be_interacted_with: bool = true
## The distance the object hovers when interacted with.
@export var hover_distance: float = CardFrameworkSettings.PHYSICS_HOVER_DISTANCE
## The scale multiplier when hovering.
@export var hover_scale: float = CardFrameworkSettings.ANIMATION_HOVER_SCALE
## The rotation in degrees when hovering.
@export var hover_rotation: float = CardFrameworkSettings.ANIMATION_HOVER_ROTATION
## The duration for hover animations.
@export var hover_duration: float = CardFrameworkSettings.ANIMATION_HOVER_DURATION
# Legacy variables - kept for compatibility but no longer used in state machine
var is_pressed: bool = false
var is_holding: bool = false
var stored_z_index: int:
set(value):
z_index = value
stored_z_index = value
# State Machine
var current_state: DraggableState = DraggableState.IDLE
# Mouse tracking
var is_mouse_inside: bool = false
# Movement state tracking
var is_moving_to_destination: bool = false
var is_returning_to_original: bool = false
# Position and animation tracking
var current_holding_mouse_position: Vector2
var original_position: Vector2
var original_scale: Vector2
var original_hover_rotation: float
var current_hover_position: Vector2 # Track position during hover animation
# Move operation tracking
var target_destination: Vector2 # Target position passed to move() function
var target_rotation: float # Target rotation passed to move() function
var original_destination: Vector2
var original_rotation: float
var destination_degree: float
# Tween objects
var move_tween: Tween
var hover_tween: Tween
# State transition rules
var allowed_transitions = {
DraggableState.IDLE: [DraggableState.HOVERING, DraggableState.HOLDING, DraggableState.MOVING],
DraggableState.HOVERING: [DraggableState.IDLE, DraggableState.HOLDING, DraggableState.MOVING],
DraggableState.HOLDING: [DraggableState.IDLE, DraggableState.MOVING],
DraggableState.MOVING: [DraggableState.IDLE]
}
func _ready() -> void:
mouse_filter = Control.MOUSE_FILTER_STOP
connect("mouse_entered", _on_mouse_enter)
connect("mouse_exited", _on_mouse_exit)
connect("gui_input", _on_gui_input)
original_destination = global_position
original_rotation = rotation
original_position = position
original_scale = scale
original_hover_rotation = rotation
stored_z_index = z_index
## Safely transitions between interaction states using predefined rules.
## Validates transitions and handles state cleanup/initialization automatically.
## @param new_state: Target state to transition to
## @returns: True if transition was successful, false if invalid/blocked
func change_state(new_state: DraggableState) -> bool:
if new_state == current_state:
return true
# Validate transition is allowed by state machine rules
if not new_state in allowed_transitions[current_state]:
return false
# Clean up previous state
_exit_state(current_state)
var old_state = current_state
current_state = new_state
# Enter new state
_enter_state(new_state, old_state)
return true
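# Illustrative sketch (not part of the framework): change_state() returns
# false for disallowed transitions, so callers can branch on the result.
# Per allowed_transitions above, MOVING can only return to IDLE.
#
#     if not change_state(DraggableState.HOLDING):
#         push_warning("cannot grab the object while it is MOVING")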
# Handle state entry
func _enter_state(state: DraggableState, from_state: DraggableState) -> void:
match state:
DraggableState.IDLE:
z_index = stored_z_index
mouse_filter = Control.MOUSE_FILTER_STOP
DraggableState.HOVERING:
# z_index = stored_z_index + CardFrameworkSettings.VISUAL_DRAG_Z_OFFSET
_start_hover_animation()
DraggableState.HOLDING:
# Preserve hover position if transitioning from HOVERING state
if from_state == DraggableState.HOVERING:
_preserve_hover_position()
# For IDLE → HOLDING transitions, current position is maintained
current_holding_mouse_position = get_local_mouse_position()
z_index = stored_z_index + CardFrameworkSettings.VISUAL_DRAG_Z_OFFSET
rotation = 0
DraggableState.MOVING:
# Stop hover animations and ignore input during programmatic movement
if hover_tween and hover_tween.is_valid():
hover_tween.kill()
hover_tween = null
z_index = stored_z_index + CardFrameworkSettings.VISUAL_DRAG_Z_OFFSET
mouse_filter = Control.MOUSE_FILTER_IGNORE
# Handle state exit
func _exit_state(state: DraggableState) -> void:
match state:
DraggableState.HOVERING:
z_index = stored_z_index
_stop_hover_animation()
DraggableState.HOLDING:
z_index = stored_z_index
# Reset visual effects but preserve position for return_card() animation
scale = original_scale
rotation = original_hover_rotation
DraggableState.MOVING:
mouse_filter = Control.MOUSE_FILTER_STOP
func _process(_delta: float) -> void:
match current_state:
DraggableState.HOLDING:
global_position = get_global_mouse_position() - current_holding_mouse_position
func _finish_move() -> void:
# Complete movement processing
is_moving_to_destination = false
rotation = destination_degree
# Update original position and rotation only when not returning to original
# Important: Use original target values from move() instead of global_position
if not is_returning_to_original:
original_destination = target_destination
original_rotation = target_rotation
# Reset return flag
is_returning_to_original = false
# End MOVING state - return to IDLE
change_state(DraggableState.IDLE)
# Call inherited class callback
_on_move_done()
func _on_move_done() -> void:
# This function can be overridden by subclasses to handle when the move is done.
pass
# Start hover animation with tween
func _start_hover_animation() -> void:
# Stop any existing hover animation
if hover_tween and hover_tween.is_valid():
hover_tween.kill()
hover_tween = null
position = original_position # Reset position to original before starting new hover
scale = original_scale
rotation = original_hover_rotation
# Update original position to current position (important for correct return)
original_position = position
original_scale = scale
original_hover_rotation = rotation
# Store current position before animation
current_hover_position = position
# Create new hover tween
hover_tween = create_tween()
hover_tween.set_parallel(true) # Allow multiple properties to animate simultaneously
# Animate position (hover up, and slightly left; the 30 px x-offset is hardcoded)
var target_position = Vector2(position.x - 30, position.y - hover_distance)
hover_tween.tween_property(self, "position", target_position, hover_duration)
# Animate scale
hover_tween.tween_property(self, "scale", original_scale * hover_scale, hover_duration)
# Animate rotation
#hover_tween.tween_property(self, "rotation", deg_to_rad(hover_rotation), hover_duration)
# Update current hover position tracking
hover_tween.tween_method(_update_hover_position, position, target_position, hover_duration)
# Stop hover animation and return to original state
func _stop_hover_animation() -> void:
# Stop any existing hover animation
if hover_tween and hover_tween.is_valid():
hover_tween.kill()
hover_tween = null
# Create new tween to return to original state
hover_tween = create_tween()
hover_tween.set_parallel(true)
# Animate back to original position
hover_tween.tween_property(self, "position", original_position, hover_duration)
# Animate back to original scale
hover_tween.tween_property(self, "scale", original_scale, hover_duration)
# Animate back to original rotation
hover_tween.tween_property(self, "rotation", original_hover_rotation, hover_duration)
# Update current hover position tracking
hover_tween.tween_method(_update_hover_position, position, original_position, hover_duration)
# Track current position during hover animation for smooth HOLDING transition
func _update_hover_position(pos: Vector2) -> void:
current_hover_position = pos
# Preserve current hover position when transitioning to HOLDING
func _preserve_hover_position() -> void:
# Stop hover animation and preserve current position
if hover_tween and hover_tween.is_valid():
hover_tween.kill()
hover_tween = null
# Explicitly set position to current hover position
# This ensures smooth transition from hover animation to holding
position = current_hover_position
## Virtual method to determine if hovering animation can start.
## Override in subclasses to implement custom hovering conditions.
## @returns: True if hovering is allowed, false otherwise
func _can_start_hovering() -> bool:
return true
func _on_mouse_enter() -> void:
is_mouse_inside = true
if can_be_interacted_with and _can_start_hovering():
change_state(DraggableState.HOVERING)
func _on_mouse_exit() -> void:
is_mouse_inside = false
match current_state:
DraggableState.HOVERING:
change_state(DraggableState.IDLE)
func _on_gui_input(event: InputEvent) -> void:
if not can_be_interacted_with:
return
if event is InputEventMouseButton:
_handle_mouse_button(event as InputEventMouseButton)
## Moves the object to target position with optional rotation using smooth animation.
## Automatically transitions to MOVING state and handles animation timing based on distance.
## @param target_destination: Global position to move to
## @param degree: Target rotation in radians
func move(target_destination: Vector2, degree: float) -> void:
# Skip if current position and rotation match target
if global_position == target_destination and rotation == degree:
return
# Force transition to MOVING state (highest priority)
change_state(DraggableState.MOVING)
# Stop existing movement
if move_tween and move_tween.is_valid():
move_tween.kill()
move_tween = null
# Store target position and rotation for original value preservation
self.target_destination = target_destination
self.target_rotation = degree
# Initial setup
rotation = 0
destination_degree = degree
is_moving_to_destination = true
# Smooth Tween-based movement with dynamic duration based on moving_speed
var distance = global_position.distance_to(target_destination)
var duration = distance / moving_speed
move_tween = create_tween()
move_tween.tween_property(self, "global_position", target_destination, duration)
move_tween.tween_callback(_finish_move)
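# Example: slide this object to (400, 300) with a slight clockwise tilt.
# Duration scales with distance, so nearby targets arrive sooner.
# obj.move(Vector2(400, 300), deg_to_rad(5))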
func _handle_mouse_button(mouse_event: InputEventMouseButton) -> void:
if mouse_event.button_index != MOUSE_BUTTON_LEFT:
return
# Ignore all input during MOVING state
if current_state == DraggableState.MOVING:
return
if mouse_event.is_pressed():
_handle_mouse_pressed()
if mouse_event.is_released():
_handle_mouse_released()
## Returns the object to its original position with smooth animation.
func return_to_original() -> void:
is_returning_to_original = true
move(original_destination, original_rotation)
func _handle_mouse_pressed() -> void:
is_pressed = true
match current_state:
DraggableState.HOVERING:
change_state(DraggableState.HOLDING)
DraggableState.IDLE:
if is_mouse_inside and can_be_interacted_with and _can_start_hovering():
change_state(DraggableState.HOLDING)
func _handle_mouse_released() -> void:
is_pressed = false
match current_state:
DraggableState.HOLDING:
change_state(DraggableState.IDLE)

View File

@@ -0,0 +1 @@
uid://bfhrx3h70sor0

View File

@@ -0,0 +1,253 @@
## Interactive drop zone system with sensor partitioning and visual debugging.
##
## DropZone provides sophisticated drag-and-drop target detection with configurable
## sensor areas, partitioning systems, and visual debugging capabilities. It integrates
## with CardContainer to enable precise card placement and reordering operations.
##
## Key Features:
## - Flexible sensor sizing and positioning with dynamic adjustment
## - Vertical/horizontal partitioning for precise drop targeting
## - Visual debugging with colored outlines and partition indicators
## - Mouse detection with global coordinate transformation
## - Accept type filtering for specific draggable object types
##
## Partitioning System:
## - Vertical partitions: Divide sensor into left-right sections for card ordering
## - Horizontal partitions: Divide sensor into up-down sections for layered placement
## - Dynamic outline generation for visual feedback during development
##
## Usage:
## [codeblock]
## var drop_zone = DropZone.new()
## drop_zone.init(container, ["card"])
## drop_zone.set_sensor(Vector2(200, 300), Vector2.ZERO, null, false)
## drop_zone.set_vertical_partitions([100, 200, 300])
## [/codeblock]
class_name DropZone
extends Control
# Dynamic sensor properties with automatic UI synchronization
## Size of the drop sensor area
var sensor_size: Vector2:
set(value):
sensor.size = value
sensor_outline.size = value
## Position offset of the drop sensor relative to DropZone
var sensor_position: Vector2:
set(value):
sensor.position = value
sensor_outline.position = value
## @deprecated: This property existed only for sensor debugging; use sensor_outline_visible instead.
var sensor_texture : Texture:
set(value):
sensor.texture = value
## @deprecated: This property existed only for sensor debugging; use sensor_outline_visible instead.
var sensor_visible := true:
set(value):
sensor.visible = value
## Controls visibility of debugging outlines for sensor and partitions
var sensor_outline_visible := false:
set(value):
sensor_outline.visible = value
for outline in sensor_partition_outlines:
outline.visible = value
# Core drop zone configuration and state
## Array of accepted draggable object types (e.g., ["card", "token"])
var accept_types: Array = []
## Original sensor size for restoration after dynamic changes
var stored_sensor_size: Vector2
## Original sensor position for restoration after dynamic changes
var stored_sensor_position: Vector2
## Parent container that owns this drop zone
var parent: Node
# UI components
## Main sensor control for hit detection (invisible)
var sensor: Control
## Debug outline for visual sensor boundary indication
var sensor_outline: ReferenceRect
## Array of partition outline controls for debugging
var sensor_partition_outlines: Array = []
# Partitioning system for precise drop targeting
## Global vertical lines to divide sensing partitions (left to right direction)
var vertical_partition: Array
## Global horizontal lines to divide sensing partitions (up to down direction)
var horizontal_partition: Array
## Initializes the drop zone with parent reference and accepted drag types.
## Creates sensor and debugging UI components.
## @param _parent: Container that owns this drop zone
## @param accept_types: Array of draggable object types this zone accepts
func init(_parent: Node, accept_types: Array = []):
parent = _parent
self.accept_types = accept_types
# Create invisible sensor for hit detection
if sensor == null:
sensor = TextureRect.new()
sensor.name = "Sensor"
sensor.mouse_filter = Control.MOUSE_FILTER_IGNORE
sensor.z_index = CardFrameworkSettings.VISUAL_SENSOR_Z_INDEX # Behind everything else
add_child(sensor)
# Create debugging outline (initially hidden)
if sensor_outline == null:
sensor_outline = ReferenceRect.new()
sensor_outline.editor_only = false
sensor_outline.name = "SensorOutline"
sensor_outline.mouse_filter = Control.MOUSE_FILTER_IGNORE
sensor_outline.border_color = CardFrameworkSettings.DEBUG_OUTLINE_COLOR
sensor_outline.z_index = CardFrameworkSettings.VISUAL_OUTLINE_Z_INDEX
add_child(sensor_outline)
# Initialize default values
stored_sensor_size = Vector2(0, 0)
stored_sensor_position = Vector2(0, 0)
vertical_partition = []
horizontal_partition = []
## Checks if the mouse cursor is currently within the drop zone sensor area.
## @returns: True if mouse is inside the sensor bounds
func check_mouse_is_in_drop_zone() -> bool:
var mouse_position = get_global_mouse_position()
var result = sensor.get_global_rect().has_point(mouse_position)
return result
## Configures the sensor with size, position, texture, and visibility settings.
## Stores original values for later restoration.
## @param _size: Size of the sensor area
## @param _position: Position offset from DropZone origin
## @param _texture: Optional texture for sensor visualization
## @param _visible: Whether sensor texture is visible (deprecated)
func set_sensor(_size: Vector2, _position: Vector2, _texture: Texture, _visible: bool):
sensor_size = _size
sensor_position = _position
stored_sensor_size = _size
stored_sensor_position = _position
sensor_texture = _texture
sensor_visible = _visible
## Dynamically adjusts sensor size and position without affecting stored values.
## Used for temporary sensor modifications that can be restored later.
## @param _size: New temporary sensor size
## @param _position: New temporary sensor position
func set_sensor_size_flexibly(_size: Vector2, _position: Vector2):
sensor_size = _size
sensor_position = _position
## Restores sensor to its original size and position from stored values.
## Used to undo temporary modifications made by set_sensor_size_flexibly.
func return_sensor_size():
sensor_size = stored_sensor_size
sensor_position = stored_sensor_position
## Adjusts sensor position by adding an offset to the stored position.
## @param offset: Vector2 offset to add to the original stored position
func change_sensor_position_with_offset(offset: Vector2):
sensor_position = stored_sensor_position + offset
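# Typical lifecycle (hypothetical values): configure once, tweak temporarily,
# then restore the stored bounds.
# drop_zone.set_sensor(Vector2(200, 300), Vector2.ZERO, null, false)
# drop_zone.set_sensor_size_flexibly(Vector2(260, 300), Vector2(-30, 0))
# drop_zone.return_sensor_size() # back to the stored 200x300 at (0, 0)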
## Sets vertical partition lines for drop targeting and creates debug outlines.
## Vertical partitions divide the sensor into left-right sections for card ordering.
## @param positions: Array of global X coordinates for partition lines
func set_vertical_partitions(positions: Array):
vertical_partition = positions
# Clear existing partition outlines
for outline in sensor_partition_outlines:
outline.queue_free()
sensor_partition_outlines.clear()
# Create debug outline for each partition
for i in range(vertical_partition.size()):
var outline = ReferenceRect.new()
outline.editor_only = false
outline.name = "VerticalPartition" + str(i)
outline.z_index = CardFrameworkSettings.VISUAL_OUTLINE_Z_INDEX
outline.border_color = CardFrameworkSettings.DEBUG_OUTLINE_COLOR
outline.mouse_filter = Control.MOUSE_FILTER_IGNORE
outline.size = Vector2(1, sensor.size.y) # Vertical line full height
# Convert global partition position to local coordinates
var local_x = vertical_partition[i] - global_position.x
outline.position = Vector2(local_x, sensor.position.y)
outline.visible = sensor_outline.visible
add_child(outline)
sensor_partition_outlines.append(outline)
## Sets horizontal partition lines for drop targeting and creates debug outlines.
## Horizontal partitions divide the sensor into up-down sections for layered placement.
## @param positions: Array of global Y coordinates for partition lines
func set_horizontal_partitions(positions: Array):
horizontal_partition = positions
# Clear existing partition outlines
for outline in sensor_partition_outlines:
outline.queue_free()
sensor_partition_outlines.clear()
for i in range(horizontal_partition.size()):
var outline = ReferenceRect.new()
outline.editor_only = false
outline.name = "HorizontalPartition" + str(i)
outline.z_index = CardFrameworkSettings.VISUAL_OUTLINE_Z_INDEX
outline.border_color = CardFrameworkSettings.DEBUG_OUTLINE_COLOR
outline.mouse_filter = Control.MOUSE_FILTER_IGNORE
outline.size = Vector2(sensor.size.x, 1)
var local_y = horizontal_partition[i] - global_position.y
outline.position = Vector2(sensor.position.x, local_y)
outline.visible = sensor_outline.visible
add_child(outline)
sensor_partition_outlines.append(outline)
## Determines which vertical partition the mouse is currently in.
## Returns the partition index for precise drop targeting.
## @returns: Partition index (0-based) or -1 if outside sensor or no partitions
func get_vertical_layers() -> int:
if not check_mouse_is_in_drop_zone():
return -1
if vertical_partition.is_empty():
return -1
var mouse_position = get_global_mouse_position()
# Count how many partition lines the mouse has crossed
var current_index := 0
for i in range(vertical_partition.size()):
if mouse_position.x >= vertical_partition[i]:
current_index += 1
else:
break
return current_index
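# Worked example: with vertical_partition = [100.0, 200.0, 300.0]
# mouse at x = 50  -> no lines crossed -> index 0
# mouse at x = 250 -> crossed 100 and 200 -> index 2
# mouse at x = 350 -> crossed all three -> index 3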
## Determines which horizontal partition the mouse is currently in.
## @returns: Partition index (0-based) or -1 if outside sensor or no partitions
func get_horizontal_layers() -> int:
if not check_mouse_is_in_drop_zone():
return -1
if horizontal_partition.is_empty():
return -1
var mouse_position = get_global_mouse_position()
var current_index := 0
for i in range(horizontal_partition.size()):
if mouse_position.y >= horizontal_partition[i]:
current_index += 1
else:
break
return current_index

View File

@@ -0,0 +1 @@
uid://dhultt7pav0b

View File

@@ -0,0 +1,14 @@
[gd_scene load_steps=2 format=3 uid="uid://dkmme1pig03ie"]
[ext_resource type="Script" uid="uid://dhultt7pav0b" path="res://addons/card-framework/drop_zone.gd" id="1_w6usu"]
[node name="DropZone" type="Control"]
layout_mode = 3
anchors_preset = 15
anchor_right = 1.0
anchor_bottom = 1.0
grow_horizontal = 2
grow_vertical = 2
mouse_filter = 2
mouse_force_pass_scroll_events = false
script = ExtResource("1_w6usu")

View File

@@ -0,0 +1,291 @@
## A fan-shaped card container that arranges cards in an arc formation.
##
## Hand provides sophisticated card layout using mathematical curves to create
## natural-looking card arrangements. Cards are positioned in a fan pattern
## with configurable spread, rotation, and vertical displacement.
##
## Key Features:
## - Fan-shaped card arrangement with customizable curves
## - Smooth card reordering with optional swap-only mode
## - Dynamic drop zone sizing to match hand spread
## - Configurable card limits and hover distances
## - Mathematical positioning using Curve resources
##
## Curve Configuration:
## - hand_rotation_curve: Controls card rotation (linear -X to +X recommended)
## - hand_vertical_curve: Controls vertical offset (3-point ease 0-X-0 recommended)
##
## Usage:
## [codeblock]
## @onready var hand = $Hand
## hand.max_hand_size = 7
## hand.max_hand_spread = 600
## hand.card_face_up = true
## [/codeblock]
class_name Hand
extends CardContainer
@export_group("hand_meta_info")
## maximum number of cards that can be held.
@export var max_hand_size := CardFrameworkSettings.LAYOUT_MAX_HAND_SIZE
## maximum spread of the hand.
@export var max_hand_spread := CardFrameworkSettings.LAYOUT_MAX_HAND_SPREAD
## whether the card is face up.
@export var card_face_up := true
## distance the card hovers when interacted with.
@export var card_hover_distance := CardFrameworkSettings.PHYSICS_CARD_HOVER_DISTANCE
@export_group("hand_shape")
## rotation curve of the hand.
## This works best as a 2-point linear rise from -X to +X.
@export var hand_rotation_curve : Curve
## vertical curve of the hand.
## This works best as a 3-point ease in/out from 0 to X to 0
@export var hand_vertical_curve : Curve
@export_group("drop_zone")
## Determines whether the drop zone size follows the hand size. (requires enable drop zone true)
@export var align_drop_zone_size_with_current_hand_size := true
## If true, only swap the positions of two cards when reordering (a <-> b), otherwise shift the range (default behavior).
@export var swap_only_on_reorder := false
var vertical_partitions_from_outside = []
var vertical_partitions_from_inside = []
func _ready() -> void:
super._ready()
# Assumes the grandparent node emits a `dealt` signal; adjust this path to match your scene tree.
$"../..".dealt.connect(sort_hand)
## Returns a random selection of cards from this hand.
## @param n: Number of cards to select
## @returns: Array of randomly selected cards
func get_random_cards(n: int) -> Array:
var deck = _held_cards.duplicate()
deck.shuffle()
if n > _held_cards.size():
n = _held_cards.size()
return deck.slice(0, n)
func sort_hand() -> void:
var sort_cards = _held_cards.duplicate()
sort_cards.sort_custom(compare_cards)
# Move each card to its sorted index, one at a time
for n in range(sort_cards.size()):
move_cards([sort_cards[n]], n)
func compare_cards(a, b) -> bool:
var val_1 = int(a.card_info["value"])
var val_2 = int(b.card_info["value"])
var su_1 = a.card_info["suit"]
var su_2 = b.card_info["suit"]
# Suit offsets group the hand as Hearts, Spades, Diamonds, then Clubs.
# Note: an offset step of 10 is smaller than the 13-card value range, so
# suits can interleave near the boundaries; a step of at least 13 would
# keep them fully separated.
match su_1:
"Diamond":
val_1 -= 10
"Spade":
val_1 -= 20
"Heart":
val_1 -= 30
match su_2:
"Diamond":
val_2 -= 10
"Spade":
val_2 -= 20
"Heart":
val_2 -= 30
return val_1 < val_2
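# Worked example: for an Ace (value 1) the adjusted sort keys are
# Heart 1 - 30 = -29, Spade 1 - 20 = -19, Diamond 1 - 10 = -9, Club 1,
# so Hearts sort first and Clubs last.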
func _card_can_be_added(_cards: Array) -> bool:
var is_all_cards_contained = true
for i in range(_cards.size()):
var card = _cards[i]
if !_held_cards.has(card):
is_all_cards_contained = false
if is_all_cards_contained:
return true
var card_size = _cards.size()
return _held_cards.size() + card_size <= max_hand_size
func _update_target_z_index() -> void:
for i in range(_held_cards.size()):
var card = _held_cards[i]
card.stored_z_index = i
## Calculates target positions for all cards using mathematical curves.
## Implements sophisticated fan-shaped arrangement with rotation and vertical displacement.
func _update_target_positions() -> void:
var x_min: float
var x_max: float
var y_min: float
var y_max: float
var card_size = card_manager.card_size
var _w = card_size.x
var _h = card_size.y
vertical_partitions_from_outside.clear()
# Calculate position and rotation for each card in the fan arrangement
for i in range(_held_cards.size()):
var card = _held_cards[i]
# Calculate normalized position ratio (0.0 to 1.0) for curve sampling
var hand_ratio = 0.5 # Single card centered
if _held_cards.size() > 1:
hand_ratio = float(i) / float(_held_cards.size() - 1)
# Calculate base horizontal position with even spacing
var target_pos = global_position
@warning_ignore("integer_division")
var card_spacing = max_hand_spread / (_held_cards.size() + 1)
target_pos.x += (i + 1) * card_spacing - max_hand_spread / 2.0
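# Worked example: with max_hand_spread = 600 and 5 cards,
# card_spacing = 600 / 6 = 100, so cards land at x offsets
# -200, -100, 0, 100, 200 around the hand's center.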
# Apply vertical curve displacement for fan shape
if hand_vertical_curve:
target_pos.y -= hand_vertical_curve.sample(hand_ratio)
# Apply rotation curve for realistic card fanning
var target_rotation = 0
if hand_rotation_curve:
target_rotation = deg_to_rad(hand_rotation_curve.sample(hand_ratio))
# Calculate rotated card bounding box for drop zone partitioning
# This complex math determines the actual screen space occupied by each rotated card
var _x = target_pos.x
var _y = target_pos.y
# Calculate angles to card corners after rotation
var _t1 = atan2(_h, _w) + target_rotation # bottom-right corner
var _t2 = atan2(_h, -_w) + target_rotation # bottom-left corner
var _t3 = _t1 + PI # top-left corner (opposite _t1; rotation already applied in _t1)
var _t4 = _t2 + PI # top-right corner (opposite _t2)
# Card center and radius for corner calculation
var _c = Vector2(_x + _w / 2, _y + _h / 2) # card center
var _r = sqrt(pow(_w / 2, 2.0) + pow(_h / 2, 2.0)) # diagonal radius
# Calculate actual corner positions after rotation
var _p1 = Vector2(_r * cos(_t1), _r * sin(_t1)) + _c # right bottom
var _p2 = Vector2(_r * cos(_t2), _r * sin(_t2)) + _c # left bottom
var _p3 = Vector2(_r * cos(_t3), _r * sin(_t3)) + _c # left top
var _p4 = Vector2(_r * cos(_t4), _r * sin(_t4)) + _c # right top
# Find bounding box of rotated card
var current_x_min = min(_p1.x, _p2.x, _p3.x, _p4.x)
var current_x_max = max(_p1.x, _p2.x, _p3.x, _p4.x)
var current_y_min = min(_p1.y, _p2.y, _p3.y, _p4.y)
var current_y_max = max(_p1.y, _p2.y, _p3.y, _p4.y)
var current_x_mid = (current_x_min + current_x_max) / 2
vertical_partitions_from_outside.append(current_x_mid)
if i == 0:
x_min = current_x_min
x_max = current_x_max
y_min = current_y_min
y_max = current_y_max
else:
x_min = minf(x_min, current_x_min)
x_max = maxf(x_max, current_x_max)
y_min = minf(y_min, current_y_min)
y_max = maxf(y_max, current_y_max)
card.move(target_pos, target_rotation)
card.show_front = card_face_up
card.can_be_interacted_with = true
# Calculate midpoints between consecutive values in vertical_partitions_from_outside
vertical_partitions_from_inside.clear()
if vertical_partitions_from_outside.size() > 1:
for j in range(vertical_partitions_from_outside.size() - 1):
var mid = (vertical_partitions_from_outside[j] + vertical_partitions_from_outside[j + 1]) / 2.0
vertical_partitions_from_inside.append(mid)
if align_drop_zone_size_with_current_hand_size:
if _held_cards.size() == 0:
drop_zone.return_sensor_size()
else:
var _size = Vector2(x_max - x_min, y_max - y_min)
var _position = Vector2(x_min, y_min) - position
drop_zone.set_sensor_size_flexibly(_size, _position)
drop_zone.set_vertical_partitions(vertical_partitions_from_outside)
func move_cards(cards: Array, index: int = -1, with_history: bool = true) -> bool:
# Handle single card reordering within same Hand container
if cards.size() == 1 and _held_cards.has(cards[0]) and index >= 0 and index < _held_cards.size():
var current_index = _held_cards.find(cards[0])
# Swap-only mode: exchange two cards directly
if swap_only_on_reorder:
swap_card(cards[0], index)
_restore_mouse_interaction(cards)
return true
# Same position optimization
if current_index == index:
# Same card, same position - ensure consistent state
update_card_ui()
_restore_mouse_interaction(cards)
return true
# Different position: use efficient internal reordering
_reorder_card_in_hand(cards[0], current_index, index, with_history)
_restore_mouse_interaction(cards)
return true
# Fall back to parent implementation for other cases
return super.move_cards(cards, index, with_history)
func swap_card(card: Card, index: int) -> void:
var current_index = _held_cards.find(card)
if current_index == index:
return
var temp = _held_cards[current_index]
_held_cards[current_index] = _held_cards[index]
_held_cards[index] = temp
update_card_ui()
## Restore mouse interaction for cards after drag & drop completion.
func _restore_mouse_interaction(cards: Array) -> void:
for card in cards:
card.mouse_filter = Control.MOUSE_FILTER_STOP
## Efficiently reorder card within Hand without intermediate UI updates.
## Prevents position calculation errors during same-container moves.
func _reorder_card_in_hand(card: Card, from_index: int, to_index: int, with_history: bool) -> void:
# Add to history if needed (before making changes)
if with_history:
card_manager._add_history(self, [card])
# Efficient array reordering without intermediate states
_held_cards.remove_at(from_index)
_held_cards.insert(to_index, card)
# Single UI update after array change
update_card_ui()
func hold_card(card: Card) -> void:
if _held_cards.has(card):
drop_zone.set_vertical_partitions(vertical_partitions_from_inside)
super.hold_card(card)

View File

@@ -0,0 +1 @@
uid://dj46jo3lfbclo

View File

@@ -0,0 +1,21 @@
[gd_scene load_steps=4 format=3 uid="uid://bkpjlq7ggckg6"]
[ext_resource type="Script" uid="uid://dj46jo3lfbclo" path="res://addons/card-framework/hand.gd" id="1_hrxjc"]
[sub_resource type="Curve" id="Curve_lsli3"]
_limits = [-15.0, 15.0, 0.0, 1.0]
_data = [Vector2(0, -15), 0.0, 30.0, 0, 1, Vector2(1, 15), 30.0, 0.0, 1, 0]
point_count = 2
[sub_resource type="Curve" id="Curve_8dbo5"]
_limits = [0.0, 50.0, 0.0, 1.0]
_data = [Vector2(0, 0), 0.0, 0.0, 0, 0, Vector2(0.5, 40), 0.0, 0.0, 0, 0, Vector2(1, 0), 0.0, 0.0, 0, 0]
point_count = 3
[node name="Hand" type="Control"]
layout_mode = 3
anchors_preset = 0
mouse_filter = 1
script = ExtResource("1_hrxjc")
hand_rotation_curve = SubResource("Curve_lsli3")
hand_vertical_curve = SubResource("Curve_8dbo5")

View File

@@ -0,0 +1,64 @@
## History tracking element for card movement operations with precise undo support.
##
## HistoryElement stores complete state information for card movements to enable
## accurate undo/redo operations. It tracks source and destination containers,
## moved cards, and their original indices for precise state restoration.
##
## Key Features:
## - Complete movement state capture for reliable undo operations
## - Precise index tracking to restore original card positions
## - Support for multi-card movement operations
## - Detailed string representation for debugging and logging
##
## Used By:
## - CardManager for history management and undo operations
## - CardContainer.undo() for precise card position restoration
##
## Index Precision:
## The from_indices array stores the exact original positions of cards in their
## source container. This enables precise restoration even when multiple cards
## are moved simultaneously or containers have been modified since the operation.
##
## Usage:
## [codeblock]
## var history = HistoryElement.new()
## history.from = source_container
## history.to = target_container
## history.cards = [card1, card2]
## history.from_indices = [0, 2] # Original positions in source
## [/codeblock]
class_name HistoryElement
extends Object
# Movement tracking data
## Source container where cards originated (null for newly created cards)
var from: CardContainer
## Destination container where cards were moved
var to: CardContainer
## Array of Card instances that were moved in this operation
var cards: Array
## Original indices of cards in the source container for precise undo restoration
var from_indices: Array
## Generates a detailed string representation of the history element for debugging.
## Includes container information, card details, and original indices.
## @returns: Formatted string with complete movement information
func get_string() -> String:
var from_str = from.get_string() if from != null else "null"
var to_str = to.get_string() if to != null else "null"
# Build comma-separated card list representation
var card_strings: PackedStringArray = []
for c in cards:
card_strings.append(c.get_string())
var cards_str = ", ".join(card_strings)
# Format index array for display
var indices_str = str(from_indices) if not from_indices.is_empty() else "[]"
return "from: [%s], to: [%s], cards: [%s], indices: %s" % [from_str, to_str, cards_str, indices_str]

View File

@@ -0,0 +1 @@
uid://b4ykigioo87gs

Binary file not shown.


View File

@@ -0,0 +1,40 @@
[remap]
importer="texture"
type="CompressedTexture2D"
uid="uid://bc3d1x13hxb5j"
path="res://.godot/imported/icon.png-817a7fa694fbd595037553fbc05904d8.ctex"
metadata={
"vram_texture": false
}
[deps]
source_file="res://addons/card-framework/icon.png"
dest_files=["res://.godot/imported/icon.png-817a7fa694fbd595037553fbc05904d8.ctex"]
[params]
compress/mode=0
compress/high_quality=false
compress/lossy_quality=0.7
compress/uastc_level=0
compress/rdo_quality_loss=0.0
compress/hdr_compression=1
compress/normal_map=0
compress/channel_pack=0
mipmaps/generate=false
mipmaps/limit=-1
roughness/mode=0
roughness/src_normal=""
process/channel_remap/red=0
process/channel_remap/green=1
process/channel_remap/blue=2
process/channel_remap/alpha=3
process/fix_alpha_border=true
process/premult_alpha=false
process/normal_map_invert_y=false
process/hdr_as_srgb=false
process/hdr_clamp_exposure=false
process/size_limit=0
detect_3d/compress_to=1

View File

@@ -0,0 +1,217 @@
@tool
## JSON-based card factory implementation with asset management and caching.
##
## JsonCardFactory extends CardFactory to provide JSON-based card creation with
## sophisticated asset loading, data caching, and error handling. It manages
## card definitions stored as JSON files and automatically loads corresponding
## image assets from specified directories.
##
## Key Features:
## - JSON-based card data definition with flexible schema
## - Automatic asset loading and texture management
## - Performance-optimized data caching for rapid card creation
## - Comprehensive error handling with detailed logging
## - Directory scanning for bulk card data preloading
## - Configurable asset and data directory paths
##
## File Structure Requirements:
## [codeblock]
## project/
## ├── card_assets/ # card_asset_dir
## │ ├── ace_spades.png
## │ └── king_hearts.png
## ├── card_data/ # card_info_dir
## │ ├── ace_spades.json # Matches asset filename
## │ └── king_hearts.json
## [/codeblock]
##
## JSON Schema Example:
## [codeblock]
## {
## "name": "ace_spades",
## "front_image": "ace_spades.png",
## "suit": "spades",
## "value": "ace"
## }
## [/codeblock]
class_name JsonCardFactory
extends CardFactory
@export_group("card_scenes")
## Base card scene to instantiate for each card (must inherit from Card class)
@export var default_card_scene: PackedScene
@export_group("asset_paths")
## Directory path containing card image assets (PNG, JPG, etc.)
@export var card_asset_dir: String
## Directory path containing card information JSON files
@export var card_info_dir: String
@export_group("default_textures")
## Common back face texture used for all cards when face-down
@export var back_image: Texture2D
## Validates configuration and default card scene on initialization.
## Ensures default_card_scene references a valid Card-inherited node.
func _ready() -> void:
if default_card_scene == null:
push_error("default_card_scene is not assigned!")
return
# Validate that default_card_scene produces Card instances
var temp_instance = default_card_scene.instantiate()
if not (temp_instance is Card):
push_error("Invalid node type! default_card_scene must reference a Card.")
default_card_scene = null
temp_instance.queue_free()
## Creates a new card instance with JSON data and adds it to the target container.
## Uses cached data if available, otherwise loads from JSON and asset files.
## @param card_name: Identifier matching JSON filename (without .json extension)
## @param target: CardContainer to receive the new card
## @returns: Created Card instance or null if creation failed
func create_card(card_name: String, target: CardContainer) -> Card:
# Use cached data for optimal performance
if preloaded_cards.has(card_name):
var card_info = preloaded_cards[card_name]["info"]
var front_image = preloaded_cards[card_name]["texture"]
return _create_card_node(card_info.name, front_image, target, card_info)
else:
# Load card data on-demand (slower but supports dynamic loading)
var card_info = _load_card_info(card_name)
if card_info.is_empty():
push_error("Card info not found for card: %s" % card_name)
return null
# Validate required JSON fields
if not card_info.has("front_image"):
push_error("Card info does not contain 'front_image' key for card: %s" % card_name)
return null
# Load corresponding image asset
var front_image_path = card_asset_dir + "/" + card_info["front_image"]
var front_image = _load_image(front_image_path)
if front_image == null:
push_error("Card image not found: %s" % front_image_path)
return null
return _create_card_node(card_info.name, front_image, target, card_info)
## Scans card info directory and preloads all JSON data and textures into cache.
## Significantly improves card creation performance by eliminating file I/O during gameplay.
## Should be called during game initialization or loading screens.
func preload_card_data() -> void:
var dir = DirAccess.open(card_info_dir)
if dir == null:
push_error("Failed to open directory: %s" % card_info_dir)
return
# Scan directory for all JSON files
dir.list_dir_begin()
var file_name = dir.get_next()
while file_name != "":
# Skip non-JSON files
if !file_name.ends_with(".json"):
file_name = dir.get_next()
continue
# Extract card name from filename (without .json extension)
var card_name = file_name.get_basename()
var card_info = _load_card_info(card_name)
if card_info.is_empty():
push_error("Failed to load card info for %s" % card_name)
file_name = dir.get_next()
continue
# Load corresponding texture asset
var front_image_path = card_asset_dir + "/" + card_info.get("front_image", "")
var front_image_texture = _load_image(front_image_path)
if front_image_texture == null:
push_error("Failed to load card image: %s" % front_image_path)
file_name = dir.get_next()
continue
# Cache both JSON data and texture for fast access
preloaded_cards[card_name] = {
"info": card_info,
"texture": front_image_texture
}
print("Preloaded card data:", preloaded_cards[card_name])
file_name = dir.get_next()
## Loads and parses JSON card data from file system.
## @param card_name: Card identifier (filename without .json extension)
## @returns: Dictionary containing card data or empty dict if loading failed
func _load_card_info(card_name: String) -> Dictionary:
var json_path = card_info_dir + "/" + card_name + ".json"
if !FileAccess.file_exists(json_path):
return {}
# Read JSON file content
var file = FileAccess.open(json_path, FileAccess.READ)
var json_string = file.get_as_text()
file.close()
# Parse JSON with error handling
var json = JSON.new()
var error = json.parse(json_string)
if error != OK:
push_error("Failed to parse JSON: %s" % json_path)
return {}
return json.data
## Loads image texture from file path with error handling.
## @param image_path: Full path to image file
## @returns: Loaded Texture2D or null if loading failed
func _load_image(image_path: String) -> Texture2D:
var texture = load(image_path) as Texture2D
if texture == null:
push_error("Failed to load image resource: %s" % image_path)
return null
return texture
## Creates and configures a card node with textures and adds it to target container.
## @param card_name: Card identifier for naming and reference
## @param front_image: Texture for card front face
## @param target: CardContainer to receive the card
## @param card_info: Dictionary of card data from JSON
## @returns: Configured Card instance or null if addition failed
func _create_card_node(card_name: String, front_image: Texture2D, target: CardContainer, card_info: Dictionary) -> Card:
var card = _generate_card(card_info)
# Validate container can accept this card
if !target._card_can_be_added([card]):
print("Card cannot be added: %s" % card_name)
card.queue_free()
return null
# Configure card properties
card.card_info = card_info
card.card_size = card_size
# Add to scene tree and container
var cards_node = target.get_node("Cards")
cards_node.add_child(card)
target.add_card(card)
# Set card identity and textures
card.card_name = card_name
card.set_faces(front_image, back_image)
return card
## Instantiates a new card from the default card scene.
## @param _card_info: Card data dictionary (reserved for future customization)
## @returns: New Card instance or null if scene is invalid
func _generate_card(_card_info: Dictionary) -> Card:
if default_card_scene == null:
push_error("default_card_scene is not assigned!")
return null
return default_card_scene.instantiate()
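# Example usage (hypothetical caller): preload once during a loading screen,
# then create cards by JSON name on demand.
# factory.preload_card_data()
# var card = factory.create_card("ace_spades", hand_container)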

View File

@@ -0,0 +1 @@
uid://8n36yadkvxai

View File

@@ -0,0 +1,141 @@
## A stacked card container with directional positioning and interaction controls.
##
## Pile provides a traditional card stack implementation where cards are arranged
## in a specific direction with configurable spacing. It supports various interaction
## modes from full movement to top-card-only access, making it suitable for deck
## implementations, foundation piles, and discard stacks.
##
## Key Features:
## - Directional stacking (up, down, left, right)
## - Configurable stack display limits and spacing
## - Flexible interaction controls (all cards, top only, none)
## - Dynamic drop zone positioning following top card
## - Visual depth management with z-index layering
##
## Common Use Cases:
## - Foundation piles in Solitaire games
## - Draw/discard decks with face-down cards
## - Tableau piles with partial card access
##
## Usage:
## [codeblock]
## @onready var deck = $Deck
## deck.layout = Pile.PileDirection.DOWN
## deck.card_face_up = false
## deck.restrict_to_top_card = true
## [/codeblock]
class_name Pile
extends CardContainer
# Enums
## Defines the stacking direction for cards in the pile.
enum PileDirection {
UP, ## Cards stack upward (negative Y direction)
DOWN, ## Cards stack downward (positive Y direction)
LEFT, ## Cards stack leftward (negative X direction)
RIGHT ## Cards stack rightward (positive X direction)
}
@export_group("pile_layout")
## Distance between each card in the stack display
@export var stack_display_gap := CardFrameworkSettings.LAYOUT_STACK_GAP
## Maximum number of cards to visually display in the pile
## Cards beyond this limit will be hidden under the visible stack
@export var max_stack_display := CardFrameworkSettings.LAYOUT_MAX_STACK_DISPLAY
## Whether cards in the pile show their front face (true) or back face (false)
@export var card_face_up := true
## Direction in which cards are stacked from the pile's base position
@export var layout := PileDirection.UP
@export_group("pile_interaction")
## Whether any card in the pile can be moved via drag-and-drop
@export var allow_card_movement: bool = true
## Restricts movement to only the top card (requires allow_card_movement = true)
@export var restrict_to_top_card: bool = true
## Whether drop zone follows the top card position (requires allow_card_movement = true)
@export var align_drop_zone_with_top_card := true
## Returns the top n cards from the pile without removing them.
## Cards are returned in top-to-bottom order (most recent first).
## @param n: Number of cards to retrieve from the top
## @returns: Array of cards from the top of the pile (limited by available cards)
func get_top_cards(n: int) -> Array:
	var arr_size = _held_cards.size()
	var count = mini(n, arr_size)
	var result = []
	for i in range(count):
		result.append(_held_cards[arr_size - 1 - i])
	return result
## Updates z-index values for all cards to maintain proper layering.
## Pressed cards receive elevated z-index to appear above the pile.
func _update_target_z_index() -> void:
for i in range(_held_cards.size()):
var card = _held_cards[i]
if card.is_pressed:
card.stored_z_index = CardFrameworkSettings.VISUAL_PILE_Z_INDEX + i
else:
card.stored_z_index = i
## Updates visual positions and interaction states for all cards in the pile.
## Positions cards according to layout direction and applies interaction restrictions.
func _update_target_positions() -> void:
# Calculate top card position for drop zone alignment
	var last_index = maxi(_held_cards.size() - 1, 0)
var last_offset = _calculate_offset(last_index)
# Align drop zone with top card if enabled
if enable_drop_zone and align_drop_zone_with_top_card:
drop_zone.change_sensor_position_with_offset(last_offset)
# Position each card and set interaction state
for i in range(_held_cards.size()):
var card = _held_cards[i]
var offset = _calculate_offset(i)
var target_pos = position + offset
# Set card appearance and position
card.show_front = card_face_up
card.move(target_pos, 0)
# Apply interaction restrictions
		if not allow_card_movement:
			card.can_be_interacted_with = false
		elif restrict_to_top_card:
			# Only the top card remains draggable
			card.can_be_interacted_with = (i == _held_cards.size() - 1)
		else:
			# Explicitly re-enable so cards do not keep a stale restricted flag
			card.can_be_interacted_with = true
## Calculates the visual offset for a card at the given index in the stack.
## Respects max_stack_display limit to prevent excessive visual spreading.
## @param index: Position of the card in the stack (0 = bottom, higher = top)
## @returns: Vector2 offset from the pile's base position
func _calculate_offset(index: int) -> Vector2:
# Clamp to maximum display limit to prevent visual overflow
var actual_index = min(index, max_stack_display - 1)
var offset_value = actual_index * stack_display_gap
var offset = Vector2()
# Apply directional offset based on pile layout
match layout:
PileDirection.UP:
offset.y -= offset_value # Stack upward (negative Y)
PileDirection.DOWN:
offset.y += offset_value # Stack downward (positive Y)
PileDirection.RIGHT:
offset.x += offset_value # Stack rightward (positive X)
PileDirection.LEFT:
offset.x -= offset_value # Stack leftward (negative X)
return offset
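As a usage sketch for the exported properties and `get_top_cards` above (hedged: the `$DrawPile` node path and attaching this to a parent scene script are illustrative assumptions; only the `Pile` members shown in this file are taken from the source):

```gdscript
## Configure a face-down draw deck and peek at the top three cards.
@onready var draw_pile: Pile = $DrawPile  # hypothetical node path

func _ready() -> void:
	draw_pile.layout = Pile.PileDirection.DOWN
	draw_pile.card_face_up = false
	draw_pile.allow_card_movement = true
	draw_pile.restrict_to_top_card = true  # only the top card is draggable

	# Returned top-to-bottom, clamped to the number of cards actually held.
	var top_three := draw_pile.get_top_cards(3)
	for card in top_three:
		print(card.card_name)
```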


@@ -0,0 +1 @@
uid://6ams8uvg43gu


@@ -0,0 +1,9 @@
[gd_scene load_steps=2 format=3 uid="uid://dk6rb7lhv1ef6"]
[ext_resource type="Script" uid="uid://6ams8uvg43gu" path="res://addons/card-framework/pile.gd" id="1_34nb1"]
[node name="Pile" type="Control"]
layout_mode = 3
anchors_preset = 0
mouse_filter = 1
script = ExtResource("1_34nb1")

Binary file not shown (after: 133 KiB).


@@ -0,0 +1,40 @@
[remap]
importer="texture"
type="CompressedTexture2D"
uid="uid://ljeealcpkb5y"
path="res://.godot/imported/example1.png-d47c260f2e5b0b52536888b18b0729e1.ctex"
metadata={
"vram_texture": false
}
[deps]
source_file="res://addons/card-framework/screenshots/example1.png"
dest_files=["res://.godot/imported/example1.png-d47c260f2e5b0b52536888b18b0729e1.ctex"]
[params]
compress/mode=0
compress/high_quality=false
compress/lossy_quality=0.7
compress/uastc_level=0
compress/rdo_quality_loss=0.0
compress/hdr_compression=1
compress/normal_map=0
compress/channel_pack=0
mipmaps/generate=false
mipmaps/limit=-1
roughness/mode=0
roughness/src_normal=""
process/channel_remap/red=0
process/channel_remap/green=1
process/channel_remap/blue=2
process/channel_remap/alpha=3
process/fix_alpha_border=true
process/premult_alpha=false
process/normal_map_invert_y=false
process/hdr_as_srgb=false
process/hdr_clamp_exposure=false
process/size_limit=0
detect_3d/compress_to=1

Binary file not shown (after: 204 KiB).


@@ -0,0 +1,40 @@
[remap]
importer="texture"
type="CompressedTexture2D"
uid="uid://dsjxecfihye2x"
path="res://.godot/imported/freecell.png-1692aa5a544f98e15b106d383296ff76.ctex"
metadata={
"vram_texture": false
}
[deps]
source_file="res://addons/card-framework/screenshots/freecell.png"
dest_files=["res://.godot/imported/freecell.png-1692aa5a544f98e15b106d383296ff76.ctex"]
[params]
compress/mode=0
compress/high_quality=false
compress/lossy_quality=0.7
compress/uastc_level=0
compress/rdo_quality_loss=0.0
compress/hdr_compression=1
compress/normal_map=0
compress/channel_pack=0
mipmaps/generate=false
mipmaps/limit=-1
roughness/mode=0
roughness/src_normal=""
process/channel_remap/red=0
process/channel_remap/green=1
process/channel_remap/blue=2
process/channel_remap/alpha=3
process/fix_alpha_border=true
process/premult_alpha=false
process/normal_map_invert_y=false
process/hdr_as_srgb=false
process/hdr_clamp_exposure=false
process/size_limit=0
detect_3d/compress_to=1

Some files were not shown because too many files have changed in this diff.