askill / research

This skill should be used when researching best practices, evaluating technologies, comparing approaches, or when research, evaluation, or comparison is mentioned.

2 stars · 1.2k downloads · Updated 3 weeks ago

SKILL.md
Research

Systematic investigation → evidence-based analysis → authoritative recommendations.

Steps

  1. Define scope and evaluation criteria
  2. Discover sources using MCP tools (context7, octocode, firecrawl)
  3. Gather information with multi-source approach
  4. Load the report-findings skill for synthesis
  5. Compile report with confidence levels and citations

<when_to_use>

  • Technology evaluation and comparison
  • Documentation discovery and troubleshooting
  • Best practices and industry standards research
  • Implementation guidance with authoritative sources

NOT for: quick lookups, well-known patterns, or time-critical debugging that cannot wait for an investigation stage

</when_to_use>

Load the maintain-tasks skill for stage tracking. Stages advance only, never regress.

| Stage | Trigger | activeForm |
|-------|---------|------------|
| Analyze Request | Session start | "Analyzing research request" |
| Discover Sources | Criteria defined | "Discovering sources" |
| Gather Information | Sources identified | "Gathering information" |
| Synthesize Findings | Information gathered | "Synthesizing findings" |
| Compile Report | Synthesis complete | "Compiling report" |

Workflow:

  • Start: Create "Analyze Request" as in_progress
  • Transition: Mark current completed, add next in_progress
  • Simple queries: Skip directly to "Gather Information" if unambiguous
  • Gaps during synthesis: Add new "Gather Information" task
  • Early termination: Skip to "Compile Report" with caveats
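The advance-only stage tracking described above can be sketched as a small state machine. This is a minimal illustration, not the actual maintain-tasks tooling; the `StageTracker` class and its methods are hypothetical.

```python
# Hypothetical sketch of advance-only stage tracking (not the real
# maintain-tasks skill API). Stage names come from the table above.

STAGES = [
    "Analyze Request",
    "Discover Sources",
    "Gather Information",
    "Synthesize Findings",
    "Compile Report",
]

class StageTracker:
    """Keeps exactly one stage in_progress; stages advance only, never regress."""

    def __init__(self):
        self.index = 0          # session starts at "Analyze Request"
        self.completed = []

    @property
    def current(self):
        return STAGES[self.index]

    def advance(self, to=None):
        """Mark the current stage completed and move forward (skips allowed)."""
        target = STAGES.index(to) if to is not None else self.index + 1
        if target <= self.index:
            raise ValueError("stages advance only, never regress")
        self.completed.append(self.current)
        self.index = target

tracker = StageTracker()
tracker.advance(to="Gather Information")   # simple query: skip Discover Sources
tracker.advance()                          # next stage: Synthesize Findings
```

Skipping ahead (simple queries, early termination) is modeled as a forward jump; any attempt to move backward raises, which mirrors the "advance only, never regress" rule.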

Five-stage systematic approach:

1. Question Stage — Define scope

  • Decision to be made?
  • Evaluation parameters? (performance, maintainability, security, adoption)
  • Constraints? (timeline, expertise, infrastructure)

2. Discovery Stage — Multi-source retrieval

| Use Case | Primary | Secondary | Tertiary |
|----------|---------|-----------|----------|
| Official docs | context7 | octocode | firecrawl |
| Troubleshooting | octocode issues | firecrawl community | context7 guides |
| Code examples | octocode repos | firecrawl tutorials | context7 examples |
| Technology eval | Parallel all | Cross-reference | Validate |

3. Evaluation Stage — Analyze against criteria

| Criterion | Metrics |
|-----------|---------|
| Performance | Benchmarks, latency, throughput, memory |
| Maintainability | Code complexity, docs quality, community activity |
| Security | CVEs, audits, compliance |
| Adoption | Downloads, production usage, industry patterns |

4. Comparison Stage — Systematic tradeoff analysis

For each option: Strengths → Weaknesses → Best fit → Deal breakers

5. Recommendation Stage — Clear guidance with rationale

Primary recommendation → Alternatives → Implementation steps → Limitations
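The evaluation and comparison stages can be sketched as a weighted scoring matrix. The weights, scores, and the security threshold below are illustrative assumptions, not values prescribed by this skill.

```python
# Hypothetical worked example of the comparison stage: score each option
# against the evaluation criteria, weight by decision priorities, and
# surface deal breakers separately. All numbers are illustrative.

WEIGHTS = {"performance": 0.3, "maintainability": 0.3,
           "security": 0.2, "adoption": 0.2}

options = {
    "Option A": {"performance": 4, "maintainability": 5, "security": 4, "adoption": 3},
    "Option B": {"performance": 5, "maintainability": 3, "security": 2, "adoption": 5},
}

def weighted_score(scores):
    """Combine per-criterion scores (1-5) into one weighted total."""
    return sum(WEIGHTS[c] * s for c, s in scores.items())

ranked = sorted(options, key=lambda o: weighted_score(options[o]), reverse=True)

# Deal breakers are checked independently of the total: a high overall
# score must not hide a disqualifying weakness (here, security < 3).
deal_breakers = [name for name, s in options.items() if s["security"] < 3]
```

The point of the separate `deal_breakers` check is that tradeoff analysis is not purely additive: an otherwise strong option can still be disqualified by a single criterion.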

Three MCP servers for multi-source research:

| Tool | Best For | Key Functions |
|------|----------|---------------|
| context7 | Official docs, API refs | resolve-library-id, get-library-docs |
| octocode | Code examples, issues | packageSearch, githubSearchCode, githubSearchIssues |
| firecrawl | Tutorials, benchmarks | search, scrape, map |

Execution patterns:

  • Parallel: Run independent queries simultaneously for speed
  • Fallback: context7 → octocode → firecrawl if primary fails
  • Progressive: Start broad, narrow based on findings
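The fallback pattern can be sketched as a priority chain that returns the first non-empty result. The lambda sources below are stand-ins for the real MCP calls (context7, octocode, firecrawl), not actual client code.

```python
# Sketch of the fallback execution pattern: try each source in priority
# order, treat empty results as a miss, and surface all errors if every
# source fails. Source functions here are illustrative stand-ins.

def query_with_fallback(query, sources):
    """sources: ordered list of (name, fetch) pairs, primary first."""
    errors = {}
    for name, fetch in sources:
        try:
            result = fetch(query)
            if result:                      # empty result: fall through
                return name, result
        except Exception as exc:            # source failed: try the next one
            errors[name] = exc
    raise LookupError(f"all sources failed for {query!r}: {errors}")

sources = [
    ("context7", lambda q: None),                        # simulate a miss
    ("octocode", lambda q: [f"repo example for {q}"]),   # simulate a hit
    ("firecrawl", lambda q: [f"tutorial for {q}"]),      # never reached here
]
name, result = query_with_fallback("retry middleware", sources)
```

The same structure extends to the parallel pattern by launching all three fetches concurrently and cross-referencing whatever returns.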

See tool-selection.md for detailed usage.

<discovery_patterns>

Common research workflows:

| Scenario | Approach |
|----------|----------|
| Library Installation | Package search → Official docs → Installation guide |
| Error Resolution | Parse error → Search issues → Official troubleshooting → Community solutions |
| API Exploration | Documentation ID → API reference → Real usage examples |
| Technology Comparison | Parallel all sources → Cross-reference → Build matrix → Recommend |

See discovery-patterns.md for detailed workflows.

</discovery_patterns>

<findings_format>

Two output modes:

Evaluation Mode (recommendations):

Finding: { assertion }
Source: { authoritative source with link }
Confidence: High/Medium/Low — { rationale }

Discovery Mode (gathering):

Found: { what was discovered }
Source: { where from with link }
Notes: { context or caveats }
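The evaluation-mode template above can be rendered programmatically. This dataclass is an illustrative sketch, not part of the skill's actual tooling, and the finding text and URL are invented examples.

```python
# Hypothetical renderer for the Evaluation Mode findings format above.
# The dataclass and example values are illustrative only.

from dataclasses import dataclass

@dataclass
class Finding:
    assertion: str       # the claim being made
    source: str          # authoritative source with link
    confidence: str      # "High" | "Medium" | "Low"
    rationale: str       # why that confidence level

    def render(self) -> str:
        return (f"Finding: {self.assertion}\n"
                f"Source: {self.source}\n"
                f"Confidence: {self.confidence} — {self.rationale}")

f = Finding(
    assertion="Library X supports streaming responses",
    source="https://example.com/library-x/docs#streaming",
    confidence="High",
    rationale="stated in official docs",
)
```

Discovery mode would follow the same shape with `found`, `source`, and `notes` fields.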

</findings_format>

<response_structure>

## Research Summary
Brief overview — what was investigated, which sources were consulted.

## Options Discovered
1. **Option A** — description
2. **Option B** — description

## Comparison Matrix
| Criterion | Option A | Option B |
|-----------|----------|----------|

## Recommendation
### Primary: [Option Name]
**Rationale**: reasoning + evidence
**Confidence**: level + explanation

### Alternatives
When to choose differently.

## Implementation Guidance
Next steps, common pitfalls, validation.

## Sources
- Official, benchmarks, case studies, community

</response_structure>

Always include:

  • Direct citations with links
  • Confidence levels and limitations
  • Context about when recommendations may not apply

Always validate:

  • Version is latest stable
  • Documentation matches user context
  • Critical info cross-referenced
  • Code examples complete and runnable

Proactively flag:

  • Deprecated approaches with modern alternatives
  • Missing prerequisites
  • Common pitfalls and gotchas
  • Related tools in ecosystem

ALWAYS:

  • Create "Analyze Request" todo at session start
  • One stage in_progress at a time
  • Use multi-source approach (context7, octocode, firecrawl)
  • Provide direct citations with links
  • Cross-reference critical information
  • Include confidence levels and limitations

NEVER:

  • Skip "Analyze Request" stage without defining scope
  • Single-source when multi-source available
  • Deliver recommendations without citations
  • Include deprecated approaches without flagging
  • Omit limitations and edge cases

Research vs Report-Findings:

  • This skill (research) covers the full investigation workflow using MCP tools
  • report-findings skill covers synthesis, source assessment, and presentation

Use research for technology evaluation, documentation discovery, and best practices research. Load report-findings during synthesis stage for source authority assessment and confidence calibration.

Install

Download ZIP · Requires askill CLI v1.0+

Metadata

License: unknown
Version: -
Updated: 3 weeks ago
Publisher: outfitter-dev

Tags: api · ci-cd · github-actions · observability · security