User Guide

Learn how to navigate and get the most value from our evaluation of AI-powered developer tools.

Getting Started

  1. Explore the Radar — Visit the main page to see an interactive visualization
  2. Browse Tools — Check the Tools catalog for a complete list
  3. Browse the Timeline — See the Industry Timeline for model releases, funding rounds, launches, and shutdowns
  4. Read Insights — Visit Insights for strategic recommendations
  5. Deep Dive — Click any tool to see detailed evaluation rationale

Understanding the Radar

Each tool is evaluated across five equally-weighted dimensions (0-20 scale each):

AI Autonomy

Ability to plan and execute multi-step tasks (assistive → agentic → self-directed)

Collaboration

Human + AI co-creation fluency (prompting → pairing → natural collaboration)

Contextual Understanding

Depth of understanding across repos, projects, and systems (file → repo → ecosystem)

Governance

Enterprise readiness: compliance, observability, and trust controls

User Interface

Interaction maturity: keyboard → chat → multimodal ("vibe coding")
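Since the five dimensions are equally weighted at 0-20 each, a tool's 0-100 Rating can be read as their simple sum. A minimal sketch of that arithmetic (the dimension names come from this guide; the example scores are invented for illustration):

```python
# The five equally weighted evaluation dimensions, each scored 0-20.
DIMENSIONS = [
    "AI Autonomy",
    "Collaboration",
    "Contextual Understanding",
    "Governance",
    "User Interface",
]

def rating(scores: dict) -> int:
    """Sum the five 0-20 dimension scores into a 0-100 Rating."""
    for name in DIMENSIONS:
        value = scores[name]
        if not 0 <= value <= 20:
            raise ValueError(f"dimension out of range: {name}={value}")
    return sum(scores[name] for name in DIMENSIONS)

# Hypothetical tool: strong UI and context, weaker governance.
example = {
    "AI Autonomy": 15,
    "Collaboration": 14,
    "Contextual Understanding": 16,
    "Governance": 10,
    "User Interface": 17,
}
print(rating(example))  # 72
```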

Understanding Scores

Rating vs. Adjusted Score

Rating (0-100): Pure capability score based on dimension assessments.

Adjusted Score: Confidence-adjusted score that accounts for evaluation evidence and status maturity. Use this for enterprise decisions.

Why two scores? A tool might have strong capabilities (high Rating) but limited validation (lower Adjusted Score).
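The guide does not publish the exact adjustment formula, but one plausible model consistent with the description is scaling the Rating by the confidence factor attached to the tool's status. A hedged sketch under that assumption:

```python
# Assumed model (not the site's published formula): the Adjusted Score
# discounts the raw Rating by a 0-1 confidence factor derived from
# evaluation evidence and status maturity.
def adjusted_score(rating: float, confidence: float) -> float:
    """Scale a 0-100 Rating by a 0-1 confidence factor."""
    if not (0 <= rating <= 100 and 0 <= confidence <= 1):
        raise ValueError("rating must be 0-100, confidence 0-1")
    return round(rating * confidence, 1)

# A capable but unproven tool: high Rating, discounted Adjusted Score.
print(adjusted_score(85, 0.60))  # 51.0
```

Under this model, a fully validated tool (confidence near 1.0) keeps essentially its full Rating, while an emerging tool with the same capabilities is discounted.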

Score Interpretation

| Score Range | Interpretation |
| --- | --- |
| 80-100 | Exceptional - Leading capabilities |
| 60-79 | Strong - Solid, production-ready |
| 40-59 | Moderate - Functional with gaps |
| 20-39 | Limited - Basic capabilities |
| 0-19 | Minimal - Significant limitations |
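The bands above are contiguous, so mapping a score to its interpretation is a simple threshold check. A small sketch of that mapping:

```python
def interpret(score: int) -> str:
    """Map a 0-100 score to its interpretation band."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 80:
        return "Exceptional"
    if score >= 60:
        return "Strong"
    if score >= 40:
        return "Moderate"
    if score >= 20:
        return "Limited"
    return "Minimal"

print(interpret(72))  # Strong
```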

Using Presets

Quick selection presets help you focus on relevant tool subsets:

Top

Highest-scoring tools by adjusted score

Adopted

Enterprise-validated tools for production use

Emerging

New to market (<12 months), promising but unproven

Watch

Established tools we're monitoring, not yet formally evaluated

Recent

Most recently updated evaluations (excludes pre-evaluation tools)

Tool Statuses

Each tool's status indicates evaluation maturity and confidence level:

| Status | Confidence | Meaning |
| --- | --- | --- |
| Adopted | 85-100% | Fully integrated into workflows or client implementations |
| In Review | 65-90% | Under active evaluation |
| Emerging | 55-80% | New to market (< 12 months) |
| Watch | 50-75% | Established tool we're monitoring |
| Deferred | 40-65% | Previously reviewed, now paused |
| Not Enterprise Viable | 30-50% | Fails reliability, governance, or readiness criteria |

Frequently Asked Questions

How often are tools re-evaluated?

Monthly for score adjustments, quarterly for deep-dives. Significant product changes trigger immediate re-evaluation.

What's the difference between Rating and Adjusted Score?

Rating is pure capability (0-100). Adjusted Score applies confidence based on status maturity and evidence quality—proven tools keep full scores, emerging tools are discounted.

Why are some tools not showing scores?

Tools in Submitted or Backlog status haven't been evaluated yet. They appear in the catalog but don't have scores until evaluation is complete.

How can I suggest a tool?

Use the Submit page to suggest a new tool. Provide details about capabilities and your use case.

Can I share my custom tool selection?

Yes! URLs preserve your selection state. Simply copy and share the URL to let others see the same comparison.
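How the URL encodes the selection isn't specified; one common pattern is a comma-separated list in a query parameter. A hypothetical sketch of such a round trip (the `tools` parameter name and tool IDs are assumptions, not the site's actual URL scheme):

```python
from urllib.parse import parse_qs, urlencode, urlparse

def share_url(base: str, tool_ids: list) -> str:
    """Encode a tool selection into a shareable URL (assumed scheme)."""
    return f"{base}?{urlencode({'tools': ','.join(tool_ids)})}"

def parse_selection(url: str) -> list:
    """Recover the tool selection from a shared URL."""
    query = parse_qs(urlparse(url).query)
    return query.get("tools", [""])[0].split(",")

url = share_url("https://example.com/radar", ["tool-a", "tool-b"])
print(parse_selection(url))  # ['tool-a', 'tool-b']
```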

Why do some dimensions have capped scores?

When a tool has fundamental limitations (e.g., no SSO), we cap its dimension score. The reason is shown on the tool's detail page.
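The capping mechanics can be sketched as clamping a dimension score to a ceiling and recording the rationale for display. A minimal illustration (the function shape and field names are assumptions):

```python
def apply_cap(score: int, cap, reason: str = ""):
    """Clamp a 0-20 dimension score to an optional cap.

    Returns the effective score and, if the cap applied, its rationale
    (shown on the tool's detail page).
    """
    if cap is not None and score > cap:
        return cap, reason
    return score, ""

# e.g. strong raw capability, but no SSO caps the Governance dimension.
print(apply_cap(18, 8, "no SSO support"))  # (8, 'no SSO support')
```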

Learn More