# Competitive Scoring Rubric

## Overview

This rubric provides a standardized framework for evaluating competitors across key dimensions. Consistent scoring enables meaningful comparisons and tracks competitive position changes over time.

## Scoring Scale (1-10)

| Score | Label | Definition |
|-------|-------|-----------|
| 1-2 | Poor | Significant gaps, major usability issues, or missing capability |
| 3-4 | Below Average | Basic functionality with notable limitations |
| 5-6 | Average | Meets market expectations, no standout qualities |
| 7-8 | Above Average | Strong execution with clear advantages |
| 9-10 | Exceptional | Industry-leading, sets the standard for others |

## Dimension Categories

### 1. User Experience (UX) - Weight: 20%
- **Onboarding**: Time to first value, setup complexity, guided flows
- **Navigation**: Information architecture, discoverability, consistency
- **Visual Design**: Modern aesthetics, brand coherence, accessibility
- **Performance**: Page load times, responsiveness, offline capability
- **Mobile Experience**: Native app quality, responsive design, feature parity

### 2. Feature Completeness - Weight: 25%
- **Core Features**: Coverage of essential use cases
- **Advanced Features**: Power user capabilities, automation, customization
- **Workflow Support**: End-to-end process coverage without workarounds
- **API & Extensibility**: API coverage, webhook support, SDK quality
- **Innovation**: Unique capabilities not found in competitors

### 3. Pricing & Value - Weight: 15%
- **Transparency**: Clear pricing without hidden costs
- **Flexibility**: Plan options matching different customer sizes
- **Value-to-Cost Ratio**: Feature access relative to price point
- **Free Tier / Trial**: Quality of free offering for evaluation
- **Contract Terms**: Lock-in requirements, cancellation ease

### 4. Integrations - Weight: 10%
- **Native Integrations**: Number and quality of built-in connectors
- **Marketplace**: Third-party app ecosystem breadth
- **API Quality**: Documentation, reliability, rate limits
- **Data Import/Export**: Migration ease, format support
- **Workflow Automation**: Zapier, Make, native automation support

### 5. Support & Documentation - Weight: 10%
- **Documentation Quality**: Completeness, searchability, freshness
- **Support Channels**: Chat, email, phone, community availability
- **Response Time**: SLA adherence, resolution speed
- **Self-Service**: Knowledge base, video tutorials, community forums
- **Onboarding Support**: Dedicated CSM, implementation assistance

### 6. Performance & Reliability - Weight: 10%
- **Uptime**: Historical availability, SLA commitments
- **Speed**: Application responsiveness under normal load
- **Scalability**: Performance at high volume, enterprise readiness
- **Data Handling**: Large dataset support, bulk operations
- **Global Performance**: CDN, regional deployments, latency

### 7. Security & Compliance - Weight: 10%
- **Authentication**: SSO, MFA, RBAC granularity
- **Data Protection**: Encryption at rest and in transit, data residency
- **Certifications**: SOC 2, ISO 27001, GDPR, HIPAA compliance
- **Audit Trail**: Activity logging, access monitoring
- **Privacy Controls**: Data retention policies, right to deletion

## Weighting Guidelines

Default weights above suit most B2B SaaS evaluations. Adjust individual dimensions based on buyer profile, then rebalance the remaining dimensions so the weights still sum to 100%:

- **Enterprise buyers**: Increase Security (15%), Support (15%), reduce Pricing (10%)
- **Developer tools**: Increase Integrations (20%), Features (30%), reduce UX (10%)
- **SMB products**: Increase Pricing (25%), UX (25%), reduce Security (5%)
- **Regulated industries**: Increase Security (25%), reduce Features (15%)
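Because the raw adjustments above push the total past 100%, a small helper can rescale them. This is a minimal sketch assuming the seven dimension names used in this rubric; the function and variable names are illustrative, not part of any standard library.

```python
def normalize_weights(weights: dict) -> dict:
    """Scale raw dimension weights so they sum to exactly 1.0."""
    total = sum(weights.values())
    return {dim: w / total for dim, w in weights.items()}

# Enterprise-buyer adjustment from the guidelines above (raw sum = 1.05)
enterprise_raw = {
    "UX": 0.20, "Features": 0.25, "Pricing": 0.10,
    "Integrations": 0.10, "Support": 0.15,
    "Performance": 0.10, "Security": 0.15,
}
enterprise = normalize_weights(enterprise_raw)  # now sums to 1.0
```

Rescaling preserves the relative emphasis (Security and Support still count for more) while keeping composite scores comparable across profiles.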

## Calibration Process

1. **Anchor scoring** - Score your own product first to establish baseline
2. **Multiple scorers** - Have 2-3 team members score independently
3. **Discuss outliers** - Reconcile scores that differ by more than 2 points
4. **Document evidence** - Record specific examples justifying each score
5. **Normalize quarterly** - Re-calibrate as market expectations evolve
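Step 3 (reconcile scores that differ by more than 2 points) is easy to automate once independent scores are collected. A possible sketch, assuming scores are stored per scorer as simple dictionaries; the data shape and names are illustrative:

```python
def flag_outliers(scores: dict, threshold: int = 2) -> list:
    """Return dimensions where any two scorers differ by more than `threshold`.

    `scores` maps scorer name -> {dimension: score}.
    """
    dimensions = next(iter(scores.values())).keys()
    flagged = []
    for dim in dimensions:
        values = [per_scorer[dim] for per_scorer in scores.values()]
        if max(values) - min(values) > threshold:
            flagged.append(dim)
    return flagged

reviews = {
    "scorer_a": {"UX": 8, "Pricing": 6},
    "scorer_b": {"UX": 5, "Pricing": 7},
}
flag_outliers(reviews)  # UX spread is 3 points, so it needs discussion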

## Bias Mitigation

- **Avoid halo effect** - Score each dimension independently, not influenced by overall impression
- **Use evidence, not feelings** - Every score must link to observable data points
- **Include competitor strengths** - Resist the tendency to under-score competitors
- **Rotate scorers** - Different team members bring fresh perspectives
- **Blind scoring** - When possible, evaluate features without knowing which competitor
- **Customer validation** - Compare internal scores against user review sentiment

## Composite Score Calculation

```
Weighted Score = SUM(Dimension Score x Dimension Weight)

Example:
UX(8) x 0.20 = 1.60
Features(7) x 0.25 = 1.75
Pricing(6) x 0.15 = 0.90
Integrations(8) x 0.10 = 0.80
Support(7) x 0.10 = 0.70
Performance(9) x 0.10 = 0.90
Security(8) x 0.10 = 0.80
---
Total = 7.45 / 10
```
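The worked example above translates directly into code. A minimal sketch using the default weights from this rubric (dimension names are shortened for readability):

```python
WEIGHTS = {
    "UX": 0.20, "Features": 0.25, "Pricing": 0.15,
    "Integrations": 0.10, "Support": 0.10,
    "Performance": 0.10, "Security": 0.10,
}

def composite_score(scores: dict) -> float:
    """Weighted sum of 1-10 dimension scores, rounded to 2 decimals."""
    return round(sum(scores[dim] * w for dim, w in WEIGHTS.items()), 2)

example = {
    "UX": 8, "Features": 7, "Pricing": 6, "Integrations": 8,
    "Support": 7, "Performance": 9, "Security": 8,
}
composite_score(example)  # 7.45, matching the worked example
```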

## Output Format

Present results as a comparison matrix with color coding:
- Green (8-10): Competitive advantage
- Yellow (5-7): Market parity
- Red (1-4): Competitive gap
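The banding can be expressed as a small mapping function for whatever tool renders the matrix; this sketch just encodes the thresholds listed above:

```python
def band(score: float) -> str:
    """Map a 1-10 score to the matrix color band."""
    if score >= 8:
        return "green"   # competitive advantage
    if score >= 5:
        return "yellow"  # market parity
    return "red"         # competitive gap

[band(s) for s in (9, 6, 3)]  # ['green', 'yellow', 'red']
```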
