How Scoring Works

Every score is deterministic, explainable and reproducible.

Two layers

Implera uses two distinct layers to assess your codebase. The deterministic analysis layer is the source of truth. It scans up to 500 files, applies static heuristics and produces scores that are fully reproducible. The same code always produces the same result.

The AI interpretation layer runs in the background after the deterministic analysis completes. It contextualises findings, explains why issues matter for your specific codebase and suggests concrete fixes. AI never changes or overrides the deterministic scores.

These two layers are always kept separate in the interface. Deterministic scores are displayed as numbers. AI interpretations are clearly labelled as AI-generated content.
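One way to picture that separation is a minimal sketch of a result record. This is purely illustrative, with hypothetical field names (the actual API shape is not documented on this page): the deterministic score is always a plain number, and AI commentary sits alongside it, flagged and never able to change it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class DomainFinding:
    """Hypothetical result shape illustrating the two-layer split."""
    domain: str
    score: int                     # deterministic layer: always a number
    ai_note: Optional[str] = None  # AI layer: optional commentary only
    ai_generated: bool = False     # AI content is always labelled as such

finding = DomainFinding("security", 82,
                        ai_note="Rotate the exposed key in config.py",
                        ai_generated=True)
print(finding.score)  # 82 -- unchanged regardless of the AI note
```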

The seven domains

Your codebase is scored across seven specialist domains. Each domain measures a different aspect of code quality. Maintainability is a base scoring module that feeds into the overall score but does not have its own specialist page.

| Domain | Weight | What's measured |
| --- | --- | --- |
| Security | 20% | Hardcoded secrets, dangerous API patterns, dependency vulnerabilities, licence compliance |
| Testing | 20% | Test file ratio, real coverage data (LCOV/Istanbul), CI and linter detection |
| Architecture | 20% | Circular dependencies, change coupling, directory structure, lockfile health |
| Maintainability | — | File sizes, function complexity, nesting depth, README presence |
| Performance | 10% | Heavy imports, N+1 query patterns, sequential awaits, large files |
| Dependencies | 10% | Transitive vulnerability scanning, licence classification, lockfile parsing |
| Accessibility | 10% | 10 WCAG patterns across templates and CSS files |
| Documentation | 10% | README sections, environment variable coverage, infrastructure doc sync |

Overall score calculation

The overall score is a weighted sum of the domain scores. The three core domains (security, testing and architecture) each carry 20% weight. The four supplementary domains (performance, dependencies, accessibility and documentation) each carry 10%.

Maintainability feeds into the base structural score rather than being weighted separately. This means file complexity, nesting depth and code organisation directly influence the foundation that other domain scores build on.

overall = (security * 0.20)
        + (testing * 0.20)
        + (architecture * 0.20)
        + (performance * 0.10)
        + (dependencies * 0.10)
        + (accessibility * 0.10)
        + (documentation * 0.10)

In the API response, testing maps to test_coverage and architecture maps to structure in the breakdown object.
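The formula above can be checked with a short script. This is a sketch rather than the production implementation: the weights and the breakdown field names (test_coverage, structure) come from this page, while the example scores are made up.

```python
# Domain weights from the table above. Keys use the API field names:
# "testing" is reported as test_coverage, "architecture" as structure.
WEIGHTS = {
    "security": 0.20,
    "test_coverage": 0.20,
    "structure": 0.20,
    "performance": 0.10,
    "dependencies": 0.10,
    "accessibility": 0.10,
    "documentation": 0.10,
}

def overall_score(breakdown):
    """Weighted sum of domain scores, each on a 0-100 scale."""
    return sum(breakdown[domain] * weight for domain, weight in WEIGHTS.items())

breakdown = {
    "security": 80, "test_coverage": 70, "structure": 75,
    "performance": 60, "dependencies": 90, "accessibility": 55,
    "documentation": 65,
}
print(round(overall_score(breakdown), 1))  # 72.0
```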

What deterministic scanning finds

The static analysis engine scans up to 500 files per analysis. It runs a fixed set of pattern matchers across your codebase, producing consistent results on every run.

  • 13 secret patterns (API keys, tokens, private keys, connection strings)
  • 16 dangerous API patterns with CWE mapping
  • 10 WCAG accessibility patterns across templates and stylesheets
  • Sequential await and N+1 query detection
  • Circular dependency detection across module imports
  • Coverage report parsing (LCOV, Istanbul, Cobertura)
  • Licence classification for declared dependencies
  • Cognitive complexity scoring (SonarQube-style, nesting-aware) for 11 languages

What AI reviews add

AI reviews interpret the deterministic findings in context. Rather than simply listing issues, the AI layer explains why a specific finding matters for your codebase and what the practical impact is.

Each AI review examines 10 to 15 high-signal files per domain. These include configuration files, entry points and the largest source files. The AI suggests concrete fixes with code snippets where appropriate.
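A hypothetical version of that file-selection heuristic is sketched below. The real selection logic is not documented; the specific file names and the size-based ordering here are assumptions, chosen only to mirror the criteria described above (configs first, then entry points, then the largest remaining files, capped at 15).

```python
# Hypothetical name sets -- the actual selection logic is not public.
CONFIG_NAMES = {"package.json", "pyproject.toml", "tsconfig.json", "Dockerfile"}
ENTRY_NAMES = {"main.py", "index.ts", "index.js", "app.py"}

def select_high_signal_files(files, limit=15):
    """files maps a repo-relative path to its size in bytes."""
    def basename(path):
        return path.rsplit("/", 1)[-1]

    configs = [p for p in files if basename(p) in CONFIG_NAMES]
    entries = [p for p in files
               if basename(p) in ENTRY_NAMES and p not in configs]
    chosen = configs + entries
    # Fill the remaining slots with the largest source files.
    rest = sorted((p for p in files if p not in chosen),
                  key=files.get, reverse=True)
    return (chosen + rest)[:limit]

files = {
    "package.json": 200,
    "src/index.ts": 1_000,
    "src/big_module.ts": 5_000,
    "src/helper.ts": 50,
}
print(select_high_signal_files(files, limit=3))
# ['package.json', 'src/index.ts', 'src/big_module.ts']
```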

AI-generated content is always clearly labelled in the interface. It supplements the deterministic analysis but never replaces it.

Score labels

Your overall score maps to a label that gives you a quick read on where your codebase stands.

| Label | Score range | What it means |
| --- | --- | --- |
| Healthy | 80 – 100 | Strong practices across all domains. Tests are present, dependencies are managed, no critical security findings. This codebase is well maintained. |
| Stable | 65 – 79 | Good foundations with room for improvement. Most codebases in active development fall in this range. A few domains may need focused attention. |
| Needs Attention | 50 – 64 | Multiple domains are below average. There are likely missing tests, structural issues or unaddressed security findings. Targeted improvements will have a visible impact on the score. |
| At Risk | 0 – 49 | Significant quality gaps across several domains. Common in early-stage projects or rapid prototypes. The domain breakdown will highlight the most impactful areas to address first. |
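The label thresholds translate directly into a small lookup, sketched here with the ranges from the table:

```python
# (lower bound, label) pairs copied from the table above,
# checked from the highest threshold down.
LABELS = [
    (80, "Healthy"),
    (65, "Stable"),
    (50, "Needs Attention"),
    (0, "At Risk"),
]

def score_label(score):
    for floor, label in LABELS:
        if score >= floor:
            return label
    raise ValueError("score must be in the 0-100 range")

print(score_label(72))  # Stable
```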