Get expert-level code quality assessment with confidence scoring. Turn subjective "this code feels messy" into objective insights about maintainability, complexity, and concrete improvement opportunities.

Quick Start

Jump straight in with these copy-paste prompts. Perfect for code reviews and quality audits:

Natural Language Prompt
Please analyze the code quality of this file: C:/project/src/UserService.ts
Focus on Maintainability
Can you assess the maintainability of this code? C:/api/PaymentProcessor.php
Comprehensive Quality Review
Give me a comprehensive code quality analysis for the entire project at C:/my-app/src

Quality Scoring

You'll get specific quality scores (1-10) for different aspects like maintainability, readability, and testability - no more guesswork about code health.

What It Does

Think of this as your personal code quality consultant who never gets tired and always gives objective feedback. The analyze_code_quality tool performs comprehensive quality assessment using expert-level prompts to deliver insights comparable to senior developer reviews.

It evaluates your code across multiple quality dimensions:

  • Maintainability - How easy is this code to modify and extend?
  • Readability - Can other developers (including future you) understand this quickly?
  • Complexity - Is the code overly complex or appropriately structured?
  • Testability - How difficult would it be to write comprehensive tests?
  • Performance Implications - Potential performance issues from a quality perspective
  • Technical Debt - What's the accumulation of shortcuts and compromises?

Unlike generic linting tools, this provides contextual analysis that considers your specific framework, project constraints, and business requirements. It's like having a senior developer review your work with fresh eyes every time.

Parameters

Here's how to customize the quality analysis to match your specific needs:

  • filePath (string, required*): Path to the file you want quality-checked. Example: "C:/project/src/components/UserProfile.tsx"

  • code (string, required*): Code content directly (alternative to filePath). Perfect for analyzing specific functions or classes.

  • projectPath (string, optional): Project root for multi-file quality analysis. Use for comprehensive project-wide quality assessment.

  • analysisDepth (string, optional, default "detailed"): Analysis depth: "basic", "detailed", or "comprehensive". "comprehensive" is recommended for quality audits.

  • analysisType (string, optional, default "comprehensive"): Focus area: "complexity", "maintainability", or "comprehensive". Use "maintainability" for legacy code assessment.

  • context (object, optional, default {}): Framework and project context for specialized quality metrics. Example: {"framework": "React", "team_size": "large", "legacy": true}

  • maxDepth (number, optional, default 3): Maximum directory depth for multi-file discovery (1-5). Increase for deep project structures.

*One of filePath, code, or projectPath is required - choose based on your analysis scope
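The parameter rules above can be sketched as a small validation helper. This is a hypothetical illustration, not part of the houtini-lm client: the build_quality_request function is our own invention, but the constraints it enforces (one required source argument, the allowed analysisDepth and analysisType values, the 1-5 maxDepth range) come straight from the table.

```python
# Hypothetical sketch: assembling and validating analyze_code_quality
# parameters per the table above. The function name is ours, not the tool's.

def build_quality_request(filePath=None, code=None, projectPath=None,
                          analysisDepth="detailed",
                          analysisType="comprehensive",
                          context=None, maxDepth=3):
    """Return a parameter dict for analyze_code_quality, or raise on misuse."""
    if not any([filePath, code, projectPath]):
        raise ValueError("One of filePath, code, or projectPath is required")
    if analysisDepth not in ("basic", "detailed", "comprehensive"):
        raise ValueError(f"Unknown analysisDepth: {analysisDepth}")
    if analysisType not in ("complexity", "maintainability", "comprehensive"):
        raise ValueError(f"Unknown analysisType: {analysisType}")
    if not 1 <= maxDepth <= 5:
        raise ValueError("maxDepth must be between 1 and 5")
    params = {
        "analysisDepth": analysisDepth,
        "analysisType": analysisType,
        "context": context or {},
        "maxDepth": maxDepth,
    }
    # Include only the source argument(s) actually supplied
    for key, value in (("filePath", filePath), ("code", code),
                       ("projectPath", projectPath)):
        if value is not None:
            params[key] = value
    return params

request = build_quality_request(
    filePath="C:/project/src/components/UserProfile.tsx",
    context={"framework": "React", "team_size": "large"},
)
```

The defaults mirror the table, so a minimal call only needs one source argument.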

Real-World Examples

Here's how to use quality analysis in different scenarios:

Legacy Code Assessment

Inherited Codebase Review
houtini-lm:analyze_code_quality with:
- filePath: "C:/legacy-app/src/core/UserManager.js"
- analysisType: "maintainability"
- analysisDepth: "comprehensive"
- context: {"legacy": true, "team_size": "small", "refactoring_planned": true}

Pre-Review Quality Check

Before Code Review
houtini-lm:analyze_code_quality with:
- filePath: "C:/project/src/features/checkout/PaymentFlow.tsx"
- analysisType: "comprehensive"
- context: {"framework": "React", "typescript": true, "critical_path": true}

Project-Wide Quality Assessment

Comprehensive Project Analysis
houtini-lm:analyze_code_quality with:
- projectPath: "C:/my-saas-app/backend"
- analysisDepth: "comprehensive"
- maxDepth: 4
- context: {"framework": "Node.js", "scale": "enterprise", "team_size": "large"}

Complexity-Focused Analysis

Complexity Assessment
houtini-lm:analyze_code_quality with:
- code: "// Paste your complex function here"
- analysisType: "complexity"
- context: {"algorithm_heavy": true, "performance_critical": true}

What You Get Back

The quality analysis delivers actionable insights with objective scoring to guide your improvement efforts:

📊 Quality Scoring & Metrics

  • Overall Quality Score - Composite score (1-10) with confidence rating
  • Maintainability Score - How easy is it to modify this code?
  • Readability Score - Can other developers understand it quickly?
  • Complexity Rating - Cognitive load and cyclomatic complexity assessment
  • Testability Score - How difficult would comprehensive testing be?

๐Ÿ” Detailed Quality Assessment

  • Code Structure Analysis - Organization, separation of concerns, modularity
  • Naming Convention Review - Variable, function, and class naming quality
  • Documentation Quality - Comment adequacy and usefulness
  • Error Handling Assessment - Robustness and failure management

⚡ Performance Quality Insights

  • Algorithm Efficiency - Big O analysis where relevant
  • Memory Usage Patterns - Potential memory leaks or inefficient usage
  • Framework Best Practices - React hooks, Laravel patterns, etc.
  • Scalability Concerns - How will this perform under load?

๐Ÿ› ๏ธ Concrete Improvement Recommendations

  • Specific Refactoring Suggestions - What to change and why
  • Priority Rankings - Which improvements have the highest impact
  • Risk Assessment - Safety of proposed changes
  • Before/After Examples - Clear demonstrations of improvements

📈 Technical Debt Assessment

  • Debt Hotspots - Areas needing immediate attention
  • Accumulation Analysis - How technical debt is building up
  • Remediation Timeline - Estimated effort for improvements
  • Business Impact - How quality issues affect development velocity

Objective Quality Metrics

Unlike subjective "this feels messy" feedback, you get specific scores and measurable improvements. Perfect for tracking quality improvements over time.

Perfect Use Cases

Here's when code quality analysis becomes your strategic advantage:

๐Ÿฅ Legacy Code Modernization

Inherited a codebase that nobody wants to touch? Get objective quality scores to prioritize what needs fixing first. Turn "this code is scary" into "here's exactly what needs improvement and why."

👥 Code Review Preparation

Before submitting for peer review, run a quality check to catch issues early. Show up to reviews with confidence, having already addressed potential concerns.

📊 Team Quality Standards

Establish objective quality thresholds for your team. No more arguments about code style - let the quality metrics guide discussions about what needs improvement.

🚀 Pre-Production Quality Gates

Before merging to main or deploying to production, ensure code meets your quality standards. Catch maintainability issues before they become expensive problems.

📚 Developer Learning & Mentoring

Help junior developers understand what makes code "good" with specific, measurable feedback. Turn abstract concepts like "clean code" into concrete guidance.

🔄 Refactoring Decision Support

Which parts of your codebase would benefit most from refactoring? Get data-driven insights about where improvement efforts will have the biggest impact.

โš–๏ธ Technical Debt Management

Quantify technical debt for stakeholder discussions. Replace "we need to refactor" with "here's the measurable quality debt and its business impact."

Understanding Quality Metrics

Here's what the different quality scores mean and how to interpret them:

🎯 Overall Quality Score (1-10)

  • 9-10: Production-ready, exemplary code
  • 7-8: Good quality, minor improvements possible
  • 5-6: Acceptable but needs attention
  • 3-4: Quality issues requiring refactoring
  • 1-2: Significant quality problems, major refactoring needed
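If you track scores over time, the band boundaries above are easy to encode. A minimal illustrative helper (the function is ours, not part of the tool's output):

```python
# Illustrative helper: map the 1-10 overall quality score to the
# interpretation bands listed above.

def quality_band(score):
    if not 1 <= score <= 10:
        raise ValueError("score must be between 1 and 10")
    if score >= 9:
        return "Production-ready, exemplary code"
    if score >= 7:
        return "Good quality, minor improvements possible"
    if score >= 5:
        return "Acceptable but needs attention"
    if score >= 3:
        return "Quality issues requiring refactoring"
    return "Significant quality problems, major refactoring needed"
```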

🔧 Maintainability Score

Measures how easy it is to modify, extend, and debug the code:

  • High (8-10): Changes are straightforward, well-isolated impact
  • Medium (5-7): Requires understanding but manageable changes
  • Low (1-4): Changes are risky and require extensive understanding

๐Ÿ‘๏ธ Readability Score

Assesses how quickly a developer can understand the code:

  • Clear naming conventions and meaningful variable names
  • Logical code organization and appropriate abstraction levels
  • Helpful comments where the code isn't self-explanatory
  • Consistent formatting and style throughout

🧠 Complexity Assessment

Evaluates cognitive load and potential for bugs:

  • Cyclomatic Complexity: Number of independent paths
  • Nesting Depth: How deeply nested is the logic?
  • Function Length: Are functions appropriately sized?
  • Cognitive Load: How much mental effort to understand?
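Cyclomatic complexity, the first bullet above, can be approximated for Python code by counting branching constructs. Real analyzers (and this tool's assessment) weigh more constructs than shown here, so treat this as a sketch of the idea rather than the tool's actual method:

```python
# Rough cyclomatic complexity estimate: start at 1 (one straight-line
# path) and add one for each branching construct found in the AST.
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source):
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES)
                   for node in ast.walk(tree))

snippet = """
def grade(score):
    if score >= 90:
        return "A"
    elif score >= 80:
        return "B"
    return "C"
"""
print(cyclomatic_complexity(snippet))  # if + elif = two branches -> 3
```

Functions scoring above roughly 10 on this kind of count are commonly flagged as candidates for splitting.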

🧪 Testability Score

Measures how difficult it would be to write comprehensive tests:

  • Dependency Injection: Are dependencies easily mockable?
  • Pure Functions: Predictable input/output relationships
  • Side Effects: Are they isolated and manageable?
  • State Management: Clear and predictable state changes
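A minimal before/after showing the dependency-injection and pure-function points above (hypothetical example code, not output from the tool):

```python
# Before: a hidden dependency on the real system clock makes the
# function's output unpredictable in tests.
# After: the clock is injected, giving a pure input/output relationship.
import datetime

def greeting_hard_to_test():
    # Hidden dependency: reads the real system clock directly
    hour = datetime.datetime.now().hour
    return "Good morning" if hour < 12 else "Good afternoon"

def greeting_testable(now):
    # Injected dependency: tests can pass any fixed datetime
    return "Good morning" if now.hour < 12 else "Good afternoon"

fixed = datetime.datetime(2024, 1, 1, 9, 0)
print(greeting_testable(fixed))  # "Good morning"
```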

Context Matters

Quality metrics should be interpreted in context. A prototype might reasonably have lower quality scores than production code. Use these metrics to guide improvements, not as absolute judgments.

Best Practices

Get the most valuable insights from your code quality analysis:

📊 Set Baseline Quality Standards

Establish quality thresholds for your team:

  • Production code should score 7+ overall quality
  • Critical paths require 8+ maintainability scores
  • New features must meet readability standards
  • Legacy refactoring targets specific score improvements
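A team could wire thresholds like these into a simple quality gate. This sketch assumes hypothetical score keys ("overall", "maintainability"); the tool's actual response format may differ:

```python
# Hypothetical quality gate implementing the example thresholds above:
# production code scores 7+ overall, critical paths 8+ maintainability.

THRESHOLDS = {"overall": 7, "maintainability": 8}

def passes_quality_gate(scores, critical_path=False):
    """Return (ok, failures) for a dict of 1-10 quality scores."""
    failures = []
    if scores.get("overall", 0) < THRESHOLDS["overall"]:
        failures.append("overall quality below 7")
    if critical_path and scores.get("maintainability", 0) < THRESHOLDS["maintainability"]:
        failures.append("critical path maintainability below 8")
    return (not failures, failures)

ok, why = passes_quality_gate({"overall": 8, "maintainability": 7},
                              critical_path=True)
```

Here the code passes the overall bar but fails the stricter critical-path maintainability check, so the gate reports exactly which threshold was missed.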

🔄 Regular Quality Assessments

Make quality analysis part of your development workflow:

  • Pre-commit: Check quality before committing changes
  • PR reviews: Include quality analysis in pull request templates
  • Sprint retrospectives: Review quality trends and improvements
  • Technical debt planning: Use metrics to prioritize refactoring

🎯 Focus on High-Impact Areas

Prioritize quality improvements strategically:

  • Critical business logic paths deserve highest attention
  • Frequently modified code benefits most from quality improvements
  • Complex algorithms should prioritize readability and testability
  • Public APIs need excellent maintainability scores

📈 Track Quality Improvements

Monitor progress over time:

  • Document quality scores before and after refactoring
  • Set measurable quality improvement goals
  • Celebrate quality improvements alongside feature delivery
  • Use trends to identify areas needing attention

๐Ÿ—๏ธ Context-Aware Analysis

Provide rich context for better insights:

  • Mention if code is prototype vs production-ready
  • Include framework and technology stack information
  • Note team size and experience levels
  • Specify performance or scalability requirements

Troubleshooting

Resolve common issues with code quality analysis:

Quality scores seem too harsh or lenient

The quality assessment doesn't match your expectations.

  • Add more context about your project type and constraints
  • Specify if this is prototype, production, or legacy code
  • Include team size and experience level context
  • Use "comprehensive" analysis depth for more nuanced scoring

Analysis is too generic, not framework-specific

Not getting React, Vue, or framework-specific quality insights.

  • Always include framework context: {"framework": "React"}
  • Mention specific patterns: {"framework": "React", "hooks": true, "redux": true}
  • Include version information when relevant
  • Use "comprehensive" analysisType for framework-specific insights

Project-wide analysis takes too long

Multi-file quality analysis is slow or timing out.

  • Reduce maxDepth to 2 or 3 for large projects
  • Use "basic" analysisDepth initially, then "comprehensive" for key files
  • Analyze specific directories instead of entire project
  • Ensure your LM Studio model is fully loaded with sufficient resources

Recommendations are not actionable enough

Getting vague suggestions instead of specific improvements.

  • Use "comprehensive" analysisDepth for more specific recommendations
  • Focus analysisType on specific areas: "maintainability" or "complexity"
  • Provide more context about your improvement goals
  • Ask for specific examples in your prompt

Quality metrics don't align with team standards

The scoring system doesn't match your team's quality expectations.

  • Include team standards in context: {"standards": "strict", "team_size": "large"}
  • Mention specific quality requirements or constraints
  • Specify if you're following particular coding standards
  • Use results as relative comparisons rather than absolute judgments