Package details

@juspay/neurolink

juspay · 11.3k · MIT · 8.21.0

Universal AI Development Platform with working MCP integration, multi-provider support, and professional CLI. Built-in tools operational, 58+ external MCP servers discoverable. Connect to filesystem, GitHub, database operations, and more. Build, test, and …

ai, llm, mcp, model-context-protocol

readme

🧠 NeuroLink


Enterprise AI development platform with unified provider access, production-ready tooling, and an opinionated factory architecture. NeuroLink ships as both a TypeScript SDK and a professional CLI so teams can build, operate, and iterate on AI features quickly.

🧠 What is NeuroLink?

NeuroLink is the universal AI integration platform that unifies 12 major AI providers and 100+ models under one consistent API.

Extracted from production systems at Juspay and battle-tested at enterprise scale, NeuroLink provides a production-ready solution for integrating AI into any application. Whether you're building with OpenAI, Anthropic, Google, AWS Bedrock, Azure, or any of our 12 supported providers, NeuroLink gives you a single, consistent interface that works everywhere.

Why NeuroLink? Switch providers with a single parameter change, leverage 64+ built-in tools and MCP servers, deploy with confidence using enterprise features like Redis memory and multi-provider failover, and optimize costs automatically with intelligent routing. Use it via our professional CLI or TypeScript SDK—whichever fits your workflow.

Where we're headed: We're building for the future of AI—edge-first execution and continuous streaming architectures that make AI practically free and universally available. Read our vision →

Get Started in <5 Minutes →


What's New (Q4 2025)

  • Structured Output with Zod Schemas – Type-safe JSON generation with automatic validation using schema + output.format: "json" in generate(). → Structured Output Guide
  • CSV File Support – Attach CSV files to prompts for AI-powered data analysis with auto-detection. → CSV Guide
  • PDF File Support – Process PDF documents with native visual analysis for Vertex AI, Anthropic, Bedrock, AI Studio. → PDF Guide
  • LiteLLM Integration – Access 100+ AI models from all major providers through unified interface. → Setup Guide
  • SageMaker Integration – Deploy and use custom trained models on AWS infrastructure. → Setup Guide
  • Human-in-the-loop workflows – Pause generation for user approval/input before tool execution. → HITL Guide
  • Guardrails middleware – Block PII, profanity, and unsafe content with built-in filtering. → Guardrails Guide
  • Context summarization – Automatic conversation compression for long-running sessions. → Summarization Guide
  • Redis conversation export – Export full session history as JSON for analytics and debugging. → History Guide

Q3 highlights (multimodal chat, auto-evaluation, loop sessions, orchestration) are now in Platform Capabilities below.

Get Started in Two Steps

# 1. Run the interactive setup wizard (select providers, validate keys)
pnpm dlx @juspay/neurolink setup

# 2. Start generating with automatic provider selection
npx @juspay/neurolink generate "Write a launch plan for multimodal chat"

Need a persistent workspace? Launch loop mode with npx @juspay/neurolink loop. Learn more →

🌟 Complete Feature Set

NeuroLink is a comprehensive AI development platform. Every feature below is production-ready and fully documented.

🤖 AI Provider Integration

12 providers unified under one API - Switch providers with a single parameter change.

| Provider          | Models                         | Free Tier      | Tool Support | Status        | Documentation |
| ----------------- | ------------------------------ | -------------- | ------------ | ------------- | ------------- |
| OpenAI            | GPT-4o, GPT-4o-mini, o1        | —              | ✅ Full      | ✅ Production | Setup Guide   |
| Anthropic         | Claude 3.5/3.7 Sonnet, Opus    | —              | ✅ Full      | ✅ Production | Setup Guide   |
| Google AI Studio  | Gemini 2.5 Flash/Pro           | ✅ Free Tier   | ✅ Full      | ✅ Production | Setup Guide   |
| AWS Bedrock       | Claude, Titan, Llama, Nova     | —              | ✅ Full      | ✅ Production | Setup Guide   |
| Google Vertex     | Gemini via GCP                 | —              | ✅ Full      | ✅ Production | Setup Guide   |
| Azure OpenAI      | GPT-4, GPT-4o, o1              | —              | ✅ Full      | ✅ Production | Setup Guide   |
| LiteLLM           | 100+ models unified            | Varies         | ✅ Full      | ✅ Production | Setup Guide   |
| AWS SageMaker     | Custom deployed models         | —              | ✅ Full      | ✅ Production | Setup Guide   |
| Mistral AI        | Mistral Large, Small           | ✅ Free Tier   | ✅ Full      | ✅ Production | Setup Guide   |
| Hugging Face      | 100,000+ models                | ✅ Free        | ⚠️ Partial   | ✅ Production | Setup Guide   |
| Ollama            | Local models (Llama, Mistral)  | ✅ Free (Local)| ⚠️ Partial   | ✅ Production | Setup Guide   |
| OpenAI Compatible | Any OpenAI-compatible endpoint | Varies         | ✅ Full      | ✅ Production | Setup Guide   |

📖 Provider Comparison Guide - Detailed feature matrix and selection criteria
🔬 Provider Feature Compatibility - Test-based compatibility reference for all 19 features across 11 providers


🔧 Built-in Tools & MCP Integration

6 Core Tools (work across all providers, zero configuration):

| Tool               | Purpose                  | Auto-Available          | Documentation  |
| ------------------ | ------------------------ | ----------------------- | -------------- |
| getCurrentTime     | Real-time clock access   | ✅                      | Tool Reference |
| readFile           | File system reading      | ✅                      | Tool Reference |
| writeFile          | File system writing      | ✅                      | Tool Reference |
| listDirectory      | Directory listing        | ✅                      | Tool Reference |
| calculateMath      | Mathematical operations  | ✅                      | Tool Reference |
| websearchGrounding | Google Vertex web search | ⚠️ Requires credentials | Tool Reference |

58+ External MCP Servers supported (GitHub, PostgreSQL, Google Drive, Slack, and more):

// Add any MCP server dynamically
await neurolink.addExternalMCPServer("github", {
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-github"],
  transport: "stdio",
  env: { GITHUB_TOKEN: process.env.GITHUB_TOKEN },
});

// Tools automatically available to AI
const result = await neurolink.generate({
  input: { text: 'Create a GitHub issue titled "Bug in auth flow"' },
});

📖 MCP Integration Guide - Setup external servers


💻 Developer Experience Features

SDK-First Design with TypeScript, IntelliSense, and type safety:

| Feature                 | Description                    | Documentation    |
| ----------------------- | ------------------------------ | ---------------- |
| Auto Provider Selection | Intelligent provider fallback  | SDK Guide        |
| Streaming Responses     | Real-time token streaming      | Streaming Guide  |
| Conversation Memory     | Automatic context management   | Memory Guide     |
| Full Type Safety        | Complete TypeScript types      | Type Reference   |
| Error Handling          | Graceful provider fallback     | Error Guide      |
| Analytics & Evaluation  | Usage tracking, quality scores | Analytics Guide  |
| Middleware System       | Request/response hooks         | Middleware Guide |
| Framework Integration   | Next.js, SvelteKit, Express    | Framework Guides |

🏢 Enterprise & Production Features

Production-ready capabilities for regulated industries:

| Feature                 | Description                       | Use Case                  | Documentation   |
| ----------------------- | --------------------------------- | ------------------------- | --------------- |
| Enterprise Proxy        | Corporate proxy support           | Behind firewalls          | Proxy Setup     |
| Redis Memory            | Distributed conversation state    | Multi-instance deployment | Redis Guide     |
| Cost Optimization       | Automatic cheapest model selection| Budget control            | Cost Guide      |
| Multi-Provider Failover | Automatic provider switching      | High availability         | Failover Guide  |
| Telemetry & Monitoring  | OpenTelemetry integration         | Observability             | Telemetry Guide |
| Security Hardening      | Credential management, auditing   | Compliance                | Security Guide  |
| Custom Model Hosting    | SageMaker integration             | Private models            | SageMaker Guide |
| Load Balancing          | LiteLLM proxy integration         | Scale & routing           | Load Balancing  |

Security & Compliance:

  • ✅ SOC2 Type II compliant deployments
  • ✅ ISO 27001 certified infrastructure compatible
  • ✅ GDPR-compliant data handling (EU providers available)
  • ✅ HIPAA compatible (with proper configuration)
  • ✅ Hardened OS verified (SELinux, AppArmor)
  • ✅ Zero credential logging
  • ✅ Encrypted configuration storage

📖 Enterprise Deployment Guide - Complete production checklist


🎨 Professional CLI

15+ commands for every workflow:

| Command  | Purpose                            | Example                 | Documentation |
| -------- | ---------------------------------- | ----------------------- | ------------- |
| setup    | Interactive provider configuration | neurolink setup         | Setup Guide   |
| generate | Text generation                    | neurolink gen "Hello"   | Generate      |
| stream   | Streaming generation               | neurolink stream "Story"| Stream        |
| status   | Provider health check              | neurolink status        | Status        |
| loop     | Interactive session                | neurolink loop          | Loop          |
| mcp      | MCP server management              | neurolink mcp discover  | MCP CLI       |
| models   | Model listing                      | neurolink models        | Models        |
| eval     | Model evaluation                   | neurolink eval          | Eval          |

📖 Complete CLI Reference - All commands and options

💰 Smart Model Selection

NeuroLink features intelligent model selection and cost optimization:

Cost Optimization Features

  • 💰 Automatic Cost Optimization: Selects cheapest models for simple tasks
  • 🔄 LiteLLM Model Routing: Access 100+ models with automatic load balancing
  • 🔍 Capability-Based Selection: Find models with specific features (vision, function calling)
  • ⚡ Intelligent Fallback: Seamless switching when providers fail

# Cost optimization - automatically use cheapest model
npx @juspay/neurolink generate "Hello" --optimize-cost

# LiteLLM specific model selection
npx @juspay/neurolink generate "Complex analysis" --provider litellm --model "anthropic/claude-3-5-sonnet"

# Auto-select best available provider
npx @juspay/neurolink generate "Write code" # Automatically chooses optimal provider

✨ Interactive Loop Mode

NeuroLink features a powerful interactive loop mode that transforms the CLI into a persistent, stateful session. This allows you to run multiple commands, set session-wide variables, and maintain conversation history without restarting.

Start the Loop

npx @juspay/neurolink loop

Example Session

# Start the interactive session
$ npx @juspay/neurolink loop

neurolink » /set provider google-ai
✓ provider set to google-ai

neurolink » /set temperature 0.8
✓ temperature set to 0.8

neurolink » Tell me a fun fact about space

The quietest place on Earth is an anechoic chamber at Microsoft's headquarters in Redmond, Washington. The background noise is so low that it's measured in negative decibels, and you can hear your own heartbeat.

# Use "/" for CLI commands
neurolink » /generate "Draft a haiku"
...

# Use "//" to escape prompts starting with "/"
neurolink » //what is /usr/bin used for?
...

# Exit the session
neurolink » exit

Conversation Memory in Loop Mode

Start the loop with conversation memory to have the AI remember the context of your previous commands.

npx @juspay/neurolink loop --enable-conversation-memory

Skip the wizard and configure manually? See docs/getting-started/provider-setup.md.

CLI & SDK Essentials

The neurolink CLI mirrors the SDK, so teams can script experiments and codify them later.

# Discover available providers and models
npx @juspay/neurolink status
npx @juspay/neurolink models list --provider google-ai

# Route to a specific provider/model
npx @juspay/neurolink generate "Summarize customer feedback" \
  --provider azure --model gpt-4o-mini

# Turn on analytics + evaluation for observability
npx @juspay/neurolink generate "Draft release notes" \
  --enable-analytics --enable-evaluation --format json

The same capabilities are available programmatically through the SDK:

import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink({
  conversationMemory: {
    enabled: true,
    store: "redis",
  },
  enableOrchestration: true,
});

const result = await neurolink.generate({
  input: {
    text: "Create a comprehensive analysis",
    files: [
      "./sales_data.csv", // Auto-detected as CSV
      "examples/data/invoice.pdf", // Auto-detected as PDF
      "./diagrams/architecture.png", // Auto-detected as image
    ],
  },
  provider: "vertex", // PDF-capable provider (see docs/features/pdf-support.md)
  enableEvaluation: true,
  region: "us-east-1",
});

console.log(result.content);
console.log(result.evaluation?.overallScore);

Full command and API breakdown lives in docs/cli/commands.md and docs/sdk/api-reference.md.

Platform Capabilities at a Glance

| Capability           | Highlights                                                                                                           |
| -------------------- | -------------------------------------------------------------------------------------------------------------------- |
| Provider unification | 12+ providers with automatic fallback, cost-aware routing, provider orchestration (Q3).                               |
| Multimodal pipeline  | Stream images + CSV data + PDF documents across providers with local/remote assets. Auto-detection for mixed file types. |
| Quality & governance | Auto-evaluation engine (Q3), guardrails middleware (Q4), HITL workflows (Q4), audit logging.                          |
| Memory & context     | Conversation memory, Mem0 integration, Redis history export (Q4), context summarization (Q4).                         |
| CLI tooling          | Loop sessions (Q3), setup wizard, config validation, Redis auto-detect, JSON output.                                  |
| Enterprise ops       | Proxy support, regional routing (Q3), telemetry hooks, configuration management.                                      |
| Tool ecosystem       | MCP auto discovery, LiteLLM hub access, SageMaker custom deployment, web search.                                      |

Documentation Map

| Area            | When to Use                                     | Link                           |
| --------------- | ----------------------------------------------- | ------------------------------ |
| Getting started | Install, configure, run first prompt            | docs/getting-started/index.md  |
| Feature guides  | Understand new functionality front-to-back      | docs/features/index.md         |
| CLI reference   | Command syntax, flags, loop sessions            | docs/cli/index.md              |
| SDK reference   | Classes, methods, options                       | docs/sdk/index.md              |
| Integrations    | LiteLLM, SageMaker, MCP, Mem0                   | docs/LITELLM-INTEGRATION.md    |
| Operations      | Configuration, troubleshooting, provider matrix | docs/reference/index.md        |
| Visual demos    | Screens, GIFs, interactive tours                | docs/demos/index.md            |

Integrations

Contributing & Support


NeuroLink is built with ❤️ by Juspay. Contributions, questions, and production feedback are always welcome.

changelog

8.21.0 (2025-12-22)

Features

  • (types): Add office document type definitions and comprehensive tests (1b34d3d)

8.20.1 (2025-12-22)

Bug Fixes

  • (Validation): implement secure base64 validation with fail-fast checks (f1b9b9c), closes #277

8.20.0 (2025-12-22)

Features

  • (memory): Implement token based summarization (ffdc902)

8.19.1 (2025-12-20)

Bug Fixes

  • (files): comprehensive extension-less file detection with fallback parsing (FD-018) (7e9dbc7)

8.19.0 (2025-12-18)

Features

  • (tts): Integrate TTS into BaseProvider.generate() (ffae0b5)

8.18.0 (2025-12-16)

Features

  • (utils): standardize logging levels in CSVProcessor (1c348b2)

8.17.0 (2025-12-16)

Features

  • (tts): Add TTS type integration to GenerateOptions, GenerateResult, and StreamChunk (e290330)

8.16.0 (2025-12-16)

Features

  • (tts): Implement GoogleTTSHandler.getVoices() API (15d39f7)

8.15.0 (2025-12-14)

Features

  • (tts): Implement synthesize method in GoogleTTSHandler (9262e37)

8.14.0 (2025-12-14)

Features

  • (tts): Create GoogleTTSHandler skeleton structure (60db6a8)

8.13.2 (2025-12-14)

Bug Fixes

  • (sdk): Replace hardcoded timeouts with class constants (a34c291)

8.13.1 (2025-12-13)

Bug Fixes

  • (provider): Implement image count limits with validation and warnings (ff3e27a)

8.13.0 (2025-12-13)

Features

  • (tts): Implement TTSProcessor.synthesize() method (d6f3567)

8.12.0 (2025-12-13)

Features

  • (files): Install office processing dependencies: mammoth, xlsx, adm-zip, xml2js with TypeScript types (a236818)

8.11.0 (2025-12-12)

Features

  • (tts): implement TTSProcessor skeleton class with handler registry (8dc63d1)

8.10.1 (2025-12-12)

Bug Fixes

  • (ci): check formatting instead of auto-fix to catch issues during PR builds (6af89d2)

8.10.0 (2025-12-12)

Features

  • (cli): add video CLI flags tests and verification (2d75347)
  • (models): add GPT-5.2 and comprehensive model updates across all providers (b75042f)

Bug Fixes

  • (ci): use minimal plugins for semantic-release validation to avoid npm auth requirement (f3ab09e)

8.9.0 (2025-12-11)

Features

  • (csv): add sampleDataFormat option for CSV metadata (ded6ec4)

8.8.0 (2025-12-11)

Features

  • (types): add AudioProviderConfig type definition for transcription providers (c34f437)

8.7.0 (2025-12-10)

Features

  • (cli): implement TTS audio file output (TTS-024) (48af003)
  • (ImageProcessor): Add output validation to ImageProcessor.process() method (6fe3a16)
  • (imageProcessor): add retry logic with exponential backoff for URL downloads (e6ab4df)
  • (types): add AudioProcessorOptions and audioOptions to FileDetectorOptions (2bd877b)

Bug Fixes

  • (deps): convert canvas and pdfjs-dist to dynamic imports for SSR compatibility (cc7d99e)
  • (deps): force @semantic-release/npm v13 via pnpm override for OIDC support (8a528c9)
  • (lock): add missing update for lockfile (376b7ad)
  • (release): enable OIDC trusted publishing for npm (3ba6dd9)
  • (tts): add audio property to GenerateResult type and improve type safety (e85c7d0)
  • (workflows): add job-level OIDC permissions and remove conflicting auth (8ee4fb1)
  • (workflows): add OIDC authentication for npm trusted publishing (c6bb5bb)

8.6.0 (2025-12-06)

Features

  • (multimodal): add altText support to ImageContent for accessibility (27118c8), closes #565

Bug Fixes

  • (guardrails): added fallback for guardrail errors on azure's jailbreak errors (ae42552)
  • (observability): add support to let applications customize traces (608d991)

8.5.1 (2025-12-04)

Bug Fixes

  • (vertex): clarify schema+tools support for Gemini vs Claude models (e7beae9)

8.5.0 (2025-12-04)

Features

  • (audio): add AudioProcessorOptions type for audio transcription configuration (b969ba9)

8.4.1 (2025-12-04)

Bug Fixes

  • (mem0): custom instructions support for mem0 conversation ingestion (486c55c)

8.4.0 (2025-12-01)

Features

  • (core): comprehensive multimodal architecture with modular refactoring and enhanced testing (fd8d207)

8.3.0 (2025-11-28)

Features

  • (cli): make stream the default command in loop mode (7aeb1d7)

8.2.0 (2025-11-25)

Features

  • (vertex): add global endpoint support for Gemini 3 Pro Preview (5de2cbe)

8.1.0 (2025-11-20)

Features

  • (vertex): add gemini-3-pro-preview model support (896dc73)

8.0.1 (2025-11-20)

Bug Fixes

  • (lint): prettier and lint errors (810475c)
  • (memory): migrate to cloud-hosted mem0 API [BZ-45257] (3a53a0c)

8.0.0 (2025-11-19)

⚠ BREAKING CHANGES

  • (deps): Node.js 20.18.1+ is now required due to undici v7 dependency. Undici v7 requires the File API which is only available in Node.js 20.18.1+.

Changes:

  • Update fileDetector.ts to use interceptors.redirect()
  • Update messageBuilder.ts to use interceptors.redirect()
  • Add getGlobalDispatcher and interceptors imports from undici
  • Temporarily exclude known package vulnerabilities from security validation
  • Require Node.js >=20.18.1 in package.json engines
  • Update npm requirement to >=10.0.0
  • Remove Node 18 from CI test matrix

Fixes build failures introduced in f19c433 (undici bump to v7)

Features

  • (observability): add support for userid and sessionid in langfuse traces (b1895a6)

Bug Fixes

  • (deps): update undici v7 API usage and require Node.js 20+ (dc81bba)

7.54.0 (2025-11-08)

Features

  • (logs): enable neurolink logs to be pushed into lighthouse (9a752c4)

7.53.5 (2025-11-06)

7.53.4 (2025-11-05)

Bug Fixes

  • (sdk): structured object response in generate function (f16d597)

7.53.3 (2025-11-03)

7.53.2 (2025-10-28)

7.53.1 (2025-10-27)

Bug Fixes

  • (redis): Emit redis related info through events for logging (3224075)

7.53.0 (2025-10-26)

Features

  • (test): vitest configuration setup for cli (ffb7db3)

7.52.0 (2025-10-24)

Features

  • (redis): Enable SDK-level Redis configuration for multi-tenancy (6c68883)

7.51.3 (2025-10-23)

Bug Fixes

  • (neurolink): add Zod schema detection for inputSchema field in baseProvider (5ad0c0a)

7.51.2 (2025-10-14)

7.51.1 (2025-10-13)

7.51.0 (2025-10-12)

Features

  • (multimodal): add comprehensive PDF file support with native document processing (52abf1a)
  • (multimodal): add comprehensive PDF file support with native document processing (020e15a)

7.50.0 (2025-10-08)

Features

  • (observability): add langfuse and telemetry support (4172d28)

7.49.0 (2025-10-07)

Features

  • (cli): added support for resuming a conversation (b860d29)
  • (middleware): implement guardrails pre-call filtering with demo and proof (b99a7f1)
  • (multimodal): add comprehensive CSV file support with auto-detection and analysis tools (374b375)

Bug Fixes

  • (azure): add SDK parameter support for lighthouse tool events compatibility (a3bca3b)
  • (formatting): fixed linting issues with docs (a7d1aff)

7.48.1 (2025-10-02)

7.48.0 (2025-09-30)

Features

  • (cli): add command history support on up/down (5aa3c2d)

7.47.3 (2025-09-26)

7.47.2 (2025-09-26)

Bug Fixes

  • (timestamp): Incorrect timestamps being stored in redis (2d66232)

7.47.1 (2025-09-26)

Bug Fixes

  • (tools): Unregistered tools getting called (45fd67a)

7.47.0 (2025-09-25)

Features

  • (chat): Implement multimodal UI and extend SDK support (12a2f59)

7.46.0 (2025-09-24)

Features

  • (auto-evaluation): added auto evaluation for LLM response (6f23fae)

7.45.0 (2025-09-24)

Features

  • (provider): Add support to provide region while streaming or generating for few providers (a0a5bed)

7.44.0 (2025-09-24)

Features

  • (sdk): Integrate mem0 for better context (78edf08)

7.43.0 (2025-09-23)

Features

  • (cli): auto-detect and enable redis support in loop conversation memory (b7b5514)

7.42.0 (2025-09-23)

Features

  • (middleware): robust bad word filtering in guardrails and correct stream usage (d396797)

7.41.4 (2025-09-21)

Bug Fixes

  • (types): expose core SDK types for external developer integration (66199c9)

7.41.3 (2025-09-20)

7.41.2 (2025-09-20)

7.41.1 (2025-09-20)

7.41.0 (2025-09-20)

Features

  • (test): Added tests for hitl (5ab1885)

7.40.1 (2025-09-17)

Bug Fixes

  • (title): Update system prompt to generate better title (9d0e5b8)

7.40.0 (2025-09-17)

Features

  • (envsetup): Added env setup test (d08917e)

7.39.0 (2025-09-16)

Features

  • (hitl): Implemented human in the loop for sdk (1a66f53)

7.38.1 (2025-09-16)

Bug Fixes

  • (tool): Openai provider's no of tool to pass (8804d56)

7.38.0 (2025-09-14)

Features

  • (memory): Add support to store tool history in redis (93d3223)

7.37.1 (2025-09-13)

Bug Fixes

  • (tools): resolve MCP tool execution and parameter validation failures (2aa2ef7)

7.37.0 (2025-09-10)

Features

  • (sdk): Add advanced orchestration of model and providers BZ-43839 (840d697)

7.36.0 (2025-09-10)

Features

  • (image): added support for multimodality(image) in cli and sdk (678b61b)

7.35.0 (2025-09-09)

Features

  • (cli): Add interactive provider setup wizard (50ee963)

7.34.0 (2025-09-09)

Features

  • (cli): expose memory commands to cli from sdk (b9eb802)
  • (cli): Implement interactive loop mode (89b5012)
  • (memory): Add Redis Support for conversation History (28e2f86)
  • (tool): Optimize tool discovery and add conversation tutorial (56c7a3f)

7.33.4 (2025-09-04)

Bug Fixes

  • (azure): resolve provider initialization and streaming issues (f35114b)

7.33.3 (2025-09-04)

7.33.2 (2025-09-04)

Bug Fixes

  • (latency): Reduced Tool Latency via Concurrent Server Init (eb36fc9)

7.33.1 (2025-09-03)

7.33.0 (2025-09-03)

Features

  • (provider): refactor generate method to use streamText for improved performance and consistency (a118300)

7.32.0 (2025-09-03)

Features

  • (sdk): Add Speech to Speech agents implementation (a8bf953)

7.31.0 (2025-09-01)

Features

  • (core): implement global middleware architecture (8eb711a)

7.30.1 (2025-08-31)

Bug Fixes

  • (bedrock): migrate from ai-sdk to native AWS SDK implementation (e5d8a4c)

7.30.0 (2025-08-29)

Features

  • (SDK): Integrate context summarization with conversation memory BZ-43344 (a2316ff)

7.29.3 (2025-08-29)

Bug Fixes

  • (build): resolve ESLint compliance and TypeScript compilation errors (c9030f2)

7.29.2 (2025-08-29)

Bug Fixes

  • (providers): enable drop-in replacement for bedrock-mcp-connector (9b67d23)

7.29.1 (2025-08-28)

Bug Fixes

  • (vertex): restored support for adc (238666a)

7.29.0 (2025-08-26)

Features

  • (guardrails): added guardrails as a middleware (ac60f6b)

7.28.1 (2025-08-26)

Bug Fixes

  • (cli): resolve ESM interop and spawn synchronization issues (4983221)
  • (security): prevent command injection in ollama pull (27e6088)

7.28.0 (2025-08-25)

Features

  • (proxy): comprehensive proxy support for all AI providers (332974a)

7.27.0 (2025-08-24)

Features

  • (History): Added the functionality to export the conversation history for debugging purpose (71cec7e)

7.26.1 (2025-08-21)

7.26.0 (2025-08-21)

Features

  • (core): implement provider performance metrics and optimization system (caa68e7)

7.25.0 (2025-08-21)

Features

  • (middleware): add custom middleware development guide (ffd0343)

7.24.1 (2025-08-21)

7.24.0 (2025-08-20)

Features

  • (deploy): Added a configurable force-rebuild flag for the deploy command. (e5a81d4)

7.23.0 (2025-08-19)

Features

  • (docs): modernize api examples (c77706b)

7.22.0 (2025-08-19)

Features

  • (memory): Add conversation memory test suite for NeuroLink stream functionality (b896bef)

7.21.0 (2025-08-19)

Features

  • (provider): add env-based fallback for available models (BZ-43348) (4b6cee3)

7.20.0 (2025-08-19)

Features

  • (cli): add --version flag to display package version (632eb7c)

7.19.0 (2025-08-19)

Features

  • (docs): HUMAN IN THE LOOP - User consent for some tools execution (3f8db51)

7.18.0 (2025-08-19)

Features

  • (dev-experience): add pre-commit hook for automated quality checks (7d26726)

7.17.0 (2025-08-19)

Features

  • (proxy): implement comprehensive enterprise proxy support with testing (0dd124b)

7.16.0 (2025-08-19)

Features

  • (cli): Add validate provider config support in CLI (2e8d6ad)

7.15.0 (2025-08-19)

Features

  • (tools): add websearch tool using Gemini AI for Google search integration BZ-43347 (bcd5160)

7.14.8 (2025-08-19)

Bug Fixes

  • (mcp): implement generic error handling for all MCP server response formats (5aa707a)

7.14.7 (2025-08-18)

Bug Fixes

  • (core): add validation for tool registration (caed431)

7.14.6 (2025-08-18)

Bug Fixes

  • (docs): improve and update cli guide (4039044)

7.14.5 (2025-08-18)

Bug Fixes

  • (mcp): prevent memory leak from uncleared interval timer in MCPCircuitBreaker (1f2ae47)

7.14.4 (2025-08-18)

Bug Fixes

  • (docs): use pnpm in setup script and correct modelServer run command BZ-43341 (fcfa465)

7.14.3 (2025-08-16)

Bug Fixes

  • (typescript): eliminate all TypeScript any types for improved type safety (45043cb)

7.14.2 (2025-08-16)

Bug Fixes

  • (sdk): add generateText backward compatibility and fix formatting consistency (93ff23c)

7.14.1 (2025-08-15)

Bug Fixes

  • (mcp): implement external MCP server integration with real tool execution (9427a95)

7.14.0 (2025-08-14)

Features

  • (external-mcp): add external MCP server integration support (c03dee8)

7.13.0 (2025-08-14)

Features

  • (SDK): Add context summarizer for conversation BZ-43204 (38231c4)

7.12.0 (2025-08-14)

Features

  • (memory): Added support for Conversation History (5cf3650)

7.11.1 (2025-08-14)

Bug Fixes

  • (ci): add external contributor detection to GitHub Copilot PR Review workflow (c7d9f2c)

7.11.0 (2025-08-14)

Features

  • (providers): consolidate provider logic to BaseProvider for consistency and performance (a5da739)

7.10.3 (2025-08-13)

Bug Fixes

  • (ci): make comment posting non-blocking for external contributor PRs (f40b3f7)

7.10.2 (2025-08-13)

Bug Fixes

  • (ci): prevent external contributor PR failures due to comment permissions (ac76270)

7.10.1 (2025-08-13)

Bug Fixes

  • (ci): resolve external contributor PR failures in single commit policy validation (0536828), closes #60

7.10.0 (2025-08-12)

Features

  • (build): implement comprehensive build rule enforcement system (7648cad)

7.9.0 (2025-08-11)

Features

  • (core): add EventEmitter functionality for real-time event monitoring (fd8b6b0)

Bug Fixes

  • (ci): add semantic-release configuration with dependencies and testing (d48e274)
  • (ci): configure semantic-release to handle ticket prefixes with proper JSON escaping (6d575dc)
  • (ci): correct JSON escaping in semantic-release configuration (b9bbe50)
  • (cli): missing model name in analytics output (416c5b7)
  • (providers): respect VERTEX_MODEL environment variable in model selection (40eddb1)

7.8.0 (2025-08-11)

Bug Fixes

  • exclude _site directory from ESLint (c0e5f1d)

Features

  • providers: add comprehensive Amazon SageMaker provider integration (9ef4ebe)

7.7.1 (2025-08-11)

Bug Fixes

  • providers: resolve ESLint errors and improve validation in Vertex AI health checker (a5822ee)

7.7.0 (2025-08-10)

Features

  • tools: add Lighthouse compatibility with unified registerTools API (5200da2)

7.6.1 (2025-08-09)

Bug Fixes

  • docs: resolve documentation deployment and broken links (e78d7e8)

7.6.0 (2025-08-09)

Features

  • openai-compatible: add OpenAI Compatible provider with intelligent model auto-discovery (3041d26)

7.5.0 (2025-08-06)

Features

  • providers: add LiteLLM provider integration with access to 100+ AI models (8918f8e)

7.4.0 (2025-08-06)

Features

  • add Bitbucket MCP server integration (239ca6d)

7.3.8 (2025-08-05)

Bug Fixes

7.3.7 (2025-08-04)

Bug Fixes

  • docs: configure pymdownx.emoji to properly render Material Design icons (09ba764)

7.3.6 (2025-08-04)

Bug Fixes

  • docs: trigger fresh deployment after GitHub Pages source change (e9c5975)

7.3.5 (2025-08-04)

Bug Fixes

  • docs: force fresh deployment after GitHub Pages source change to GitHub Actions (b4b498f)

7.3.4 (2025-08-04)

Bug Fixes

  • docs: retry deployment to apply .nojekyll fix for MkDocs Material theme (ce0afab)

7.3.3 (2025-08-04)

Bug Fixes

  • docs: add .nojekyll file to prevent Jekyll processing (a2f28b2)

7.3.2 (2025-08-04)

Bug Fixes

  • docs: update copyright year and set dark theme as default (bf1973d)

7.3.1 (2025-08-04)

Bug Fixes

  • docs: resolve GitHub Actions workflow YAML parsing errors (182416d)

7.3.0 (2025-08-04)

Features

  • docs: implement comprehensive documentation website infrastructure (77c81f4)

7.2.0 (2025-08-04)

Features

  • core: complete NeuroLink Phase 1-4 implementation with comprehensive verification (37d5cb1)

7.1.0 (2025-08-03)

Features

  • core: major CLI optimization and comprehensive core functionality overhaul (66ad664)

7.0.0 (2025-07-31)

Code Refactoring

  • structure: standardize all filenames and directories to camelCase (656d094)

BREAKING CHANGES

  • structure: None - all functionality preserved, only naming conventions updated

6.2.1 (2025-07-31)

Bug Fixes

  • logging: consolidate MCP logging and add debug flag control (ea0132d)

6.2.0 (2025-07-30)

Features

  • systematic dead code elimination across entire codebase (571060a)

6.1.0 (2025-07-24)

Features

  • github: enhance GitHub project configuration and community features (deb1407)

6.0.0 (2025-07-24)

Features

  • types: eliminate all TypeScript any usage across entire codebase (777c3cd)

BREAKING CHANGES

  • types: Complete removal of TypeScript 'any' types for enhanced type safety

This comprehensive refactor eliminates all TypeScript 'any' usage across the entire NeuroLink codebase, affecting 140+ files with systematic type safety improvements:

  • NEW: src/lib/types/common.ts - Unknown, UnknownRecord, JsonValue utility types
  • NEW: src/lib/types/tools.ts - Tool system types (ToolArgs, ToolResult, ToolDefinition)
  • NEW: src/lib/types/providers.ts - Provider-specific types (ProviderConfig, AnalyticsData)
  • NEW: src/lib/types/cli.ts - CLI command types and interfaces
  • NEW: src/lib/types/index.ts - Centralized type exports

  • Export NeuroLinkSDK interface from baseProvider for proper typing

  • Fix all provider constructors: anthropic, azureOpenai, googleAiStudio, googleVertex, mistral
  • Update functionCalling-provider with proper type casting
  • Enhanced analytics-helper with comprehensive type guards
  • Fix timeout-wrapper and all provider error handling

  • Fix directToolsServer: inputSchema and execution result types

  • Fix aiCoreServer: provider factory and result access types
  • Fix transport-manager: HTTP client transport constructor types
  • Fix unified-registry: server configuration type compatibility
  • Update all MCP adapters, clients, managers, and orchestrators
  • Fix tool integration, registry, and session management
  • Enhanced error handling and recovery systems

  • Update baseProvider with proper abstract method signatures

  • Fix serviceRegistry with type-safe service management
  • Enhanced factory pattern with proper generic constraints
  • Update evaluation system with strict typing
  • Fix analytics core with proper data flow types

  • Fix all CLI commands with proper argument typing

  • Update command factory with type-safe command creation
  • Enhanced tool extension and registration with strict interfaces
  • Fix SDK integration with proper type boundaries

  • Update all test files with proper type assertions

  • Fix test helpers with generic constraints
  • Enhanced integration tests with type safety
  • Update performance and streaming tests
  • Fix all provider-specific test suites

  • Update eslint.config.js for enhanced type checking

  • Fix logger with proper structured logging types
  • Update provider validation with type guards
  • Enhanced proxy and networking layers
  • Fix telemetry service with proper event typing

  • Update tsconfig.json for stricter type checking

  • Enhanced build pipeline compatibility
  • Fix package exports and type definitions

  • ESLint violations: 14 → 0 (100% elimination)

  • TypeScript compilation: ✅ PASSING
  • Build pipeline: ✅ PASSING
  • All tests: ✅ PASSING
  • Runtime behavior: ✅ PRESERVED

This change maintains complete backward compatibility while establishing a foundation for enhanced developer experience and code reliability.

Co-Authored-By: Claude noreply@anthropic.com

5.3.0 (2025-07-23)

Features

  • mcp: enhance MCP integration with comprehensive testing infrastructure and tool ecosystem improvements (a38d845)

5.2.0 (2025-07-22)

Features

  • core: implement comprehensive factory pattern architecture with full MCP integration and provider unification (b13963a)

5.1.0 (2025-07-13)

Features

  • core: complete unified multimodal AI platform architecture with generate/stream unification (846e409)

5.0.0 (2025-07-11)

  • refactor(cli)!: remove agent-generate command, unify CLI to single generate command (9c034b7)

Bug Fixes

  • scripts: update docs:generate to use docs:validate instead of removed docs:sync (3277bab)

BREAKING CHANGES

  • agent-generate command has been removed

The agent-generate command has been completely removed from the CLI. All functionality is now available through the enhanced generate command with tools enabled by default.

Changes Made:

  • Delete src/cli/commands/agent-generate.ts command implementation
  • Remove agent-generate import and registration from src/cli/index.ts
  • Update docs/CLI-GUIDE.md to remove agent-generate documentation
  • Update memory-bank documentation files to reflect unified approach
  • Remove agent-generate test cases from scripts/corrected-functionality-test.js

Migration Guide:

  • Replace neurolink agent-generate "prompt" with neurolink generate "prompt"
  • Tools are enabled by default in generate command
  • Use --disable-tools flag if tool-calling is not desired
  • All previous agent-generate functionality available in generate command

Technical Impact:

  • Simplified CLI interface with single text generation command
  • Reduced codebase complexity and maintenance overhead
  • Enhanced generate command provides all tool-calling capabilities
  • Zero breaking changes to core functionality
  • Clean TypeScript compilation and documentation consistency

4.2.0 (2025-07-11)

Features

  • mcp: comprehensive MCP system enhancements with timeout management (1d35b5e)

4.1.1 (2025-07-10)

Bug Fixes

  • format: fix formatting issues across all files (a49a94b)

4.1.0 (2025-07-09)

Features

  • mcp: comprehensive MCP system overhaul with GitHub PR fixes (c0d8114)

4.0.0 (2025-07-06)

  • feat(core)!: transform NeuroLink into enterprise AI analytics platform (74c88d6)

BREAKING CHANGES

  • Major architectural enhancement from basic AI SDK to comprehensive enterprise platform with analytics, evaluation, real-time services, and business intelligence capabilities.

Core Features Added:

  • Analytics System: Usage tracking, cost estimation, performance monitoring
  • Evaluation Framework: AI-powered quality assessment and scoring
  • Enterprise Config: Backup/restore, validation, provider management
  • Real-time Services: Chat, streaming, websocket capabilities
  • Telemetry: OpenTelemetry integration for production monitoring
  • Documentation: Complete business and technical documentation overhaul
  • Examples: Comprehensive demo library with 30+ working examples
  • Provider Integration: Analytics helper integrated across all 9 providers

Technical Implementation:

  • NEW: src/lib/core/analytics.ts - Real usage tracking engine
  • NEW: src/lib/core/evaluation.ts - AI quality assessment framework
  • NEW: src/lib/config/configManager.ts - Enterprise configuration management
  • NEW: src/lib/chat/ - Complete chat service infrastructure (7 files)
  • NEW: src/lib/services/ - Streaming and WebSocket architecture
  • NEW: src/lib/telemetry/ - OpenTelemetry integration
  • NEW: examples/ - Comprehensive demo ecosystem (30+ examples)
  • NEW: docs/ - Complete documentation overhaul (15+ guides)
  • ENHANCED: All 9 providers with analytics integration
  • ENHANCED: CLI with professional analytics display
  • ENHANCED: Testing infrastructure with new test suites

Files Changed: 127 files (+20,542 additions, -6,142 deletions)
Backward Compatibility: 100% maintained - existing functionality preserved
New Features: Opt-in via --enable-analytics --enable-evaluation flags

Business Impact:

  • Production Monitoring: Real-time performance and cost tracking
  • Quality Assurance: AI-powered response evaluation and scoring
  • Cost Optimization: Usage analytics and provider comparison
  • Risk Management: Backup systems and error recovery
  • Developer Experience: Professional CLI and comprehensive examples
  • Enterprise Readiness: OpenTelemetry observability and operational excellence

Performance Metrics:

  • Analytics: Real token counts (299-768), response times (2-10s)
  • Evaluation: Quality scores (8-10/10), sub-6s processing
  • Providers: All 9 providers enhanced with zero breaking changes
  • CLI: Professional output with debug diagnostics

3.0.1 (2025-07-01)

Bug Fixes

  • cli: honor --model parameter in CLI commands (467ea85)

3.0.0 (2025-07-01)

Features

  • proxy: add comprehensive enterprise proxy support across all providers (9668e67)

BREAKING CHANGES

  • proxy: None - fully backward compatible

Files modified:

  • docs/ENTERPRISE-PROXY-SETUP.md (NEW) - Comprehensive enterprise proxy guide
  • docs/PROVIDER-CONFIGURATION.md - Added proxy configuration section
  • docs/CLI-GUIDE.md - Added proxy environment variables documentation
  • docs/ENVIRONMENT-VARIABLES.md - Added proxy configuration examples
  • docs/TROUBLESHOOTING.md - Added proxy troubleshooting procedures
  • .env.example - Added proxy environment variables
  • memory-bank/ - Updated with proxy implementation milestone
  • .clinerules - Added proxy success patterns
  • CHANGELOG.md - Added v2.2.0 proxy support entry
  • package.json - Updated description with enterprise features
  • README.md - Removed outdated content

2.1.0 (2025-06-29)

Features

  • timeout: add comprehensive timeout support for all AI providers (8610f4a)

2.0.0 (2025-06-28)

Features

  • cli: add command variations and stream agent support (5fc4c26)

BREAKING CHANGES

  • cli: 'generate-text' command is deprecated and will be removed in v2.0

1.11.3 (2025-06-22)

Bug Fixes

  • resolve MCP external tools returning raw JSON instead of human-readable responses (921a12b)

1.11.2 (2025-06-22)

Bug Fixes

  • ci: refactor auto-converted Node.js scripts (4088888)

1.11.1 (2025-06-21)

Bug Fixes

  • add backward compatibility for gemini (5e84dab)

1.11.0 (2025-06-21)

Features

  • finalize MCP ecosystem and resolve all TypeScript errors (605d8b2)

1.10.0 (2025-06-21)

Features

  • cli: improve provider status accuracy and error handling (523e845)

1.9.0 (2025-06-20)

  • 🎉 feat: Enhanced multi-provider support with production infrastructure (#16) (55eb81a)

Bug Fixes

  • cli: prevent debug log persistence in production deployments (#14) (7310a4c)
  • production-ready CLI logging system and enhanced provider fallback (#13) (a7e8122)

Features

  • 🚀 MCP automatic tool discovery + dynamic models + AI function calling (781b4e5)
  • add Google AI Studio integration and restructure documentation (#11) (346fed2)
  • add Google AI Studio, fix CLI dependencies, and add LICENSE file (#12) (c234bcb)
  • implement AI Development Workflow Tools and comprehensive visual documentation (#10) (b0ae179)
  • implement comprehensive CLI tool with visual documentation and … (#4) (9991edb)

BREAKING CHANGES

  • Enhanced provider architecture with MCP integration

  • ✨ MCP automatic tool discovery - detects 82+ tools from connected servers

  • 🎯 AI function calling - seamless tool execution with Vercel AI SDK
  • 🔧 Dynamic model configuration via config/models.json
  • 🤖 Agent-based generation with automatic tool selection
  • 📡 Real-time MCP server management and monitoring

  • Added MCPEnhancedProvider for automatic tool integration

  • Implemented function calling for Google AI, OpenAI providers
  • Created unified tool registry for MCP and built-in tools
  • Enhanced CLI with agent-generate and MCP management commands
  • Added comprehensive examples and documentation

  • Automatic .mcp-config.json discovery across platforms

  • Session-based context management for tool execution
  • Graceful fallback when MCP servers unavailable
  • Performance-optimized tool discovery (<1ms per tool)

  • Added 5 new comprehensive guides (MCP, troubleshooting, dynamic models)

  • Created practical examples for all integration patterns
  • Updated API reference with new capabilities
  • Enhanced memory bank with implementation details

Resolves: Enhanced AI capabilities with real-world tool integration

  • None - 100% backward compatibility maintained

Closes: Enhanced multi-provider support milestone
Ready for: Immediate production deployment
Impact: Most comprehensive AI provider ecosystem (9 providers)

Co-authored-by: sachin.sharma sachin.sharma@juspay.in

@juspay/neurolink

1.8.0

🎯 Major Feature: Dynamic Model Configuration System

  • ⚡ Revolutionary Model Management: Introduced dynamic model configuration system replacing static enums

    • Self-Updating Models: New models automatically available without code updates
    • Cost Optimization: Automatic selection of cheapest models for tasks
    • Smart Resolution: Fuzzy matching, aliases, and capability-based search
    • Multi-Source Loading: Configuration from API → GitHub → local with fallback
  • 💰 Cost Intelligence: Built-in cost optimization and model selection algorithms

    • Current Leader: Gemini 2.0 Flash at $0.000075/1K input tokens
    • Capability Mapping: Find models by features (functionCalling, vision, code-execution)
    • Real-Time Pricing: Always current model costs and performance data
    • Budget Controls: Maximum price filtering and cost-aware selection
  • 🔧 Production-Ready Infrastructure: Complete system with validation and monitoring

    • Model Configuration Server: REST API with search capabilities (scripts/model-server.js)
    • Zod Schema Validation: Type-safe runtime configuration validation
    • Comprehensive Testing: Full test suite for all dynamic model functionality
    • Documentation: Complete guide with examples and best practices
  • 🏷️ Smart Model Features: Advanced model resolution and aliasing

    • Aliases: Use friendly names like "claude-latest", "best-coding", "fastest"
    • Default Models: Provider-specific defaults when no model specified
    • Fuzzy Matching: "opus" → resolves to "claude-3-opus"
    • Deprecation Handling: Automatically exclude deprecated models
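The alias, fuzzy-matching, and cost-selection behaviors above can be sketched as a small self-contained example. The model IDs, prices, and helper functions here are hypothetical illustrations built from these release notes, not the actual dynamicModels.ts implementation:

```typescript
interface ModelInfo {
  id: string;
  inputCostPer1K: number; // USD per 1K input tokens
  deprecated?: boolean;
}

// Hypothetical inventory; real data comes from config/models.json
const MODELS: ModelInfo[] = [
  { id: "claude-3-opus", inputCostPer1K: 0.015 },
  { id: "gemini-2.0-flash", inputCostPer1K: 0.000075 },
  { id: "gpt-4-turbo", inputCostPer1K: 0.01, deprecated: true },
];

const ALIASES: Record<string, string> = { "claude-latest": "claude-3-opus" };

function resolveModel(query: string): string | undefined {
  const active = MODELS.filter((m) => !m.deprecated); // deprecation handling
  if (ALIASES[query]) return ALIASES[query]; // friendly-name aliases
  const exact = active.find((m) => m.id === query);
  if (exact) return exact.id;
  // Fuzzy matching: "opus" resolves to "claude-3-opus"
  return active.find((m) => m.id.includes(query))?.id;
}

function cheapestModel(): string {
  // Cost optimization: pick the lowest input price among active models
  return MODELS.filter((m) => !m.deprecated).reduce((a, b) =>
    a.inputCostPer1K <= b.inputCostPer1K ? a : b,
  ).id;
}
```

With this inventory, `resolveModel("opus")` yields "claude-3-opus", the deprecated GPT-4 Turbo is never returned, and `cheapestModel()` selects Gemini 2.0 Flash.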

Technical Implementation

  • New Module: src/lib/core/dynamicModels.ts - Core dynamic model provider
  • Configuration: config/models.json - Structured model definitions with metadata
  • Integration: Updated AIProviderFactory to use dynamic models by default
  • Testing: Comprehensive test suite (test-dynamicModels.js, test-complete-integration.js)
  • Server: Mock hosted server for testing and development (scripts/model-server.js)

API Enhancements

  • Environment Variables: Added GOOGLE_AI_API_KEY for better compatibility
  • New Scripts: npm run model-server, npm run test:dynamicModels
  • Model Search API: RESTful endpoints for model discovery and filtering
  • Performance: Sub-millisecond provider creation with intelligent caching

Current Model Inventory

  • 10 Active Models: Across Anthropic, OpenAI, Google, and Bedrock
  • Cost Range: $0.000075 - $0.075 per 1K input tokens (100x cost difference)
  • Capabilities: Function-calling (9 models), Vision (7 models), Code-execution (1 model)
  • Deprecation Tracking: 1 deprecated model (GPT-4 Turbo) automatically excluded

Breaking Changes

  • MCP Default: MCP tools now enabled by default in AIProviderFactory.createProvider
  • Environment: Added GOOGLE_AI_API_KEY requirement for Google AI Studio
  • Model Resolution: Some edge cases in model name resolution may behave differently

Migration Notes

  • Backward Compatible: Existing code continues to work with improved functionality
  • Optional Features: Dynamic model features are additive and optional
  • Configuration: No changes required to existing .env files
  • Performance: Improved provider creation speed and reliability

1.7.1

Bug Fixes - MCP System Restoration

  • 🔧 Fixed Built-in Tool Loading: Resolved critical circular dependency issues preventing default tools from loading

    • Root Cause: Circular dependency between config.ts and unified-registry.ts preventing proper initialization
    • Solution: Implemented dynamic imports and restructured initialization chain
    • Result: Built-in tools restored from 0 → 3 tools (100% recovery rate)
  • ⏰ Fixed Time Tool Functionality: Time tool now properly available and returns accurate real-time data

    • Fixed tool registration and execution pathway
    • Proper timezone handling and formatting
    • Verified accuracy against system time
  • 🔍 Enhanced External Tool Discovery: 58+ external MCP tools now discoverable via comprehensive auto-discovery

    • Auto-discovery across VS Code, Claude Desktop, Cursor, Windsurf
    • Proper placeholder system for lazy activation
    • Unified registry integration
  • 🏗️ Unified Registry Architecture: Centralized tool management system now fully operational

    • Seamless integration of built-in and external tools
    • Proper initialization sequence and dependency management
    • Enhanced debugging and status reporting

Technical Changes

  • Fixed circular dependency between core MCP modules
  • Updated initialize.ts to use dynamic imports preventing startup issues
  • Enhanced loadDefaultRegistryTools() to ensure proper built-in server registration
  • Temporarily disabled AI core server to resolve complex dependencies (utility server fully working)
  • Improved error handling and logging throughout MCP system
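The dynamic-import approach described above can be sketched as follows. This is an illustrative, self-contained example: the registry shape is hypothetical, and `node:os` stands in for the real unified-registry module that would otherwise create a circular top-level import:

```typescript
type Tool = { name: string; run: () => string };

const tools = new Map<string, Tool>();

let initialized = false;
async function loadDefaultRegistryTools(): Promise<void> {
  if (initialized) return; // idempotent: safe from multiple entry points
  initialized = true;
  // A dynamic import defers module evaluation until first use, after both
  // sides of the former circular dependency have finished initializing.
  const os = await import("node:os"); // stands in for "./unified-registry"
  tools.set("get-current-time", {
    name: "get-current-time",
    run: () => new Date().toISOString(),
  });
  tools.set("host-info", { name: "host-info", run: () => os.platform() });
}

async function getTool(name: string): Promise<Tool | undefined> {
  await loadDefaultRegistryTools(); // lazy initialization on first access
  return tools.get(name);
}
```

The key point is that no module evaluates the other at load time; the registry is populated only when a tool is first requested.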

Validation Results

  • Built-in Tools: 3/3 working (get-current-time, calculate-date-difference, format-number)
  • External Discovery: 58+ tools discovered across multiple MCP sources
  • Tool Execution: Real-time AI tool calling verified and working
  • System Integration: Full CLI and SDK integration operational

Breaking Changes

  • None - all changes are backward compatible improvements

Migration Notes

  • Existing MCP configurations continue to work
  • Built-in tools now work automatically without additional setup
  • External tools require proper MCP server configuration (as before)

1.7.0

Patch Changes

  • 🔧 Version Bump: Updated version to 1.7.0 to publish the three-provider implementation
    • All code changes were already included in 1.6.0 but not published
    • This version publishes the complete implementation to npm

1.6.0

Major Changes

  • 🎉 Universal AI Provider Support: Expanded from 6 to 9 AI providers with support for open source models, local AI, and European compliance
    • 🆕 Hugging Face Provider: Access to 100,000+ open source models with community-driven AI ecosystem
    • 🆕 Ollama Provider: 100% local AI execution with complete data privacy and no internet required
    • 🆕 Mistral AI Provider: European GDPR-compliant AI with competitive pricing and multilingual models

Features

  • 🛠️ Enhanced CLI with Ollama Commands: New Ollama-specific management commands

    • neurolink ollama list-models - List installed local models
    • neurolink ollama pull <model> - Download models locally
    • neurolink ollama remove <model> - Remove installed models
    • neurolink ollama status - Check Ollama service health
    • neurolink ollama start/stop - Manage Ollama service
    • neurolink ollama setup - Interactive setup wizard
  • 📚 Comprehensive Documentation: Complete documentation for all new providers

    • OLLAMA-SETUP.md: Platform-specific installation guides
    • PROVIDER-COMPARISON.md: Detailed provider comparison matrix
    • Updated all documentation to reflect 9 providers
    • Enhanced provider configuration guides

Technical Implementation

  • Provider Files: huggingFace.ts, ollama.ts, mistralAI.ts
  • Dependencies: Added @huggingface/inference, @ai-sdk/mistral, inquirer
  • MCP Integration: All 10 MCP tools support new providers
  • Demo Updates: Enhanced demo to showcase all 9 providers
  • CLI Enhancement: Ollama command structure with 7 subcommands
  • Provider Priority: Updated auto-selection to include new providers

Provider Comparison

| Provider     | Best For      | Setup Time | Privacy | Cost    |
| ------------ | ------------- | ---------- | ------- | ------- |
| OpenAI       | General use   | 2 min      | Cloud   | $$$     |
| Ollama       | Privacy       | 5 min      | Local   | Free    |
| Hugging Face | Open source   | 2 min      | Cloud   | Free/$$ |
| Mistral      | EU compliance | 2 min      | Cloud   | $$      |

Bug Fixes

  • 🔧 Local Provider Fallback: Implemented no-fallback policy for Ollama
    • When explicitly requesting --provider ollama, no cloud fallback occurs
    • Preserves user privacy intent when using local providers
    • Auto-selection still maintains intelligent fallback
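The no-fallback policy above can be expressed as a small decision helper. This is a hypothetical sketch, not the actual NeuroLink source; the local-provider list is an assumption based on these notes:

```typescript
// Providers that run entirely on the user's machine
const LOCAL_PROVIDERS = new Set(["ollama"]);

function shouldFallbackToCloud(requestedProvider?: string): boolean {
  // Explicit request for a local provider: honor the user's privacy
  // intent and never silently route the prompt to a cloud provider.
  if (requestedProvider && LOCAL_PROVIDERS.has(requestedProvider)) {
    return false;
  }
  // Auto-selection (no explicit provider) or an explicit cloud provider:
  // intelligent fallback remains available.
  return true;
}
```

So `--provider ollama` disables fallback entirely, while omitting the flag keeps the normal fallback chain.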

Breaking Changes

  • None - 100% backward compatibility maintained

1.5.3

Patch Changes

  • 🔧 CLI Debug Log Persistence Fix: Fixed unwanted debug logs appearing in production deployments
    • Issue: CLI showed debug logs even when --debug flag was not provided, cluttering production output
    • Root Cause: CLI middleware had a logical gap: NEUROLINK_DEBUG wasn't explicitly set to 'false' when no debug flag was provided, allowing inherited environment variables to persist
    • Solution: Updated middleware to always set NEUROLINK_DEBUG = 'false' when debug mode not enabled
    • Impact: Deterministic logging behavior - debug logs only appear when explicitly requested with --debug flag

Technical Changes

  • Clean Production Output: No debug logs in deployed CLI unless --debug flag explicitly provided
  • Deterministic Behavior: Logging controlled by CLI flags, not inherited environment variables
  • Backward Compatible: Debug mode still works perfectly when --debug flag is used
  • Environment Independence: CLI output no longer affected by external NEUROLINK_DEBUG settings

CLI Behavior Fix

# Before Fix (Problematic)
neurolink generate-text "test"
# Could show debug logs if NEUROLINK_DEBUG was set in environment

# After Fix (Clean)
neurolink generate-text "test"
# Output: ⠋ 🤖 Generating text... ✔ ✅ Text generated successfully! [content]

# Debug still works when requested
neurolink generate-text "test" --debug
# Output: [debug logs] + spinner + success + content

1.5.2

Patch Changes

  • 🔧 Production-Ready CLI Logging System: Fixed critical logging system for clean production output

    • Issue: CLI showed excessive debug output during normal operation, breaking demo presentations
    • Root Cause: Mixed console.log statements bypassed conditional logger system
    • Solution: Systematic replacement of all console.log with logger.debug across codebase
    • Impact: Clean CLI output by default with conditional debug available via NEUROLINK_DEBUG=true
  • 🔄 Enhanced Provider Fallback Logic: Fixed incomplete provider fallback coverage

    • Issue: Provider fallback only attempted 4 of 6 providers (missing Anthropic & Azure)
    • Root Cause: Incomplete provider array in NeuroLink class fallback logic
    • Solution: Updated to include all 6 providers: ['openai', 'vertex', 'bedrock', 'anthropic', 'azure', 'google-ai']
    • Impact: 100% provider coverage with comprehensive fallback for maximum reliability
  • 🧹 Console Statement Cleanup: Systematic cleanup of debug output across entire codebase

    • Files Updated: src/lib/neurolink.ts, src/lib/core/factory.ts, src/lib/providers/openAI.ts, src/lib/mcp/servers/aiProviders/aiCoreServer.ts
    • Pattern: Replaced 200+ console.log() statements with logger.debug() calls
    • Result: Professional CLI behavior suitable for production deployment and demos
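The completed fallback coverage can be sketched as a simple loop over the provider order given above. The `generate` callback is a hypothetical stand-in for the real provider clients:

```typescript
// Full six-provider fallback order from the release notes
const FALLBACK_ORDER = [
  "openai",
  "vertex",
  "bedrock",
  "anthropic",
  "azure",
  "google-ai",
];

async function generateWithFallback(
  generate: (provider: string) => Promise<string>,
): Promise<string> {
  const errors: string[] = [];
  for (const provider of FALLBACK_ORDER) {
    try {
      return await generate(provider); // first provider to succeed wins
    } catch (err) {
      // Record the failure and continue down the chain
      errors.push(`${provider}: ${(err as Error).message}`);
    }
  }
  throw new Error(`All providers failed:\n${errors.join("\n")}`);
}
```

Because Anthropic and Azure are now in the array, a request only fails after all six providers have been tried.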

Technical Changes

  • Production CLI Output: Clean spinner → success → content (zero debug noise)
  • Debug Mode Available: Full debug logging with NEUROLINK_DEBUG=true environment variable
  • Complete Provider Support: All 6 AI providers now included in automatic fallback
  • Error Handling: Provider-level error logs preserved for troubleshooting
  • Conditional Logging: Debug messages only appear when explicitly enabled
  • Demo Ready: CLI output suitable for presentations and production use

CLI Behavior

# Production/Demo Mode (Clean Output)
node dist/cli/cli/index.js generate-text "test" --max-tokens 5
# Output: ⠋ 🤖 Generating text... ✔ ✅ Text generated successfully! [content]

# Debug Mode (Full Logging)
NEUROLINK_DEBUG=true node dist/cli/cli/index.js generate-text "test" --max-tokens 5
# Output: [debug logs] + spinner + success + content

Backward Compatibility

  • 100% API Compatible: No breaking changes to public interfaces
  • Environment Variables: NEUROLINK_DEBUG=true works as documented
  • Provider Selection: All existing provider configurations continue working
  • CLI Commands: All commands maintain same functionality with cleaner output

1.5.1

Patch Changes

  • 🔧 Critical CLI Dependency Fix: Removed peer dependencies to ensure zero-friction CLI usage

    • Issue: CLI commands failed when provider-specific SDK packages were peer dependencies
    • Root Cause: npx doesn't install peer dependencies, causing missing module errors
    • Solution: Moved ALL AI provider SDKs to regular dependencies
    • Impact: 100% reliable CLI - all providers work immediately with npx @juspay/neurolink
    • Dependencies: All AI SDK packages now bundled automatically (@ai-sdk/openai, @ai-sdk/bedrock, @ai-sdk/vertex, @ai-sdk/google)
  • 📄 Critical Legal Compliance: Added missing MIT LICENSE file

    • Issue: Package claimed MIT license but had no LICENSE file in repository
    • Legal Risk: Without explicit license file, users had no legal permission to use the software
    • Solution: Added proper MIT License file with Juspay Technologies copyright (2025)
    • Impact: Full legal compliance - users now have explicit permission to use, modify, and distribute
    • Files: Added LICENSE file with standard MIT license text

Technical Changes

  • Dependency Structure: Eliminated peer dependencies entirely for CLI compatibility
  • Provider Support: All 5 AI providers (OpenAI, Bedrock, Vertex AI, Google AI Studio, Anthropic) now work out-of-the-box
  • Zero Setup: No manual dependency installation required for any provider
  • Repository Structure: LICENSE file now included in package distribution
  • Legal Clarity: Explicit copyright and permission statements
  • Compliance: Matches industry standards for open source software licensing
  • Package Files: LICENSE included in NPM package distribution
  • Backward Compatibility: 100% compatible with existing code and configurations

1.5.0

Major Changes

  • 🧠 Google AI Studio Integration: Added Google AI Studio as 5th AI provider with Gemini models
    • 🔧 New Provider: Complete GoogleAIStudio provider with Gemini 1.5/2.0 Flash/Pro models
    • 🆓 Free Tier Access: Leverage Google's generous free tier for development and testing
    • 🖥️ CLI Support: Full --provider google-ai integration across all commands
    • ⚡ Auto-Selection: Included in automatic provider selection algorithm
    • 🔑 Simple Setup: Single GOOGLE_AI_API_KEY environment variable configuration

Features

  • 📚 Documentation Architecture Overhaul: Complete README.md restructuring for better UX
    • 75% Size Reduction: Transformed from 800+ lines to ~200 lines focused on quick start
    • Progressive Disclosure: Clear path from basic → intermediate → advanced documentation
    • Specialized Documentation: Created 4 dedicated docs files for different audiences
    • Cross-References: Complete navigation system between all documentation files

New Documentation Structure

docs/
├── AI-ANALYSIS-TOOLS.md          # AI optimization and analysis tools
├── AI-WORKFLOW-TOOLS.md          # Development lifecycle tools
├── MCP-FOUNDATION.md             # Technical MCP architecture
└── GOOGLE-AI-STUDIO-INTEGRATION-ARCHIVE.md  # Integration details

Google AI Studio Provider

// New Google AI Studio usage
import { createBestAIProvider } from "@juspay/neurolink";

const provider = createBestAIProvider(); // Auto-includes Google AI Studio
const result = await provider.generateText("Hello, Gemini!");

# Quick setup with Google AI Studio (free tier)
export GOOGLE_AI_API_KEY="AIza-your-google-ai-key"
npx @juspay/neurolink generate-text "Hello, AI!" --provider google-ai

Enhanced Visual Content

  • Google AI Studio Demos: Complete visual documentation for new provider
  • CLI Demonstrations: Updated CLI videos showing google-ai provider
  • Professional Quality: 6 new videos and asciinema recordings

Technical Implementation

  • Provider Integration: src/lib/providers/googleAIStudio.ts
  • Models Supported: Gemini 1.5 Pro/Flash, Gemini 2.0 Flash/Pro
  • Authentication: Simple API key authentication via Google AI Studio
  • Testing: Complete test coverage including provider and CLI tests

Bug Fixes

  • 🔧 CLI Dependencies: Moved essential dependencies (ai, zod) from peer to regular dependencies
    • Issue: npx @juspay/neurolink commands failed due to missing dependencies
    • Solution: CLI now works out-of-the-box without manual dependency installation
    • Impact: Zero-friction CLI usage for all users

Breaking Changes

  • None - 100% backward compatibility maintained

1.4.0

Major Changes

  • 📚 MCP Documentation Master Plan: Complete external server connectivity documentation
    • 🔧 MCP Integration Guide: 400+ line comprehensive setup and usage guide
    • 📖 CLI Documentation: Complete MCP commands section with workflows
    • 🧪 Demo Integration: 5 MCP API endpoints for testing and demonstration
    • ⚙️ Configuration Templates: .env.example and .mcp-servers.example.json
    • 📋 API Reference: Complete MCP API documentation with examples

Features

  • External Server Connectivity: Full MCP (Model Context Protocol) support
  • 65+ Compatible Servers: Filesystem, GitHub, databases, web browsing, search
  • Professional CLI: Complete server lifecycle management
  • Demo Server Integration: Live MCP API endpoints
  • Configuration Management: Templates and examples for all deployment scenarios

MCP Server Support

# Install and manage external servers
neurolink mcp install filesystem
neurolink mcp install github
neurolink mcp test filesystem
neurolink mcp list --status
neurolink mcp execute filesystem read_file --path="/path/to/file"

MCP API Endpoints

// Demo server includes 5 MCP endpoints
GET  /api/mcp/servers          # List configured servers
POST /api/mcp/test/:server     # Test server connectivity
GET  /api/mcp/tools/:server    # Get available tools
POST /api/mcp/execute          # Execute MCP tools
POST /api/mcp/install/:server  # Install new servers

Documentation Updates

  • README.md: Complete MCP section with real-world examples
  • docs/MCP-INTEGRATION.md: 400+ line comprehensive MCP guide
  • docs/CLI-GUIDE.md: MCP commands section with workflow examples
  • docs/API-REFERENCE.md: Complete MCP API documentation
  • docs/README.md: Updated documentation index with MCP references

Configuration

  • .env.example: MCP environment variables section
  • .mcp-servers.example.json: Complete server configuration template
  • package.json: Updated description highlighting MCP capabilities

Breaking Changes

  • None - 100% backward compatibility maintained

1.3.0

Major Changes

  • 🎉 MCP Foundation (Model Context Protocol): NeuroLink transforms from AI SDK to Universal AI Development Platform
    • 🏭 MCP Server Factory: Lighthouse-compatible server creation with createMCPServer()
    • 🧠 Context Management: Rich context with 15+ fields + tool chain tracking
    • 📋 Tool Registry: Discovery, registration, execution + statistics
    • 🎼 Tool Orchestration: Single tools + sequential pipelines + error handling
    • 🤖 AI Provider Integration: Core AI tools with schema validation
    • 🔗 Integration Tests: 27/27 tests passing (100% success rate)

Features

  • Factory-First Architecture: MCP tools work internally, users see simple factory methods
  • Lighthouse Compatible: 99% compatible with existing Lighthouse MCP patterns
  • Enterprise Ready: Rich context, permissions, tool orchestration, analytics
  • Production Tested: <1ms tool execution, comprehensive error handling

Performance

  • Test Execution: 1.23s for 27 comprehensive tests
  • Tool Execution: 0-11ms per tool (well under 100ms target)
  • Pipeline Performance: 22ms for 2-step sequential pipeline
  • Memory Efficiency: Clean context management with automatic cleanup

Technical Implementation

src/lib/mcp/
├── factory.ts                  # createMCPServer() - Lighthouse compatible
├── context-manager.ts          # Rich context (15+ fields) + tool chain tracking
├── registry.ts                 # Tool discovery, registration, execution + statistics
├── orchestrator.ts             # Single tools + sequential pipelines + error handling
└── servers/aiProviders/       # AI Core Server with 3 tools integrated
    └── aiCoreServer.ts       # generate-text, select-provider, check-provider-status

Breaking Changes

  • None - 100% backward compatibility maintained

1.2.4

Patch Changes

  • 95d8ee6: Set up automated version bumping and publishing workflow with changesets integration