Run AI models locally on your machine with Node.js bindings for llama.cpp, and enforce a JSON schema on the model's output at the generation level.
A lightweight, dependency-free fork of transformers.js that includes only the tokenizers.
Generate AI summaries of test results using a wide range of AI providers, including OpenAI, Anthropic, Gemini, Mistral, Grok, DeepSeek, Azure, Perplexity, and OpenRouter.
Our library @lenml/llama2-tokenizer has been deprecated. We are excited to introduce its replacement, @lenml/tokenizers, which offers a broader feature set and an enhanced experience.