Package details

@promptbook/utils

webgptorg · 2.6m · CC-BY-4.0 · 0.92.0-27

It's time for a paradigm shift. The future of software is in plain English, French or Latin.

ai, llm, prompt, template

readme

✨ Promptbook

![Promptbook logo - cube with letters P and B](./design/logo-h1.png)

🌟 New Features

⚠ Warning: This is a pre-release version of the library. It is not yet ready for production use. Please use the latest stable release instead.

📦 Package @promptbook/utils

To install this package, run:

# Install entire promptbook ecosystem
npm i ptbk

# Install just this package to save space
npm install @promptbook/utils

Utility functions used in the library, but also useful on their own for pre- and post-processing of LLM inputs and outputs.

Here is an overview of the functions exported from the @promptbook/utils package that you can use in your own projects:

Simple templating

The prompt template tag function helps format prompt strings for LLM interactions. It handles string interpolation, maintains consistent formatting for multiline strings and lists, and also guards against prompt injection.

import { prompt } from '@promptbook/utils';

const promptString = prompt`
    Correct the following sentence:

    > ${unsecureUserInput}
`;

The name prompt can collide with other identifiers in your code. In that case, you can use promptTemplate, which is an alias for prompt:

import { promptTemplate } from '@promptbook/utils';

const promptString = promptTemplate`
    Correct the following sentence:

    > ${unsecureUserInput}
`;
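
Under the hood, prompt is a tagged template literal. As an illustration, here is a toy version of such a tag (not the library's actual implementation, which additionally normalizes indentation and guards against prompt injection):

```typescript
// Toy tagged template function: interleaves the literal parts with the
// interpolated values. Illustrative sketch only; the real `prompt` tag
// also trims indentation and protects against prompt injection.
function toyPrompt(strings: TemplateStringsArray, ...values: unknown[]): string {
    return strings.reduce(
        (result, part, index) =>
            result + part + (index < values.length ? String(values[index]) : ''),
        '',
    );
}

const userInput = 'Helo wrld';
const promptString = toyPrompt`Correct the following sentence: ${userInput}`;
// promptString === 'Correct the following sentence: Helo wrld'
```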

Advanced templating

The templateParameters function replaces the parameters in a given template and is optimized for LLM prompt templates.

import { templateParameters } from '@promptbook/utils';

templateParameters('Hello, {name}!', { name: 'world' }); // 'Hello, world!'

It also supports multiline templates with blockquotes:

import { templateParameters, spaceTrim } from '@promptbook/utils';

templateParameters(
    spaceTrim(`
        Hello, {name}!

        > {answer}
    `),
    {
        name: 'world',
        answer: spaceTrim(`
            I'm fine,
            thank you!

            And you?
        `),
    },
);

// Hello, world!
//
// > I'm fine,
// > thank you!
// >
// > And you?
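
A simplified sketch of what such a replacement does under the hood (illustrative only; the real templateParameters validates parameters and handles more edge cases):

```typescript
// Simplified parameter replacement (illustrative sketch, not the actual
// implementation): substitutes {name} placeholders and, when a placeholder
// is the whole content of a "> " blockquote line, repeats that blockquote
// prefix for every line of a multiline value.
function simpleTemplateParameters(template: string, parameters: Record<string, string>): string {
    return template
        .split('\n')
        .map((line) => {
            const match = line.match(/^(?<prefix>\s*>\s*)\{(?<name>\w+)\}\s*$/);
            if (match && match.groups) {
                // Whole line is a blockquoted placeholder: prefix each value line
                const value = parameters[match.groups.name] ?? '';
                return value
                    .split('\n')
                    .map((valueLine) => match.groups!.prefix + valueLine)
                    .join('\n');
            }
            // Plain inline replacement
            return line.replace(/\{(\w+)\}/g, (_, name) => parameters[name] ?? '');
        })
        .join('\n');
}

console.log(simpleTemplateParameters('Hello, {name}!', { name: 'world' })); // 'Hello, world!'
```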

Counting

These functions are useful for counting stats about input/output in human-friendly terms (not tokens and bytes). You can use countCharacters, countLines, countPages, countParagraphs, countSentences and countWords:

import { countWords } from '@promptbook/utils';

console.log(countWords('Hello, world!')); // 2
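
For illustration, a naive word counter could be sketched like this (a simplified sketch, not the library's actual algorithm):

```typescript
// Naive word counting (illustrative sketch): strip punctuation, split on
// whitespace, and count the non-empty pieces.
function naiveCountWords(text: string): number {
    return text
        .replace(/[^\p{L}\p{N}\s]/gu, ' ') // drop punctuation, keep letters and digits
        .split(/\s+/)
        .filter((word) => word.length > 0).length;
}

console.log(naiveCountWords('Hello, world!')); // 2
```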

Splitting

Splitting functions are similar to the counting ones, but they return the split parts of the input/output. You can use splitIntoCharacters, splitIntoLines, splitIntoPages, splitIntoParagraphs, splitIntoSentences and splitIntoWords:

import { splitIntoWords } from '@promptbook/utils';

console.log(splitIntoWords('Hello, world!')); // ['Hello', 'world']

Normalization

Normalization functions convert a string into a normalized form. You can use kebab-case, PascalCase, SCREAMING_CASE and snake_case:

import { normalizeTo } from '@promptbook/utils';

console.log(normalizeTo['kebab-case']('Hello, world!')); // 'hello-world'
  • There are more normalization functions like capitalize, decapitalize, removeDiacritics,...
  • These can also be used as postprocessing functions in the POSTPROCESS command in promptbook
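
For illustration, a simplified kebab-case normalizer could look like this (a sketch only; the library's normalizers also handle diacritics, camelCase boundaries and other edge cases):

```typescript
// Simplified kebab-case normalization (illustrative sketch): lowercase,
// collapse runs of non-alphanumeric characters into single hyphens,
// and trim leading/trailing hyphens.
function toKebabCase(text: string): string {
    return text
        .toLowerCase()
        .replace(/[^a-z0-9]+/g, '-')
        .replace(/^-+|-+$/g, '');
}

console.log(toKebabCase('Hello, world!')); // 'hello-world'
```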

Postprocessing

Sometimes you need to postprocess the output of the LLM model. Every postprocessing function available through the POSTPROCESS command in promptbook is exported from @promptbook/utils.

Very often you will use unwrapResult, which extracts the result you need from output that contains additional information:

import { unwrapResult } from '@promptbook/utils';

unwrapResult('Best greeting for the user is "Hi Pavol!"'); // 'Hi Pavol!'
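
Conceptually, unwrapResult looks for the quoted payload inside the model's answer. A simplified sketch (the real function handles more quote styles and edge cases):

```typescript
// Simplified unwrapping (illustrative sketch): extract the text between
// the first pair of double quotes, falling back to the trimmed input.
function simpleUnwrapResult(text: string): string {
    const match = text.match(/"([^"]*)"/);
    return match ? match[1] : text.trim();
}

console.log(simpleUnwrapResult('Best greeting for the user is "Hi Pavol!"')); // 'Hi Pavol!'
```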

Misc

See also the documentation for all the functions in the @promptbook/utils package; every function is documented with JSDoc, typed with TypeScript and tested with Jest.

  • checkExpectations,
  • executionReportJsonToString,
  • isPassingExpectations,
  • isValidJsonString,
  • parseNumber
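
For example, a JSON-validity check in the spirit of isValidJsonString can be sketched as follows (a simplified sketch; the library's actual implementation may differ):

```typescript
// Simplified JSON string validation (illustrative sketch): try to parse
// the string and report whether parsing succeeded.
function checkJsonString(value: string): boolean {
    try {
        JSON.parse(value);
        return true;
    } catch {
        return false;
    }
}

console.log(checkJsonString('{"a": 1}')); // true
console.log(checkJsonString('{a: 1}')); // false
```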

The rest of the documentation is common to the entire promptbook ecosystem:

🤍 The Book Abstract

It's time for a paradigm shift! The future of software is in plain English, French or Latin.

During the computer revolution, we have seen multiple generations of computer languages, from the physical rewiring of the vacuum tubes through low-level machine code to the high-level languages like Python or JavaScript. And now, we're on the edge of the next revolution!

It's a revolution of writing software in plain human language that is understandable and executable by both humans and machines – and it's going to change everything!

The incredible growth in the power of microprocessors and Moore's Law have been the driving force behind ever-more powerful languages, and it's been an amazing journey! Similarly, large language models (like GPT or Claude) are the next big thing in language technology, and they're set to transform the way we interact with computers.

This shift is going to happen, whether we are ready for it or not. Our mission is to make this transition excellent, not just good.

Join us in this journey!

🚀 Get started

Take a look at the simple starter kit with books integrated into the Hello World sample applications:

💜 The Promptbook Project

The Promptbook project is an ecosystem of multiple projects and tools. The most important pieces are:

| Project | About |
| --- | --- |
| **Book language** | Book is a human-understandable markup language for writing AI applications such as chatbots, knowledge bases, agents, avatars, translators, automations and more. There is also a plugin for VSCode to support the `.book` file extension. |
| **Promptbook Engine** | The Promptbook engine can run applications written in the Book language. It is released as multiple NPM packages and on Docker Hub. |
| **Promptbook Studio** | Promptbook.studio is a web-based editor and runner for book applications. It is still in the experimental MVP stage. |

Hello world examples:

We also have a community of developers and users of Promptbook:

And Promptbook.studio branded socials:

And Promptujeme sub-brand:

*Sub-brand for Czech clients*

And Promptbook.city branded socials:

*Sub-brand for images and graphics generated via Promptbook prompting*

💙 The Book language

Following is the documentation and blueprint of the Book language.

Book is a language that can be used to write AI applications, agents, workflows, automations, knowledgebases, translators, sheet processors, email automations and more. It allows you to harness the power of AI models in human-like terms, without the need to know the specifics and technicalities of the models.

Example

# 🌟 My first Book

-   BOOK VERSION 1.0.0
-   URL https://promptbook.studio/hello.book
-   INPUT PARAMETER {topic}
-   OUTPUT PARAMETER {article}

# Write an article

-   PERSONA Jane, marketing specialist with prior experience in writing articles about technology and artificial intelligence
-   KNOWLEDGE https://wikipedia.org/
-   KNOWLEDGE ./journalist-ethics.pdf
-   EXPECT MIN 1 Sentence
-   EXPECT MAX 5 Pages

> Write an article about {topic}

-> {article}

Each part of the book defines one of 3 circles:

What: Workflows, Tasks and Parameters

What work needs to be done. Each book defines a workflow (scenario or pipeline), which consists of one or more tasks. Each workflow has a fixed input and output. For example, you may have a book that generates an article from a topic: one time it generates an article about AI, another time about marketing, another about cooking. The workflow (= your AI program) stays the same; only the input and output change.

Related commands:

Who: Personas

Who does the work. Each task is performed by a persona. A persona is a description of your virtual employee. It is a higher abstraction than the model, tokens, temperature, top-k, top-p and other model parameters.

You can describe what you want in human language like Jane, creative writer with a sense of sharp humour instead of gpt-4-2024-13-31, temperature 1.2, top-k 40, STOP token ".\n",....

Personas can have access to different knowledge, tools and actions. They can also consult their work with other personas or with the user, if allowed.

Related commands:

How: Knowledge, Instruments and Actions

How the work is done: the resources that the personas use to do the work.

Related commands:

  • KNOWLEDGE of documents, websites, and other resources
  • INSTRUMENT for real-time data like time, location, weather, stock prices, searching the internet, calculations, etc.
  • ACTION for actions like sending emails, creating files, ending a workflow, etc.

General principles of book language

The Book language is based on markdown and is a subset of it. It is designed to be easy to read and write, and to be understandable by both humans and machines, even without specific knowledge of the language.

Book files have the .book extension and use UTF-8 encoding without BOM.

Book has two variants: flat - which is just a prompt with no structure, and full - which has a structure with tasks, commands and prompts.
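
For illustration, a flat book could be nothing more than the prompt text itself (a hypothetical example, not taken from the official samples):

```markdown
Write a short, friendly greeting for a new user of our product.
```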

As it is source code, it can leverage all the features of version control systems like git and does not suffer from the problems of binary formats, proprietary formats, or no-code solutions.

But unlike programming languages, it is designed to be understandable by non-programmers and non-technical people.

🔒 Security

For information on reporting security vulnerabilities, see our Security Policy.

📦 Packages (for developers)

This library is divided into several packages, all published from a single monorepo. You can install all of them at once:

npm i ptbk

Or you can install them separately:

⭐ Marked packages are worth trying first

📚 Dictionary

The following glossary is used to clarify certain concepts:

General LLM / AI terms

  • Prompt drift is a phenomenon where the AI model starts to generate outputs that are not aligned with the original prompt. This can happen due to the model's training data, the prompt's wording, or the model's architecture.
  • Pipeline, workflow, scenario or chain is a sequence of tasks that are executed in a specific order. In the context of AI, a pipeline can refer to a sequence of AI models that are used to process data.
  • Fine-tuning is a process where a pre-trained AI model is further trained on a specific dataset to improve its performance on a specific task.
  • Zero-shot learning is a machine learning paradigm where a model is trained to perform a task without any labeled examples. Instead, the model is provided with a description of the task and is expected to generate the correct output.
  • Few-shot learning is a machine learning paradigm where a model is trained to perform a task with only a few labeled examples. This is in contrast to traditional machine learning, where models are trained on large datasets.
  • Meta-learning is a machine learning paradigm where a model is trained on a variety of tasks and is able to learn new tasks with minimal additional training. This is achieved by learning a set of meta-parameters that can be quickly adapted to new tasks.
  • Retrieval-augmented generation is a machine learning paradigm where a model generates text by retrieving relevant information from a large database of text. This approach combines the benefits of generative models and retrieval models.
  • Longtail refers to non-common or rare events, items, or entities that are not well-represented in the training data of machine learning models. Longtail items are often challenging for models to predict accurately.

Note: This section is not a complete dictionary, but rather a list of general AI / LLM terms related to Promptbook.

💯 Core concepts

Advanced concepts

🚂 Promptbook Engine

Schema of Promptbook Engine

➕➖ When to use Promptbook?

➕ When to use

  • When you are writing an app that generates complex things via LLM - like websites, articles, presentations, code, stories, songs,...
  • When you want to separate code from text prompts
  • When you want to describe complex prompt pipelines and don't want to do it in the code
  • When you want to orchestrate multiple prompts together
  • When you want to reuse parts of prompts in multiple places
  • When you want to version your prompts and test multiple versions
  • When you want to log the execution of prompts and backtrace the issues

See more

➖ When not to use

  • When you have already implemented a single simple prompt and it works fine for your job
  • When OpenAI Assistant (GPTs) is enough for you
  • When you need streaming (this may be implemented in the future, see discussion).
  • When you need to use something other than JavaScript or TypeScript (other languages are on the way, see the discussion)
  • When your main focus is on something other than text - like images, audio, video, spreadsheets (other media types may be added in the future, see discussion)
  • When you need to use recursion (see the discussion)

See more

🐜 Known issues

🧼 Intentionally not implemented features

❔ FAQ

If you have a question start a discussion, open an issue or write me an email.

⌚ Changelog

See CHANGELOG.md

📜 License

The Promptbook project is licensed under BUSL-1.1 (an SPDX-listed license).

🎯 Todos

See TODO.md

🤝 Partners

🖋️ Contributing

We are open to pull requests, feedback, and suggestions. You can also ⭐ star the project and follow us on GitHub or various other social networks.

📞 Support

If you need help or have questions, please check our Support Resources.

⌚ Changelog

Released versions

0.20.0 (2023-12-29)

  • Change keyword USE to MODEL VARIANT
  • Allow specifying an exact model, e.g. MODEL NAME gpt-4-1106-preview

0.20.1 (2024-01-15)

  • Add postprocessing function trimEndOfCodeBlock

0.20.2 (2024-01-16)

  • replaceParameters works with inlined JSONs

0.23.0 (2024-01-25)

  • You are able to send markdown code block in prompts (without traces of escaping)
  • Postprocessing function trimEndOfCodeBlock no longer works with escaped code blocks, just with markdown code blocks
  • Rename extractBlocksFromMarkdown to extractAllBlocksFromMarkdown

0.24.0 (2024-01-25)

  • Add postprocessing function trimCodeBlock
  • Add EXPECT command to promptbooks
  • Add ExecutionReport
  • Add parseNumber utility function
  • PtbkExecutor returns a richer result and does not throw, just returns isSuccessful=false; you can use the assertsExecutionSuccessful utility function to check whether the execution was successful
  • Add assertsExecutionSuccessful utility function

0.25.0 (2024-02-03)

  • CreatePtbkExecutorSettings are not mandatory anymore

0.26.0 (2024-02-03)

  • Add EXPECT JSON command to promptbooks
  • Split internal representation EXPECT into EXPECT_AMOUNT and EXPECT_FORMAT

0.27.0 (2024-02-03)

Moving logic from promptbookStringToJson to createPtbkExecutor

  • Allow postprocessing and expectations in all execution types
  • Postprocessing is happening before checking expectations
  • In PromptbookJson postprocessing is represented internally in each PromptTemplateJson not as separate PromptTemplateJson
  • Introduce ExpectError
  • Rename maxNaturalExecutionAttempts to maxExecutionAttempts (because now it is not just for natural execution)
  • If the title in a promptbook contains emojis, pass them into the report
  • Fix description in report
  • Ask the user for input repeatedly until it matches the expectations

0.28.0 (2024-02-05)

Better execution report in markdown format

  • Add JOKER {foo} as a way to skip part of the promptbook
  • Split UserInterfaceToolsPromptDialogOptions.prompt into promptTitle and promptMessage
  • Add UserInterfaceToolsPromptDialogOptions.priority
  • Add timing information to report
  • Maximum must be higher than minimum in EXPECT statement
  • Maximum 0 is not valid, should be at least 1 in EXPECT statement

0.29.0 (2024-02-06)

  • Allow to use custom postprocessing functions
  • Allow async postprocessing functions

0.30.0 (2024-02-09)

  • Remove Promptbook (just using JSON PromptbookJson format)
    • CreatePtbkExecutorOptions has PromptbookJson
  • Promptbooks are executed in parallel
    • PromptTemplateJson contains dependentParameterNames
    • validatePromptbookJson is checking for circular dependencies
    • Test that joker is one of the dependent parameters

0.31.0 (2024-02-12)

Better execution reports

  • Filter out voids in executionReportJsonToString
  • Add timing information to ExecutionReportJson (In both text and chart format)
  • Add money cost information to ExecutionReportJson (In both text and chart format)
  • Escape code blocks in markdown
  • Do not export replaceParameters utility function

0.32.0 (2024-02-12)

Export less functions from @promptbook/utils

0.33.0 (Skipped)

Iterating over parameters

  • Parameters can be both string and Array<string>
    • Array<string> will iterate over all values
    • You can use postprocessing functions or EXECUTE SCRIPT to split string into array and vice versa

0.34.0 (2024-02-19)

  • Do not remove emojis or formatting from task title in progress

0.35.0 (2024-03-01)

  • You can use prettifyMarkdown for postprocessing

0.35.1 (2024-03-06)

  • Add Mermaid graph to sample promptbooks
  • Fix spelling errors in OpenAI error messages

0.36.0 (2024-03-06)

Cleanup and renaming

  • Cleanup the project
  • Do not export unused types from @promptbook/types
  • Rename "Prompt template pipelines" to more meaningful "Promptbooks"
  • Remove DEFAULT_MODEL_REQUIREMENTS - You need to explicitly specify the requirements
  • Rename PromptTemplatePipelineLibrary -> PromptbookLibrary
  • Rename RemoteServerOptions.ptbkLibrary -> library
  • Add RemoteServerOptions.ptbkNames
  • Rename RemoteServerOptions.getPtp -> getPtbkByName
  • Do not use shortcut "Ptbk" but full "Promptbook" name in the code, classes, methods, etc.
  • Change command PTBK_URL to URL (but keep backward compatibility and preserve alias PTBK)
  • Change command PTBK_NAME to PROMPTBOOK_NAME (but keep backward compatibility and preserve alias PTBK)
  • Rename runRemoteServer -> startRemoteServer and return Destroyable object

0.37.0 (2024-03-08)

Explicit output parameters

  • Every promptbook has to have OUTPUT PARAMETER property in header

0.38.0 (2024-03-09)

Remove "I" prefix from interfaces and change interfaces to types.

  • Rename IAutomaticTranslator -> AutomaticTranslator
  • Rename ITranslatorOptions -> TranslatorOptions
  • Rename IGoogleAutomaticTranslatorOptions -> GoogleAutomaticTranslatorOptions
  • Rename ILindatAutomaticTranslatorOptions -> LindatAutomaticTranslatorOptions
  • Remove unused IPersonProfile
  • Remove unused ILicense
  • Remove unused IRepository

Note: Keeping "I" prefix in internal tooling like IEntity, IExecCommandOptions, IExecCommandOptions Note: Also keeping stuff imported from external libraries like IDestroyable

0.39.0 (2024-03-09)

Working on Promptbook Library. Identify promptbooks by URL.

  • Change PromptbookLibrary class to interface
  • Add SimplePromptbookLibrary class which implements PromptbookLibrary
  • Rename PromptbookLibrary.promptbookNames to PromptbookLibrary.pipelineUrls
  • Remove PromptbookLibrary.createExecutor to separate responsibility
  • Make more renamings and reorganizations in PromptbookLibrary
  • Make PromptbookLibrary.listPipelines async method
  • Make PromptbookLibrary.getPipelineByUrl async method

0.40.0 (2024-03-10)

Multiple factories for PromptbookLibrary, Custom errors, enhance templating

  • Throwing NotFoundError
  • Throwing PromptbookSyntaxError
  • Throwing PromptbookLogicError
  • Throwing PromptbookExecutionError
  • Throwing PromptbookReferenceError
  • Throwing UnexpectedError
  • Preserve col-chars in multi-line templates, See more in replaceParameters unit test
  • Change static methods of PromptbookLibrary to standalone functions
  • Static method createPromptbookLibraryFromSources receives spreaded arguments Array instead of Record
  • Add factory function createPromptbookLibraryFromPromise

0.41.0 (2024-03-23)

More options to create PromptbookLibrary

  • Utility createPromptbookLibraryFromDirectory
  • Utility createPromptbookLibraryFromUrl
  • Add extractBlock to build-in functions
  • Remove problematic usage of chalk and use colors instead
  • Export replaceParameters from @promptbook/utils

0.42.0 (2024-03-24)

Better logo and branding of Promptbook.

0.43.0 (2024-03-26)

CLI utils exported from @promptbook/cli

After installing, you can use the promptbook command in your terminal:

npm i @promptbook/utils
npx ptbk prettify 'promptbook/**/*.ptbk.md'

0.44.0 (2024-04-26)

  • Lower bundle size
  • Normalization library n12 is no longer used; all its functions were brought into @promptbook/utils
  • Better error names
  • Better error used
  • Make ExpectError private
  • @promptbook/core is no longer a peer dependency of @promptbook/utils
  • Rename expectAmount in json to expectations
  • Expectations are passed into prompt object and used in natural tools
  • Add MockedFackedLlmExecutionTools
  • Add utils checkExpectations and isPassingExpectations
  • Better error messages from JavascriptEvalExecutionTools
  • Each exported NPM package has full README
  • spaceTrim is re-exported from @promptbook/utils

0.45.0 (2024-04-27)

More direct usage of OpenAI API, Refactoring

  • Pass OpenAI options directly to OpenAiExecutionTools
    • Change openAiApiKey -> apiKey when creating new OpenAiExecutionTools
  • Change all import statements to import type when importing just types

0.46.0 (2024-04-28)

Reorganize packages

💡 Now you can just install promptbook or ptbk as alias for everything

  • New package promptbook as a link to all other packages
  • New package ptbk as an alias to promptbook
  • New package @promptbook/fake-llm
    • Move there MockedEchoLlmExecutionTools and MockedFackedLlmExecutionTools from @promptbook/core
  • New package @promptbook/langtail to prepare for Langtail integration

0.47.0 (2024-05-02)

Tools refactoring

  • Rename "natural" -> "llm"
  • Allow to pass multiple llm into ExecutionTools container
  • Export renderPromptbookMermaid through @promptbook/utils

0.48.0 and 0.49.0 (2024-05-08)

Better utilities (for Promptbase app)

  • Add reverse utility promptbookJsonToString
  • Allow to put link callback into renderPromptbookMermaid
  • Better prompt template identification
  • Add function titleToName exported from @promptbook/utils
  • Add function renameParameter exported from @promptbook/utils
  • Rename "Script Template" to just "Script"

0.50.0 (2024-05-17)

Was accidentally released earlier; re-released fully completed as 0.51.0

0.51.0 (2024-05-24)

Add new OpenAI models gpt-4o and gpt-4o-2024-05-13

  • Add model gpt-4o
  • Add model gpt-4o-2024-05-13
  • Classes that implement LlmExecutionTools must expose compatible models
  • List OpenAI models dynamically
  • All GPT models have pricing information
  • Export OPENAI_MODELS from @promptbook/openai
  • Export types LlmTemplateJson, SimpleTemplateJson, ScriptJson, PromptDialogJson, Expectations from @promptbook/types
  • ModelRequirements.modelName is not required anymore
  • PromptbookExecutor does not require onProgress anymore
  • ExecutionTools does not require userInterface anymore, when not set, the user interface is disabled and promptbook which requires user interaction will fail
  • Export extractParameters, extractVariables and extractParametersFromPromptTemplate from @promptbook/utils
  • Add and export set operations difference, intersection and union from @promptbook/utils
  • Export POSTPROCESSING_FUNCTIONS from @promptbook/execute-javascript
  • No need to specify MODEL VARIANT and MODEL NAME in .ptbk.md explicitly, CHAT VARIANT will be used as default

0.52.0 (2024-06-06)

Add support for Claude \ Anthropic models via package @promptbook/anthropic-claude and add Azure OpenAI models via package @promptbook/azure-openai

  • Export MultipleLlmExecutionTools from @promptbook/core
  • Always use "modelName" not just "model"
  • Standardization of model providers
  • Delete @promptbook/wizzard
  • Move assertsExecutionSuccessful, checkExpectations, executionReportJsonToString, ExecutionReportStringOptions, ExecutionReportStringOptionsDefaults, isPassingExpectations and prettifyPromptbookString from @promptbook/utils to @promptbook/core
  • Make and use JavascriptExecutionTools as a placeholder for a better implementation with proper sandboxing
  • Implement createPromptbookLibraryFromDirectory export from @promptbook/core
  • Make PromptbookLibraryError
  • Check Promptbook URL uniqueness in SimplePromptbookLibrary (see [🦄])
  • Util createPromptbookLibraryFromPromise is not public anymore
  • Util forEachAsync export from @promptbook/utils

0.53.0 (2024-06-08)

Repair and organize imports

0.54.0 (2024-06-08)

  • Custom errors ExpectError, NotFoundError, PromptbookExecutionError, PromptbookLogicError, PromptbookLibraryError and PromptbookSyntaxError exported from @promptbook/core

0.55.0 (2024-06-15)

Better usage computation and shape

  • Change shape of PromptResult.usage
  • Remove types number_positive_or_zero and number_negative_or_zero
  • Export type PromptResultUsage, PromptResultUsageCounts and UncertainNumber from @promptbook/types
  • Export util addUsage from @promptbook/core
  • Put usage directly in result of each execution
  • Export function usageToWorktime from @promptbook/core

0.56.0 (2024-06-16)

Rename and reorganize libraries

  • Take createPromptbookLibraryFromDirectory from @promptbook/core -> @promptbook/node (to avoid dependency risk errors)
  • Rename @promptbook/fake-llmed -> @promptbook/fake-llm
  • Export PROMPTBOOK_ENGINE_VERSION from each package
  • Use export type in @promptbook/types

0.57.0 (2024-06-15)

Better JSON Mode

  • OpenAiExecutionTools will use JSON mode natively
  • OpenAiExecutionTools does not fail on empty (but valid string) responses

0.58.0 (2024-06-26)

  • Internal reorganization of folders and files
  • Export types as type export

0.59.0 (2024-06-30)

Preparation for a system for management of external knowledge (RAG), vector embeddings and proper building of pipeline collections.

  • Add MaterialKnowledgePieceJson
  • Add KnowledgeJson
  • Add prepareKnowledgeFromMarkdown exported from @promptbook/core
  • Change promptbookStringToJson to async function (and add promptbookStringToJsonSync for promptbooks without external knowledge)
  • Change createPromptbookLibraryFromSources to createPromptbookLibraryFromJson and allow only compiled jsons as input + it is not async anymore
  • Allow only jsons as input in createLibraryFromPromise
  • Class SimplePromptbookLibrary not exposed at all, only type PromptbookLibrary and constructors
  • Rename all createPromptbookLibraryFromXyz to createLibraryFromXyz
  • Misc tool classes no longer require options (like CallbackInterfaceTools, OpenAiExecutionTools, AnthropicClaudeExecutionTools, etc.)
  • Add util libraryToJson exported from @promptbook/core
  • CLI util ptbk make ... can convert promptbooks to JSON
  • promptbookStringToJson automatically looks for promptbook-collection.json in root of given directory
  • Rename validatePromptbookJson to validatePromptbook
  • Create embed method on LLM tools, PromptEmbeddingResult, EmbeddingVector and embeddingVectorToString
  • createLibraryFromDirectory still DOES NOT use the prebuilt library (it just detects it)

0.60.0 (2024-07-15)

Renaming and making names more consistent and less ambiguous

  • Rename word "promptbook"
    • Keep name "Promptbook" as name for this project.
    • Rename promptbook as pipeline of templates defined in .ptbk.md to "pipeline"
  • Rename word "library"
    • For library used as a collection of templates use name "collection"
    • For library used as this project and package use word "package"
  • Rename methods in LlmExecutionTools
    • gptChat -> callChatModel
    • gptComplete -> callCompletionModel
  • Rename custom errors
  • Rename folder promptbook-collection -> promptbook-collection
  • In the CLI you can use both promptbook and ptbk

0.61.0 (2024-07-08)

Big syntax additions: working external knowledge, personas, and preparation for instruments and actions

  • Add reserved parameter names
  • Add SAMPLE command with notation for parameter samples to .ptbk.md files
  • Add KNOWLEDGE command to .ptbk.md files
  • Change EXECUTE command to BLOCK command
  • Change executionType -> templateType
  • Rename SyntaxError to ParsingError
  • Rename extractParameters to extractParameterNames
  • Rename ExecutionError to PipelineExecutionError
  • Remove TemplateError and replace with ExecutionError
  • Allow deep structure (h3, h4,...) in .ptbk.md files
  • Add callEmbeddingModel to LlmExecutionTools
  • callChatModel and callCompletionModel are not required to be implemented in LlmExecutionTools anymore
  • Remove MultipleLlmExecutionTools and make joinLlmExecutionTools function
  • You can pass simple array of LlmExecutionTools into ExecutionTools and it will be joined automatically via joinLlmExecutionTools
  • Remove the MarkdownStructure and replace by simpler solution flattenMarkdown + splitMarkdownIntoSections + parseMarkdownSection which works just with markdown strings and export from @promptbook/utils <- [🕞]
  • Markdown utils are exported through @promptbook/markdown-utils (and removed from @promptbook/utils)
  • String normalizers goes alongside with types; for example normalizeTo_SCREAMING_CASE -> string_SCREAMING_CASE
  • Export isValidUrl, isValidPipelineUrl, isValidFilePath, isValidJavascriptName, isValidSemanticVersion, isHostnameOnPrivateNetwork, isUrlOnPrivateNetwork and isValidUuid from @promptbook/utils
  • Add systemMessage, temperature and seed to ModelRequirements
  • Code blocks can be notated both by ``` and >
  • Add caching and storage
  • Export utility stringifyPipelineJson from @promptbook/core to stringify PipelineJson with pretty formatting of long knowledge indexes

0.62.0 (2024-07-08)

[🎐] Better work with usage

  • Add usage to preparations and reports
  • Export function usageToHuman from @promptbook/core
  • Rename TotalCost to TotalUsage
  • Allow to reload cache
  • Fix error in uncertainNumber which always returned "uncertain 0"
  • [🐞] Fix usage counting in OpenAiExecutionTools

0.63.0 (2024-08-11)

Better system for imports, exports and dependencies

  • Manage package exports automatically
  • Automatically export all types from @promptbook/types
  • Protect runtime-specific code - for example, ensure browser-specific code never reaches @promptbook/node
  • Concise README - move things to discussions
  • Make Partial<ModelRequirements> optional

0.64.0 was skipped

0.65.0 (2024-08-15)

[🍜] Anonymous server

  • Anonymous server
  • LlmConfiguration and createLlmToolsFromConfiguration
  • Better names for knowledge sources
  • Rename keys inside prepared knowledge
  • Use MultipleLlmExecutionTools more
  • LLM tools providers have constructor functions, for example OpenAiExecutionTools -> createOpenAiExecutionTools
  • remoteServerUrl is string_base_url

0.66.0 (2024-08-19)

[🎰] Model updates and registers

  • Prefix all non-pure entities with $
  • Add model claude-3-5-sonnet-20240620 to AnthropicClaudeExecutionTools
  • [🐞] Fix usage counting in AnthropicClaudeExecutionTools
  • Update @anthropic-ai/sdk from 0.21.1 to 0.26.1
  • Update @azure/openai from 1.0.0-beta.12 to 2.0.0-beta.1
  • Update openai from 4.46.1 to 4.55.9
  • Add LlmExecutionToolsConstructor
  • Add $llmToolsConfigurationBoilerplatesRegister
  • Add $llmToolsRegister
  • Rename Openai -> OpenAi

0.67.0 (2024-08-21)

[🚉] Types and interfaces, JSON serialization

  • Enhance 🤍 The Promptbook Whitepaper
  • Enhance the README.md
  • ExecutionReportJson is fully serializable as JSON
  • [🛫] Prompt is fully serializable as JSON
  • Add type string_postprocessing_function_name
  • Add isSerializableAsJson utility function, use it to protect inputs and check outputs and export from @promptbook/utils
  • Add serializeError and deserializeError utility functions and export from @promptbook/utils
  • Rename ReferenceError to PipelineUrlError
  • Make index of all errors and export from @promptbook/core
  • Mark all entities that are fully serializable as JSON by [🚉]
  • When running in browser, auto add dangerouslyAllowBrowser from createOpenAiExecutionTools
  • RemoteLlmExecutionTools automatically retries on error
  • Rename client_id -> string_user_id and clientId -> userId
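
The serializeError / deserializeError pair added in this release can be pictured with a minimal self-contained sketch (the object shape and logic below are assumptions for illustration, not the package's actual implementation):

```typescript
// Hypothetical sketch: plain Error instances are not JSON-serializable,
// so convert them to a plain object and back. Shape is an assumption.
type ErrorJson = { name: string; message: string; stack?: string };

function serializeError(error: Error): ErrorJson {
    return { name: error.name, message: error.message, stack: error.stack };
}

function deserializeError(errorJson: ErrorJson): Error {
    const error = new Error(errorJson.message);
    error.name = errorJson.name;
    error.stack = errorJson.stack;
    return error;
}
```

Round-tripping through JSON this way keeps the error's name, message and stack readable on the other side of a network or storage boundary.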

0.68.0 (2024-09-08)

[🍧] Commands and command parser

  • There are 2 different commands, EXPECT and FORMAT
  • Rename BLOCK command -> TEMPLATE
  • EXPECT JSON changed to FORMAT JSON
  • Change usagePlaces -> isUsedInPipelineHead + isUsedInPipelineTemplate
  • All parsers have functions $applyToPipelineJson, $applyToTemplateJson, stringify, takeFromPipelineJson and takeFromTemplateJson
  • PipelineJson has defaultModelRequirements
  • PipelineJson defaults to the Chat model variant without the need to specify it explicitly
  • [🥜] Rename "Prompt template" -> "Template"
  • Rename PromptTemplateJson -> TemplateJson
  • Rename extractParameterNamesFromPromptTemplate -> extractParameterNamesFromTemplate
  • Rename PromptTemplateJsonCommon -> TemplateJsonCommon
  • Rename PromptTemplateParameterJson -> ParameterJson
  • Rename PipelineJson.promptTemplates -> PipelineJson.templates
  • Rename PromptDialogJson -> DialogTemplateJson
  • Rename PROMPT_DIALOG -> DIALOG_TEMPLATE
  • Rename ScriptJson -> ScriptTemplateJson
  • Rename SCRIPT -> SCRIPT_TEMPLATE
  • Rename LlmTemplateJson -> PromptTemplateJson
  • Rename ParsingError -> ParseError

0.69.0 (2024-09-)

Command FOREACH

  • Allow iterations with FOREACH command
  • Parameter names are case-insensitive and normalized
  • Big refactoring of createPipelineExecutor
  • Enhance and implement formats FormatDefinition
  • Allow to parse CSVs via CsvFormatDefinition
  • Change ListFormatDefinition -> TextFormatDefinition

0.70.0 ()

Support for local models - integrate Ollama

  • Make new package @promptbook/ollama
  • Add OllamaExecutionTools exported from @promptbook/ollama

0.71.0 (2024-11-07)

Knowledge scrapers [🐝]

  • Make new package @promptbook/pdf
  • Make new package @promptbook/documents
  • Make new package @promptbook/legacy-documents
  • Make new package @promptbook/website-crawler
  • Remove llm tools from PrepareAndScrapeOptions and add a second argument to misc preparation functions
  • Allow to import markdown files with knowledge
  • Allow to import .docx files with knowledge .docx -(Pandoc)-> .md
  • Allow to import .doc files with knowledge .doc -(LibreOffice)-> .docx -(Pandoc)-> .md
  • Allow to import .rtf files with knowledge .rtf -(LibreOffice)-> .docx -(Pandoc)-> .md
  • Allow to import websites with knowledge
  • Add new error KnowledgeScrapeError
  • Filesystem is passed as dependency
  • External programs are passed as dependency
  • Remove PipelineStringToJsonOptions in favour of PrepareAndScrapeOptions
  • Add MissingToolsError
  • Change FileStorage -> FileCacheStorage
  • Changed behavior of titleToName when passing URLs or file paths
  • Fix normalize functions when normalizing strings containing the slash characters "/" and "\"
  • Pass fs through ExecutionTools
  • Pass executables through ExecutionTools
  • Pass scrapers through ExecutionTools
  • Add utilities $provideExecutionToolsForBrowser and $provideExecutionToolsForNode and use them in samples
  • Add utilities $provideScrapersForBrowser and $provideScrapersForNode
  • Rename createLlmToolsFromConfigurationFromEnv -> $provideLlmToolsConfigurationFromEnv and createLlmToolsFromEnv -> $provideLlmToolsFromEnv
  • Rename getLlmToolsForTestingAndScriptsAndPlayground -> $provideLlmToolsForTestingAndScriptsAndPlayground
  • Rename getLlmToolsForCli -> $provideLlmToolsForCli
  • Change most Array -> ReadonlyArray
  • Unite CreatePipelineExecutorOptions and CreatePipelineExecutorSettings
  • Change --reload-cache to --reload in CLI
  • Prefix default values with DEFAULT_

0.72.0 (2024-11-07)

Support for Assistants API (GPTs) from OpenAI

  • Add OpenAiAssistantExecutionTools
  • OpenAiExecutionTools.createAssistantSubtools
  • Add UNCERTAIN_USAGE
  • LLM Tools getClient method are public
  • LLM Tools options are not private anymore but protected
  • getClient methods are public
  • In the remote server, allow passing not only userId but also appId and customOptions
  • In the remote server, userId can no longer be undefined; use null instead
  • OpenAiExecutionTools receives userId (not user)
  • Change Collection mode -> Application mode

0.73.0 (2024-11-08)

0.74.0 (2024-11-11)

  • Proposal for version 1.0.0 both in Promptbook and Book language
  • Allow to run books directly in cli via ptbk run ./path/to/book.ptbk.md
  • Fix security warnings in dependencies
  • Enhance countLines and countPages utility function
  • No need to explicitly define the input and output parameters
  • Allow empty pipelines
  • Add BlackholeStorage
  • Rename .ptbk.* -> .book.*
  • Split PROMPTBOOK_VERSION -> BOOK_LANGUAGE_VERSION + PROMPTBOOK_ENGINE_VERSION
  • Finish split between Promptbook framework and Book language

0.75.0 (2024-11-)

Formfactors, Rebranding

  • Add FormfactorCommand
  • Add Pipeline interfaces
  • Split ParameterJson into InputParameterJson, OutputParameterJson and IntermediateParameterJson
  • Reorganize /src folder
  • Rename Template -> Task
  • Rename TemplateCommand -> SectionCommand command
  • Make alongside SectionType the TaskType
  • 🤍 Change Whitepaper to Abstract
  • Rename default folder for your books from promptbook-collection -> books
  • Change claim of the project to "It's time for a paradigm shift! The future of software is in plain English, French or Latin."

0.76.0 (2024-12-07)

Skipped, because of the mistake in the versioning. (It should be pre-release)

0.77.0 (2024-12-10)

Support for more models, add @promptbook/vercel and @promptbook/google packages.

  • @promptbook/vercel - Adapter for Vercel functionalities
  • @promptbook/google - Integration with Google's Gemini API
  • Option userId can be passed into all tools and instead of null, it can be undefined
  • Rename $currentDate -> $getCurrentDate

0.78.0 (2024-12-14)

Utility functions

  • Add removePipelineCommand
  • Rename util renameParameter -> renamePipelineParameter
  • Rename util extractVariables -> extractVariablesFromScript
  • [👖] Utilities extractParameterNamesFromTask and renamePipelineParameter are not exported from @promptbook/utils but @promptbook/core because they are tightly interconnected with the Promptbook and cannot be used as universal utility

0.79.0 (2024-12-27)

Implicit formfactors

  • You don't need to specify the formfactor or input+output params explicitly. Implementing the formfactor interface is sufficient.
  • Fix in deep cloning of arrays

0.80.0 (2025-01-01)

Simple chat notation

  • High-level chat notation
  • High-level abstractions
  • Introduction of compilePipeline
  • Add utility orderJson exported from @promptbook/utils
  • Add utility exportJson exported from @promptbook/utils (in previous versions this util was private and known as $asDeeplyFrozenSerializableJson)
  • Circular objects with same family references are considered NOT serializable
  • Interactive mode for FORMFACTOR CHATBOT in CLI
  • Deprecate pipelineJsonToString
  • Deprecate unpreparePipeline
  • Rename pipelineStringToJson -> compilePipeline
  • Rename pipelineStringToJsonSync -> precompilePipeline

0.81.0 (2025-01-12)

Editing, templates and flat pipelines

  • Backup original book as sources in PipelineJson
  • fetch is passed through ExecutionTools to allow proxying in browser
  • Make new package @promptbook/editable and move misc editing tools there
  • Make new package @promptbook/templates and add function getBookTemplate
  • Rename replaceParameters -> templateParameters
  • Add valueToString and numberToString utility function
  • Allow boolean, number, null, undefined and full json parameters in templateParameters (alongside with string)
  • Change --output to --output in CLI ptbk make
  • Re-introduction of package @promptbook/wizzard
  • Allow flat pipelines
  • Root URL for flat pipelines
  • Change $provideLlmToolsForCli -> $provideLlmToolsForWizzardOrCli
  • Do not require .book.md in pipeline url
  • More file paths are considered as valid
  • Walk to the root of the project and find the nearest .env file
  • $provideLlmToolsConfigurationFromEnv, $provideLlmToolsFromEnv, $provideLlmToolsForWizzardOrCli, $provideLlmToolsForTestingAndScriptsAndPlayground are async
  • GENERATOR and IMAGE_GENERATOR formfactors
  • Rename removeContentComments -> removeMarkdownComments
  • Rename DEFAULT_TITLE -> DEFAULT_BOOK_TITLE
  • Rename precompilePipeline -> parsePipeline
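
The templateParameters change above (accepting booleans, numbers, null and full JSON values alongside strings) can be sketched with a self-contained example; the valueToString coercion rules here are assumptions for illustration only:

```typescript
// Hypothetical sketch of parameter interpolation with non-string values,
// mirroring (not reproducing) templateParameters + valueToString.
function valueToString(value: unknown): string {
    if (value === undefined) return '(undefined)';
    if (value === null) return 'null';
    if (typeof value === 'string') return value;
    return JSON.stringify(value); // numbers, booleans, objects, arrays
}

function interpolate(template: string, parameters: Record<string, unknown>): string {
    // Replace each {name} placeholder; leave unknown placeholders untouched
    return template.replace(/\{(\w+)\}/g, (match: string, key: string) =>
        key in parameters ? valueToString(parameters[key]) : match,
    );
}
```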

0.82.0 (2025-01-16)

Compile via remote server

  • Add compilePipelineOnRemoteServer to package @promptbook/remote-client
  • Add preparePipelineOnRemoteServer to package @promptbook/remote-client
  • Changes in remote server that are not backward compatible
  • Add DEFAULT_TASK_TITLE
  • Enforce LF (\n) lines

0.83.0 and 0.84.0 (2025-02-04)

@promptbook/editable and integration of markitdown

  • Integrate markitdown and export through @promptbook/markitdown
  • Export parsing internals to @promptbook/editable
  • Rename sourceContent -> knowledgeSourceContent
  • Multiple functions to manipulate with PipelineString
  • book notation supports values interpolation
  • Make equivalent of book notation the prompt exported through @promptbook/utils
  • Flat books do not expect a return parameter
  • Wizzard always returns a simple result: a string key in output
  • Using BUSL-1.1 license (only for @promptbook/utils keep using CC-BY-4.0)
  • Support of DeepSeek models
  • Support of o3-mini model by OpenAI
  • Change admin email to pavol@ptbk.io

0.85.0 (2025-02-17)

[🐚] Server queue and tasks

  • Publishing Promptbook into Docker Hub
  • Remote server run in both REST and Socket.io mode
  • Remote server can run entire books not just single prompt tasks (for now just in REST mode)
  • In future remote server will support callbacks / pingbacks
  • Remote server has internal task queue
  • Remote server can be started via ptbk start-server
  • Hide $randomSeed
  • Remove TaskProgress
  • Remove assertsExecutionSuccessful
  • PipelineExecutor: Change onProgress -> ExecutionTask
  • Remote server allows to set rootPath
  • Remote server can run in Docker
  • In future remote server persists its queue in SQLite / .promptbook / Neo4j
  • Do not generate stats for pre-releases to speed up the build process
  • Allow pipeline URLs on private and unsecured networks

0.86.0 (2025-02-18)

Use .book as default extension for books

0.88.0 (2025-03-19)

Scripting and execution

  • Rename @promptbook/execute-javascript -> @promptbook/javascript
  • Move extractVariablesFromScript to @promptbook/javascript (no longer exported from @promptbook/utils)
  • Add route executions/last to remote server
  • Add $provideScriptingForNode
  • Add jsonStringsToJsons to @promptbook/utils, which converts JSON strings to JSON objects
  • Increase DEFAULT_MAX_EXECUTION_ATTEMPTS from 3 -> 10
  • Add a unique ID to errors that need to be serialized and deserialized
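
The jsonStringsToJsons utility above can be pictured with a hedged, self-contained sketch (the real utility in @promptbook/utils may behave differently):

```typescript
// Hypothetical sketch: walk an object and replace any string value that
// parses as a JSON object or array with the parsed value.
function jsonStringsToJsons(input: Record<string, unknown>): Record<string, unknown> {
    const output: Record<string, unknown> = {};
    for (const [key, value] of Object.entries(input)) {
        if (typeof value === 'string') {
            try {
                const parsed = JSON.parse(value);
                // Only substitute objects/arrays; keep plain strings and primitives as-is
                output[key] = typeof parsed === 'object' && parsed !== null ? parsed : value;
            } catch {
                output[key] = value; // not valid JSON, keep the original string
            }
        } else {
            output[key] = value;
        }
    }
    return output;
}
```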

0.89.0 (2025-04-15)

User system and spending of credits

  • Update typescript to 5.2.2
  • Remote server requires root URL /; to run multiple services on the same server, use a 3rd- or 4th-level subdomain
  • [🌬] Make websocket transport work
  • Allow to pass custom execution tools to promptbook server
  • CLI can be connected to Promptbook remote server
    • Allow to specify BRING_YOUR_OWN_KEYS / REMOTE_SERVER in cli commands ptbk run, ptbk make, ptbk list-models and ptbk start-server
  • CLI can login to Promptbook remote server via username + password and store the token
  • Add login to application mode on remote server
  • Add User token to application mode on remote server
  • Rename countTotalUsage -> countUsage and add spending()
  • Rename PromptResultUsage -> Usage
  • Delete OpenAiExecutionTools.createAssistantSubtools
  • RemoteServer exposes httpServer, expressApp and socketIoServer - you can add custom routes and middlewares
  • Adding OpenAPI specification and Swagger to remote server
  • @types/* imports are moved to devDependencies
  • Rename remoteUrl -> remoteServerUrl
  • Rename DEFAULT_REMOTE_URL -> DEFAULT_REMOTE_SERVER_URL
  • Remove DEFAULT_REMOTE_URL_PATH (it will be always socket.io)
  • rootPath is not required anymore
  • Rename types PromptbookServer_Identification -> Identification
  • Change scraperFetch -> promptbookFetch and add PromptbookFetchError
  • Better error handling in entire Promptbook engine
  • Catch non-error throws and wrap + rethrow them as WrappedError
  • Creating a default community health file
  • Functions isValidCsvString and isValidXmlString

0.90.0 and 0.91.0 were skipped

In pre-release

0.92.0 (2025-04-)

Models and Migrations and processing big tables

  • Models are picked by description
  • During pipeline preparation, not a single model is picked; instead, all models relevant for the task are sorted by relevance
  • Make real RAG of knowledge
  • Remove "(boilerplate)" from model names
  • Sort model providers by relevance
  • Export utility function filterModels from @promptbook/core
  • All OpenAI models contain description
  • All Anthropic models contain description
  • All DeepSeek models contain description
  • All Google models contain description
  • Fix remote server POST /login
  • Update and fix all status codes and responses in openapi
  • Migrate JSON.parse -> jsonParse (preparation for formats)
  • Migrate papaparse.parse -> csvParse (preparation for formats)
  • Rename FormatDefinition -> FormatParser
  • Limit rate of requests to models
  • Autoheal \r in CsvFormatParser (formerly CsvFormatDefinition)
  • Add getIndexedDbStorage
  • Pipeline migrations
  • Add formfactor COMPLETION which emulates Completion variant of the model
  • Add JSDoc annotations to all entities which are exported from any package
  • When processing more than 50 values, if many items pass but some fail, use "~" for the failed values and just log the error to the console
  • Fix OpenAI pricing
  • Fix LLM cache

Drafts

0..0 (2024--)

createLibraryFromDirectory uses a prebuilt library

0..0 (2024--)

Better expectation format in PromptbookJson

0..0 (2024--)

Allow to split parameters into multiple values and iterate over them

0..0 (2024--)

  • Allow to specify model creativity eg. MODEL CREATIVITY EXTREME

0..0 (2024--)

Better script execution

  • Getting rid of JavascriptEvalExecutionTools and implementing proper isolated script execution in JavascriptExecutionTools
  • List all default postprocessing functions in the @promptbook/utils README
  • Implement PythonExecutionTools for executing Python scripts

0..0 (2024--)

More options to create PromptbookLibrary

0..0 (2024--)

Integration with Langtail

0..0 (2024--)

  • TODO: Add splitInto functions to @promptbook/utils besides all the count functions
  • Add countCharacters -> splitIntoCharacters
  • Add countLines -> splitIntoLines
  • Add countPages -> splitIntoPages
  • Add countParagraphs -> splitIntoParagraphs
  • Add countSentences -> splitIntoSentences
  • Add CountUtils -> splitIntoUtils
  • Add countWords -> splitIntoWords
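
The count -> splitInto pairing proposed in this draft can be illustrated with a hypothetical sketch for words; the names and behavior below are assumptions based on the draft, not an existing API:

```typescript
// Hypothetical sketch: each count* function becomes the length of the
// corresponding splitInto* function's result.
function splitIntoWords(text: string): ReadonlyArray<string> {
    return text.split(/\s+/).filter((word) => word !== '');
}

function countWords(text: string): number {
    return splitIntoWords(text).length;
}
```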

0..0 (2024-0-)

More expect variations

  • Add command EXPECT "..." <- [🥤]
  • Add command EXPECT /.../i <- [🥤]
  • Add command EXPECT "...{foo}..." <- [🥤]
  • Add command EXPECT /...{foo}.../i <- [🥤]
  • Add command EXPECT JSON ARRAY and EXPECT JSON OBJECT (in the future this will be syntactic sugar for EXPECT JSON SCHEMA) <- [🥤]

Upcoming features

  • When postprocessing fails, retry in same way as failed expectations
  • When making next attempt for DIALOG BLOCK, preserve the previous user input <- [🌹]

1.0.0 Release

Across the repository, places marked with [🍓] are required to be done before the 1.0.0 release