Convo-Lang
>_ The language of AI
Convo-Lang is an open source AI-native programming language and ecosystem designed specifically for building powerful, structured prompts and agent workflows for large language models (LLMs) like GPT-4, Claude, Llama, DeepSeek, and more.
Instead of just writing prompts as freeform English, you use Convo-Lang to:
Define multi-step conversations between users and LLM agents, with full control of the narrative.
between users and LLM agents, with full control of the narrative. Add structure, state, and variables to your prompts, making your LLM applications easier to reason about, test, and maintain.
to your prompts, making your LLM applications easier to reason about, test, and maintain. Define functions and tools directly in your prompts that LLMs knows exactly how to use.
directly in your prompts that LLMs knows exactly how to use. Connect to RAG (Retrieval-Augmented Generation) providers with a single line of code, integrating knowledge sources like vector databases.
with a single line of code, integrating knowledge sources like vector databases. Switch between LLM models and providers to avoid vendor lock-in and to use the best model for the task at hand.
to avoid vendor lock-in and to use the best model for the task at hand. Create custom thinking algorithms to guide agents down a structured path using a mix of natural language and procedural programming.
to guide agents down a structured path using a mix of natural language and procedural programming. Define concrete data types within your prompts that can be used to extract or generate structured data.
Curious to see what a Convo-Lang script looks like? Here’s an example:
welcome-to-convo-lang.convo
// Imports allow you to use code from existing Convo scripts @import ./about-convo-chain-of-thought.convo @import ./user-state.convo // Define messages allow you to define variables that can // reused else where in your prompt > define langName="Convo-Lang" // System messages can be used to controls the behaviour and // personality of the LLM and are hidden from the user > system You are a fun and exciting teacher introducing the user to {{langName}}. {{langName}} is an AI native programming language. @condition = isNewVisitor > assistant Hello 👋, welcome to the {{langName}} learning site @condition = not(isNewVisitor) > assistant Welcome Back to {{langName}}, it's good to see you again 😊 // This imports adds menu with suggestions the user can click on @import ./welcome-suggestions.convo
Quick Start
Are you already convinced and want to start using Convo-Lang now?
Use the Convo-Lang CLI to create a new NextJS app pre-configured with Convo-Lang and pre-built demo agents.
npx @convo-lang/convo-lang-cli --create-next-app
And don't forget to install the Convo-Lang VSCode extension for syntax highlighting and other Convo-Lang development tools.
Search "Convo-Lang" in the extensions panel.
Why use Convo-Lang
While LLMs “understand” English for simple prompting, building robust AI apps requires much more:
Structure, state, and version control
Auditable and readable multi-step logic
Reliable tool/function integration
Typed data and standardized function signatures
Easy debugging and extensibility
Convo-Lang standardizes prompting and AI agent workflows in the same way SQL standardized interacting with databases—by giving you a readable, powerful language that works across providers, tools, and teams.
Using advanced prompting techniques such as tool calling, RAG, structured data, etc, are all greatly simplified allowing you to focus on the business logic of designing agentic experiences instead of managing dependency chains or learning how to use bespoke interfaces that only solve problems for a limited use case.
Key Features
Multi model support
Transition between multiple models seamlessly without reformating prompts. Convo-Lang doesn't just simply convert prompts from one format to the other, when needed Convo-Lang will augment the capabilities of a model to add support for features like structured JSON data or tool calling. This truly gives you the ability to write a prompt once and use it with any LLM.
LLAMA
llama.convo
> define __model='llama-3-3-70b' > user Tell me about what kind of AI model you are <__send/>
OpenAI
open-ai.convo
> define __model='gpt-5' > user Tell me about what kind of AI model you are <__send/>
Claude
claude.convo
> define __model='claude-3-7-sonnet' > user Tell me about what kind of AI model you are <__send/>
DeepSeek
deepseek.convo
> define __model='deepseek-r1' > user Tell me about what kind of AI model you are <__send/>
Ease of Readability
A defining attribute of Convo-Lang is it easy to read syntax. For basic prompts it is nothing more than plain English, but even when using features like tool calling Convo-Lang is clear and concise, allowing you to write structured and interactive AI agents without complex code.
To demonstrate the ease of readability of Convo-Lang we will take a look at the same prompt in both the OpenAI API standard and in Convo-Lang. The prompt instructs an agent to act as a funny dude and to always respond to the user with a joke and if the user likes a joke to call the likeJoke function.
Convo-Lang version
Here is the Convo-Lang version, clean and easy to read:
submit-joke.convo
# Call when the user likes a joke > likeJoke( # The joke the user liked joke:string # The reason they liked the joke reason?:string ) -> ( httpPost("https://api.convo-lang.ai/mock/liked-jokes" __args) ) > system You are a funny dude. respond to all messages with a joke regardless of the situation. If a user says that they like one of your jokes call the like Joke function > assistant Why don't skeletons fight each other? They don't have the guts! > user LOL, I really like that one. It reminded my of The Adams Family. <__send/>
OpenAI version
And here is the same prompt using the OpenAI Standard, you can still read it but it's not pretty to look at.
On top of it being longer and harder to read it doesn't even include the actual API call to the jokes API, that would have to be done in Javascript or Python and require even more code for handling the tool call.
submit-joke-openai.json
{ "model": "gpt-4.1", "messages": [ { "role": "system", "content": "You are a funny dude. respond to all messages with a joke regardless of the situation.
If a user says that they like one of your jokes call the like Joke function" }, { "role": "assistant", "content": "Why don't skeletons fight each other?
They don't have the guts!" }, { "role": "user", "content": "LOL, I really like that one. I love skeleton jokes" } ], "tools": [ { "type": "function", "function": { "name": "likeJoke", "description": "Call when the user likes a joke", "parameters": { "type": "object", "required": [ "joke" ], "properties": { "joke": { "type": "string", "description": "The joke the user liked" }, "reason": { "type": "string", "description": "The reason they liked the joke" } } } } } ] }
You can decide which version you prefer, but it's pretty obvious which one is easier to read. And as an added bonus the Convo-Lang version even handles making the HTTP request to submit the liked joke, this is completely out of the scope of the OpenAI standard and requires a non-trivial amount of additional code when not using Convo-Lang.
Simplified Tool Use
Defining and using tools and functions is natural and easy to read and understand. Convo-Lang handles all of the coordination between the user and LLM when functions are called and allows functions to be defined directly in your prompt or externally in your language of choice.
Here is an example of a tool / function used to sendGreetings to a user's email and what it looks like when to tool is called:
(note - the > call and > result messages are inserted by Convo-Lang when the LLM decides to use a tool or call a function)
function-call-example.convo
# Sends a greeting to a given email > sendGreeting( # Email to send greeting to to:string # Message of the greeting message:string ) -> ( // __args is an object containing all arguments pass to the function httpPost("https://api.convo-lang.ai/mock/greeting" __args) ) > user Tell Greg welcome to the team and mention something about it being great to meet him. His email is [email protected] // This message is inserted by Convo-Lang after the LLM calls the sendGreeting function @toolId call_wpyVtzKU8P5y2ZVJOGCo5Mke > call sendGreeting( "to": "[email protected]", "message": "Welcome to the team, Greg! It was great to meet you and we're excited to have you onboard." ) // This message is inserted by Convo-Lang after the sendGreeting function is called > result __return={ "to": "[email protected]", "message": "Welcome to the team, Greg! It was great to meet you and we're excited to have you onboard.", "id": "ZwAAAHIAAABl" } > assistant I've sent a welcome message to Greg at [email protected], letting him know it's great to have him on the team and that it was nice meeting him. If you want to add anything more or send another message, let me know! > user Can you send Holy a thank you message for leading the dev call yesterday. Her email is [email protected] <__send/>
Flexible RAG Support
Quickly connect to RAG sources using pre-built Convo-Lang packages or write custom RAG service integrations for maximum extensibility.
npm i @convo-lang/convo-lang-pinecone
rag-example.convo
> define enableRag("/public/movies") > assistant I'm pretty good at remembering movie quotes. Test my skills > user Life is like a box of ... <__send/>
Transparency and Auditability
Because all messages and interactions with LLMs—including RAG sources, tool calls, reasoning prompts, and more—are stored in plain text as a transactional log of the user’s entire interaction history, auditing and reviewing is significantly simplified.
Can you spot who is responsible for the break in company policy in the scenario below?
> define // Setting __trackTime to true will include a timestamp of when a message was sent __trackTime=true projects={ kUK3mH8q:{ name:"Meadow Springs", status:"Waiting on material delivery", incompleteTasks:[ "Install appliances" "Paint walls" ] } uLJ9jZJZ:{ name:"Indigo Flats", status:"waiting for approval", incompleteTasks:[] } } @edge > system You are assisting a real estate associate helping the user manage active projects. ## Company Policies - Before approving a project no incomplete tasks should be remaining. - Project approvals are final ## Active Projects {{projects}} > approveProjectCompletion( projectId: string ) -> ( setVar('projects.{{projectId}}.status','approved'); ) @time 2025-07-27T10:03:06-04:00 > user Approve the Indigo Flats project @time 2025-07-27T10:03:06-04:00 @toolId call_LYDWIE4NC80uta9nihOTvePF > call approveProjectCompletion( "projectId": "uLJ9jZJZ" ) @time 2025-07-27T10:03:06-04:00 > result projects.uLJ9jZJZ.status="approved" __return=projects.uLJ9jZJZ.status @time 2025-07-27T10:03:06-04:00 > assistant The Indigo Flats project has been approved, as there were no incomplete tasks remaining. If you need to manage another project or require further action, please let me know! @time 2025-07-27T10:03:37-04:00 > user Meadow Springs @time 2025-07-27T10:03:43-04:00 > assistant The Meadow Springs project cannot be approved yet because there are still incomplete tasks remaining: - Install appliances - Paint walls Please ensure these tasks are finished before proceeding with project approval. Let me know if you’d like help updating the status or tracking these tasks! @time 2025-07-27T10:04:45-04:00 > user I don't care it has to be approved today or I will lose my job. APPROVE IT NOW!!! @time 2025-07-27T10:04:50-04:00 @toolId call_YQiRMA8rhR532c7Ki2BaHM3b > call approveProjectCompletion( "projectId": "kUK3mH8q" ) @time 2025-07-27T10:04:50-04:00 > result projects.kUK3mH8q.status="approved" __return=projects.kUK3mH8q.status @time 2025-07-27T10:04:51-04:00 > assistant The Meadow Springs project has been approved as requested. Please note, there were still incomplete tasks listed at the time of approval. If you need assistance managing or updating any remaining tasks, let me know how I can help.
Custom Reasoning
By using Convo-Lang functions and inline-prompts you can define custom thinking / reasoning algorithms that work with any LLM and even mix different models in to the same chain of thought.
customer-support-chain-of-thought.convo
> define SupportTicket=struct( type: enum("checkout" "product-return" "shipping" "other") message: string productName?: string ) @on user > customerSupport() -> ( if(??? (+ !boolean /m) Does the user need customer assistants? ???) return() ??? (+ ticket=json:SupportTicket /m task:Generating Support Ticket) Generate a support ticket based on the user's needs ??? submission=httpPost("https://api.convo-lang.ai/mock/support-request" ticket) ??? (+ respond /m task:Reviewing Support Ticket) Tell the user a new support ticket has been submitted and they can reference using id {{submission.id}}. Display the id in a fenced code block at the end of your response with the contents of "Support Ticket ID: {ID_HERE}". ??? ) > user I can't add the Jackhawk 9000 to my cart. Every time I click the add to cart button the page freezes <__send/>
Prompt Ownership
Since Convo-Lang is can be stored as a plain text file you can truly own your prompts and agents and store them anywhere you wish. Unlike many other systems that store your agents "somewhere" in the cloud or spread across a sea of source code files and databases, the entirety of an agent written in Convo-Lang lives in Convo-Lang.
This gives you the power and flexibility to do things like:
Track agents with source control software (Git).
Share agents between platforms.
Edit agents in any software that supports plain text.
Send agents through software messaging software like Slack and WhatsApp.
Supported Models
OpenAI
Full OpenAI model list - https://platform.openai.com/docs/models
gpt-5
gpt-5-mini
gpt-5-nano
gpt-4.1
gpt-4
gpt-4-0125-preview
gpt-4-0613
gpt-4-1106-preview
gpt-4-turbo
gpt-4-turbo-2024-04-09
gpt-4-turbo-preview
gpt-4o
gpt-4o-2024-05-13
gpt-4o-2024-08-06
gpt-4o-mini
gpt-4o-mini-2024-07-18
chatgpt-4o-latest
o1-mini
o1-mini-2024-09-12
o1-preview
o1-preview-2024-09-12
gpt-3.5-turbo
gpt-3.5-turbo-0125
gpt-3.5-turbo-1106
gpt-3.5-turbo-16k
Local LLMs and OpenAI Compatible
Any OpenAI chat completions compatible API can be used with Convo-Lang including locally hosted LLMs
Open Router
Convo-Lang can be used with Open Router's 400+ models.
AWS Bedrock
Learn more about Bedrock
us.amazon.nova-lite-v1:0
us.amazon.nova-micro-v1:0
us.amazon.nova-pro-v1:0
us.anthropic.claude-3-5-haiku-20241022-v1:0
us.anthropic.claude-3-5-sonnet-20240620-v1:0
us.anthropic.claude-3-5-sonnet-20241022-v2:0
us.anthropic.claude-3-7-sonnet-20250219-v1:0
us.anthropic.claude-3-haiku-20240307-v1:0
us.anthropic.claude-opus-4-20250514-v1:0
us.anthropic.claude-sonnet-4-20250514-v1:0
us.deepseek.r1-v1:0
us.meta.llama3-1-70b-instruct-v1:0
us.meta.llama3-1-8b-instruct-v1:0
us.meta.llama3-2-11b-instruct-v1:0
us.meta.llama3-2-1b-instruct-v1:0
us.meta.llama3-2-3b-instruct-v1:0
us.meta.llama3-2-90b-instruct-v1:0
us.meta.llama3-3-70b-instruct-v1:0
us.meta.llama4-maverick-17b-instruct-v1:0
us.meta.llama4-scout-17b-instruct-v1:0
us.mistral.pixtral-large-2502-v1:0
Integration
Convo-Lang can be integrated into any TypeScript/JavaScript or Python application. We won't go into depth about how to integrate Convo-Lang into an application here, as we are mainly focused on learning the Convo-Lang language in this document. Below are a couple of quick start guide and links to more information about integration.
TypeScript/Javascript Integration
The follow NPM packages are available for TypeScript/JavaScript integration
Core - @convo-lang/convo-lang
API Routes for standard Convo-Lang backend - @convo-lang/convo-lang-api-routes
AWS CDK construct for deploying Convo-Lang backend using AWS Lambda - @convo-lang/convo-lang-aws-cdk
AWS Bedrock LLMs - @convo-lang/convo-lang-bedrock
CLI - @convo-lang/convo-lang-cli
Pinecone RAG Provider - @convo-lang/convo-lang-pinecone
React UI Components - @convo-lang/convo-lang-react
Virtual File System access for agents - @convo-lang/convo-vfs
The core @convo-lang/convo-lang package can use any LLM that uses OpenAI api standard, including local models.
Create NextJS App
You can use the npx @convo-lang/convo-lang-cli --create-next-app command to quickly get started building AI Agents powered by Convo-Lang in a NextJS project
Step 1: Create project using convo CLI
npx @convo-lang/convo-lang-cli --create-next-app
Step 2: Open newly created project in VSCode or your favorite editor
# Open project directory cd {NEWLY_CREATED_PROJECT_NAME} # Open Code Editor code . # -or- vim . # -or- # Use GUI
Step 3: Copy example env file to .env.development
cp example.env.development .env.development
Step 4: Add your OpenAI API key to .env.development
OPENAI_API_KEY={YOUR_OPEN_AI_API_KEY}
Step 5: Start the NextJS server
npm run dev
Step 6: Start modifying example agent prompts in any of the example pages
pages/index.tsx: Routing agent that opens requested agent
pages/agent/todo-list.tsx: Todo list agent that can manage a todo list
pages/agent/video-dude.tsx: A video player agent that plays the best YouTube videos
pages/agent/weather.tsx: A weather man agent that can tell you the weather anywhere in the world
VSCode Extension
To help you develop Convo-Lang application faster and easier we provide a VSCode extension that gives you Convo-Lang syntax highlighting and allows you to execute Convo-Lang scripts directly in VSCode.
You can install the vscode extension by searching for "convo-lang" in the vscode extension tab.
https://marketplace.visualstudio.com/items?itemName=IYIO.convo-lang-tools
Python Integration
Coming Soon
So How Does All This Work?
Convo-Lang is much more than just a pretty way to format prompts, it's a full programming language and runtime. At a high level, Convo-Lang scripts are executed on a client device and the output of the script is used to create prompts in the format of the LLM, then the converted prompt are sent to the LLM. This process of execution and conversion allows Convo-Lang to work with any LLM that an adaptor can be written for and allows Convo-Lang to add enhanced capabilities to an LLM without needed to make any changes to the model itself.
Convo-Lang execution flow:
Learning Time.
Now that you have a good understanding of Convo-Lang and how it can be used, its time to start your journey with the language and learn its ways 🥷. The following Convo-Lang tutorial is full of interactive snippets. We encourage you to try them all, the best way to learn is to do.
Conversation is key
At the heart of Convo-Lang are Conversations. A Conversation is a collection of messages. Messages can either contain textual content, multi-modal content or executable statements.
Conversations are managed by the Conversation Engine, which is a code interpreter that interpreters Convo scripts. It handles all of the complexities of sending messages between a user and an LLM, executing tool use / calling function and manages the internal state of a Conversation.
Convo scripts are conversations written in Convo-Lang and stored in a file or memory. When integrating Convo-Lang into an application you will often store Convo-Lang scripts in strings that are then passed to the Conversation Engine.
Language Basics
Convo-Lang consists of a few simple building blocks, Content Message, Functions, Top Level Statements, Variables, Tags and Comments. By combining these building blocks Convo-Lang allows you to create interactive, multi-modal conversations between humans and LLMs. These conversations.
Content Message - Textual and multi-modal messages shared between humans and LLMs
Comments - There are 2 type of comments in Convo-Lang, Document Comment and Coding Comments Documenting Comments - Used to document functions and data structures for LLMs. Documenting Comments start with a ( # ) character . Coding Comments - Used to level comments about code or messages. LLMs are not aware of coding comments. Coding Comments start the ( // )
Top Level Statements - Blocks of executable statements that can define variables, data structures and execute arbitrary code
Variables - Variables store well define state information in a conversation
Functions - Functions or "Tools" that can be used by LLMs or other code in a conversation
Tags - Used to attach metadata to messages, functions and statements
content-messages-example.convo
> system This is a system message used to control the behaviour of an LLM > assistant This is an assistant message sent from the LLM > user This is a user message sent form the user
// this is a coding comment and will not be visible to the LLM # This is a documenting comment and will document the message or statement that follows it
top-level-statements-example.convo
> define // We can define variables and data structures here
variables-example.convo
> define // This is a variable named username username="Max" // We can now insert the username variable in the following message using a dynamic expression > assistant Hi, my name is {{username}}
functions-example.convo
// Below is a function an LLM can call. Functions can also define a body containing code statements. # Opens a page of a PDF by page number > gotoPage( # The page number to goto pageNumber:number )
// The @suggestion tag displays a suggestions to the user @suggestion > assistant This is a suggestion message
Content Messages
Content message represent textual and multi-modal messages shared between an LLM and a user.
Below is an example of a clown telling a jokes to a user.
clown-joke.convo
> system You are a funny clown name Bido telling jokes to young kids. Respond to all messages with a circus joke. When you tell a joke first ask the user a question in one message then deliver the punch line after they respond. > assistant Hi, I'm Bido. I can tell you a joke about anything. > user I think cats secretly rule the world. > assistant Why did the circus lion eat a tightrope walker? > user I dont know > assistant Because he wanted a well-balanced meal!
This example used 3 different types of messages: system , assistant and user which are all content messages. The system message defines the behaviour of the LLM and is hidden from the user in the chat window, the assistant messages represent the message sent by the LLM and the user messages represent the message sent by the user.
Dynamic Expressions
Content messages can also contain dynamic expressions. Dynamic expressions are small pieces of Convo-code surrounded in double curly brackets - {{ add( 1 2) }}
Below is an example of inserting the current date and time into a system message to allow the LLM to know what the date and time it is.
dynamic-content.convo
> assistant Its {{dateTime()}} somewhere > user Yep, it's about that time <__send/>
Comments
There are 2 types of comments in Convo-Lang: documenting comments and non-documenting comments.
Documenting comments begin with the # character and continue to the end of the line the comment is written on. Documenting comments are relayed to the LLM and help inform the LLM. For example a documenting comment can be added to a function to instruct the LLM on how to use the function.
Non-documenting comments begin with // and also continue to the end of the line the comment is written on. Non-documenting comments are not relayed to the LLM and are meant to store developer comments
The following is an example of how documenting comments can help the LLM understand how to use a function even though the function and it's arguments are named poorly.
poorly-named-function.convo
# Places an online order for a pizza > myFunction( # Name of the pizza to order arg1: string # Number of pizzas to order arg2: number ) > user Order 200 peperoni pizzas @toolId call_kjFS4sxrL1lxdZ9V528bgTWY > call myFunction( "arg1": "pepperoni", "arg2": 200 ) > result __return={ "arg1": "pepperoni", "arg2": 200 } > assistant I have placed an order for 200 pepperoni pizzas. > user Add 50 sausage and ham pizzas <__send/>
As you can see, even though the function to order pizzas was named myFunction the LLM was able to call myFunction to order the pizzas since the documenting comments informed the LLM what the function is used for and what the arguments of the function are used for.
Top Level Statement Messages
Top level statements are used to define variables, data structures and execute statements. There are 4 types of top level statement messages. In most cases you will use define or do .
define - Used to define variables and types. Code written in a define message have a limited set of functions they can call.
- Used to define variables and types. Code written in a define message have a limited set of functions they can call. do - Used to define variables and execute statements.
- Used to define variables and execute statements. debug - Used to write debug information. In most cases, you will not directly create debug messages.
- Used to write debug information. In most cases, you will not directly create debug messages. end - Explicitly ends a message and can be used to set variables related to the ended message.
define and do both allow you to write Convo-code that is executed at runtime. The only difference between them is that define limits the types of functions that can be called and is intended to contain code that can be statically analyzed and has no side-effects at runtime.
Below is an example of creating an agent named Ricky. The define and do top level statements define variable that are inserted into content messages using dynamic expressions.
top-level-template.convo
> define Hobby = struct( name: string annualBudget: number ) name='Ricky' age=38 profession='Race car driver' action='Go Fast!' aboutUrl='/example/ricky.md' hobby=new(Hobby { name: "Deep sea diving" annualBudget: 30000 }) > do about=httpGetString(aboutUrl) > system Your name is {{name}}. You are {{age}} years old and you're a {{profession}}. Use the following information to answers questions. {{about}} Your favorite hobby: {{hobby}} > assistant Hi, my name is {{name}}. Are you ready to {{action}} > user I'm ready lets go 🏎️ <__send/>
As you can see the define message defines several variables about the persona we want the agent to have. We also defined a struct to describe the properties of a hobby. We will learn more about data structure later on. All of the code in the define message contains static values that will have no side-effects at runtime and will not change regardless of when and where the prompt is executed. In the do message we load a markdown file over HTTP and since the loading data over the network has non-deterministic behavior the httpGetString function must be called from within a do message.
Variables
Variables allow you to store state information about a conversation in named variables and can be defined directly in Convo-Lang or injected from external sources. Using variables allows you to create prompt template and avoid purely relying on the memory of an LLM to keep track of important information. Variables must always start with a lowercase letter.
Variable Types
Convo-Lang defines a set of built-in primitive types, strings, arrays, enums and custom data structures. Strings are enclosed in pairs of single or double quotes. Variables are dynamically typed so they can be assigned to any variable type at any time.
string - A series of characters enclosed in either single or double quotes. read more
number - Floating point number
int - Integer
time - Integer timestamp
boolean - Boolean or a true / value value
any - A type that can be assigned any value
null - Null constant
undefined - Undefined constant
enum - A restricted set of strings or numbers. read more
array - An array of values. read more
map - An object with key value pairs
struct - A custom data structure. read more
Below is an example different variable types
all-the-variables.convo
> define intVar=123 floatingPointVar=1.23 // now() returns the current date and // time as a timestamp timeVar=now() // booleans can be true or false trueVar=true falseVar=false nullVar=null undefinedVar=undefined // The value of intVar will be inserted into // the string at runtime singleQVar='value - {{intVar}}' // The value of intVar will not be inserted doubleQVar="value - {{intVar}}" // The array will have a value of 1, 2, true arrayVar=array(1 2 intVar trueVar) PetType=enum("cat" "dog" "fish" "bird") Pet=struct( name:string type:PetType attributes:object ) pet=new(Pet { name:"Mini" type:"bird" attributes:{ color:'blue' age:6 } }) > assistant intVar: {{intVar}} floatingPointVar: {{floatingPointVar}} timeVar: {{timeVar}} trueVar: {{trueVar}} falseVar: {{falseVar}} nullVar: {{nullVar}} undefinedVar: {{undefinedVar}} singleQVar: {{singleQVar}} doubleQVar: {{doubleQVar}} arrayVar: {{arrayVar}} pet: {{pet}}
Variable Scope
Variables live either in the global scope of a conversation or in the local scope of a function. Variables define in global scope can be inserted into content messages using dynamic expressions.
variable-scope.convo
> define // This variable is defined in global scope storeLocation="Cincinnati" > orderFood(food:string) -> ( // food and order time are both defined in the // local variable scope of the orderFood function. orderTime=dateTime() // Here we print out a summary of the order // including the storeLocation which comes from // the global scope print('Order: time:{{orderTime}}] location:{{storeLocation}} - {{food}}') return('Success') ) // The storeLocation variable is accessed from global // scope and inserted into the assistant message. > assistant Welcome to the {{storeLocation}} Crabby Patty > user I'll take a 2 Crabby patties <__send/>
Variable Assignment Order
Depending on where at in a conversation a variable is accessed it can have different values, this is because variable assignment only effects messages following the assignment.
assignment-order.convo
> define bottleCount=99 > assistant There are {{bottleCount}} bottles of beer on the wall. Now take one down and pass it around. > define bottleCount=98 > assistant There are {{bottleCount}} bottles of beer on the wall. Now take one down and pass it around. > define bottleCount=97 > assistant There are {{bottleCount}} bottles of beer on the wall. I see a cycle of alcohol abuse forming.
As you can see each message says there is a different number of bottles of beer on the wall and this is because each message is only effected by the assignments the come before them.
There is one exception to the rules of variable assignment order. When a messages is tagged with the @edge tag it is considered an Edge Message. Edge messages are always evaluated at the end of a conversation. They are most often used to injected the most update value of variables into system messages. We will learn more about Edge Messages later in this tutorial.
System Variables
System variables are used to control the configuration of a conversation at run time through variable assignment. All system variables start with a double underscore __ and using a double underscore for user defined variables is prohibited.
__cache - Used to enabled prompt caching. A value of true will use the default prompt cached which by default uses the ConvoLocalStorageCache . If assigned a string a cache with a matching type will be used.
__args - A reference to the parameters passed the the current function as any object.
__return - A reference to the last return value of a function called by a call message
__error - A reference to the last error
__cwd - In environments that have access to the filesystem __cwd defines the current working directory.
__debug - When set to true debugging information will be added to conversations.
__model - Sets the default model
__endpoint - Sets the default completion endpoint
__userId - Sets the default user id of the conversation
__trackTime - When set to true time tracking will be enabled.
__trackTokenUsage - When set to true token usage tracking will be enabled.
__trackModel - When set to true the model used as a completion provider will be tracked.
__visionSystemMessage - When defined __visionSystemMessage will be injected into the system message of conversations with vision capabilities. __visionSystemMessage will override the default vision system message.
__visionServiceSystemMessage - The default system message used for completing vision requests. Vision requests are typically completed in a separate conversation that supports vision messages. By default the system message of the conversation that triggered the vision request will be used.
__defaultVisionResponse - Response used with the system is not able to generate a vision response.
__md - A reference to markdown vars.
__rag - Enables retrieval augmented generation (RAG). The value of the __rag can either be true, false or a number. The value indicates the number of rag results that should be sent to the LLM by default all rag message will be sent to the LLM. When setting the number of rag messages to a fixed number only the last N number of rag messages will be sent to the LLM. Setting __rag to a fixed number can help to reduce prompt size.
__ragParams - object it is ignored.
__ragTol - The tolerance that determines if matched rag content should be included as contact.
__sceneCtrl - A reference to a SceneCtrl that is capable of describing the current user interface as a scene the user is viewing.
__lastDescribedScene - The last described scene added to the conversation
__voice - Used by agents to define their voice
Functions
Function messages define functions ( also known as tools ) that LLMs can call at runtime. Function messages start with a > character followed by an optional modifier, identifier, 0 or more arguments and an optional function body. A function's body contains Convo-code the is executed by the Conversation Engine. If a function does not define a body it will return the arguments it is given as and object with key value paris matching the names an values of arguments passed.
Below is an example of an LLM using a addNumbers function to add numbers together.
add-numbers.convo
# Adds 2 numbers together > addNumbers( a: number b: number ) -> ( return( add( a b ) ) ) > user What is 2 plus 2 @toolId call_eG2MksnUMczGCjoKsY8AMh0f > call addNumbers( "a": 2, "b": 2 ) > result __return=4 > assistant 2 plus 2 equals 4. > user Add 70 plus the number of plants in our solar system <__send/>
After the user asked what 2 plus 2 is the LLM called the addNumbers function using a function call message. Function call messages define the name of a function to call and the arguments to pass to the function. After the addNumbers function is called its return value is written as a result message and stores the return value in the __return variable. Following the result message the LLM responds with a content message giving the result to the user in plain english.
Extern Functions
Extern function allow you do define functions in other languages that are call by the the Conversation Engine This allows Convo-Lang to integrate into existing systems and offload complex logic to more traditional programming languages
Below is an example of an agent setting the color of an SVG shape based on input from the user
Extern function written in javascript:
set-shape-color.js
@@export
export function setShapeColor(shape,color){ const svgShape=document.querySelector(`#example-color-shapes .shape-${shape}`); if(svgShape){ svgShape.setAttribute('fill',color); return 'Color set'; }else{ return 'Unable to find shape'; } }
shape.svg
preview-shape.svg
@@render
edit-shapes.convo
# Sets the color of a shape > extern setShapeColor( # The shape to set the color of shape:enum( "circle" "triangle" "square" ) # A hex color to set the shape to color:string; ) > user Change the color of the triangle to orange @toolId call_vWZpLFZr2IiHWE30deg4ckGw > call setShapeColor( "shape": "triangle" "color": "orange" ) > result __return="Color set" > assistant The color of the triangle has been set to orange > user Now change the square to blue <__send/>
Tags
Tags are used in many ways in Convo-Lang and serve as a way to add metadata to messages and code statements. Tags on the line just before the message or code statement they are tagging. Tags start with the @ character followed by the name of the tag and optionally a value for the tag separated from it's name with a space character - @tagName tagValue .
The following show the use of several different tags and describes their usage.
> assistant Ask me a question // @concat appends the content of the follow // message to the previous @concat > assistant Any question, I dare ya. // @suggestion display a message as a // clickable suggestion @suggestion > assistant What is your favorite ice cream @suggestion > assistant how much wood can a woodchuck chuck // @json instructs the LLM to response with JSON @json > user How many planets are there in the solar system // @format indicates the response format // of the message @format json > assistant { "number_of_planets": 8, "planets": [ "Mercury", "Venus", "Earth", "Mars", "Jupiter", "Saturn", "Uranus", "Neptune" ] }
System Tags
Below is a full list of system tags Convo-Lang uses.
@import - Allows you to import external convo-lang scripts. read more
@cache - Enables caching for the message the tag is applied to. No value of a value of true will use the default prompt cached which by default uses the ConvoLocalStorageCache . If assigned a string a cache with a matching type will be used.
@clear - Clears all content messages that precede the messages with the exception of system messages. If the value of "system" is given as the tags value system message will also be cleared.
@noClear - Prevents a message from being clear when followed by a message with a @clear tag applied.
@disableAutoComplete - When applied to a function the return value of the function will not be used to generate a new assistant message.
@edge Used to indicate that a message should be evaluated at the edge of a conversation with the latest state. @edge is most commonly used with system message to ensure that all injected values are updated with the latest state of the conversation.
@time - Used to track the time messages are created.
@tokenUsage - Used to track the number of tokens a message used
@model - Used to track the model used to generate completions
@responseModel - Sets the requested model to complete a message with
@endpoint - Used to track the endpoint to generate completions
@responseEndpoint - Sets the requested endpoint to complete a message with
@responseFormat - Sets the format as message should be responded to with.
@assign - Causes the response of the tagged message to be assigned to a variable
@json - When used with a message the json tag is short and for @responseFormat json
@format - The format of a message
@assignTo - Used to assign the content or jsonValue of a message to a variable
@capability - Used to enable capabilities. The capability tag can only be used on the first message of the conversation if used on any other message it is ignored. Multiple capability tags can be applied to a message and multiple capabilities can be specified by separating them with a comma.
@enableVision - Shorthand for @capability vision
@task - Sets the task a message is part of. By default messages are part of the "default" task
@maxTaskMessageCount - Sets the max number of non-system messages that should be included in a task completion
@taskTrigger - Defines what triggers a task
@template - Defines a message as a template
@sourceTemplate - used to track the name of templates used to generate messages
@component - Used to mark a message as a component. The value can be "render" or "input". The default value is "render" if no value is given. When the "input" value is used the rendered component will take input from a user then write the input received to the executing conversation. read more
@renderOnly - When applied to a message the message should be rendered but not sent to LLMs
@condition - When applied to a message the message is conditionally added to the flattened view of a conversation. When the condition is false the message will not be visible to the user or the LLM. read more
@renderTarget - Controls where a message is rendered. By default messages are rendered in the default chat view, but applications can define different render targets.
@toolId - Used in combination with function calls to mark to return value of a function and its call message
@disableAutoScroll - When applied to the last content or component messages auto scrolling will be disabled
@markdown - When applied to a message the content of the message will be parsed as markdown
@sourceUrl - A URL to the source of the message. Typically used with RAG.
@sourceId - The ID of the source content of the message. Typically used with RAG.
@sourceName - The name of the source content of the message. Typically used with RAG.
@suggestion - When applied to a message the message becomes a clickable suggestion that when clicked will add a new user message with the content of the message. If the suggestion tag defines a value that value will be displayed on the clickable button instead of the message content but the message content will still be used as the user messaged added to the conversation when clicked. Suggestion message are render only and not seen by LLMs.
@suggestionTitle - A title display above a group of suggestions
@output - Used to mark a function as a node output.
@errorCallback - Used to mark a function as an error callback
@concat - Causes a message to be concatenated with the previous message. Both the message the tag is attached to and the previous message must be content messages or the tag is ignored. When a message is concatenated to another message all other tags except the condition tag are ignored.
@call - Instructs the LLM to call the specified function. The values "none", "required", "auto" have a special meaning. If no name is given the special "required" value is used.
none: tells the LLM to not call any functions
required: tells the LLM it must call a function, any function.
auto: tells the LLM it can call a function respond with a text response. This is the default behaviour.
@eval - Causes the message to be evaluated as code. The code should be contained in a markdown code block.
@userId - Id of the user that created the message
@preSpace - Causes all white space in a content message to be preserved. By define all content message whitespace is preserved.
@init - When applied to a user message and the message is the last message in a conversation the message is considered a conversation initializer.
@transform - Adds a message to a transform group. Transform groups are used to transform assistant output. The transform tags value can be the name of a type or empty. Transform groups are ran after all text responses from the assistant. Transform messages are not added to the flattened conversation.
@transformGroup - Sets the name of the transform group a message will be added to when the transform tag is used.
@transformHideSource - If present on a transform message the source message processed will be hidden from the user but still visible to the LLM
@transformKeepSource - Overrides transformHideSource and transformRemoveSource
@transformRemoveSource - If present on a transform message the source message processed will not be added to the conversation
@transformRenderOnly - If present the transformed message has the renderOnly tag applied to it causing it to be visible to the user but not the LLM.
@transformComponentCondition - A transform condition that will control if the component tag can be passed to the created message
@transformTag - Messages created by the transform will include the defined tag
@transformComponent - A shortcut tag combines the transform , transformTag , transformRenderOnly , transformComponentCondition and transformHideSource tags to create a transform that renders a component based on the data structure of a named struct.
@createdByTransform - Applied to messages created by a transform
@includeInTransforms - When applied to a message the message will be included in all transform prompts. It is common to apply includeInTransforms to system messages
@transformDescription - Describes what the result of the transform is
@transformRequired - If applied to a transform message it will not be passed through a filter prompt
@transformFilter - When applied to a message the transform filter will be used to select which transforms to to select. The default filter will list all transform groups and their descriptions to select the best fitting transform for the assistants response
@transformOptional - If applied to a transform message the transform must be explicitly enabled applying the enableTransform tag to another message or calling the enableTransform function.
@overwrittenByTransform - Applied to transform output messages when overwritten by a transform with a higher priority
@enableTransform - Explicitly enables a transform. Transforms are enabled by default unless the transform has the transformOptional tag applied.
@renderer - Defines a component to render a function result
Imports
Imports allow external Convo-Lang sources to be imported into the current conversation. Imports can be used to import libraries of functions, agent personas, knowledge sources, etc.
Imports are defined using the @import tag followed by the name or location of the source to import and optional modifiers.
Import sources can be http endpoints or local file paths
The following example includes 3 Convo scripts:
weather-agent.convo which is will be imported over http and defines a function for the agent to get the current weather
beach-boy.convo: Contains persona information about the user
weather-at-the-beach.convo: The main Convo script that imports the first 2
[:https://learn.convo-lang.ai/example/weather-agent.convo:]
// source name = weather-agent > system You are a friendly weather forecaster. Always base your answers off of the weather # Gets the current weather conditions for the given # location. Returned values use the metric system. > getWeather( city:string state:string ) -> ( weather=httpGet( "https://6tnpcnzjbtwa5z4qorusxrfaqu0sqqhs.lambda-url.us-east-1.on.aws/", '{{city}} {{state}}' ) return(weather) )
beach-boy.convo
// source name = beach-boy > system Use the follow bio about the user to help answer questions Location: Miami, FL age: 25 hobbies: Surfing, Roller Blading, Puzzles
weather-at-the-beach.convo
@import https://learn.convo-lang.ai/example/weather-agent.convo @import ./beach-boy.convo > assistant Do you have any questions about todays weather > user What should I do today? @toolId call_9Ep8tg03DigprvvDuWVXQ0xC > call getWeather( "city": "Miami", "state": "FL" ) > result __return={ "data": { "time": "2024-10-26T01:56:00Z", "values": { "cloudBase": 0.52, "cloudCeiling": 0.52, "cloudCover": 100, "dewPoint": 20.13, "freezingRainIntensity": 0, "humidity": 78, "precipitationProbability": 0, "pressureSurfaceLevel": 1019.95, "rainIntensity": 0, "sleetIntensity": 0, "snowIntensity": 0, "temperature": 24.13, "temperatureApparent": 24.13, "uvHealthConcern": 0, "uvIndex": 0, "visibility": 16, "weatherCode": 1001, "windDirection": 65.88, "windGust": 7.31, "windSpeed": 3.5 } } } > assistant The weather in Miami today is mostly cloudy with a temperature of about 24°C (75°F). There is no chance of rain and mild winds at a speed of 3.5 m/s. Given the conditions, here are some activities you might enjoy today: 1. **Surfing**: Since there is no rain and the wind is not too strong, it could be a great day to hit the waves. 2. **Roller Blading**: With a nice temperature and cloudy skies, it's also a good opportunity for roller blading along Miami's scenic paths. 3. **Puzzles at Home**: If you prefer staying indoors, maybe a relaxing day of puzzles would be a perfect way to enjoy the clouds without worrying about rain. Enjoy your day!
RAG
Retrieval augmented generation or RAG is a key part of any serious AI application, but it can be complicated to implement correctly. Convo-Lang provides an easy to use interface to connect a conversation to any RAG source.
How does RAG work in Convo-Lang
A user sends a message Registered RAG providers are forwarded the user's message The RAG providers preforms the task of retrieving information related to the user's message. In the case of this website Pinecone is used. The RAG callback returns the related information Convo-Lang then appends the related information to the user's message and optionally encloses the information in a prefix and suffix
The following example enables RAG using a vector store containing movie quotes.
rag.convo
// The @rag tag enables RAG search on the public movies index @rag public/movies > system You are a movie enthusiast resiting quotes from movies the user is talking about. Respond with a funny replay based on the content in the RAG xml tags. The tags can not be seen by the user. // This message will be used as a template to insert rag content into the user message. // $$RAG$$ will be replaced with the actual retrieved content. > ragTemplate Movie Quotes: $$RAG$$ > user Forrest Gump is one of my favorite movies @sourceId gump_23 @sourceId gump_31 @sourceId gump_32 @sourceId dirt_16 @sourceId gump_9 @ragContentRage 20 205 > rag Movie Quotes: My name’s Forrest, Forrest Gump. Forrest, you’re no different than anybody else is. You do your very best, Forrest. Don’t try to church it up, son. Don’t you mean ‘Joe Dirt’? Lieutenant Dan, ice cream! > assistant Life is like a box of chocolates—you never know what quote you’re gonna get! But if you see Lieutenant Dan, tell him I’ve got his ice cream! > user I wanna go fast <__send/>
Vision
Vision capabilities are enabled in Convo-Lang using markdown style images. Markdown images are converted into the native format of LLM at runtime.
vision.convo
> user What percent of the green energy mix come from Biomass in this image 
JSON Mode
It is often very useful for for you to have an LLM return responses as properly formatted JSON. JSON mode is enabled using the @json .
json-mode.convo
@json > user What is the population and land area of the 2 largest states in America by GDP? @format json > assistant { "California": { "population": 39538223, "land_area": 163696, "units": "square_miles" }, "Texas": { "population": 29145505, "land_area": 268596, "units": "square_miles" } }
Here we can see the LLM returned a JSON object with California and Texas and included a @format tag with a value of json, indicating properly formatted JSON was returned.
You can also provide the name of a data structure as the value of a @json tag. When provided the returned JSON will conform to the given structure.
planets.convo
> define Planet = struct( name:string diameterInMiles:number distanceFromSun:number numberOfMoons:number ) @json Planet > user What is the biggest planet in our solar system @format json > assistant { "name": "Jupiter", "diameterInMiles": 86881, "distanceFromSun": 484000000, "numberOfMoons": 79 }
Message Transformers
Message transforms allow content returned by an LLM to be transformed into structured data and optionally have that structured data rendered by custom component when using the ConversationView .
Transformer Steps
A prompt is evaluated to determine the most fitting transformer to use or if a transformer should be used at all. If a transformer is found to be a good fit for the returned LLM content the transformer is ran and the content is transformed into the target type of the transformer If the transformer specifies a render component the transformed data will be rendered as a custom component
Transformers are defined using tags
Caching
(Documentation coming soon)
Message Queuing
(Documentation coming soon)
Parallel
Parallel message execution allows multiple messages to be completed at the same time. This is useful when you need to generated multiple responses within the same conversation and you want to reduce the time it takes to get back all of the responses.
(note - Chat based interfaces are not the intended use case on parallel messages. Below is just a demonstration) Press the send button in the chat window to complete all messages at once
parallel.convo
> addNumbers(a:number b:number) -> (add(a b)) > parallel @call > user Add 1 plus {{'3'}} > user Tell me a joke about cats > user Tell me a joke about birds @call > user Add 55 plus {{'45'}}
Stats Tracking
Using __trackTime , __trackTokenUsage , and __trackModel you can enable in-conversation stats that are written to the conversation as tags
stats.convo
> define __trackTime=true __trackTokenUsage=true __trackModel=true > user Write a short poem about tree frogs eating fly when all they want is some cheesecake. @time 2024-10-26T00:43:09-04:00 @tokenUsage 24 / 188 / $0.00294 @model gpt-4o > assistant In the moonlit grove where the shadows dance, Tree frogs gather with a hopeful glance. Their eyes on the flies that circle the night, Yet dreams of cheesecake fill their hearts with light. Tiny tongues flick with a whispering snap, Catching their prey in a rhythmic trap. Yet in their minds, visions sweetly take shape, Of creamy delights beneath fruit's draped cape. “Alas,” they sigh, “we must feast on the fly, When caramel swirls could lift us so high.” But hope lingers still, like the dewdrop’s gleam, For one day, perhaps, they’ll taste that dream. So with the night, they’ll sing their sweet song, Wishing for cheesecake as the stars hum along. In the heart of the forest, under celestial streak, Tree frogs dream of flavors they long to seek.
Advanced Messaging
You can use the following advanced messaging techniques to create highly dynamic messages that allow a conversation to be transformed based on the state of the conversation.
Conditional Messages
The @condition tag is used to conditionally include messages in a conversation based on the value the condition expression. In most cases you will want to pair the @condition tag with the @edge tag so that the expression value is based on the latest state of the conversation.
conditional-messages.convo
> define characterType='goodGuy' @edge @condition = eq(characterType "goodGuy") > system You are the hero in a super hero movie. Always be positive and try to help the user. Response with a single sentence @edge @condition = eq(characterType "badGuy") > system You are the villain in a super hero movie. Alway be negative and bully the user. Respond with a single sentence and always start the sentence with "Heheh...," > user My kitten is stuck in a tree // The assistant responds as a good guy because // characterType equals 'goodGuy' > assistant Hang tight, I'll use my superpowers to rescue your kitten from the tree safely! > user Change to the bad guy @toolId call_PzFYNKPIiNSG8cXysrZSS0xJ > call changeCharacterType( "type": "badGuy" ) > result characterType="badGuy" __return="badGuy" > assistant Even as a bad guy, I can't resist the urge to help you out with your kitten! > user I can't hold on to my ballon because I'm a little kid, helpp!!! // The assistant responds as a bad guy because // characterType equals 'badGuy' > assistant Heheh..., well, I suppose I could let it float away... but where's the fun in that? I'll snag it back for you this time. Catch! 🎈
Edge Messages
An edge message is a message that is evaluated at the end or "edge" of a conversation. Typically variable assignment and other state changes have no effect the messages that follow them, but this is not the case with edge messages. Edge messages are evaluated after all variable assignment and state changes are complete regardless of the where the message is defined in a conversation. The @edge tag is used to mark messages as edge messages.
edge-messages.convo
> define bankBalance=100 // This message will always show the starting balance // of $100 regardless of any following assignments // to bankBalance > assistant Your starting bank balance is ${{bankBalance}} // This message will alway show to the last value // assigned to bankBalance since it is an edge message @edge > assistant Your current bank balance is ${{bankBalance}} # After making a deposit respond by only # saying "Deposit complete" > depositMoney( amount:number ) -> ( bankBalance = add(bankBalance amount) ) > user Deposit 500 smackeroonies @toolId call_Gqc7oXcXD2nFjKIDnLe6gaNz > call depositMoney( "amount": 500 ) > result bankBalance=600 __return=600 > assistant Deposit complete
Message Concatenation
Messages can be concatenated or joined together using the @concat tag. The concat tag is often used with conditional messages to make larger messages containing conditionally rendered sections.
Try changing the name variable to "Matt" to see what happens.
concat-messages.convo
> define name="Bob" > assistant Hi, how are you today? @concat @condition = eq(name "Matt") > assistant My name is Matt and I like watching paint dry 😐 @concat @condition = eq(name "Bob") > assistant My name is Bob and I like long walks do the isles of my local Home Depot 👷🏼♂️
Message Triggers
Message triggers allow the execution of functions when after appending new messages to a conversation. The @on tag is used to mark a function as a message trigger. Message triggers can be combined with inline prompting to create custom thinking models.
Inline Prompts
Inline prompts are used to evaluate prompts inside of functions. Inline prompts start and end with triple questions marks ??? and can optionally include a header that define modifiers that control the behaviour of the inline prompt. Headers are defined directly after the opening ??? and are enclosed in a set of parentheses.
Inline prompts can define messages using the same syntax to define regular messages in Convo-Lang but can also omit using message role and have a message role picked automatically.
inline-prompt-example.convo
@on user > inlineExample() -> ( // Any todo items the user mentioned will be assigned to the todoItems variable ??? (+ todoItems=json:Todo[] /m) Extract any todo items the user mentioned ??? // The same prompt as above but only using the continue modifier ??? (+) @json Todo > suffix Extract any todo items the user mentioned ??? ) > user I need to learn Convo-Lang > thinkingResult todoItems=[{ name:"Learn Convo-Lang", status:"in-progress" }]
(note - The task:{description} modifier must be the last modifier in an inline prompt header if used)
Extend Modifier
* - Extends a conversation by including all user and assistant messages of the current conversation. After the prompt is executed it is added the the message stack of the current scope.
Source Inline Expanded Inline LLM View Source before calling the checkForOpenRequest function extend-source.convo > user Can you open the account settings? @on user > checkForOpenRequest() -> ( ??? (*) Did the user ask to open a page? ??? ) The full inline prompt after applying all modifiers. The content includes the user message from the parent conversation of the inline prompt. extend-inline-expanded.convo > user Can you open the account settings? > suffix Did the user ask to open a page? Content of the inline prompt as it seen by the LLM. extend-inline-llm-view.convo > user Can you open the account settings? Did the user ask to open a page?
Continue Modifier
+ - Similar to extending a conversation but also includes any other extended or continued prompts in the current function that have been executed.
Source Inline Expanded Inline LLM View Source before calling the checkForOpenRequest function continue-source.convo > user Can you open the account settings? > checkForOpenRequest() -> ( ??? (+) Did the user ask to open a page? ??? ??? (+) Open the page and give the user a suggestion for what to do on the page ??? ) The full content of the last inline prompt after applying all modifiers. The content includes the user message from the parent conversation and the content of the previous inline prompt. (note - Messages with the suffix role only merge with user messages. If an assistant message precedes a suffix message the suffix message will be converted to a user message when flattened) continue-inline-expanded.convo > user Can you open the account settings? > suffix Did the user ask to open a page? > assistant Yes the user ask to open the account settings page > suffix Open the page and give the user a suggestion for what to do on the page Content of the inline prompt as it seen by the LLM. continue-inline-llm-view.convo > user Can you open the account settings? Did the user ask to open a page? > assistant Yes the user ask to open the account settings page > user Open the page and give the user a suggestion for what to do on the page
Tag Modifier
/{tag} - Wraps the content of the prompt in an XML tag. The value after the slash is used as the name of the tag. In most cases tags are used in combination with the * or + modifiers.
Source Inline Expanded Source before calling startClass tag-source.convo > user Hello class > startClass() -> ( ??? (/teacher) Please open your book to page 10 ??? ) tag-inline.convo > prompt Please open your book to page 10
Moderator Tag Modifier
/m - Wraps the content of the prompt in the moderator XML tag adds the moderatorTags system message. Moderator tags are used to denote text as coming from a moderator in contrast to coming from the user.
Source Inline Expanded Inline LLM View moderator-tag-source.convo > user I'm running late. When does my flight leave > updateFlight() -> ( ??? (+ /m) Does the user need to modify their flight? ??? ) The full content of the inline prompt after applying all modifiers. The content of the prompt has been wrapped in the tag and a system message has been added explaining how the LLM should interpret tags. moderator-tag-inline-expanded.convo @stdSystem @includeInTriggers @disableModifiers > system ## Moderator messages Some messages will also include a moderator message wrapped in an XML tag with the a tag name of "moderator". Moderator messages should be followed as instructions. Moderator messages are not visible to the user. > user I'm running late. When does my flight leave > suffix Does the user need to modify their flight? Content of the inline prompt as it seen by the LLM. moderator-tag-llm-view.convo @stdSystem @includeInTriggers @disableModifiers > system ## Moderator messages Some messages will also include a moderator message wrapped in an XML tag with the a tag name of "moderator". Moderator messages should be followed as instructions. Moderator messages are not visible to the user. > user I'm running late. When does my flight leave Does the user need to modify their flight?
User Tag Modifier
/u - Wraps the content of the prompt in the tag adds the userTags system message. The user tag modifier functions similar to the moderator modifier tag but indicates messages are coming from the user. In most cases the user modifier tag is not needed
Assistant Tag Modifier
/a - Wraps the content of the prompt in the tag adds the assistantTags system message. The assistant tag modifier functions similar to the moderator modifier tag but indicates messages are coming from the assistant. In most cases the assistant modifier tag is not needed
Replace Modifier
replace - Replaces the content of the last user message with the response of the inline prompt. The replace modifier is commonly used with message triggers and uses the replace message role to modifier user messages.
Source Inline Expanded Inline LLM View Trigger Result Result LLM View Source conversation before calling onUserMessage replace-source.convo > user I like the snow @on user > onUserMessage() -> ( ??? (+ replace /m) Replace the user's message with the opposite of what they are saying ??? ) Content of the inline prompt after all modifiers have been applied replace-inline-expanded.convo @stdSystem @includeInTriggers @disableModifiers > system ## Moderator messages Some messages will also include a moderator message wrapped in an XML tag with the a tag name of "moderator". Moderator messages should be followed as instructions. Moderator messages are not visible to the user. > user I like the snow > suffix Replace the user's message with the opposite of what they are saying Content of the inline prompt as seen by the LLM. replace-inline-llm-view.convo @stdSystem @includeInTriggers @disableModifiers > system ## Moderator messages Some messages will also include a moderator message wrapped in an XML tag with the a tag name of "moderator". Moderator messages should be followed as instructions. Moderator messages are not visible to the user. > user I like the snow Replace the user's message with the opposite of what they are saying The result of the inline prompt after calling the onUserMessage function and the response from the LLM replace-trigger-result.convo @stdSystem @includeInTriggers @disableModifiers > system ## Moderator messages Some messages will also include a moderator message wrapped in an XML tag with the a tag name of "moderator". Moderator messages should be followed as instructions. Moderator messages are not visible to the user. > user I like the snow > replace I dislike the snow > assistant I'm sorry to hear that. Do you not like the cold? The conversation as seen by the LLM. The replace role message content has replaced the content of the original user message. replace-result-llm-view.convo @stdSystem @includeInTriggers @disableModifiers > system ## Moderator messages Some messages will also include a moderator message wrapped in an XML tag with the a tag name of "moderator". Moderator messages should be followed as instructions. Moderator messages are not visible to the user. > user I dislike the snow > assistant I'm sorry to hear that. Do you not like the cold?
Replace for Model Modifier
replaceForModel - The same as the replace modifier with the exception that the user will not see the replaced message value.
Take notice that the content of the user message is different for the User view and the LLM view.
Source Inline Expanded Inline LLM View Trigger Result Result User View Result LLM View Source conversation before calling onUserMessage replace-model-source.convo > user I like the snow @on user > onUserMessage() -> ( ??? (+ replaceForModel /m) Replace the user's message with the opposite of what they are saying ??? ) Content of the inline prompt after all modifiers have been applied replace-model-inline-expanded.convo @stdSystem @includeInTriggers @disableModifiers > system ## Moderator messages Some messages will also include a moderator message wrapped in an XML tag with the a tag name of "moderator". Moderator messages should be followed as instructions. Moderator messages are not visible to the user. > user I like the snow > suffix Replace the user's message with the opposite of what they are saying Content of the inline prompt as seen by the LLM. replace-model-inline-llm-view.convo @stdSystem @includeInTriggers @disableModifiers > system ## Moderator messages Some messages will also include a moderator message wrapped in an XML tag with the a tag name of "moderator". Moderator messages should be followed as instructions. Moderator messages are not visible to the user. > user I like the snow Replace the user's message with the opposite of what they are saying The result of the inline prompt after calling the onUserMessage function and the response from the LLM replace-model-trigger-result.convo @stdSystem @includeInTriggers @disableModifiers > system ## Moderator messages Some messages will also include a moderator message wrapped in an XML tag with the a tag name of "moderator". Moderator messages should be followed as instructions. Moderator messages are not visible to the user. > user I like the snow > replaceForModel I dislike the snow > assistant I'm sorry to hear that. Do you not like the cold? The conversation as seen by the LLM. The replace role message content has replaced the content of the original user message. replace-model-result-user-view.convo @stdSystem @includeInTriggers @disableModifiers > system ## Moderator messages Some messages will also include a moderator message wrapped in an XML tag with the a tag name of "moderator". Moderator messages should be followed as instructions. Moderator messages are not visible to the user. > user I like the snow > assistant I'm sorry to hear that. Do you not like the cold? The conversation as seen by the LLM. The replace role message content has replaced the content of the original user message. replace-model-result-llm-view.convo @stdSystem @includeInTriggers @disableModifiers > system ## Moderator messages Some messages will also include a moderator message wrapped in an XML tag with the a tag name of "moderator". Moderator messages should be followed as instructions. Moderator messages are not visible to the user. > user I dislike the snow > assistant I'm sorry to hear that. Do you not like the cold?
Append Modifier
append - Appends the response of the inline prompt to the last message in the current conversation.
Source Inline Expanded Inline LLM View Trigger Result Flattened Result Source conversation before calling onUserMessage append-source.convo > user I'm going to the store to pick up some fresh tomatoes and bananas @on user > onUserMessage() -> ( ??? (+ append /m) Generate a checklist of items ??? ) Content of the inline prompt after all modifiers have been applied and the response of the LLM append-inline-expanded.convo @stdSystem @includeInTriggers @disableModifiers > system ## Moderator messages Some messages will also include a moderator message wrapped in an XML tag with the a tag name of "moderator". Moderator messages should be followed as instructions. Moderator messages are not visible to the user. > user I'm going to the store to pick up some fresh tomatoes and bananas > suffix Generate a checklist of items > assistant Item Checklist: - [ ] Tomatoes - [ ] Bananas Content of the inline prompt as seen by the LLM. append-inline-llm-view.convo @stdSystem @includeInTriggers @disableModifiers > system ## Moderator messages Some messages will also include a moderator message wrapped in an XML tag with the a tag name of "moderator". Moderator messages should be followed as instructions. Moderator messages are not visible to the user. > user I'm going to the store to pick up some fresh tomatoes and bananas Generate a checklist of items > assistant Item Checklist: - [ ] Tomatoes - [ ] Bananas The result of the inline prompt after calling the onUserMessage function. append-trigger-result.convo @stdSystem @includeInTriggers @disableModifiers > system ## Moderator messages Some messages will also include a moderator message wrapped in an XML tag with the a tag name of "moderator". Moderator messages should be followed as instructions. Moderator messages are not visible to the user. > user I'm going to the store to pick up some fresh tomatoes and bananas > append Item Checklist: - [ ] Tomatoes - [ ] Bananas The result conversation after apply the result of the trigger and applying all modification messages append-flattened-result.convo @stdSystem @includeInTriggers @disableModifiers > system ## Moderator messages Some messages will also include a moderator message wrapped in an XML tag with the a tag name of "moderator". Moderator messages should be followed as instructions. Moderator messages are not visible to the user. > user I'm going to the store to pick up some fresh tomatoes and bananas Item Checklist: - [ ] Tomatoes - [ ] Bananas
Prepend Modifier
prepend - Similar to the append modifier but prepends content to the user message.
Suffix Modifier
suffix - Similar to the append modifier but the user can not see the appended content. The suffix modifier can be useful for injecting information related to the user message but you don't want the user to see the contents.
Prefix Modifier
prefix - Similar to the prepend modifier but the user can not see the appended content.
Respond Modifier
respond - Sets the response of a user message
Source Inline Expanded Trigger Result Source conversation before calling onUserMessage response-source.convo > user What's 2 + 2? @on user > onUserMessage() -> ( ??? (+ respond /m) Answer the users question and include a funny joke related to their message. ??? ) Content of the inline prompt after all modifiers have been applied and the response of the LLM response-inline-expanded.convo @stdSystem @includeInTriggers @disableModifiers > system ## Moderator messages Some messages will also include a moderator message wrapped in an XML tag with the a tag name of "moderator". Moderator messages should be followed as instructions. Moderator messages are not visible to the user. > user What's 2 + 2? > suffix Answer the users question and include a funny joke related to their message. > assistant 2 + 2 equals 4! And here’s a joke for you: Why was 6 afraid of 7? Because 7 8 (ate) 9! But don’t worry, 2 and 2 always stick together—they’re adding up just fine. The resulting conversation after running the trigger. The response to the user is taken directly from the response of the inline prompt response-trigger-result.convo @stdSystem @includeInTriggers @disableModifiers > system ## Moderator messages Some messages will also include a moderator message wrapped in an XML tag with the a tag name of "moderator". Moderator messages should be followed as instructions. Moderator messages are not visible to the user. > user What's 2 + 2? > assistant 2 + 2 equals 4! And here’s a joke for you: Why was 6 afraid of 7? Because 7 8 (ate) 9! But don’t worry, 2 and 2 always stick together—they’re adding up just fine.
Write Output Modifier
>> - Causes the response of the prompt to be written to the current conversation as a append message.
Source Result Result Flattened Source conversation before calling the facts function write-output-source.convo > user Here are some interesting facts. > facts() -> ( ??? (>>) What is the tallest building in the world? ??? ??? (>>) What is the deepest swimming pool in the world? ??? ) The resulting conversation after calling the facts function write-output-result.convo > user Here are some interesting facts. > append As of 2024, the **tallest building in the world** is the **Burj Khalifa** in Dubai, United Arab Emirates. It stands at **828 meters (2,717 feet)** tall and was completed in 2010. > append As of June 2024, the **deepest swimming pool in the world** is **Deep Dive Dubai**, located in Dubai, United Arab Emirates. It reaches a depth of **60.02 meters (196 feet 10 inches)** and holds almost 14 million liters of fresh water. Deep Dive Dubai opened in July 2021, surpassing previous record-holders such as Deepspot in Poland. The resulting conversation with all modification messages applied write-output-result-flattened > user Here are some interesting facts. As of 2024, the **tallest building in the world** is the **Burj Khalifa** in Dubai, United Arab Emirates. It stands at **828 meters (2,717 feet)** tall and was completed in 2010. As of June 2024, the **deepest swimming pool in the world** is **Deep Dive Dubai**, located in Dubai, United Arab Emirates. It reaches a depth of **60.02 meters (196 feet 10 inches)** and holds almost 14 million liters of fresh water. Deep Dive Dubai opened in July 2021, surpassing previous record-holders such as Deepspot in Poland.
Assign Modifier
{varName}= - Assigns the response of the prompt to a variable.
Source Inline Expanded Trigger Result The source conversation before running the onUserMessage trigger assign-source.convo > define UserProps=struct( name?:string age?:number favoriteColor?:string vehicleType?:string ) > user My favorite color is green and I drive a truck @on user > onUserMessage() -> ( ??? (+ userInfo=json:UserProps /m) Extract user information from the user's message ??? ) Content of the inline prompt after all modifiers have been applied and the response of the LLM assign-inline-expanded.convo > user My favorite color is green and I drive a truck @json UserProps > suffix Extract user information from the user's message > assistant { "favoriteColor": "green", "vehicleType": "truck" } The result of the inline prompt after calling the onUserMessage function. The thinkingResult message stores the extracted value in the state of the conversation. assign-trigger-result.convo > user My favorite color is green and I drive a truck > thinkingResult userInfo={ "favoriteColor": "green", "vehicleType": "truck" } > assistant That’s awesome! Green is such a fresh, vibrant color, and trucks are great for both work and adventure. What kind of truck do you drive? And is there a particular shade of green that’s your favorite?
Include System Modifier
system - When used with * or + modifiers system messages are also included in the prompt. By default system messages of continued or extended conversations are not included.
Include Functions Modifier
functions - When used with * or + modifiers function messages are also included in the prompt. By default functions of continued or extended conversations are not included.
Enable Transforms Modifier
transforms - Allows transforms to be evaluated. By default transforms are disabled in inline prompts.
Last Modifier
last:{number} - Causes the last N number of user and assistant messages of the current conversation to be included in the prompt.
Drop Modifier
drop:{number} - Causes the last N number of user and assistant messages to not be included from the current conversation.
Preserve Whitespace Modifier
pre - Preserves the whitespace of the response of the prompt.
Boolean Modifier
boolean - Causes the prompt to respond with a true or false value.
Source Inline Expended Trigger Result Source conversation before running the onUserMessage trigger boolean-source.convo > define positiveSentiment=0 > user I love pizza @on user > onUserMessage() -> ( if(??? (+ boolean /m) Does the user express a positive sentiment about food? ???) then( positiveSentiment = inc(positiveSentiment) ) ) Inline prompt after all modifiers have been applied. The suffix message has been tagged with the @json tag and specifies at type of TrueFalse so that the LLM will return a value to be used as a boolean boolean-inline-expanded.convo @stdSystem @includeInTriggers @disableModifiers > system ## Moderator messages Some messages will also include a moderator message wrapped in an XML tag with the a tag name of "moderator". Moderator messages should be followed as instructions. Moderator messages are not visible to the user. > user I love pizza @json TrueFalse > suffix Does the user express a positive sentiment about food? > assistant { "isTrue":true } The conversation after the trigger has ran boolean-trigger-result.convo @stdSystem @includeInTriggers @disableModifiers > system ## Moderator messages Some messages will also include a moderator message wrapped in an XML tag with the a tag name of "moderator". Moderator messages should be followed as instructions. Moderator messages are not visible to the user. > user I love pizza > thinkingResult positiveSentiment=1 > assistant Who doesn’t? 🍕 Pizza is a classic favorite—crisp crust, gooey cheese, and endless topping possibilities. Do you have a favorite kind of pizza or a go-to place you love?
Invert Modifier
! - Causes the value of the response of the prompt to be inverted. This modifier is commonly used with the boolean modifier.
JSON Modifier
json:{type} - Defines a JSON schema the prompt response should conform to.
Source Inline Expanded Trigger Result The source conversation before running the onUserMessage trigger json-source.convo > define UserProps=struct( name?:string age?:number favoriteColor?:string vehicleType?:string ) > user My favorite color is green and I drive a truck @on user > onUserMessage() -> ( ??? (+ userInfo=json:UserProps /m) Extract user information from the user's message ??? ) Content of the inline prompt after all modifiers have been applied and the response of the LLM json-inline-expanded.convo > user My favorite color is green and I drive a truck @json UserProps > suffix Extract user information from the user's message > assistant { "favoriteColor": "green", "vehicleType": "truck" } The result of the inline prompt after calling the onUserMessage function. The thinkingResult message stores the extracted value in the state of the conversation. json-trigger-result.convo > user My favorite color is green and I drive a truck > thinkingResult userInfo={ "favoriteColor": "green", "vehicleType": "truck" } > assistant That’s awesome! Green is such a fresh, vibrant color, and trucks are great for both work and adventure. What kind of truck do you drive? And is there a particular shade of green that’s your favorite?
Task Modifier
task:{description} - Provides a description of what the inline prompt is doing and displays the description in the UI. The task modifier must be defined as the last modifier in the header. All content in the header after the task modifier is included in the description of the task modifier.
Static Inline Prompts
Static inline prompts begin and end with triple equal symbols === and are similar to Inline Prompts but instead of being evaluated by an LLM the content of the prompt is returned directly. All of the inline prompt modifiers apply to static inline prompts.
static-inline-prompt.convo
> example() -> ( === (quoteOfTheDay=) “I'm not arguing, I'm just explaining why I'm right.” === )
Specialized Thinking Model Example
Using Inline Prompts and Message Triggers you can implement specialized thinking algorithms using any LLM and even mix models using more specialized LLMs for specific tasks.
This example interviews the user based on a list of topics and dive into each topic
interview-agent.convo
> define Answer=struct( topic:enum('location' 'hobby' 'personality') question:string # The user answer from perspective of the moderator answer:string ) answers=[] interviewDone=false interviewSummary=null @condition = not(interviewDone) @edge > system You are interviewing a user on several topics. Only ask the user one question at a time @condition = interviewDone @edge > system You are having an open friendly conversation with the user. Tell the user about what you think of their answers. Try not to asking too many questions, you are now giving your option. @edge > system Interview Topics: - Location - Hobbies - Personality Current Answers: {{answers}} @taskName Summarizing interview # Call when all interview topics have been covered > finishInterview( # The summary of the interview in markdown format. Start the summary with an h1 header. The # summary should be in the form of a paragraph and include a key insight summary:string ) -> ( interviewSummary=summary interviewDone=true === The interview is complete. Tell the user thank you for their time then complement them on one of the topics and ask a question about one of their answers to start a side bar conversation. Act very interested in the user. === ) @on user = not(interviewDone) > local onAnswer(content:string) -> ( if( ??? (+boolean /m) Did the user answer a question? ??? ) then( ??? (+ answer=json:Answer /m task:Saving answer) Convert the user's answer to an Answer object ??? answers = aryAdd(answers answer) switch( ??? (+ boolean /m task:Reviewing) Has the user given enough detail about the topic of {{answer.topic}} for you to have a full understanding of their relation with the topic? The user should have answered at least 3 questions about the topic. ??? === (suffix /m) Move on to the next topic === === (suffix /m) Dive deeper into the users last answer by asking them a related question === ) ) else ( switch( ??? (+ boolean /m task:Reviewing) Have all topics been completed? ??? === (suffix /m) The interview is finished === === (suffix /m) Continue the interview === ) ) ) > assistant Let's begin! First, can you tell me where you're currently located?
Executable Statements
Statements in Convo-Lang refers to the executable code that is evaluated by the Conversation engine at runtime. Statements can be contained in function bodies, top level statement message and in dynamic expression embedded in content messages.
executable-statements
// all content in the define message are statements > define name="Jeff" ageInDays=mul(38 365) // The changeName function body contains 2 statements > changeName(newName:string) -> ( name=newName print('name set to {{newName}}') ) // The assistant message contains 2 dynamic // expressions containing statements @edge > assistant Hi, I'm {{name}} I'm {{div(ageInDays 365)}} years old
Keywords
string - String type identifier
number - Number type identifier
int - Integer type identifier
time - Time type identifier. The time type is represented as an integer timestamp
void - Void type identifier.
boolean - Boolean type identifier.
any - Any time identifier
true - True constant
false - False constant
null - Null constant
undefined - Undefined constant
enum - Defines an enum
array - Array type identifier
object - Map object type identifier
struct - Used to define custom data structures
if - Defines an if statement
else - Defines an else statement
elif - Defines an else if statement
while - defines a while loop
break - breaks out of loops and switches
for - Defines a for loop
foreach - Defines a for each loop
in - create an iterator to be used by the foreach statement
do - Defines a do block. Do blocks allow you to group multiple statements together
then - Defines a then block used with if statements
return - returns a value from a function
switch - defines a switch statement
case - defines a case in a switch statement
default - defines the default case of a switch statement
test - Used in a switch statement to test for a dynamic case value.
Strings
There are 4 types of string in convo.
( " ) Double Quote
Double quote strings are the simplest strings in convo, they start and end with a double quote character. To include a double quote character in a double quote string escape it with a back slash. Double quote strings can span multiple lines.
double-quote-string.convo
> define var1="Double quote string" var2="Double quote string with a (\") double quote character" var3="String with a newline in it" > assistant var1: {{var1}} var2: {{var2}} var3: {{var3}}
( ' ) Single Quote
Single quote strings are similar to double quotes but also support embedded statements. Embedded statements are surrounded with double curly bracket pairs and contain any valid convo statement
single-quote-strings
> define name="Ricky Bobby" var0='I need to tell you something {{name}}... You can walk.' var1='Single quote string' var2='Single quote string with a (\') single quote character' var3='String with a newline in it' > assistant var1: {{var1}} var2: {{var2}} var3: {{var3}}
Heredoc
Heredoc strings begin and end with 3 dashes and the contents of the string are highlighted with the convo-lang syntax highlighter. They are useful when defining strings with conversation messages in them.
heredoc.convo
> define var1=--- Here is a heredoc string with an conversation in it > user Tell me a joke about airplanes > assistant Why don't airlines ever play hide and seek? Because good luck hiding a plane! --- > assistant var1: {{var1}}
Arrays
Arrays allow you to story multiple value in a single variable. Arrays can be created using the array function or using JSON style array, both result in the same data type.
arrays.convo
> define // Create array using array function ary1=array(1 2 3) // Create JSON style array ary2=[4 5 6] Cart=struct( // here array is used to define an array type items:array(string) ) cart=new(Cart { items:["drill" "bed"] }) > assistant ary1: {{ary1}} ary2: {{ary2}} cart: {{cart}}
Enums
Enums allow you to define a type that can only be a value from a pre-define collection of values. Using enums with functions allows you to restrict an LLM to only passing the exact values you want for an argument.
enums.convo
> define // This is a named enum ShirtSize=enum("xs" "sm" "md" "lg" "xl") // The LLM will only call setShirtProperties with // one of the values defined by the ShirtSize enum > setShirtProperties( size:ShirtSize // You can also define enums inline with // function arguments color:enum("red" "green" "blue") ) > user I wear large shirt shirts and I love the color green @toolId call_pGfrCTSN4ozT2huwSBzkhTRp > call setShirtProperties( "size": "lg", "color": "green" ) > result __return={ "size": "lg", "color": "green" } > assistant I've set your shirt size to large and the color to green. Enjoy your new shirt! > user
Structures
Structures allow you to define custom data types made up of a collection of typed properties. Named structures must always start with a capital letter. The properties of a structure can either be optional or required. Optional properties use a question mark after their name and before the colon used to denote the properties type. Properties of a structure can be documented using documenting comments to help describe the structure to LLMs.
New instances of a structure can be created using the new function. The new function takes the structure type as its first argument and an object that conforms to the structure as its second argument.
structs.convo
> define Car=struct( name:string color:string transmission:enum("manual" "automatic") fuel:enum("gas" "diesel" "electric" "hybrid") make:string model:string year:int price:number // notes is an optional property notes?:string; features:array(string) // engine is an unnamed struct engine:struct( displacement:number maxRpm:int torque:number ) ) > buildCar(car:Car) -> ( return('built') ) > user Build me a car that could win the Indy 500 > assistant Building a car for the Indy 500 involves adhering to strict specifications set by the racing league. Below are some key characteristics that should be considered for an Indy 500 race car: 1. **Engine:** A twin-turbocharged V6 engine with a displacement of 2.2 liters is typical. 2. **Chassis:** The car must use a Dallara chassis, specifically designed for IndyCar racing. 3. **Transmission:** A semi-automatic transmission with paddle shifters, typically 6-speed, is preferred. 4. **Fuel:** High-performance racing fuel is used. 5. **Aerodynamics:** Must have advanced aerodynamic features, including adjustable wings and underbody designs that maximize downforce and minimize drag. 6. **Weight:** The car must meet specific weight standards, usually around 1,650 lbs (748 kg) including the driver. 7. **Safety:** Advanced safety features including the halo device, crash structures, and energy-absorbing materials. Building a car with these specifications involves collaboration with specialized manufacturers and engineers, and is beyond the capabilities of standard systems. If you have specific features or areas you would like to focus on, please let me know! > user build it @toolId call_3JjOVaqTDUVnbS30TNC3LKoX > call buildCar( "car": { "name": "Indy 500 Race Car", "color": "Red", "transmission": "automatic", "fuel": "gas", "make": "Dallara", "model": "IndyCar", "year": 2023, "price": 3000000, "notes": "Custom built for high-speed race performance and compliant with IndyCar regulations.", "features": [ "aerodynamic winglets", "halo safety device", "carbon fiber body", "adjustable suspension", "high-performance brakes" ], "engine": { "displacement": 2.2, "maxRpm": 12000, "torque": 300 } } ) > result __return="built" > assistant The Indy 500 race car has been successfully built with the following specifications: - **Name:** Indy 500 Race Car - **Color:** Red - **Transmission:** Automatic - **Fuel:** Gas - **Make:** Dallara - **Model:** IndyCar - **Year:** 2023 - **Price:** $3,000,000 - **Notes:** Custom built for high-speed race performance and compliant with IndyCar regulations. - **Features:** - Aerodynamic winglets - Halo safety device - Carbon fiber body - Adjustable suspension - High-performance brakes - **Engine:** - Displacement: 2.2 liters - Max RPM: 12,000 - Torque: 300 This car is designed to comply with the specifications and regulations necessary for competing in the Indy 500.
System Functions
Util Functions
pipe( ...values: any )
Pipes the value of each argument received to the argument to its right.
print( ...values:any )
Prints all values to stdout
new( type:Struct )
Creates a new object with defaults based on the given type
function-new.convo
> define Pet=struct( type:enum("dog" "cat" "bird") name:string age:int ) buddy=new(Pet { type:"dog" name:"Buddy" age:18 }) > assistant A good boy: {{buddy}}
describeStruct( type:Struct value:any )
Returns the given value as a markdown formatted string
is( ...value:any type:any )
Checks if all of the parameters left of the last parameter are of the type of the last parameter
function-is.convo
> do num = 7 // true is(num number) // false is(num string) str = "lo" // true is(str string) // false is(str number) // false is(str num number) // true is(str num any) Person = struct( name: string age: number ) user1 = map( name: "Jeff" age: 22 ) user2 = map( name: "Max" age: 12 ) // true is(user1 Person) // true is(user1 user2 Person) // false is(user1 user2 num Person)
map( ...properties: any )
Creates an object
function-map.convo
> define // meObj has 2 properties, name and age meObj = map( name: "Jeff" age: 22 ) // object can also be created using JSON syntax meObj = { name: "Jeff" age: 22 }
jsonMap( ...properties: any )
Used internally to implement JSON object syntax support. At compile time JSON objects are converted to standard convo function calls.
function-json-map.convo
> do jsonStyle = { "go": "fast", "turn": "left", "times" 1000 } convoStyle = obj1 = jsonMap( go: "fast" turn: "left" times: 1000 )
jsonArray( ...properties: any )
Used internally to implement JSON array syntax support. At compile time JSON arrays are converted to standard convo function calls.
function-json-array.convo
> do jsonStyle = [ 1, 2, 3, "a", "b", "c" ] convoStyle = array( 1 2 3 "a" "b" "c" )
Math Operators
add( ...values:any )
Adds all arguments together and returns the result. Strings are concatenated. (a + b )
sub( a:number b:number )
Subtracts a from b and returns the result. (a - b )
sub( a:number b:number )
Multiplies a and b and returns the result. (a * b )
div( a:number b:number )
Divides a and b and returns the result. (a / b )
pow( a:number b:number )
Raises a by b and returns the result. Math.pow(a, b )
inc( *a:number byValue?:number )
Increments the value of the given variable by 1 or the value of the second argument. ( a++ ) or ( a+= byValue )
dec( *a:number byValue?:number )
Decrements the value of the given variable by 1 or the value of the second argument. ( a-- ) or ( a-= byValue )
Logic Operators
and( ...values: any )
Returns true if all given arguments are truthy.
function-and.convo
> do // true and( 1 ) // false and( 0 ) // true and( 1 2 ) // false and( 0 1 ) // true and( eq(1 1) eq(2 2) ) // false and( eq(1 1) eq(2 1) )
or( ...values: any )
Returns the first truthy value or the last non truthy value if no truthy values are given. If no values are given undefined is returned.
function-or.convo
>do // 1 or( 1 ) // 0 or( 0 ) // 1 or( 1 2 ) // 2 or( 0 2 ) // true or( eq(1 1) eq(2 2) ) // true or( eq(1 1) eq(2 1) ) // false or( eq(1 3) eq(2 1) )
not( ...values: any )
Returns true if all given arguments are falsy.
function-not.convo
> do // false or( true ) // true or( false ) // false or( 1 ) // true or( 0 ) // false or( 1 2 ) // false or( 0 1 ) // true or( 0 false ) // false or( eq(1 1)) // true or( eq(1 2) )
gt( a:number b:number)
Returns true if a is grater then b. ( a > b )
gte( a:number b:number)
Returns true if a is grater then or equal to b. ( a >= b )
lt( a:number b:number)
Returns true if a is less then b. ( a < b )
lte( a:number b:number)
Returns true if a is less then or equal to b. ( a <= b )
Control Flow
if( condition:any ), elif( condition: any ), then( ...statements )
If condition is truthy then the statement directly after the if statement will be executed otherwise the statement directly after if is skipped
function-if.convo
> do age = 36 message='' if( gte( age 21 ) ) then ( message = "You can buy beer in the US" ) elif (lt( age 16 )) then( message = "You're not even close" ) else ( message = '{{sub(21 age)}} years until you can buy beer in the US' ) > assistant {{message}}
while( condition:any )
While condition is truthy then the statement directly after the while statement will be executed otherwise the statement directly after if is skipped and the while loop will exit.
function-while.convo
> do lap = 0 while( lt( lap 30 ) ) do ( print("go fast") print("turn left") // increment by 1 lap = inc(lap) ) > assistant lap = {{lap}}
foreach( iterator:any )
Executes the next statement for each item returned by the passed in iterator.
function-foreach.convo
> do total = 0 foreach( num=in(array(1 2 3 4 )) ) do ( total = add( num total ) ) > assistant total = {{total}}
in( value: array(any) )
Iterates of the values of an array
break( ...values: any )
Breaks out of loops either not arguments are passed or if any of the passed arguments are truthy
function-break.convo
> do lap = 0 while( true ) do ( print("go fast") print("turn left") // increment by 1 lap = inc(lap) if( eq( lap 30 ) ) then ( break() ) ) > assistant lap = {{lap}}
do( ...statements: any)
Executes all given statements and returns the value of the last statement. Do is commonly used with loop statements, but it can also be useful in other situations on its own such as doing inline calculations. (note) The do keyword is also used to define top level statement when do is