diff --git a/overview/architecture.mdx b/overview/architecture.mdx
index 04e29ed..d08d86f 100644
--- a/overview/architecture.mdx
+++ b/overview/architecture.mdx
@@ -10,7 +10,7 @@ Components are composite data structures; that is, a `Component` can be made up
 Backends are the engine that actually run the LLM. Backends consume Components, format the Component, pass the formatted input to an LLM, and return model outputs, which are then parsed back into CBlocks or Components.
 
-During the course of an interaction with an LLM, several Components and CBlocks may be created. Logic for handling this trace of interactions is provided by a `Context` object. Some book-keeping needs to be done in order for Contexts to approporiately handle a trace of Components and CBlocks. The `MelleaSession` class, which is created by `mellea.start_session()`, does this book-keeping a simple wrapper around Contexts and Backends.
+During the course of an interaction with an LLM, several Components and CBlocks may be created. Logic for handling this trace of interactions is provided by a `Context` object. Some book-keeping needs to be done for Contexts to appropriately handle a trace of Components and CBlocks. The `MelleaSession` class, which is created by `mellea.start_session()`, handles this book-keeping as a simple wrapper around Contexts and Backends.
 
 When we call `m.instruct()`, the `MelleaSession.instruct` method creates a component called an `Instruction`. Instructions are part of the Mellea standard library.
diff --git a/overview/requirements.mdx b/overview/requirements.mdx
index 48cfce2..b2a4fe1 100644
--- a/overview/requirements.mdx
+++ b/overview/requirements.mdx
@@ -184,7 +184,7 @@ print(write_email(m, "Olivia",
 Most LLM apis allow you to specify options to modify the request: temperature, max_tokens, seed, etc... Mellea supports specifying these options during backend initialization and when calling session-level functions with the `model_options` parameter.
 
-Mellea supports many different types of inference engines (ollama, openai-compatible vllm, huggingface, etc.). These inference engines, which we call `Backend`s, provide different and sometimes inconsistent dict keysets for specifying model options. For the most common options among model providers, Mellea provides some engine-agnostic options, which can be used by typing [`ModelOption.`](../mellea/backends/types.py) in your favorite IDE; for example, temperature can be specified as `{"{ModelOption.TEMPERATURE": 0}` and this will "just work" across all inference engines.
+Mellea supports many different types of inference engines (ollama, openai-compatible, vllm, huggingface, etc.). These inference engines, which we call `Backend`s, provide different and sometimes inconsistent dict keysets for specifying model options. For the most common options among model providers, Mellea provides some engine-agnostic options, which can be used by typing [`ModelOption.`](../mellea/backends/types.py) in your favorite IDE; for example, temperature can be specified as `{ModelOption.TEMPERATURE: 0}` and this will "just work" across all inference engines.
 
 You can add any key-value pair supported by the backend to the `model_options` dictionary, and those options will be passed along to the inference engine \*even if a Mellea-specific `ModelOption.` is defined for that option. This means you can safely copy over model option parameters from exiting codebases as-is:
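
For illustration, a minimal sketch of the session flow that the `overview/architecture.mdx` change describes, assuming only the names that appear in the diff (`mellea.start_session()` and `MelleaSession.instruct`); exact signatures may differ:

```python
import mellea

# start_session() returns a MelleaSession: a thin wrapper pairing a Context
# (the trace of Components/CBlocks) with a Backend (the engine that runs the LLM).
m = mellea.start_session()

# instruct() builds an Instruction Component from the prompt; the Backend
# formats it, calls the model, and the parsed output is recorded in the
# session's Context.
result = m.instruct("Summarize what a Component is in one sentence.")
print(result)
```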
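
And a sketch of the `model_options` behavior described in the `overview/requirements.mdx` change, assuming `ModelOption` is importable from the `mellea/backends/types.py` module linked in the diff; the backend-specific key shown (`num_ctx`, an Ollama option) is only an example of a pass-through value:

```python
import mellea
from mellea.backends.types import ModelOption

m = mellea.start_session()

result = m.instruct(
    "Write a one-line greeting.",
    model_options={
        ModelOption.TEMPERATURE: 0,  # engine-agnostic key, translated per backend
        "num_ctx": 4096,             # backend-specific key, passed through as-is
    },
)
print(result)
```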