What is Elemental?

Elemental is a multi-agent framework designed with a focus on the modularity of agentic workflow stages. It enables the creation and management of single-agent or multi-agent systems with ease. The core functionality revolves around dynamically planning how to solve assigned tasks and executing those plans with the help of an agent team. Elemental supports the programmatic creation of flexible and custom workflows and includes a no-code interface for easy management of agents and tasks.
Why another agent library?
We believe that the current landscape of agent frameworks is still fragmented, and a unified approach and standardization are yet to emerge. We aim to provide a framework that is flexible, extensible, and easy to use and modify to your needs. Elemental is designed to be modular, allowing users to easily add or remove components as needed. A low barrier to getting started and experimenting is a big part of the AttoAgents focus.
Main features
- Creation of agents with different capabilities and roles.
- Multi-agent task execution.
- Custom language model per agent (including different inference engines and model sizes; direct support for Ollama, Llama.cpp, OpenAI and compatible APIs, and Anthropic).
- Simple model selection per agent, e.g. `ollama|gemma3` or `openai|gpt-4.1-mini`.
- Variables in Jinja format, e.g. `{{ agent_persona }}`.
- Default dynamic orchestrator with dynamic planning, execution, re-planning, composition and verification steps.
- Simple command line interface with agent configuration provided by YAML file.
- Tool execution with extendable interface to provide native tools executable with any language model.
- Reasoning and conversational agent prompt strategies.
- MCP Tools with complete toolset or individual tool level selection.
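The `provider|model` selection strings above are plain text and can be split on the pipe character. A minimal sketch of such parsing (the function name and tuple layout are illustrative, not Elemental's internal API):

```python
# Split an Elemental-style model selection string into its parts.
# The function and return layout are illustrative, not Elemental's internal API.
def parse_model_string(spec: str) -> tuple[str, str]:
    provider, _, model = spec.partition("|")
    if not model:
        raise ValueError(f"Expected 'provider|model', got {spec!r}")
    return provider, model

print(parse_model_string("ollama|gemma3"))        # ('ollama', 'gemma3')
print(parse_model_string("openai|gpt-4.1-mini"))  # ('openai', 'gpt-4.1-mini')
```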
How to get started?
Requirements
Elemental is a Python library and requires Python 3.12 or higher. To use Elemental you will need access to a language model provider. We recommend starting with Ollama. If you do not have it on your system, please download and install Ollama from https://ollama.com/download. After the installation, download a language model, e.g. `ollama pull qwen3:4b`.
Installation
Elemental is available on PyPI and can be installed using pip:

```shell
# Install Elemental
pip install elemental-agents
```
First example
As a first example we will create a very simple assistant using the small language model Qwen3 4B. Save this YAML to a `config-example.yaml` file or download it from here.
```yaml
workflowName: ModelTest
workflow:
  - executor
executor:
  - name: Assistant
    type: Simple
    persona: >-
      You are expert researcher and great
      communicator of complex topics using
      simple terms. You always give comprehensive
      and extensive responses that consider the
      task at hand.
    tools: []
    llm: ollama|qwen3:4b
    temperature: 0
    frequencyPenalty: 0
    presencePenalty: 0
    topP: 1
    maxTokens: 2000
    stopWords: <PAUSE>, STOP
    template: >
      {{ agent_persona }}
      Follow user's instruction. Do this on
      a stepwise basis and double-check each step,
      one at a time. Use markdown in your response
      for more readable format.
```
Now we can run the agent using the command line interface:

```shell
# Run the agent
python -m elemental_agents.main.elemental --config config-example.yaml --instruction "Why is the sky blue?"
```
Hit enter and your first agent will start working on providing an answer.
Environment file `.env`
Elemental uses a standard environment file `.env` to store your API keys and general settings of the library. You can create this file in the same directory as your YAML configuration file or Python application. An example file may look like this:

```shell
openai_api_key="<OPENAI API KEY HERE>"
openai_streaming=False
openai_max_tokens=10000
default_engine="ollama"
custom_max_tokens=2000
google_search_api_key="<GOOGLE SEARCH API HERE>"
google_cse_id="<GOOGLE CSE ID HERE>"
google_search_timeout=5
wikipedia_user_agent="<Agents/1.0 YOUR EMAIL HERE>"
observer_destination="screen"
mcpServers='{"Github": {"command": "npx", "args": ["-y","@modelcontextprotocol/server-github"], "env": {"GITHUB_PERSONAL_ACCESS_TOKEN": "<YOUR GITHUB TOKEN>"}}}'
```
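Elemental loads this file for you; for reference, a `.env` file is just `KEY=value` lines. A minimal reader that shows the format (illustrative only, not how Elemental parses it):

```python
# Minimal .env-style parser: KEY=value lines, optional quotes around values.
# Illustrative only -- Elemental handles .env loading internally.
def read_env(text: str) -> dict[str, str]:
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, raw = line.partition("=")
        values[key.strip()] = raw.strip().strip("'\"")
    return values

sample = 'default_engine="ollama"\nopenai_streaming=False\n'
print(read_env(sample))  # {'default_engine': 'ollama', 'openai_streaming': 'False'}
```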
For more details on the configuration file and environment variables please refer to the documentation.
Below we will focus on examples of using Elemental to programmatically create various agents and teams.
Examples
Simple assistant
The simplest agent configuration in Elemental is the setup of a simple assistant that does not have the ability to execute tools and serves as an interface to the language model. The assistant is, however, conversation aware and includes `ShortMemory` that stores the conversation history.
```python
from loguru import logger

from elemental_agents.core.agent.agent_factory import AgentFactory

TASK = "Why is the sky blue?"
SESSION = "TestSession"

factory = AgentFactory()
assistant = factory.create(
    agent_name="AssistantAgent",
    agent_persona="Simple always helpful assistant",
    agent_type="simple",
    llm_model="ollama|gemma3",
)

result = assistant.run(task=TASK, input_session=SESSION)
logger.info(f"Result: {result}")
```
In this example we create a simple assistant agent using the `AgentFactory` class.
ReAct agent - reasoning and tools
More complex and complete agents can be created by utilizing one of the iterative reasoning prompt strategies like ReAct. In this case the agent will be able to utilize tools by executing actions and bringing the results back as observations. In the example below we equip the agent with several tools.
```python
from loguru import logger

from elemental_agents.core.agent.agent_factory import AgentFactory

TASK = "Calculate the sum of 5 and 3."
SESSION = "Test Session"

factory = AgentFactory()
assistant = factory.create(
    agent_name="AssistantAgent",
    agent_persona="You are a helpful assistant.",
    agent_type="ReAct",
    llm_model="openai|gpt-4.1-mini",
    tools=["Calculator", "CurrentTime", "NoAction"],
)

result = assistant.run(
    task=TASK,
    input_session=SESSION
)
logger.info(f"Result: {result}")
```
The task demonstrates the need to use the `Calculator` tool. In this example we utilize a language model provided by the OpenAI API and need to provide the API key in the `.env` file. The agent will be able to use the calculator tool to perform the calculation and return the result. The `CurrentTime` tool is used to get the current time, and the `NoAction` tool is used when no action is needed in the strict format of the ReAct prompt.
Elemental does not rely on the function calling ability of a particular language model and handles the definition of actions (which select tools and their parameters) with the prompt strategy of an agent.
PlanReAct - ReAct agent with internal planning
Similarly to the ReAct agent, we can define a more complex prompt strategy that includes internal planning. By selecting `agent_type="PlanReAct"` we can create an agent that augments the ReAct strategy with internal planning.
```python
from loguru import logger

from elemental_agents.core.agent.agent_factory import AgentFactory

TASK = "Calculate the sum of 5 and 3."
SESSION = "Test Session"

factory = AgentFactory()
assistant = factory.create(
    agent_name="AssistantAgent",
    agent_persona="You are a helpful assistant.",
    agent_type="PlanReAct",
    llm_model="openai|gpt-4.1-mini",
    tools=["Calculator", "CurrentTime", "NoAction"],
)

result = assistant.run(task=TASK, input_session=SESSION)
logger.info(f"Result: {result}")
```
Conversational agent team
A team of agents that are meant to work together may be defined by first creating the individual agents and then creating the agent team with the `GenericAgentTeam` class. To enable the conversational character of the agents, they need to be created with `agent_type="ConvPlanReAct"`. This enables the conversational character and awareness of the team within the prompt strategy.
```python
from loguru import logger

from elemental_agents.core.agent.agent_factory import AgentFactory
from elemental_agents.core.agent_team.generic_agent_team import GenericAgentTeam
from elemental_agents.core.selector.agent_selector_factory import AgentSelectorFactory

factory = AgentFactory()
agent1 = factory.create(
    agent_name="AssistantAgent",
    agent_persona="You are a helpful assistant.",
    agent_type="ConvPlanReAct",
    llm_model="openai|gpt-4.1-mini",
    tools=["Calculator", "CurrentTime", "NoAction"],
)
agent2 = factory.create(
    agent_name="ProgrammerAgent",
    agent_persona="You are a helpful programmer.",
    agent_type="ConvPlanReAct",
    llm_model="openai|gpt-4.1-mini",
    tools=["Calculator", "CurrentTime", "NoAction"],
)

selector_factory = AgentSelectorFactory()
agent_selector = selector_factory.create(
    selector_name="conversational", lead_agent="AssistantAgent"
)

agent_team = GenericAgentTeam(selector=agent_selector)
agent_team.register_agent("AssistantAgent", agent1, "ConvPlanReAct")
agent_team.register_agent("ProgrammerAgent", agent2, "ConvPlanReAct")

result = agent_team.run(
    task="What is the color of sky on Mars?",
    input_session="Example Session"
)
logger.info(f"Result: {result}")
```
Orchestrated team of agents - external planning and task queue
While a single agent may be used with an internal planning prompt strategy like `PlanReAct`, the planning process may also be handled by a specialized planning agent. In this case the planner agent creates the plan and populates a task queue. This process is orchestrated with the flexible `DynamicAgentOrchestrator` class and may also include more steps, including replanning done during the execution.
The example below includes two simple agents to illustrate the process.
```python
from loguru import logger

from elemental_agents.core.agent.agent_factory import AgentFactory
from elemental_agents.core.orchestration.dynamic_agent_orchestrator import (
    DynamicAgentOrchestrator,
)

factory = AgentFactory()
planner_agent = factory.create(
    agent_name="PlannerAgent",
    agent_persona="",
    agent_type="planner",
    llm_model="openai|gpt-4.1-mini",
)
executor_agent = factory.create(
    agent_name="ExecutorAgent",
    agent_persona="You are an expert software engineer.",
    agent_type="ReAct",
    llm_model="openai|gpt-4.1-mini",
    tools=[
        "Calculator",
        "CurrentTime",
        "NoAction",
        "ReadFiles",
        "WriteFile",
        "ListFiles"
    ],
)

orchestrator = DynamicAgentOrchestrator(planner=planner_agent, executor=executor_agent)
result = orchestrator.run(
    instruction="Create FastAPI backend for a TODO application.",
    input_session="Example Session"
)
logger.info(f"Result: {result}")
```
The above example utilizes two steps in the workflow that `DynamicAgentOrchestrator` manages. The complete list includes `planner`, `plan_verifier`, `replanner`, `executor`, `verifier`, and `composer`.
Model Context Protocol Servers
To use MCP Servers in Elemental one needs to define them in the configuration file using the `mcpServers` variable, e.g.

```shell
mcpServers='{"Github": {"command": "npx", "args": ["-y","@modelcontextprotocol/server-github"], "env": {"GITHUB_PERSONAL_ACCESS_TOKEN": "<YOUR GITHUB TOKEN>"}}}'
```
The above value of the `mcpServers` variable adds the Github MCP server. More than one server may be defined in a similar fashion by adding additional entries to the JSON blob.
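Since the value is ordinary JSON, a second server is just another top-level key. A sketch of a two-server configuration, validated with Python's `json` module (the `Filesystem` entry is a common MCP server used here purely as an illustration):

```python
import json

# Two MCP server entries in one JSON blob; the "Filesystem" entry is
# illustrative only -- any MCP-compatible server can be added the same way.
mcp_servers = json.loads("""
{
  "Github": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-github"],
    "env": {"GITHUB_PERSONAL_ACCESS_TOKEN": "<YOUR GITHUB TOKEN>"}
  },
  "Filesystem": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
  }
}
""")
print(sorted(mcp_servers))  # ['Filesystem', 'Github']
```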
A tool from an MCP server may then be added to the agent seamlessly with the `MCP|server_name|tool_name` syntax. In the example below we add the `search_repositories` tool from the Github MCP server defined above as `MCP|Github|search_repositories`.
```python
from loguru import logger

from elemental_agents.core.agent.agent_factory import AgentFactory

TASK = "Search Github repositories for REST API creation in Python."
SESSION = "Test Session"

factory = AgentFactory()
assistant = factory.create(
    agent_name="AssistantAgent",
    agent_persona="You are a helpful assistant.",
    agent_type="ReAct",
    llm_model="openai|gpt-4.1-mini",
    tools=["Calculator", "CurrentTime", "NoAction", "MCP|Github|search_repositories"],
)

result = assistant.run(task=TASK, input_session=SESSION)
logger.info(f"Result: {result}")
```
To make all tools provided by an MCP server available to the agent, use `MCP|server_name|*` as the tool name. This will query the tools and register all of them. The example above may be modified by changing `MCP|Github|search_repositories` to `MCP|Github|*`.