A Python framework for logic-only coding of AI agents, with streaming, tool calls, and multi-LLM provider support.
Only the fairly stable versions are published on PyPI; to get the latest experimental version, clone this repository and install it!
For the package on PyPI:

```bash
pip install open-taranis --upgrade
```

or, for the latest version:

```bash
git clone https://github.com/SyntaxError4Life/open-taranis && cd open-taranis/ && pip install .
```
### Simplest
```python
import open_taranis as T

client = T.clients.openrouter()  # API_KEY in env var

messages = [
    T.create_user_prompt("Tell me about yourself")
]

stream = T.clients.openrouter_request(
    client=client,
    messages=messages,
    model="nvidia/nemotron-3-nano-30b-a3b:free",
)

print("assistant : ", end="")
for token, tool, tool_bool in T.handle_streaming(stream):
    if token:
        print(token, end="")
```
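`handle_streaming` yields `(token, tool, tool_bool)` tuples, as the loop above shows. Collecting the text tokens into one reply string can be sketched against a mocked stream of the same shape (plain Python, no API call; `collect_reply` and the skip-on-`tool_bool` behavior are illustrative assumptions, not framework API):

```python
def collect_reply(stream):
    # Concatenate text tokens; skip entries that carry a tool call
    # (tool_bool set) instead of text.
    reply = ""
    for token, tool, tool_bool in stream:
        if token and not tool_bool:
            reply += token
    return reply

# Mocked stream in the same (token, tool, tool_bool) shape
fake_stream = [
    ("Hel", None, False),
    ("lo", None, False),
    (None, {"name": "search"}, True),  # a tool call carries no text
]
print(collect_reply(fake_stream))  # Hello
```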
### Gradio web front

To create a simple display using Gradio as the backend:

```python
import open_taranis as T
import open_taranis.web_front as W
import gradio as gr

gr.ChatInterface(
    fn=W.chat_fn_gradio(
        client=T.clients.openrouter(),  # API_KEY in env var
        request=T.clients.openrouter_request,
        model="nvidia/nemotron-3-nano-30b-a3b:free",
        _system_prompt="You are an agent named **Taranis**"
    ).create_fn(),
    title="web front"
).launch()
```
### Simple agent

Make a simple agent with a context window over the last 6 turns:

```python
import open_taranis as T

class Agent(T.agent_base):
    def __init__(self):
        super().__init__()
        self.client = T.clients.openrouter()
        self._system_prompt = [T.create_system_prompt(
            "You're an agent named **Taranis**!"
        )]

    def create_stream(self):
        return T.clients.openrouter_request(
            client=self.client,
            messages=self._system_prompt + self.messages,
            model="nvidia/nemotron-3-nano-30b-a3b:free"
        )

    def manage_messages(self):
        self.messages = self.messages[-12:]  # Each turn has 1 user and 1 assistant message

My_agent = Agent()

while True:
    prompt = input("user : ")
    print("\n\nagent : ", end="")
    for t in My_agent(prompt):
        print(t, end="", flush=True)
    print("\n\n", "=" * 60, "\n")
```

### TUI

- `taranis help`: in the name...
- `taranis update`: upgrade the framework
- `taranis open`: open the TUI
Type `/help` to start.
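A note on the agent example above: trimming to the last 6 turns means keeping the last 12 messages, since each turn adds one user and one assistant message. The slicing logic can be sketched standalone in plain Python (`trim_history` is an illustrative helper, not part of the framework):

```python
def trim_history(messages, max_turns=6):
    # Each turn contributes 2 messages (user + assistant),
    # so keep at most max_turns * 2 of the most recent entries.
    return messages[-max_turns * 2:]

history = [{"role": "user", "content": f"msg {i}"} for i in range(20)]
trimmed = trim_history(history)
print(len(trimmed))  # 12
```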
### Roadmap

- Base of the docs (coding some things before the real docs)
- v0.0.1: start
- v0.0.x: Add and confirm other API providers (in the cloud, not locally)
- v0.1.x: Functionality verifications in examples
- ≥ v0.2.0: Add features for the logic-only coding approach, starting with `agent_base`
- v0.3.x: Add a full agent in the TUI and upgrade web client deployments
- The rest will follow soon.
### v0.0.x : The start
- v0.0.4 : Add `xai` and `groq` providers
- v0.0.6 : Add `huggingface` provider and args for `clients.veniceai_request`
### v0.1.x : Gradio, commands and TUI
- v0.1.0 : Start the docs, add an update-checker, and prepare for the continuation of the project...
- v0.1.1 : Added code to deploy a frontend with Gradio (no complex logic yet, e.g. tool_calls)
- v0.1.2 : Fixed a display bug in `web_front` and experimentally added ollama as a backend
- v0.1.3 : Fixed the memory reset in `web_front` and replaced the ollama module with the OpenAI front (works 100 times better)
- v0.1.4 : Fixed `web_front` for native use on Hugging Face, as well as `handle_streaming`, which had tool retrieval issues
- v0.1.7 : Added a TUI and commands, plus detection of env variables (API keys) and tools in the framework
### v0.2.x : Agents
- v0.2.0 : Adding `agent_base`
