
Building a Secure and Memory-Enabled Cipher Workflow for AI Agents with Dynamic LLM Selection and API Integration

In this tutorial, we build a compact but fully functional Cipher workflow. We start by capturing the Gemini API key in the Colab UI without exposing it in code. We then implement a dynamic LLM selection function that automatically switches between OpenAI, Gemini, or Anthropic depending on which API key is available. The setup stage installs Node.js and the Cipher CLI, after which we generate a cipher.yml configuration that enables the memory agent with long-term recall. We write helper functions to run Cipher directly from Python, store key project decisions as persistent memories, retrieve them on demand, and finally spin Cipher up in API mode for external integration.

import os, getpass
os.environ["GEMINI_API_KEY"] = getpass.getpass("Enter your Gemini API key: ").strip()


import subprocess, tempfile, pathlib, textwrap, time, requests, shlex


def choose_llm():
   if os.getenv("OPENAI_API_KEY"):
       return "openai", "gpt-4o-mini", "OPENAI_API_KEY"
   if os.getenv("GEMINI_API_KEY"):
       return "gemini", "gemini-2.5-flash", "GEMINI_API_KEY"
   if os.getenv("ANTHROPIC_API_KEY"):
       return "anthropic", "claude-3-5-haiku-20241022", "ANTHROPIC_API_KEY"
   raise RuntimeError("Set one API key before running.")

We begin by entering the Gemini API key securely with getpass so it stays hidden in the Colab UI. We then define the choose_llm() function, which checks our environment variables and automatically picks the appropriate LLM provider, model, and key variable based on what is available.
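Because the function checks keys in a fixed order, the selected provider is predictable even when several keys are set. A minimal sketch (using dummy placeholder values, never real credentials) illustrating the precedence:

```python
import os

# Placeholder values for illustration only; never hard-code real keys.
os.environ["GEMINI_API_KEY"] = "dummy-gemini"
os.environ["OPENAI_API_KEY"] = "dummy-openai"
os.environ.pop("ANTHROPIC_API_KEY", None)

def choose_llm():
    # Same precedence as above: OpenAI first, then Gemini, then Anthropic.
    if os.getenv("OPENAI_API_KEY"):
        return "openai", "gpt-4o-mini", "OPENAI_API_KEY"
    if os.getenv("GEMINI_API_KEY"):
        return "gemini", "gemini-2.5-flash", "GEMINI_API_KEY"
    if os.getenv("ANTHROPIC_API_KEY"):
        return "anthropic", "claude-3-5-haiku-20241022", "ANTHROPIC_API_KEY"
    raise RuntimeError("Set one API key before running.")

first = choose_llm()
print(first[0])  # "openai" wins because it is checked first

# Removing the OpenAI key demotes selection to Gemini.
del os.environ["OPENAI_API_KEY"]
print(choose_llm()[0])
```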

def run(cmd, check=True, env=None):
   print("▸", cmd)
   p = subprocess.run(cmd, shell=True, text=True, capture_output=True, env=env)
   if p.stdout: print(p.stdout)
   if p.stderr: print(p.stderr)
   if check and p.returncode != 0:
       raise RuntimeError(f"Command failed: {cmd}")
   return p

We create a run() function that executes shell commands, prints both stdout and stderr, and raises an error when check is enabled and the command fails, making our workflow execution more transparent and reliable.
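A quick sanity check of the helper with harmless shell commands shows both modes: check=False tolerates a non-zero exit code, while the default raises:

```python
import subprocess

def run(cmd, check=True, env=None):
    print("▸", cmd)
    p = subprocess.run(cmd, shell=True, text=True, capture_output=True, env=env)
    if p.stdout: print(p.stdout)
    if p.stderr: print(p.stderr)
    if check and p.returncode != 0:
        raise RuntimeError(f"Command failed: {cmd}")
    return p

ok = run("echo hello")            # succeeds, returns a CompletedProcess
bad = run("exit 7", check=False)  # non-zero exit tolerated when check=False
try:
    run("exit 7")                 # the same failure raises when check=True
except RuntimeError as e:
    print("caught:", e)
```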

def ensure_node_and_cipher():
   run("sudo apt-get update -y && sudo apt-get install -y nodejs npm", check=False)
   run("npm install -g @byterover/cipher")

We define ensure_node_and_cipher() to install Node.js, npm, and the Cipher CLI globally, ensuring our environment has all the necessary dependencies before running any Cipher-related commands.
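Since the apt and npm steps can fail quietly in some notebook environments, it is worth confirming the binaries actually landed on PATH before continuing. The check_tools helper below is our own small addition, not part of Cipher:

```python
import shutil

def check_tools(tools=("node", "npm", "cipher")):
    """Map each expected tool name to its resolved path on PATH (or None if absent)."""
    return {t: shutil.which(t) for t in tools}

status = check_tools()
missing = [t for t, p in status.items() if p is None]
print("missing:", missing or "none")
```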

def write_cipher_yml(workdir, provider, model, key_env):
   cfg = """
llm:
 provider: {provider}
 model: {model}
 apiKey: ${key_env}
systemPrompt:
 enabled: true
 content: |
   You are an AI programming assistant with long-term memory of prior decisions.
embedding:
 disabled: true
mcpServers:
 filesystem:
   type: stdio
   command: npx
   args: ['-y','@modelcontextprotocol/server-filesystem','.']
""".format(provider=provider, model=model, key_env=key_env)


   (workdir / "memAgent").mkdir(parents=True, exist_ok=True)
   (workdir / "memAgent" / "cipher.yml").write_text(cfg.strip() + "\n")

We implement write_cipher_yml() to generate a cipher.yml configuration inside the memAgent folder, setting the chosen LLM provider, model, and API key variable, enabling a system prompt for long-term memory, and registering the MCP filesystem server for file operations.
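One subtlety worth previewing: str.format only substitutes the braced names, so the literal $ survives and apiKey ends up as a $VARIABLE reference that Cipher resolves from the environment at runtime. A minimal check of the rendering:

```python
# A trimmed slice of the template, just to show how the substitution behaves.
template = "llm:\n  provider: {provider}\n  model: {model}\n  apiKey: ${key_env}\n"
rendered = template.format(provider="gemini",
                           model="gemini-2.5-flash",
                           key_env="GEMINI_API_KEY")
print(rendered)  # apiKey line reads: apiKey: $GEMINI_API_KEY
```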

def cipher_once(text, env=None, cwd=None):
   cmd = f'cipher {shlex.quote(text)}'
   p = subprocess.run(cmd, shell=True, text=True, capture_output=True, env=env, cwd=cwd)
   print("Cipher says:\n", p.stdout or p.stderr)
   return p.stdout.strip() or p.stderr.strip()

We define cipher_once() to run a single Cipher command with the given text, capture and display its output, and return the response, letting us interact with Cipher programmatically from Python.
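The shlex.quote call is what keeps arbitrary prompt text (spaces, semicolons, quotes) from being interpreted by the shell. For example:

```python
import shlex

prompt = "Store decision: use pydantic; pytest fixtures for testing."
cmd = f"cipher {shlex.quote(prompt)}"
# The whole prompt arrives as one safely quoted argument, so the
# semicolon is not treated as a shell command separator.
print(cmd)
```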

def start_api(env, cwd):
   proc = subprocess.Popen("cipher --mode api", shell=True, env=env, cwd=cwd,
                           stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True)
   for _ in range(30):
       try:
           r = requests.get("http://127.0.0.1:3000/health", timeout=2)
           if r.ok:
               print("API /health:", r.text)
               break
       except requests.RequestException:
           pass
       time.sleep(1)
   return proc

We create start_api() to launch Cipher in API mode as a subprocess, then repeatedly poll its /health endpoint until it responds, ensuring the API server is ready before we continue.
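This polling pattern generalizes to any readiness probe. The wait_until helper below is our own sketch, not part of Cipher: it retries a callable, treating exceptions as "not ready yet", until it succeeds or the attempts run out:

```python
import time

def wait_until(probe, attempts=30, delay=1.0):
    """Call probe() until it returns True; return True on success, False on timeout."""
    for _ in range(attempts):
        try:
            if probe():
                return True
        except Exception:
            pass  # e.g. connection refused while the server is still starting
        time.sleep(delay)
    return False

# Demo with a fake probe that becomes ready on the third attempt.
state = {"calls": 0}
def fake_probe():
    state["calls"] += 1
    return state["calls"] >= 3

ready = wait_until(fake_probe, attempts=5, delay=0.01)
print("ready:", ready, "after", state["calls"], "calls")
```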

def main():
   provider, model, key_env = choose_llm()
   ensure_node_and_cipher()
   workdir = pathlib.Path(tempfile.mkdtemp(prefix="cipher_demo_"))
   write_cipher_yml(workdir, provider, model, key_env)
   env = os.environ.copy()


   cipher_once("Store decision: use pydantic for config validation; pytest fixtures for testing.", env, str(workdir))
   cipher_once("Remember: follow conventional commits; enforce black + isort in CI.", env, str(workdir))


   cipher_once("What did we standardize for config validation and Python formatting?", env, str(workdir))


   api_proc = start_api(env, str(workdir))
   time.sleep(3)
   api_proc.terminate()


if __name__ == "__main__":
   main()

In main(), we select the LLM provider, install the dependencies, and create a temporary working directory with the cipher.yml configuration. We then store key project decisions in Cipher's memory, query them back, and finally start the Cipher API server briefly before terminating it, demonstrating both CLI and API interactions.

In conclusion, we have a working Cipher environment that keeps API keys secure, selects the right LLM provider automatically, and creates a memory-enabled agent entirely through Python automation. The implementation covers decision logging, memory retrieval, and a live API endpoint, all orchestrated in a notebook-friendly workflow. This makes the setup reusable across other AI development pipelines, letting us store and query project knowledge with a lightweight, easy-to-deploy agent.




Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of artificial intelligence for social good. His most recent endeavor is the AI media platform Marktechpost, noted for in-depth coverage of machine learning and deep learning news that is technically sound yet easily understandable to a broad audience. The platform boasts over 2 million monthly views, reflecting its popularity among readers.


2025-08-12 02:02:00
