Tree Traversal Agent Architecture: Structured Exploration with Local LLMs

published 7 days ago
[Image: a branch of a sakura tree, representing the tree traversal agent architecture]
Tracing thoughts along branches helps ensure focus and context are maintained

I wanted to see how well agents powered by a locally running reasoning model (Deepseek R1 32B) and an instruction model (Qwen 2.5 32B) could work together. The architecture for this experiment combines divergent thinking with a structured thinking framework that prevents solution drift and enables systematic knowledge building.

The script used to run this experiment can be found on GitHub. It's free to use and modify as you see fit, and it's primed for configuration and abstraction to suit whatever your use case is.

A Playground for Structured Thought

At its core, the script leverages a simple concept: thoughts as nodes in a traversable tree. Each thought maintains its lineage while branching into new directions, so every thought retains both the original context and an awareness of which thoughts are adjacent.

    initial_seed = generate_thoughts(
        context=PROBLEM_STATEMENT_CONTEXT,
        constraints=THINKING_CONSTRAINTS
    )
    
    thought_tree = explore_thought_tree(
        context=PROBLEM_STATEMENT_CONTEXT,
        seed=initial_seed,
        constraints=THINKING_CONSTRAINTS
    )
    
    branch_insights = process_thought_tree(thought_tree, PROCESSING_RULES)
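As an illustration of the node structure described above (this is a minimal sketch, not the script's actual classes), a thought node that tracks its lineage and branches might look like:

```python
from dataclasses import dataclass, field

@dataclass
class ThoughtNode:
    """One thought in the tree; keeps its lineage and its branches."""
    content: str
    depth: int = 0
    parent: "ThoughtNode | None" = None
    children: list["ThoughtNode"] = field(default_factory=list)

    def branch(self, content: str) -> "ThoughtNode":
        """Create a child thought that inherits this node as context."""
        child = ThoughtNode(content=content, depth=self.depth + 1, parent=self)
        self.children.append(child)
        return child

    def lineage(self) -> list[str]:
        """Walk back to the root so a thought can see its full context."""
        node, path = self, []
        while node is not None:
            path.append(node.content)
            node = node.parent
        return list(reversed(path))
```

Because every node can reconstruct its path back to the root, a prompt for a deep branch can always include the original problem statement, which is what keeps exploration from drifting.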

This implementation offers three key advantages over a simple breadth-first approach:

  • Local Model Synergy: Pairs Deepseek R1 32B for reasoning with Qwen 2.5 32B for instruction following, aiming for the best of both worlds (divergent and structured thinking)
  • Built-in Semantic Filtering: Automatically prunes redundant branches using similarity checks
  • Configurable Depth/Similarity: Adjust exploration parameters to match your specific needs

Core Architecture: Tree Traversal for Focused Exploration

The exploration process follows a structured flow:

  1. Initial Seed Generation: Create root thoughts based on the problem context
  2. Branch Identification: Analyze each thought for potential exploration paths
  3. Depth-controlled Exploration: Recursively explore branches while respecting depth limits
  4. Semantic Similarity Pruning: Filter out redundant or too-similar thoughts

    # Main execution flow
    initial_seed = generate_thoughts(context, constraints)
    thought_tree = explore_thought_tree(context, initial_seed, constraints)
    process_thought_tree(thought_tree)
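Steps 3 and 4 of the flow above can be sketched as a short recursive function. This is illustrative only: `generate_branches` and `is_redundant` are hypothetical stand-ins for the model call and the semantic similarity check.

```python
def explore_thought_tree(node, generate_branches, is_redundant, max_depth=5):
    """Recursively expand a thought (a plain dict here), respecting the
    depth limit and skipping branches flagged as redundant."""
    if node["depth"] >= max_depth:
        return node  # depth-controlled exploration stops here
    for text in generate_branches(node):
        if is_redundant(text, node):
            continue  # semantic similarity pruning
        child = {"content": text, "depth": node["depth"] + 1, "children": []}
        node["children"].append(child)
        explore_thought_tree(child, generate_branches, is_redundant, max_depth)
    return node
```

The depth limit bounds how far any single line of thinking can run, while the pruning check keeps sibling branches from converging on the same idea.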

Configurability: Adapt to Your Workflow

The script's behavior can be fine-tuned through configuration:

MODEL_CONFIG = {
    "reasoning_model": "deepseek-r1:32b",  
    "instruction_model": "qwen2.5:32b",   
    "ollama_endpoint": "http://localhost:11434",
}

GENERATION_CONFIG = {
    "temperature_reasoning": 0.8,    
    "temperature_instruction": 0.7,  
    "max_branch_depth": 5,          
    "similarity_threshold": 0.6      
}

These parameters let you control:

  • Temperature settings to balance creativity and focus
  • Branch depth for controlling exploration intensity
  • Similarity thresholds for managing thought novelty
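For illustration, here is a minimal sketch of how a `similarity_threshold` like the 0.6 above could gate thought novelty, assuming each thought is represented by an embedding vector (the helper names here are hypothetical, not the script's API):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def is_redundant(candidate_vec, existing_vecs, threshold=0.6):
    """Prune a new thought if it is too similar to any kept thought."""
    return any(cosine_similarity(candidate_vec, v) >= threshold
               for v in existing_vecs)
```

A lower threshold prunes aggressively and keeps only clearly distinct thoughts; a higher one tolerates near-duplicates and widens the tree.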

Experiment Yourself

To try the script, you'll need Ollama and some Python dependencies:

ollama run deepseek-r1:32b
ollama run qwen2.5:32b
pip install -r requirements.txt

Here's a sample usage exploring a creative naming problem:

PROBLEM_STATEMENT_CONTEXT = """
I need a title for my new movie set in a cyberpunk dystopia
"""

THINKING_CONSTRAINTS = """## THINKING CONSTRAINTS
- Identify 2 complementary approaches
- Find 1 paradoxical element
- Suggest 3 concrete adjustments"""
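For illustration, the context and constraints above could be assembled into a single prompt for the reasoning model; the exact wording the script uses may differ, so treat this as a sketch:

```python
def build_seed_prompt(context: str, constraints: str) -> str:
    """Combine the problem statement and thinking constraints into one
    prompt for the initial seed generation (illustrative wording)."""
    return (
        f"## PROBLEM\n{context.strip()}\n\n"
        f"{constraints.strip()}\n\n"
        "Generate initial seed thoughts that satisfy the constraints."
    )
```

The resulting string would then be sent to the reasoning model through the configured Ollama endpoint.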

Practical Applications

This structured approach to thought exploration proves particularly valuable for:

  • Early-stage Problem Exploration: Map out solution spaces while maintaining clear paths back to core problems
  • Overcoming Creative Blocks: Generate structured alternatives when stuck on a particular approach
  • Systematic Knowledge Synthesis: Build connected understanding of complex topics
  • Team Brainstorming Documentation: Create traceable thought evolution for collaborative projects

What Was Learned Since Last Time and What's Next

In my previous exploration of breadth-first agent architectures, I learned about their use for finding unknown unknowns. However, the outputs were still hard to bring together into a cohesive solution, and it wasn't obvious which adjacencies should be connected. This tree traversal approach aims to address those issues, but I need to do more testing.

At a minimum, this was a good exercise in vetting how capable Qwen 2.5 32B is at giving more consistency to the thoughts and outputs from Deepseek R1 32B. That's a huge win since, while the divergent outputs from Deepseek have been interesting to read, they're not all that useful in an agent context without more structure.

To structured creativity,
James