Self-Hosted AI Stack Series
    Part 6 of 8

    CrewAI — Multi-Agent Workflows

    Deploy autonomous AI teams on your VPS for research, analysis, and content creation.

    Prerequisites

    Completed Part 1 (Ollama), Python 3.10+, basic Python familiarity

    Time to Complete

    35–45 minutes

    Recommended Plan

    8GB ($40/mo) recommended for larger context windows

    Looking for a quick-start guide? Check out our standalone CrewAI Deployment Guide for a streamlined setup walkthrough.

    Introduction

    Single-model chat is useful, but the real power of AI comes from orchestrating multiple agents that collaborate. CrewAI lets you define AI "crews" — teams of specialized agents that work together on complex tasks.

    Research, analysis, writing, review — all automated, all private, all on your VPS.

    CrewAI Architecture

    Core Concepts
    ┌──────────────────────────────────────────────────┐
    │                       CREW                       │
    │        Process: Sequential / Hierarchical        │
    │                                                  │
    │  ┌──────────┐   ┌──────────┐   ┌──────────┐      │
    │  │ AGENT 1  │   │ AGENT 2  │   │ AGENT 3  │      │
    │  │Researcher│──▸│ Analyst  │──▸│  Writer  │      │
    │  │          │   │          │   │          │      │
    │  │ Role     │   │ Role     │   │ Role     │      │
    │  │ Goal     │   │ Goal     │   │ Goal     │      │
    │  │Backstory │   │Backstory │   │Backstory │      │
    │  └────┬─────┘   └────┬─────┘   └────┬─────┘      │
    │       │              │              │            │
    │  ┌────▼─────┐   ┌────▼─────┐   ┌────▼─────┐      │
    │  │  TASK 1  │   │  TASK 2  │   │  TASK 3  │      │
    │  │ Research │   │ Analyze  │   │  Write   │      │
    │  │ Topic    │   │ Findings │   │  Report  │      │
    │  └──────────┘   └──────────┘   └──────────┘      │
    │                      │                           │
    │               ┌──────▼──────┐                    │
    │               │    TOOLS    │                    │
    │               │ Web Search  │                    │
    │               │ File Access │                    │
    │               │ RAG Query   │                    │
    │               └─────────────┘                    │
    └──────────────────────────────────────────────────┘
                            │
                     ┌──────▼──────┐
                     │   Ollama    │
                     │   (Local)   │
                     └─────────────┘

    Installing CrewAI

    Set up Python environment
    # Create project directory
    mkdir -p ~/ai-stack/crewai && cd ~/ai-stack/crewai
    
    # Create virtual environment
    python3 -m venv venv
    source venv/bin/activate
    
    # Install CrewAI
    pip install crewai crewai-tools

    Configure CrewAI to use your local Ollama instance instead of OpenAI:

    .env
    OPENAI_API_BASE=http://localhost:11434/v1
    OPENAI_API_KEY=ollama
    OPENAI_MODEL_NAME=mistral
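    Before running a crew, it's worth a quick sanity check that the .env defines all three keys the OpenAI-compatible client reads. A minimal sketch (parse_env and check_env are hypothetical helpers for this guide, not part of CrewAI):

```python
# Keys the OpenAI-compatible client expects, per the .env above
REQUIRED_KEYS = {"OPENAI_API_BASE", "OPENAI_API_KEY", "OPENAI_MODEL_NAME"}

def parse_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

def check_env(text: str) -> list:
    """Return a sorted list of required keys missing from the .env text."""
    return sorted(REQUIRED_KEYS - parse_env(text).keys())
```

    Run check_env on the contents of your .env file; an empty list means all three keys are present.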

    Your First Crew: Content Research Pipeline

    Build a 3-agent crew that researches, analyzes, and writes:

    research_crew.py
    #!/usr/bin/env python3
    """Content research pipeline using CrewAI + Ollama."""
    
    from crewai import Agent, Task, Crew, Process
    from langchain_community.llms import Ollama  # pip install langchain-community
    
    # Configure local LLM (newer CrewAI releases can instead use crewai.LLM
    # with model="ollama/mistral")
    llm = Ollama(model="mistral", base_url="http://localhost:11434")
    
    # Define Agents
    researcher = Agent(
        role="Research Specialist",
        goal="Gather comprehensive information on the given topic",
        backstory="""You are an expert researcher with years of experience
        in finding and synthesizing information from multiple sources.
        You excel at identifying key trends and data points.""",
        llm=llm,
        verbose=True
    )
    
    analyst = Agent(
        role="Data Analyst",
        goal="Analyze research findings and extract actionable insights",
        backstory="""You are a skilled analyst who turns raw research
        into structured insights. You identify patterns, gaps, and
        opportunities that others miss.""",
        llm=llm,
        verbose=True
    )
    
    writer = Agent(
        role="Technical Writer",
        goal="Create a clear, well-structured report from the analysis",
        backstory="""You are a professional writer who excels at
        making complex topics accessible. Your reports are concise,
        actionable, and well-organized.""",
        llm=llm,
        verbose=True
    )
    
    # Define Tasks
    research_task = Task(
        description="""Research the topic: {topic}
        Gather key facts, statistics, trends, and expert opinions.
        Focus on recent developments and practical implications.""",
        expected_output="A comprehensive research brief with key findings",
        agent=researcher
    )
    
    analysis_task = Task(
        description="""Analyze the research findings:
        - Identify the top 3 key trends
        - Highlight opportunities and risks
        - Compare with industry benchmarks""",
        expected_output="A structured analysis with actionable insights",
        agent=analyst
    )
    
    writing_task = Task(
        description="""Write a professional report that includes:
        - Executive summary (3 sentences)
        - Key findings (bullet points)
        - Detailed analysis
        - Recommendations""",
        expected_output="A polished, publication-ready report",
        agent=writer
    )
    
    # Assemble Crew
    crew = Crew(
        agents=[researcher, analyst, writer],
        tasks=[research_task, analysis_task, writing_task],
        process=Process.sequential,
        verbose=True
    )
    
    if __name__ == "__main__":
        result = crew.kickoff(
            inputs={"topic": "self-hosted AI infrastructure trends in 2025"}
        )
        print("\n" + "="*60)
        print("FINAL REPORT:")
        print("="*60)
        print(result)
    Run the pipeline:
    python3 research_crew.py
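    kickoff() returns the final task's output, which you'll usually want to persist for scheduled runs. A small helper sketch (the reports directory and filename pattern are just examples):

```python
from datetime import datetime
from pathlib import Path

def save_report(result: str, out_dir: str = "reports") -> Path:
    """Write a crew run's final output to a timestamped markdown file."""
    directory = Path(out_dir)
    directory.mkdir(parents=True, exist_ok=True)
    path = directory / f"report-{datetime.now():%Y%m%d-%H%M%S}.md"
    path.write_text(str(result), encoding="utf-8")
    return path

# After crew.kickoff():
#   path = save_report(result)
#   print(f"Saved to {path}")
```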

    Custom Tools Integration

    Connect CrewAI to your RAG pipeline from Part 3:

    rag_tool.py
    from crewai_tools import BaseTool  # newer releases: from crewai.tools import BaseTool
    from qdrant_client import QdrantClient
    from langchain_community.embeddings import OllamaEmbeddings
    
    class RAGSearchTool(BaseTool):
        name: str = "Company Knowledge Search"
        description: str = "Search the company knowledge base for relevant information"
        
        def _run(self, query: str) -> str:
            embeddings = OllamaEmbeddings(
                base_url="http://localhost:11434",
                model="nomic-embed-text"
            )
            client = QdrantClient(
                url="http://localhost:6333",
                api_key="your-qdrant-api-key"
            )
            
            vector = embeddings.embed_query(query)
            results = client.search(
                collection_name="documents",
                query_vector=vector,
                limit=5
            )
            
            context = "\n\n".join([r.payload["text"] for r in results])
            return f"Relevant context:\n{context}"
    
    # Add to your agent:
    # researcher = Agent(..., tools=[RAGSearchTool()])
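    Because _run is an ordinary method, you can exercise the retrieval formatting against a stub before wiring the tool into a crew, with no live Qdrant instance needed. A sketch (FakeHit and format_context are hypothetical test helpers, not CrewAI or Qdrant APIs):

```python
# Stand-in for a Qdrant search hit: the tool only reads .payload["text"]
class FakeHit:
    def __init__(self, text: str):
        self.payload = {"text": text}

def format_context(results) -> str:
    """The same formatting step RAGSearchTool._run applies to search hits."""
    context = "\n\n".join(r.payload["text"] for r in results)
    return f"Relevant context:\n{context}"

# format_context([FakeHit("Doc A"), FakeHit("Doc B")])
# returns the two chunks joined by a blank line under a header
```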

    Advanced Crew Patterns

    SEO Content Pipeline

    Keyword Researcher → Content Strategist → Writer → SEO Reviewer. Outputs optimized blog posts with meta descriptions and heading structure.

    Code Review Crew

    Security Auditor → Performance Analyst → Documentation Checker. Reviews PRs for vulnerabilities, performance issues, and missing docs.

    Customer Research Team

    Data Collector → Sentiment Analyst → Report Writer. Processes customer feedback and generates actionable reports.
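    All three patterns share the same sequential hand-off: each agent's output becomes the next task's input. Stripped of the framework, the control flow of Process.sequential is essentially:

```python
def run_sequential(stages, topic: str) -> str:
    """Mimic a sequential crew: feed each stage's output to the next."""
    output = topic
    for stage in stages:
        output = stage(output)
    return output

# Hypothetical stages standing in for agents:
stages = [
    lambda t: f"research on {t}",
    lambda r: f"analysis of {r}",
    lambda a: f"report: {a}",
]
# run_sequential(stages, "pricing")
# returns "report: analysis of research on pricing"
```

    Hierarchical crews replace this fixed chain with a manager agent that decides which worker to delegate to at each step.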

    Running Crews as Services

    Wrap crews in a FastAPI endpoint for webhook/cron triggering:

    crew_api.py
    from fastapi import FastAPI
    from pydantic import BaseModel
    import uvicorn
    
    app = FastAPI()
    
    class CrewRequest(BaseModel):
        topic: str
    
    @app.post("/run-research")
    def run_research(request: CrewRequest):
        # Sync handler on purpose: FastAPI runs it in a threadpool, so the
        # blocking crew.kickoff() call doesn't stall the event loop
        # (import your crew from research_crew.py)
        result = crew.kickoff(inputs={"topic": request.topic})
        return {"result": str(result)}
    
    if __name__ == "__main__":
        uvicorn.run(app, host="0.0.0.0", port=8001)
    Install the API dependencies and start the service:
    pip install fastapi uvicorn
    python3 crew_api.py
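    To keep the API running across reboots, you can wrap it in a systemd unit. A sketch, assuming the project lives at the path from the install step (adjust the user and paths to your setup):

```ini
# /etc/systemd/system/crew-api.service
[Unit]
Description=CrewAI research API
After=network.target

[Service]
User=youruser
WorkingDirectory=/home/youruser/ai-stack/crewai
ExecStart=/home/youruser/ai-stack/crewai/venv/bin/python3 crew_api.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

    Enable it with: sudo systemctl enable --now crew-api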

    What's Next?

    You've deployed autonomous AI teams that would otherwise rack up substantial per-run API fees through commercial providers, all running privately on your VPS. In Part 7: n8n + Ollama, we connect these agents to real-world automation — triggering workflows from emails, webhooks, schedules, and app integrations.