Many companies want to automate internal workflows with AI. Support tickets, bug reports, documentation search, customer questions, and developer tasks can all be partially automated.
In this guide we will build a simple but powerful AI workflow orchestration system using open-source tools:
- Open WebUI — internal AI interface for employees
- n8n — workflow orchestrator
- Qdrant — semantic search database for AI knowledge
- PostgreSQL — structured data storage
This stack lets you build things like:
- AI-assisted support ticket triage
- Internal company AI assistant
- Automated bug reporting workflows
- AI answering questions using company documentation
Everything here can run on one Linux server.
1. What We Are Building
Imagine a SaaS company workflow:
- A customer submits a support request.
- The system automatically:
- analyzes the request with AI
- searches company documentation
- creates internal tasks
- suggests a reply to support staff
The workflow looks like this:
Customer request → AI analysis → Documentation search → Task creation → Suggested reply
This chaining of services is called workflow orchestration.
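Before wiring anything into n8n, the pipeline is easy to reason about as four chained functions. The sketch below is illustrative only: the function names and canned return values are placeholders for the real AI, Qdrant, and PostgreSQL steps covered later in this guide.

```python
# Minimal sketch of the support-ticket pipeline. Each placeholder is
# later replaced by a real n8n node (AI call, Qdrant query, SQL insert).

def analyze_ticket(text: str) -> dict:
    # Placeholder for the AI-analysis step: classify and summarize.
    return {"summary": text[:60], "category": "billing", "priority": "normal"}

def search_docs(summary: str) -> list:
    # Placeholder for the Qdrant semantic-search step.
    return ["Refunds are processed within 5 business days."]

def create_task(analysis: dict) -> int:
    # Placeholder for the PostgreSQL insert step; returns a ticket id.
    return 1

def suggest_reply(analysis: dict, docs: list) -> str:
    # Placeholder for the reply-drafting step.
    return f"Re: {analysis['summary']} (see: {docs[0]})"

def handle_ticket(text: str) -> dict:
    # Orchestration: run the steps in order and collect the results.
    analysis = analyze_ticket(text)
    docs = search_docs(analysis["summary"])
    ticket_id = create_task(analysis)
    return {"id": ticket_id, "draft": suggest_reply(analysis, docs)}
```

n8n plays exactly this role in production: each function becomes a node, and the orchestrator passes data between them.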
2. Server Requirements
For a small company setup, the following server is sufficient:
- Ubuntu 22.04
- 16 GB RAM
- 4 CPU cores
- 100 GB disk
Install Docker first.
sudo apt update
sudo apt install docker.io docker-compose -y
Enable Docker:
sudo systemctl enable docker
sudo systemctl start docker
3. Project Structure
Create a working directory:
mkdir ai-workflows
cd ai-workflows
Create the main configuration file:
nano docker-compose.yml
4. Docker Compose Setup
Paste the following configuration. Change the example credentials (ai / ai123) before using this anywhere beyond a test server.
version: "3"
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_USER: ai
      POSTGRES_PASSWORD: ai123
      POSTGRES_DB: workflows
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
  qdrant:
    image: qdrant/qdrant
    ports:
      - "6333:6333"
    volumes:
      - qdrant_data:/qdrant/storage
  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_DATABASE=workflows
      - DB_POSTGRESDB_USER=ai
      - DB_POSTGRESDB_PASSWORD=ai123
    depends_on:
      - postgres
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    volumes:
      - openwebui_data:/app/backend/data
volumes:
  postgres_data:
  qdrant_data:
  openwebui_data:
Start everything:
docker-compose up -d
If you installed the newer Docker Compose v2 plugin instead of the apt package, the command is docker compose up -d.
Docker will download the images and start all four services.
5. Access the Tools
After startup, open your browser.
Open WebUI: http://SERVER-IP:3000
Create your admin account.
n8n dashboard: http://SERVER-IP:5678
Qdrant API: http://SERVER-IP:6333
PostgreSQL: localhost:5432
At this point your AI infrastructure is running.
6. Add an AI Model
Open WebUI needs a model backend. The easiest option is to run Ollama on the host.
Install Ollama (local model server):
curl -fsSL https://ollama.com/install.sh | sh
Run a model:
ollama run llama3
Now connect Open WebUI to: http://host.docker.internal:11434
Note for Linux hosts: host.docker.internal does not resolve inside containers by default. Either add extra_hosts: ["host.docker.internal:host-gateway"] to the open-webui service, or point Open WebUI at the server's IP address instead.
Your AI assistant is ready.
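To sanity-check the Ollama connection from a script rather than the UI, you can call its HTTP API directly. The /api/generate endpoint and its model/prompt/stream fields are part of Ollama's documented REST API; the URL below assumes Ollama is listening on its default port on the same machine.

```python
import json
import urllib.request

# Assumes Ollama's default port on the local machine; adjust as needed.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "llama3") -> dict:
    # Ollama's /api/generate takes a model name and a prompt;
    # stream=False returns the whole answer in one JSON object.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt: str) -> str:
    # Send the request and return the generated text.
    body = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

If ask("Say hello") returns text, the same endpoint will work from Open WebUI and from n8n HTTP Request nodes.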
7. Storing Knowledge in Qdrant
AI becomes much more useful when it can search company knowledge.
Examples of documents to store:
- Product documentation
- Support procedures
- API documentation
- Internal manuals
Example collection creation:
curl -X PUT http://localhost:6333/collections/company_docs \
-H 'Content-Type: application/json' \
-d '{
"vectors": {
"size": 384,
"distance": "Cosine"
}
}'
Documents are converted into embeddings and stored in Qdrant. AI can now search them semantically.
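A rough sketch of that ingestion step in Python: the upsert body matches Qdrant's PUT /collections/{name}/points REST endpoint, but the embedding function here is a deterministic placeholder. A real deployment would use an embedding model with 384-dimensional output to match the collection created above.

```python
import hashlib

def fake_embedding(text: str, size: int = 384) -> list:
    # PLACEHOLDER ONLY: derives deterministic pseudo-values from a hash.
    # In production, replace with a real embedding model whose output
    # dimension matches the collection's vector size (384 here).
    digest = hashlib.sha256(text.encode()).digest()
    return [digest[i % len(digest)] / 255.0 for i in range(size)]

def upsert_payload(doc_id: int, text: str) -> dict:
    # Request body for PUT /collections/company_docs/points (Qdrant REST API).
    # The original text is kept in the payload so search results are readable.
    return {
        "points": [
            {"id": doc_id, "vector": fake_embedding(text), "payload": {"text": text}}
        ]
    }
```

Sending this body to the collection endpoint stores one searchable document; in practice you would batch many points per request.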
8. Creating the Workflow in n8n
Open the n8n dashboard. Create a new workflow.
Example automation: Webhook → AI Analysis → Knowledge Search → Store Ticket → Suggest Reply
Step 1: Webhook trigger
POST /support-ticket
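A sample request body for this webhook might look like the following. The field names are an assumption for this tutorial and should match whatever your intake form or helpdesk actually sends.

```python
# Example payload a client might POST to /support-ticket.
# Field names are an assumption for this tutorial; adjust to your intake form.
sample_ticket = {
    "customer_email": "jane@example.com",
    "subject": "Charged twice this month",
    "message": "My card was billed two times for the March invoice.",
}

def validate_ticket(payload: dict) -> bool:
    # n8n's webhook node passes the JSON body straight through,
    # so reject incomplete tickets at the start of the workflow.
    return all(payload.get(key) for key in ("customer_email", "message"))
```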
Step 2: AI analysis
Send ticket text to the AI model.
Step 3: Knowledge search
Query Qdrant for similar documentation.
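The search request is a single POST to Qdrant. Below is a small helper that builds the body for POST /collections/company_docs/points/search (a documented Qdrant REST endpoint); the query vector would come from embedding the ticket text with the same model used at ingestion.

```python
def search_payload(query_vector: list, limit: int = 3) -> dict:
    # Request body for POST /collections/company_docs/points/search:
    # returns the `limit` nearest points along with their stored payloads,
    # so the workflow can pass the matched documentation text to the AI.
    return {"vector": query_vector, "limit": limit, "with_payload": True}
```

In n8n this is one HTTP Request node pointing at the qdrant service.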
Step 4: Save ticket
Insert structured data into PostgreSQL:
INSERT INTO tickets (customer_email, summary, category, priority)
VALUES ($1, $2, $3, $4);
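The values for those four parameters come from the AI-analysis step. Here is a sketch that assumes the model was asked to return JSON with summary, category, and priority fields; the table definition is also an assumption, since the schema is otherwise up to you.

```python
import json

# Assumed schema for the tickets table used by the INSERT above.
TICKETS_DDL = """
CREATE TABLE IF NOT EXISTS tickets (
    id SERIAL PRIMARY KEY,
    customer_email TEXT NOT NULL,
    summary TEXT,
    category TEXT,
    priority TEXT,
    created_at TIMESTAMPTZ DEFAULT now()
);
"""

ALLOWED_PRIORITIES = {"low", "normal", "high", "urgent"}

def to_insert_params(customer_email: str, ai_json: str) -> tuple:
    # Map the AI analysis (assumed to be a JSON string) onto the
    # ($1, $2, $3, $4) parameters of the INSERT statement.
    analysis = json.loads(ai_json)
    priority = analysis.get("priority", "normal")
    if priority not in ALLOWED_PRIORITIES:
        priority = "normal"  # never trust model output blindly
    return (customer_email, analysis["summary"], analysis["category"], priority)
```

Validating the priority field matters: models occasionally invent values, and a bad enum should not reach the database.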
Step 5: Suggested reply
The AI combines ticket content and documentation results to generate a draft response.
9. Example AI Prompt
A practical prompt used inside the workflow:
You are a SaaS support assistant.
Customer message:
{{ticket_text}}
Relevant documentation:
{{qdrant_results}}
Write a clear support reply.
Do not invent information.
This produces a high-quality suggested response for the support team.
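To iterate on the prompt outside n8n, the same template can be rendered in plain Python. The double-brace placeholders above are n8n expressions; this sketch mirrors that substitution with Python's own string formatting.

```python
# Mirror of the workflow prompt; n8n fills {{ticket_text}} and
# {{qdrant_results}} itself, this version uses Python formatting instead.
PROMPT_TEMPLATE = """You are a SaaS support assistant.

Customer message:
{ticket_text}

Relevant documentation:
{qdrant_results}

Write a clear support reply.
Do not invent information."""

def render_prompt(ticket_text: str, docs: list) -> str:
    # Join the Qdrant hits into a bullet list so the model can cite them.
    return PROMPT_TEMPLATE.format(
        ticket_text=ticket_text,
        qdrant_results="\n".join(f"- {d}" for d in docs),
    )
```

This makes it easy to test prompt changes against saved tickets before touching the live workflow.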
10. Real Company Use Cases
Once the system works, you can expand it.
- AI documentation assistant: employees ask questions about internal systems.
- Automated support classification: tickets are automatically routed to the correct team.
- Developer bug report generation: AI extracts logs and creates developer tickets.
- Customer onboarding assistant: AI answers new customer questions using documentation.
11. Why This Stack Works
This setup separates responsibilities:
- Open WebUI — Human interface to AI
- n8n — Automation and orchestration
- Qdrant — AI knowledge memory
- PostgreSQL — Structured business data
Together they form a modern AI workflow platform.
12. Final Advice
Start simple. Do not try to automate everything at once.
A good first project is: "AI suggests replies to support tickets."
Once that works, expand to:
- Ticket routing
- Knowledge search
- Developer workflows
This is exactly how many companies introduce AI into real business operations. Small automation steps, built on a solid foundation.
Technical Glossary
- Workflow Orchestration: automated coordination of tasks across multiple services and APIs. Tools like n8n enable non-engineers to design complex automation logic.
- Vector Database: a specialized database for storing embeddings (numerical representations of meaning). Qdrant enables semantic search and similarity matching for AI systems.
- Embedding: a numerical vector representation of text or data that captures semantic meaning. Generated by AI models, embeddings power similarity search and knowledge retrieval (RAG).
- RAG (Retrieval-Augmented Generation): a technique where AI retrieves relevant documents or data before generating answers, improving accuracy and context awareness.
- Webhook: an HTTP callback that allows one service to notify another when an event occurs. Enables real-time integration between applications.
Ready to Build AI Workflows?
If you want to implement a production-ready AI automation stack with n8n, Qdrant, and all the infrastructure to support it, let's discuss how to deploy this on your infrastructure.