ollama_server
ipw.agents.mcp.ollama_server
Ollama MCP server for local models.
OllamaMCPServer
Bases: BaseMCPServer
MCP server for local models via Ollama.
Supports any model available in Ollama (Llama, Qwen, DeepSeek, etc.).
Example

```python
server = OllamaMCPServer(
    model_name="llama3.2:1b",
    base_url="http://localhost:11434",
)
result = server.execute("What is 2+2?")
print(result.content)   # "4"
print(result.cost_usd)  # 0.0 (local model, no cost)
```
Source code in intelligence-per-watt/src/ipw/agents/mcp/ollama_server.py
__init__(model_name, base_url=None, telemetry_collector=None, event_recorder=None, **ollama_params)
Initialize Ollama MCP server.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_name` | `str` | Ollama model name (e.g., `"llama3.2:1b"`, `"qwen2.5:0.5b"`) | *required* |
| `base_url` | `Optional[str]` | Ollama server URL (default: `http://127.0.0.1:11434`) | `None` |
| `telemetry_collector` | `Optional[Any]` | Energy monitor collector | `None` |
| `event_recorder` | `Optional[Any]` | `EventRecorder` for per-action tracking | `None` |
| `**ollama_params` | `Any` | Additional Ollama parameters (temperature, etc.) | `{}` |
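Extra keyword arguments such as `temperature` are generation settings, which Ollama's HTTP API expects nested under an `options` key in the request body. A minimal sketch of how such kwargs could be folded into a `POST /api/generate` payload (the payload shape follows Ollama's documented API; the helper name is illustrative, not part of `OllamaMCPServer`):

```python
from typing import Any, Dict


def build_generate_payload(
    model_name: str, prompt: str, **ollama_params: Any
) -> Dict[str, Any]:
    """Illustrative helper: fold extra kwargs into Ollama's `options` field.

    Ollama's POST /api/generate accepts generation settings
    (temperature, top_p, num_ctx, ...) under an "options" key.
    """
    return {
        "model": model_name,
        "prompt": prompt,
        "stream": False,  # ask for a single JSON response, not a stream
        "options": dict(ollama_params),
    }


payload = build_generate_payload("llama3.2:1b", "What is 2+2?", temperature=0.0)
```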
Source code in intelligence-per-watt/src/ipw/agents/mcp/ollama_server.py
health_check()
Check if Ollama server is available and model is loaded.
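A check like this can be done against Ollama's `GET /api/tags` endpoint, which lists locally pulled models. A minimal sketch under that assumption (the function names here are illustrative, not the actual `OllamaMCPServer` implementation; the response parsing is split out so it can be exercised without a running server):

```python
import json
import urllib.request


def model_in_tags(tags_json: str, model_name: str) -> bool:
    """Return True if `model_name` appears in an Ollama /api/tags response body."""
    models = json.loads(tags_json).get("models", [])
    return any(m.get("name") == model_name for m in models)


def health_check(model_name: str, base_url: str = "http://127.0.0.1:11434") -> bool:
    """Return True if the Ollama server responds and has the model pulled."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
            return model_in_tags(resp.read().decode(), model_name)
    except OSError:
        return False  # server unreachable or request failed


# Offline example using a canned /api/tags payload:
sample = json.dumps({"models": [{"name": "llama3.2:1b"}, {"name": "qwen2.5:0.5b"}]})
ok = model_in_tags(sample, "llama3.2:1b")
```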
Source code in intelligence-per-watt/src/ipw/agents/mcp/ollama_server.py
list_available_models()
List all models available in Ollama.
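The model list presumably also comes from `GET /api/tags`, whose JSON body carries a `models` array with a `name` field per entry. A hedged sketch of extracting the names from such a response (the helper is illustrative, not the library's code):

```python
import json
from typing import List


def parse_model_names(tags_json: str) -> List[str]:
    """Extract model names from an Ollama GET /api/tags response body."""
    return [m["name"] for m in json.loads(tags_json).get("models", [])]


# Example with a canned response body:
sample = json.dumps({"models": [{"name": "llama3.2:1b"}, {"name": "qwen2.5:0.5b"}]})
names = parse_model_names(sample)
```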