Installation

Requirements

| Requirement | Version | Purpose |
| --- | --- | --- |
| Python | >= 3.13 | Runtime |
| uv | latest | Package management |
| Rust | stable | Building the energy monitor |
| protoc | >= 3.0 | Protocol Buffer compiler for gRPC |

Install

Option 1 -- Setup script (recommended)

git clone https://github.com/HazyResearch/intelligence-per-watt.git
cd intelligence-per-watt
bash intelligence-per-watt/scripts/setup.sh   # (1)!
source .venv/bin/activate

1. Auto-installs uv, creates a Python 3.13 venv, installs the package, and builds the energy monitor. Pass extras as arguments: bash intelligence-per-watt/scripts/setup.sh ollama react
Option 2 -- uv with the build script

git clone https://github.com/HazyResearch/intelligence-per-watt.git
cd intelligence-per-watt
uv venv && source .venv/bin/activate
uv run scripts/build_energy_monitor.py
uv pip install -e intelligence-per-watt
Option 3 -- Manual Cargo build

git clone https://github.com/HazyResearch/intelligence-per-watt.git
cd intelligence-per-watt
uv venv && source .venv/bin/activate
cd energy-monitor && cargo build --release && cd ..
uv pip install -e intelligence-per-watt
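
Whichever route you take, a quick sanity check that the Rust build produced artifacts can help before moving on. This only lists the standard Cargo release directory (the binary name is repo-specific, so nothing here assumes it):

```shell
# cargo build --release writes artifacts to target/release inside the crate
ls energy-monitor/target/release 2>/dev/null || echo "energy monitor not built yet"
```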

Extras

Install only what you need:

uv pip install -e 'intelligence-per-watt[ollama]'      # Ollama client
uv pip install -e 'intelligence-per-watt[vllm]'        # vLLM offline client
uv pip install -e 'intelligence-per-watt[react]'       # ReAct agent (Agno)
uv pip install -e 'intelligence-per-watt[openhands]'   # OpenHands agent
uv pip install -e 'intelligence-per-watt[terminus]'    # Terminus agent
uv pip install -e 'intelligence-per-watt[agents]'      # All agents
uv pip install -e 'intelligence-per-watt[tavily]'      # Tavily web search
uv pip install -e 'intelligence-per-watt[flops]'       # FLOPs estimation
uv pip install -e 'intelligence-per-watt[all]'         # Everything
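
Extras also combine in a single install using standard pip extras syntax: comma-separate names inside one bracket set. The quotes matter because shells like zsh otherwise treat the brackets as a glob pattern. A sketch of the combined form:

```shell
# Comma-separated extras in one bracket set; quoting protects the brackets from shell globbing
SPEC="intelligence-per-watt[ollama,react]"
echo "uv pip install -e '$SPEC'"
```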

Platform Setup

NVIDIA

Requires NVIDIA driver >= 525 (NVML ships with it).

Telemetry: GPU power, energy, temperature, memory, utilization, tensor core utilization (Ampere+). Optional CPU energy via RAPL.

# Enable RAPL CPU energy (optional, as root)
chmod o+r /sys/class/powercap/intel-rapl/intel-rapl:0/energy_uj
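
After the chmod, it is worth confirming the counter is actually readable (same sysfs path as above; on machines without RAPL this simply reports that it is unavailable):

```shell
# Read the RAPL energy counter once to verify permissions
RAPL=/sys/class/powercap/intel-rapl/intel-rapl:0/energy_uj
if [ -r "$RAPL" ]; then
  echo "RAPL readable: $(cat "$RAPL") uJ"
else
  echo "RAPL counter not readable at $RAPL"
fi
```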

AMD

Requires ROCm >= 5.0 with rocm-smi accessible.

Telemetry: GPU power, energy, temperature, memory, utilization. Optional CPU energy via RAPL.

Apple Silicon

Requires macOS 13+ on M1/M2/M3/M4 with sudo access.

Telemetry: GPU, CPU, and ANE power/energy via powermetrics. CPU memory usage.

Note

No GPU memory or utilization reporting (Apple Unified Memory). Requires password or passwordless sudo for powermetrics.
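
Since powermetrics runs under sudo, a quick check of whether sudo will prompt can save a confusing hang mid-run. This sketch only probes sudo's cached-credential state (`sudo -n` never prompts) and does nothing on non-macOS machines:

```shell
# Check whether sudo can run non-interactively (macOS only)
if [ "$(uname)" = "Darwin" ]; then
  sudo -n true 2>/dev/null && echo "passwordless sudo available" || echo "sudo will prompt for a password"
else
  echo "not macOS; powermetrics does not apply"
fi
```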

CPU-only

Falls back to RAPL for CPU energy, or a null collector (memory only) if RAPL is unavailable.

# Load RAPL kernel module if needed
sudo modprobe intel_rapl_common
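
To confirm the module actually exposed the powercap interface, inspect sysfs directly (nothing here is project-specific; an empty or missing directory means RAPL is unavailable and the null collector will be used):

```shell
# List RAPL domains exposed through the kernel powercap interface
if [ -d /sys/class/powercap ]; then
  echo "powercap present: $(ls /sys/class/powercap | wc -l) domain(s)"
else
  echo "no powercap sysfs (RAPL unavailable)"
fi
```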

Inference Runtime

Ollama

curl -fsSL https://ollama.ai/install.sh | sh
ollama pull llama3.2:1b
ollama serve
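
Once `ollama serve` is running, its HTTP API answers on Ollama's default port 11434, so the model-list endpoint doubles as a reachability check (harmless if the server is down):

```shell
# Probe the local Ollama server; prints the model list JSON if it is up
curl -fsS http://localhost:11434/api/tags 2>/dev/null || echo "Ollama not reachable on localhost:11434"
```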

vLLM

pip install vllm  # Requires NVIDIA GPU
vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
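
`vllm serve` exposes an OpenAI-compatible API, so its `/v1/models` endpoint works as a health check once the server is up (port 8000 as in the command above; harmless if nothing is listening):

```shell
# Probe the local vLLM server's OpenAI-compatible model list
curl -fsS http://localhost:8000/v1/models 2>/dev/null || echo "vLLM not reachable on localhost:8000"
```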

API providers

No local setup needed -- set your API key:

export OPENAI_API_KEY=sk-...
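
API-backed clients fail only at request time if the key is missing, so a fail-fast check before a run can save time (variable name as above):

```shell
# Report whether the API key is present in the environment
if [ -n "${OPENAI_API_KEY:-}" ]; then
  echo "OPENAI_API_KEY is set"
else
  echo "OPENAI_API_KEY is not set"
fi
```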

Environment Variables

Create a .env file in the project root (loaded automatically via python-dotenv):

# Required for LLM judge evaluation
OPENAI_API_KEY=sk-...

# Optional
ANTHROPIC_API_KEY=sk-ant-...   # Anthropic models
TAVILY_API_KEY=tvly-...        # Web search in agents
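
The file is plain KEY=value lines, one per line. A quick lint for that shape can catch stray quoting or indentation; this sketch runs against a throwaway file since a real .env holds secrets:

```shell
# Count well-formed KEY=value lines in a sample .env-style file
ENVFILE=$(mktemp)
cat > "$ENVFILE" <<'EOF'
OPENAI_API_KEY=sk-placeholder
TAVILY_API_KEY=tvly-placeholder
EOF
grep -c '^[A-Z_][A-Z0-9_]*=' "$ENVFILE"
rm -f "$ENVFILE"
```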

Verify Installation

ipw --help                              # CLI available
ipw list all                            # Registered components
uv run scripts/test_energy_monitor.py   # Hardware telemetry
pytest intelligence-per-watt            # Test suite