Things

Updated April 2026

Compute

Displays

Mobile

AI

The Windows workstation runs Ollama on the RTX 5090 when I want inference to stay local. The Mac mini connects over Tailscale as a client for agents and automation. Heavier work routes to MiniMax in the cloud.
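
For reference, a minimal sketch of the client side, assuming Ollama on the workstation is exposed beyond localhost (OLLAMA_HOST=0.0.0.0 on the server) and reachable at a Tailscale MagicDNS name; the hostname and model tag here are placeholders:

```python
# Minimal client sketch: the Mac mini calls Ollama on the workstation
# over Tailscale. Hostname and model tag are placeholders; 11434 is
# Ollama's default port.
import requests

OLLAMA_URL = "http://workstation:11434"  # assumed Tailscale MagicDNS name

def generate(prompt: str, model: str = "qwen2.5:14b") -> str:
    """Send one non-streaming generation request to the remote Ollama."""
    resp = requests.post(
        f"{OLLAMA_URL}/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(generate("One-line status check."))
```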

Models rotate frequently: a mix of Gemma, Qwen, and other open-weight models from 2B to 32B, depending on the task. The 5090's 32GB of VRAM handles quantized models up to ~30B at full speed on pure GPU inference.
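
As a sanity check on that claim, a back-of-the-envelope sketch, assuming roughly 4-bit quantization (Ollama's default) plus a fixed allowance for KV cache and runtime overhead; the overhead figure is an assumption, not a measurement:

```python
# Rough VRAM-fit estimate: weights at ~4.5 bits/weight plus a fixed
# overhead allowance for KV cache and activations (assumed, not measured).
def fits_in_vram(params_b: float, vram_gb: float = 32.0,
                 bits_per_weight: float = 4.5, overhead_gb: float = 4.0) -> bool:
    weights_gb = params_b * bits_per_weight / 8  # 1B params ≈ bits/8 GB
    return weights_gb + overhead_gb <= vram_gb

for size_b in (2, 9, 14, 32, 70):
    print(f"{size_b:>3}B: {'fits' if fits_in_vram(size_b) else 'needs offload'}")
```

By this estimate a 32B model lands around 18GB of weights, comfortably inside 32GB, while 70B would spill into CPU offload.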

Software

Subscriptions

Previously Used

Hardware

Self-hosted

Software