Part 2 - Installing Ollama
Your Step-by-Step Guide to Local AI
The journey to your personal AI lab begins with a surprisingly simple step: installing Ollama. Unlike many AI toolkits that require navigating complex dependency trees or compilation processes, Ollama’s creators have prioritized making the installation process as frictionless as possible. Let’s get your lab equipment set up.
System Requirements
Before we dive in, let’s make sure your machine is ready to host these powerful models:
- Recommended:
  - 16GB+ RAM
  - Multi-core CPU (the more cores, the better)
  - 10GB+ free disk space for models
  - macOS 12+ or a modern Linux distribution
- Minimum viable setup:
  - 8GB RAM (you’ll be limited to smaller models)
  - Dual-core CPU (expect slower responses)
  - 5GB free disk space
  - macOS 12+ or Linux with glibc 2.31+
Remember: The better your hardware, the faster and more capable your personal AI lab will be. Even modest improvements in RAM can significantly enhance performance.
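Not sure what your machine has? The commands below are one quick way to check RAM and free disk space before you install (the macOS and Linux variants differ slightly):

```bash
# Linux: total RAM and free disk space in human-readable units
free -h
df -h ~

# macOS: total RAM in bytes, plus free disk space
sysctl hw.memsize
df -h ~
```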
macOS Installation
Apple Silicon and Intel Macs both work wonderfully with Ollama, and the installation process is refreshingly straightforward. You have two options:
Option 1: Using Homebrew (Recommended)
If you already have Homebrew installed (and most developers do), this is the simplest method:
- Install via Homebrew: `brew install ollama`
- Start the Ollama service: `ollama serve`
- Verify the installation: open a new Terminal window and run `ollama --version`
You should see the version number displayed, confirming that Ollama is installed and available from the command line.
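If you’d rather not keep a terminal window occupied by `ollama serve`, Homebrew can also manage Ollama as a background service. This is optional, and assumes you installed via the Homebrew formula as above:

```bash
# Have Homebrew run Ollama in the background and restart it at login
brew services start ollama

# Confirm the service shows up as "started"
brew services list
```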
Option 2: Manual Installation
If you prefer not to use Homebrew:
- Download the official installer:
  - Visit ollama.ai and click the download button
  - Or use this direct link: https://ollama.ai/download/Ollama-darwin.zip
- Install the application:
  - Open the downloaded .zip file
  - Drag the Ollama app to your Applications folder
  - Launch Ollama from your Applications folder
- Verify the installation: open Terminal and run `ollama --version`
That’s it! No complex configuration files, no dependency nightmares. Ollama runs as a background service and is now ready to pull models.
Linux Installation
For Linux users, the process is almost as simple, with a convenient one-line installer:
- Run the installation script: `curl -fsSL https://ollama.ai/install.sh | sh`
- Start the Ollama service: `ollama serve`
- Verify the installation: in a new terminal window, run `ollama --version`
For most Linux distributions, this is all you need. The script detects your environment and installs the appropriate version.
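On most systemd-based distributions, the install script also registers Ollama as a system service, so it may already be running and `ollama serve` is only needed if it isn’t. Assuming the service is named `ollama` (which is how the official installer typically sets it up), these commands are handy:

```bash
# Check whether the Ollama service is already running
systemctl status ollama

# Restart it after a configuration change (or start it if it is stopped)
sudo systemctl restart ollama

# Follow the service logs while troubleshooting
journalctl -u ollama -f
```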
Distribution-Specific Notes
- Ubuntu/Debian: The installer should work out of the box on Ubuntu 22.04+
- Arch Linux: Ollama is available in the AUR as `ollama-bin` (a quick example follows this list)
- Fedora/RHEL: The installer works on Fedora 37+ and RHEL 9+
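For Arch users, here is a minimal sketch of the AUR route, assuming you use an AUR helper such as `yay` (you can also clone the package and build it with `makepkg` instead):

```bash
# Install the prebuilt Ollama package from the AUR
yay -S ollama-bin

# Then verify it the same way as with the install script
ollama --version
```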
Optional: Installing UV for Python Script Execution
For some examples later in this series, we’ll use UV, a fast Python package installer and resolver that also provides the convenient `uvx` command for running Python scripts without permanent installation. This is optional and only needed if you want to follow the document extraction examples.
To install UV on macOS, simply run `brew install uv`.
For Linux or other systems, you can follow the installation instructions at the UV documentation site.
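As a quick sanity check that UV is working, you can run a small tool from PyPI in a throwaway environment; the `pycowsay` package below is just an arbitrary example, not something the later posts depend on:

```bash
# Confirm uv and uvx are on your PATH
uv --version

# Run a tool straight from PyPI without installing it permanently
uvx pycowsay "Ollama lab, ready to go"
```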
Troubleshooting Common Installation Issues
- “Command not found” error: The Ollama binary might not be in your PATH. Try logging out and back in, or manually add it with `export PATH=$PATH:/usr/local/bin`.
- Permission issues: If you see permission errors, you might need to run `sudo chown -R $(whoami) /usr/local/bin/ollama`.
- Service won’t start: If the Ollama service fails to start, check for port conflicts with `lsof -i :11434` (Ollama uses port 11434 by default), as shown below.
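If `lsof` shows that something else already owns port 11434, one way to recover (a sketch, assuming the culprit is a leftover `ollama serve` process) is to stop the old process and start a fresh one:

```bash
# See which process is bound to Ollama's default port
lsof -i :11434

# Stop a leftover Ollama process by name (or use `kill <PID>` from the lsof output)
pkill ollama

# Start the service again
ollama serve
```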
Post-Installation Verification
Let’s make sure everything is working correctly:
- Check that the service is running with `ollama ls`, or query the API directly with `curl http://localhost:11434/api/tags`. The curl command should return a JSON object with an empty model list (`{"models":[]}`) since you haven’t pulled any models yet.
- Run a simple health check: `ollama run llama3.1:latest "Say hello"`. This will download the model (if you don’t have it already) and run a simple inference. If you see a greeting in response, your installation is working perfectly!
If you get the error `pulling manifest Error: pull model manifest: file does not exist`, double-check the model name you passed to `ollama run`: list the models you already have with `ollama ls` and use one of those names, or pull a valid tag from the Ollama model library.
Either way, if `ollama --version` returns a version number, you are good to proceed to the next step.
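If you prefer to exercise the HTTP API directly rather than the CLI, you can send a one-off, non-streaming generation request to the local server. This sketch assumes the `llama3.1:latest` model from the health check above has already been pulled:

```bash
# Ask the local Ollama server for a short completion in a single response
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1:latest",
  "prompt": "Say hello",
  "stream": false
}'
```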
Congratulations! You’ve set up the foundation of your AI lab. The hard part is over, and the exciting part begins. In our next post, we’ll explore model selection and management to find the right AI models for your needs.