{"id":28379,"date":"2025-09-02T10:30:00","date_gmt":"2025-09-02T08:30:00","guid":{"rendered":"https:\/\/monraspberry.com\/?p=28379"},"modified":"2025-08-18T15:27:20","modified_gmt":"2025-08-18T13:27:20","slug":"install-ollama-raspberry-pi","status":"publish","type":"post","link":"https:\/\/monraspberry.com\/en\/installer-ollama-raspberry-pi\/","title":{"rendered":"Installing Ollama on Raspberry Pi: complete tutorial and practical guide"},"content":{"rendered":"<h2 class=\"wp-block-heading\">Introduction<\/h2>\n\n\n\n<p>Artificial intelligence is now available to everyone, and not just on powerful servers. With <strong><a href=\"https:\/\/monraspberry.com\/en\/ollama-raspberry-pi\/\" target=\"_blank\" data-type=\"link\" data-id=\"https:\/\/monraspberry.com\/ollama-raspberry-pi\/\" rel=\"noreferrer noopener\">Ollama<\/a><\/strong>, you can run large language models (LLMs) entirely <strong>locally<\/strong>.<br>This tutorial shows you how to install Ollama on a <strong>Raspberry Pi 5 (or Pi 4)<\/strong> and how to start using your first AI models locally.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">1. 
Prerequisites<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Recommended hardware<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong><a href=\"https:\/\/monraspberry.com\/en\/product\/raspberry-pi-5-8gb\/\">Raspberry Pi 5<\/a><\/strong> (or Pi 4, though less powerful).<\/li>\n\n\n\n<li><strong>8 GB of RAM<\/strong> recommended (4 GB is possible with smaller models).<\/li>\n\n\n\n<li>A <strong><a href=\"https:\/\/monraspberry.com\/en\/product\/sandisk-ultra-micro-sd-card\/\">32 GB microSD card<\/a> minimum<\/strong>, or better: an <strong><a href=\"https:\/\/monraspberry.com\/en\/cat\/raspberry-pi-5\/raspberry-pi-5-accessories\/\">NVMe<\/a> or USB SSD<\/strong>.<\/li>\n\n\n\n<li>An <strong>Internet connection<\/strong> to download Ollama and the models.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Software<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Raspberry Pi OS (64-bit)<\/strong>, based on Debian Bookworm or Bullseye.<\/li>\n\n\n\n<li>Terminal access (via keyboard\/screen or SSH).<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">2. System update<\/h2>\n\n\n\n<p>Before installing, update your Pi:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code\"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nsudo apt update &amp;&amp; sudo apt upgrade -y\n<\/pre><\/div>\n\n\n<p>Then reboot:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code\"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nsudo reboot\n<\/pre><\/div>\n\n\n<h2 class=\"wp-block-heading\">3. Installing Ollama on Raspberry Pi<\/h2>\n\n\n\n<p>Ollama provides an official installation script. Download and run it:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code\"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\ncurl -fsSL https:\/\/ollama.com\/install.sh | sh\n<\/pre><\/div>\n\n\n<p>\u26a1 This installs Ollama and automatically configures its systemd service.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">4. 
Check installation<\/h2>\n\n\n\n<p>Once installed, check that Ollama is active:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code\"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nollama --version\n<\/pre><\/div>\n\n\n<p>You should see a version number displayed.<\/p>\n\n\n\n<p>You can also check that the service is running:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code\"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nsystemctl status ollama\n<\/pre><\/div>\n\n\n<h2 class=\"wp-block-heading\">5. Download and run an AI model<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Example with LLaMA 2<\/h3>\n\n\n\n<p>To run <strong>LLaMA 2<\/strong>, type:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code\"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nollama run llama2\n<\/pre><\/div>\n\n\n<p>The model will be downloaded automatically (several gigabytes for the default quantized version).<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Example with Mistral<\/h3>\n\n\n<div class=\"wp-block-syntaxhighlighter-code\"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nollama run mistral\n<\/pre><\/div>\n\n\n<h3 class=\"wp-block-heading\">Example with a lighter model (TinyLlama)<\/h3>\n\n\n<div class=\"wp-block-syntaxhighlighter-code\"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nollama run tinyllama\n<\/pre><\/div>\n\n\n<p>\ud83d\udc49 On the Raspberry Pi, prefer <strong>small, quantized models<\/strong> (Q4, Q5); otherwise performance will be very slow.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">6. 
Interact with Ollama<\/h2>\n\n\n\n<p>Once the model is running, you can chat with it directly in the terminal.<\/p>\n\n\n\n<p>Example:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code\"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\n&gt; Hello, who are you?\nI'm a language model powered by Ollama!\n<\/pre><\/div>\n\n\n<p>To exit:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code\"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\n\/bye\n<\/pre><\/div>\n\n\n<h2 class=\"wp-block-heading\">7. Using Ollama via the REST API<\/h2>\n\n\n\n<p>Ollama includes a <strong>local REST API<\/strong>, accessible on port <code>11434<\/code>.<\/p>\n\n\n\n<p>Example with <code>curl<\/code>:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code\"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\ncurl http:\/\/localhost:11434\/api\/generate -d '{\n  \"model\": \"llama2\",\n  \"prompt\": \"Explain the Raspberry Pi in 3 sentences.\"\n}'\n<\/pre><\/div>\n\n\n<p>\ud83d\udc49 Result: a JSON response (streamed line by line by default) containing the generated text.<\/p>\n\n\n\n<p>This allows you to integrate Ollama into your <strong>applications, websites or home automation projects<\/strong>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">8. Optimizations for Raspberry Pi<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Use an SSD<\/strong> rather than a microSD card \u2192 faster and more reliable.<\/li>\n\n\n\n<li><strong>Choose lightweight models<\/strong>: TinyLlama, GPT4All-J, or a quantized Vicuna.<\/li>\n\n\n\n<li><strong>Limit prompt length<\/strong>: the Pi has little memory, so avoid very long inputs.<\/li>\n\n\n\n<li><strong>Ventilation<\/strong>: if you use the Pi 5 intensively, install a fan or a <a href=\"https:\/\/monraspberry.com\/en\/product\/neo-box-for-raspberry-pi-5\/\">ventilated case<\/a>.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">9. 
Common troubleshooting<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Ollama won't start<\/h3>\n\n\n\n<p>Restart the service:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code\"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nsystemctl restart ollama\n<\/pre><\/div>\n\n\n<h3 class=\"wp-block-heading\">Insufficient memory error<\/h3>\n\n\n\n<p>Try a smaller model or reduce the context size (recent versions of Ollama support this via an environment variable). Example:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code\"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nOLLAMA_CONTEXT_LENGTH=512 ollama run tinyllama\n<\/pre><\/div>\n\n\n<h3 class=\"wp-block-heading\">Slow download<\/h3>\n\n\n\n<p>Use a wired Ethernet connection when downloading models for the first time.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">10. Concrete applications with Ollama on the Pi<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Local personal assistant<\/strong> \u2192 a command-line chatbot.<\/li>\n\n\n\n<li><strong>Home AI server<\/strong> \u2192 connected to Home Assistant to control your home in natural language.<\/li>\n\n\n\n<li><strong>Learning<\/strong> \u2192 understand how LLMs work and their limitations.<\/li>\n\n\n\n<li><strong>Development<\/strong> \u2192 create local apps (intelligent note-taking, summaries, text generators).<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>Installing <strong>Ollama on a Raspberry Pi<\/strong> is an accessible and exciting way to explore artificial intelligence locally. Of course, the Pi can't compete with GPU-equipped servers, but it does make it possible to <strong>test, learn and create concrete projects<\/strong> with minimal cost and energy consumption.<\/p>\n\n\n\n<p>\ud83d\udc49 With Ollama, your Raspberry Pi becomes a <strong>personal AI mini-server<\/strong>, entirely under your control.<\/p>","protected":false},"excerpt":{"rendered":"<p>Learn how to install and configure Ollama on Raspberry Pi to run your AI models locally. 
Complete step-by-step tutorial.<\/p>","protected":false},"author":1,"featured_media":28381,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[308],"tags":[],"class_list":["post-28379","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-tutos"],"featured_image_src":{"landsacpe":["https:\/\/monraspberry.com\/wp-content\/uploads\/2025\/08\/Installer-Ollama-sur-Raspberry-Pi-1140x445.png",1140,445,true],"list":["https:\/\/monraspberry.com\/wp-content\/uploads\/2025\/08\/Installer-Ollama-sur-Raspberry-Pi-463x348.png",463,348,true],"medium":["https:\/\/monraspberry.com\/wp-content\/uploads\/2025\/08\/Installer-Ollama-sur-Raspberry-Pi-300x169.png",300,169,true],"full":["https:\/\/monraspberry.com\/wp-content\/uploads\/2025\/08\/Installer-Ollama-sur-Raspberry-Pi.png",1920,1080,false]},"jetpack_featured_media_url":"https:\/\/monraspberry.com\/wp-content\/uploads\/2025\/08\/Installer-Ollama-sur-Raspberry-Pi.png","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/monraspberry.com\/en\/wp-json\/wp\/v2\/posts\/28379","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/monraspberry.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/monraspberry.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/monraspberry.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/monraspberry.com\/en\/wp-json\/wp\/v2\/comments?post=28379"}],"version-history":[{"count":0,"href":"https:\/\/monraspberry.com\/en\/wp-json\/wp\/v2\/posts\/28379\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/monraspberry.com\/en\/wp-json\/wp\/v2\/media\/28381"}],"wp:attachment":[{"href":"https:\/\/monraspberry.com\/en\/wp-json\/wp\/v2\/media?parent=28379"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href"
:"https:\/\/monraspberry.com\/en\/wp-json\/wp\/v2\/categories?post=28379"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/monraspberry.com\/en\/wp-json\/wp\/v2\/tags?post=28379"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}