CanIRunAI
 ██████╗ █████╗ ███╗   ██╗    ██╗    ██████╗ ██╗   ██╗███╗   ██╗     █████╗ ██╗██████╗
██╔════╝██╔══██╗████╗  ██║    ██║    ██╔══██╗██║   ██║████╗  ██║    ██╔══██╗██║╚════██╗
██║     ███████║██╔██╗ ██║    ██║    ██████╔╝██║   ██║██╔██╗ ██║    ███████║██║  ▄███╔╝
██║     ██╔══██║██║╚██╗██║    ██║    ██╔══██╗██║   ██║██║╚██╗██║    ██╔══██║██║  ▀▀══╝
╚██████╗██║  ██║██║ ╚████║    ██║    ██║  ██║╚██████╔╝██║ ╚████║    ██║  ██║██║  ██╗
 ╚═════╝╚═╝  ╚═╝╚═╝  ╚═══╝    ╚═╝    ╚═╝  ╚═╝ ╚═════╝ ╚═╝  ╚═══╝    ╚═╝  ╚═╝╚═╝  ╚═╝

Find out which AI models your PC can run

We detect your hardware automatically and show the best models for you.

Score: 82 of 100 · High-End Tier

Models by tier:
  Tier S: 79
  Tier A: 12
  Tier B: 2
  Tier C: 3
  Tier D: 13

Compatible: 93 · Limited: 16

✓ Runs models in the 8–14B range well

⚠ May struggle with models above 14B
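The 8–14B ceiling follows from a simple VRAM budget: 4-bit quantized weights take roughly 0.6 GB per billion parameters, plus a fixed overhead for KV cache and runtime buffers. A minimal sketch of that check; the constants and the `fitsInVram` helper are illustrative, not the site's actual formula:

```javascript
// Rough VRAM estimate for a 4-bit quantized model.
// ~0.6 GB per billion parameters and ~1.5 GB of KV-cache/runtime
// overhead are illustrative rules of thumb, not CanIRunAI's real math.
function fitsInVram(paramsB, vramGB) {
  const weightsGB = paramsB * 0.6;
  const overheadGB = 1.5;
  return weightsGB + overheadGB <= vramGB;
}

// On a 12 GB card like the RTX 3060:
console.log(fitsInVram(14, 12)); // 14B needs ~9.9 GB -> true
console.log(fitsInVram(32, 12)); // 32B needs ~20.7 GB -> false
```

Under these assumptions a 14B model just fits in 12 GB, while anything much larger spills into system RAM and slows down sharply, which is what the warning above describes.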

Upgrade Path

RTX 3060 (yours): Score 82 · 93 models
RTX 4080 16GB: Score 91 · 106 models · ~R$ 6.500
RTX 4090 24GB: Score 97 · 109 models · ~R$ 12.000

Recommended Models

116 models
 #   Vendor         Model                  Category    Time  VRAM   Profile
 1.  Google         Gemma 3 12B            Chat        10s   8GB    Balanced · Best choice
 2.  Meta           Llama 3.2 Vision 11B   Multimodal  10s   8GB    Balanced
 3.  Nous Research  Nous Hermes 2          Chat         9s   7GB    Balanced
 4.  Mistral AI     Mistral Nemo 12B       Chat        10s   7GB    Balanced
 5.  Google         Gemma 2 9B             Chat         8s   6.5GB  Balanced
 6.  Upstage        Solar 10.7B            Chat         9s   6.5GB  Balanced
 7.  01.AI          Yi Coder 9B            Coding       9s   6GB    Balanced
 8.  01.AI          Yi 1.5 9B              Chat         9s   6GB    Balanced
 9.  Alibaba        Qwen 2.5 VL 7B         Multimodal   9s   6GB    Balanced
10.  Meta           Llama 3.1 8B           Chat         7s   5.5GB  Balanced
11.  Alibaba        Qwen3 8B               Chat         7s   5.5GB  Balanced
12.  DeepSeek       DeepSeek R1 8B         Reasoning    7s   5.5GB  Balanced
Showing 1–12 of 116

Learn to Run Local AI

Tutorials, demos and benchmarks from top channels

- Host ALL your AI locally (NetworkChuck)
- Ollama Course – Build AI Apps Locally (freeCodeCamp)
- DeepSeek R1 powered VS Code Extension (Fireship)
- The Ollama Course: Intro to Ollama (Technovangelist)

Ready to run?

Install Ollama and run any model with a single command. Click "ollama pull" on the cards above to copy the command.

Detect via Terminal

One command. Detects GPU, CPU and RAM, shows the top 3 models for you and opens the site with everything filled in.

$ npx can-i-run-ai
╔══════════════════════════════════════════════╗
║ CanIRunAI: which AIs can your PC run?        ║
╚══════════════════════════════════════════════╝
✓ GPU NVIDIA GeForce RTX 3060
✓ CPU AMD Ryzen 7 5800X
✓ RAM 32 GB
Score: ████████████████████ 100/100
Top 3 models for you:
────────────────────────────────────────────
🥇 Qwen3 32B (Chat)
   S EXCELLENT · 12 tok/s
   $ ollama pull qwen3:32b
🥈 Qwen 2.5 Coder 14B (Code)
   A GOOD · 26 tok/s
🥉 QwQ 32B (Reasoning)
   S EXCELLENT · 12 tok/s
────────────────────────────────────────────
+87 models, advanced filters and more:
https://canirunai.kc1t.com?gpu=rtx3060&ram=32

Zero dependencies

Uses only Node.js built-ins; npx runs it with nothing else to install.

Cross-platform

Windows, macOS and Linux. Detects hardware via native OS commands.

100% open source

The full source is on GitHub. No data is sent to any server.

Native apps — coming soon

Windows (coming soon): Windows 10 and 11, portable .exe

macOS (coming soon): Apple Silicon & Intel, native app

Linux (coming soon): Ubuntu, Fedora, Arch, AppImage

About CanIRunAI

CanIRunAI helps you find out which AI models can run on your computer. We detect your hardware automatically and calculate compatibility with dozens of popular open-source models.

Auto detection

GPU, RAM and CPU detected via WebGL and browser APIs.
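In the browser, the GPU name is typically read through WebGL's `WEBGL_debug_renderer_info` extension, then mapped to a tier by parsing the renderer string. The extension is a real WebGL API; the `gpuTier` heuristic below is an illustrative sketch, not the site's actual scoring:

```javascript
// In a browser the renderer string would come from WebGL:
//   const gl = canvas.getContext("webgl");
//   const ext = gl.getExtension("WEBGL_debug_renderer_info");
//   const renderer = gl.getParameter(ext.UNMASKED_RENDERER_WEBGL);
// The tiering below is an illustrative heuristic only.
function gpuTier(renderer) {
  const r = renderer.toLowerCase();
  if (/rtx 40|rtx 50/.test(r)) return "S";
  if (/rtx 30|rtx 20/.test(r)) return "A";
  if (/gtx 16|gtx 10/.test(r)) return "B";
  return "C";
}

console.log(gpuTier("NVIDIA GeForce RTX 3060/PCIe/SSE2")); // "A"
```

Keeping the parsing in a pure function like this makes it easy to test without a real GPU or browser.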

Estimated performance

Tokens/s and response times are scaled from measured baselines by your GPU's memory bandwidth.
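Because local LLM decoding is largely memory-bound, scaling from a baseline can be as simple as a linear ratio of memory bandwidths. A sketch using the RTX 3060's ~360 GB/s as the baseline the site mentions; the `scaleToks` helper and the linearity assumption are illustrative:

```javascript
// Decoding speed is roughly proportional to memory bandwidth,
// so tok/s measured on the baseline card can be rescaled linearly.
const BASELINE_BW_GBS = 360; // RTX 3060, ~360 GB/s

function scaleToks(baselineToks, userBwGbs) {
  return baselineToks * (userBwGbs / BASELINE_BW_GBS);
}

// A model measured at 12 tok/s on the RTX 3060, projected
// onto an RTX 4090 (~1008 GB/s):
console.log(scaleToks(12, 1008).toFixed(1)); // "33.6"
```

Real throughput also depends on compute, quantization kernels and context length, so a linear ratio is an estimate, not a guarantee.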

100% local

No data leaves your browser. Everything runs client-side.

Compatible with Ollama, LM Studio and other local runtimes. Benchmark data based on real measurements with RTX 3060 as baseline.

© 2026 CanIRunAI