AI Power Progress

llama.cpp

Core local inference stack for running quantized LLMs on CPUs and GPUs, for both deployment and experimentation.

beginner docs foundation ggml-org inference-engine learning-paths llm-engineering local-ai rag repo

Resource Metadata

Category: Local AI / LLM Engineering / RAG
Provider: ggml-org
Type: repo
Level: Foundation
Topic: Local AI / LLM Engineering / RAG
Track: Local AI / LLM Engineering / RAG
Section: Learning path
Format: Repo / docs
Status: publishable
Commercial: candidate
Featured: no
Fast start: yes
Sequence: 3.0
Priority: Fast
Primary source: direct_links_master
Sources: direct_links_master, mega_open_hub
ID: baed814293f774c7

Continue Learning

Keep momentum with nearby resources and structured tracks.

Learning placement: track: Local AI / LLM Engineering / RAG · stage: Foundation

Related Resources

Similar items by topic, tags, and provider (metadata-only).

repo · GitHub

FLAN Collection

GitHub

One of the best open instruction mixtures; includes FLAN, P3, Super-Natural Instructions, and more.

docs · Zero · Ollama

Ollama

Ollama

Fastest path to running modern local models on a workstation.

docs · Foundation · Ollama

Ollama Docs

Ollama

Official documentation for running and integrating local models with a simple developer workflow.

repo · Advanced · Meta

FAISS

Meta

Library for efficient similarity search and clustering of dense vectors at large scale.

docs · Zero · Open WebUI

Open WebUI

Open WebUI

Gives you an offline-friendly interface for local models, documents, and workflows.

docs · Build · deepset

Haystack

deepset

Solid framework for retrieval pipelines, agents, evaluation, and production patterns.