AI Power Progress

llama.cpp

Core local inference engine for running quantized LLMs on CPU and GPU; a foundation for local deployment and experimentation.

beginner docs foundation ggml-org inference-engine learning-paths llm-engineering local-ai rag repo
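Since this entry is flagged as a fast-start resource, a minimal build-and-run sketch may help; the clone URL and build steps follow the upstream README, while the model path and prompt are placeholders you would substitute with a real GGUF file:

```shell
# Clone the upstream repository (ggml-org is the current home of llama.cpp)
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp

# Configure and build with CMake (CPU-only by default;
# GPU backends are enabled via additional CMake flags)
cmake -B build
cmake --build build --config Release

# Run the CLI against a quantized GGUF model.
# "model.gguf" is a placeholder — supply your own model file.
./build/bin/llama-cli -m model.gguf -p "Hello, world" -n 64
```

This is a sketch of the common CPU workflow only; consult the repository docs for GPU backends and model conversion/quantization tooling.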

Resource Metadata

Category: Local AI / LLM Engineering / RAG
Provider: ggml-org
Type: repo
Level: Foundation
Topic: Local AI / LLM Engineering / RAG
Track: Local AI / LLM Engineering / RAG
Section: Learning path
Format: Repo / docs
Status: publishable
Commercial: candidate
Featured: no
Fast start: yes
Sequence: 3.0
Priority: Fast
Primary source: direct_links_master
Sources: direct_links_master, mega_open_hub
ID: baed814293f774c7
