AI Power Progress

llama.cpp

Run LLMs locally with quantization, the GGUF model format, and CPU/GPU backends.

intermediate · repo
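To ground the one-line description above, here is a minimal command sketch of the quantize-then-run workflow, assuming the llama.cpp binaries (`llama-quantize`, `llama-cli`) are built and on PATH; the model filenames are hypothetical placeholders for a GGUF file you have already downloaded or converted.

```shell
# Quantize a full-precision GGUF model to 4-bit (Q4_K_M) to shrink its memory footprint.
llama-quantize model-f16.gguf model-q4_k_m.gguf Q4_K_M

# Run a prompt on the CPU backend; -n caps the number of generated tokens.
llama-cli -m model-q4_k_m.gguf -p "Explain GGUF in one sentence." -n 64

# Offload 32 layers to the GPU backend (only if built with CUDA/Metal/Vulkan support).
llama-cli -m model-q4_k_m.gguf -p "Hello" -n 32 -ngl 32
```

Lower-bit quantization levels trade model quality for memory, which is why a level like Q4_K_M is a common middle ground for local inference.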

Resource Metadata

Category: Local AI
Provider: curated
Type: repo
Level: unknown
Topic: Local AI (On-Prem / Edge)
Track: Local AI (On-Prem / Edge)
Section: n/a
Format: n/a
Status: publishable
Commercial: unknown
Featured: no
Fast start: no
Sequence: n/a
Priority: n/a
Primary source: open_source_ai_hub_catalog
Sources: learning_paths, open_source_ai_hub_catalog
ID: 26679f1ab2f3714f


Continue Learning

Keep momentum with nearby resources and structured tracks.

Learning placement: track: Local AI (On-Prem / Edge) · stage: build

Tags: intermediate, repo

Related Resources

Similar items by topic, tags, and provider (metadata-only).

vLLM (repo · build · curated)
High-throughput LLM serving and inference optimization.

Ollama (repo · foundation · curated)
Local model serving with simple model management.

Qdrant (repo · build · curated)
Vector database with filtering; good for RAG apps.

OpenBCI (repo · build · curated)
Open-source biosensing ecosystem; hardware + software.

MONAI (repo · build · curated)
Medical imaging deep learning framework.

XJTU-SY Bearing Dataset (repo · build · XJTU / SumYoung (GitHub))
Run-to-failure bearing dataset for RUL modeling, sequence learning, and prognostics.

Verilator (repo · build · Verilator)
High-performance Verilog/SystemVerilog simulator for fast testing and CI.