Setup Wizard

Get automation ready in minutes.

Run system checks, install missing Python packages, and verify local automation tools before you connect devices.

Recommended before automation
Install pyserial, an MQTT client library, and OpenCV if they are missing.
Keep this page open while installing packages.
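The pre-flight package check can be sketched as below. This is a minimal illustration, not the wizard's shipped code; the pip-name to import-name mapping (pyserial imports as `serial`, paho-mqtt as `paho.mqtt.client`, opencv-python as `cv2`) is an assumption you may need to adjust.

```python
import importlib.util

# Assumed mapping: pip package name -> import name for the recommended packages.
RECOMMENDED = {
    "pyserial": "serial",
    "paho-mqtt": "paho.mqtt.client",
    "opencv-python": "cv2",
}

def _importable(module_name):
    """True if the module can be found without importing it."""
    try:
        return importlib.util.find_spec(module_name) is not None
    except ModuleNotFoundError:  # raised when a parent package is absent
        return False

def missing_packages(requirements=RECOMMENDED):
    """Return pip names of recommended packages that are not importable."""
    return [pip for pip, mod in requirements.items() if not _importable(mod)]

if __name__ == "__main__":
    missing = missing_packages()
    print("pip install " + " ".join(missing) if missing else "All packages present.")
```

Running it prints a ready-to-copy `pip install` line for anything missing, so you can keep this page open while installing.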
Guided setup

Follow these steps once, then reuse forever.

1
Check AI status

Confirm Ollama or Hugging Face is available and models are loaded.

2
Select models

Pick the default model for Tutor, Generator, and Automation.

3
Run system checks

Verify device drivers and automation prerequisites.

AI status

Check if the AI runtime is reachable and ready.

Available Ollama models

Detected local models on this machine.

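The status check and model scan can be sketched as follows, assuming Ollama's default local endpoint (http://localhost:11434) and its /api/tags listing route; the helper names are illustrative, not part of any shipped API.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local endpoint

def parse_model_names(tags_payload):
    """Extract model names from an /api/tags-style response body."""
    return [m.get("name", "") for m in tags_payload.get("models", [])]

def list_local_models(base_url=OLLAMA_URL, timeout=3):
    """Return detected local models, or None if the runtime is unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
            return parse_model_names(json.load(resp))
    except OSError:
        return None

if __name__ == "__main__":
    models = list_local_models()
    print("Ollama unreachable" if models is None else f"Detected models: {models}")
```

A `None` result means the runtime is down or not listening; an empty list means it is reachable but no models have been pulled yet.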

Active model

Select the Ollama model to use across the site.
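Resolving the active model against the detected list can be sketched like this; the function name and fallback rule (first detected model) are assumptions for illustration.

```python
def resolve_active_model(selected, available):
    """Keep the chosen model if it is installed; otherwise fall back
    to the first detected model, or None when nothing is available."""
    if selected in available:
        return selected
    return available[0] if available else None
```

This keeps the site-wide selection valid even after models are removed or renamed locally.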

AI provider

Switch between auto, local Ollama, or Hugging Face.
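One plausible reading of "auto" selection is: prefer local Ollama when it is reachable, otherwise fall back to Hugging Face. The sketch below illustrates that rule; the provider names and function are assumptions, not the wizard's actual logic.

```python
def choose_provider(preference, ollama_reachable):
    """Resolve 'auto' to a concrete provider; pass explicit choices through."""
    if preference != "auto":
        return preference
    return "ollama" if ollama_reachable else "huggingface"
```

Explicit choices ("ollama" or "huggingface") are always honored, so pinning a provider disables the fallback.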

Hugging Face text model

Used by the Tutor, Generator, and Automation.

Hugging Face embedding model

Used for retrieval and semantic search.
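How the embedding model powers retrieval can be sketched with plain cosine similarity: embed the query and each document, then rank by similarity. The `embed` step is a stand-in for whichever Hugging Face embedding model you select above; only the ranking math is shown here.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_documents(query_vec, doc_vecs):
    """Return document indices sorted by similarity to the query, best first."""
    scores = [cosine(query_vec, d) for d in doc_vecs]
    return sorted(range(len(doc_vecs)), key=lambda i: scores[i], reverse=True)
```

Semantic search is then just `rank_documents(embed(query), [embed(d) for d in docs])` with your chosen embedding model supplying `embed`.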

Hugging Face vision model

Used for multimodal and image tasks.

System checks

Install runtime packages

AI setup for local models, embeddings, and automation

This setup wizard is the fastest way to configure AI providers, local models, and retrieval-ready embeddings. Use it before running the AI Tutor, automation flows, or BCI analysis.

Recommended for: AI setup, local LLM validation, embedding model selection, and automation readiness.
Best practice: run system checks after changing models or providers.
Tip: verify /setup, then jump to /tutor or /automation for immediate use.

FAQ

Which models should I pick? Start with the default curated models, then swap based on speed or domain needs.
Do I need Hugging Face? No. You can run fully locally with Ollama or use auto selection.
Why do checks matter? They confirm the runtime can serve AI tasks, embeddings, and automation safely.
AI Tutor
Ask a question to get guidance.
AI Assist
Use this to augment any workflow on the page.