intel/ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, DeepSeek, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, DeepSpeed, Axolotl, etc.
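
As a quick illustration of the HuggingFace integration mentioned above, the sketch below uses ipex-llm's documented `ipex_llm.transformers` drop-in API to load a model with 4-bit optimization and run it on an Intel GPU; the specific model id and prompt are placeholder assumptions, not taken from this page.

```python
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM  # drop-in replacement for transformers' class

model_id = "Qwen/Qwen2-1.5B-Instruct"  # assumed example model, swap in any supported checkpoint

# load_in_4bit=True applies ipex-llm's low-bit (INT4) optimization at load time
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)
model = model.to("xpu")  # move to an Intel GPU (iGPU, Arc, Flex, or Max)

tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer("What is an Intel XPU?", return_tensors="pt").to("xpu")

with torch.inference_mode():
    output = model.generate(inputs.input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```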

Language: Python · Stars: 8538 · Forks: 1387 · Watchers: 8538 · Open issues: 1489 · License: Apache License 2.0
Repository details
Owner: intel
Last pushed: 2025-10-14
Last updated: 2025-12-15