llamacpp binary package in openKylin Nile V2.0 arm64
This package contains the llama.cpp core runtime libraries.
The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide variety of hardware - locally and in the cloud.
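The runtime library shipped here (libllama) exposes the upstream llama.cpp C API. As a rough illustration of what the "core runtime libraries" provide, the sketch below loads a GGUF model and creates an inference context. It is a minimal sketch, assuming the upstream C API names (llama_backend_init, llama_load_model_from_file, llama_new_context_with_model); exact signatures and availability depend on the llama.cpp version packaged for Nile V2.0, and the model path is a placeholder.

```c
/* Minimal sketch against the llama.cpp C API shipped by this package.
 * Function names follow upstream llama.cpp; signatures may differ
 * between releases, and the model path below is a placeholder. */
#include <stdio.h>
#include <llama.h>

int main(void) {
    /* initialize the ggml/llama backends */
    llama_backend_init();

    /* load a GGUF model with default parameters */
    struct llama_model_params mparams = llama_model_default_params();
    struct llama_model *model =
        llama_load_model_from_file("/path/to/model.gguf", mparams);
    if (model == NULL) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    /* create an inference context bound to the model */
    struct llama_context_params cparams = llama_context_default_params();
    struct llama_context *ctx = llama_new_context_with_model(model, cparams);
    if (ctx == NULL) {
        fprintf(stderr, "failed to create context\n");
        llama_free_model(model);
        return 1;
    }

    /* ... tokenization, decoding, and sampling would go here ... */

    llama_free(ctx);
    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```

If the package installs the shared library and header in standard locations, a command along the lines of `gcc demo.c -lllama -o demo` would link against it; the actual header and library paths depend on how the openKylin package lays out its files.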
Publishing history
| Date | Status | Target | Component | Section | Priority | Phased updates | Version |
|---|---|---|---|---|---|---|---|