llamacpp-dev binary package in openKylin Nile.bedrock loong64

This package contains the header files needed to compile applications that use llama.cpp, together with the binaries needed to run llama.cpp.
The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide variety of hardware - locally and in the cloud.
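As an illustrative sketch (not taken from this package's documentation), a minimal program built against the headers shipped here might look as follows. The llama.cpp API changes between releases, so the calls used below (llama_backend_init, llama_model_default_params, llama_load_model_from_file, llama_free_model, llama_backend_free) are taken from upstream llama.cpp and may differ in the version packaged for openKylin.

// minimal_load.cpp - sketch of loading a GGUF model with the llama.cpp C API.
// Assumes the headers from llamacpp-dev and the llama.cpp shared library are
// installed; function names follow upstream llama.cpp and may vary by release.
#include <cstdio>
#include "llama.h"

int main(int argc, char ** argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s <model.gguf>\n", argv[0]);
        return 1;
    }

    // Initialize the llama.cpp backend (some releases take a NUMA flag here).
    llama_backend_init();

    // Load a model from a GGUF file using default parameters.
    llama_model_params mparams = llama_model_default_params();
    llama_model * model = llama_load_model_from_file(argv[1], mparams);
    if (model == NULL) {
        fprintf(stderr, "failed to load model: %s\n", argv[1]);
        llama_backend_free();
        return 1;
    }

    printf("model loaded: %s\n", argv[1]);

    llama_free_model(model);
    llama_backend_free();
    return 0;
}

A build command along the lines of g++ minimal_load.cpp -lllama -o minimal_load would then link against the library provided by the runtime package, although the exact library name and include path depend on how this package lays out its files.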

Publishing history

Date | Status | Target | Pocket | Component | Section | Priority | Phased updates | Version