Binary package “kytensor-llm” in openkylin nile.bedrock

Inference of Meta's LLaMA model (and others) in pure C/C++

 This package contains the llama.cpp core runtime libraries.
 The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide variety of hardware, both locally and in the cloud.