We provide a set of usage methods compatible with the OpenAI API. To get started, simply install the StackFlow packages:
sudo apt install lib-llm llm-sys llm-llm llm-openai-api
sudo apt install llm-model-qwen2.5-ha-0.5b-ctx-axcl
curl http://192.168.20.27:8000/v1/models \
  -H "Content-Type: application/json"
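The endpoint above returns an OpenAI-style model list. As a minimal sketch, the body can be parsed with the standard library; the sample JSON below is illustrative only, and the IDs your board reports depend on which llm-model-* packages are installed:

```python
import json

# Illustrative response body in the OpenAI "list models" shape;
# the actual list comes from the /v1/models endpoint.
raw = '{"object": "list", "data": [{"id": "qwen2.5-HA-0.5B-ctx-axcl", "object": "model"}]}'

models = json.loads(raw)

# Collect just the model identifiers
ids = [m["id"] for m in models["data"]]
print(ids)
```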
curl http://192.168.20.27:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-xxxxxxxx" \
  -d '{
    "model": "qwen2.5-HA-0.5B-ctx-axcl",
    "messages": [
      {"role": "developer", "content": "You are a helpful home assistant."},
      {"role": "user", "content": "Turn on the light!"}
    ]
  }'
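The chat endpoint replies with a body in the OpenAI chat-completion shape. A minimal sketch of extracting the assistant's text from such a response (the sample body below is illustrative, not actual board output):

```python
import json

# Illustrative chat-completion response; the real content field
# depends on the model's answer.
raw = '''
{
  "id": "chatcmpl-1",
  "object": "chat.completion",
  "model": "qwen2.5-HA-0.5B-ctx-axcl",
  "choices": [
    {"index": 0,
     "message": {"role": "assistant", "content": "Turning on the light."},
     "finish_reason": "stop"}
  ]
}
'''

response = json.loads(raw)

# The assistant's reply lives at choices[0].message.content
reply = response["choices"][0]["message"]["content"]
print(reply)
```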
from openai import OpenAI

client = OpenAI(
    api_key="sk-",
    base_url="http://192.168.20.27:8000/v1"
)

# List the models currently served by the board
print(client.models.list())
from openai import OpenAI

client = OpenAI(
    api_key="sk-",
    base_url="http://192.168.20.27:8000/v1"
)

completion = client.chat.completions.create(
    model="qwen2.5-HA-0.5B-ctx-axcl",
    messages=[
        {"role": "developer", "content": "You are a helpful home assistant."},
        {"role": "user", "content": "Turn on the light!"}
    ]
)
print(completion.choices[0].message)
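If you call the endpoint from several places, the messages list can be assembled by a small helper. This is a hypothetical convenience function, not part of the OpenAI SDK; it simply mirrors the roles used in the example above:

```python
def build_messages(user_text, system_prompt="You are a helpful home assistant."):
    """Assemble an OpenAI-style messages list using the 'developer'
    instruction role, as in the examples above (hypothetical helper)."""
    return [
        {"role": "developer", "content": system_prompt},
        {"role": "user", "content": user_text},
    ]

# Pass the result as the messages= argument of
# client.chat.completions.create(...)
print(build_messages("Turn on the light!"))
```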
Get ChatBox.
Click Settings and add a model provider.
In API Host, fill in the Raspberry Pi's IP address and the API path, then retrieve and add the installed models.
Create a new chat and select the qwen2.5-HA-0.5B-ctx-axcl model provided by the LLM8850.
Set the maximum number of context messages to 0.
Setting a System Prompt is also supported.