Ollama offers partial compatibility with the OpenAI API, making it easier to connect existing applications to Ollama. In this post, let's walk through how to use it.
```python
from openai import OpenAI

client = OpenAI(
    base_url='http://localhost:11434/v1/',
    # required but ignored
    api_key='ollama',
)

chat_completion = client.chat.completions.create(
    messages=[
        {
            'role': 'user',
            'content': 'Say this is a test',
        }
    ],
    model='llama3',
)
```
```javascript
import OpenAI from 'openai'

const openai = new OpenAI({
  baseURL: 'http://localhost:11434/v1/',
  // required but ignored
  apiKey: 'ollama',
})

const chatCompletion = await openai.chat.completions.create({
  messages: [{ role: 'user', content: 'Say this is a test' }],
  model: 'llama3',
})
```
```shell
curl http://localhost:11434/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "llama3",
        "messages": [
            {
                "role": "system",
                "content": "You are a helpful assistant."
            },
            {
                "role": "user",
                "content": "Hello!"
            }
        ]
    }'
```
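Whichever client you use, the reply comes back in the OpenAI chat-completion shape, so existing parsing code keeps working. As a sketch, extracting the assistant's reply from a response body looks like this (the sample JSON here is illustrative, not captured from a real run):

```python
import json

# Illustrative response in the OpenAI chat-completion shape.
sample = json.loads("""
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "model": "llama3",
  "choices": [
    {"index": 0,
     "message": {"role": "assistant", "content": "This is a test."},
     "finish_reason": "stop"}
  ]
}
""")

# The reply text lives at choices[0].message.content, same as with OpenAI.
reply = sample["choices"][0]["message"]["content"]
```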
Before using a model, pull it locally first:
```shell
ollama pull llama3
```
For tools that depend on a default OpenAI model name such as gpt-3.5-turbo, use the `ollama cp` command to copy an existing model to a temporary name:
```shell
ollama cp llama3 gpt-3.5-turbo
```
The new model name can then be specified in the `model` field:
```shell
curl http://localhost:11434/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "gpt-3.5-turbo",
        "messages": [
            {
                "role": "user",
                "content": "Hello!"
            }
        ]
    }'
```
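Note that the alias changes nothing except the `model` field: the endpoint, headers, and body shape stay exactly as before. As a sketch, using only the Python standard library, the request above can be assembled like this (it is prepared but not sent here, so no server is needed):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def make_request(model, prompt):
    """Prepare (but do not send) an OpenAI-style chat request to Ollama."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# A tool hard-coded to "gpt-3.5-turbo" now reaches the local llama3 copy.
req = make_request("gpt-3.5-turbo", "Hello!")
```

Sending it would be one `urllib.request.urlopen(req)` call with the Ollama server running.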