Query LLM

Query LLM is a simple, zero-dependency CLI tool for querying large language models (LLMs). It works seamlessly with both cloud-based LLM services (e.g., OpenAI GPT, Groq, OpenRouter) and locally hosted LLMs (e.g., llama.cpp, LocalAI, Ollama). Internally, it guides the LLM to perform step-by-step reasoning using the Chain of Thought method.

To run Query LLM, ensure that Node.js (v18 or higher) or Bun is installed.

./query-llm.js

To obtain quick responses, pipe a question directly:

echo "Top travel destinations in Indonesia?" | ./query-llm.js

For specific tasks:

echo "Translate 'thank you' into German" | ./query-llm.js

For simpler interactions with LLMs using zero-shot prompting, refer to the sister project, ask-llm.
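Under the hood, Query LLM talks to an OpenAI-compatible chat completion endpoint, configured via the environment variables described below. As a rough illustration only, and not the tool's exact internal prompt, a Chain-of-Thought style request looks something like this:

curl -s "$LLM_API_BASE_URL/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LLM_API_KEY" \
  -d '{
    "model": "'"$LLM_CHAT_MODEL"'",
    "messages": [
      {"role": "system", "content": "Think step by step before giving the final answer."},
      {"role": "user", "content": "Top travel destinations in Indonesia?"}
    ]
  }'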

Using Local LLM Servers

Supported local LLM servers include llama.cpp, Jan, Ollama, and LocalAI.

To use llama.cpp locally with its inference engine, load a quantized model such as Phi-3.5 Mini or Llama-3.1 8B, then set the environment variable LLM_API_BASE_URL accordingly:

/path/to/llama-server -m Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf
export LLM_API_BASE_URL=http://127.0.0.1:8080/v1

To use Jan with its local API server, refer to its documentation, load a model such as Phi-3 Mini, Llama-3 8B, or OpenHermes 2.5, and set the environment variables LLM_API_BASE_URL and LLM_CHAT_MODEL:

export LLM_API_BASE_URL=http://127.0.0.1:1337/v1
export LLM_CHAT_MODEL='llama3-8b-instruct'

To use Ollama locally, pull a model and configure the environment variables LLM_API_BASE_URL and LLM_CHAT_MODEL:

ollama pull phi3.5
export LLM_API_BASE_URL=http://127.0.0.1:11434/v1
export LLM_CHAT_MODEL='phi3.5'

For LocalAI, start its container and adjust the environment variable LLM_API_BASE_URL to match the published port:

docker run -ti -p 8080:8080 localai/localai tinyllama-chat
export LLM_API_BASE_URL=http://localhost:8080/v1
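Whichever local server is chosen, a quick way to confirm that the OpenAI-compatible endpoint is reachable before running Query LLM is to list its models (a sketch; most of the servers above expose this route):

curl -s "$LLM_API_BASE_URL/models"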

Using Managed LLM Services

Supported LLM services include AI21, Deep Infra, DeepSeek, Fireworks, Groq, Hyperbolic, Lepton, Mistral, Novita, Octo, OpenAI, OpenRouter, and Together.

For configuration specifics, refer to the corresponding provider below. The examples use Llama-3.1 8B (or GPT-4o Mini for OpenAI), but any LLM with at least 7B parameters should work just as well, such as Mistral 7B, Qwen-2 7B, or Gemma-2 9B.

AI21

export LLM_API_BASE_URL=https://api.ai21.com/studio/v1
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL=jamba-1.5-mini

Deep Infra

export LLM_API_BASE_URL=https://api.deepinfra.com/v1/openai
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="meta-llama/Meta-Llama-3.1-8B-Instruct"

DeepSeek

export LLM_API_BASE_URL=https://api.deepseek.com/v1
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="deepseek-chat"

Fireworks

export LLM_API_BASE_URL=https://api.fireworks.ai/inference/v1
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="accounts/fireworks/models/llama-v3p1-8b-instruct"

Groq

export LLM_API_BASE_URL=https://api.groq.com/openai/v1
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="llama-3.1-8b-instant"

Hyperbolic

export LLM_API_BASE_URL=https://api.hyperbolic.xyz/v1
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="meta-llama/Meta-Llama-3.1-8B-Instruct"

Lepton

export LLM_API_BASE_URL=https://llama3-1-8b.lepton.run/api/v1
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="llama3-1-8b"

Mistral

export LLM_API_BASE_URL=https://api.mistral.ai/v1
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="open-mistral-7b"

Novita

export LLM_API_BASE_URL=https://api.novita.ai/v3/openai
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="meta-llama/llama-3.1-8b-instruct"

Octo

export LLM_API_BASE_URL=https://text.octoai.run/v1/
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="meta-llama-3.1-8b-instruct"

OpenAI

export LLM_API_BASE_URL=https://api.openai.com/v1
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="gpt-4o-mini"

OpenRouter

export LLM_API_BASE_URL=https://openrouter.ai/api/v1
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="meta-llama/llama-3.1-8b-instruct"

Together

export LLM_API_BASE_URL=https://api.together.xyz/v1
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo"

Evaluating Questions

Query LLM can also evaluate a text file containing pairs of User and Assistant messages:

User: Which planet is the largest?
Assistant: The largest planet is /Jupiter/.

User: and the smallest?
Assistant: The smallest planet is /Mercury/.

Assuming the above content is in qa.txt, executing the following command will initiate a multi-turn conversation with the LLM, asking each question in turn and verifying the answer against the regular expression given between slashes in the corresponding Assistant line:

./query-llm.js qa.txt
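Since the expected answers are regular expressions, an entry can also accept more than one phrasing. The following illustrative entry, assuming standard regular-expression syntax, accepts the answer with or without the word "Ocean":

User: Which ocean is the largest?
Assistant: The largest ocean is /Pacific( Ocean)?/.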

For additional examples, please refer to the tests/ subdirectory.

Two environment variables can be used to modify the behavior:

  • LLM_DEBUG_FAIL_EXIT: When set, Query LLM will exit immediately upon encountering an incorrect answer, and subsequent questions in the file will not be processed (see the example after this list).

  • LLM_DEBUG_PIPELINE: When set, and if the expected regular expression does not match the answer, the internal LLM pipeline will be printed to stdout.
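For example, to stop at the first incorrect answer (assuming any non-empty value counts as set):

LLM_DEBUG_FAIL_EXIT=1 ./query-llm.js qa.txt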