upage/.env.example
LIlGG 4b8f37d1e5 Cleanup LLM modules, prompts & config updates
Remove deprecated Claude-specific LLM implementation files (chat-stream-text.ts, constants.ts, tools index) and related DEFAULT_NUM_CTX usage in Ollama provider. Tighten system prompts to require pages and key content be visible without JavaScript and avoid relying on scripts to show primary content. Update tests to cover the new page-generation system prompt behavior. Adjust configuration: change STORAGE_DIR default to ./public/uploads and remove legacy MAX_TOKENS/MAX_RESPONSE_SEGMENTS/DEFAULT_NUM_CTX entries from .env.example and docker-compose files. Propagate STORAGE_DIR change to documentation and update CLAUDE.md to reference the new agent files.
2026-04-30 18:46:52 +08:00


# Rename this file to .env once you have filled in the environment variables below!
# Whether to enable file logging
USAGE_LOG_FILE=true
# Include this environment variable to get more verbose logging when debugging locally
LOG_LEVEL=debug
# Operating environment, distinct from NODE_ENV: NODE_ENV is fixed at build time, while this variable toggles environment-specific features at runtime
# development | production | test
OPERATING_ENV=production
# Resource file storage location
STORAGE_DIR=./public/uploads
# Maximum upload size for attachments
MAX_UPLOAD_SIZE_MB=5
# Enabled model providers, currently supporting Anthropic, Cohere, Deepseek, DouBao, Ernie, Google, Groq,
# HuggingFace, Hyperbolic, Kimi, Mistral, Ollama, OpenAI, OpenRouter, Perplexity, Qwen, xAI,
# ZhiPu, Together, LMStudio, AmazonBedrock, Github
LLM_PROVIDER=
# Base URL of the current model provider. Some providers require this to be set, such as OpenAI, Ollama, and LMStudio.
# DON'T use http://localhost:11434 (IPv6 resolution issues);
# use http://127.0.0.1:11434 instead
PROVIDER_BASE_URL=
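# Example pairing (hypothetical values, assuming a local Ollama instance):
#   LLM_PROVIDER=Ollama
#   PROVIDER_BASE_URL=http://127.0.0.1:11434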
# API KEY of the current provider, used to request the model API. Some providers do not require this to be set.
# Specifically, if the model provider is AmazonBedrock, this should be a JSON string, reference:
# https://console.aws.amazon.com/iam/home
# The JSON should include the following keys:
# - region: The AWS region where Bedrock is available.
# - accessKeyId: Your AWS access key ID.
# - secretAccessKey: Your AWS secret access key.
# - sessionToken (optional): Temporary session token if using an IAM role or temporary credentials.
# Example JSON:
# {"region": "us-east-1", "accessKeyId": "yourAccessKeyId", "secretAccessKey": "yourSecretAccessKey", "sessionToken": "yourSessionToken"}
PROVIDER_API_KEY=
# Model used for page generation (should correspond to LLM_PROVIDER)
LLM_DEFAULT_MODEL=
# Model used for auxiliary page-generation tasks, such as summarization and pre-analysis (should correspond to LLM_PROVIDER)
LLM_MINOR_MODEL=
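# Example pairing (hypothetical model names; check your provider's model list),
# using a smaller, cheaper model for the auxiliary role:
#   LLM_DEFAULT_MODEL=gpt-4o
#   LLM_MINOR_MODEL=gpt-4o-mini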
# Optional vision sidecar provider. Configure this when the main model cannot read images directly.
LLM_VISION_PROVIDER=
LLM_VISION_MODEL=
VISION_PROVIDER_BASE_URL=
VISION_PROVIDER_API_KEY=
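# Example sidecar setup (hypothetical values): a vision-capable model from a
# second provider handles images while the main model handles text:
#   LLM_VISION_PROVIDER=OpenAI
#   LLM_VISION_MODEL=gpt-4o
#   VISION_PROVIDER_BASE_URL=https://api.openai.com/v1
#   VISION_PROVIDER_API_KEY=sk-...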
# Get your Serper API Key https://serper.dev/
SERPER_API_KEY=
# Get your Weather API Key https://www.weatherapi.com/my/
WEATHER_API_KEY=
# Environment variables required for Logto integration
# Logto endpoint
LOGTO_ENDPOINT=
# Logto application ID
LOGTO_APP_ID=
# Logto application secret
LOGTO_APP_SECRET=
# Application base URL; adjust to match your actual deployment environment
LOGTO_BASE_URL=http://localhost:5173
# Random 36-character string used to encrypt Logto cookies
LOGTO_COOKIE_SECRET=
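# One way to generate a suitable value (18 random bytes encode to 36 hex characters):
#   openssl rand -hex 18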
# Whether to enforce Logto authentication in the development environment; set to false to skip authentication checks in development
LOGTO_ENABLE=false