* fix text
* remove lint from docker publish workflow
* gemini base url docs
* feat: add multimodal support for openai-compatible providers
  - Add helper function to check OpenAI-compatible provider availability per mode
  - Update provider detection to support language, embedding, STT, and TTS modalities
  - Implement mode-specific environment variable detection (LLM, EMBEDDING, STT, TTS)
  - Maintain backward compatibility with the generic OPENAI_COMPATIBLE_BASE_URL
  - Add comprehensive unit tests for all configuration scenarios
  - Update .env.example with mode-specific environment variables
  - Update the provider support matrix in ai-models.md
  - Create a comprehensive openai-compatible.md setup guide

  This enables users to configure different OpenAI-compatible endpoints for different AI capabilities (e.g., LM Studio for language models, a dedicated server for embeddings) while maintaining full backward compatibility.
* upgrade
* chore: change docker release strategy
Files:

- README.md (contents: "Coming Soon")
- test_models_api.py
- test_source_chat.py
- test_source_chat_api.py