commit 407dc9a401

CHANGELOG.md (75 lines changed)

@@ -5,6 +5,81 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [0.6.23] - 2025-08-21
### Added
- ⚡ **Asynchronous Chat Payload Processing**: Refactored the chat completion pipeline to return a response immediately for streaming requests involving web search or tool calls. This lets users stop ongoing generations promptly and prevents network timeouts during lengthy preprocessing phases, significantly improving user experience and responsiveness.
- 📁 **Asynchronous File Upload with Polling**: Implemented an asynchronous file upload process with frontend polling to resolve gateway timeouts and improve reliability when uploading large files. This ensures that even lengthy file processing, such as embedding or transcription, does not block the user interface or lead to connection timeouts, providing a smoother experience for all file operations.
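As a rough illustration of the polling pattern described above (the callback name, status strings, and intervals below are assumptions for the sketch, not Open WebUI's actual API):

```python
import time

def poll_until_done(get_status, interval=1.0, timeout=120.0):
    """Poll a status callback until processing finishes or a timeout elapses.

    get_status: callable returning a status string, e.g. "pending",
    "completed", or "failed" (illustrative values).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == "completed":
            return True
        if status == "failed":
            raise RuntimeError("file processing failed")
        time.sleep(interval)  # back off between polls instead of holding a connection open
    raise TimeoutError("file processing did not finish in time")
```

On the real frontend, the equivalent loop would hit a status endpoint over HTTP; the timeout keeps a stuck job from being polled forever.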
- 📈 **Database Performance Indexes and Migration Script**: Introduced new database indexes on the "chat", "tag", and "function" tables to significantly enhance query performance for SQLite and PostgreSQL installations. For existing deployments, a new Alembic migration script is included to seamlessly apply these indexes, ensuring faster filtering and sorting operations across the platform.
- ✨ **Enhanced Database Performance Options**: Introduced new configurable options to significantly improve database performance, especially for SQLite. This includes "DATABASE_ENABLE_SQLITE_WAL" to enable SQLite WAL (Write-Ahead Logging) mode for concurrent operations, and "DATABASE_DEDUPLICATE_INTERVAL" which, in conjunction with a new deduplication mechanism, reduces redundant updates to "user.last_active_at", minimizing write conflicts across all database types.
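For reference, enabling WAL mode on a SQLite database is a single pragma; a minimal sketch of what a flag like "DATABASE_ENABLE_SQLITE_WAL" would trigger at startup:

```python
import os
import sqlite3
import tempfile

# Open a file-backed SQLite database (WAL is not available for :memory:)
db_path = os.path.join(tempfile.mkdtemp(), "webui.db")
conn = sqlite3.connect(db_path)

# PRAGMA journal_mode returns the mode actually in effect
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # wal — readers no longer block the writer
conn.close()
```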
- 💾 **Save Temporary Chats Button**: Introduced a new 'Save Chat' button for conversations initiated in temporary mode. This allows users to permanently save valuable temporary conversations to their chat history, providing greater flexibility and ensuring important discussions are not lost.
- 📂 **Chat Movement Options in Menu**: Added the ability to move chats directly to folders from the chat menu. This enhances chat organization and allows users to manage their conversations more efficiently by relocating them between folders with ease.
- 💬 **Language-Aware Follow-Up Suggestions**: Enhanced the AI's follow-up question generation to dynamically adapt to the primary language of the current chat. Follow-up prompts will now be suggested in the same language the user and AI are conversing in, ensuring more natural and contextually relevant interactions.
- 👤 **Expanded User Profile Details**: Introduced new user profile fields including username, bio, gender, and date of birth, allowing for more comprehensive user customization and information management. This enhancement includes corresponding updates to the database schema, API, and user interface for seamless integration.
- 👥 **Direct Navigation to User Groups from User Edit**: Enhanced the user edit modal to include a direct link to the associated user group. This allows administrators to quickly navigate from a user's profile to their group settings, streamlining user and group management workflows.
- 🔧 **Enhanced External Tool Server Compatibility**: Improved handling of responses from external tool servers, allowing both the backend and frontend to process plain text content in addition to JSON, ensuring greater flexibility and integration with diverse tool outputs.
- 🗣️ **Enhanced Audio Transcription Language Fallback and Deepgram Support**: Implemented a robust language fallback mechanism for both OpenAI and Deepgram Speech-to-Text (STT) API calls. If a specified language parameter is not supported by the model or provider, the system will now intelligently retry the transcription without the language parameter or with a default, ensuring greater reliability and preventing failed API calls. This also specifically adds and refines support for the audio language parameter in Deepgram API integrations.
- ⚡ **Optimized Hybrid Search Performance for BM25 Weight Configuration**: Enhanced hybrid search to significantly improve performance when the BM25 weight is set to 0 or less. This optimization intelligently disables unnecessary collection retrieval and BM25 ranking calculations, leading to faster search results without impacting accuracy for configurations that do not utilize lexical search contributions.
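The short-circuit can be sketched as follows; the injected `vector_search` and `bm25_search` callables stand in for the real retrieval backends and are assumptions for illustration:

```python
def hybrid_search(query, bm25_weight, vector_search, bm25_search):
    """Skip lexical retrieval entirely when it cannot contribute to the score."""
    if bm25_weight <= 0:
        # BM25 contributes nothing: avoid collection retrieval and ranking work
        return vector_search(query)
    vector_hits = vector_search(query)
    bm25_hits = bm25_search(query)
    # ...real code would merge and re-rank the two result sets here...
    return vector_hits + bm25_hits
```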
- 🔒 **Configurable Code Interpreter Module Blocklist**: Introduced the "CODE_INTERPRETER_BLOCKED_MODULES" environment variable, allowing administrators to specify Python modules that are forbidden from being imported or executed within the code interpreter. This significantly enhances the security posture by mitigating risks associated with arbitrary code execution, such as unauthorized data access, system manipulation, or outbound connections.
- 🔐 **Enhanced OAuth Role Claim Handling**: Improved compatibility with diverse OAuth providers by allowing role claims to be supplied as single strings or integers, in addition to arrays. The system now automatically normalizes these single-value claims into arrays for consistent processing, streamlining integration with identity providers that format role data differently.
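A minimal sketch of that normalization (the function name and the exact claim shapes handled are illustrative assumptions):

```python
def normalize_role_claim(value):
    """Normalize a role claim to a list of strings.

    Providers may send ["admin"], "admin", or an integer role id;
    single values are wrapped into a one-element list.
    """
    if value is None:
        return []
    if isinstance(value, (str, int)):
        return [str(value)]
    return [str(v) for v in value]
```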
- ⚙️ **Configurable Tool Call Timeout**: Introduced the "AIOHTTP_CLIENT_TIMEOUT" environment variable, allowing administrators to specify custom timeout durations for external tool calls, which is crucial for integrations with tools that have varying or extended response times.
- 🛠️ **Improved Tool Callable Generation for Google genai SDK**: Enhanced the creation of tool callables to directly support native function calling within the Google 'genai' SDK. This refactoring ensures proper signature inference and removes extraneous parameters, enabling seamless integration for advanced AI workflows using Google's generative AI models.
- ✨ **Dynamic Loading of 'kokoro-js'**: Implemented dynamic loading for the 'kokoro-js' library, preventing failures and improving compatibility on older iOS browsers that may not support direct imports or certain modern JavaScript APIs like 'DecompressionStream'.
- 🖥️ **Improved Command List Visibility on Small Screens**: Resolved an issue where the top items in command lists (e.g., Knowledge Base, Models, Prompts) were hidden or overlapped by the header on smaller screen sizes or specific browser zoom levels. The command option lists now dynamically adjust their height, ensuring all items are fully visible and accessible with proper scrolling.
- 📦 **Improved Docker Image Compatibility for Arbitrary UIDs**: Fixed issues preventing the Open WebUI container from running in environments with arbitrary User IDs (UIDs), such as OpenShift's restricted Security Context Constraints (SCC). The Dockerfile has been updated to correctly set file system permissions for "/app" and "/root" directories, ensuring they are writable by processes running with a supplemental GID 0, thus resolving permission errors for Python libraries and application caches.
- ♿ **Accessibility Enhancements**: Significantly improved the semantic structure of chat messages by using "section", "h2", "ul", and "li" HTML tags, and enhanced screen reader compatibility by explicitly hiding decorative images with "aria-hidden" attributes. This refactoring provides clearer structural context and improves overall accessibility and web standards compliance for the conversation flow.
- 🌐 **Localization & Internationalization Improvements**: Significantly expanded internationalization support throughout the user interface, translating numerous user-facing strings in toast messages, placeholders, and other UI elements. This, alongside continuous refinement and expansion of translations for languages including Brazilian Portuguese, Kabyle (Taqbaylit), Czech, Finnish, Chinese (Simplified), Chinese (Traditional), and German, and general fixes for several other translation files, further enhances linguistic coverage and user experience.
### Fixed
- 🛡️ **Resolved Critical OIDC SSO Login Failure**: Fixed a critical issue where OIDC Single Sign-On (SSO) logins failed due to an error in setting the authentication token as a cookie during the redirect process. This ensures reliable and seamless authentication for users utilizing OIDC providers, restoring full login functionality that was impacted by previous security hardening.
- ⚡ **Prevented UI Blocking by Unreachable Webhooks**: Resolved a critical performance and user experience issue where synchronous webhook calls to unreachable or slow endpoints would block the entire user interface for all users. Webhook requests are now processed asynchronously using "aiohttp", ensuring that the UI remains responsive and functional even if webhook delivery encounters delays or failures.
- 🔒 **Password Change Option Hidden for Externally Authenticated Users**: Resolved an issue where the password change dialog was visible to users authenticated via external methods (e.g., LDAP, OIDC, Trusted Header). The option to change a password in user settings is now correctly hidden for these users, as their passwords are managed externally, streamlining the user interface and preventing confusion.
- 💬 **Resolved Temporary Chat and Permission Enforcement Issues**: Fixed a bug where temporary chats (identified by "chat_id = local") incorrectly triggered database checks, leading to 404 errors. This also resolves the issue where the 'USER_PERMISSIONS_CHAT_TEMPORARY_ENFORCED' setting was not functioning as intended, ensuring temporary chat mode now works correctly for user roles.
- 🔐 **Admin Model Visibility Enforcement**: Fixed an issue where private models remained visible and usable for administrators in the chat model selector even when the intended privacy setting ("ENABLE_ADMIN_WORKSPACE_CONTENT_ACCESS" - now renamed to "BYPASS_ADMIN_ACCESS_CONTROL") was disabled. This ensures consistent enforcement of model access controls and adherence to the principle of least privilege.
- 🔍 **Clarified Web Search Engine Label for DDGS**: Addressed user confusion and inaccurate labeling by renaming "duckduckgo" to "DDGS" (Dux Distributed Global Search) in the web search engine selector. This clarifies that the system utilizes DDGS, a metasearch library that aggregates results from various search providers, accurately reflecting its underlying functionality rather than implying exclusive use of DuckDuckGo's search engine.
- 🛠️ **Improved Settings UI Reactivity and Visibility**: Resolved an issue where settings tabs for 'Connections' and 'Tools' did not dynamically update their visibility based on global administrative feature flags (e.g., 'enable_direct_connections'). The UI now reactively shows or hides these sections, ensuring a consistent and clear experience when administrators control feature availability.
- 🎚️ **Restored Model and Banner Reordering Functionality**: Fixed a bug that prevented administrators from reordering models in the Admin Panel's 'Models' settings and banners in the 'Interface' settings via drag-and-drop. The sortable functionality has been restored, allowing for proper customization of display order.
- 📝 **Restored Custom Pending User Overlay Visibility**: Fixed an issue where the custom title and description configured for pending users were not visible. The application now correctly exposes these UI configuration settings to pending users, ensuring that the custom onboarding messages are displayed as intended.
- 📥 **Fixed Community Function Import Compatibility**: Resolved an issue that prevented the successful import of function files downloaded from openwebui.com due to schema differences. The system now correctly processes these files, allowing for seamless integration of community-contributed functions.
- 📦 **Fixed Stale Ollama Version in Docker Images**: Resolved an issue where the Ollama installation within Docker images could become stale due to caching during the build process. The Dockerfile now includes a mechanism to invalidate the build cache for the Ollama installation step, ensuring that the latest version of Ollama is always installed.
- 🗄️ **Improved Milvus Query Handling for Large Datasets**: Fixed a "MilvusException" that occurred when attempting to query more than 16384 entries from a Milvus collection. The query logic has been refactored to use "query_iterator()", enabling efficient fetching of larger result sets in batches and resolving the previous limitation on the number of entries that could be retrieved.
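In pymilvus, `query_iterator()` returns an iterator whose `next()` yields result batches and an empty list once exhausted; draining it can be sketched generically (the helper below is illustrative and works with any object following that protocol):

```python
def drain_query_iterator(it):
    """Collect all results from a pymilvus-style query iterator.

    next() returns one batch (a list) per call and an empty list once the
    collection is exhausted, sidestepping the 16384-entry limit of a
    single query() call.
    """
    results = []
    try:
        while True:
            batch = it.next()
            if not batch:
                break
            results.extend(batch)
    finally:
        it.close()  # release server-side cursor resources
    return results
```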
- 🐛 **Restored Message Toolbar Icons for Empty Messages with Files**: Fixed an issue where the edit, copy, and delete icons were not displayed on user messages that contained an attached file but no text content. This ensures full interaction capabilities for all message types, allowing users to manage their messages consistently.
- 💬 **Resolved Streaming Interruption for Kimi-Dev Models**: Fixed an issue where streaming responses from Kimi-Dev models would halt prematurely upon encountering specific 'thinking' tokens (◁think▷, ◁/think▷). The system now correctly processes these tokens, ensuring uninterrupted streaming and proper handling of hidden or collapsible thinking sections.
- 🔍 **Enhanced Knowledge Base Search Functionality**: Improved the search capability within the 'Knowledge' section of the Workspace. Previously, searching for knowledge bases required exact term matches or starting with the first letter. Now, the search algorithm has been refined to allow broader, less exact matches, making it easier and more intuitive to find relevant knowledge bases.
- 📝 **Resolved Chinese Input 'Enter' Key Issue (macOS & iOS Safari)**: Fixed a bug where pressing the 'Enter' key during text composition with Input Method Editors (IMEs) on macOS and iOS Safari browsers would prematurely send the message. The system now robustly handles the composition state by addressing a 'compositionend' event bug specific to Safari, ensuring a smooth and expected typing experience for users of various languages, including Chinese and Korean.
- 🔐 **Resolved OAUTH_GROUPS_CLAIM Configuration Issue**: Fixed a bug where the "OAUTH_GROUPS_CLAIM" environment variable was not correctly parsed due to a typo in the configuration file. This ensures that OAuth group management features, including automatic group creation, now correctly utilize the specified claim from the identity provider, allowing for seamless integration with external user directories like Keycloak.
- 🗄️ **Resolved Azure PostgreSQL pgvector Extension Permissions**: Fixed an issue preventing the creation of "pgvector" and "pgcrypto" extensions on Azure PostgreSQL Flexible Servers due to permission limitations (e.g., 'Only members of "azure_pg_admin" are allowed to use "CREATE EXTENSION"'). The extension creation process now includes a conditional check, ensuring seamless deployment and compatibility with Azure PostgreSQL environments even with restricted database user permissions.
- 🛠️ **Improved Backend Path Resolution and Alembic Stability**: Fixed issues causing Alembic database migrations to fail due to incorrect path resolution within the application. By implementing canonical path resolution for core directories and refining Alembic configuration, the robustness and correctness of internal pathing have been significantly enhanced, ensuring reliable database operations.
- 📊 **Resolved Arena Model Identification in Feedback History**: Fixed an issue where the model used for feedback in arena settings was incorrectly reported as 'arena-model' in the evaluation history. The system now correctly logs and displays the actual model ID that received the feedback, restoring clarity and enabling proper analysis of model performance in arena environments.
- 🎨 **Resolved Icon Overlap in 'Her' Theme**: Fixed a visual glitch in the 'Her' theme where icons would overlap on the loading screen and certain icons appeared incongruous. The display has been corrected to ensure proper visual presentation and theme consistency.
- 🛠️ **Resolved Model Sorting TypeError with Null Names**: Fixed a "TypeError" that occurred in the "/api/models" endpoint when sorting models with null or missing names. The model sorting logic has been improved to gracefully handle such edge cases by ensuring that model IDs and names are treated as empty strings if their values are null or undefined, preventing comparison errors and improving API stability.
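A minimal sketch of the null-safe sort key (the dict field names are assumptions following a typical model record shape):

```python
def model_sort_key(model):
    """Sort models by name then id, treating null/missing values as ''.

    Comparing None with str raises TypeError, so absent names and ids
    fall back to empty strings before comparison.
    """
    return ((model.get("name") or "").lower(), (model.get("id") or "").lower())

models = [{"id": "b", "name": None}, {"id": "a", "name": "Zeta"}, {"id": "c"}]
models.sort(key=model_sort_key)  # no TypeError despite the null/missing names
```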
- 💬 **Resolved Silently Dropped Streaming Response Chunks**: Fixed an issue where the final partial chunks of streaming chat responses could be silently dropped, leading to incomplete message delivery. The system now reliably flushes any pending delta data upon stream termination, early breaks (e.g., code interpreter tags), or connection closure, ensuring complete and accurate response delivery.
- 📱 **Disabled Overscroll for iOS Frontend**: Fixed an issue where overscrolling was enabled on iOS devices, causing unexpected scrolling behavior over fixed or sticky elements within the PWA. Overscroll has now been disabled, providing a more native application-like experience for iOS users.
- 📝 **Resolved Code Block Input Issue with Shift+Enter**: Fixed a bug where typing three backticks followed by a language and then pressing Shift+Enter would cause the code block prefix to disappear, preventing proper code formatting. The system now correctly preserves the code block syntax, ensuring consistent behavior for multi-line code input.
- 🛠️ **Improved OpenAI Model List Handling for Null Names**: Fixed an edge case where some OpenAI-compatible API providers might return models with a null value for their 'name' field. This could lead to issues like broken model list sorting. The system now gracefully handles these instances by removing the null 'name' key, ensuring stable model retrieval and display.
- 🔍 **Resolved DDGS Concurrent Request Configuration**: Fixed an issue where the configured number of concurrent requests was not being honored for the DDGS (Dux Distributed Global Search) metasearch engine. The system now correctly applies the specified concurrency setting, improving efficiency for web searches.
- 🛠️ **Improved Tool List Synchronization in Multi-Replica Deployments**: Resolved an issue where tool updates were not consistently reflected across all instances in multi-replica environments, leading to stale tool lists for users on other replicas. The tool list in the message input menu is now automatically refreshed each time it is accessed, ensuring all users always see the most current set of available tools.
- 🛠️ **Resolved Duplicate Tool Name Collision**: Fixed an issue where tools with identical names from different external servers were silently removed, preventing their simultaneous use. The system now correctly handles tool name collisions by internally prefixing tools with their server identifier, allowing multiple instances of similarly named tools from different servers to be active and usable by LLMs.
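The collision handling can be sketched as below; the `serverid_toolname` prefix format is an illustrative assumption, not necessarily the literal internal scheme:

```python
def qualify_tool_names(servers):
    """Build a registry keyed by server-qualified tool names.

    servers: mapping of server id -> list of tool names. Prefixing with
    the server id keeps same-named tools from different servers from
    silently overwriting each other.
    """
    registry = {}
    for server_id, tool_names in servers.items():
        for name in tool_names:
            registry[f"{server_id}_{name}"] = (server_id, name)
    return registry
```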
- 🖼️ **Resolved Image Generation API Size Parameter Issue**: Fixed a bug where the "/api/v1/images/generations" API endpoint did not correctly apply the 'size' parameter specified in the request payload for image generation. The system now properly honors the requested image dimensions (e.g., '1980x1080'), ensuring that generated images match the user's explicit size preference rather than defaulting to settings.
- 🗄️ **Resolved S3 Vector Upload Limitations**: Fixed an issue that prevented uploading more than 500 vectors to S3 Vector buckets due to API limitations, which resulted in a "ValidationException". S3 vector uploads are now batched in groups of 500, ensuring successful processing of larger datasets.
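The batching fix reduces to chunking the upload; `put_vectors` stands in for the S3 Vectors put call and is an assumption for the sketch:

```python
def put_vectors_in_batches(put_vectors, vectors, batch_size=500):
    """Upload vectors in chunks of at most batch_size.

    The 500-item cap mirrors the API limit described above; each slice
    is handed to the injected put_vectors callable.
    """
    for i in range(0, len(vectors), batch_size):
        put_vectors(vectors[i:i + batch_size])
```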
- 🛠️ **Fixed Tool Installation Error During Startup**: Resolved a "NoneType" error that occurred during tool installation at startup when 'tool.user' was unexpectedly null. The system now includes a check to ensure 'tool.user' exists before attempting to access its properties, preventing crashes and ensuring robust tool initialization.
- 🛠️ **Improved Azure OpenAI GPT-5 Parameter Handling**: Fixed an issue with Azure OpenAI SDK parameter handling to correctly support GPT-5 models. The 'max_tokens' parameter is now appropriately converted to 'max_completion_tokens' for GPT-5 models, ensuring consistent behavior and proper function execution similar to existing o-series models.
- 🐛 **Resolved Exception with Missing Group Permissions**: Fixed an exception that occurred in the access control logic when group permission objects were missing or null. The system now correctly handles cases where groups may not have explicit permission definitions, ensuring that 'None' checks prevent errors and maintain application stability when processing user permissions.
- 🛠️ **Improved OpenAI API Base URL Handling**: Fixed an issue where a trailing slash in the 'OPENAI_API_BASE_URL' configuration could lead to models not being detected or the endpoint failing. The system now automatically removes trailing slashes from the configured URL, ensuring robust and consistent connections to OpenAI-compatible APIs.
- 🖼️ **Resolved S3-Compatible Storage Upload Failures**: Fixed an issue where uploads to S3-compatible storage providers would fail with an "XAmzContentSHA256Mismatch" error. The system now correctly handles checksum calculations, ensuring reliable file and image uploads to S3-compatible services.
- 🌐 **Corrected 'Releases' Link**: Fixed an issue where the 'Releases' button in the user menu directed to an incorrect URL, now correctly linking to the Open WebUI GitHub releases page.
- 🛠️ **Resolved Model Sorting Errors with Null or Undefined Names**: Fixed multiple "TypeError" instances that occurred when attempting to sort model lists where model names were null or undefined. The sorting logic across various UI components (including Ollama model selection, leaderboard, and admin model settings) has been made more robust by gracefully handling absent model names, preventing crashes and ensuring consistent alphabetical sorting based on available name or ID.
- 🎨 **Resolved Banner Dismissal Issue with Iteration IDs**: Fixed a bug where dismissing banners could lead to unintended multiple banner dismissals or other incorrect behavior, especially when banners lacked unique iteration IDs. Unique IDs are now assigned during banner iteration, ensuring proper individual dismissal and consistent display behavior.
### Changed
- 🛂 **Environment Variable for Admin Access Control**: The environment variable "ENABLE_ADMIN_WORKSPACE_CONTENT_ACCESS" has been renamed to "BYPASS_ADMIN_ACCESS_CONTROL". This new name more accurately reflects its function as a control to allow administrators to bypass model access restrictions. Users are encouraged to update their configurations to use the new variable name; existing configurations using the old name will still be honored for backward compatibility.
- 🗂️ **Core Directory Path Resolution Updated**: The internal mechanism for resolving core application directory paths ("OPEN_WEBUI_DIR", "BACKEND_DIR", "BASE_DIR") has been updated to use canonical resolution via "Path().resolve()". This change improves path reliability but may require adjustments for any external scripts or configurations that previously relied on specific non-canonical path interpretations.
- 🗃️ **Database Performance Options**: New database performance options, "DATABASE_ENABLE_SQLITE_WAL" and "DATABASE_DEDUPLICATE_INTERVAL", are now available. If "DATABASE_ENABLE_SQLITE_WAL" is enabled, SQLite will operate in WAL mode, which may alter SQLite's file locking behavior. If "DATABASE_DEDUPLICATE_INTERVAL" is set to a non-zero value, the "user.last_active_at" timestamp will be updated less frequently, leading to slightly less real-time accuracy for this specific field but significantly reducing database write conflicts and improving overall performance. Both options are disabled by default.
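The deduplication idea behind "DATABASE_DEDUPLICATE_INTERVAL" can be sketched as a per-user write throttle (the class and method names are illustrative, not the actual implementation):

```python
import time

class LastActiveDeduplicator:
    """Skip redundant last_active_at writes within a configured interval."""

    def __init__(self, interval):
        self.interval = interval       # seconds between allowed writes per user
        self._last_write = {}

    def should_write(self, user_id, now=None):
        now = time.time() if now is None else now
        if now - self._last_write.get(user_id, float("-inf")) >= self.interval:
            self._last_write[user_id] = now
            return True
        return False  # within the interval: drop the redundant update
```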
- 🌐 **Renamed Web Search Concurrency Setting**: The environment variable "WEB_SEARCH_CONCURRENT_REQUESTS" has been renamed to "WEB_LOADER_CONCURRENT_REQUESTS". This change clarifies its scope, explicitly applying to the concurrency of the web loader component (which fetches content from search results) rather than the initial search engine query. Users relying on the old environment variable name for configuring web search concurrency must update their configurations to use "WEB_LOADER_CONCURRENT_REQUESTS".
## [0.6.22] - 2025-08-11

### Added

Dockerfile (47 lines changed)

@@ -108,29 +108,13 @@ RUN echo -n 00000000-0000-0000-0000-000000000000 > $HOME/.cache/chroma/telemetry
 # Make sure the user has access to the app and root directory
 RUN chown -R $UID:$GID /app $HOME

-RUN if [ "$USE_OLLAMA" = "true" ]; then \
-    apt-get update && \
-    # Install pandoc and netcat
-    apt-get install -y --no-install-recommends git build-essential pandoc netcat-openbsd curl && \
-    apt-get install -y --no-install-recommends gcc python3-dev && \
-    # for RAG OCR
-    apt-get install -y --no-install-recommends ffmpeg libsm6 libxext6 && \
-    # install helper tools
-    apt-get install -y --no-install-recommends curl jq && \
-    # install ollama
-    curl -fsSL https://ollama.com/install.sh | sh && \
-    # cleanup
-    rm -rf /var/lib/apt/lists/*; \
-    else \
-    apt-get update && \
-    # Install pandoc, netcat and gcc
-    apt-get install -y --no-install-recommends git build-essential pandoc gcc netcat-openbsd curl jq && \
-    apt-get install -y --no-install-recommends gcc python3-dev && \
-    # for RAG OCR
-    apt-get install -y --no-install-recommends ffmpeg libsm6 libxext6 && \
-    # cleanup
-    rm -rf /var/lib/apt/lists/*; \
-    fi
+# Install common system dependencies
+RUN apt-get update && \
+    apt-get install -y --no-install-recommends \
+    git build-essential pandoc gcc netcat-openbsd curl jq \
+    python3-dev \
+    ffmpeg libsm6 libxext6 \
+    && rm -rf /var/lib/apt/lists/*

 # install python dependencies
 COPY --chown=$UID:$GID ./backend/requirements.txt ./requirements.txt
@@ -152,7 +136,13 @@ RUN pip3 install --no-cache-dir uv && \
     fi; \
     chown -R $UID:$GID /app/backend/data/

+# Install Ollama if requested
+RUN if [ "$USE_OLLAMA" = "true" ]; then \
+    date +%s > /tmp/ollama_build_hash && \
+    echo "Cache broken at timestamp: `cat /tmp/ollama_build_hash`" && \
+    curl -fsSL https://ollama.com/install.sh | sh && \
+    rm -rf /var/lib/apt/lists/*; \
+    fi

 # copy embedding weight from build
 # RUN mkdir -p /root/.cache/chroma/onnx_models/all-MiniLM-L6-v2
@@ -170,6 +160,15 @@ EXPOSE 8080

 HEALTHCHECK CMD curl --silent --fail http://localhost:${PORT:-8080}/health | jq -ne 'input.status == true' || exit 1

+# Minimal, atomic permission hardening for OpenShift (arbitrary UID):
+# - Group 0 owns /app and /root
+# - Directories are group-writable and have SGID so new files inherit GID 0
+RUN set -eux; \
+    chgrp -R 0 /app /root || true; \
+    chmod -R g+rwX /app /root || true; \
+    find /app -type d -exec chmod g+s {} + || true; \
+    find /root -type d -exec chmod g+s {} + || true

 USER $UID:$GID

 ARG BUILD_HASH
@@ -10,7 +10,7 @@ script_location = migrations

 # sys.path path, will be prepended to sys.path if present.
 # defaults to the current working directory.
-prepend_sys_path = .
+prepend_sys_path = ..

 # timezone to use when rendering the date within the migration file
 # as well as the filename.
@@ -510,7 +510,7 @@ OAUTH_EMAIL_CLAIM = PersistentConfig(
 OAUTH_GROUPS_CLAIM = PersistentConfig(
     "OAUTH_GROUPS_CLAIM",
     "oauth.oidc.group_claim",
-    os.environ.get("OAUTH_GROUP_CLAIM", "groups"),
+    os.environ.get("OAUTH_GROUPS_CLAIM", os.environ.get("OAUTH_GROUP_CLAIM", "groups")),
 )

 ENABLE_OAUTH_ROLE_MANAGEMENT = PersistentConfig(
@@ -953,6 +953,9 @@ GEMINI_API_BASE_URL = os.environ.get("GEMINI_API_BASE_URL", "")

 if OPENAI_API_BASE_URL == "":
     OPENAI_API_BASE_URL = "https://api.openai.com/v1"
+else:
+    if OPENAI_API_BASE_URL.endswith("/"):
+        OPENAI_API_BASE_URL = OPENAI_API_BASE_URL[:-1]

 OPENAI_API_KEYS = os.environ.get("OPENAI_API_KEYS", "")
 OPENAI_API_KEYS = OPENAI_API_KEYS if OPENAI_API_KEYS != "" else OPENAI_API_KEY
@@ -1355,6 +1358,14 @@ ENABLE_ADMIN_WORKSPACE_CONTENT_ACCESS = (
     os.environ.get("ENABLE_ADMIN_WORKSPACE_CONTENT_ACCESS", "True").lower() == "true"
 )

+BYPASS_ADMIN_ACCESS_CONTROL = (
+    os.environ.get(
+        "BYPASS_ADMIN_ACCESS_CONTROL",
+        os.environ.get("ENABLE_ADMIN_WORKSPACE_CONTENT_ACCESS", "True"),
+    ).lower()
+    == "true"
+)

 ENABLE_ADMIN_CHAT_ACCESS = (
     os.environ.get("ENABLE_ADMIN_CHAT_ACCESS", "True").lower() == "true"
 )
@ -1565,7 +1576,7 @@ FOLLOW_UP_GENERATION_PROMPT_TEMPLATE = PersistentConfig(
|
||||||
)
|
)
|
||||||
|
|
||||||
DEFAULT_FOLLOW_UP_GENERATION_PROMPT_TEMPLATE = """### Task:
|
DEFAULT_FOLLOW_UP_GENERATION_PROMPT_TEMPLATE = """### Task:
|
||||||
Suggest 3-5 relevant follow-up questions or prompts that the user might naturally ask next in this conversation as a **user**, based on the chat history, to help continue or deepen the discussion.
|
Suggest 3-5 relevant follow-up questions or prompts in the chat's primary language that the user might naturally ask next in this conversation as a **user**, based on the chat history, to help continue or deepen the discussion.
|
||||||
### Guidelines:
|
### Guidelines:
|
||||||
- Write all follow-up questions from the user’s point of view, directed to the assistant.
|
- Write all follow-up questions from the user’s point of view, directed to the assistant.
|
||||||
- Make questions concise, clear, and directly related to the discussed topic(s).
|
- Make questions concise, clear, and directly related to the discussed topic(s).
|
||||||
|
@ -1857,6 +1868,11 @@ CODE_INTERPRETER_JUPYTER_TIMEOUT = PersistentConfig(
|
||||||
),
|
),
|
||||||
)
|
)
|
||||||
|
|
||||||
|
CODE_INTERPRETER_BLOCKED_MODULES = [
|
||||||
|
library.strip()
|
||||||
|
for library in os.environ.get("CODE_INTERPRETER_BLOCKED_MODULES", "").split(",")
|
||||||
|
if library.strip()
|
||||||
|
]
|
||||||
|
|
||||||
DEFAULT_CODE_INTERPRETER_PROMPT = """
|
DEFAULT_CODE_INTERPRETER_PROMPT = """
|
||||||
#### Tools Available
|
#### Tools Available
|
||||||
|
@ -2611,6 +2627,14 @@ WEB_LOADER_ENGINE = PersistentConfig(
|
||||||
os.environ.get("WEB_LOADER_ENGINE", ""),
|
os.environ.get("WEB_LOADER_ENGINE", ""),
|
||||||
)
|
)
|
||||||
|
|
||||||
|
|
||||||
|
WEB_LOADER_CONCURRENT_REQUESTS = PersistentConfig(
|
||||||
|
"WEB_LOADER_CONCURRENT_REQUESTS",
|
||||||
|
"rag.web.loader.concurrent_requests",
|
||||||
|
int(os.getenv("WEB_LOADER_CONCURRENT_REQUESTS", "10")),
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
ENABLE_WEB_LOADER_SSL_VERIFICATION = PersistentConfig(
|
ENABLE_WEB_LOADER_SSL_VERIFICATION = PersistentConfig(
|
||||||
"ENABLE_WEB_LOADER_SSL_VERIFICATION",
|
"ENABLE_WEB_LOADER_SSL_VERIFICATION",
|
||||||
"rag.web.loader.ssl_verification",
|
"rag.web.loader.ssl_verification",
|
||||||
|
|
|
@@ -17,14 +17,17 @@ from open_webui.constants import ERROR_MESSAGES
 # Load .env file
 ####################################

-OPEN_WEBUI_DIR = Path(__file__).parent  # the path containing this file
-print(OPEN_WEBUI_DIR)
+# Use .resolve() to get the canonical path, removing any '..' or '.' components
+ENV_FILE_PATH = Path(__file__).resolve()

-BACKEND_DIR = OPEN_WEBUI_DIR.parent  # the path containing this file
-BASE_DIR = BACKEND_DIR.parent  # the path containing the backend/
+# OPEN_WEBUI_DIR should be the directory where env.py resides (open_webui/)
+OPEN_WEBUI_DIR = ENV_FILE_PATH.parent

-print(BACKEND_DIR)
-print(BASE_DIR)
+# BACKEND_DIR is the parent of OPEN_WEBUI_DIR (backend/)
+BACKEND_DIR = OPEN_WEBUI_DIR.parent
+
+# BASE_DIR is the parent of BACKEND_DIR (open-webui-dev/)
+BASE_DIR = BACKEND_DIR.parent

 try:
     from dotenv import find_dotenv, load_dotenv
@@ -336,6 +339,21 @@ else:
     except Exception:
         DATABASE_POOL_RECYCLE = 3600

+DATABASE_ENABLE_SQLITE_WAL = (
+    os.environ.get("DATABASE_ENABLE_SQLITE_WAL", "False").lower() == "true"
+)
+
+DATABASE_USER_ACTIVE_STATUS_UPDATE_INTERVAL = os.environ.get(
+    "DATABASE_USER_ACTIVE_STATUS_UPDATE_INTERVAL", None
+)
+if DATABASE_USER_ACTIVE_STATUS_UPDATE_INTERVAL is not None:
+    try:
+        DATABASE_USER_ACTIVE_STATUS_UPDATE_INTERVAL = float(
+            DATABASE_USER_ACTIVE_STATUS_UPDATE_INTERVAL
+        )
+    except Exception:
+        DATABASE_USER_ACTIVE_STATUS_UPDATE_INTERVAL = 0.0
+
 RESET_CONFIG_ON_START = (
     os.environ.get("RESET_CONFIG_ON_START", "False").lower() == "true"
 )
@@ -677,6 +695,7 @@ AUDIT_EXCLUDED_PATHS = [path.lstrip("/") for path in AUDIT_EXCLUDED_PATHS]
 ####################################

 ENABLE_OTEL = os.environ.get("ENABLE_OTEL", "False").lower() == "true"
+ENABLE_OTEL_TRACES = os.environ.get("ENABLE_OTEL_TRACES", "False").lower() == "true"
 ENABLE_OTEL_METRICS = os.environ.get("ENABLE_OTEL_METRICS", "False").lower() == "true"
 ENABLE_OTEL_LOGS = os.environ.get("ENABLE_OTEL_LOGS", "False").lower() == "true"

@@ -47,7 +47,7 @@ from open_webui.utils.misc import (
 )
 from open_webui.utils.payload import (
     apply_model_params_to_body_openai,
-    apply_model_system_prompt_to_body,
+    apply_system_prompt_to_body,
 )


@@ -253,9 +253,7 @@ async def generate_function_chat_completion(
     if params:
         system = params.pop("system", None)
         form_data = apply_model_params_to_body_openai(params, form_data)
-        form_data = apply_model_system_prompt_to_body(
-            system, form_data, metadata, user
-        )
+        form_data = apply_system_prompt_to_body(system, form_data, metadata, user)

     pipe_id = get_pipe_id(form_data)
     function_module = get_function_module_by_id(request, pipe_id)
@@ -14,9 +14,10 @@ from open_webui.env import (
     DATABASE_POOL_RECYCLE,
     DATABASE_POOL_SIZE,
     DATABASE_POOL_TIMEOUT,
+    DATABASE_ENABLE_SQLITE_WAL,
 )
 from peewee_migrate import Router
-from sqlalchemy import Dialect, create_engine, MetaData, types
+from sqlalchemy import Dialect, create_engine, MetaData, event, types
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.orm import scoped_session, sessionmaker
 from sqlalchemy.pool import QueuePool, NullPool
@@ -114,6 +115,16 @@ elif "sqlite" in SQLALCHEMY_DATABASE_URL:
     engine = create_engine(
         SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False}
     )
+
+    def on_connect(dbapi_connection, connection_record):
+        cursor = dbapi_connection.cursor()
+        if DATABASE_ENABLE_SQLITE_WAL:
+            cursor.execute("PRAGMA journal_mode=WAL")
+        else:
+            cursor.execute("PRAGMA journal_mode=DELETE")
+        cursor.close()
+
+    event.listen(engine, "connect", on_connect)
 else:
     if isinstance(DATABASE_POOL_SIZE, int):
         if DATABASE_POOL_SIZE > 0:
@@ -57,6 +57,7 @@ from open_webui.utils.logger import start_logger
 from open_webui.socket.main import (
     app as socket_app,
     periodic_usage_pool_cleanup,
+    get_event_emitter,
     get_models_in_use,
     get_active_user_ids,
 )
@@ -185,6 +186,7 @@ from open_webui.config import (
     FIRECRAWL_API_BASE_URL,
     FIRECRAWL_API_KEY,
     WEB_LOADER_ENGINE,
+    WEB_LOADER_CONCURRENT_REQUESTS,
     WHISPER_MODEL,
     WHISPER_VAD_FILTER,
     WHISPER_LANGUAGE,
@@ -327,6 +329,7 @@ from open_webui.config import (
     ENABLE_MESSAGE_RATING,
     ENABLE_USER_WEBHOOKS,
     ENABLE_EVALUATION_ARENA_MODELS,
+    BYPASS_ADMIN_ACCESS_CONTROL,
     USER_PERMISSIONS,
     DEFAULT_USER_ROLE,
     PENDING_USER_OVERLAY_CONTENT,
@@ -375,6 +378,7 @@ from open_webui.config import (
     RESPONSE_WATERMARK,
     # Admin
     ENABLE_ADMIN_CHAT_ACCESS,
+    BYPASS_ADMIN_ACCESS_CONTROL,
     ENABLE_ADMIN_EXPORT,
     # Tasks
     TASK_MODEL,
@@ -463,6 +467,7 @@ from open_webui.utils.redis import get_redis_connection
 from open_webui.tasks import (
     redis_task_command_listener,
     list_task_ids_by_item_id,
+    create_task,
     stop_task,
     list_tasks,
 )  # Import from tasks.py
@@ -853,7 +858,10 @@ app.state.config.WEB_SEARCH_ENGINE = WEB_SEARCH_ENGINE
 app.state.config.WEB_SEARCH_DOMAIN_FILTER_LIST = WEB_SEARCH_DOMAIN_FILTER_LIST
 app.state.config.WEB_SEARCH_RESULT_COUNT = WEB_SEARCH_RESULT_COUNT
 app.state.config.WEB_SEARCH_CONCURRENT_REQUESTS = WEB_SEARCH_CONCURRENT_REQUESTS
+
 app.state.config.WEB_LOADER_ENGINE = WEB_LOADER_ENGINE
+app.state.config.WEB_LOADER_CONCURRENT_REQUESTS = WEB_LOADER_CONCURRENT_REQUESTS
+
 app.state.config.WEB_SEARCH_TRUST_ENV = WEB_SEARCH_TRUST_ENV
 app.state.config.BYPASS_WEB_SEARCH_EMBEDDING_AND_RETRIEVAL = (
     BYPASS_WEB_SEARCH_EMBEDDING_AND_RETRIEVAL
@@ -916,7 +924,10 @@ try:
         app.state.config.RAG_EMBEDDING_MODEL,
         RAG_EMBEDDING_MODEL_AUTO_UPDATE,
     )
+    if (
+        app.state.config.ENABLE_RAG_HYBRID_SEARCH
+        and not app.state.config.BYPASS_EMBEDDING_AND_RETRIEVAL
+    ):
         app.state.rf = get_rf(
             app.state.config.RAG_RERANKING_ENGINE,
             app.state.config.RAG_RERANKING_MODEL,
@@ -924,6 +935,8 @@ try:
             app.state.config.RAG_EXTERNAL_RERANKER_API_KEY,
             RAG_RERANKING_MODEL_AUTO_UPDATE,
         )
+    else:
+        app.state.rf = None
 except Exception as e:
     log.error(f"Error updating models: {e}")
     pass
@@ -1281,8 +1294,12 @@ async def get_models(

         model_info = Models.get_model_by_id(model["id"])
         if model_info:
-            if user.id == model_info.user_id or has_access(
+            if (
+                (user.role == "admin" and BYPASS_ADMIN_ACCESS_CONTROL)
+                or user.id == model_info.user_id
+                or has_access(
                     user.id, type="read", access_control=model_info.access_control
+                )
             ):
                 filtered_models.append(model)

@@ -1317,11 +1334,17 @@ async def get_models(
         model_order_dict = {model_id: i for i, model_id in enumerate(model_order_list)}
         # Sort models by order list priority, with fallback for those not in the list
         models.sort(
-            key=lambda x: (model_order_dict.get(x["id"], float("inf")), x["name"])
+            key=lambda model: (
+                model_order_dict.get(model.get("id", ""), float("inf")),
+                (model.get("name", "") or ""),
+            )
         )

     # Filter out models that the user does not have access to
-    if user.role == "user" and not BYPASS_MODEL_ACCESS_CONTROL:
+    if (
+        user.role == "user"
+        or (user.role == "admin" and not BYPASS_ADMIN_ACCESS_CONTROL)
+    ) and not BYPASS_MODEL_ACCESS_CONTROL:
         models = get_filtered_models(models, user)

     log.debug(
@@ -1392,7 +1415,9 @@ async def chat_completion(
     model_info = Models.get_model_by_id(model_id)

     # Check if user has access to the model
-    if not BYPASS_MODEL_ACCESS_CONTROL and user.role == "user":
+    if not BYPASS_MODEL_ACCESS_CONTROL and (
+        user.role != "admin" or not BYPASS_ADMIN_ACCESS_CONTROL
+    ):
         try:
             check_model_access(user, model)
         except Exception as e:
@@ -1444,6 +1469,7 @@ async def chat_completion(
         }

         if metadata.get("chat_id") and (user and user.role != "admin"):
+            if metadata["chat_id"] != "local":
                 chat = Chats.get_chat_by_id_and_user_id(metadata["chat_id"], user.id)
                 if chat is None:
                     raise HTTPException(
@@ -1454,29 +1480,22 @@ async def chat_completion(
         request.state.metadata = metadata
         form_data["metadata"] = metadata

-        form_data, metadata, events = await process_chat_payload(
-            request, form_data, user, metadata, model
-        )
     except Exception as e:
-        log.debug(f"Error processing chat payload: {e}")
-        if metadata.get("chat_id") and metadata.get("message_id"):
-            # Update the chat message with the error
-            Chats.upsert_message_to_chat_by_id_and_message_id(
-                metadata["chat_id"],
-                metadata["message_id"],
-                {
-                    "error": {"content": str(e)},
-                },
-            )
-
+        log.debug(f"Error processing chat metadata: {e}")
         raise HTTPException(
             status_code=status.HTTP_400_BAD_REQUEST,
             detail=str(e),
         )

+    async def process_chat(request, form_data, user, metadata, model):
         try:
+            form_data, metadata, events = await process_chat_payload(
+                request, form_data, user, metadata, model
+            )
+
             response = await chat_completion_handler(request, form_data, user)
             if metadata.get("chat_id") and metadata.get("message_id"):
+                try:
                     Chats.upsert_message_to_chat_by_id_and_message_id(
                         metadata["chat_id"],
                         metadata["message_id"],
@@ -1484,14 +1503,26 @@ async def chat_completion(
                             "model": model_id,
                         },
                     )
+                except:
+                    pass
+
             return await process_chat_response(
                 request, response, form_data, user, metadata, model, events, tasks
             )
+        except asyncio.CancelledError:
+            log.info("Chat processing was cancelled")
+            try:
+                event_emitter = get_event_emitter(metadata)
+                await event_emitter(
+                    {"type": "task-cancelled"},
+                )
             except Exception as e:
-            log.debug(f"Error in chat completion: {e}")
+                pass
+        except Exception as e:
+            log.debug(f"Error processing chat payload: {e}")
             if metadata.get("chat_id") and metadata.get("message_id"):
                 # Update the chat message with the error
+                try:
                     Chats.upsert_message_to_chat_by_id_and_message_id(
                         metadata["chat_id"],
                         metadata["message_id"],
@@ -1499,12 +1530,29 @@ async def chat_completion(
                             "error": {"content": str(e)},
                         },
                     )
+                except:
+                    pass
+
             raise HTTPException(
                 status_code=status.HTTP_400_BAD_REQUEST,
                 detail=str(e),
             )

+    if (
+        metadata.get("session_id")
+        and metadata.get("chat_id")
+        and metadata.get("message_id")
+    ):
+        # Asynchronous Chat Processing
+        task_id, _ = await create_task(
+            request.app.state.redis,
+            process_chat(request, form_data, user, metadata, model),
+            id=metadata["chat_id"],
+        )
+        return {"status": True, "task_id": task_id}
+    else:
+        return await process_chat(request, form_data, user, metadata, model)


 # Alias for chat_completion (Legacy)
 generate_chat_completions = chat_completion
@@ -1704,6 +1752,16 @@ async def get_app_config(request: Request):
         }
         if user is not None and (user.role in ["admin", "user"])
         else {
+            **(
+                {
+                    "ui": {
+                        "pending_user_overlay_title": app.state.config.PENDING_USER_OVERLAY_TITLE,
+                        "pending_user_overlay_content": app.state.config.PENDING_USER_OVERLAY_CONTENT,
+                    }
+                }
+                if user and user.role == "pending"
+                else {}
+            ),
             **(
                 {
                     "metadata": {
@@ -1717,7 +1775,7 @@ async def get_app_config(request: Request):
                 }
                 if app.state.LICENSE_METADATA
                 else {}
-            )
+            ),
         }
     ),
 }
@@ -0,0 +1,46 @@
+"""Add indexes
+
+Revision ID: 018012973d35
+Revises: d31026856c01
+Create Date: 2025-08-13 03:00:00.000000
+
+"""
+
+from alembic import op
+import sqlalchemy as sa
+
+revision = "018012973d35"
+down_revision = "d31026856c01"
+branch_labels = None
+depends_on = None
+
+
+def upgrade():
+    # Chat table indexes
+    op.create_index("folder_id_idx", "chat", ["folder_id"])
+    op.create_index("user_id_pinned_idx", "chat", ["user_id", "pinned"])
+    op.create_index("user_id_archived_idx", "chat", ["user_id", "archived"])
+    op.create_index("updated_at_user_id_idx", "chat", ["updated_at", "user_id"])
+    op.create_index("folder_id_user_id_idx", "chat", ["folder_id", "user_id"])
+
+    # Tag table index
+    op.create_index("user_id_idx", "tag", ["user_id"])
+
+    # Function table index
+    op.create_index("is_global_idx", "function", ["is_global"])
+
+
+def downgrade():
+    # Chat table indexes
+    op.drop_index("folder_id_idx", table_name="chat")
+    op.drop_index("user_id_pinned_idx", table_name="chat")
+    op.drop_index("user_id_archived_idx", table_name="chat")
+    op.drop_index("updated_at_user_id_idx", table_name="chat")
+    op.drop_index("folder_id_user_id_idx", table_name="chat")
+
+    # Tag table index
+    op.drop_index("user_id_idx", table_name="tag")
+
+    # Function table index
+    op.drop_index("is_global_idx", table_name="function")
@@ -0,0 +1,32 @@
+"""update user table
+
+Revision ID: 3af16a1c9fb6
+Revises: 018012973d35
+Create Date: 2025-08-21 02:07:18.078283
+
+"""
+
+from typing import Sequence, Union
+
+from alembic import op
+import sqlalchemy as sa
+
+# revision identifiers, used by Alembic.
+revision: str = "3af16a1c9fb6"
+down_revision: Union[str, None] = "018012973d35"
+branch_labels: Union[str, Sequence[str], None] = None
+depends_on: Union[str, Sequence[str], None] = None
+
+
+def upgrade() -> None:
+    op.add_column("user", sa.Column("username", sa.String(length=50), nullable=True))
+    op.add_column("user", sa.Column("bio", sa.Text(), nullable=True))
+    op.add_column("user", sa.Column("gender", sa.Text(), nullable=True))
+    op.add_column("user", sa.Column("date_of_birth", sa.Date(), nullable=True))
+
+
+def downgrade() -> None:
+    op.drop_column("user", "username")
+    op.drop_column("user", "bio")
+    op.drop_column("user", "gender")
+    op.drop_column("user", "date_of_birth")
@@ -73,11 +73,6 @@ class ProfileImageUrlForm(BaseModel):
     profile_image_url: str


-class UpdateProfileForm(BaseModel):
-    profile_image_url: str
-    name: str
-
-
 class UpdatePasswordForm(BaseModel):
     password: str
     new_password: str
@@ -10,7 +10,7 @@ from open_webui.models.folders import Folders
 from open_webui.env import SRC_LOG_LEVELS

 from pydantic import BaseModel, ConfigDict
-from sqlalchemy import BigInteger, Boolean, Column, String, Text, JSON
+from sqlalchemy import BigInteger, Boolean, Column, String, Text, JSON, Index
 from sqlalchemy import or_, func, select, and_, text
 from sqlalchemy.sql import exists
 from sqlalchemy.sql.expression import bindparam
@@ -41,6 +41,20 @@ class Chat(Base):
     meta = Column(JSON, server_default="{}")
     folder_id = Column(Text, nullable=True)

+    __table_args__ = (
+        # Performance indexes for common queries
+        # WHERE folder_id = ...
+        Index("folder_id_idx", "folder_id"),
+        # WHERE user_id = ... AND pinned = ...
+        Index("user_id_pinned_idx", "user_id", "pinned"),
+        # WHERE user_id = ... AND archived = ...
+        Index("user_id_archived_idx", "user_id", "archived"),
+        # WHERE user_id = ... ORDER BY updated_at DESC
+        Index("updated_at_user_id_idx", "updated_at", "user_id"),
+        # WHERE folder_id = ... AND user_id = ...
+        Index("folder_id_user_id_idx", "folder_id", "user_id"),
+    )
+

 class ChatModel(BaseModel):
     model_config = ConfigDict(from_attributes=True)
@@ -6,7 +6,7 @@ from open_webui.internal.db import Base, JSONField, get_db
 from open_webui.models.users import Users
 from open_webui.env import SRC_LOG_LEVELS
 from pydantic import BaseModel, ConfigDict
-from sqlalchemy import BigInteger, Boolean, Column, String, Text
+from sqlalchemy import BigInteger, Boolean, Column, String, Text, Index

 log = logging.getLogger(__name__)
 log.setLevel(SRC_LOG_LEVELS["MODELS"])
@@ -31,6 +31,8 @@ class Function(Base):
     updated_at = Column(BigInteger)
     created_at = Column(BigInteger)

+    __table_args__ = (Index("is_global_idx", "is_global"),)
+

 class FunctionMeta(BaseModel):
     description: Optional[str] = None
@@ -250,9 +252,7 @@ class FunctionsTable:

             return user_settings["functions"]["valves"].get(id, {})
         except Exception as e:
-            log.exception(
-                f"Error getting user values by id {id} and user id {user_id}: {e}"
-            )
+            log.exception(f"Error getting user values by id {id} and user id {user_id}")
             return None

     def update_user_valves_by_id_and_user_id(
@@ -8,7 +8,7 @@ from open_webui.internal.db import Base, get_db

 from open_webui.env import SRC_LOG_LEVELS
 from pydantic import BaseModel, ConfigDict
-from sqlalchemy import BigInteger, Column, String, JSON, PrimaryKeyConstraint
+from sqlalchemy import BigInteger, Column, String, JSON, PrimaryKeyConstraint, Index

 log = logging.getLogger(__name__)
 log.setLevel(SRC_LOG_LEVELS["MODELS"])
@@ -24,6 +24,11 @@ class Tag(Base):
     user_id = Column(String)
     meta = Column(JSON, nullable=True)

+    __table_args__ = (
+        PrimaryKeyConstraint("id", "user_id", name="pk_id_user_id"),
+        Index("user_id_idx", "user_id"),
+    )
+
     # Unique constraint ensuring (id, user_id) is unique, not just the `id` column
     __table_args__ = (PrimaryKeyConstraint("id", "user_id", name="pk_id_user_id"),)

@@ -175,7 +175,7 @@ class ToolsTable:
             tool = db.get(Tool, id)
             return tool.valves if tool.valves else {}
         except Exception as e:
-            log.exception(f"Error getting tool valves by id {id}: {e}")
+            log.exception(f"Error getting tool valves by id {id}")
             return None

     def update_tool_valves_by_id(self, id: str, valves: dict) -> Optional[ToolValves]:
@@ -4,14 +4,17 @@ from typing import Optional
 from open_webui.internal.db import Base, JSONField, get_db

+from open_webui.env import DATABASE_USER_ACTIVE_STATUS_UPDATE_INTERVAL
 from open_webui.models.chats import Chats
 from open_webui.models.groups import Groups
+from open_webui.utils.misc import throttle

 from pydantic import BaseModel, ConfigDict
-from sqlalchemy import BigInteger, Column, String, Text
+from sqlalchemy import BigInteger, Column, String, Text, Date
 from sqlalchemy import or_

+import datetime
+
 ####################
 # User DB Schema
@@ -23,20 +26,28 @@ class User(Base):

     id = Column(String, primary_key=True)
     name = Column(String)

     email = Column(String)
+    username = Column(String(50), nullable=True)

     role = Column(String)
     profile_image_url = Column(Text)

-    last_active_at = Column(BigInteger)
-    updated_at = Column(BigInteger)
-    created_at = Column(BigInteger)
+    bio = Column(Text, nullable=True)
+    gender = Column(Text, nullable=True)
+    date_of_birth = Column(Date, nullable=True)
+
+    info = Column(JSONField, nullable=True)
+    settings = Column(JSONField, nullable=True)

     api_key = Column(String, nullable=True, unique=True)
-    settings = Column(JSONField, nullable=True)
-    info = Column(JSONField, nullable=True)

     oauth_sub = Column(Text, unique=True)

+    last_active_at = Column(BigInteger)
+
+    updated_at = Column(BigInteger)
+    created_at = Column(BigInteger)


 class UserSettings(BaseModel):
     ui: Optional[dict] = {}
@@ -47,20 +58,27 @@ class UserSettings(BaseModel):
 class UserModel(BaseModel):
     id: str
     name: str

     email: str
+    username: Optional[str] = None

     role: str = "pending"
     profile_image_url: str

+    bio: Optional[str] = None
+    gender: Optional[str] = None
+    date_of_birth: Optional[datetime.date] = None
+
+    info: Optional[dict] = None
+    settings: Optional[UserSettings] = None
+
+    api_key: Optional[str] = None
+    oauth_sub: Optional[str] = None
+
     last_active_at: int  # timestamp in epoch
     updated_at: int  # timestamp in epoch
     created_at: int  # timestamp in epoch

-    api_key: Optional[str] = None
-    settings: Optional[UserSettings] = None
-    info: Optional[dict] = None
-
-    oauth_sub: Optional[str] = None
-
     model_config = ConfigDict(from_attributes=True)


@@ -69,6 +87,14 @@ class UserModel(BaseModel):
 ####################


+class UpdateProfileForm(BaseModel):
+    profile_image_url: str
+    name: str
+    bio: Optional[str] = None
+    gender: Optional[str] = None
+    date_of_birth: Optional[datetime.date] = None
+
+
 class UserListResponse(BaseModel):
|
||||||
users: list[UserModel]
|
users: list[UserModel]
|
||||||
total: int
|
total: int
|
||||||
|
@ -311,6 +337,7 @@ class UsersTable:
|
||||||
except Exception:
|
except Exception:
|
||||||
return None
|
return None
|
||||||
|
|
||||||
|
@throttle(DATABASE_USER_ACTIVE_STATUS_UPDATE_INTERVAL)
|
||||||
def update_user_last_active_by_id(self, id: str) -> Optional[UserModel]:
|
def update_user_last_active_by_id(self, id: str) -> Optional[UserModel]:
|
||||||
try:
|
try:
|
||||||
with get_db() as db:
|
with get_db() as db:
|
||||||
|
@ -346,7 +373,8 @@ class UsersTable:
|
||||||
user = db.query(User).filter_by(id=id).first()
|
user = db.query(User).filter_by(id=id).first()
|
||||||
return UserModel.model_validate(user)
|
return UserModel.model_validate(user)
|
||||||
# return UserModel(**user.dict())
|
# return UserModel(**user.dict())
|
||||||
except Exception:
|
except Exception as e:
|
||||||
|
print(e)
|
||||||
return None
|
return None
|
||||||
|
|
||||||
def update_user_settings_by_id(self, id: str, updated: dict) -> Optional[UserModel]:
|
def update_user_settings_by_id(self, id: str, updated: dict) -> Optional[UserModel]:
|
||||||
|
|
|
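The new `@throttle(DATABASE_USER_ACTIVE_STATUS_UPDATE_INTERVAL)` decorator above deduplicates repeated `last_active_at` writes for the same user. A minimal sketch of the idea, assuming interval-based per-argument deduplication (the real `open_webui.utils.misc.throttle` may differ; `update_last_active` is illustrative):

```python
import time
from functools import wraps


def throttle(interval):
    """Skip repeated calls with the same arguments within `interval` seconds."""

    def decorator(func):
        last_called = {}

        @wraps(func)
        def wrapper(*args, **kwargs):
            if interval:
                key = (args, tuple(sorted(kwargs.items())))
                now = time.time()
                if key in last_called and now - last_called[key] < float(interval):
                    return None  # Deduplicated: skip the redundant write
                last_called[key] = now
            return func(*args, **kwargs)

        return wrapper

    return decorator


calls = []


@throttle(60)
def update_last_active(user_id):
    # Stand-in for the database write being deduplicated.
    calls.append(user_id)
    return user_id


update_last_active("u1")  # first call runs
update_last_active("u1")  # within the interval for the same user: skipped
update_last_active("u2")  # different arguments: runs
```

This keeps high-frequency callers (every authenticated request touches `last_active_at`) from issuing a write per request, which is what reduces SQLite write contention.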
@@ -124,6 +124,8 @@ def query_doc_with_hybrid_search(
     hybrid_bm25_weight: float,
 ) -> dict:
     try:
+        # BM_25 required only if weight is greater than 0
+        if hybrid_bm25_weight > 0:
             log.debug(f"query_doc_with_hybrid_search:doc {collection_name}")
             bm25_retriever = BM25Retriever.from_texts(
                 texts=collection_result.documents[0],

@@ -337,6 +339,8 @@ def query_collection_with_hybrid_search(
     # Fetch collection data once per collection sequentially
     # Avoid fetching the same data multiple times later
     collection_results = {}
+    # Only retrieve entire collection if bm_25 calculation is required
+    if hybrid_bm25_weight > 0:
         for collection_name in collection_names:
             try:
                 log.debug(

@@ -348,7 +352,9 @@ def query_collection_with_hybrid_search(
             except Exception as e:
                 log.exception(f"Failed to fetch collection {collection_name}: {e}")
                 collection_results[collection_name] = None
+    else:
+        for collection_name in collection_names:
+            collection_results[collection_name] = []
     log.info(
         f"Starting hybrid search for {len(queries)} queries in {len(collection_names)} collections..."
     )

@@ -946,6 +952,7 @@ class RerankCompressor(BaseDocumentCompressor):
     ) -> Sequence[Document]:
         reranking = self.reranking_function is not None

+        scores = None
         if reranking:
             scores = self.reranking_function(
                 [(query, doc.page_content) for doc in documents]

@@ -959,8 +966,12 @@ class RerankCompressor(BaseDocumentCompressor):
             )
             scores = util.cos_sim(query_embedding, document_embedding)[0]

+        if scores:
             docs_with_scores = list(
-            zip(documents, scores.tolist() if not isinstance(scores, list) else scores)
+                zip(
+                    documents,
+                    scores.tolist() if not isinstance(scores, list) else scores,
+                )
             )
             if self.r_score:
                 docs_with_scores = [

@@ -978,3 +989,8 @@ class RerankCompressor(BaseDocumentCompressor):
                 )
                 final_results.append(doc)
             return final_results
+        else:
+            log.warning(
+                "No valid scores found, check your reranking function. Returning original documents."
+            )
+            return documents

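The hunks above gate the expensive full-collection fetch on `hybrid_bm25_weight > 0`, since BM25 is the only consumer of the raw documents. A simplified sketch of that gating (function and parameter names here are illustrative, not from the source):

```python
def gather_collection_results(collection_names, fetch, hybrid_bm25_weight):
    """Fetch full collection contents only when a BM25 retriever will need them;
    otherwise record an empty placeholder per collection."""
    collection_results = {}
    if hybrid_bm25_weight > 0:
        for name in collection_names:
            try:
                collection_results[name] = fetch(name)
            except Exception:
                # Mirror the diff: a failed fetch is recorded as None, not fatal.
                collection_results[name] = None
    else:
        for name in collection_names:
            collection_results[name] = []
    return collection_results
```

With a pure-vector configuration (weight 0), no collection data is pulled at all, which is the performance win the change is after.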
@@ -1,5 +1,7 @@
 from pymilvus import MilvusClient as Client
 from pymilvus import FieldSchema, DataType
+from pymilvus import connections, Collection

 import json
 import logging
 from typing import Optional

@@ -188,6 +190,8 @@ class MilvusClient(VectorDBBase):
         return self._result_to_search_result(result)

     def query(self, collection_name: str, filter: dict, limit: Optional[int] = None):
+        connections.connect(uri=MILVUS_URI, token=MILVUS_TOKEN, db_name=MILVUS_DB)

         # Construct the filter string for querying
         collection_name = collection_name.replace("-", "_")
         if not self.has_collection(collection_name):

@@ -201,72 +205,36 @@ class MilvusClient(VectorDBBase):
                 for key, value in filter.items()
             ]
         )
-        max_limit = 16383  # The maximum number of records per request
-        all_results = []
-        if limit is None:
-            # Milvus default limit for query if not specified is 16384, but docs mention iteration.
-            # Let's set a practical high number if "all" is intended, or handle true pagination.
-            # For now, if limit is None, we'll fetch in batches up to a very large number.
-            # This part could be refined based on expected use cases for "get all".
-            # For this function signature, None implies "as many as possible" up to Milvus limits.
-            limit = (
-                16384 * 10
-            )  # A large number to signify fetching many, will be capped by actual data or max_limit per call.
-            log.info(
-                f"Limit not specified for query, fetching up to {limit} results in batches."
-            )

-        # Initialize offset and remaining to handle pagination
-        offset = 0
-        remaining = limit
+        collection = Collection(f"{self.collection_prefix}_{collection_name}")
+        collection.load()
+        all_results = []

         try:
             log.info(
                 f"Querying collection {self.collection_prefix}_{collection_name} with filter: '{filter_string}', limit: {limit}"
             )
-            # Loop until there are no more items to fetch or the desired limit is reached
-            while remaining > 0:
-                current_fetch = min(
-                    max_limit, remaining if isinstance(remaining, int) else max_limit
-                )
-                log.debug(
-                    f"Querying with offset: {offset}, current_fetch: {current_fetch}"
-                )

-            results = self.client.query(
-                collection_name=f"{self.collection_prefix}_{collection_name}",
+            iterator = collection.query_iterator(
                 filter=filter_string,
                 output_fields=[
                     "id",
                     "data",
                     "metadata",
-                ],  # Explicitly list needed fields. Vector not usually needed in query.
-                limit=current_fetch,
-                offset=offset,
+                ],
+                limit=limit,  # Pass the limit directly; None means no limit.
             )

-            if not results:
-                log.debug("No more results from query.")
-                break
-            all_results.extend(results)
-            results_count = len(results)
-            log.debug(f"Fetched {results_count} results in this batch.")
-
-            if isinstance(remaining, int):
-                remaining -= results_count
-
-            offset += results_count
-
-            # Break the loop if the results returned are less than the requested fetch count (means end of data)
-            if results_count < current_fetch:
-                log.debug(
-                    "Fetched less than requested, assuming end of results for this query."
-                )
+            while True:
+                result = iterator.next()
+                if not result:
+                    iterator.close()
                     break
+                all_results += result

             log.info(f"Total results from query: {len(all_results)}")
             return self._result_to_get_result([all_results])

         except Exception as e:
             log.exception(
                 f"Error querying collection {self.collection_prefix}_{collection_name} with filter '{filter_string}' and limit {limit}: {e}"
             )

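The Milvus hunk above replaces hand-rolled offset pagination with a `query_iterator` drain loop: pull batches until an empty batch comes back, then close the iterator. A runnable stand-in of that loop (no pymilvus required; `FakeQueryIterator` is a test double, not the library's class):

```python
class FakeQueryIterator:
    """Stand-in for a pymilvus query iterator: returns fixed-size batches,
    then an empty batch to signal exhaustion."""

    def __init__(self, rows, batch_size=3):
        self._rows = rows
        self._pos = 0
        self._batch_size = batch_size
        self.closed = False

    def next(self):
        batch = self._rows[self._pos : self._pos + self._batch_size]
        self._pos += len(batch)
        return batch

    def close(self):
        self.closed = True


def drain(iterator):
    # Mirrors the new loop in the diff: pull batches until an empty batch,
    # close the iterator, and accumulate every row.
    all_results = []
    while True:
        result = iterator.next()
        if not result:
            iterator.close()
            break
        all_results += result
    return all_results
```

Compared with the removed offset/`remaining` bookkeeping, the iterator keeps server-side cursor state, so the client no longer has to guess per-request caps or detect the end of data from short reads.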
@@ -111,11 +111,35 @@ class PgvectorClient(VectorDBBase):

         try:
             # Ensure the pgvector extension is available
-            self.session.execute(text("CREATE EXTENSION IF NOT EXISTS vector;"))
+            # Use a conditional check to avoid permission issues on Azure PostgreSQL
+            self.session.execute(
+                text(
+                    """
+                    DO $$
+                    BEGIN
+                        IF NOT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'vector') THEN
+                            CREATE EXTENSION IF NOT EXISTS vector;
+                        END IF;
+                    END $$;
+                    """
+                )
+            )

             if PGVECTOR_PGCRYPTO:
                 # Ensure the pgcrypto extension is available for encryption
-                self.session.execute(text("CREATE EXTENSION IF NOT EXISTS pgcrypto;"))
+                # Use a conditional check to avoid permission issues on Azure PostgreSQL
+                self.session.execute(
+                    text(
+                        """
+                        DO $$
+                        BEGIN
+                            IF NOT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'pgcrypto') THEN
+                                CREATE EXTENSION IF NOT EXISTS pgcrypto;
+                            END IF;
+                        END $$;
+                        """
+                    )
+                )

                 if not PGVECTOR_PGCRYPTO_KEY:
                     raise ValueError(

@@ -197,13 +197,23 @@ class S3VectorClient(VectorDBBase):
                         "metadata": metadata,
                     }
                 )
-            # Insert vectors
+            # Insert vectors in batches of 500 (S3 Vector API limit)
+            batch_size = 500
+            for i in range(0, len(vectors), batch_size):
+                batch = vectors[i : i + batch_size]
                 self.client.put_vectors(
                     vectorBucketName=self.bucket_name,
                     indexName=collection_name,
-                    vectors=vectors,
+                    vectors=batch,
+                )
+                log.info(
+                    f"Inserted batch {i//batch_size + 1}: {len(batch)} vectors into index '{collection_name}'."
+                )
+
+            log.info(
+                f"Completed insertion of {len(vectors)} vectors into index '{collection_name}'."
             )
-            log.info(f"Inserted {len(vectors)} vectors into index '{collection_name}'.")
         except Exception as e:
             log.error(f"Error inserting vectors: {e}")
             raise

@@ -258,16 +268,29 @@ class S3VectorClient(VectorDBBase):
                         "metadata": metadata,
                     }
                 )
-            # Upsert vectors (using put_vectors for upsert semantics)
+            # Upsert vectors in batches of 500 (S3 Vector API limit)
+            batch_size = 500
+            for i in range(0, len(vectors), batch_size):
+                batch = vectors[i : i + batch_size]
+                if i == 0:  # Log sample info for first batch only
                     log.info(
-                f"Upserting {len(vectors)} vectors. First vector sample: key={vectors[0]['key']}, data_type={type(vectors[0]['data']['float32'])}, data_len={len(vectors[0]['data']['float32'])}"
+                        f"Upserting batch 1: {len(batch)} vectors. First vector sample: key={batch[0]['key']}, data_type={type(batch[0]['data']['float32'])}, data_len={len(batch[0]['data']['float32'])}"
                     )
+                else:
+                    log.info(
+                        f"Upserting batch {i//batch_size + 1}: {len(batch)} vectors."
+                    )
+
                 self.client.put_vectors(
                     vectorBucketName=self.bucket_name,
                     indexName=collection_name,
-                    vectors=vectors,
+                    vectors=batch,
+                )
+
+            log.info(
+                f"Completed upsert of {len(vectors)} vectors into index '{collection_name}'."
             )
-            log.info(f"Upserted {len(vectors)} vectors into index '{collection_name}'.")
         except Exception as e:
             log.error(f"Error upserting vectors: {e}")
             raise

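Both S3 Vector hunks apply the same slicing pattern to stay under the 500-vectors-per-request cap on `put_vectors`. The core of it can be sketched as a small generator (`batched` is an illustrative name, not from the source):

```python
def batched(items, batch_size=500):
    """Yield slices of at most `batch_size` items, matching the diff's
    500-vector cap per S3 Vectors put_vectors request."""
    for i in range(0, len(items), batch_size):
        yield items[i : i + batch_size]
```

For 1203 vectors this produces three requests of 500, 500, and 203 vectors; an empty input produces no requests at all, so no special-casing is needed.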
@@ -11,7 +11,10 @@ log.setLevel(SRC_LOG_LEVELS["RAG"])


 def search_duckduckgo(
-    query: str, count: int, filter_list: Optional[list[str]] = None
+    query: str,
+    count: int,
+    filter_list: Optional[list[str]] = None,
+    concurrent_requests: Optional[int] = None,
 ) -> list[SearchResult]:
     """
     Search using DuckDuckGo's Search API and return the results as a list of SearchResult objects.

@@ -25,6 +28,9 @@ def search_duckduckgo(
     # Use the DDGS context manager to create a DDGS object
     search_results = []
     with DDGS() as ddgs:
+        if concurrent_requests:
+            ddgs.threads = concurrent_requests
+
         # Use the ddgs.text() method to perform the search
         try:
             search_results = ddgs.text(

@@ -11,7 +11,7 @@ def get_filtered_results(results, filter_list):
         return results
     filtered_results = []
     for result in results:
-        url = result.get("url") or result.get("link", "")
+        url = result.get("url") or result.get("link", "") or result.get("href", "")
         if not validators.url(url):
             continue
         domain = urlparse(url).netloc

@@ -550,6 +550,11 @@ def transcription_handler(request, file_path, metadata):

     metadata = metadata or {}

+    languages = [
+        metadata.get("language", None) if WHISPER_LANGUAGE == "" else WHISPER_LANGUAGE,
+        None,  # Always fallback to None in case transcription fails
+    ]
+
     if request.app.state.config.STT_ENGINE == "":
         if request.app.state.faster_whisper_model is None:
             request.app.state.faster_whisper_model = set_faster_whisper_model(

@@ -561,11 +566,7 @@ def transcription_handler(request, file_path, metadata):
             file_path,
             beam_size=5,
             vad_filter=request.app.state.config.WHISPER_VAD_FILTER,
-            language=(
-                metadata.get("language", None)
-                if WHISPER_LANGUAGE == ""
-                else WHISPER_LANGUAGE
-            ),
+            language=languages[0],
         )
         log.info(
             "Detected language '%s' with probability %f"

@@ -585,22 +586,27 @@ def transcription_handler(request, file_path, metadata):
     elif request.app.state.config.STT_ENGINE == "openai":
         r = None
         try:
+            for language in languages:
+                payload = {
+                    "model": request.app.state.config.STT_MODEL,
+                }
+
+                if language:
+                    payload["language"] = language
+
                 r = requests.post(
                     url=f"{request.app.state.config.STT_OPENAI_API_BASE_URL}/audio/transcriptions",
                     headers={
                         "Authorization": f"Bearer {request.app.state.config.STT_OPENAI_API_KEY}"
                     },
                     files={"file": (filename, open(file_path, "rb"))},
-                data={
-                    "model": request.app.state.config.STT_MODEL,
-                    **(
-                        {"language": metadata.get("language")}
-                        if metadata.get("language")
-                        else {}
-                    ),
-                },
+                    data=payload,
                 )

+                if r.status_code == 200:
+                    # Successful transcription
+                    break
+
             r.raise_for_status()
             data = r.json()

@@ -641,11 +647,14 @@ def transcription_handler(request, file_path, metadata):
             "Content-Type": mime,
         }

-        # Add model if specified
+        for language in languages:
             params = {}
             if request.app.state.config.STT_MODEL:
                 params["model"] = request.app.state.config.STT_MODEL

+            if language:
+                params["language"] = language
+
             # Make request to Deepgram API
             r = requests.post(
                 "https://api.deepgram.com/v1/listen?smart_format=true",

@@ -653,6 +662,11 @@ def transcription_handler(request, file_path, metadata):
                 params=params,
                 data=file_data,
             )

+            if r.status_code == 200:
+                # Successful transcription
+                break
+
         r.raise_for_status()
         response_data = r.json()

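The transcription hunks above build a `languages` list whose last entry is always `None`, then retry the request with each hint until one returns 200. A simplified sketch of that fallback loop (the `post` callable, `fake_post`, and the `"stt-model"` name are hypothetical stand-ins for the real HTTP call and configured model):

```python
def transcribe_with_fallback(post, languages):
    """Send the request once per language hint, stopping at the first 200;
    the trailing None entry retries without any language hint."""
    r = None
    for language in languages:
        payload = {"model": "stt-model"}  # hypothetical model name
        if language:
            payload["language"] = language
        r = post(payload)
        if r["status"] == 200:
            break
    return r


def fake_post(payload):
    # Simulated STT endpoint that rejects an unsupported language hint.
    if payload.get("language") == "zz":
        return {"status": 400}
    return {"status": 200, "text": "hello"}


result = transcribe_with_fallback(fake_post, ["zz", None])
```

This way a bad or unsupported language hint degrades to auto-detection instead of failing the whole transcription.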
@@ -15,10 +15,9 @@ from open_webui.models.auths import (
     SigninResponse,
     SignupForm,
     UpdatePasswordForm,
-    UpdateProfileForm,
     UserResponse,
 )
-from open_webui.models.users import Users
+from open_webui.models.users import Users, UpdateProfileForm
 from open_webui.models.groups import Groups

 from open_webui.constants import ERROR_MESSAGES, WEBHOOK_MESSAGES

@@ -73,7 +72,13 @@ class SessionUserResponse(Token, UserResponse):
     permissions: Optional[dict] = None


-@router.get("/", response_model=SessionUserResponse)
+class SessionUserInfoResponse(SessionUserResponse):
+    bio: Optional[str] = None
+    gender: Optional[str] = None
+    date_of_birth: Optional[datetime.date] = None
+
+
+@router.get("/", response_model=SessionUserInfoResponse)
 async def get_session_user(
     request: Request, response: Response, user=Depends(get_current_user)
 ):

@@ -121,6 +126,9 @@ async def get_session_user(
         "name": user.name,
         "role": user.role,
         "profile_image_url": user.profile_image_url,
+        "bio": user.bio,
+        "gender": user.gender,
+        "date_of_birth": user.date_of_birth,
         "permissions": user_permissions,
     }

@@ -137,7 +145,7 @@ async def update_profile(
     if session_user:
         user = Users.update_user_by_id(
             session_user.id,
-            {"profile_image_url": form_data.profile_image_url, "name": form_data.name},
+            form_data.model_dump(),
         )
         if user:
             return user

@@ -625,7 +633,7 @@ async def signup(request: Request, response: Response, form_data: SignupForm):
             )

             if request.app.state.config.WEBHOOK_URL:
-                post_webhook(
+                await post_webhook(
                     request.app.state.WEBUI_NAME,
                     request.app.state.config.WEBHOOK_URL,
                     WEBHOOK_MESSAGES.USER_SIGNUP(user.name),

@@ -209,7 +209,7 @@ async def send_notification(name, webui_url, channel, message, active_user_ids):
             )

             if webhook_url:
-                post_webhook(
+                await post_webhook(
                     name,
                     webhook_url,
                     f"#{channel.name} - {webui_url}/channels/{channel.id}\n\n{message.content}",

@@ -36,7 +36,7 @@ router = APIRouter()

 @router.get("/", response_model=list[ChatTitleIdResponse])
 @router.get("/list", response_model=list[ChatTitleIdResponse])
-async def get_session_user_chat_list(
+def get_session_user_chat_list(
     user=Depends(get_verified_user), page: Optional[int] = None
 ):
     try:

@@ -9,8 +9,8 @@ from open_webui.config import BannerModel

 from open_webui.utils.tools import (
     get_tool_server_data,
-    get_tool_servers_data,
     get_tool_server_url,
+    set_tool_servers,
 )


@@ -114,10 +114,7 @@ async def set_tool_servers_config(
     request.app.state.config.TOOL_SERVER_CONNECTIONS = [
         connection.model_dump() for connection in form_data.TOOL_SERVER_CONNECTIONS
     ]
-    request.app.state.TOOL_SERVERS = await get_tool_servers_data(
-        request.app.state.config.TOOL_SERVER_CONNECTIONS
-    )
+    await set_tool_servers(request)

     return {
         "TOOL_SERVER_CONNECTIONS": request.app.state.config.TOOL_SERVER_CONNECTIONS,

@@ -6,8 +6,10 @@ from fnmatch import fnmatch
 from pathlib import Path
 from typing import Optional
 from urllib.parse import quote
+import asyncio

 from fastapi import (
+    BackgroundTasks,
     APIRouter,
     Depends,
     File,

@@ -18,6 +20,7 @@ from fastapi import (
     status,
     Query,
 )

 from fastapi.responses import FileResponse, StreamingResponse
 from open_webui.constants import ERROR_MESSAGES
 from open_webui.env import SRC_LOG_LEVELS

@@ -42,7 +45,6 @@ from pydantic import BaseModel
 log = logging.getLogger(__name__)
 log.setLevel(SRC_LOG_LEVELS["MODELS"])

-
 router = APIRouter()


@@ -83,13 +85,64 @@ def has_access_to_file(
 ############################


+def process_uploaded_file(request, file, file_path, file_item, file_metadata, user):
+    try:
+        if file.content_type:
+            stt_supported_content_types = getattr(
+                request.app.state.config, "STT_SUPPORTED_CONTENT_TYPES", []
+            )
+
+            if any(
+                fnmatch(file.content_type, content_type)
+                for content_type in (
+                    stt_supported_content_types
+                    if stt_supported_content_types
+                    and any(t.strip() for t in stt_supported_content_types)
+                    else ["audio/*", "video/webm"]
+                )
+            ):
+                file_path = Storage.get_file(file_path)
+                result = transcribe(request, file_path, file_metadata)
+
+                process_file(
+                    request,
+                    ProcessFileForm(
+                        file_id=file_item.id, content=result.get("text", "")
+                    ),
+                    user=user,
+                )
+            elif (not file.content_type.startswith(("image/", "video/"))) or (
+                request.app.state.config.CONTENT_EXTRACTION_ENGINE == "external"
+            ):
+                process_file(request, ProcessFileForm(file_id=file_item.id), user=user)
+        else:
+            log.info(
+                f"File type {file.content_type} is not provided, but trying to process anyway"
+            )
+            process_file(request, ProcessFileForm(file_id=file_item.id), user=user)
+
+        Files.update_file_data_by_id(
+            file_item.id,
+            {"status": "completed"},
+        )
+    except Exception as e:
+        log.error(f"Error processing file: {file_item.id}")
+        Files.update_file_data_by_id(
+            file_item.id,
+            {
+                "status": "failed",
+                "error": str(e.detail) if hasattr(e, "detail") else str(e),
+            },
+        )
+
+
 @router.post("/", response_model=FileModelResponse)
 def upload_file(
     request: Request,
+    background_tasks: BackgroundTasks,
     file: UploadFile = File(...),
     metadata: Optional[dict | str] = Form(None),
     process: bool = Query(True),
-    internal: bool = False,
     user=Depends(get_verified_user),
 ):
     log.info(f"file.content_type: {file.content_type}")

@@ -112,7 +165,7 @@ def upload_file(
     # Remove the leading dot from the file extension
     file_extension = file_extension[1:] if file_extension else ""

-    if (not internal) and request.app.state.config.ALLOWED_FILE_EXTENSIONS:
+    if process and request.app.state.config.ALLOWED_FILE_EXTENSIONS:
         request.app.state.config.ALLOWED_FILE_EXTENSIONS = [
             ext for ext in request.app.state.config.ALLOWED_FILE_EXTENSIONS if ext
         ]

@@ -129,13 +182,16 @@ def upload_file(
     id = str(uuid.uuid4())
     name = filename
     filename = f"{id}_{filename}"
-    tags = {
+    contents, file_path = Storage.upload_file(
+        file.file,
+        filename,
+        {
             "OpenWebUI-User-Email": user.email,
             "OpenWebUI-User-Id": user.id,
             "OpenWebUI-User-Name": user.name,
             "OpenWebUI-File-Id": id,
-    }
-    contents, file_path = Storage.upload_file(file.file, filename, tags)
+        },
+    )

     file_item = Files.insert_new_file(
         user.id,

@@ -144,6 +200,9 @@ def upload_file(
             "id": id,
             "filename": name,
             "path": file_path,
+            "data": {
+                **({"status": "pending"} if process else {}),
+            },
             "meta": {
                 "name": name,
                 "content_type": file.content_type,

@@ -153,51 +212,19 @@ def upload_file(
                 }
             ),
     )

     if process:
-        try:
-            if file.content_type:
-                stt_supported_content_types = getattr(
-                    request.app.state.config, "STT_SUPPORTED_CONTENT_TYPES", []
-                )
-
-                if any(
-                    fnmatch(file.content_type, content_type)
-                    for content_type in (
-                        stt_supported_content_types
-                        if stt_supported_content_types
-                        and any(t.strip() for t in stt_supported_content_types)
-                        else ["audio/*", "video/webm"]
-                    )
-                ):
-                    file_path = Storage.get_file(file_path)
-                    result = transcribe(request, file_path, file_metadata)
-
-                    process_file(
-                        request,
-                        ProcessFileForm(file_id=id, content=result.get("text", "")),
-                        user=user,
+        background_tasks.add_task(
+            process_uploaded_file,
+            request,
+            file,
+            file_path,
+            file_item,
+            file_metadata,
|
||||||
|
user,
|
||||||
)
|
)
|
||||||
elif (not file.content_type.startswith(("image/", "video/"))) or (
|
return {"status": True, **file_item.model_dump()}
|
||||||
request.app.state.config.CONTENT_EXTRACTION_ENGINE == "external"
|
|
||||||
):
|
|
||||||
process_file(request, ProcessFileForm(file_id=id), user=user)
|
|
||||||
else:
|
else:
|
||||||
log.info(
|
|
||||||
f"File type {file.content_type} is not provided, but trying to process anyway"
|
|
||||||
)
|
|
||||||
process_file(request, ProcessFileForm(file_id=id), user=user)
|
|
||||||
|
|
||||||
file_item = Files.get_file_by_id(id=id)
|
|
||||||
except Exception as e:
|
|
||||||
log.exception(e)
|
|
||||||
log.error(f"Error processing file: {file_item.id}")
|
|
||||||
file_item = FileModelResponse(
|
|
||||||
**{
|
|
||||||
**file_item.model_dump(),
|
|
||||||
"error": str(e.detail) if hasattr(e, "detail") else str(e),
|
|
||||||
}
|
|
||||||
)
|
|
||||||
|
|
||||||
if file_item:
|
if file_item:
|
||||||
return file_item
|
return file_item
|
||||||
else:
|
else:
|
||||||
|
@ -331,6 +358,60 @@ async def get_file_by_id(id: str, user=Depends(get_verified_user)):
|
||||||
)
|
)
|
||||||
|
|
||||||
|
|
||||||
|
@router.get("/{id}/process/status")
|
||||||
|
async def get_file_process_status(
|
||||||
|
id: str, stream: bool = Query(False), user=Depends(get_verified_user)
|
||||||
|
):
|
||||||
|
file = Files.get_file_by_id(id)
|
||||||
|
|
||||||
|
if not file:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_404_NOT_FOUND,
|
||||||
|
detail=ERROR_MESSAGES.NOT_FOUND,
|
||||||
|
)
|
||||||
|
|
||||||
|
if (
|
||||||
|
file.user_id == user.id
|
||||||
|
or user.role == "admin"
|
||||||
|
or has_access_to_file(id, "read", user)
|
||||||
|
):
|
||||||
|
if stream:
|
||||||
|
MAX_FILE_PROCESSING_DURATION = 3600 * 2
|
||||||
|
|
||||||
|
async def event_stream(file_item):
|
||||||
|
for _ in range(MAX_FILE_PROCESSING_DURATION):
|
||||||
|
file_item = Files.get_file_by_id(file_item.id)
|
||||||
|
if file_item:
|
||||||
|
data = file_item.model_dump().get("data", {})
|
||||||
|
status = data.get("status")
|
||||||
|
|
||||||
|
if status:
|
||||||
|
event = {"status": status}
|
||||||
|
if status == "failed":
|
||||||
|
event["error"] = data.get("error")
|
||||||
|
|
||||||
|
yield f"data: {json.dumps(event)}\n\n"
|
||||||
|
if status in ("completed", "failed"):
|
||||||
|
break
|
||||||
|
else:
|
||||||
|
# Legacy
|
||||||
|
break
|
||||||
|
|
||||||
|
await asyncio.sleep(0.5)
|
||||||
|
|
||||||
|
return StreamingResponse(
|
||||||
|
event_stream(file),
|
||||||
|
media_type="text/event-stream",
|
||||||
|
)
|
||||||
|
else:
|
||||||
|
return {"status": file.data.get("status", "pending")}
|
||||||
|
else:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_404_NOT_FOUND,
|
||||||
|
detail=ERROR_MESSAGES.NOT_FOUND,
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
############################
|
############################
|
||||||
# Get File Data Content By Id
|
# Get File Data Content By Id
|
||||||
############################
|
############################
|
||||||
|
|
|
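The `/{id}/process/status` endpoint above polls the file record and emits server-sent events until processing settles. The per-record decision can be sketched as a small pure helper (the function name and return shape are illustrative, not part of the diff):

```python
import json


def format_status_event(data: dict) -> tuple[str, bool]:
    """Render one SSE frame for a file record's data dict and report
    whether the poll loop should stop (completed, failed, or legacy)."""
    status = data.get("status")
    if status is None:
        # Legacy records carry no status field; stop polling immediately.
        return "", True
    event = {"status": status}
    if status == "failed":
        event["error"] = data.get("error")
    frame = f"data: {json.dumps(event)}\n\n"
    return frame, status in ("completed", "failed")
```

A client reads these frames from the `text/event-stream` response and stops once it sees `completed` or `failed`.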
@@ -469,7 +469,9 @@ def upload_image(request, image_data, content_type, metadata, user):
             "content-type": content_type,
         },
     )
-    file_item = upload_file(request, file, metadata=metadata, internal=True, user=user)
+    file_item = upload_file(
+        request, file=file, metadata=metadata, process=False, user=user
+    )
     url = request.app.url_path_for("get_file_content_by_id", id=file_item.id)
     return url

@@ -483,11 +485,15 @@ async def image_generations(
     # if IMAGE_SIZE = 'auto', default WidthxHeight to the 512x512 default
     # This is only relevant when the user has set IMAGE_SIZE to 'auto' with an
     # image model other than gpt-image-1, which is warned about on settings save
-    width, height = (
-        tuple(map(int, request.app.state.config.IMAGE_SIZE.split("x")))
-        if "x" in request.app.state.config.IMAGE_SIZE
-        else (512, 512)
-    )
+    size = "512x512"
+    if "x" in request.app.state.config.IMAGE_SIZE:
+        size = request.app.state.config.IMAGE_SIZE
+
+    if "x" in form_data.size:
+        size = form_data.size
+
+    width, height = tuple(map(int, size.split("x")))

     r = None
     try:

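The `image_generations` hunk replaces the one-shot conditional with a layered fallback: the requested size wins over the configured size, and anything without an `x` (such as `auto`) falls back to 512x512. A standalone sketch of that resolution order (the helper name is mine):

```python
def resolve_image_size(config_size: str, requested_size: str) -> tuple[int, int]:
    """Pick a concrete width/height: the request overrides the config,
    and non-'WxH' values like 'auto' fall back to 512x512."""
    size = "512x512"
    if "x" in config_size:
        size = config_size
    if "x" in requested_size:
        size = requested_size
    width, height = map(int, size.split("x"))
    return width, height
```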
@@ -25,7 +25,7 @@ from open_webui.utils.access_control import has_access, has_permission


 from open_webui.env import SRC_LOG_LEVELS
-from open_webui.config import ENABLE_ADMIN_WORKSPACE_CONTENT_ACCESS
+from open_webui.config import BYPASS_ADMIN_ACCESS_CONTROL
 from open_webui.models.models import Models, ModelForm


@@ -43,7 +43,7 @@ router = APIRouter()
 async def get_knowledge(user=Depends(get_verified_user)):
     knowledge_bases = []

-    if user.role == "admin" and ENABLE_ADMIN_WORKSPACE_CONTENT_ACCESS:
+    if user.role == "admin" and BYPASS_ADMIN_ACCESS_CONTROL:
         knowledge_bases = Knowledges.get_knowledge_bases()
     else:
         knowledge_bases = Knowledges.get_knowledge_bases_by_user_id(user.id, "read")
@@ -91,7 +91,7 @@ async def get_knowledge(user=Depends(get_verified_user)):
 async def get_knowledge_list(user=Depends(get_verified_user)):
     knowledge_bases = []

-    if user.role == "admin" and ENABLE_ADMIN_WORKSPACE_CONTENT_ACCESS:
+    if user.role == "admin" and BYPASS_ADMIN_ACCESS_CONTROL:
         knowledge_bases = Knowledges.get_knowledge_bases()
     else:
         knowledge_bases = Knowledges.get_knowledge_bases_by_user_id(user.id, "write")

@@ -15,7 +15,7 @@ from fastapi import APIRouter, Depends, HTTPException, Request, status

 from open_webui.utils.auth import get_admin_user, get_verified_user
 from open_webui.utils.access_control import has_access, has_permission
-from open_webui.config import ENABLE_ADMIN_WORKSPACE_CONTENT_ACCESS
+from open_webui.config import BYPASS_ADMIN_ACCESS_CONTROL

 router = APIRouter()

@@ -27,7 +27,7 @@ router = APIRouter()

 @router.get("/", response_model=list[ModelUserResponse])
 async def get_models(id: Optional[str] = None, user=Depends(get_verified_user)):
-    if user.role == "admin" and ENABLE_ADMIN_WORKSPACE_CONTENT_ACCESS:
+    if user.role == "admin" and BYPASS_ADMIN_ACCESS_CONTROL:
         return Models.get_models()
     else:
         return Models.get_models_by_user_id(user.id)
@@ -117,7 +117,7 @@ async def get_model_by_id(id: str, user=Depends(get_verified_user)):
     model = Models.get_model_by_id(id)
     if model:
         if (
-            user.role == "admin"
+            (user.role == "admin" and BYPASS_ADMIN_ACCESS_CONTROL)
             or model.user_id == user.id
             or has_access(user.id, "read", model.access_control)
         ):

@@ -47,7 +47,7 @@ from open_webui.utils.misc import (
 from open_webui.utils.payload import (
     apply_model_params_to_body_ollama,
     apply_model_params_to_body_openai,
-    apply_model_system_prompt_to_body,
+    apply_system_prompt_to_body,
 )
 from open_webui.utils.auth import get_admin_user, get_verified_user
 from open_webui.utils.access_control import has_access
@@ -415,15 +415,15 @@ async def get_all_models(request: Request, user: UserModel = None):
         try:
             loaded_models = await get_ollama_loaded_models(request, user=user)
             expires_map = {
-                m["name"]: m["expires_at"]
+                m["model"]: m["expires_at"]
                 for m in loaded_models["models"]
                 if "expires_at" in m
             }

             for m in models["models"]:
-                if m["name"] in expires_map:
+                if m["model"] in expires_map:
                     # Parse ISO8601 datetime with offset, get unix timestamp as int
-                    dt = datetime.fromisoformat(expires_map[m["name"]])
+                    dt = datetime.fromisoformat(expires_map[m["model"]])
                     m["expires_at"] = int(dt.timestamp())
         except Exception as e:
             log.debug(f"Failed to get loaded models: {e}")
@@ -1330,7 +1330,7 @@ async def generate_chat_completion(
     system = params.pop("system", None)

     payload = apply_model_params_to_body_ollama(params, payload)
-    payload = apply_model_system_prompt_to_body(system, payload, metadata, user)
+    payload = apply_system_prompt_to_body(system, payload, metadata, user)

     # Check if user has access to the model
     if not bypass_filter and user.role == "user":
@@ -1519,7 +1519,7 @@ async def generate_openai_chat_completion(
     system = params.pop("system", None)

     payload = apply_model_params_to_body_openai(params, payload)
-    payload = apply_model_system_prompt_to_body(system, payload, metadata, user)
+    payload = apply_system_prompt_to_body(system, payload, metadata, user)

     # Check if user has access to the model
     if user.role == "user":

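The `get_all_models` hunk switches the expiry lookup from the `name` field to the `model` field when matching loaded Ollama models. A minimal sketch of that join (the function name and record shapes are illustrative):

```python
from datetime import datetime


def annotate_expirations(models: list[dict], loaded: list[dict]) -> list[dict]:
    """Copy 'expires_at' from loaded-model entries onto the model list,
    keyed by the 'model' field, converting ISO8601 to a unix timestamp."""
    expires_map = {
        m["model"]: m["expires_at"] for m in loaded if "expires_at" in m
    }
    for m in models:
        if m["model"] in expires_map:
            dt = datetime.fromisoformat(expires_map[m["model"]])
            m["expires_at"] = int(dt.timestamp())
    return models
```

Keying on `model` matters because the `name` reported by the loaded-models endpoint is not guaranteed to match the entries in the model list.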
@@ -39,7 +39,7 @@ from open_webui.env import SRC_LOG_LEVELS

 from open_webui.utils.payload import (
     apply_model_params_to_body_openai,
-    apply_model_system_prompt_to_body,
+    apply_system_prompt_to_body,
 )
 from open_webui.utils.misc import (
     convert_logit_bias_input_to_json,
@@ -361,9 +361,18 @@ async def get_all_models_responses(request: Request, user: UserModel) -> list:
                 prefix_id = api_config.get("prefix_id", None)
                 tags = api_config.get("tags", [])

-                for model in (
+                model_list = (
                     response if isinstance(response, list) else response.get("data", [])
-                ):
+                )
+                if not isinstance(model_list, list):
+                    # Catch non-list responses
+                    model_list = []
+
+                for model in model_list:
+                    # Remove name key if its value is None #16689
+                    if "name" in model and model["name"] is None:
+                        del model["name"]
+
                     if prefix_id:
                         model["id"] = (
                             f"{prefix_id}.{model.get('id', model.get('name', ''))}"
@@ -693,6 +702,10 @@ def get_azure_allowed_params(api_version: str) -> set[str]:
     return allowed_params


+def is_openai_reasoning_model(model: str) -> bool:
+    return model.lower().startswith(("o1", "o3", "o4", "gpt-5"))
+
+
 def convert_to_azure_payload(url, payload: dict, api_version: str):
     model = payload.get("model", "")
@@ -700,7 +713,7 @@ def convert_to_azure_payload(url, payload: dict, api_version: str):
     allowed_params = get_azure_allowed_params(api_version)

     # Special handling for o-series models
-    if model.startswith("o") and model.endswith("-mini"):
+    if is_openai_reasoning_model(model):
         # Convert max_tokens to max_completion_tokens for o-series models
         if "max_tokens" in payload:
             payload["max_completion_tokens"] = payload["max_tokens"]
@@ -750,7 +763,7 @@ async def generate_chat_completion(
     system = params.pop("system", None)

     payload = apply_model_params_to_body_openai(params, payload)
-    payload = apply_model_system_prompt_to_body(system, payload, metadata, user)
+    payload = apply_system_prompt_to_body(system, payload, metadata, user)

     # Check if user has access to the model
     if not bypass_filter and user.role == "user":
@@ -806,10 +819,7 @@ async def generate_chat_completion(
             key = request.app.state.config.OPENAI_API_KEYS[idx]

             # Check if model is a reasoning model that needs special handling
-            is_reasoning_model = (
-                payload["model"].lower().startswith(("o1", "o3", "o4", "gpt-5"))
-            )
-            if is_reasoning_model:
+            if is_openai_reasoning_model(payload["model"]):
                 payload = openai_reasoning_model_handler(payload)
             elif "api.openai.com" not in url:
                 # Remove "max_completion_tokens" from the payload for backward compatibility

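The hunks above centralize the o1/o3/o4/gpt-5 check in `is_openai_reasoning_model` and reuse it for both the chat path and the Azure `max_tokens` conversion. A self-contained sketch of both pieces (the payload helper is a simplified stand-in, not the diff's exact code):

```python
def is_openai_reasoning_model(model: str) -> bool:
    # Case-insensitive prefix match for the reasoning-model families.
    return model.lower().startswith(("o1", "o3", "o4", "gpt-5"))


def convert_token_param(payload: dict) -> dict:
    """Reasoning models expect max_completion_tokens; mirror max_tokens
    into it when present, as the Azure conversion hunk does."""
    if is_openai_reasoning_model(payload.get("model", "")):
        if "max_tokens" in payload:
            payload["max_completion_tokens"] = payload["max_tokens"]
    return payload
```

The shared predicate also fixes the old Azure check, which only matched models that both started with `o` and ended with `-mini`.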
@@ -10,7 +10,7 @@ from open_webui.models.prompts import (
 from open_webui.constants import ERROR_MESSAGES
 from open_webui.utils.auth import get_admin_user, get_verified_user
 from open_webui.utils.access_control import has_access, has_permission
-from open_webui.config import ENABLE_ADMIN_WORKSPACE_CONTENT_ACCESS
+from open_webui.config import BYPASS_ADMIN_ACCESS_CONTROL

 router = APIRouter()

@@ -21,7 +21,7 @@ router = APIRouter()

 @router.get("/", response_model=list[PromptModel])
 async def get_prompts(user=Depends(get_verified_user)):
-    if user.role == "admin" and ENABLE_ADMIN_WORKSPACE_CONTENT_ACCESS:
+    if user.role == "admin" and BYPASS_ADMIN_ACCESS_CONTROL:
         prompts = Prompts.get_prompts()
     else:
         prompts = Prompts.get_prompts_by_user_id(user.id, "read")
@@ -31,7 +31,7 @@ async def get_prompts(user=Depends(get_verified_user)):

 @router.get("/list", response_model=list[PromptUserResponse])
 async def get_prompt_list(user=Depends(get_verified_user)):
-    if user.role == "admin" and ENABLE_ADMIN_WORKSPACE_CONTENT_ACCESS:
+    if user.role == "admin" and BYPASS_ADMIN_ACCESS_CONTROL:
         prompts = Prompts.get_prompts()
     else:
         prompts = Prompts.get_prompts_by_user_id(user.id, "write")

@@ -5,7 +5,6 @@ import os
 import shutil
 import asyncio

-
 import uuid
 from datetime import datetime
 from pathlib import Path
@@ -281,6 +280,18 @@ async def update_embedding_config(
     log.info(
         f"Updating embedding model: {request.app.state.config.RAG_EMBEDDING_MODEL} to {form_data.embedding_model}"
     )
+    if request.app.state.config.RAG_EMBEDDING_ENGINE == "":
+        # unloads current internal embedding model and clears VRAM cache
+        request.app.state.ef = None
+        request.app.state.EMBEDDING_FUNCTION = None
+        import gc
+
+        gc.collect()
+        if DEVICE_TYPE == "cuda":
+            import torch
+
+            if torch.cuda.is_available():
+                torch.cuda.empty_cache()
     try:
         request.app.state.config.RAG_EMBEDDING_ENGINE = form_data.embedding_engine
         request.app.state.config.RAG_EMBEDDING_MODEL = form_data.embedding_model
@@ -449,6 +460,7 @@ async def get_rag_config(request: Request, user=Depends(get_admin_user)):
         "WEB_SEARCH_TRUST_ENV": request.app.state.config.WEB_SEARCH_TRUST_ENV,
         "WEB_SEARCH_RESULT_COUNT": request.app.state.config.WEB_SEARCH_RESULT_COUNT,
         "WEB_SEARCH_CONCURRENT_REQUESTS": request.app.state.config.WEB_SEARCH_CONCURRENT_REQUESTS,
+        "WEB_LOADER_CONCURRENT_REQUESTS": request.app.state.config.WEB_LOADER_CONCURRENT_REQUESTS,
         "WEB_SEARCH_DOMAIN_FILTER_LIST": request.app.state.config.WEB_SEARCH_DOMAIN_FILTER_LIST,
         "BYPASS_WEB_SEARCH_EMBEDDING_AND_RETRIEVAL": request.app.state.config.BYPASS_WEB_SEARCH_EMBEDDING_AND_RETRIEVAL,
         "BYPASS_WEB_SEARCH_WEB_LOADER": request.app.state.config.BYPASS_WEB_SEARCH_WEB_LOADER,
@@ -504,6 +516,7 @@ class WebConfig(BaseModel):
     WEB_SEARCH_TRUST_ENV: Optional[bool] = None
     WEB_SEARCH_RESULT_COUNT: Optional[int] = None
     WEB_SEARCH_CONCURRENT_REQUESTS: Optional[int] = None
+    WEB_LOADER_CONCURRENT_REQUESTS: Optional[int] = None
     WEB_SEARCH_DOMAIN_FILTER_LIST: Optional[List[str]] = []
     BYPASS_WEB_SEARCH_EMBEDDING_AND_RETRIEVAL: Optional[bool] = None
     BYPASS_WEB_SEARCH_WEB_LOADER: Optional[bool] = None
@@ -651,9 +664,6 @@ async def update_rag_config(
         if form_data.ENABLE_RAG_HYBRID_SEARCH is not None
         else request.app.state.config.ENABLE_RAG_HYBRID_SEARCH
     )
-    # Free up memory if hybrid search is disabled
-    if not request.app.state.config.ENABLE_RAG_HYBRID_SEARCH:
-        request.app.state.rf = None

     request.app.state.config.TOP_K_RERANKER = (
         form_data.TOP_K_RERANKER
@@ -807,6 +817,18 @@ async def update_rag_config(
     )

     # Reranking settings
+    if request.app.state.config.RAG_RERANKING_ENGINE == "":
+        # Unloading the internal reranker and clear VRAM memory
+        request.app.state.rf = None
+        request.app.state.RERANKING_FUNCTION = None
+        import gc
+
+        gc.collect()
+        if DEVICE_TYPE == "cuda":
+            import torch
+
+            if torch.cuda.is_available():
+                torch.cuda.empty_cache()
     request.app.state.config.RAG_RERANKING_ENGINE = (
         form_data.RAG_RERANKING_ENGINE
         if form_data.RAG_RERANKING_ENGINE is not None
@@ -836,6 +858,10 @@ async def update_rag_config(
     )

     try:
+        if (
+            request.app.state.config.ENABLE_RAG_HYBRID_SEARCH
+            and not request.app.state.config.BYPASS_EMBEDDING_AND_RETRIEVAL
+        ):
             request.app.state.rf = get_rf(
                 request.app.state.config.RAG_RERANKING_ENGINE,
                 request.app.state.config.RAG_RERANKING_MODEL,
@@ -916,6 +942,9 @@ async def update_rag_config(
     request.app.state.config.WEB_SEARCH_CONCURRENT_REQUESTS = (
         form_data.web.WEB_SEARCH_CONCURRENT_REQUESTS
     )
+    request.app.state.config.WEB_LOADER_CONCURRENT_REQUESTS = (
+        form_data.web.WEB_LOADER_CONCURRENT_REQUESTS
+    )
     request.app.state.config.WEB_SEARCH_DOMAIN_FILTER_LIST = (
         form_data.web.WEB_SEARCH_DOMAIN_FILTER_LIST
     )
@@ -1067,6 +1096,7 @@ async def update_rag_config(
         "WEB_SEARCH_TRUST_ENV": request.app.state.config.WEB_SEARCH_TRUST_ENV,
         "WEB_SEARCH_RESULT_COUNT": request.app.state.config.WEB_SEARCH_RESULT_COUNT,
         "WEB_SEARCH_CONCURRENT_REQUESTS": request.app.state.config.WEB_SEARCH_CONCURRENT_REQUESTS,
+        "WEB_LOADER_CONCURRENT_REQUESTS": request.app.state.config.WEB_LOADER_CONCURRENT_REQUESTS,
         "WEB_SEARCH_DOMAIN_FILTER_LIST": request.app.state.config.WEB_SEARCH_DOMAIN_FILTER_LIST,
         "BYPASS_WEB_SEARCH_EMBEDDING_AND_RETRIEVAL": request.app.state.config.BYPASS_WEB_SEARCH_EMBEDDING_AND_RETRIEVAL,
         "BYPASS_WEB_SEARCH_WEB_LOADER": request.app.state.config.BYPASS_WEB_SEARCH_WEB_LOADER,
@@ -1470,7 +1500,7 @@ def process_file(
             log.debug(f"text_content: {text_content}")
             Files.update_file_data_by_id(
                 file.id,
-                {"content": text_content},
+                {"status": "completed", "content": text_content},
             )

             hash = calculate_sha256_string(text_content)
@@ -1624,7 +1654,7 @@ def process_web(
         loader = get_web_loader(
             form_data.url,
             verify_ssl=request.app.state.config.ENABLE_WEB_LOADER_SSL_VERIFICATION,
-            requests_per_second=request.app.state.config.WEB_SEARCH_CONCURRENT_REQUESTS,
+            requests_per_second=request.app.state.config.WEB_LOADER_CONCURRENT_REQUESTS,
         )
         docs = loader.load()
         content = " ".join([doc.page_content for doc in docs])
@@ -1798,6 +1828,7 @@ def search_web(request: Request, engine: str, query: str) -> list[SearchResult]:
             query,
             request.app.state.config.WEB_SEARCH_RESULT_COUNT,
             request.app.state.config.WEB_SEARCH_DOMAIN_FILTER_LIST,
+            concurrent_requests=request.app.state.config.WEB_SEARCH_CONCURRENT_REQUESTS,
         )
     elif engine == "tavily":
         if request.app.state.config.TAVILY_API_KEY:
@@ -1971,7 +2002,7 @@ async def process_web_search(
         loader = get_web_loader(
             urls,
             verify_ssl=request.app.state.config.ENABLE_WEB_LOADER_SSL_VERIFICATION,
-            requests_per_second=request.app.state.config.WEB_SEARCH_CONCURRENT_REQUESTS,
+            requests_per_second=request.app.state.config.WEB_LOADER_CONCURRENT_REQUESTS,
             trust_env=request.app.state.config.WEB_SEARCH_TRUST_ENV,
         )
         docs = await loader.aload()

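Both unload hunks above release an in-process model (`ef`/`rf`) the same way: drop the references, run the garbage collector, and clear the CUDA cache when torch is present. That pattern, factored into a standalone helper (the helper and its arguments are illustrative, not from the diff):

```python
import gc
from types import SimpleNamespace


def free_model_memory(state, attrs) -> None:
    """Drop model references held on an app-state object and reclaim
    memory; torch is optional, so the CUDA step is best-effort."""
    for attr in attrs:
        setattr(state, attr, None)
    gc.collect()
    try:
        import torch

        if torch.cuda.is_available():
            torch.cuda.empty_cache()
    except ImportError:
        pass
```

For example, the embedding path would call `free_model_memory(app.state, ["ef", "EMBEDDING_FUNCTION"])` and the reranking path `["rf", "RERANKING_FUNCTION"]`.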
@@ -198,14 +198,7 @@ async def generate_title(
     else:
         template = DEFAULT_TITLE_GENERATION_PROMPT_TEMPLATE

-    content = title_generation_template(
-        template,
-        form_data["messages"],
-        {
-            "name": user.name,
-            "location": user.info.get("location") if user.info else None,
-        },
-    )
+    content = title_generation_template(template, form_data["messages"], user)

     max_tokens = (
         models[task_model_id].get("info", {}).get("params", {}).get("max_tokens", 1000)

@@ -289,14 +282,7 @@ async def generate_follow_ups(
     else:
         template = DEFAULT_FOLLOW_UP_GENERATION_PROMPT_TEMPLATE

-    content = follow_up_generation_template(
-        template,
-        form_data["messages"],
-        {
-            "name": user.name,
-            "location": user.info.get("location") if user.info else None,
-        },
-    )
+    content = follow_up_generation_template(template, form_data["messages"], user)

     payload = {
         "model": task_model_id,

@@ -369,9 +355,7 @@ async def generate_chat_tags(
     else:
         template = DEFAULT_TAGS_GENERATION_PROMPT_TEMPLATE

-    content = tags_generation_template(
-        template, form_data["messages"], {"name": user.name}
-    )
+    content = tags_generation_template(template, form_data["messages"], user)

     payload = {
         "model": task_model_id,

@@ -437,13 +421,7 @@ async def generate_image_prompt(
     else:
         template = DEFAULT_IMAGE_PROMPT_GENERATION_PROMPT_TEMPLATE

-    content = image_prompt_generation_template(
-        template,
-        form_data["messages"],
-        user={
-            "name": user.name,
-        },
-    )
+    content = image_prompt_generation_template(template, form_data["messages"], user)

     payload = {
         "model": task_model_id,

@@ -524,9 +502,7 @@ async def generate_queries(
     else:
         template = DEFAULT_QUERY_GENERATION_PROMPT_TEMPLATE

-    content = query_generation_template(
-        template, form_data["messages"], {"name": user.name}
-    )
+    content = query_generation_template(template, form_data["messages"], user)

     payload = {
         "model": task_model_id,

@@ -611,9 +587,7 @@ async def generate_autocompletion(
     else:
         template = DEFAULT_AUTOCOMPLETE_GENERATION_PROMPT_TEMPLATE

-    content = autocomplete_generation_template(
-        template, prompt, messages, type, {"name": user.name}
-    )
+    content = autocomplete_generation_template(template, prompt, messages, type, user)

     payload = {
         "model": task_model_id,

@@ -675,14 +649,7 @@ async def generate_emoji(

     template = DEFAULT_EMOJI_GENERATION_PROMPT_TEMPLATE

-    content = emoji_generation_template(
-        template,
-        form_data["prompt"],
-        {
-            "name": user.name,
-            "location": user.info.get("location") if user.info else None,
-        },
-    )
+    content = emoji_generation_template(template, form_data["prompt"], user)

     payload = {
         "model": task_model_id,

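The hunks above replace hand-built `{"name": ..., "location": ...}` dicts with the `user` object itself, leaving each template helper to derive the fields it needs. A minimal sketch of that normalization step, assuming a Pydantic-style user model — `resolve_user_fields` and `FakeUser` are hypothetical names for illustration, not the project's actual API:

```python
from typing import Any, Optional


def resolve_user_fields(user: Optional[Any] = None) -> dict:
    """Normalize a user argument that may be a Pydantic-style model or a plain dict."""
    if user is None:
        return {}
    if hasattr(user, "model_dump"):  # Pydantic v2 models expose model_dump()
        user = user.model_dump()
    if isinstance(user, dict):
        info = user.get("info") or {}
        return {"name": user.get("name"), "location": info.get("location")}
    return {}


class FakeUser:
    def model_dump(self):
        return {"name": "Ada", "info": {"location": "London"}}


print(resolve_user_fields(FakeUser()))   # {'name': 'Ada', 'location': 'London'}
print(resolve_user_fields({"name": "Bob"}))  # {'name': 'Bob', 'location': None}
```

Passing the object through keeps the call sites one line long and lets new user fields become available to templates without touching every caller.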
@@ -19,10 +19,10 @@ from open_webui.utils.plugin import load_tool_module_by_id, replace_imports
 from open_webui.utils.tools import get_tool_specs
 from open_webui.utils.auth import get_admin_user, get_verified_user
 from open_webui.utils.access_control import has_access, has_permission
-from open_webui.utils.tools import get_tool_servers_data
+from open_webui.utils.tools import get_tool_servers

 from open_webui.env import SRC_LOG_LEVELS
-from open_webui.config import CACHE_DIR, ENABLE_ADMIN_WORKSPACE_CONTENT_ACCESS
+from open_webui.config import CACHE_DIR, BYPASS_ADMIN_ACCESS_CONTROL
 from open_webui.constants import ERROR_MESSAGES


@@ -32,6 +32,7 @@ log.setLevel(SRC_LOG_LEVELS["MAIN"])

 router = APIRouter()

+
 ############################
 # GetTools
 ############################

@@ -39,23 +40,14 @@ router = APIRouter()

 @router.get("/", response_model=list[ToolUserResponse])
 async def get_tools(request: Request, user=Depends(get_verified_user)):
-
-    if not request.app.state.TOOL_SERVERS:
-        # If the tool servers are not set, we need to set them
-        # This is done only once when the server starts
-        # This is done to avoid loading the tool servers every time
-
-        request.app.state.TOOL_SERVERS = await get_tool_servers_data(
-            request.app.state.config.TOOL_SERVER_CONNECTIONS
-        )

     tools = Tools.get_tools()
-    for server in request.app.state.TOOL_SERVERS:
+    for server in await get_tool_servers(request):
         tools.append(
             ToolUserResponse(
                 **{
-                    "id": f"server:{server['idx']}",
-                    "user_id": f"server:{server['idx']}",
+                    "id": f"server:{server.get('id')}",
+                    "user_id": f"server:{server.get('id')}",
                     "name": server.get("openapi", {})
                     .get("info", {})
                     .get("title", "Tool Server"),

@@ -65,7 +57,7 @@ async def get_tools(request: Request, user=Depends(get_verified_user)):
                     .get("description", ""),
                 },
                 "access_control": request.app.state.config.TOOL_SERVER_CONNECTIONS[
-                    server["idx"]
+                    server.get("idx", 0)
                 ]
                 .get("config", {})
                 .get("access_control", None),

@@ -75,7 +67,7 @@ async def get_tools(request: Request, user=Depends(get_verified_user)):
             )
         )

-    if user.role == "admin" and ENABLE_ADMIN_WORKSPACE_CONTENT_ACCESS:
+    if user.role == "admin" and BYPASS_ADMIN_ACCESS_CONTROL:
         # Admin can see all tools
         return tools
     else:

@@ -95,7 +87,7 @@ async def get_tools(request: Request, user=Depends(get_verified_user)):

 @router.get("/list", response_model=list[ToolUserResponse])
 async def get_tool_list(user=Depends(get_verified_user)):
-    if user.role == "admin" and ENABLE_ADMIN_WORKSPACE_CONTENT_ACCESS:
+    if user.role == "admin" and BYPASS_ADMIN_ACCESS_CONTROL:
         tools = Tools.get_tools()
     else:
         tools = Tools.get_tools_by_user_id(user.id, "write")

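The `name` field above is built by chaining `.get()` calls with defaults, so a tool server whose OpenAPI document lacks `info` or `title` still yields a usable label instead of raising `KeyError`. A standalone sketch of that pattern (`server_title` is a hypothetical helper name):

```python
def server_title(server: dict) -> str:
    # Each .get() supplies a default, so missing keys never raise KeyError;
    # an entirely empty dict falls through to the "Tool Server" fallback.
    return server.get("openapi", {}).get("info", {}).get("title", "Tool Server")


print(server_title({}))  # Tool Server
print(server_title({"openapi": {"info": {"title": "Weather API"}}}))  # Weather API
```

Note the chain only guards against *missing* keys; if a key is present but explicitly `None`, the subsequent `.get()` would fail, which is why upstream data should never store `None` for these fields.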
@@ -112,6 +112,9 @@ class S3StorageProvider(StorageProvider):
                 "use_accelerate_endpoint": S3_USE_ACCELERATE_ENDPOINT,
                 "addressing_style": S3_ADDRESSING_STYLE,
             },
+            # KIT change - see https://github.com/boto/boto3/issues/4400#issuecomment-2600742103
+            request_checksum_calculation="when_required",
+            response_checksum_validation="when_required",
         )

         # If access key and secret are provided, use them for authentication

@@ -60,8 +60,7 @@ def get_permissions(

     # Combine permissions from all user groups
     for group in user_groups:
-        group_permissions = group.permissions or {}
-        permissions = combine_permissions(permissions, group_permissions)
+        permissions = combine_permissions(permissions, group.permissions or {})

     # Ensure all fields from default_permissions are present and filled in
     permissions = fill_missing_permissions(permissions, default_permissions)

@@ -96,8 +95,7 @@ def has_permission(
     user_groups = Groups.get_groups_by_member_id(user_id)

     for group in user_groups:
-        group_permissions = group.permissions
-        if get_permission(group_permissions, permission_hierarchy):
+        if get_permission(group.permissions or {}, permission_hierarchy):
             return True

     # Check default permissions afterward if the group permissions don't allow it

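Both hunks fold the `or {}` guard into the call so a group whose `permissions` column is `NULL` contributes an empty dict rather than crashing the merge. The source does not show `combine_permissions` itself; a plausible sketch of such an OR-merge over nested permission dicts, under the assumption that a permission granted by any group stays granted:

```python
def combine_permissions(a: dict, b: dict) -> dict:
    """OR-merge two nested permission dicts (assumed semantics, not the
    project's actual implementation): a flag granted in either input is
    granted in the result; nested sections are merged recursively."""
    merged = dict(a)
    for key, value in b.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = combine_permissions(merged[key], value)
        else:
            merged[key] = bool(merged.get(key)) or bool(value)
    return merged


base = {"workspace": {"tools": False}}
group = {"workspace": {"tools": True, "models": False}}
print(combine_permissions(base, group))  # {'workspace': {'tools': True, 'models': False}}
```

With this shape, `combine_permissions(permissions, group.permissions or {})` is a no-op for a group with no stored permissions, which is exactly the behavior the `or {}` guard buys.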
@@ -3,6 +3,7 @@ import logging
 import sys
 import os
 import base64
+import textwrap

 import asyncio
 from aiocache import cached

@@ -73,6 +74,7 @@ from open_webui.utils.misc import (
     add_or_update_user_message,
     get_last_user_message,
     get_last_assistant_message,
+    get_system_message,
     prepend_to_first_user_message_content,
     convert_logit_bias_input_to_json,
 )

@@ -83,14 +85,14 @@ from open_webui.utils.filter import (
     process_filter_functions,
 )
 from open_webui.utils.code_interpreter import execute_code_jupyter
-from open_webui.utils.payload import apply_model_system_prompt_to_body
+from open_webui.utils.payload import apply_system_prompt_to_body

-from open_webui.tasks import create_task

 from open_webui.config import (
     CACHE_DIR,
     DEFAULT_TOOLS_FUNCTION_CALLING_PROMPT_TEMPLATE,
     DEFAULT_CODE_INTERPRETER_PROMPT,
+    CODE_INTERPRETER_BLOCKED_MODULES,
 )
 from open_webui.env import (
     SRC_LOG_LEVELS,

@@ -736,6 +738,12 @@ async def process_chat_payload(request, form_data, user, metadata, model):
     form_data = apply_params_to_form_data(form_data, model)
     log.debug(f"form_data: {form_data}")

+    system_message = get_system_message(form_data.get("messages", []))
+    if system_message:
+        form_data = apply_system_prompt_to_body(
+            system_message.get("content"), form_data, metadata, user
+        )
+
     event_emitter = get_event_emitter(metadata)
     event_call = get_event_call(metadata)

@@ -777,7 +785,7 @@ async def process_chat_payload(request, form_data, user, metadata, model):

     if folder and folder.data:
         if "system_prompt" in folder.data:
-            form_data = apply_model_system_prompt_to_body(
+            form_data = apply_system_prompt_to_body(
                 folder.data["system_prompt"], form_data, metadata, user
             )
         if "files" in folder.data:

@@ -908,7 +916,7 @@ async def process_chat_payload(request, form_data, user, metadata, model):
     tools_dict = {}

     if tool_ids:
-        tools_dict = get_tools(
+        tools_dict = await get_tools(
             request,
             tool_ids,
             user,

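The new block extracts any system message already present in the payload and re-applies it through `apply_system_prompt_to_body`, which normalizes it into the leading message of the list. A minimal sketch of the prepend-or-replace behavior that pattern relies on (this is an illustrative re-implementation, not the helper from `open_webui.utils.misc`):

```python
def add_or_update_system_message(content: str, messages: list) -> list:
    """If the conversation already starts with a system message, overwrite its
    content; otherwise prepend a new one. Modifies `messages` in place."""
    if messages and messages[0].get("role") == "system":
        messages[0]["content"] = content
    else:
        messages.insert(0, {"role": "system", "content": content})
    return messages


msgs = [{"role": "user", "content": "hi"}]
msgs = add_or_update_system_message("You are concise.", msgs)
print(msgs[0])  # {'role': 'system', 'content': 'You are concise.'}

# Calling again replaces rather than stacking a second system message:
msgs = add_or_update_system_message("You are verbose.", msgs)
print(len(msgs))  # 2
```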
@@ -989,25 +997,24 @@
         if prompt is None:
             raise Exception("No user message found")

-        if context_string == "":
-            if request.app.state.config.RELEVANCE_THRESHOLD == 0:
-                log.debug(
-                    f"With a 0 relevancy threshold for RAG, the context cannot be empty"
-                )
-        else:
+        if context_string != "":
             # Workaround for Ollama 2.0+ system prompt issue
             # TODO: replace with add_or_update_system_message
             if model.get("owned_by") == "ollama":
                 form_data["messages"] = prepend_to_first_user_message_content(
                     rag_template(
-                        request.app.state.config.RAG_TEMPLATE, context_string, prompt
+                        request.app.state.config.RAG_TEMPLATE,
+                        context_string,
+                        prompt,
                     ),
                     form_data["messages"],
                 )
             else:
                 form_data["messages"] = add_or_update_system_message(
                     rag_template(
-                        request.app.state.config.RAG_TEMPLATE, context_string, prompt
+                        request.app.state.config.RAG_TEMPLATE,
+                        context_string,
+                        prompt,
                     ),
                     form_data["messages"],
                 )

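Both branches render the retrieved context and the user's question into the configured RAG template before injecting the result into the message list. The source does not show `rag_template` itself; a minimal sketch assuming the template carries `[context]` and `[query]` placeholder tokens (an assumption — the real implementation may support additional tokens and escaping):

```python
def rag_template(template: str, context: str, query: str) -> str:
    # Illustrative only: substitute two assumed placeholder tokens.
    return template.replace("[context]", context).replace("[query]", query)


out = rag_template(
    "Use the following context to answer.\n[context]\nQuestion: [query]",
    "Paris is the capital of France.",
    "What is the capital of France?",
)
print(out)
```

The Ollama branch prepends this rendered block to the first user message (working around a system-prompt issue), while the default branch places it in the system message.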
@@ -1324,7 +1331,7 @@ async def process_chat_response(
         if not get_active_status_by_user_id(user.id):
             webhook_url = Users.get_user_webhook_url_by_id(user.id)
             if webhook_url:
-                post_webhook(
+                await post_webhook(
                     request.app.state.WEBUI_NAME,
                     webhook_url,
                     f"{title} - {request.app.state.config.WEBUI_URL}/c/{metadata['chat_id']}\n\n{content}",

@@ -1620,9 +1627,13 @@ async def process_chat_response(

             match = re.search(start_tag_pattern, content)
             if match:
+                try:
                     attr_content = (
                         match.group(1) if match.group(1) else ""
                     )  # Ensure it's not None
+                except:
+                    attr_content = ""
+
                 attributes = extract_attributes(
                     attr_content
                 )  # Extract attributes safely

@@ -1846,6 +1857,21 @@ async def process_chat_response(
                         or 1
                     ),
                 )
+                last_delta_data = None
+
+                async def flush_pending_delta_data(threshold: int = 0):
+                    nonlocal delta_count
+                    nonlocal last_delta_data
+
+                    if delta_count >= threshold and last_delta_data:
+                        await event_emitter(
+                            {
+                                "type": "chat:completion",
+                                "data": last_delta_data,
+                            }
+                        )
+                        delta_count = 0
+                        last_delta_data = None

                 async for line in response.body_iterator:
                     line = line.decode("utf-8") if isinstance(line, bytes) else line

@@ -1886,6 +1912,12 @@ async def process_chat_response(
                                     "selectedModelId": model_id,
                                 },
                             )
+                            await event_emitter(
+                                {
+                                    "type": "chat:completion",
+                                    "data": data,
+                                }
+                            )
                         else:
                             choices = data.get("choices", [])
                             if not choices:

@@ -2096,14 +2128,9 @@ async def process_chat_response(

                                 if delta:
                                     delta_count += 1
+                                    last_delta_data = data
                                     if delta_count >= delta_chunk_size:
-                                        await event_emitter(
-                                            {
-                                                "type": "chat:completion",
-                                                "data": data,
-                                            }
-                                        )
-                                        delta_count = 0
+                                        await flush_pending_delta_data(delta_chunk_size)
                             else:
                                 await event_emitter(
                                     {

@@ -2118,6 +2145,7 @@ async def process_chat_response(
                         else:
                             log.debug(f"Error: {e}")
                         continue
+                await flush_pending_delta_data()

                 if content_blocks:
                     # Clean up the last text block

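The hunks above replace inline emission of every Nth delta with a `flush_pending_delta_data` closure: the latest delta is buffered, emitted once `threshold` deltas have accumulated, and a final zero-threshold call after the stream drains guarantees the tail is never lost. The core pattern can be exercised in isolation (the surrounding names are kept; the driving loop is a stand-in for the real SSE iterator):

```python
import asyncio


async def main():
    emitted = []
    delta_count = 0
    last_delta_data = None

    async def event_emitter(event):
        emitted.append(event)

    async def flush_pending_delta_data(threshold: int = 0):
        # Emit the most recent buffered delta once `threshold` deltas accumulated.
        nonlocal delta_count, last_delta_data
        if delta_count >= threshold and last_delta_data:
            await event_emitter({"type": "chat:completion", "data": last_delta_data})
            delta_count = 0
            last_delta_data = None

    for i in range(10):  # stand-in for the streaming response iterator
        delta_count += 1
        last_delta_data = {"chunk": i}
        await flush_pending_delta_data(4)  # flush every 4th delta
    await flush_pending_delta_data()  # threshold 0: flush any remainder

    return emitted


emitted = asyncio.run(main())
print(len(emitted))  # 3 events instead of 10
```

Ten deltas collapse into three `chat:completion` events (after chunks 3, 7, and the final flush of chunk 9), which is the frontend-traffic reduction the refactor is after.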
@@ -2355,6 +2383,27 @@ async def process_chat_response(
                     try:
                         if content_blocks[-1]["attributes"].get("type") == "code":
                             code = content_blocks[-1]["content"]
+                            if CODE_INTERPRETER_BLOCKED_MODULES:
+                                blocking_code = textwrap.dedent(
+                                    f"""
+                                    import builtins
+
+                                    BLOCKED_MODULES = {CODE_INTERPRETER_BLOCKED_MODULES}
+
+                                    _real_import = builtins.__import__
+                                    def restricted_import(name, globals=None, locals=None, fromlist=(), level=0):
+                                        if name.split('.')[0] in BLOCKED_MODULES:
+                                            importer_name = globals.get('__name__') if globals else None
+                                            if importer_name == '__main__':
+                                                raise ImportError(
+                                                    f"Direct import of module {{name}} is restricted."
+                                                )
+                                        return _real_import(name, globals, locals, fromlist, level)
+
+                                    builtins.__import__ = restricted_import
+                                    """
+                                )
+                                code = blocking_code + "\n" + code

                             if (
                                 request.app.state.config.CODE_INTERPRETER_ENGINE

@@ -2520,7 +2569,7 @@ async def process_chat_response(
         if not get_active_status_by_user_id(user.id):
             webhook_url = Users.get_user_webhook_url_by_id(user.id)
             if webhook_url:
-                post_webhook(
+                await post_webhook(
                     request.app.state.WEBUI_NAME,
                     webhook_url,
                     f"{title} - {request.app.state.config.WEBUI_URL}/c/{metadata['chat_id']}\n\n{content}",

@@ -2557,13 +2606,7 @@ async def process_chat_response(
         if response.background is not None:
             await response.background()

-        # background_tasks.add_task(response_handler, response, events)
-        task_id, _ = await create_task(
-            request.app.state.redis,
-            response_handler(response, events),
-            id=metadata["chat_id"],
-        )
-        return {"status": True, "task_id": task_id}
+        return await response_handler(response, events)

     else:
         # Fallback to the original response

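The prelude injected above wraps `builtins.__import__` so that code running as `__main__` (the user's interpreter snippet) cannot import blocked modules, while library-internal imports pass through untouched. The guard can be demonstrated without actually installing it over `builtins.__import__` by calling it directly with simulated caller globals:

```python
import builtins

BLOCKED_MODULES = ["socket"]  # example block list; the real one comes from config

_real_import = builtins.__import__


def restricted_import(name, globals=None, locals=None, fromlist=(), level=0):
    # Block top-level imports of listed modules, but only when the importer
    # is user code running as __main__.
    if name.split(".")[0] in BLOCKED_MODULES:
        importer_name = globals.get("__name__") if globals else None
        if importer_name == "__main__":
            raise ImportError(f"Direct import of module {name} is restricted.")
    return _real_import(name, globals, locals, fromlist, level)


# Simulate an import issued by user code (__name__ == '__main__'):
try:
    restricted_import("socket", globals={"__name__": "__main__"})
    blocked = False
except ImportError:
    blocked = True
print(blocked)  # True

# The same import from library code is still allowed:
mod = restricted_import("socket", globals={"__name__": "some_library"})
print(mod.__name__)  # socket
```

Note this is a soft barrier: it raises on *direct* imports from user code, not on every possible path to the module, which matches the `importer_name == '__main__'` check in the hunk.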
@@ -1,5 +1,6 @@
 import hashlib
 import re
+import threading
 import time
 import uuid
 import logging

@@ -478,3 +479,46 @@ def convert_logit_bias_input_to_json(user_input):
         bias = 100 if bias > 100 else -100 if bias < -100 else bias
         logit_bias_json[token] = bias
     return json.dumps(logit_bias_json)
+
+
+def freeze(value):
+    """
+    Freeze a value to make it hashable.
+    """
+    if isinstance(value, dict):
+        return frozenset((k, freeze(v)) for k, v in value.items())
+    elif isinstance(value, list):
+        return tuple(freeze(v) for v in value)
+    return value
+
+
+def throttle(interval: float = 10.0):
+    """
+    Decorator to prevent a function from being called more than once within a specified duration.
+    If the function is called again within the duration, it returns None. To avoid returning
+    different types, the return type of the function should be Optional[T].
+
+    :param interval: Duration in seconds to wait before allowing the function to be called again.
+    """
+
+    def decorator(func):
+        last_calls = {}
+        lock = threading.Lock()
+
+        def wrapper(*args, **kwargs):
+            if interval is None:
+                return func(*args, **kwargs)
+
+            key = (args, freeze(kwargs))
+            now = time.time()
+            if now - last_calls.get(key, 0) < interval:
+                return None
+            with lock:
+                if now - last_calls.get(key, 0) < interval:
+                    return None
+                last_calls[key] = now
+            return func(*args, **kwargs)
+
+        return wrapper

+    return decorator

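The new `throttle` decorator is the deduplication mechanism behind the changelog's `DATABASE_DEDUPLICATE_INTERVAL` note: repeated calls with the same arguments inside the interval return `None` instead of hitting the database, and `freeze` makes dict/list arguments hashable so they can key the `last_calls` table. A usage sketch, with the definitions reproduced so it runs standalone (`touch_last_active` is a hypothetical caller, standing in for the `user.last_active_at` update):

```python
import threading
import time


def freeze(value):
    if isinstance(value, dict):
        return frozenset((k, freeze(v)) for k, v in value.items())
    elif isinstance(value, list):
        return tuple(freeze(v) for v in value)
    return value


def throttle(interval: float = 10.0):
    def decorator(func):
        last_calls = {}
        lock = threading.Lock()

        def wrapper(*args, **kwargs):
            if interval is None:
                return func(*args, **kwargs)
            key = (args, freeze(kwargs))  # one throttle window per argument combination
            now = time.time()
            if now - last_calls.get(key, 0) < interval:
                return None
            with lock:  # double-checked under the lock to stay race-free
                if now - last_calls.get(key, 0) < interval:
                    return None
                last_calls[key] = now
            return func(*args, **kwargs)

        return wrapper

    return decorator


@throttle(interval=60.0)
def touch_last_active(user_id: str):
    return f"updated {user_id}"


print(touch_last_active("u1"))  # updated u1
print(touch_last_active("u1"))  # None -- suppressed within the interval
print(touch_last_active("u2"))  # updated u2 -- different args, different key
```

Because suppressed calls return `None`, the docstring's advice applies: throttled functions should have an `Optional[T]` return type so callers can tell a suppressed call from a real result.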
@@ -115,7 +115,13 @@ class OAuthManager:
                 nested_claims = oauth_claim.split(".")
                 for nested_claim in nested_claims:
                     claim_data = claim_data.get(nested_claim, {})
-                oauth_roles = claim_data if isinstance(claim_data, list) else []
+
+                oauth_roles = []
+
+                if isinstance(claim_data, list):
+                    oauth_roles = claim_data
+                if isinstance(claim_data, str) or isinstance(claim_data, int):
+                    oauth_roles = [str(claim_data)]

             log.debug(f"Oauth Roles claim: {oauth_claim}")
             log.debug(f"User roles from oauth: {oauth_roles}")

@@ -355,7 +361,11 @@ class OAuthManager:
             log.warning(f"OAuth callback error: {e}")
             raise HTTPException(400, detail=ERROR_MESSAGES.INVALID_CRED)
         user_data: UserInfo = token.get("userinfo")
-        if not user_data or auth_manager_config.OAUTH_EMAIL_CLAIM not in user_data:
+        if (
+            (not user_data)
+            or (auth_manager_config.OAUTH_EMAIL_CLAIM not in user_data)
+            or (auth_manager_config.OAUTH_USERNAME_CLAIM not in user_data)
+        ):
             user_data: UserInfo = await client.userinfo(token=token)
             if not user_data:
                 log.warning(f"OAuth callback failed, user data is missing: {token}")

@@ -498,7 +508,7 @@ class OAuthManager:
             )

         if auth_manager_config.WEBHOOK_URL:
-            post_webhook(
+            await post_webhook(
                 WEBUI_NAME,
                 auth_manager_config.WEBHOOK_URL,
                 WEBHOOK_MESSAGES.USER_SIGNUP(user.name),

@@ -525,7 +535,15 @@ class OAuthManager:
             default_permissions=request.app.state.config.USER_PERMISSIONS,
         )

+        redirect_base_url = str(request.app.state.config.WEBUI_URL or request.base_url)
+        if redirect_base_url.endswith("/"):
+            redirect_base_url = redirect_base_url[:-1]
+        redirect_url = f"{redirect_base_url}/auth"
+
+        response = RedirectResponse(url=redirect_url, headers=response.headers)
+
         # Set the cookie token
+        # Redirect back to the frontend with the JWT token
         response.set_cookie(
             key="token",
             value=jwt_token,

@@ -543,11 +561,4 @@ class OAuthManager:
             samesite=WEBUI_AUTH_COOKIE_SAME_SITE,
             secure=WEBUI_AUTH_COOKIE_SECURE,
         )
-        # Redirect back to the frontend with the JWT token
-
-        redirect_base_url = str(request.app.state.config.WEBUI_URL or request.base_url)
-        if redirect_base_url.endswith("/"):
-            redirect_base_url = redirect_base_url[:-1]
-        redirect_url = f"{redirect_base_url}/auth"
-
-        return RedirectResponse(url=redirect_url, headers=response.headers)
+        return response

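The OAuth hunks move the redirect-URL construction *before* the cookie is set, so the `token` cookie lands on the `RedirectResponse` that is actually returned. The URL normalization itself is small enough to isolate (`build_auth_redirect` is a hypothetical helper name wrapping the same logic):

```python
from typing import Optional


def build_auth_redirect(webui_url: Optional[str], request_base_url: str) -> str:
    # Prefer the configured WEBUI_URL; fall back to the request's base URL.
    # Strip a trailing slash so the result is ".../auth", never ".../auth" with
    # a doubled slash ("...//auth").
    base = str(webui_url or request_base_url)
    if base.endswith("/"):
        base = base[:-1]
    return f"{base}/auth"


print(build_auth_redirect(None, "http://localhost:8080/"))   # http://localhost:8080/auth
print(build_auth_redirect("https://chat.example.com", "http://localhost:8080/"))
```

Starlette's `request.base_url` always ends with `/`, which is why the trailing-slash strip matters on the fallback path.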
@@ -9,7 +9,7 @@ import json


 # inplace function: form_data is modified
-def apply_model_system_prompt_to_body(
+def apply_system_prompt_to_body(
     system: Optional[str], form_data: dict, metadata: Optional[dict] = None, user=None
 ) -> dict:
     if not system:

@@ -22,15 +22,7 @@ def apply_model_system_prompt_to_body(
     system = prompt_variables_template(system, variables)

     # Legacy (API Usage)
-    if user:
-        template_params = {
-            "user_name": user.name,
-            "user_location": user.info.get("location") if user.info else None,
-        }
-    else:
-        template_params = {}
-
-    system = prompt_template(system, **template_params)
+    system = prompt_template(system, user)

     form_data["messages"] = add_or_update_system_message(
         system, form_data.get("messages", [])

@@ -260,7 +260,7 @@ def install_tool_and_function_dependencies():
             all_dependencies += f"{dependencies}, "
     for tool in tool_list:
         # Only install requirements for admin tools
-        if tool.user.role == "admin":
+        if tool.user and tool.user.role == "admin":
             frontmatter = extract_frontmatter(replace_imports(tool.content))
             if dependencies := frontmatter.get("requirements"):
                 all_dependencies += f"{dependencies}, "

@@ -2,7 +2,7 @@ import logging
 import math
 import re
 from datetime import datetime
-from typing import Optional
+from typing import Optional, Any
 import uuid


@@ -38,9 +38,46 @@ def prompt_variables_template(template: str, variables: dict[str, str]) -> str:
     return template


-def prompt_template(
-    template: str, user_name: Optional[str] = None, user_location: Optional[str] = None
-) -> str:
+def prompt_template(template: str, user: Optional[Any] = None) -> str:
+    USER_VARIABLES = {}
+
+    if user:
+        if hasattr(user, "model_dump"):
+            user = user.model_dump()
+
+        if isinstance(user, dict):
+            user_info = user.get("info", {}) or {}
+            birth_date = user.get("date_of_birth")
+            age = None
+
+            if birth_date:
+                try:
+                    # If birth_date is str, convert to datetime
+                    if isinstance(birth_date, str):
+                        birth_date = datetime.strptime(birth_date, "%Y-%m-%d")
+
+                    today = datetime.now()
+                    age = (
+                        today.year
+                        - birth_date.year
+                        - (
+                            (today.month, today.day)
+                            < (birth_date.month, birth_date.day)
+                        )
+                    )
+                except Exception as e:
+                    pass
+
+            USER_VARIABLES = {
+                "name": str(user.get("name")),
+                "location": str(user_info.get("location")),
+                "bio": str(user.get("bio")),
+                "gender": str(user.get("gender")),
+                "birth_date": str(birth_date),
+                "age": str(age),
+            }
+
     # Get the current date
     current_date = datetime.now()

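The age computation in the hunk above uses a standard idiom: subtract the birth year from the current year, then subtract one more if the month/day tuple of today sorts before the month/day tuple of the birthday (the birthday hasn't happened yet this year). Extracted as a testable function (`compute_age` and its `today` parameter are illustrative additions; the hunk inlines this logic):

```python
from datetime import datetime


def compute_age(birth_date, today=None):
    # Same arithmetic as the hunk: tuple comparison yields True (== 1) when
    # the birthday has not yet occurred this year, so we subtract one.
    if isinstance(birth_date, str):
        birth_date = datetime.strptime(birth_date, "%Y-%m-%d")
    today = today or datetime.now()
    return (
        today.year
        - birth_date.year
        - ((today.month, today.day) < (birth_date.month, birth_date.day))
    )


print(compute_age("1990-06-15", today=datetime(2025, 6, 14)))  # 34
print(compute_age("1990-06-15", today=datetime(2025, 6, 15)))  # 35
```

Relying on `bool` being a subtype of `int` keeps the correction branch-free; the `try/except` around the inlined version additionally shields the template renderer from malformed date strings.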
@@ -56,19 +93,20 @@ def prompt_template(
     )
     template = template.replace("{{CURRENT_WEEKDAY}}", formatted_weekday)

-    if user_name:
-        # Replace {{USER_NAME}} in the template with the user's name
-        template = template.replace("{{USER_NAME}}", user_name)
-    else:
-        # Replace {{USER_NAME}} in the template with "Unknown"
-        template = template.replace("{{USER_NAME}}", "Unknown")
-
-    if user_location:
-        # Replace {{USER_LOCATION}} in the template with the current location
-        template = template.replace("{{USER_LOCATION}}", user_location)
-    else:
-        # Replace {{USER_LOCATION}} in the template with "Unknown"
-        template = template.replace("{{USER_LOCATION}}", "Unknown")
+    template = template.replace("{{USER_NAME}}", USER_VARIABLES.get("name", "Unknown"))
+    template = template.replace("{{USER_BIO}}", USER_VARIABLES.get("bio", "Unknown"))
+    template = template.replace(
+        "{{USER_GENDER}}", USER_VARIABLES.get("gender", "Unknown")
+    )
+    template = template.replace(
+        "{{USER_BIRTH_DATE}}", USER_VARIABLES.get("birth_date", "Unknown")
+    )
+    template = template.replace(
+        "{{USER_AGE}}", str(USER_VARIABLES.get("age", "Unknown"))
+    )
+    template = template.replace(
+        "{{USER_LOCATION}}", USER_VARIABLES.get("location", "Unknown")
+    )

     return template

@@ -189,90 +227,56 @@ def rag_template(template: str, context: str, query: str):


 def title_generation_template(
-    template: str, messages: list[dict], user: Optional[dict] = None
+    template: str, messages: list[dict], user: Optional[Any] = None
 ) -> str:

     prompt = get_last_user_message(messages)
     template = replace_prompt_variable(template, prompt)
     template = replace_messages_variable(template, messages)

-    template = prompt_template(
-        template,
-        **(
-            {"user_name": user.get("name"), "user_location": user.get("location")}
-            if user
-            else {}
-        ),
-    )
+    template = prompt_template(template, user)

     return template


 def follow_up_generation_template(
-    template: str, messages: list[dict], user: Optional[dict] = None
+    template: str, messages: list[dict], user: Optional[Any] = None
 ) -> str:
     prompt = get_last_user_message(messages)
     template = replace_prompt_variable(template, prompt)
     template = replace_messages_variable(template, messages)

-    template = prompt_template(
-        template,
-        **(
-            {"user_name": user.get("name"), "user_location": user.get("location")}
-            if user
-            else {}
-        ),
-    )
+    template = prompt_template(template, user)
     return template


 def tags_generation_template(
-    template: str, messages: list[dict], user: Optional[dict] = None
+    template: str, messages: list[dict], user: Optional[Any] = None
 ) -> str:
     prompt = get_last_user_message(messages)
     template = replace_prompt_variable(template, prompt)
     template = replace_messages_variable(template, messages)

     template = prompt_template(
|
template = prompt_template(template, user)
|
||||||
template,
|
|
||||||
**(
|
|
||||||
{"user_name": user.get("name"), "user_location": user.get("location")}
|
|
||||||
if user
|
|
||||||
else {}
|
|
||||||
),
|
|
||||||
)
|
|
||||||
return template
|
return template
|
||||||
|
|
||||||
|
|
||||||
def image_prompt_generation_template(
|
def image_prompt_generation_template(
|
||||||
template: str, messages: list[dict], user: Optional[dict] = None
|
template: str, messages: list[dict], user: Optional[Any] = None
|
||||||
) -> str:
|
) -> str:
|
||||||
prompt = get_last_user_message(messages)
|
prompt = get_last_user_message(messages)
|
||||||
template = replace_prompt_variable(template, prompt)
|
template = replace_prompt_variable(template, prompt)
|
||||||
template = replace_messages_variable(template, messages)
|
template = replace_messages_variable(template, messages)
|
||||||
|
|
||||||
template = prompt_template(
|
template = prompt_template(template, user)
|
||||||
template,
|
|
||||||
**(
|
|
||||||
{"user_name": user.get("name"), "user_location": user.get("location")}
|
|
||||||
if user
|
|
||||||
else {}
|
|
||||||
),
|
|
||||||
)
|
|
||||||
return template
|
return template
|
||||||
|
|
||||||
|
|
||||||
def emoji_generation_template(
|
def emoji_generation_template(
|
||||||
template: str, prompt: str, user: Optional[dict] = None
|
template: str, prompt: str, user: Optional[Any] = None
|
||||||
) -> str:
|
) -> str:
|
||||||
template = replace_prompt_variable(template, prompt)
|
template = replace_prompt_variable(template, prompt)
|
||||||
template = prompt_template(
|
template = prompt_template(template, user)
|
||||||
template,
|
|
||||||
**(
|
|
||||||
{"user_name": user.get("name"), "user_location": user.get("location")}
|
|
||||||
if user
|
|
||||||
else {}
|
|
||||||
),
|
|
||||||
)
|
|
||||||
|
|
||||||
return template
|
return template
|
||||||
|
|
||||||
|
@ -282,38 +286,24 @@ def autocomplete_generation_template(
|
||||||
prompt: str,
|
prompt: str,
|
||||||
messages: Optional[list[dict]] = None,
|
messages: Optional[list[dict]] = None,
|
||||||
type: Optional[str] = None,
|
type: Optional[str] = None,
|
||||||
user: Optional[dict] = None,
|
user: Optional[Any] = None,
|
||||||
) -> str:
|
) -> str:
|
||||||
template = template.replace("{{TYPE}}", type if type else "")
|
template = template.replace("{{TYPE}}", type if type else "")
|
||||||
template = replace_prompt_variable(template, prompt)
|
template = replace_prompt_variable(template, prompt)
|
||||||
template = replace_messages_variable(template, messages)
|
template = replace_messages_variable(template, messages)
|
||||||
|
|
||||||
template = prompt_template(
|
template = prompt_template(template, user)
|
||||||
template,
|
|
||||||
**(
|
|
||||||
{"user_name": user.get("name"), "user_location": user.get("location")}
|
|
||||||
if user
|
|
||||||
else {}
|
|
||||||
),
|
|
||||||
)
|
|
||||||
return template
|
return template
|
||||||
|
|
||||||
|
|
||||||
def query_generation_template(
|
def query_generation_template(
|
||||||
template: str, messages: list[dict], user: Optional[dict] = None
|
template: str, messages: list[dict], user: Optional[Any] = None
|
||||||
) -> str:
|
) -> str:
|
||||||
prompt = get_last_user_message(messages)
|
prompt = get_last_user_message(messages)
|
||||||
template = replace_prompt_variable(template, prompt)
|
template = replace_prompt_variable(template, prompt)
|
||||||
template = replace_messages_variable(template, messages)
|
template = replace_messages_variable(template, messages)
|
||||||
|
|
||||||
template = prompt_template(
|
template = prompt_template(template, user)
|
||||||
template,
|
|
||||||
**(
|
|
||||||
{"user_name": user.get("name"), "user_location": user.get("location")}
|
|
||||||
if user
|
|
||||||
else {}
|
|
||||||
),
|
|
||||||
)
|
|
||||||
return template
|
return template
|
||||||
|
|
||||||
|
|
||||||
|
|
|
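The hunks above replace per-call `user_name`/`user_location` keyword plumbing with a single `{{USER_*}}` substitution pass. A minimal standalone sketch of that substitution pattern, assuming a plain dict of user variables (the helper name `fill_user_variables` is illustrative, not the project's API):

```python
def fill_user_variables(template: str, user_vars: dict) -> str:
    # Substitute each {{USER_*}} placeholder, falling back to "Unknown"
    # when the user record does not carry that field.
    for key in ("name", "bio", "gender", "birth_date", "age", "location"):
        template = template.replace(
            "{{USER_" + key.upper() + "}}", str(user_vars.get(key, "Unknown"))
        )
    return template


print(fill_user_variables("Hi {{USER_NAME}} from {{USER_LOCATION}}", {"name": "Ada"}))
# → Hi Ada from Unknown
```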
@@ -17,6 +17,7 @@ from open_webui.env import (
     OTEL_SERVICE_NAME,
     OTEL_EXPORTER_OTLP_ENDPOINT,
     OTEL_EXPORTER_OTLP_INSECURE,
+    ENABLE_OTEL_TRACES,
     ENABLE_OTEL_METRICS,
     OTEL_BASIC_AUTH_USERNAME,
     OTEL_BASIC_AUTH_PASSWORD,
@@ -27,6 +28,7 @@ from open_webui.env import (
 def setup(app: FastAPI, db_engine: Engine):
     # set up trace
     resource = Resource.create(attributes={SERVICE_NAME: OTEL_SERVICE_NAME})
-    trace.set_tracer_provider(TracerProvider(resource=resource))
+    if ENABLE_OTEL_TRACES:
+        trace.set_tracer_provider(TracerProvider(resource=resource))
 
         # Add basic auth header only if both username and password are not empty
@@ -5,6 +5,7 @@ import inspect
 import aiohttp
 import asyncio
 import yaml
+import json
 
 from pydantic import BaseModel
 from pydantic.fields import FieldInfo
@@ -38,6 +39,7 @@ from open_webui.models.users import UserModel
 from open_webui.utils.plugin import load_tool_module_by_id
 from open_webui.env import (
     SRC_LOG_LEVELS,
+    AIOHTTP_CLIENT_TIMEOUT,
     AIOHTTP_CLIENT_TIMEOUT_TOOL_SERVER_DATA,
     AIOHTTP_CLIENT_SESSION_TOOL_SERVER_SSL,
 )
@@ -55,19 +57,38 @@ def get_async_tool_function_and_apply_extra_params(
     extra_params = {k: v for k, v in extra_params.items() if k in sig.parameters}
     partial_func = partial(function, **extra_params)
 
+    # Remove the 'frozen' keyword arguments from the signature
+    # python-genai uses the signature to infer the tool properties for native function calling
+    parameters = []
+    for name, parameter in sig.parameters.items():
+        # Exclude keyword arguments that are frozen
+        if name in extra_params:
+            continue
+        # Keep remaining parameters
+        parameters.append(parameter)
+
+    new_sig = inspect.Signature(
+        parameters=parameters, return_annotation=sig.return_annotation
+    )
+
     if inspect.iscoroutinefunction(function):
-        update_wrapper(partial_func, function)
-        return partial_func
+        # wrap the functools.partial as python-genai has trouble with it
+        # https://github.com/googleapis/python-genai/issues/907
+        async def new_function(*args, **kwargs):
+            return await partial_func(*args, **kwargs)
+
     else:
-        # Make it a coroutine function
+        # Make it a coroutine function when it is not already
         async def new_function(*args, **kwargs):
             return partial_func(*args, **kwargs)
 
     update_wrapper(new_function, function)
+    new_function.__signature__ = new_sig
+
     return new_function
 
 
-def get_tools(
+async def get_tools(
     request: Request, tool_ids: list[str], user: UserModel, extra_params: dict
 ) -> dict[str, dict]:
     tools_dict = {}
@@ -76,18 +97,24 @@ def get_tools(
         tool = Tools.get_tool_by_id(tool_id)
         if tool is None:
             if tool_id.startswith("server:"):
-                server_idx = int(tool_id.split(":")[1])
-                tool_server_connection = (
-                    request.app.state.config.TOOL_SERVER_CONNECTIONS[server_idx]
-                )
+                server_id = tool_id.split(":")[1]
                 tool_server_data = None
-                for server in request.app.state.TOOL_SERVERS:
-                    if server["idx"] == server_idx:
+                for server in await get_tool_servers(request):
+                    if server["id"] == server_id:
                         tool_server_data = server
                         break
-                assert tool_server_data is not None
-                specs = tool_server_data.get("specs", [])
 
+                if tool_server_data is None:
+                    log.warning(f"Tool server data not found for {server_id}")
+                    continue
+
+                tool_server_idx = tool_server_data.get("idx", 0)
+                tool_server_connection = (
+                    request.app.state.config.TOOL_SERVER_CONNECTIONS[tool_server_idx]
+                )
+
+                specs = tool_server_data.get("specs", [])
                 for spec in specs:
                     function_name = spec["name"]
 
@@ -126,13 +153,14 @@ def get_tools(
                         "spec": spec,
                     }
 
-                    # TODO: if collision, prepend toolkit name
-                    if function_name in tools_dict:
+                    # Handle function name collisions
+                    while function_name in tools_dict:
                         log.warning(
                             f"Tool {function_name} already exists in another tools!"
                         )
-                        log.warning(f"Discarding {tool_id}.{function_name}")
-                    else:
-                        tools_dict[function_name] = tool_dict
+                        # Prepend server ID to function name
+                        function_name = f"{server_id}_{function_name}"
+
+                    tools_dict[function_name] = tool_dict
             else:
                 continue
@@ -193,13 +221,14 @@ def get_tools(
                     },
                 }
 
-                # TODO: if collision, prepend toolkit name
-                if function_name in tools_dict:
+                # Handle function name collisions
+                while function_name in tools_dict:
                     log.warning(
                         f"Tool {function_name} already exists in another tools!"
                     )
-                    log.warning(f"Discarding {tool_id}.{function_name}")
-                else:
-                    tools_dict[function_name] = tool_dict
+                    # Prepend tool ID to function name
+                    function_name = f"{tool_id}_{function_name}"
+
+                tools_dict[function_name] = tool_dict
 
     return tools_dict
@@ -283,15 +312,15 @@ def convert_function_to_pydantic_model(func: Callable) -> type[BaseModel]:
 
     field_defs = {}
     for name, param in parameters.items():
 
         type_hint = type_hints.get(name, Any)
         default_value = param.default if param.default is not param.empty else ...
 
         param_description = function_param_descriptions.get(name, None)
 
         if param_description:
-            field_defs[name] = type_hint, Field(
-                default_value, description=param_description
+            field_defs[name] = (
+                type_hint,
+                Field(default_value, description=param_description),
             )
         else:
             field_defs[name] = type_hint, default_value
@@ -442,6 +471,34 @@ def convert_openapi_to_tool_payload(openapi_spec):
     return tool_payload
 
 
+async def set_tool_servers(request: Request):
+    request.app.state.TOOL_SERVERS = await get_tool_servers_data(
+        request.app.state.config.TOOL_SERVER_CONNECTIONS
+    )
+
+    if request.app.state.redis is not None:
+        await request.app.state.redis.set(
+            "tool_servers", json.dumps(request.app.state.TOOL_SERVERS)
+        )
+
+    return request.app.state.TOOL_SERVERS
+
+
+async def get_tool_servers(request: Request):
+    tool_servers = []
+    if request.app.state.redis is not None:
+        try:
+            tool_servers = json.loads(await request.app.state.redis.get("tool_servers"))
+        except Exception as e:
+            log.error(f"Error fetching tool_servers from Redis: {e}")
+
+    if not tool_servers:
+        tool_servers = await set_tool_servers(request)
+
+    request.app.state.TOOL_SERVERS = tool_servers
+    return request.app.state.TOOL_SERVERS
+
+
 async def get_tool_server_data(token: str, url: str) -> Dict[str, Any]:
     headers = {
         "Accept": "application/json",
@@ -505,11 +562,16 @@ async def get_tool_servers_data(
             token = server.get("key", "")
         elif auth_type == "session":
             token = session_token
-        server_entries.append((idx, server, full_url, info, token))
+
+        id = info.get("id")
+        if not id:
+            id = str(idx)
+
+        server_entries.append((id, idx, server, full_url, info, token))
 
     # Create async tasks to fetch data
     tasks = [
-        get_tool_server_data(token, url) for (_, _, url, _, token) in server_entries
+        get_tool_server_data(token, url) for (_, _, _, url, _, token) in server_entries
     ]
 
     # Execute tasks concurrently
@@ -517,7 +579,7 @@ async def get_tool_servers_data(
 
     # Build final results with index and server metadata
     results = []
-    for (idx, server, url, info, _), response in zip(server_entries, responses):
+    for (id, idx, server, url, info, _), response in zip(server_entries, responses):
         if isinstance(response, Exception):
             log.error(f"Failed to connect to {url} OpenAPI tool server")
             continue
@@ -535,6 +597,7 @@ async def get_tool_servers_data(
 
         results.append(
             {
+                "id": str(id),
                 "idx": idx,
                 "url": server.get("url"),
                 "openapi": openapi_data,
@@ -613,7 +676,9 @@ async def execute_tool_server(
         if token:
             headers["Authorization"] = f"Bearer {token}"
 
-        async with aiohttp.ClientSession(trust_env=True) as session:
+        async with aiohttp.ClientSession(
+            trust_env=True, timeout=aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT)
+        ) as session:
             request_method = getattr(session, http_method.lower())
 
             if http_method in ["post", "put", "patch"]:
@@ -626,7 +691,13 @@ async def execute_tool_server(
                     if response.status >= 400:
                         text = await response.text()
                         raise Exception(f"HTTP error {response.status}: {text}")
-                    return await response.json()
+
+                    try:
+                        response_data = await response.json()
+                    except Exception:
+                        response_data = await response.text()
+
+                    return response_data
             else:
                 async with request_method(
                     final_url,
@@ -636,7 +707,13 @@ async def execute_tool_server(
                     if response.status >= 400:
                         text = await response.text()
                         raise Exception(f"HTTP error {response.status}: {text}")
-                    return await response.json()
+
+                    try:
+                        response_data = await response.json()
+                    except Exception:
+                        response_data = await response.text()
+
+                    return response_data
 
     except Exception as err:
         error = str(err)
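The `get_async_tool_function_and_apply_extra_params` change above freezes extra keyword arguments with `functools.partial` and then rewrites the wrapper's advertised `__signature__` so schema generators only see caller-supplied parameters. A standalone sketch of that pattern (the helper name `freeze_kwargs` and the sample function are illustrative):

```python
import inspect
from functools import partial, update_wrapper


def freeze_kwargs(func, frozen: dict):
    sig = inspect.signature(func)
    # Bind only the frozen kwargs the function actually declares.
    frozen = {k: v for k, v in frozen.items() if k in sig.parameters}
    partial_func = partial(func, **frozen)

    # Drop the frozen parameters from the advertised signature so tools
    # that introspect __signature__ see only what a caller may supply.
    remaining = [p for name, p in sig.parameters.items() if name not in frozen]

    async def wrapper(*args, **kwargs):
        return partial_func(*args, **kwargs)

    update_wrapper(wrapper, func)
    wrapper.__signature__ = inspect.Signature(
        parameters=remaining, return_annotation=sig.return_annotation
    )
    return wrapper


def greet(name: str, __user__: dict = None) -> str:
    return f"hello {name}"


wrapped = freeze_kwargs(greet, {"__user__": {"id": "1"}})
print(inspect.signature(wrapped))  # → (name: str) -> str
```

Setting `__signature__` takes precedence over the `__wrapped__` attribute that `update_wrapper` copies, so `inspect.signature` reports the trimmed parameter list.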
@@ -1,7 +1,7 @@
 import json
 import logging
+import aiohttp
 
-import requests
 from open_webui.config import WEBUI_FAVICON_URL
 from open_webui.env import SRC_LOG_LEVELS, VERSION
 
@@ -9,7 +9,7 @@ log = logging.getLogger(__name__)
 log.setLevel(SRC_LOG_LEVELS["WEBHOOK"])
 
 
-def post_webhook(name: str, url: str, message: str, event_data: dict) -> bool:
+async def post_webhook(name: str, url: str, message: str, event_data: dict) -> bool:
     try:
         log.debug(f"post_webhook: {url}, {message}, {event_data}")
         payload = {}
@@ -51,9 +51,12 @@ def post_webhook(name: str, url: str, message: str, event_data: dict) -> bool:
         payload = {**event_data}
 
         log.debug(f"payload: {payload}")
-        r = requests.post(url, json=payload)
-        r.raise_for_status()
-        log.debug(f"r.text: {r.text}")
+        async with aiohttp.ClientSession() as session:
+            async with session.post(url, json=payload) as r:
+                r_text = await r.text()
+                r.raise_for_status()
+                log.debug(f"r.text: {r_text}")
 
         return True
     except Exception as e:
         log.exception(e)
@@ -1,12 +1,12 @@
 {
 	"name": "open-webui",
-	"version": "0.6.22",
+	"version": "0.6.23",
 	"lockfileVersion": 3,
 	"requires": true,
 	"packages": {
 		"": {
 			"name": "open-webui",
-			"version": "0.6.22",
+			"version": "0.6.23",
 			"dependencies": {
 				"@azure/msal-browser": "^4.5.0",
 				"@codemirror/lang-javascript": "^6.2.2",
@@ -1,6 +1,6 @@
 {
 	"name": "open-webui",
-	"version": "0.6.22",
+	"version": "0.6.23",
 	"private": true,
 	"scripts": {
 		"dev": "npm run pyodide:fetch && vite dev --host",
@@ -7,7 +7,7 @@ authors = [
 license = { file = "LICENSE" }
 dependencies = [
     "fastapi==0.115.7",
-    "uvicorn[standard]==0.34.2",
+    "uvicorn[standard]==0.35.0",
     "pydantic==2.11.7",
     "python-multipart==0.0.20",
@@ -86,6 +86,10 @@
 			document.addEventListener('DOMContentLoaded', function () {
 				const splash = document.getElementById('splash-screen');
+				if (document.documentElement.classList.contains('her')) {
+					return;
+				}
+
 				if (splash) splash.prepend(logo);
 			});
 		})();
@@ -167,6 +171,7 @@
 		<style type="text/css" nonce="">
 			html {
 				overflow-y: hidden !important;
+				overscroll-behavior-y: none;
 			}
 
 			#splash-screen {
@@ -393,7 +393,7 @@ export const addUser = async (
 	return res;
 };
 
-export const updateUserProfile = async (token: string, name: string, profileImageUrl: string) => {
+export const updateUserProfile = async (token: string, profile: object) => {
 	let error = null;
 
 	const res = await fetch(`${WEBUI_API_BASE_URL}/auths/update/profile`, {
@@ -403,8 +403,7 @@ export const updateUserProfile = async (token: string, name: string, profileImag
 			...(token && { authorization: `Bearer ${token}` })
 		},
 		body: JSON.stringify({
-			name: name,
-			profile_image_url: profileImageUrl
+			...profile
 		})
 	})
 		.then(async (res) => {
@@ -1,4 +1,5 @@
 import { WEBUI_API_BASE_URL } from '$lib/constants';
+import { splitStream } from '$lib/utils';
 
 export const uploadFile = async (token: string, file: File, metadata?: object | null) => {
 	const data = new FormData();
@@ -31,6 +32,75 @@ export const uploadFile = async (token: string, file: File, metadata?: object |
 		throw error;
 	}
 
+	if (res) {
+		const status = await getFileProcessStatus(token, res.id);
+
+		if (status && status.ok) {
+			const reader = status.body
+				.pipeThrough(new TextDecoderStream())
+				.pipeThrough(splitStream('\n'))
+				.getReader();
+
+			while (true) {
+				const { value, done } = await reader.read();
+				if (done) {
+					break;
+				}
+
+				try {
+					let lines = value.split('\n');
+
+					for (const line of lines) {
+						if (line !== '') {
+							console.log(line);
+							if (line === 'data: [DONE]') {
+								console.log(line);
+							} else {
+								let data = JSON.parse(line.replace(/^data: /, ''));
+								console.log(data);
+
+								if (data?.error) {
+									console.error(data.error);
+									res.error = data.error;
+								}
+							}
+						}
+					}
+				} catch (error) {
+					console.log(error);
+				}
+			}
+		}
+	}
+
+	if (error) {
+		throw error;
+	}
+
+	return res;
+};
+
+export const getFileProcessStatus = async (token: string, id: string) => {
+	const queryParams = new URLSearchParams();
+	queryParams.append('stream', 'true');
+
+	let error = null;
+	const res = await fetch(`${WEBUI_API_BASE_URL}/files/${id}/process/status?${queryParams}`, {
+		method: 'GET',
+		headers: {
+			Accept: 'application/json',
+			authorization: `Bearer ${token}`
+		}
+	}).catch((err) => {
+		error = err.detail;
+		console.error(err);
+		return null;
+	});
+
+	if (error) {
+		throw error;
+	}
+
 	return res;
 };
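The upload-polling hunk above reads the file-processing status endpoint as a line stream where each line is either `data: {json}` or a `data: [DONE]` sentinel, surfacing any `error` field to the caller. The same parsing logic, transplanted to Python purely for illustration (the helper name `parse_status_lines` is hypothetical):

```python
import json


def parse_status_lines(lines):
    """Return the last reported error from a status stream, or None."""
    error = None
    for line in lines:
        # Skip blanks and the end-of-stream sentinel.
        if not line or line == "data: [DONE]":
            continue
        data = json.loads(line.removeprefix("data: "))
        if data.get("error"):
            error = data["error"]
    return error


lines = [
    'data: {"status": "processing"}',
    'data: {"error": "embedding failed"}',
    "data: [DONE]",
]
print(parse_status_lines(lines))  # → embedding failed
```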
@@ -360,7 +360,7 @@ export const getToolServersData = async (i18n, servers: object[]) => {
 					: `${server?.url}${(server?.path ?? '').startsWith('/') ? '' : '/'}${server?.path}`
 			).catch((err) => {
 				toast.error(
-					i18n.t(`Failed to connect to {{URL}} OpenAPI tool server`, {
+					$i18n.t(`Failed to connect to {{URL}} OpenAPI tool server`, {
 						URL: (server?.path ?? '').includes('://')
 							? server?.path
 							: `${server?.url}${(server?.path ?? '').startsWith('/') ? '' : '/'}${server?.path}`
@@ -480,7 +487,14 @@ export const executeToolServer = async (
 			throw new Error(`HTTP error! Status: ${res.status}. Message: ${resText}`);
 		}
 
-		return await res.json();
+		let responseData;
+		try {
+			responseData = await res.json();
+		} catch (err) {
+			responseData = await res.text();
+		}
+
+		return responseData;
 	} catch (err: any) {
 		error = err.message;
 		console.error('API Request Error:', error);
@@ -234,7 +234,7 @@ export const getOllamaModels = async (token: string = '', urlIdx: null | number
 	return (res?.models ?? [])
 		.map((model) => ({ id: model.model, name: model.name ?? model.model, ...model }))
 		.sort((a, b) => {
-			return a.name.localeCompare(b.name);
+			return (a?.name ?? a?.id ?? '').localeCompare(b?.name ?? b?.id ?? '');
 		});
 };
@@ -108,7 +108,7 @@
 
 		if (!ollama && !url) {
 			loading = false;
-			toast.error('URL is required');
+			toast.error($i18n.t('URL is required'));
 			return;
 		}
 
@@ -116,20 +116,20 @@
 		if (!apiVersion) {
 			loading = false;
 
-			toast.error('API Version is required');
+			toast.error($i18n.t('API Version is required'));
 			return;
 		}
 
 		if (!key) {
 			loading = false;
 
-			toast.error('Key is required');
+			toast.error($i18n.t('Key is required'));
 			return;
 		}
 
 		if (modelIds.length === 0) {
 			loading = false;
-			toast.error('Deployment names are required');
+			toast.error($i18n.t('Deployment names are required for Azure OpenAI'));
 			return;
 		}
 	}
@@ -35,6 +35,7 @@
 
 let accessControl = {};
 
+let id = '';
 let name = '';
 let description = '';
 
@@ -76,6 +77,7 @@
 access_control: accessControl
 },
 info: {
+id,
 name,
 description
 }
@@ -106,6 +108,7 @@
 access_control: accessControl
 },
 info: {
+id: id,
 name: name,
 description: description
 }
@@ -121,6 +124,7 @@
 key = '';
 auth_type = 'bearer';
 
+id = '';
 name = '';
 description = '';
 
@@ -136,6 +140,7 @@
 auth_type = connection?.auth_type ?? 'bearer';
 key = connection?.key ?? '';
 
+id = connection.info?.id ?? '';
 name = connection.info?.name ?? '';
 description = connection.info?.description ?? '';
 
@@ -278,8 +283,8 @@
 class={`w-full text-sm bg-transparent pr-5 ${($settings?.highContrastMode ?? false) ? 'placeholder:text-gray-700 dark:placeholder:text-gray-100' : 'outline-hidden placeholder:text-gray-300 dark:placeholder:text-gray-700'}`}
 bind:value={auth_type}
 >
-<option value="bearer">Bearer</option>
+<option value="bearer">{$i18n.t('Bearer')}</option>
-<option value="session">Session</option>
+<option value="session">{$i18n.t('Session')}</option>
 </select>
 </div>
 
@@ -308,10 +313,34 @@
 <div class="flex gap-2">
 <div class="flex flex-col w-full">
 <label
-for="enter-name"
+for="enter-id"
-class={`mb-0.5 text-xs" ${($settings?.highContrastMode ?? false) ? 'text-gray-800 dark:text-gray-100' : 'text-gray-500'}`}
+class={`mb-0.5 text-xs ${($settings?.highContrastMode ?? false) ? 'text-gray-800 dark:text-gray-100' : 'text-gray-500'}`}
->{$i18n.t('Name')}</label
+>{$i18n.t('ID')}
+<span class="text-xs text-gray-200 dark:text-gray-800 ml-0.5"
+>{$i18n.t('Optional')}</span
 >
+</label>
 
+<div class="flex-1">
+<input
+id="enter-id"
+class={`w-full text-sm bg-transparent ${($settings?.highContrastMode ?? false) ? 'placeholder:text-gray-700 dark:placeholder:text-gray-100' : 'outline-hidden placeholder:text-gray-300 dark:placeholder:text-gray-700'}`}
+type="text"
+bind:value={id}
+placeholder={$i18n.t('Enter ID')}
+autocomplete="off"
+/>
+</div>
+</div>
+</div>
+
+<div class="flex gap-2 mt-2">
+<div class="flex flex-col w-full">
+<label
+for="enter-name"
+class={`mb-0.5 text-xs ${($settings?.highContrastMode ?? false) ? 'text-gray-800 dark:text-gray-100' : 'text-gray-500'}`}
+>{$i18n.t('Name')}
+</label>
 
 <div class="flex-1">
 <input
@@ -115,7 +115,7 @@
 if (a.rating === '-' && b.rating !== '-') return 1;
 if (b.rating === '-' && a.rating !== '-') return -1;
 if (a.rating !== '-' && b.rating !== '-') return b.rating - a.rating;
-return a.name.localeCompare(b.name);
+return (a?.name ?? a?.id ?? '').localeCompare(b?.name ?? b?.id ?? '');
 });
 
 loadingLeaderboard = false;
@@ -105,7 +105,7 @@
 sessionStorage.function = JSON.stringify({
 ..._function,
 id: `${_function.id}_clone`,
-name: `${_function.name} (Clone)`
+name: `${_function.name} (${$i18n.t('Clone')})`
 });
 goto('/admin/functions/create');
 }
@@ -626,7 +626,12 @@
 const _functions = JSON.parse(event.target.result);
 console.log(_functions);
 
-for (const func of _functions) {
+for (let func of _functions) {
+if ('function' in func) {
+// Required for Community JSON import
+func = func.function;
+}
+
 const res = await createNewFunction(localStorage.token, func).catch((error) => {
 toast.error(`${error}`);
 return null;
@@ -650,7 +655,7 @@
 >
 <div class="text-sm text-gray-500">
 <div class=" bg-yellow-500/20 text-yellow-700 dark:text-yellow-200 rounded-lg px-4 py-3">
-<div>Please carefully review the following warnings:</div>
+<div>{$i18n.t('Please carefully review the following warnings:')}</div>
 
 <ul class=" mt-1 list-disc pl-4 text-xs">
 <li>{$i18n.t('Functions allow arbitrary code execution.')}</li>
@@ -215,13 +215,13 @@
 <select
 class="dark:bg-gray-900 cursor-pointer w-fit pr-8 rounded-sm px-2 p-1 text-xs bg-transparent outline-hidden text-right"
 bind:value={STT_ENGINE}
-placeholder="Select an engine"
+placeholder={$i18n.t('Select an engine')}
 >
 <option value="">{$i18n.t('Whisper (Local)')}</option>
-<option value="openai">OpenAI</option>
+<option value="openai">{$i18n.t('OpenAI')}</option>
 <option value="web">{$i18n.t('Web API')}</option>
-<option value="deepgram">Deepgram</option>
+<option value="deepgram">{$i18n.t('Deepgram')}</option>
-<option value="azure">Azure AI Speech</option>
+<option value="azure">{$i18n.t('Azure AI Speech')}</option>
 </select>
 </div>
 </div>
@@ -250,7 +250,7 @@
 list="model-list"
 class="w-full rounded-lg py-2 px-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-hidden"
 bind:value={STT_MODEL}
-placeholder="Select a model"
+placeholder={$i18n.t('Select a model')}
 />
 
 <datalist id="model-list">
@@ -275,7 +275,7 @@
 <input
 class="w-full rounded-lg py-2 px-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-hidden"
 bind:value={STT_MODEL}
-placeholder="Select a model (optional)"
+placeholder={$i18n.t('Select a model (optional)')}
 />
 </div>
 </div>
@@ -424,7 +424,7 @@
 <select
 class=" dark:bg-gray-900 w-fit pr-8 cursor-pointer rounded-sm px-2 p-1 text-xs bg-transparent outline-hidden text-right"
 bind:value={TTS_ENGINE}
-placeholder="Select a mode"
+placeholder={$i18n.t('Select a mode')}
 on:change={async (e) => {
 await updateConfigHandler();
 await getVoices();
@@ -539,7 +539,7 @@
 list="model-list"
 class="w-full rounded-lg py-2 px-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-hidden"
 bind:value={TTS_MODEL}
-placeholder="CMU ARCTIC speaker embedding name"
+placeholder={$i18n.t('CMU ARCTIC speaker embedding name')}
 />
 
 <datalist id="model-list">
@@ -581,7 +581,7 @@
 list="voice-list"
 class="w-full rounded-lg py-2 px-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-hidden"
 bind:value={TTS_VOICE}
-placeholder="Select a voice"
+placeholder={$i18n.t('Select a voice')}
 />
 
 <datalist id="voice-list">
@@ -600,7 +600,7 @@
 list="tts-model-list"
 class="w-full rounded-lg py-2 px-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-hidden"
 bind:value={TTS_MODEL}
-placeholder="Select a model"
+placeholder={$i18n.t('Select a model')}
 />
 
 <datalist id="tts-model-list">
@@ -622,7 +622,7 @@
 list="voice-list"
 class="w-full rounded-lg py-2 px-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-hidden"
 bind:value={TTS_VOICE}
-placeholder="Select a voice"
+placeholder={$i18n.t('Select a voice')}
 />
 
 <datalist id="voice-list">
@@ -641,7 +641,7 @@
 list="tts-model-list"
 class="w-full rounded-lg py-2 px-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-hidden"
 bind:value={TTS_MODEL}
-placeholder="Select a model"
+placeholder={$i18n.t('Select a model')}
 />
 
 <datalist id="tts-model-list">
@@ -663,7 +663,7 @@
 list="voice-list"
 class="w-full rounded-lg py-2 px-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-hidden"
 bind:value={TTS_VOICE}
-placeholder="Select a voice"
+placeholder={$i18n.t('Select a voice')}
 />
 
 <datalist id="voice-list">
@@ -690,7 +690,7 @@
 list="tts-model-list"
 class="w-full rounded-lg py-2 px-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-hidden"
 bind:value={TTS_AZURE_SPEECH_OUTPUT_FORMAT}
-placeholder="Select a output format"
+placeholder={$i18n.t('Select an output format')}
 />
 </div>
 </div>
@@ -704,7 +704,7 @@
 <div class="flex items-center relative">
 <select
 class="dark:bg-gray-900 w-fit pr-8 cursor-pointer rounded-sm px-2 p-1 text-xs bg-transparent outline-hidden text-right"
-aria-label="Select how to split message text for TTS requests"
+aria-label={$i18n.t('Select how to split message text for TTS requests')}
 bind:value={TTS_SPLIT_ON}
 >
 {#each Object.values(TTS_RESPONSE_SPLIT) as split}
@@ -6,7 +6,7 @@
 
 import { getOllamaConfig, updateOllamaConfig } from '$lib/apis/ollama';
 import { getOpenAIConfig, updateOpenAIConfig, getOpenAIModels } from '$lib/apis/openai';
-import { getModels as _getModels } from '$lib/apis';
+import { getModels as _getModels, getBackendConfig } from '$lib/apis';
 import { getConnectionsConfig, setConnectionsConfig } from '$lib/apis/configs';
 
 import { config, models, settings, user } from '$lib/stores';
@@ -114,6 +114,7 @@
 if (res) {
 toast.success($i18n.t('Connections settings updated'));
 await models.set(await getModels());
+await config.set(await getBackendConfig());
 }
 };
 
@@ -198,6 +199,8 @@
 updateOllamaHandler();
 
 dispatch('save');
+
+await config.set(await getBackendConfig());
 };
 </script>
 
|
@ -76,7 +76,7 @@
|
||||||
);
|
);
|
||||||
|
|
||||||
if (res) {
|
if (res) {
|
||||||
toast.success('Config imported successfully');
|
toast.success($i18n.t('Config imported successfully'));
|
||||||
}
|
}
|
||||||
e.target.value = null;
|
e.target.value = null;
|
||||||
};
|
};
|
||||||
|
|
|
@@ -746,7 +746,7 @@
 <select
 class="dark:bg-gray-900 w-fit pr-8 rounded-sm px-2 p-1 text-xs bg-transparent outline-hidden text-right"
 bind:value={embeddingEngine}
-placeholder="Select an embedding model engine"
+placeholder={$i18n.t('Select an embedding model engine')}
 on:change={(e) => {
 if (e.target.value === 'ollama') {
 embeddingModel = '';
@@ -762,7 +762,7 @@
 <option value="">{$i18n.t('Default (SentenceTransformers)')}</option>
 <option value="ollama">{$i18n.t('Ollama')}</option>
 <option value="openai">{$i18n.t('OpenAI')}</option>
-<option value="azure_openai">Azure OpenAI</option>
+<option value="azure_openai">{$i18n.t('Azure OpenAI')}</option>
 </select>
 </div>
 </div>
@@ -811,7 +811,7 @@
 <div class="flex gap-2">
 <input
 class="flex-1 w-full text-sm bg-transparent outline-hidden"
-placeholder="Version"
+placeholder={$i18n.t('Version')}
 bind:value={AzureOpenAIVersion}
 required
 />
@@ -947,7 +947,7 @@
 <select
 class="dark:bg-gray-900 w-fit pr-8 rounded-sm px-2 p-1 text-xs bg-transparent outline-hidden text-right"
 bind:value={RAGConfig.RAG_RERANKING_ENGINE}
-placeholder="Select a reranking model engine"
+placeholder={$i18n.t('Select a reranking model engine')}
 on:change={(e) => {
 if (e.target.value === 'external') {
 RAGConfig.RAG_RERANKING_MODEL = '';
|
@ -26,7 +26,7 @@
|
||||||
});
|
});
|
||||||
|
|
||||||
if (evaluationConfig) {
|
if (evaluationConfig) {
|
||||||
toast.success('Settings saved successfully');
|
toast.success($i18n.t('Settings saved successfully!'));
|
||||||
models.set(
|
models.set(
|
||||||
await getModels(
|
await getModels(
|
||||||
localStorage.token,
|
localStorage.token,
|
||||||
|
|
|
@ -62,7 +62,7 @@
|
||||||
|
|
||||||
if (!name || !id) {
|
if (!name || !id) {
|
||||||
loading = false;
|
loading = false;
|
||||||
toast.error('Name and ID are required, please fill them out');
|
toast.error($i18n.t('Name and ID are required, please fill them out'));
|
||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -70,7 +70,7 @@
|
||||||
if ($models.find((model) => model.name === name)) {
|
if ($models.find((model) => model.name === name)) {
|
||||||
loading = false;
|
loading = false;
|
||||||
name = '';
|
name = '';
|
||||||
toast.error('Model name already exists, please choose a different one');
|
toast.error($i18n.t('Model name already exists, please choose a different one'));
|
||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
|
@ -290,7 +290,7 @@
|
||||||
<select
|
<select
|
||||||
class="dark:bg-gray-900 w-fit pr-8 rounded-sm px-2 text-xs bg-transparent outline-hidden text-right"
|
class="dark:bg-gray-900 w-fit pr-8 rounded-sm px-2 text-xs bg-transparent outline-hidden text-right"
|
||||||
bind:value={adminConfig.DEFAULT_USER_ROLE}
|
bind:value={adminConfig.DEFAULT_USER_ROLE}
|
||||||
placeholder="Select a role"
|
placeholder={$i18n.t('Select a role')}
|
||||||
>
|
>
|
||||||
<option value="pending">{$i18n.t('pending')}</option>
|
<option value="pending">{$i18n.t('pending')}</option>
|
||||||
<option value="user">{$i18n.t('user')}</option>
|
<option value="user">{$i18n.t('user')}</option>
|
||||||
|
@ -587,7 +587,7 @@
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
<div class="flex justify-between items-center text-xs">
|
<div class="flex justify-between items-center text-xs">
|
||||||
<div class=" font-medium">Validate certificate</div>
|
<div class=" font-medium">{$i18n.t('Validate certificate')}</div>
|
||||||
|
|
||||||
<div class="mt-1">
|
<div class="mt-1">
|
||||||
<Switch bind:state={LDAP_SERVER.validate_cert} />
|
<Switch bind:state={LDAP_SERVER.validate_cert} />
|
||||||
|
|
|
@ -143,7 +143,7 @@
|
||||||
|
|
||||||
if (config?.comfyui?.COMFYUI_WORKFLOW) {
|
if (config?.comfyui?.COMFYUI_WORKFLOW) {
|
||||||
if (!validateJSON(config.comfyui.COMFYUI_WORKFLOW)) {
|
if (!validateJSON(config.comfyui.COMFYUI_WORKFLOW)) {
|
||||||
toast.error('Invalid JSON format for ComfyUI Workflow.');
|
toast.error($i18n.t('Invalid JSON format for ComfyUI Workflow.'));
|
||||||
loading = false;
|
loading = false;
|
||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
|
@ -566,10 +566,10 @@
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
<div class="">
|
<div class="">
|
||||||
<Tooltip content="Input Key (e.g. text, unet_name, steps)">
|
<Tooltip content={$i18n.t('Input Key (e.g. text, unet_name, steps)')}>
|
||||||
<input
|
<input
|
||||||
class="py-1 px-3 w-24 text-xs text-center bg-transparent outline-hidden border-r border-gray-50 dark:border-gray-850"
|
class="py-1 px-3 w-24 text-xs text-center bg-transparent outline-hidden border-r border-gray-50 dark:border-gray-850"
|
||||||
placeholder="Key"
|
placeholder={$i18n.t('Key')}
|
||||||
bind:value={node.key}
|
bind:value={node.key}
|
||||||
required
|
required
|
||||||
/>
|
/>
|
||||||
|
@ -578,12 +578,12 @@
|
||||||
|
|
||||||
<div class="w-full">
|
<div class="w-full">
|
||||||
<Tooltip
|
<Tooltip
|
||||||
content="Comma separated Node Ids (e.g. 1 or 1,2)"
|
content={$i18n.t('Comma separated Node Ids (e.g. 1 or 1,2)')}
|
||||||
placement="top-start"
|
placement="top-start"
|
||||||
>
|
>
|
||||||
<input
|
<input
|
||||||
class="w-full py-1 px-4 text-xs bg-transparent outline-hidden"
|
class="w-full py-1 px-4 text-xs bg-transparent outline-hidden"
|
||||||
placeholder="Node Ids"
|
placeholder={$i18n.t('Node Ids')}
|
||||||
bind:value={node.node_ids}
|
bind:value={node.node_ids}
|
||||||
/>
|
/>
|
||||||
</Tooltip>
|
</Tooltip>
|
||||||
|
@ -650,7 +650,7 @@
|
||||||
list="model-list"
|
list="model-list"
|
||||||
class="w-full rounded-lg py-2 px-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-hidden"
|
class="w-full rounded-lg py-2 px-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-hidden"
|
||||||
bind:value={imageGenerationConfig.MODEL}
|
bind:value={imageGenerationConfig.MODEL}
|
||||||
placeholder="Select a model"
|
placeholder={$i18n.t('Select a model')}
|
||||||
required
|
required
|
||||||
/>
|
/>
|
||||||
|
|
||||||
|
|
|
@ -42,7 +42,7 @@
|
||||||
}
|
}
|
||||||
|
|
||||||
if (bannerListElement) {
|
if (bannerListElement) {
|
||||||
sortable = Sortable.create(bannerListElement, {
|
sortable = new Sortable(bannerListElement, {
|
||||||
animation: 150,
|
animation: 150,
|
||||||
handle: '.item-handle',
|
handle: '.item-handle',
|
||||||
onUpdate: async (event) => {
|
onUpdate: async (event) => {
|
||||||
|
|
|
@ -63,7 +63,7 @@
|
||||||
// return (b.is_active ?? true) - (a.is_active ?? true);
|
// return (b.is_active ?? true) - (a.is_active ?? true);
|
||||||
// }
|
// }
|
||||||
// If both models' active states are the same, sort alphabetically
|
// If both models' active states are the same, sort alphabetically
|
||||||
return a.name.localeCompare(b.name);
|
return (a?.name ?? a?.id ?? '').localeCompare(b?.name ?? b?.id ?? '');
|
||||||
});
|
});
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
|
@ -437,7 +437,7 @@
|
||||||
...$MODEL_DOWNLOAD_POOL
|
...$MODEL_DOWNLOAD_POOL
|
||||||
});
|
});
|
||||||
await deleteModel(localStorage.token, model);
|
await deleteModel(localStorage.token, model);
|
||||||
toast.success(`${model} download has been canceled`);
|
toast.success($i18n.t('{{model}} download has been canceled', { model: model }));
|
||||||
}
|
}
|
||||||
};
|
};
|
||||||
|
|
||||||
|
|
|
@ -31,9 +31,9 @@
|
||||||
}
|
}
|
||||||
|
|
||||||
if (modelListElement) {
|
if (modelListElement) {
|
||||||
sortable = Sortable.create(modelListElement, {
|
sortable = new Sortable(modelListElement, {
|
||||||
animation: 150,
|
animation: 150,
|
||||||
handle: '.item-handle',
|
handle: '.model-item-handle',
|
||||||
onUpdate: async (event) => {
|
onUpdate: async (event) => {
|
||||||
positionChangeHandler();
|
positionChangeHandler();
|
||||||
}
|
}
|
||||||
|
@ -44,11 +44,11 @@
|
||||||
|
|
||||||
{#if modelIds.length > 0}
|
{#if modelIds.length > 0}
|
||||||
<div class="flex flex-col -translate-x-1" bind:this={modelListElement}>
|
<div class="flex flex-col -translate-x-1" bind:this={modelListElement}>
|
||||||
{#each modelIds as modelId, modelIdx (modelId)}
|
{#each modelIds as modelId, modelIdx (`${modelId}-${modelIdx}`)}
|
||||||
<div class=" flex gap-2 w-full justify-between items-center" id="model-item-{modelId}">
|
<div class=" flex gap-2 w-full justify-between items-center" id="model-item-{modelId}">
|
||||||
<Tooltip content={modelId} placement="top-start">
|
<Tooltip content={modelId} placement="top-start">
|
||||||
<div class="flex items-center gap-1">
|
<div class="flex items-center gap-1">
|
||||||
<EllipsisVertical className="size-4 cursor-move item-handle" />
|
<EllipsisVertical className="size-4 cursor-move model-item-handle" />
|
||||||
|
|
||||||
<div class=" text-sm flex-1 py-1 rounded-lg">
|
<div class=" text-sm flex-1 py-1 rounded-lg">
|
||||||
{#if $models.find((model) => model.id === modelId)}
|
{#if $models.find((model) => model.id === modelId)}
|
||||||
|
|
|
@@ -152,7 +152,7 @@
 const res = await uploadPipeline(localStorage.token, file, selectedPipelinesUrlIdx).catch(
 (error) => {
 console.error(error);
-toast.error('Something went wrong :/');
+toast.error($i18n.t('Something went wrong :/'));
 return null;
 }
 );
@@ -410,10 +410,10 @@
 </div>
 
 <div class="mt-2 text-xs text-gray-500">
-<span class=" font-semibold dark:text-gray-200">Warning:</span> Pipelines are a plugin
+<span class=" font-semibold dark:text-gray-200">{$i18n.t('Warning:')}</span>
-system with arbitrary code execution —
+{$i18n.t('Pipelines are a plugin system with arbitrary code execution —')}
 <span class=" font-medium dark:text-gray-400"
->don't fetch random pipelines from sources you don't trust.</span
+>{$i18n.t("don't fetch random pipelines from sources you don't trust.")}</span
 >
 </div>
 </div>
@@ -514,7 +514,7 @@
 {:else if (valves_spec.properties[property]?.type ?? null) === 'boolean'}
 <div class="flex justify-between items-center">
 <div class="text-xs text-gray-500">
-{valves[property] ? 'Enabled' : 'Disabled'}
+{valves[property] ? $i18n.t('Enabled') : $i18n.t('Disabled')}
 </div>
 
 <div class=" pr-2">
@@ -540,12 +540,12 @@
 <Spinner className="size-5" />
 {/if}
 {:else}
-<div>No valves</div>
+<div>{$i18n.t('No valves')}</div>
 {/if}
 </div>
 </div>
 {:else if pipelines.length === 0}
-<div>Pipelines Not Detected</div>
+<div>{$i18n.t('Pipelines Not Detected')}</div>
 {/if}
 {:else}
 <div class="flex justify-center">
@@ -119,7 +119,11 @@
 >
 <option disabled selected value="">{$i18n.t('Select a engine')}</option>
 {#each webSearchEngines as engine}
+{#if engine === 'duckduckgo' || engine === 'ddgs'}
+<option value={engine}>DDGS</option>
+{:else}
 <option value={engine}>{engine}</option>
+{/if}
 {/each}
 </select>
 </div>
@@ -471,11 +475,11 @@
 />
 
 <datalist id="perplexity-model-list">
-<option value="sonar">Sonar</option>
+<option value="sonar">{$i18n.t('Sonar')}</option>
-<option value="sonar-pro">Sonar Pro</option>
+<option value="sonar-pro">{$i18n.t('Sonar Pro')}</option>
-<option value="sonar-reasoning">Sonar Reasoning</option>
+<option value="sonar-reasoning">{$i18n.t('Sonar Reasoning')}</option>
-<option value="sonar-reasoning-pro">Sonar Reasoning Pro</option>
+<option value="sonar-reasoning-pro">{$i18n.t('Sonar Reasoning Pro')}</option>
-<option value="sonar-deep-research">Sonar Deep Research</option>
+<option value="sonar-deep-research">{$i18n.t('Sonar Deep Research')}</option>
 </datalist>
 </div>
 </div>
@@ -489,9 +493,9 @@
 class="w-full rounded-lg py-2 px-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-hidden"
 bind:value={webConfig.PERPLEXITY_SEARCH_CONTEXT_USAGE}
 >
-<option value="low">Low</option>
+<option value="low">{$i18n.t('Low')}</option>
-<option value="medium">Medium</option>
+<option value="medium">{$i18n.t('Medium')}</option>
-<option value="high">High</option>
+<option value="high">{$i18n.t('High')}</option>
 </select>
 </div>
 </div>
@@ -551,6 +555,19 @@
 />
 </div>
 </div>
+{:else if webConfig.WEB_SEARCH_ENGINE === 'ddgs' || webConfig.WEB_SEARCH_ENGINE === 'duckduckgo'}
+<div class="w-full mb-2.5">
+<div class=" self-center text-xs font-medium mb-1">
+{$i18n.t('Concurrent Requests')}
+</div>
+
+<input
+class="w-full rounded-lg py-2 px-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-hidden"
+placeholder={$i18n.t('Concurrent Requests')}
+bind:value={webConfig.WEB_SEARCH_CONCURRENT_REQUESTS}
+required
+/>
+</div>
 {:else if webConfig.WEB_SEARCH_ENGINE === 'external'}
 <div class="mb-2.5 flex w-full flex-col">
 <div>
@@ -600,19 +617,6 @@
 required
 />
 </div>
-
-<div class="w-full">
-<div class=" self-center text-xs font-medium mb-1">
-{$i18n.t('Concurrent Requests')}
-</div>
-
-<input
-class="w-full rounded-lg py-2 px-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-hidden"
-placeholder={$i18n.t('Concurrent Requests')}
-bind:value={webConfig.WEB_SEARCH_CONCURRENT_REQUESTS}
-required
-/>
-</div>
 </div>
 </div>
 
@@ -849,6 +853,19 @@
 </div>
 {/if}
 
+<div class="mb-2.5 w-full">
+<div class=" self-center text-xs font-medium mb-1">
+{$i18n.t('Concurrent Requests')}
+</div>
+
+<input
+class="w-full rounded-lg py-2 px-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-hidden"
+placeholder={$i18n.t('Concurrent Requests')}
+bind:value={webConfig.WEB_LOADER_CONCURRENT_REQUESTS}
+required
+/>
+</div>
+
 <div class=" mb-2.5 flex w-full justify-between">
 <div class=" self-center text-xs font-medium">
 {$i18n.t('Youtube Language')}
|
@ -213,9 +213,9 @@
|
||||||
{:else}
|
{:else}
|
||||||
<div>
|
<div>
|
||||||
<div class=" flex items-center gap-3 justify-between text-xs uppercase px-1 font-bold">
|
<div class=" flex items-center gap-3 justify-between text-xs uppercase px-1 font-bold">
|
||||||
<div class="w-full basis-3/5">Group</div>
|
<div class="w-full basis-3/5">{$i18n.t('Group')}</div>
|
||||||
|
|
||||||
<div class="w-full basis-2/5 text-right">Users</div>
|
<div class="w-full basis-2/5 text-right">{$i18n.t('Users')}</div>
|
||||||
</div>
|
</div>
|
||||||
|
|
||||||
<hr class="mt-1.5 border-gray-100 dark:border-gray-850" />
|
<hr class="mt-1.5 border-gray-100 dark:border-gray-850" />
|
||||||
|
|
|
@@ -1,6 +1,6 @@
 <script>
 	import { toast } from 'svelte-sonner';
-	import { getContext } from 'svelte';
+	import { onMount, getContext } from 'svelte';

 	const i18n = getContext('i18n');
@@ -10,6 +10,7 @@
 	import User from '$lib/components/icons/User.svelte';
 	import UserCircleSolid from '$lib/components/icons/UserCircleSolid.svelte';
 	import GroupModal from './EditGroupModal.svelte';
+	import { querystringValue } from '$lib/utils';

 	export let users = [];
 	export let group = {
@@ -44,6 +45,13 @@
 			setGroups();
 		}
 	};

+	onMount(() => {
+		const groupId = querystringValue('id');
+
+		if (groupId && groupId === group.id) {
+			showEdit = true;
+		}
+	});
 </script>

 <GroupModal
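The `querystringValue('id')` call in the hunk above assumes a small helper in `$lib/utils` that reads one parameter from the current query string. A minimal sketch of such a helper, for illustration only (the actual implementation may differ; in the browser it would default `search` to `window.location.search`):

```typescript
// Hypothetical sketch of querystringValue; the real $lib/utils helper may differ.
// Returns the value of a single query-string parameter, or null when absent.
const querystringValue = (key: string, search: string): string | null =>
	new URLSearchParams(search).get(key);
```

With `?id=g-123` in the URL, `querystringValue('id')` yields `'g-123'`, which the `onMount` hook compares against `group.id` to auto-open the edit modal.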
@@ -158,7 +158,7 @@
 					<select
 						class="w-full bg-transparent outline-hidden py-0.5 text-sm"
 						bind:value={permissions.model.default_id}
-						placeholder="Select a model"
+						placeholder={$i18n.t('Select a model')}
 					>
 						<option value="" disabled selected>{$i18n.t('Select a model')}</option>
 						{#each permissions.model.filter ? $models.filter( (model) => filterModelIds.includes(model.id) ) : $models.filter((model) => model.id) as model}
@@ -166,12 +166,12 @@
 			{#if total > $config?.license_metadata?.seats}
 				<span class="text-lg font-medium text-red-500"
 					>{total} of {$config?.license_metadata?.seats}
-					<span class="text-sm font-normal">available users</span></span
+					<span class="text-sm font-normal">{$i18n.t('available users')}</span></span
 				>
 			{:else}
 				<span class="text-lg font-medium text-gray-500 dark:text-gray-300"
 					>{total} of {$config?.license_metadata?.seats}
-					<span class="text-sm font-normal">available users</span></span
+					<span class="text-sm font-normal">{$i18n.t('available users')}</span></span
 				>
 			{/if}
 		{:else}
@@ -104,7 +104,9 @@
 			}
 		}

-		toast.success(`Successfully imported ${userCount} users.`);
+		toast.success(
+			$i18n.t('Successfully imported {{userCount}} users.', { userCount: userCount })
+		);
 		inputFiles = null;
 		const uploadInputElement = document.getElementById('upload-user-csv-input');
@@ -4,6 +4,8 @@
 	import { createEventDispatcher } from 'svelte';
 	import { onMount, getContext } from 'svelte';

+	import { goto } from '$app/navigation';
+
 	import { updateUserById, getUserGroupsById } from '$lib/apis/users';

 	import Modal from '$lib/components/common/Modal.svelte';
@@ -127,7 +129,13 @@
 				<div class="flex flex-wrap gap-1 my-0.5 -mx-1">
 					{#each userGroups as userGroup}
 						<span class="px-2 py-0.5 rounded-full bg-gray-100 dark:bg-gray-850 text-xs">
+							<a
+								href={'/admin/users/groups?id=' + userGroup.id}
+								on:click|preventDefault={() =>
+									goto('/admin/users/groups?id=' + userGroup.id)}
+							>
 								{userGroup.name}
+							</a>
 						</span>
 					{/each}
 				</div>
@@ -12,6 +12,7 @@
 		blobToFile,
 		compressImage,
 		extractInputVariables,
+		getAge,
 		getCurrentDateTime,
 		getFormattedDate,
 		getFormattedTime,
@@ -31,6 +32,7 @@
 	import FilesOverlay from '../chat/MessageInput/FilesOverlay.svelte';
 	import Commands from '../chat/MessageInput/Commands.svelte';
 	import InputVariablesModal from '../chat/MessageInput/InputVariablesModal.svelte';
+	import { getSessionUser } from '$lib/apis/auths';

 	export let placeholder = $i18n.t('Send a Message');
@@ -116,11 +118,47 @@
 			text = text.replaceAll('{{USER_LOCATION}}', String(location));
 		}

+		const sessionUser = await getSessionUser(localStorage.token);
+
 		if (text.includes('{{USER_NAME}}')) {
-			const name = $user?.name || 'User';
+			const name = sessionUser?.name || 'User';
 			text = text.replaceAll('{{USER_NAME}}', name);
 		}

+		if (text.includes('{{USER_BIO}}')) {
+			const bio = sessionUser?.bio || '';
+
+			if (bio) {
+				text = text.replaceAll('{{USER_BIO}}', bio);
+			}
+		}
+
+		if (text.includes('{{USER_GENDER}}')) {
+			const gender = sessionUser?.gender || '';
+
+			if (gender) {
+				text = text.replaceAll('{{USER_GENDER}}', gender);
+			}
+		}
+
+		if (text.includes('{{USER_BIRTH_DATE}}')) {
+			const birthDate = sessionUser?.date_of_birth || '';
+
+			if (birthDate) {
+				text = text.replaceAll('{{USER_BIRTH_DATE}}', birthDate);
+			}
+		}
+
+		if (text.includes('{{USER_AGE}}')) {
+			const birthDate = sessionUser?.date_of_birth || '';
+
+			if (birthDate) {
+				// calculate age using date
+				const age = getAge(birthDate);
+				text = text.replaceAll('{{USER_AGE}}', age);
+			}
+		}
+
 		if (text.includes('{{USER_LANGUAGE}}')) {
 			const language = localStorage.getItem('locale') || 'en-US';
 			text = text.replaceAll('{{USER_LANGUAGE}}', language);
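The `getAge` helper imported from `$lib/utils` in the hunk above is not shown in this diff. A plausible sketch, assuming `date_of_birth` is an ISO `YYYY-MM-DD` string (UTC accessors keep the arithmetic timezone-independent; the real helper may differ):

```typescript
// Sketch of a getAge helper; the actual $lib/utils implementation may differ.
const getAge = (dateString: string, now: Date = new Date()): number => {
	const birth = new Date(dateString);
	let age = now.getUTCFullYear() - birth.getUTCFullYear();
	const monthDiff = now.getUTCMonth() - birth.getUTCMonth();
	// Subtract one year if this year's birthday has not occurred yet.
	if (monthDiff < 0 || (monthDiff === 0 && now.getUTCDate() < birth.getUTCDate())) {
		age -= 1;
	}
	return age;
};
```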
@@ -59,7 +59,7 @@
 			>
 				<div class="w-full flex justify-center py-1 text-xs animate-pulse items-center gap-2">
 					<Spinner className=" size-4" />
-					<div class=" ">Loading...</div>
+					<div class=" ">{$i18n.t('Loading...')}</div>
 				</div>
 			</Loader>
 		{:else if !thread}
@@ -84,7 +84,7 @@
 			</div>
 		{:else}
 			<div class="flex justify-center text-xs items-center gap-2 py-5">
-				<div class=" ">Start of the channel</div>
+				<div class=" ">{$i18n.t('Start of the channel')}</div>
 			</div>
 		{/if}

@@ -1,5 +1,8 @@
 <script lang="ts">
 	import { DropdownMenu } from 'bits-ui';
+	import { getContext } from 'svelte';
+
+	const i18n = getContext('i18n');

 	import { flyAndScale } from '$lib/utils/transitions';
 	import { WEBUI_BASE_URL } from '$lib/constants';
@@ -76,7 +79,7 @@
 				</div>

 				<div class=" -translate-y-[1px]">
-					<span class="text-xs"> Active </span>
+					<span class="text-xs"> {$i18n.t('Active')} </span>
 				</div>
 			{:else}
 				<div>
@@ -86,7 +89,7 @@
 				</div>

 				<div class=" -translate-y-[1px]">
-					<span class="text-xs"> Away </span>
+					<span class="text-xs"> {$i18n.t('Away')} </span>
 				</div>
 			{/if}
 		</div>
@@ -1,11 +1,18 @@
 <script lang="ts">
 	import { DropdownMenu } from 'bits-ui';
+	import VirtualList from '@sveltejs/svelte-virtual-list';
+
+	import { getContext } from 'svelte';
+
 	import { flyAndScale } from '$lib/utils/transitions';
+	import { WEBUI_BASE_URL } from '$lib/constants';
+
+	import Tooltip from '$lib/components/common/Tooltip.svelte';
+
 	import emojiGroups from '$lib/emoji-groups.json';
 	import emojiShortCodes from '$lib/emoji-shortcodes.json';
-	import Tooltip from '$lib/components/common/Tooltip.svelte';
-	import VirtualList from '@sveltejs/svelte-virtual-list';
-	import { WEBUI_BASE_URL } from '$lib/constants';
+
+	const i18n = getContext('i18n');

 	export let onClose = () => {};
 	export let onSubmit = (name) => {};
@@ -118,14 +125,16 @@
 			<input
 				type="text"
 				class="w-full text-sm bg-transparent outline-hidden"
-				placeholder="Search all emojis"
+				placeholder={$i18n.t('Search all emojis')}
 				bind:value={search}
 			/>
 		</div>
 		<!-- Virtualized Emoji List -->
 		<div class="w-full flex justify-start h-96 overflow-y-auto px-3 pb-3 text-sm">
 			{#if emojiRows.length === 0}
-				<div class="text-center text-xs text-gray-500 dark:text-gray-400">No results</div>
+				<div class="text-center text-xs text-gray-500 dark:text-gray-400">
+					{$i18n.t('No results')}
+				</div>
 			{:else}
 				<div class="w-full flex ml-0.5">
 					<VirtualList rowHeight={ROW_HEIGHT} items={emojiRows} height={384} let:item>
@@ -8,9 +8,11 @@
 	import XMark from '$lib/components/icons/XMark.svelte';
 	import MessageInput from './MessageInput.svelte';
 	import Messages from './Messages.svelte';
-	import { onDestroy, onMount, tick } from 'svelte';
+	import { onDestroy, onMount, tick, getContext } from 'svelte';
 	import { toast } from 'svelte-sonner';

+	const i18n = getContext('i18n');
+
 	export let threadId = null;
 	export let channel = null;

@@ -158,7 +160,7 @@
 {#if channel}
 	<div class="flex flex-col w-full h-full bg-gray-50 dark:bg-gray-850">
 		<div class="flex items-center justify-between px-3.5 pt-3">
-			<div class=" font-medium text-lg">Thread</div>
+			<div class=" font-medium text-lg">{$i18n.t('Thread')}</div>

 			<div>
 				<button
@@ -37,18 +37,14 @@
 		showArtifacts,
 		tools,
 		toolServers,
-		selectedFolder
+		selectedFolder,
+		pinnedChats
 	} from '$lib/stores';
 	import {
 		convertMessagesToHistory,
 		copyToClipboard,
 		getMessageContentParts,
 		createMessagesList,
-		extractSentencesForAudio,
-		promptTemplate,
-		splitStream,
-		sleep,
-		removeDetails,
 		getPromptVariables,
 		processDetails,
 		removeAllDetails
@@ -60,8 +56,10 @@
 		getAllTags,
 		getChatById,
 		getChatList,
+		getPinnedChatList,
 		getTagsById,
-		updateChatById
+		updateChatById,
+		updateChatFolderIdById
 	} from '$lib/apis/chats';
 	import { generateOpenAIChatCompletion } from '$lib/apis/openai';
 	import { processWeb, processWebSearch, processYoutubeVideo } from '$lib/apis/retrieval';
@@ -90,6 +88,7 @@
 	import { fade } from 'svelte/transition';
 	import Tooltip from '../common/Tooltip.svelte';
 	import Sidebar from '../icons/Sidebar.svelte';
+	import { uploadFile } from '$lib/apis/files';

 	export let chatIdProp = '';

@@ -741,6 +740,15 @@
 			await temporaryChatEnabled.set(true);
 		}

+		if ($settings?.temporaryChatByDefault ?? false) {
+			if ($temporaryChatEnabled === false) {
+				await temporaryChatEnabled.set(true);
+			} else if ($temporaryChatEnabled === null) {
+				// if set to null set to false; refer to temp chat toggle click handler
+				await temporaryChatEnabled.set(false);
+			}
+		}
+
 		const availableModels = $models
 			.filter((m) => !(m?.info?.meta?.hidden ?? false))
 			.map((m) => m.id);
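The hunk above treats `$temporaryChatEnabled` as a tri-state store: `true`, `false`, or `null` for "not yet toggled". Factored into a pure function, the intended transitions look roughly like this (names are illustrative and not part of the codebase; a sketch of my reading of the diff, not the project's actual code):

```typescript
// Sketch of the temporary-chat default logic from the hunk above.
type TriState = boolean | null;

const resolveTemporaryChat = (byDefault: boolean, current: TriState): TriState => {
	if (!byDefault) return current; // setting off: leave the store untouched
	if (current === false) return true; // default kicks in
	if (current === null) return false; // if set to null, set to false (see toggle click handler)
	return current; // already true
};
```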
@@ -1434,6 +1442,7 @@
 		}

 		messageInput?.setText('');
+		prompt = '';

 		// Reset chat input textarea
 		if (!($settings?.richTextInput ?? true)) {
@@ -1645,6 +1654,14 @@
 		);
 		await tick();

+		let userLocation;
+		if ($settings?.userLocation) {
+			userLocation = await getAndUpdateUserLocation(localStorage.token).catch((err) => {
+				console.error(err);
+				return undefined;
+			});
+		}
+
 		const stream =
 			model?.info?.params?.stream_response ??
 			$settings?.params?.stream_response ??
@@ -1655,16 +1672,7 @@
 			params?.system || $settings.system
 				? {
 						role: 'system',
-						content: `${promptTemplate(
-							params?.system ?? $settings?.system ?? '',
-							$user?.name,
-							$settings?.userLocation
-								? await getAndUpdateUserLocation(localStorage.token).catch((err) => {
-										console.error(err);
-										return undefined;
-									})
-								: undefined
-						)}`
+						content: `${params?.system ?? $settings?.system ?? ''}`
 					}
 				: undefined,
 			..._messages.map((message) => ({
@@ -1742,15 +1750,7 @@
 				memory: $settings?.memory ?? false
 			},
 			variables: {
-				...getPromptVariables(
-					$user?.name,
-					$settings?.userLocation
-						? await getAndUpdateUserLocation(localStorage.token).catch((err) => {
-								console.error(err);
-								return undefined;
-							})
-						: undefined
-				)
+				...getPromptVariables($user?.name, $settings?.userLocation ? userLocation : undefined)
 			},
 			model_item: $models.find((m) => m.id === model.id),
@@ -2122,6 +2122,27 @@
 		}
 		await sessionStorage.removeItem(`chat-input${chatId ? `-${chatId}` : ''}`);
 	};

+	const moveChatHandler = async (chatId, folderId) => {
+		if (chatId && folderId) {
+			const res = await updateChatFolderIdById(localStorage.token, chatId, folderId).catch(
+				(error) => {
+					toast.error(`${error}`);
+					return null;
+				}
+			);
+
+			if (res) {
+				currentChatPage.set(1);
+				await chats.set(await getChatList(localStorage.token, $currentChatPage));
+				await pinnedChats.set(await getPinnedChatList(localStorage.token));
+
+				toast.success($i18n.t('Chat moved successfully'));
+			}
+		} else {
+			toast.error($i18n.t('Failed to move chat'));
+		}
+	};
 </script>

 <svelte:head>
@@ -2196,6 +2217,44 @@
 		shareEnabled={!!history.currentId}
 		{initNewChat}
 		showBanners={!showCommands}
+		archiveChatHandler={() => {}}
+		{moveChatHandler}
+		onSaveTempChat={async () => {
+			try {
+				if (!history?.currentId || !Object.keys(history.messages).length) {
+					toast.error($i18n.t('No conversation to save'));
+					return;
+				}
+				const messages = createMessagesList(history, history.currentId);
+				const title =
+					messages.find((m) => m.role === 'user')?.content ?? $i18n.t('New Chat');
+
+				const savedChat = await createNewChat(
+					localStorage.token,
+					{
+						id: uuidv4(),
+						title: title.length > 50 ? `${title.slice(0, 50)}...` : title,
+						models: selectedModels,
+						history: history,
+						messages: messages,
+						timestamp: Date.now()
+					},
+					null
+				);
+
+				if (savedChat) {
+					temporaryChatEnabled.set(false);
+					chatId.set(savedChat.id);
+					chats.set(await getChatList(localStorage.token, $currentChatPage));
+
+					await goto(`/c/${savedChat.id}`);
+					toast.success($i18n.t('Conversation saved successfully'));
+				}
+			} catch (error) {
+				console.error('Error saving conversation:', error);
+				toast.error($i18n.t('Failed to save conversation'));
+			}
+		}}
 	/>

 	<div class="flex flex-col flex-auto z-10 w-full @container overflow-auto">
@@ -2276,6 +2335,7 @@
 			clearDraft();
 			if (e.detail || files.length > 0) {
 				await tick();
+
 				submitPrompt(
 					($settings?.richTextInput ?? true)
 						? e.detail.replaceAll('\n\n', '\n')
@@ -230,6 +230,7 @@
 	class="w-full {($showOverview || $showArtifacts) && !$showCallOverlay
 		? ' '
 		: 'px-4 py-4 bg-white dark:shadow-lg dark:bg-gray-850 border border-gray-100 dark:border-gray-850'} z-40 pointer-events-auto overflow-y-auto scrollbar-hidden"
+	id="controls-container"
 >
 	{#if $showCallOverlay}
 		<div class="w-full h-full flex justify-center">
@@ -68,7 +68,7 @@

 	const actionHandler = async (actionId) => {
 		if (!model) {
-			toast.error('Model not selected');
+			toast.error($i18n.t('Model not selected'));
 			return;
 		}

@@ -79,7 +79,7 @@

 		let selectedAction = actions.find((action) => action.id === actionId);
 		if (!selectedAction) {
-			toast.error('Action not found');
+			toast.error($i18n.t('Action not found'));
 			return;
 		}

@@ -195,7 +195,7 @@
 				}
 			}
 		} else {
-			toast.error('An error occurred while fetching the explanation');
+			toast.error($i18n.t('An error occurred while fetching the explanation'));
 		}
 	};

@@ -150,7 +150,7 @@
 	<select
 		class=" w-full rounded-sm text-xs py-2 px-1 bg-transparent outline-hidden"
 		bind:value={tab}
-		placeholder="Select"
+		placeholder={$i18n.t('Select')}
 	>
 		<option value="tools" class="bg-gray-100 dark:bg-gray-800">{$i18n.t('Tools')}</option>
 		<option value="functions" class="bg-gray-100 dark:bg-gray-800"
@@ -38,6 +38,7 @@
 		extractContentFromFile,
 		extractCurlyBraceWords,
 		extractInputVariables,
+		getAge,
 		getCurrentDateTime,
 		getFormattedDate,
 		getFormattedTime,
@@ -73,6 +74,7 @@
 	import { KokoroWorker } from '$lib/workers/KokoroWorker';
 	import InputVariablesModal from './MessageInput/InputVariablesModal.svelte';
 	import Voice from '../icons/Voice.svelte';
+	import { getSessionUser } from '$lib/apis/auths';
 	const i18n = getContext('i18n');

 	export let onChange: Function = () => {};
@@ -176,11 +178,47 @@
 			text = text.replaceAll('{{USER_LOCATION}}', String(location));
 		}

+		const sessionUser = await getSessionUser(localStorage.token);
+
 		if (text.includes('{{USER_NAME}}')) {
-			const name = $_user?.name || 'User';
+			const name = sessionUser?.name || 'User';
 			text = text.replaceAll('{{USER_NAME}}', name);
 		}

+		if (text.includes('{{USER_BIO}}')) {
+			const bio = sessionUser?.bio || '';
+
+			if (bio) {
+				text = text.replaceAll('{{USER_BIO}}', bio);
+			}
+		}
+
+		if (text.includes('{{USER_GENDER}}')) {
+			const gender = sessionUser?.gender || '';
+
+			if (gender) {
+				text = text.replaceAll('{{USER_GENDER}}', gender);
+			}
+		}
+
+		if (text.includes('{{USER_BIRTH_DATE}}')) {
+			const birthDate = sessionUser?.date_of_birth || '';
+
+			if (birthDate) {
+				text = text.replaceAll('{{USER_BIRTH_DATE}}', birthDate);
+			}
+		}
+
+		if (text.includes('{{USER_AGE}}')) {
+			const birthDate = sessionUser?.date_of_birth || '';
+
+			if (birthDate) {
+				// calculate age using date
+				const age = getAge(birthDate);
+				text = text.replaceAll('{{USER_AGE}}', age);
+			}
+		}
+
 		if (text.includes('{{USER_LANGUAGE}}')) {
 			const language = localStorage.getItem('locale') || 'en-US';
 			text = text.replaceAll('{{USER_LANGUAGE}}', language);
@@ -872,7 +910,8 @@
 					: `${WEBUI_BASE_URL}/static/favicon.png`)}
 			/>
 			<div class="translate-y-[0.5px]">
-				Talking to <span class=" font-medium">{atSelectedModel.name}</span>
+				{$i18n.t('Talk to model')}:
+				<span class=" font-medium">{atSelectedModel.name}</span>
 			</div>
 		</div>
 		<div>
@@ -1130,7 +1169,20 @@
 					return res;
 				}}
 				oncompositionstart={() => (isComposing = true)}
-				oncompositionend={() => (isComposing = false)}
+				oncompositionend={() => {
+					const isSafari = /^((?!chrome|android).)*safari/i.test(
+						navigator.userAgent
+					);
+
+					if (isSafari) {
+						// Safari has a bug where compositionend is not triggered correctly #16615
+						// when using the virtual keyboard on iOS.
+						// We use a timeout to ensure that the composition is ended after a short delay.
+						setTimeout(() => (isComposing = false));
+					} else {
+						isComposing = false;
+					}
+				}}
 				on:keydown={async (e) => {
 					e = e.detail.event;

@@ -1341,7 +1393,18 @@
 					command = getCommand();
 				}}
 				on:compositionstart={() => (isComposing = true)}
-				on:compositionend={() => (isComposing = false)}
+				on:compositionend={() => {
+					const isSafari = /^((?!chrome|android).)*safari/i.test(navigator.userAgent);
+
+					if (isSafari) {
+						// Safari has a bug where compositionend is not triggered correctly #16615
+						// when using the virtual keyboard on iOS.
+						// We use a timeout to ensure that the composition is ended after a short delay.
+						setTimeout(() => (isComposing = false));
+					} else {
+						isComposing = false;
+					}
+				}}
 				on:keydown={async (e) => {
 					const isCtrlPressed = e.ctrlKey || e.metaKey; // metaKey is for Cmd key on Mac
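The user-agent test added in both `compositionend` handlers above is a tempered pattern: it matches "safari" only when neither "chrome" nor "android" occurs earlier in the string, since Chrome's user agent also contains "Safari". Extracted for illustration:

```typescript
// Safari detection as used in the diff: "safari" must appear with no
// "chrome" or "android" token anywhere before it in the user-agent string.
const isSafariUA = (ua: string): boolean => /^((?!chrome|android).)*safari/i.test(ua);
```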
@ -78,7 +78,6 @@
|
||||||
|
|
||||||
onMount(async () => {
|
onMount(async () => {
|
||||||
window.addEventListener('resize', adjustHeight);
|
window.addEventListener('resize', adjustHeight);
|
||||||
adjustHeight();
|
|
||||||
|
|
||||||
let notes = await getNoteList(localStorage.token).catch(() => {
|
let notes = await getNoteList(localStorage.token).catch(() => {
|
||||||
return [];
|
return [];
|
||||||
|
@ -175,6 +174,9 @@
|
||||||
fuse = new Fuse(items, {
|
fuse = new Fuse(items, {
|
||||||
keys: ['name', 'description']
|
keys: ['name', 'description']
|
||||||
});
|
});
|
||||||
|
|
||||||
|
await tick();
|
||||||
|
adjustHeight();
|
||||||
});
|
});
|
||||||
|
|
||||||
onDestroy(() => {
|
onDestroy(() => {
|
||||||
|
|
|
@ -76,13 +76,14 @@
|
||||||
|
|
||||||
onMount(async () => {
|
onMount(async () => {
|
||||||
window.addEventListener('resize', adjustHeight);
|
window.addEventListener('resize', adjustHeight);
|
||||||
adjustHeight();
|
|
||||||
|
|
||||||
await tick();
|
await tick();
|
||||||
const chatInputElement = document.getElementById('chat-input');
|
const chatInputElement = document.getElementById('chat-input');
|
||||||
await tick();
|
await tick();
|
||||||
chatInputElement?.focus();
|
chatInputElement?.focus();
|
||||||
await tick();
|
await tick();
|
||||||
|
|
||||||
|
adjustHeight();
|
||||||
});
|
});
|
||||||
|
|
||||||
onDestroy(() => {
|
onDestroy(() => {
|
||||||
|
|
|
@ -59,8 +59,10 @@
|
||||||
onSelect({ type: 'prompt', data: command });
|
onSelect({ type: 'prompt', data: command });
|
||||||
};
|
};
|
||||||
|
|
||||||
onMount(() => {
|
onMount(async () => {
|
||||||
window.addEventListener('resize', adjustHeight);
|
window.addEventListener('resize', adjustHeight);
|
||||||
|
|
||||||
|
await tick();
|
||||||
adjustHeight();
|
adjustHeight();
|
||||||
});
|
});
|
||||||
|
|
||||||
|
|
|
@ -17,6 +17,7 @@
|
||||||
import CameraSolid from '$lib/components/icons/CameraSolid.svelte';
|
import CameraSolid from '$lib/components/icons/CameraSolid.svelte';
|
||||||
import PhotoSolid from '$lib/components/icons/PhotoSolid.svelte';
|
import PhotoSolid from '$lib/components/icons/PhotoSolid.svelte';
|
||||||
import CommandLineSolid from '$lib/components/icons/CommandLineSolid.svelte';
|
import CommandLineSolid from '$lib/components/icons/CommandLineSolid.svelte';
|
||||||
|
import Spinner from '$lib/components/common/Spinner.svelte';
|
||||||
|
|
||||||
const i18n = getContext('i18n');
|
const i18n = getContext('i18n');
|
||||||
|
|
||||||
|
@@ -34,7 +35,7 @@
 
 	export let onClose: Function;
 
-	let tools = {};
+	let tools = null;
 	let show = false;
 	let showAllTools = false;
 
@@ -48,10 +49,8 @@
 		($user?.role === 'admin' || $user?.permissions?.chat?.file_upload);
 
 	const init = async () => {
-		if ($_tools === null) {
 			await _tools.set(await getTools(localStorage.token));
-		}
+		if ($_tools) {
 
 			tools = $_tools.reduce((a, tool, i, arr) => {
 				a[tool.id] = {
 					name: tool.name,
@@ -60,6 +59,8 @@
 				};
 				return a;
 			}, {});
+			selectedToolIds = selectedToolIds.filter((id) => $_tools?.some((tool) => tool.id === id));
+		}
 	};
 
 	const detectMobile = () => {
@@ -107,8 +108,9 @@
 	align="start"
 	transition={flyAndScale}
 >
+	{#if tools}
 		{#if Object.keys(tools).length > 0}
-			<div class="{showAllTools ? '' : 'max-h-28'} overflow-y-auto scrollbar-thin">
+			<div class="{showAllTools ? ' max-h-96' : 'max-h-28'} overflow-y-auto scrollbar-thin">
 				{#each Object.keys(tools) as toolId}
 					<button
 						class="flex w-full justify-between gap-2 items-center px-3 py-2 text-sm font-medium cursor-pointer rounded-xl"
@@ -172,6 +174,13 @@
 			{/if}
 			<hr class="border-black/5 dark:border-white/5 my-1" />
 		{/if}
+	{:else}
+		<div class="py-4">
+			<Spinner />
+		</div>
+
+		<hr class="border-black/5 dark:border-white/5 my-1" />
+	{/if}
 
 	<Tooltip
 		content={fileUploadCapableModels.length !== selectedModels.length
@@ -379,7 +388,7 @@
 					>
 						<div class="flex flex-col">
 							<div class="line-clamp-1">{$i18n.t('Microsoft OneDrive (work/school)')}</div>
-							<div class="text-xs text-gray-500">Includes SharePoint</div>
+							<div class="text-xs text-gray-500">{$i18n.t('Includes SharePoint')}</div>
 						</div>
 					</DropdownMenu.Item>
 				</DropdownMenu.SubContent>
@@ -131,7 +131,7 @@
 					<input
 						type="text"
 						class="flex-1 py-1 text-sm dark:text-gray-300 bg-transparent outline-hidden"
-						placeholder="Enter value (true/false)"
+						placeholder={$i18n.t('Enter value (true/false)')}
 						bind:value={variableValues[variable]}
 						autocomplete="off"
 						required
@@ -156,7 +156,7 @@
 					<input
 						type="text"
 						class="flex-1 py-2 text-sm dark:text-gray-300 bg-transparent outline-hidden"
-						placeholder="Enter hex color (e.g. #FF0000)"
+						placeholder={$i18n.t('Enter hex color (e.g. #FF0000)')}
 						bind:value={variableValues[variable]}
 						autocomplete="off"
 						required
@@ -232,7 +232,7 @@
 					<input
 						type="text"
 						class=" py-1 text-sm dark:text-gray-300 bg-transparent outline-hidden text-right"
-						placeholder="Enter value"
+						placeholder={$i18n.t('Enter value')}
 						bind:value={variableValues[variable]}
 						autocomplete="off"
 						required
@@ -308,7 +308,7 @@
 					<input
 						type="text"
 						class=" w-full py-1 text-left text-sm dark:text-gray-300 bg-transparent outline-hidden"
-						placeholder="Enter coordinates (e.g. 51.505, -0.09)"
+						placeholder={$i18n.t('Enter coordinates (e.g. 51.505, -0.09)')}
 						bind:value={variableValues[variable]}
 						autocomplete="off"
 						required
@@ -404,7 +404,8 @@
 	{:else}
 		<div class="w-full pt-2">
 			{#key chatId}
-				<div class="w-full">
+				<section class="w-full" aria-labelledby="chat-conversation">
+					<h2 class="sr-only" id="chat-conversation">{$i18n.t('Chat Conversation')}</h2>
 					{#if messages.at(0)?.parentId !== null}
 						<Loader
 							on:visible={(e) => {
@@ -416,11 +417,11 @@
 					>
 						<div class="w-full flex justify-center py-1 text-xs animate-pulse items-center gap-2">
 							<Spinner className=" size-4" />
-							<div class=" ">Loading...</div>
+							<div class=" ">{$i18n.t('Loading...')}</div>
 						</div>
 					</Loader>
 				{/if}
+				<ul role="log" aria-live="polite" aria-relevant="additions" aria-atomic="false">
 				{#each messages as message, messageIdx (message.id)}
 					<Message
 						{chatId}
@@ -449,7 +450,8 @@
 						{topPadding}
 					/>
 				{/each}
-				</div>
+				</ul>
+			</section>
 			<div class="pb-12" />
 			{#if bottomPadding}
 				<div class=" pb-6" />
@@ -18,6 +18,13 @@
 	let selectedCitation: any = null;
 	let isCollapsibleOpen = false;
 
+	export const showSourceModal = (sourceIdx) => {
+		if (citations[sourceIdx]) {
+			selectedCitation = citations[sourceIdx];
+			showCitationModal = true;
+		}
+	};
+
 	function calculateShowRelevance(sources: any[]) {
 		const distances = sources.flatMap((citation) => citation.distances ?? []);
 		const inRange = distances.filter((d) => d !== undefined && d >= -1 && d <= 1).length;
@@ -548,13 +548,13 @@
 	>
 		{#if executing}
 			<div class=" ">
-				<div class=" text-gray-500 text-xs mb-1">STDOUT/STDERR</div>
-				<div class="text-sm">Running...</div>
+				<div class=" text-gray-500 text-xs mb-1">{$i18n.t('STDOUT/STDERR')}</div>
+				<div class="text-sm">{$i18n.t('Running...')}</div>
 			</div>
 		{:else}
 			{#if stdout || stderr}
 				<div class=" ">
-					<div class=" text-gray-500 text-xs mb-1">STDOUT/STDERR</div>
+					<div class=" text-gray-500 text-xs mb-1">{$i18n.t('STDOUT/STDERR')}</div>
 					<div
 						class="text-sm {stdout?.split('\n')?.length > 100
 							? `max-h-96`
@@ -566,7 +566,7 @@
 			{/if}
 			{#if result || files}
 				<div class=" ">
-					<div class=" text-gray-500 text-xs mb-1">RESULT</div>
+					<div class=" text-gray-500 text-xs mb-1">{$i18n.t('RESULT')}</div>
 					{#if result}
 						<div class="text-sm">{`${JSON.stringify(result)}`}</div>
 					{/if}
@@ -139,15 +139,15 @@
 	{preview}
 	{done}
 	{topPadding}
-	sourceIds={(sources ?? []).reduce((acc, s) => {
+	sourceIds={(sources ?? []).reduce((acc, source) => {
 		let ids = [];
-		s.document.forEach((document, index) => {
+		source.document.forEach((document, index) => {
 			if (model?.info?.meta?.capabilities?.citations == false) {
 				ids.push('N/A');
 				return ids;
 			}
 
-			const metadata = s.metadata?.[index];
+			const metadata = source.metadata?.[index];
 			const id = metadata?.source ?? 'N/A';
 
 			if (metadata?.name) {
@@ -158,7 +158,7 @@
 			if (id.startsWith('http://') || id.startsWith('https://')) {
 				ids.push(id);
 			} else {
-				ids.push(s?.source?.name ?? id);
+				ids.push(source?.source?.name ?? id);
 			}
 
 			return ids;
@@ -44,7 +44,7 @@
 	export let topPadding = false;
 </script>
 
-<div
+<li
 	class="flex flex-col justify-between px-5 mb-3 w-full {($settings?.widescreenMode ?? null)
 		? 'max-w-full'
 		: 'max-w-5xl'} mx-auto rounded-lg group"
@@ -120,4 +120,4 @@
 		/>
 	{/if}
 {/if}
-</div>
+</li>