- Added custom exception handler to ensure CORS headers are included in
all HTTP error responses from the API
- Added documentation for 413 (Payload Too Large) errors when behind
reverse proxies (nginx, traefik, kubernetes ingress)
- Added client_max_body_size to nginx configuration examples
- Documented how to configure CORS headers for proxy-level error responses
Fixes #401
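The proxy-level fix can be sketched as a minimal nginx server block; the 100M limit and the upstream name are illustrative, not taken from the project's configuration:

```nginx
server {
    listen 80;

    # Raise nginx's default 1M body limit so large uploads are not
    # rejected with 413 (Payload Too Large) before reaching the API.
    client_max_body_size 100M;

    location / {
        proxy_pass http://open_notebook_api;
    }
}
```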
The model uniqueness constraint now considers (provider, name, type)
instead of just (provider, name). This allows users to add the same
model name for different purposes (e.g., language vs embedding).
Fixes #391
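A constraint of this shape might be expressed in SurrealQL roughly as follows; the table and index names are illustrative, not taken from the codebase:

```sql
-- Uniqueness now spans all three columns, so the same model name can
-- exist once as a language model and once as an embedding model.
DEFINE INDEX model_provider_name_type ON TABLE model
    COLUMNS provider, name, type UNIQUE;
```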
- Use QUERY_KEYS.sourcesInfinite for infinite scroll query key
Keys starting with ['sources', ...] ensure that mutations invalidating
['sources'] also invalidate the infinite scroll cache
- Use httpx.Timeout for chat service with a short connect timeout (10s) and
a long read timeout (600s). This prevents a 10-minute wait on connection errors
Users with Ollama reported timeout errors on notebook chat while the
backend was still processing. The answer would appear after refresh.
- Frontend axios timeout: 5 min → 10 min
- Backend chat service timeout: 2 min → 10 min
Local LLMs can take several minutes for complex questions with large
contexts, especially on slower hardware.
* fix: add missing overflow wrapper to notebooks list page
Adds flex-1 overflow-y-auto wrapper to enable proper scrolling
when notebook list exceeds viewport height. Matches the layout
pattern used by all other dashboard pages.
Co-Authored-By: Claude <noreply@anthropic.com>
* fix: reorder transformation routes to prevent dynamic route interception
Moved static routes (/transformations/execute and /transformations/default-prompt)
before dynamic routes (/transformations/{transformation_id}) to ensure FastAPI
matches them correctly. Previously, requests to static routes were incorrectly
captured by the dynamic route handler.
Fixes #250
Co-Authored-By: Claude <noreply@anthropic.com>
* chore: bump to 1.2.1
---------
Co-authored-by: Claude <noreply@anthropic.com>
* chore: improve podcast transcripts
* fix: remove date from insight - fixes#241
* fix: improve scrolling on source and insights - fixes#237
* chore: update esperanto to fix #234
* chore: update esperanto to fix #226
* fix: process vectorization as subcommands to handle larger documents more gracefully - fixes #229
* feat: enable background job retry capabilities
* feat: reenable content types that were disabled during alpha version
* fix: remove unnecessary model caching that was causing many issues
* feat: support multiple azure endpoints and keys, just like openai compatible. Fixes #215
* docs: update azure variables
* chore: bump and update dependencies
* feat: prevent duplicate model names under same provider
Implement case-insensitive validation to prevent users from creating
duplicate model names under the same provider. This validation is
implemented both in the backend API and the frontend UI.
Changes:
- Backend: Add duplicate check in create_model endpoint (case-insensitive)
- Frontend: Add client-side validation in AddModelForm
- Frontend: Improve error message display from backend
- Tests: Add unit tests for duplicate model validation
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
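The original case-insensitive check might have looked roughly like the sketch below (the function and field names are illustrative; the follow-up refactor replaces this iteration with a SurrealDB query):

```python
def is_duplicate_model(existing_models: list[dict], provider: str, name: str) -> bool:
    """Case-insensitive duplicate check: 'GPT-4' and 'gpt-4' collide."""
    target = (provider.strip().lower(), name.strip().lower())
    return any(
        (m["provider"].strip().lower(), m["name"].strip().lower()) == target
        for m in existing_models
    )

models = [{"provider": "openai", "name": "GPT-4"}]
```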
* refactor: optimize duplicate model validation and improve error handling
- Replace O(n) model iteration with efficient SurrealDB query for duplicate check
- Improve error message to include model name and provider for better UX
- Remove frontend duplicate validation (backend-only enforcement)
- Fix test authentication by setting OPEN_NOTEBOOK_PASSWORD before imports
- Update test mocking to use repo_query instead of Model.get_all()
- Add pytest fixture for TestClient to ensure proper test isolation
All 11 tests passing.
Co-Authored-By: Claude <noreply@anthropic.com>
* remove unnecessary package
* fix: replace any with unknown type in error handler
- Change error type from 'any' to 'unknown' to satisfy ESLint
- Add proper type assertion for error object structure
- Maintains same runtime behavior with better type safety
---------
Co-authored-by: Claude <noreply@anthropic.com>
* fix: small issue where users can't change podcast segments
* chore: remove playwright mcp from git
* feat: add ability to link existing sources to notebooks (OSS-311)
Implemented bidirectional source-notebook linking functionality:
Backend changes:
- Add POST endpoint to link sources to notebooks
- Include notebook associations in source detail response
- Implement idempotent linking with proper RecordID handling
Frontend changes:
- Add AddExistingSourceDialog with search and multi-select
- Add NotebookAssociations component for source detail view
- Add dropdown menu to "Add Source" button (new/existing)
- Implement useAddSourcesToNotebook hook with graceful error handling
- Fix dialog pointer-events during close animation
- Add loading states and disable checkboxes for linked sources
- Optimize dialog width with proper responsive breakpoints
Co-Authored-By: Claude <noreply@anthropic.com>
* fix: address PR review feedback
- Fix sources.py query to use correct reference direction (OUT where IN)
- Remove debug console.log statements
- Add truncation warning for 100+ source lists
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude <noreply@anthropic.com>
* fix: small issue where users can't change podcast segments
* feat: display source and note counts on notebook cards (OSS-312)
Add item counters to notebook listing page showing the number of sources
and notes in each notebook. Counts are displayed in a footer section with
FileText and StickyNote icons for visual consistency with ContextIndicator.
Backend changes:
- Add source_count and note_count to NotebookResponse model
- Update /notebooks endpoint to use SurrealDB graph traversal query
- Query: count(<-reference.in) for sources, count(<-artifact.in) for notes
- Update all notebook endpoints to include counts
Frontend changes:
- Add source_count and note_count to TypeScript NotebookResponse interface
- Add footer section to NotebookCard component
- Display counts with FileText and StickyNote icons (h-3 w-3)
- Use border-top separator and muted-foreground styling
Co-Authored-By: Claude <noreply@anthropic.com>
* style: use colorful badges for notebook counts matching ContextIndicator
Update notebook card counts to use Badge components with primary color
styling instead of plain text, matching the visual style of the
ContextIndicator component in the chat window.
Changes:
- Replace plain text divs with Badge components
- Apply text-primary and border-primary/50 styling
- Use same spacing (gap-1.5, px-1.5, py-0.5) as ContextIndicator
- Remove bullet separator (not needed with badge layout)
Visual result matches the colorful badges shown in chat context.
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude <noreply@anthropic.com>
* fix: increase API client timeouts for transformation operations
- Increase frontend timeout from 30s to 300s (5 minutes)
- Increase Streamlit API client timeout from 30s to 300s
- Add API_CLIENT_TIMEOUT environment variable for configurability
- Add ESPERANTO_LLM_TIMEOUT environment variable documentation
- Update .env.example with comprehensive timeout documentation
Fixes #131 - API timeout errors during transformation generation
Transformations now have sufficient time to complete on slower
hardware (Ollama, LM Studio) without frontend timeout errors.
Users can now configure timeouts for both the API client layer
(API_CLIENT_TIMEOUT) and the LLM provider layer (ESPERANTO_LLM_TIMEOUT)
to accommodate their specific hardware and network conditions.
* docs: add timeout configuration documentation
- Add comprehensive timeout troubleshooting section to common-issues.md
- Add FAQ entry about timeout errors during transformations
- Document API_CLIENT_TIMEOUT and ESPERANTO_LLM_TIMEOUT usage
- Provide specific timeout recommendations for different hardware/network scenarios
- Link to GitHub issue #131 for reference
* chore: bump
* refactor: improve timeout configuration with validation and consistency
Based on PR review feedback, this commit addresses several improvements:
**Timeout Validation:**
- Add validation to ensure timeout values are between 30s and 3600s
- Invalid values fall back to default 300s with warning logs
- Handles edge cases (negative, zero, invalid strings)
**Fix Hard-coded Timeouts:**
- Replace all hard-coded timeout values in api/client.py
- ask_simple: 300s → self.timeout
- execute_transformation: 120s → self.timeout
- embed_content: 120s → self.timeout
- create_source: 300s → self.timeout
- rebuild_embeddings: Uses smart logic (2x timeout, max 3600s)
**Improved Documentation:**
- Add clarifying comments about ms vs seconds (frontend vs backend)
- Document that frontend uses 300000ms = backend 300s
- Add inline documentation for rebuild_embeddings timeout logic
**Development Dependencies:**
- Add pytest>=8.0.0 to dev dependencies for future test coverage
This makes timeout configuration more robust, consistent, and user-friendly
while maintaining backward compatibility.
* fix text
* remove lint from docker publish workflow
* gemini base url docs
* feat: add multimodal support for openai-compatible providers
- Add helper function to check OpenAI-compatible provider availability per mode
- Update provider detection to support language, embedding, STT, and TTS modalities
- Implement mode-specific environment variable detection (LLM, EMBEDDING, STT, TTS)
- Maintain backward compatibility with generic OPENAI_COMPATIBLE_BASE_URL
- Add comprehensive unit tests for all configuration scenarios
- Update .env.example with mode-specific environment variables
- Update provider support matrix in ai-models.md
- Create comprehensive openai-compatible.md setup guide
This enables users to configure different OpenAI-compatible endpoints for
different AI capabilities (e.g., LM Studio for language models, dedicated
server for embeddings) while maintaining full backward compatibility.
* upgrade
* chore: change docker release strategy
Changed create_source() timeout from default 30s to 300s (5 minutes) to handle
long-running operations like PDF processing with OCR.
Issue:
- PDF imports were timing out after 30 seconds with "Failed to connect to API: timed out"
- PDF processing (especially with OCR/parsing) takes longer than the default timeout
- Users were unable to import PDF documents
Solution:
- Increased timeout to 300 seconds (5 minutes), matching the timeout used by ask_simple()
- This gives sufficient time for document processing operations to complete
- Prevents premature connection timeout errors
Technical Details:
- Modified api/client.py create_source() method
- Added timeout=300.0 parameter to _make_request() call
- Consistent with existing long-running operations (ask_simple uses same timeout)
Testing:
- Users should now be able to import PDFs without timeout errors
- Smaller PDFs will still complete quickly
- Larger PDFs have sufficient time to process
New front-end:
- Launch Chat API
- Manage Sources
- Enable re-embedding of all contents
- Sources can now be added without a notebook
- Improved settings
- Enable model selector on all chats
- Background processing for a better experience
- Dark mode
- Improved Notes
Improved Docs:
- Remove all Streamlit references from documentation
- Update deployment guides with React frontend setup
- Fix Docker environment variables format (SURREAL_URL, SURREAL_PASSWORD)
- Update docker image tag from :latest to :v1-latest
- Change navigation references (Settings → Models to just Models)
- Update development setup to include frontend npm commands
- Add MIGRATION.md guide for users upgrading from Streamlit
- Update quick-start guide with correct environment variables
- Add port 5055 documentation for API access
- Update project structure to reflect frontend/ directory
- Remove outdated source-chat documentation files
- Creates the API layer for Open Notebook
- Creates a services API gateway for the Streamlit front-end
- Migrates the SurrealDB SDK to the official one
- Changes all database calls to async
- Adds a new podcast framework supporting multiple speaker configurations
- Implements the surreal-commands library for async processing
- Improves the Docker image and docker-compose configurations