- Replaced inline SSR logic in `AssetService.ts` with a `handleSsr` import (see the sketch after this group).
- Enhanced the `build:client` script to ensure the server directory is created.
- Updated dependencies and devDependencies across multiple packages for compatibility.
- Updated `package.json` across multiple packages to add a missing trailing newline and package-manager metadata.
- Minor README formatting fixes to remove unnecessary trailing spaces.
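For context, the delegation might look roughly like this; the `handleSsr` signature, module path, and surrounding class shape are assumptions for illustration, not the actual code:

```ts
// AssetService.ts — sketch only; names and paths are illustrative.
import { handleSsr } from "../handleSsr";

export class AssetService {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname.startsWith("/assets/")) {
      return this.serveStaticAsset(request);
    }
    // The inline SSR rendering that used to live here now delegates
    // to the shared handleSsr helper.
    return handleSsr(request);
  }

  private async serveStaticAsset(_request: Request): Promise<Response> {
    // Static asset handling is unchanged and elided in this sketch.
    return new Response(null, { status: 404 });
  }
}
```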
- Moved `providers`, `services`, `models`, `lib`, and related files into the `src` directory of the `server` package.
- Adjusted imports across the codebase to reflect the new paths.
- Renamed several `.ts` files for consistency.
- Introduced an `index.ts` barrel in the `ai/providers` package to export all providers (illustrated below).
This improves maintainability and aligns with the project's updated directory structure.
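The new barrel file presumably follows the usual pattern; the provider module names below are placeholders:

```ts
// ai/providers/index.ts — single entry point re-exporting every provider.
// Module names are illustrative, not the package's real file list.
export * from "./openai";
export * from "./anthropic";
export * from "./ollama";
```

Consumers can then import from `ai/providers` directly instead of reaching into individual provider files.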
- Removed outdated links and unused properties in the Sidebar and Welcome Home Text files.
- Dropped extraneous comments and consolidated imports in server files to streamline the code.
- Enhanced MarkdownEditor visuals with a colorful border for better user experience.
- Adjusted import statements across the codebase to consistently use `import type` for type-only imports.
- Unified `EventSource` initialization.
- Introduced a `RootDeps` type for shared dependencies (see the sketch after this list).
- Commented out unused VitePWA configuration.
- Updated proxy target URLs in the Vite configuration (example below).
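A rough sketch of the `import type` convention and the `RootDeps` bag, with an illustrative unified `EventSource` helper; the member names are placeholders, not the real type:

```ts
// Sketch only — the actual RootDeps members are project-specific.
import type { AssetService } from "./services/AssetService";

// One shared-dependencies type threaded through the app.
export type RootDeps = {
  assetService: AssetService;
  // ...other shared services
};

// A single construction point so every caller initializes
// EventSource the same way.
export function createEventSource(url: string): EventSource {
  return new EventSource(url, { withCredentials: false });
}
```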
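The proxy change would live in `vite.config.ts` along these lines; the path prefix and target port are placeholders:

```ts
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    proxy: {
      // Forward API calls to the local dev server.
      // Prefix and target are illustrative values.
      "/api": {
        target: "http://localhost:8787",
        changeOrigin: true,
      },
    },
  },
});
```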
- Replaced the single Docker command for Ollama with a `docker-compose` setup (sketched below).
- Updated `start_inference_server.sh` to use `ollama-compose.yml`.
- Updated README with new usage instructions for Ollama web UI access.
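For orientation, a compose file for Ollama plus a web UI typically looks something like the following; images, ports, and volume names here are assumptions, not the project's actual `ollama-compose.yml`:

```yaml
# Illustrative sketch — not the repository's real compose file.
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama-data:/root/.ollama

  webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama

volumes:
  ollama-data:
```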
- Updated README deployment steps and added a `deploy:secrets` script to `package.json`.
- Updated the local inference script and README.
- Updated the lockfile.
- Reconfigured package scripts for development.
- Updated test execution.
- Got the server tests passing.
- Updated the README with revised Bun commands and workspace details.
- Removed the pnpm package-manager designator.
- Created a Bun server (minimal sketch below).
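"Created a Bun server" most plausibly means something in the spirit of `Bun.serve`; this is a minimal sketch, with port and routes as placeholders rather than the actual server code:

```ts
// Minimal Bun HTTP server sketch — port and routes are illustrative.
const server = Bun.serve({
  port: 8787,
  fetch(req: Request): Response {
    const url = new URL(req.url);
    if (url.pathname === "/health") {
      return new Response("ok");
    }
    return new Response("Not found", { status: 404 });
  },
});

console.log(`Listening on http://localhost:${server.port}`);
```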
- Introduced `configure_local_inference.sh` to automatically set `.dev.vars` based on which local inference services are active (sketched after this list).
- Updated `start_inference_server.sh` to handle both the Ollama and mlx-omni-server backends.
- Enhanced `package.json` to include new commands for starting and configuring inference servers.
- Refined README to include updated instructions for running and adding models for local inference.
- Minor cleanup in `MessageBubble.tsx`.
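The detection in `configure_local_inference.sh` plausibly probes each service's default port and writes `.dev.vars` accordingly; everything below — ports, endpoints, and variable names — is an assumption for illustration:

```sh
#!/usr/bin/env bash
# Sketch only — real keys, ports, and endpoints belong to the project.
set -euo pipefail

DEV_VARS=".dev.vars"

if curl -sf http://localhost:11434/api/tags > /dev/null; then
  # Ollama responded on its default port.
  printf 'AI_PROVIDER=ollama\nAI_BASE_URL=http://localhost:11434\n' > "$DEV_VARS"
elif curl -sf http://localhost:10240/v1/models > /dev/null; then
  # mlx-omni-server (assumed default port).
  printf 'AI_PROVIDER=mlx-omni-server\nAI_BASE_URL=http://localhost:10240\n' > "$DEV_VARS"
else
  echo "No local inference service detected; $DEV_VARS left unchanged." >&2
fi
```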