run format

geoffsee
2025-06-24 17:31:15 -04:00
committed by Geoff Seemueller
parent 02c3253343
commit f76301d620
17 changed files with 180 additions and 199 deletions

View File

@@ -1,60 +1,60 @@
## Legacy Development History

The source code of open-gsio was drawn from that of my personal website. That commit history was contaminated early on with secrets; `open-gsio` is a refinement of those sources. A total of 367 commits were submitted to the main branch of the upstream source repository between August 2024 and May 2025.

#### **May 2025**

- Added **seemueller.ai** link to UI sidebar.
- Global config/markdown guide cleanup; patched a critical bug that had been overlooked.

#### **Apr 2025**

- **CI/CD overhaul**: auto-deploy to dev & staging, adopted Bun as package manager, streamlined blocklist workflow (now auto-updates via VPN blocker).
- New 404 error page; multiple robots.txt and editor-resize fixes; removed dead/duplicate code.

#### **Mar 2025**

- Introduced **model-specific `max_tokens`** handling and plugged in **Cloudflare AI models** for testing.
- Bundle size minimised (re-enabled minifier, smaller vendor set).

#### **Feb 2025**

- **Full theme system** (runtime switching, Centauri theme, server-saved prefs).
- Tightened MobX typing for messages; repaired responsive breakpoints & input scaling.
- Dropped legacy document API; general folder restructure.

#### **Jan 2025**

- **Rate-limit middleware**, larger KV/R2 storage quota.
- Switched default model → _llama-v3p1-70b-instruct_; pluggable model handlers.
- Added **KaTeX fonts** & **Marked.js** for rich math/markdown.
- Fireworks key rotation; deprecated Google models removed.

#### **Dec 2024**

- Major package upgrades; **CodeHighlighter** now supports HTML/JSX/TS(X)/Zig.
- Refactored streaming + markdown renderer; Android-specific padding fixes.
- Reset default chat model to **gpt-4o**; welcome message & richer search-intent logic.

#### **Nov 2024**

- **Fireworks API** + agent server; first-class support for **Anthropic** & **Groq** models (incl. attachments).
- **VPN blocker** shipped with CIDR validation and a dedicated GitHub Action.
- Live search buffering, feedback modal, smarter context preprocessing.

#### **Oct 2024**

- Rolled out **image generation** + picker for image models.
- Deployed **ETH payment processor** & deposit-address flow.
- Introduced few-shot prompting library; analytics worker refactor; Halloween prompt.
- Extensive mobile-UX polish and bundling/worker config updates.

#### **Sep 2024**

- End-to-end **math rendering** (KaTeX) and **GitHub-flavoured markdown**.
- Migrated chat state to **MobX**; launched analytics service & metrics worker.
- Switched build minifier to **esbuild**; tokenizer limits enforced; gradient sidebar & cookie-consent manager added.

#### **Aug 2024**

- **Initial MVP**: iMessage-style chat UI, websocket prototype, Google Analytics, Cloudflare bindings, base worker-site scaffold.

View File

@@ -1,12 +1,13 @@
# open-gsio

[![Tests](https://github.com/geoffsee/open-gsio/actions/workflows/test.yml/badge.svg)](https://github.com/geoffsee/open-gsio/actions/workflows/test.yml)
[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT)

</br>

<p align="center">
  <img src="https://github.com/user-attachments/assets/620d2517-e7be-4bb0-b2b7-3aa0cba37ef0" width="250" />
</p>

This is a full-stack Conversational AI.

## Table of Contents
@@ -14,16 +15,15 @@ This is a full-stack Conversational AI.
- [Installation](#installation)
- [Deployment](#deployment)
- [Local Inference](#local-inference)
  - [mlx-omni-server (default)](#mlx-omni-server)
    - [Adding models](#adding-models-for-local-inference-apple-silicon)
  - [Ollama](#ollama)
    - [Adding models](#adding-models-for-local-inference-ollama)
- [Testing](#testing)
- [Troubleshooting](#troubleshooting)
- [Acknowledgments](#acknowledgments)
- [License](#license)

## Installation

1. `bun i && bun test:all`
@@ -33,18 +33,22 @@ This is a full-stack Conversational AI.
> Note: it should be possible to use pnpm in place of bun.

## Deployment

1. Set up the KV_STORAGE binding in `packages/server/wrangler.jsonc`
1. [Add keys in secrets.json](https://console.groq.com/keys)
1. Run `bun run deploy && bun run deploy:secrets && bun run deploy`

> Note: Subsequent deployments should omit `bun run deploy:secrets`
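Expanded, that one-liner corresponds to the sequence below (a sketch; the second `bun run deploy` is presumably needed so the Worker is redeployed once its secrets exist):

```bash
# First deployment
bun run deploy          # create/update the Worker
bun run deploy:secrets  # push values from secrets.json
bun run deploy          # redeploy so the Worker picks up the new secrets

# Subsequent deployments
bun run deploy
```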
## Local Inference

> Local inference is supported for Ollama and mlx-omni-server. OpenAI-compatible servers can be used by overriding `OPENAI_API_KEY` and `OPENAI_API_ENDPOINT`.
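As a sketch, pointing the stack at a generic OpenAI-compatible server might look like the following (the endpoint URL and key here are placeholder assumptions, not values from this repo):

```bash
# Any OpenAI-compatible server works; adjust host/port for your setup.
export OPENAI_API_ENDPOINT=http://localhost:11434/v1  # hypothetical local endpoint
export OPENAI_API_KEY=placeholder                     # many local servers accept any value
bun run server:dev
```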
### mlx-omni-server

(default) (Apple Silicon only)

```bash
# (prereq) install mlx-omni-server
brew tap seemueller-io/tap
brew install seemueller-io/tap/mlx-omni-server
@@ -52,10 +56,11 @@ brew install seemueller-io/tap/mlx-omni-server
bun run openai:local mlx-omni-server  # Start mlx-omni-server
bun run openai:local:configure        # Configure connection
bun run server:dev                    # Restart server
```
#### Adding models for local inference (Apple Silicon)

```bash
# ensure mlx-omni-server is running
# See https://huggingface.co/mlx-community for available models
@@ -67,21 +72,22 @@ curl http://localhost:10240/v1/chat/completions \
\"model\": \"$MODEL_TO_ADD\", \"model\": \"$MODEL_TO_ADD\",
\"messages\": [{\"role\": \"user\", \"content\": \"Hello\"}] \"messages\": [{\"role\": \"user\", \"content\": \"Hello\"}]
}" }"
~~~ ```
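If the request succeeds, the model should also show up in the server's model list; a quick check, assuming mlx-omni-server exposes the standard OpenAI-compatible `/v1/models` route on the same port:

```bash
# List the models the local server currently knows about
curl http://localhost:10240/v1/models
```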
### Ollama

```bash
bun run openai:local ollama      # Start ollama server
bun run openai:local:configure   # Configure connection
bun run server:dev               # Restart server
```
#### Adding models for local inference (ollama)

```bash
# See https://ollama.com/library for available models
# use the ollama web ui @ http://localhost:8080
```
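Models can likely also be added from the CLI, mirroring the package-level README below (this assumes Ollama runs in a Docker container named `ollama`):

```bash
# Pull and run a model inside the ollama container
MODEL_TO_ADD=gemma3
docker exec -it ollama ollama run ${MODEL_TO_ADD}
```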
## Testing
@@ -89,44 +95,44 @@ Tests are located in `__tests__` directories next to the code they test. Testing
> `bun test:all` will run all tests

## Troubleshooting

1. `bun clean`
1. `bun i`
1. `bun server:dev`
1. `bun client:dev`
1. Submit an issue

## History

A high-level overview of the development history of the parent repository, [geoff-seemueller-io](https://geoff.seemueller.io), is provided in [LEGACY.md](./LEGACY.md).
## Acknowledgments

I would like to express gratitude to the following projects, libraries, and individuals that have contributed to making open-gsio possible:

- [TypeScript](https://www.typescriptlang.org/) - Primary programming language
- [React](https://react.dev/) - UI library for building the frontend
- [Vike](https://vike.dev/) - Framework for server-side rendering and routing
- [Cloudflare Workers](https://developers.cloudflare.com/workers/) - Serverless execution environment
- [Bun](https://bun.sh/) - JavaScript runtime and toolkit
- [itty-router](https://github.com/kwhitley/itty-router) - Lightweight router for serverless environments
- [MobX-State-Tree](https://mobx-state-tree.js.org/) - State management solution
- [OpenAI SDK](https://github.com/openai/openai-node) - Client for AI model integration
- [Vitest](https://vitest.dev/) - Testing framework
- [OpenAI](https://github.com/openai)
- [Groq](https://console.groq.com/) - Fast inference API
- [Anthropic](https://www.anthropic.com/) - Creator of Claude models
- [Fireworks](https://fireworks.ai/) - AI inference platform
- [xAI](https://x.ai/) - Creator of Grok models
- [Cerebras](https://www.cerebras.net/) - AI compute and models
- [(madroidmaq) MLX Omni Server](https://github.com/madroidmaq/mlx-omni-server) - Open-source high-performance inference for Apple Silicon
- [MLX](https://github.com/ml-explore/mlx) - An array framework for Apple silicon
- [Ollama](https://github.com/ollama/ollama) - Versatile solution for self-hosting models

## License

```text
MIT License

Copyright (c) 2025 Geoff Seemueller
@@ -148,4 +154,4 @@ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```

View File

@@ -38,5 +38,6 @@
  },
  "peerDependencies": {
    "typescript": "^5"
  },
  "packageManager": "pnpm@10.10.0+sha512.d615db246fe70f25dcfea6d8d73dee782ce23e2245e3c4f6f888249fb568149318637dca73c2c5c8ef2a4ca0d5657fb9567188bfab47f566d1ee6ce987815c39"
}

View File

@@ -4,10 +4,6 @@
"outDir": "dist", "outDir": "dist",
"rootDir": "." "rootDir": "."
}, },
"include": [ "include": ["*.ts"],
"*.ts" "exclude": ["node_modules"]
],
"exclude": [
"node_modules"
]
} }

View File

@@ -15,30 +15,29 @@
  };
  function s() {
    var i = [
      g(m(4)) + '=' + g(m(6)),
      'ga=' + t.ga_tid,
      'dt=' + r(e.title),
      'de=' + r(e.characterSet || e.charset),
      'dr=' + r(e.referrer),
      'ul=' + (n.language || n.browserLanguage || n.userLanguage),
      'sd=' + a.colorDepth + '-bit',
      'sr=' + a.width + 'x' + a.height,
      'vp=' +
        o(e.documentElement.clientWidth, t.innerWidth || 0) +
        'x' +
        o(e.documentElement.clientHeight, t.innerHeight || 0),
      'plt=' + c(d.loadEventStart - d.navigationStart || 0),
      'dns=' + c(d.domainLookupEnd - d.domainLookupStart || 0),
      'pdt=' + c(d.responseEnd - d.responseStart || 0),
      'rrt=' + c(d.redirectEnd - d.redirectStart || 0),
      'tcp=' + c(d.connectEnd - d.connectStart || 0),
      'srt=' + c(d.responseStart - d.requestStart || 0),
      'dit=' + c(d.domInteractive - d.domLoading || 0),
      'clt=' + c(d.domContentLoadedEventStart - d.navigationStart || 0),
      'z=' + Date.now(),
    ];
    ((t.__ga_img = new Image()), (t.__ga_img.src = t.ga_api + '?' + i.join('&')));
  }
  ((t.cfga = s), 'complete' === e.readyState ? s() : t.addEventListener('load', s));
})(window, document, navigator);

View File

@@ -8,12 +8,6 @@
"baseUrl": "src", "baseUrl": "src",
"noEmit": true "noEmit": true
}, },
"include": [ "include": ["src/**/*.ts", "src/**/*.tsx"],
"src/**/*.ts", "exclude": ["node_modules", "dist"]
"src/**/*.tsx"
],
"exclude": [
"node_modules",
"dist"
]
} }

View File

@@ -1,7 +1,9 @@
# open-gsio

[![Tests](https://github.com/geoffsee/open-gsio/actions/workflows/test.yml/badge.svg)](https://github.com/geoffsee/open-gsio/actions/workflows/test.yml)
[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT)

</br>

<p align="center">
  <img src="https://github.com/user-attachments/assets/620d2517-e7be-4bb0-b2b7-3aa0cba37ef0" width="250" />
</p>
@@ -15,25 +17,25 @@
- [Installation](#installation)
- [Deployment](#deployment)
- [Local Inference](#local-inference)
  - [mlx-omni-server (default)](#mlx-omni-server)
    - [Adding models](#adding-models-for-local-inference-apple-silicon)
  - [Ollama](#ollama)
    - [Adding models](#adding-models-for-local-inference-ollama)
- [Testing](#testing)
- [Troubleshooting](#troubleshooting)
- [History](#history)
- [License](#license)

## Stack
- [TypeScript](https://www.typescriptlang.org/)
- [Vike](https://vike.dev/)
- [React](https://react.dev/)
- [Cloudflare Workers](https://developers.cloudflare.com/workers/)
- [itty-router](https://github.com/kwhitley/itty-router)
- [MobX-State-Tree](https://mobx-state-tree.js.org/)
- [OpenAI SDK](https://github.com/openai/openai-node)
- [Vitest](https://vitest.dev/)
## Installation
@@ -44,19 +46,22 @@
> Note: it should be possible to use pnpm in place of bun.

## Deployment

1. Set up the KV_STORAGE bindings in `wrangler.jsonc`
1. [Add another `GROQ_API_KEY` in secrets.json](https://console.groq.com/keys)
1. Run `bun run deploy && bun run deploy:secrets && bun run deploy`

> Note: Subsequent deployments should omit `bun run deploy:secrets`

## Local Inference

> Local inference is achieved by overriding the `OPENAI_API_KEY` and `OPENAI_API_ENDPOINT` environment variables. See below.
### mlx-omni-server

(default) (Apple Silicon only) - use Ollama for other platforms.

```bash
# (prereq) install mlx-omni-server
brew tap seemueller-io/tap
brew install seemueller-io/tap/mlx-omni-server
@@ -64,10 +69,11 @@ brew install seemueller-io/tap/mlx-omni-server
bun run openai:local mlx-omni-server  # Start mlx-omni-server
bun run openai:local:enable           # Configure connection
bun run server:dev                    # Restart server
```
#### Adding models for local inference (Apple Silicon)

```bash
# ensure mlx-omni-server is running
# See https://huggingface.co/mlx-community for available models
@@ -79,22 +85,23 @@ curl http://localhost:10240/v1/chat/completions \
\"model\": \"$MODEL_TO_ADD\", \"model\": \"$MODEL_TO_ADD\",
\"messages\": [{\"role\": \"user\", \"content\": \"Hello\"}] \"messages\": [{\"role\": \"user\", \"content\": \"Hello\"}]
}" }"
~~~ ```
### Ollama

```bash
bun run openai:local ollama     # Start ollama server
bun run openai:local:enable     # Configure connection
bun run server:dev              # Restart server
```
#### Adding models for local inference (ollama)

```bash
# See https://ollama.com/library for available models
MODEL_TO_ADD=gemma3
docker exec -it ollama ollama run ${MODEL_TO_ADD}
```
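To confirm a pull succeeded, `ollama list` should show the model (again assuming the container is named `ollama`, as above):

```bash
# List models available to the ollama instance in the container
docker exec -it ollama ollama list
```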
## Testing
@@ -102,20 +109,21 @@ Tests are located in `__tests__` directories next to the code they test. Testing
> `bun run test` will run all tests

## Troubleshooting

1. `bun run clean`
1. `bun i`
1. `bun server:dev`
1. `bun client:dev`
1. Submit an issue

## History

A high-level overview of the development history of the parent repository, [geoff-seemueller-io](https://geoff.seemueller.io), is provided in [LEGACY.md](../../LEGACY.md).
## License

```text
MIT License

Copyright (c) 2025 Geoff Seemueller
@@ -137,5 +145,4 @@ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```

View File

@@ -8,11 +8,6 @@
"outDir": "dist", "outDir": "dist",
"rootDir": "." "rootDir": "."
}, },
"include": [ "include": ["*.ts"],
"*.ts" "exclude": ["node_modules", "*.test.ts"]
],
"exclude": [
"node_modules",
"*.test.ts"
]
} }

View File

@@ -4,7 +4,7 @@ interface Env {
  EMAIL_SERVICE: any;
  // Durable Objects
  SERVER_COORDINATOR: import('packages/server/durable-objects/ServerCoordinator.ts');
  // Handles serving static assets
  ASSETS: Fetcher;
@@ -12,7 +12,6 @@ interface Env {
  // KV Bindings
  KV_STORAGE: KVNamespace;
  // Text/Secrets
  METRICS_HOST: string;
  OPENAI_API_ENDPOINT: string;

View File

@@ -4,11 +4,6 @@
"outDir": "dist", "outDir": "dist",
"rootDir": "." "rootDir": "."
}, },
"include": [ "include": ["*.ts", "*.d.ts"],
"*.ts", "exclude": ["node_modules"]
"*.d.ts"
],
"exclude": [
"node_modules"
]
} }

View File

@@ -6,11 +6,6 @@
"allowJs": true, "allowJs": true,
"noEmit": false "noEmit": false
}, },
"include": [ "include": ["*.js", "*.ts"],
"*.js", "exclude": ["node_modules"]
"*.ts"
],
"exclude": [
"node_modules"
]
} }

View File

@@ -6,15 +6,15 @@ This directory contains the server component of open-gsio, a full-stack Conversa
- `__tests__/`: Contains test files for the server components
- `services/`: Contains service modules for different functionalities
  - `AssetService.ts`: Handles static assets and SSR
  - `ChatService.ts`: Manages chat interactions with AI models
  - `ContactService.ts`: Processes contact form submissions
  - `FeedbackService.ts`: Handles user feedback
  - `MetricsService.ts`: Collects and processes metrics
  - `TransactionService.ts`: Manages transactions
- `durable_objects/`: Contains durable object implementations
  - `ServerCoordinator.ts`: Cloudflare implementation
  - `ServerCoordinatorBun.ts`: Bun implementation
- `api-router.ts`: API router
- `RequestContext.ts`: Application context
- `server.ts`: Main server entry point

View File

@@ -10,12 +10,6 @@
"allowJs": true, "allowJs": true,
"jsx": "react-jsx" "jsx": "react-jsx"
}, },
"include": [ "include": ["**/*.ts", "**/*.tsx"],
"**/*.ts", "exclude": ["node_modules", "dist"]
"**/*.tsx"
],
"exclude": [
"node_modules",
"dist"
]
} }

View File

@@ -1,5 +1,5 @@
declare global {
  type ExecutionContext = any;
  type Env = import('@open-gsio/env');
}

export type ExecutionContext = any;