open-gsio

open-gsio is a full-stack conversational AI application.

Table of Contents

  - Installation
  - Deployment
  - Local Inference
  - Testing
  - Troubleshooting
  - History
  - Acknowledgments
  - License

Installation

  1. bun i && bun test:all
  2. Set up Local Inference OR add your own GROQ_API_KEY in packages/cloudflare-workers/open-gsio/.dev.vars (see the example below)
  3. In isolated shells, run bun run server:dev and bun run client:dev

Note: it should be possible to use pnpm in place of bun.
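
For step 2, a minimal .dev.vars might look like the following sketch; the variable name comes from the step above, and the value is a placeholder, not a real key:

# packages/cloudflare-workers/open-gsio/.dev.vars
GROQ_API_KEY=your-groq-api-key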

Deployment

  1. Set up the KV_STORAGE binding in packages/server/wrangler.jsonc (see the sketch below)
  2. Add keys in secrets.json
  3. Run bun run deploy && bun run deploy:secrets && bun run deploy

Note: Subsequent deployments should omit bun run deploy:secrets.
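
As a sketch of steps 1 and 2, the KV binding and secrets file might look like the following; the binding name comes from step 1, while the namespace ID and secret keys are placeholders rather than the project's actual values:

// packages/server/wrangler.jsonc (excerpt)
{
  "kv_namespaces": [
    { "binding": "KV_STORAGE", "id": "<your-kv-namespace-id>" }
  ]
}

// secrets.json (keys are illustrative)
{
  "GROQ_API_KEY": "your-groq-api-key"
}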

Local Inference

Local inference is supported for Ollama and mlx-omni-server. Any OpenAI-compatible server can be used by overriding OPENAI_API_KEY and OPENAI_API_ENDPOINT.
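
For example, assuming these variables are read from the same .dev.vars file used during installation, pointing the stack at a generic OpenAI-compatible endpoint could look like this (the URL and key are illustrative placeholders):

# packages/cloudflare-workers/open-gsio/.dev.vars
OPENAI_API_KEY=sk-placeholder
OPENAI_API_ENDPOINT=http://localhost:11434/v1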

mlx-omni-server (default, Apple Silicon only)

# (prereq) install mlx-omni-server
brew tap seemueller-io/tap
brew install seemueller-io/tap/mlx-omni-server

bun run openai:local mlx-omni-server         # Start mlx-omni-server
bun run openai:local:configure               # Configure connection
bun run server:dev                           # Restart server
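
To verify the server is up before configuring the connection, the standard OpenAI-compatible model-listing endpoint can be queried (port 10240 matches the curl example in the next section):

# Check that mlx-omni-server is responding
curl http://localhost:10240/v1/models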

Adding models for local inference (Apple Silicon)

# ensure mlx-omni-server is running

# See https://huggingface.co/mlx-community for available models
MODEL_TO_ADD=mlx-community/gemma-3-4b-it-8bit

curl http://localhost:10240/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "{
    \"model\": \"$MODEL_TO_ADD\",
    \"messages\": [{\"role\": \"user\", \"content\": \"Hello\"}]
  }"

Ollama

bun run openai:local ollama                  # Start ollama server
bun run openai:local:configure               # Configure connection
bun run server:dev                           # Restart server

Adding models for local inference (Ollama)

# See https://ollama.com/library for available models
# Use the Ollama web UI at http://localhost:8080
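
If you prefer the terminal over the web UI, models can also be pulled with the ollama CLI; the model name below is an example, and any model from the library works:

# Pull a model from https://ollama.com/library
ollama pull llama3.2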

Testing

Tests are located in __tests__ directories next to the code they test. Testing is incomplete at this time.

bun test:all runs the full test suite.
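
Because tests sit in __tests__ directories next to the code, bun's test runner can typically target a subset by path (the path shown is illustrative):

# Run tests for a single package
bun test packages/server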

Troubleshooting

  1. bun clean
  2. bun i
  3. bun server:dev
  4. bun client:dev
  5. Submit an issue

History

A high-level overview of the development history of the parent repository, geoff-seemueller-io, is provided in LEGACY.md.

Acknowledgments

I would like to express gratitude to the following projects, libraries, and individuals that have contributed to making open-gsio possible:

License

MIT License

Copyright (c) 2025 Geoff Seemueller

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.