run format

commit f76301d620 (parent 02c3253343), committed by Geoff Seemueller

README.md

# open-gsio

[![Tests](https://github.com/geoffsee/open-gsio/actions/workflows/test.yml/badge.svg)](https://github.com/geoffsee/open-gsio/actions/workflows/test.yml)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

<br />

<p align="center">
  <img src="https://github.com/user-attachments/assets/620d2517-e7be-4bb0-b2b7-3aa0cba37ef0" width="250" />
</p>

This is a full-stack conversational AI.

## Table of Contents

- [Installation](#installation)
- [Deployment](#deployment)
- [Local Inference](#local-inference)
  - [mlx-omni-server (default)](#mlx-omni-server)
    - [Adding models](#adding-models-for-local-inference-apple-silicon)
  - [Ollama](#ollama)
    - [Adding models](#adding-models-for-local-inference-ollama)
- [Testing](#testing)
- [Troubleshooting](#troubleshooting)
- [Acknowledgments](#acknowledgments)
- [License](#license)

## Installation

1. `bun i && bun test:all`

> Note: it should be possible to use pnpm in place of bun.

## Deployment

1. Set up the KV_STORAGE binding in `packages/server/wrangler.jsonc` (a sketch follows this list)
1. [Add keys in secrets.json](https://console.groq.com/keys)
1. Run `bun run deploy && bun run deploy:secrets && bun run deploy`

> Note: Subsequent deployments should omit `bun run deploy:secrets`.
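
A minimal sketch of what step 1 can involve, assuming Wrangler is installed and authenticated; the namespace id is a placeholder printed by the command (on Wrangler v4 the subcommand is `wrangler kv namespace create`):

```bash
# Create the KV namespace the worker will bind to
npx wrangler kv:namespace create "KV_STORAGE"
# Copy the printed id into packages/server/wrangler.jsonc, e.g.:
#   "kv_namespaces": [{ "binding": "KV_STORAGE", "id": "<namespace-id>" }]
```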

## Local Inference

> Local inference is supported for Ollama and mlx-omni-server. OpenAI-compatible servers can be used by overriding OPENAI_API_KEY and OPENAI_API_ENDPOINT, as sketched below.
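
For example, targeting a generic OpenAI-compatible server might look like the following; the endpoint URL and key are placeholders, and only the two variable names come from the note above:

```bash
export OPENAI_API_ENDPOINT=http://localhost:8000/v1  # any OpenAI-compatible server
export OPENAI_API_KEY=sk-placeholder                 # many local servers accept any value
bun run server:dev                                   # restart the server to pick up the overrides
```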

### mlx-omni-server

(default) (Apple Silicon only)

```bash
# (prereq) install mlx-omni-server
brew tap seemueller-io/tap
brew install seemueller-io/tap/mlx-omni-server

bun run openai:local mlx-omni-server  # Start mlx-omni-server
bun run openai:local:configure        # Configure connection
bun run server:dev                    # Restart server
```

#### Adding models for local inference (Apple Silicon)

```bash
# ensure mlx-omni-server is running

# See https://huggingface.co/mlx-community for available models
MODEL_TO_ADD=mlx-community/<model-id>

curl http://localhost:10240/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "{
    \"model\": \"$MODEL_TO_ADD\",
    \"messages\": [{\"role\": \"user\", \"content\": \"Hello\"}]
  }"
```
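
Because mlx-omni-server exposes an OpenAI-compatible API, the models it knows about can likely be verified afterwards; this assumes the standard `/v1/models` route is implemented:

```bash
curl http://localhost:10240/v1/models  # list models known to the server
```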

### Ollama

```bash
bun run openai:local ollama     # Start ollama server
bun run openai:local:configure  # Configure connection
bun run server:dev              # Restart server
```

#### Adding models for local inference (ollama)

```bash
# See https://ollama.com/library for available models
# Use the ollama web UI at http://localhost:8080
```
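
Alternatively, if the `ollama` CLI is installed, models can be pulled directly; the model name below is only an example:

```bash
ollama pull llama3.2  # any model name from https://ollama.com/library
```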

## Testing

Tests are located in `__tests__` directories next to the code they test. Testing uses [Vitest](https://vitest.dev/).

> `bun test:all` will run all tests
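
To run a single suite instead of everything, a direct Vitest invocation along these lines should work; the path is a placeholder, and this assumes the workspace's default Vitest config:

```bash
bunx vitest run packages/server/src/__tests__
```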

## Troubleshooting

1. `bun clean`
1. `bun i`
1. `bun server:dev`
1. `bun client:dev`
1. Submit an issue

## History

A high-level overview of the development history of the parent repository, [geoff-seemueller-io](https://geoff.seemueller.io), is provided in [LEGACY.md](./LEGACY.md).

## Acknowledgments

I would like to express gratitude to the following projects, libraries, and individuals that have contributed to making open-gsio possible:

- [TypeScript](https://www.typescriptlang.org/) - Primary programming language
- [React](https://react.dev/) - UI library for building the frontend
- [Vike](https://vike.dev/) - Framework for server-side rendering and routing
- [Cloudflare Workers](https://developers.cloudflare.com/workers/) - Serverless execution environment
- [Bun](https://bun.sh/) - JavaScript runtime and toolkit
- [itty-router](https://github.com/kwhitley/itty-router) - Lightweight router for serverless environments
- [MobX-State-Tree](https://mobx-state-tree.js.org/) - State management solution
- [OpenAI SDK](https://github.com/openai/openai-node) - Client for AI model integration
- [Vitest](https://vitest.dev/) - Testing framework
- [OpenAI](https://github.com/openai)
- [Groq](https://console.groq.com/) - Fast inference API
- [Anthropic](https://www.anthropic.com/) - Creator of Claude models
- [Fireworks](https://fireworks.ai/) - AI inference platform
- [XAI](https://x.ai/) - Creator of Grok models
- [Cerebras](https://www.cerebras.net/) - AI compute and models
- [(madroidmaq) MLX Omni Server](https://github.com/madroidmaq/mlx-omni-server) - Open-source high-performance inference for Apple Silicon
- [MLX](https://github.com/ml-explore/mlx) - An array framework for Apple silicon
- [Ollama](https://github.com/ollama/ollama) - Versatile solution for self-hosting models

## License

```text
MIT License

Copyright (c) 2025 Geoff Seemueller

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```