Update README.md
Signed-off-by: Geoff Seemueller <28698553+geoffsee@users.noreply.github.com>
README.md | 27
@@ -1,27 +1,12 @@
# open-web-agent-rs

-A Rust-based web agent with local inference capabilities.
+A Rust-based web agent with an embedded OpenAI-compatible inference server (supports Gemma models only).

## Components

### Local Inference Engine

The [Local Inference Engine](./local_inference_engine/README.md) provides a way to run large language models locally. It supports both CLI mode for direct text generation and server mode with an OpenAI-compatible API.
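
For CLI mode, an invocation might look like the sketch below. The `--prompt` flag is an assumption for illustration only; the engine's README documents the actual flags.

```bash
# Hypothetical CLI-mode run — the --prompt flag is a guess, not a
# documented option; see local_inference_engine/README.md for real usage.
cd local_inference_engine
cargo run --release -- --prompt "Explain Rust ownership in one paragraph."
```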

Features:
- Run Gemma models locally (1B, 2B, 7B, 9B variants)
- CLI mode for direct text generation
- Server mode with OpenAI-compatible API
- Support for various model configurations (base, instruction-tuned)
- Metal acceleration on macOS

See the [Local Inference Engine README](./local_inference_engine/README.md) for detailed usage instructions.
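
In server mode, the OpenAI-compatible API can be exercised with a plain `curl` call. A minimal sketch, assuming the server listens on `localhost:3000` and serves an instruction-tuned Gemma model registered as `gemma-2b-it` (both values are guesses; check the engine README for the real defaults):

```bash
# Minimal chat-completion request against the local server.
# Port and model id are assumptions — adjust to your configuration.
curl -s http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemma-2b-it",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}]
  }'
```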

### Web Server

The web server is currently being converted to MCP (Model Context Protocol), so things are probably broken.

## Quickstart
```bash
cp .env.example .env                                             # copy the example environment config
bun i                                                            # install JavaScript dependencies
(cd local_inference_engine && cargo run --release -- --server)   # start the local inference server
docker compose up -d searxng                                     # start the SearXNG search backend
bun dev                                                          # start the web agent in dev mode
```
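
Once everything is up, a rough smoke test might look like the sketch below. The SearXNG port is an assumption based on a typical compose mapping; your `.env` and compose file are authoritative.

```bash
# Rough smoke test — the port below is an assumption, adjust to your setup.
docker compose ps                      # the searxng container should be "running"
curl -s http://localhost:8080/ | head  # SearXNG should respond with HTML
```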