From 2f19477aba7f0a15f3d6c59c9ec91323cb17f05a Mon Sep 17 00:00:00 2001
From: geoffsee <>
Date: Thu, 5 Jun 2025 23:32:20 -0400
Subject: [PATCH] Update Acknowledgements

---
 README.md | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/README.md b/README.md
index 7a91d1e..f1b1ca1 100644
--- a/README.md
+++ b/README.md
@@ -9,6 +9,27 @@ This project is organized as a Cargo workspace with the following crates:
 - `agent-server`: The main web agent server
 - `local_inference_engine`: An embedded OpenAI-compatible inference server for Gemma models
 
+## Acknowledgements
+
+Special thanks to:
+
+- The Rust community for their excellent tools and libraries
+- The Gemma team for making their models available
+- Open source projects that have inspired and enabled this work
+
+- **[axum](https://github.com/tokio-rs/axum)**: Web framework for building APIs
+- **[tokio](https://github.com/tokio-rs/tokio)**: Asynchronous runtime for efficient concurrency
+- **[serde](https://github.com/serde-rs/serde)**: Serialization/deserialization framework
+- **[rmcp](https://github.com/model-context-protocol/rmcp)**: Model Context Protocol SDK for agent communication
+- **[sled](https://github.com/spacejam/sled)**: Embedded database for persistent storage
+- **[tower-http](https://github.com/tower-rs/tower-http)**: HTTP middleware components
+- **[candle-core](https://github.com/huggingface/candle)**: ML framework for efficient tensor operations
+- **[candle-transformers](https://github.com/huggingface/candle/tree/main/candle-transformers)**: Implementation of
+  transformer models in Candle
+- **[hf-hub](https://github.com/huggingface/hf-hub)**: Client for downloading models from Hugging Face
+- **[tokenizers](https://github.com/huggingface/tokenizers)**: Fast text tokenization for ML models
+- **[safetensors](https://github.com/huggingface/safetensors)**: Secure format for storing tensors
+
 ## Architecture Diagram
 
 ```mermaid