Mirror of https://github.com/geoffsee/open-gsio.git (synced 2025-09-08 22:56:46 +00:00)
Add scripts and documentation for local inference configuration with Ollama and mlx-omni-server
- Introduced `configure_local_inference.sh` to automatically set `.dev.vars` based on active local inference services (a rough sketch of the idea follows below).
- Updated `start_inference_server.sh` to handle both Ollama and mlx-omni-server server types.
- Enhanced `package.json` to include new commands for starting and configuring inference servers.
- Refined README to include updated instructions for running and adding models for local inference.
- Minor cleanup in `MessageBubble.tsx`.
committed by Geoff Seemueller
parent f2d91e2752
commit 9e8b427826
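The contents of `configure_local_inference.sh` are not part of the diff shown below, so the following is only a rough sketch of the detection approach the first bullet describes. The Ollama port (11434) matches the `docker run` mapping in this commit; the mlx-omni-server port (10240) and the `.dev.vars` key name are assumptions, not taken from the repository.

```bash
#!/usr/bin/env bash
# Hypothetical sketch: detect whichever local inference service is running
# and write its OpenAI-compatible base URL into .dev.vars.
# LOCAL_INFERENCE_URL and the mlx-omni-server port are assumed names/values.

if curl -sf http://localhost:11434/api/tags > /dev/null; then
  # Ollama answers on 11434 (the same port the docker run command below maps).
  echo 'LOCAL_INFERENCE_URL=http://localhost:11434/v1' > .dev.vars
elif curl -sf http://localhost:10240/v1/models > /dev/null; then
  # Assumed default port for mlx-omni-server.
  echo 'LOCAL_INFERENCE_URL=http://localhost:10240/v1' > .dev.vars
else
  echo "No local inference service detected; .dev.vars not modified." >&2
  exit 1
fi
```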
start_inference_server.sh
@@ -1,8 +1,12 @@
 #!/usr/bin/env bash
 
-SERVER_TYPE="mlx-omni-server"
-
-printf "Starting Inference Server: %s\n" ${SERVER_TYPE}
-
-
-mlx-omni-server --log-level debug
+if [ "$1" = "mlx-omni-server" ]; then
+  printf "Starting Inference Server: %s\n" "$1"
+  mlx-omni-server --log-level debug
+elif [ "$1" = "ollama" ]; then
+  echo "starting ollama"
+  docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
+else
+  printf "Error: First argument must be 'mlx-omni-server'\n"
+  exit 1
+fi
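With this change the first argument selects the backend. Assuming the script is invoked from the directory that contains it (the commit message notes that `package.json` gains wrapper commands for this), usage would look like:

```bash
# Serve models with mlx-omni-server (runs in the foreground with debug logging)
./start_inference_server.sh mlx-omni-server

# Or start the Ollama Docker container, which listens on port 11434
./start_inference_server.sh ollama

# Example of adding a model to the running Ollama container (model name is illustrative)
docker exec -it ollama ollama pull llama3.2
```

Any other argument falls through to the else branch and exits with status 1, although the error message only mentions mlx-omni-server.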