Add CI and related fixes (#1)
All checks were successful (continuous-integration/drone/push: Build is passing)

Reviewed-on: #1
Co-authored-by: Juanjo Gutiérrez <juanjo@gutierrezdequevedo.com>
Co-committed-by: Juanjo Gutiérrez <juanjo@gutierrezdequevedo.com>
This commit is contained in:
parent 50d66c2985
commit df5b0b4400
6 changed files with 87 additions and 9 deletions

.drone.yml Normal file

@@ -0,0 +1,36 @@
---
kind: pipeline
type: docker
name: default

steps:
  - name: docker image build
    image: plugins/docker
    settings:
      tags:
        - latest
      repo: docker.gutierrezdequevedo.com/ps/model-viewer

---
kind: pipeline
type: docker
name: notification

depends_on:
  - default

steps:
  - name: notify matrix
    image: spotlightkid/drone-matrixchat-notify
    settings:
      homeserver: 'https://grava.work'
      roomid: '!wMVeFx6jwwF0TWA18h:grava.work'
      userid: '@juanjo:grava.work'
      deviceid: 'drone CI'
      accesstoken: G66FRa3fG7qNfM4KKoW5wx6TWophvvtF
      markdown: 'yes'
      template: |
        `${DRONE_REPO}` build #${DRONE_BUILD_NUMBER} status: **${DRONE_BUILD_STATUS}**
        [${DRONE_BUILD_LINK}]

Dockerfile Normal file

@@ -0,0 +1,10 @@
FROM python:3.14-slim
WORKDIR /app
COPY . .
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 8000
CMD ["python", "-m", "gunicorn", "app:app", "--access-logfile", "-", "-b", "0.0.0.0:8000"]

README.md Normal file

@@ -0,0 +1,32 @@
# OpenAI Models Viewer

A web-based interface for browsing and interacting with OpenAI-compatible API endpoints. This application allows you to manage multiple server connections, view available models, and chat with AI models directly from your browser.

## Features

- **Multi-Server Management**: Add and manage multiple OpenAI-compatible endpoints
- **Model Discovery**: Browse and view all available models from configured servers
- **Interactive Chat**: Chat directly with AI models through a clean web interface
- **Local Storage**: Securely stores server configurations in browser localStorage
- **Responsive Design**: Works on desktop and mobile devices

### Docker Support

Build and run with Docker:

```bash
docker build -t openai-models-viewer .
docker run -p 8000:8000 openai-models-viewer
```

## API Endpoints

The application connects to standard OpenAI-compatible endpoints:

- `/models` - List available models
- `/v1/chat/completions` - Chat completion endpoint
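As an illustrative sketch (not code from this repo), fetching the model list from an OpenAI-compatible server could look like the following; `base_url`, `api_key`, and the helper names are hypothetical:

```python
import json
import urllib.request

def models_url(base_url):
    """Join a server base URL with the /models path (hypothetical helper)."""
    return base_url.rstrip('/') + '/models'

def list_models(base_url, api_key=None):
    """Return the model ids advertised by an OpenAI-compatible endpoint."""
    req = urllib.request.Request(models_url(base_url))
    if api_key:
        req.add_header('Authorization', f'Bearer {api_key}')
    with urllib.request.urlopen(req, timeout=10) as resp:
        payload = json.load(resp)
    # OpenAI-compatible servers respond with {"object": "list", "data": [...]}
    return [m['id'] for m in payload.get('data', [])]
```

Whether a given server expects `/models` or `/v1/models` depends on what you use as the base URL.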
## Security

- API keys are stored locally in browser localStorage
- All communication happens directly between your browser and the API endpoints
- No server-side storage of sensitive information

app.py

@@ -1,18 +1,17 @@
 from flask import Flask, send_from_directory
 import os

 app = Flask(__name__, static_folder='static', static_url_path='/static')

 @app.route('/')
 def index():
     return send_from_directory('static', 'index.html')

 # Serve static files
 @app.route('/<path:filename>')
 def static_files(filename):
     return send_from_directory('static', filename)

 if __name__ == '__main__':
-    import sys
-    port = int(sys.argv[1]) if len(sys.argv) > 1 else 5000
-    app.run(debug=True, host='0.0.0.0', port=port)
+    app.run(debug=True, host='0.0.0.0', port=5000)
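As a hedged sketch (not part of the commit), the routes in this file can be smoke-tested with Flask's test client; the throwaway `static` directory below is an assumption made for the example:

```python
import os
import tempfile

from flask import Flask, send_from_directory

# Build a throwaway static/ directory so the routes have something to serve.
root = tempfile.mkdtemp()
static_dir = os.path.join(root, 'static')
os.makedirs(static_dir)
with open(os.path.join(static_dir, 'index.html'), 'w') as f:
    f.write('<h1>ok</h1>')

app = Flask(__name__, static_folder=static_dir, static_url_path='/static')

@app.route('/')
def index():
    # Same shape as the route in the diff: serve index.html from static/
    return send_from_directory(static_dir, 'index.html')

@app.route('/<path:filename>')
def static_files(filename):
    return send_from_directory(static_dir, filename)

client = app.test_client()
status = client.get('/').status_code
```

The catch-all `/<path:filename>` route is why the HTML can use relative `style.css`/`script.js` URLs.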

requirements.txt

@@ -1,2 +1,3 @@
flask
gunicorn
requests

static/index.html

@@ -4,7 +4,7 @@
 <meta charset="UTF-8">
 <meta name="viewport" content="width=device-width, initial-scale=1.0">
 <title>OpenAI Models Viewer</title>
-<link rel="stylesheet" href="/static/style.css">
+<link rel="stylesheet" href="style.css">
 </head>
 <body>
 <div class="container">
@@ -91,6 +91,6 @@
 </div>
 </div>
-<script src="/static/script.js"></script>
+<script src="script.js"></script>
 </body>
 </html>