OpenAI Models Viewer

A web-based interface for browsing and interacting with OpenAI-compatible API endpoints. This application allows you to manage multiple server connections, view available models, and chat with AI models directly from your browser.

Features

  • Multi-Server Management: Add and manage multiple OpenAI-compatible endpoints
  • Model Discovery: Browse and view all available models from configured servers
  • Interactive Chat: Chat directly with AI models through a clean web interface
  • Local Storage: Server configurations, including API keys, are saved in the browser's localStorage
  • Responsive Design: Works on desktop and mobile devices

Project Structure

├── app.py              # Flask backend server
├── requirements.txt    # Python dependencies
├── static/
│   ├── index.html      # Main HTML interface
│   ├── script.js       # Client-side JavaScript logic
│   └── style.css       # Styling and layout

Getting Started

Prerequisites

  • Python 3.7+
  • pip (Python package manager)

Installation

  1. Clone the repository:
     git clone <repository-url>
     cd openai-models-viewer
  2. Install Python dependencies:
     pip install -r requirements.txt
  3. Run the application:
     python app.py

The application will start on http://localhost:5000 by default.
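The contents of app.py are not shown in this README; as a hedged sketch, a minimal Flask backend that only serves the static front end might look like this (the route and folder layout below follow the project structure above, but the actual implementation may differ):

```python
from flask import Flask, send_from_directory

# Hypothetical sketch of app.py; the real implementation is not shown here.
app = Flask(__name__, static_folder="static", static_url_path="")

@app.route("/")
def index():
    # Serve the single-page interface from the static/ directory.
    return send_from_directory(app.static_folder, "index.html")

# app.run(host="0.0.0.0", port=5000)  # binds to all interfaces, as a container deployment expects
```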

Usage

  1. Add a Server:

    • Click the gear icon next to the server selector
    • Enter a server name (e.g., "OpenAI Production")
    • Enter the API endpoint URL (e.g., https://api.openai.com)
    • Enter your API key
    • Click "Add Server"
  2. View Models:

    • Select a configured server from the dropdown
    • Models will automatically load and display
  3. Chat with Models:

    • Click on any model to open the chat interface
    • Type your message and press Enter or click Send
    • View the AI's response in real-time
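The chat flow above uses the standard OpenAI chat-completions request shape. As an illustrative sketch (the helper names and model string are placeholders, not the project's actual code), the request body the front end sends and the field the reply comes back in look like this:

```python
def build_chat_request(model, user_message):
    # Standard OpenAI-compatible chat completion request body.
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

def extract_reply(response_json):
    # The assistant's text lives at choices[0].message.content.
    return response_json["choices"][0]["message"]["content"]
```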

Development

Running on a Custom Port

python app.py 8080
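A positional port argument like this is typically read from sys.argv. A minimal sketch of that handling (an assumption for illustration, not the project's actual code):

```python
import sys

def get_port(argv, default=5000):
    # Assumption: an optional first CLI argument overrides the default port.
    if len(argv) > 1:
        try:
            return int(argv[1])
        except ValueError:
            pass  # ignore a non-numeric argument and fall back to the default
    return default

port = get_port(sys.argv)
```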

Docker Support

Build and run with Docker:

docker build -t openai-models-viewer .
docker run -p 5000:5000 openai-models-viewer

API Endpoints

The application connects to standard OpenAI-compatible endpoints:

  • /models - List available models
  • /v1/chat/completions - Chat completion endpoint
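Requests to these endpoints are built from the base URL and API key stored per server. A small sketch of that construction (the helper names are illustrative, not from the project):

```python
def endpoint_url(base_url, path):
    # Join the stored base URL with an API path, tolerating trailing slashes.
    return base_url.rstrip("/") + path

def auth_headers(api_key):
    # OpenAI-compatible APIs expect a Bearer token in the Authorization header.
    return {"Authorization": f"Bearer {api_key}"}
```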

Security

  • API keys are stored locally in browser localStorage
  • All communication happens directly between your browser and the API endpoints
  • No server-side storage of sensitive information

Contributing

  1. Fork the repository
  2. Create your feature branch
  3. Commit your changes
  4. Push to the branch
  5. Open a pull request

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

  • Built with Flask (Python web framework)
  • Uses modern JavaScript for client-side functionality
  • Designed with responsive CSS for cross-device compatibility