On-Premise & Hybrid Hosting

As an alternative to default cloud hosting, bespoke on-premise or hybrid hosting gives you control over where your data resides.

Note: On-premise and hybrid hosting are available with the VIP Club add-on.

Hosting Options

Get in touch with your Yola account representative to discuss which hosting solution is right for you and the manual deployment procedure it involves.

  • On-premise hosting means all database content (e.g. AI configurations, conversations, and profiles of registered players), bucket content (files & attachments), and the Yola platform (web app) are hosted independently by you.

  • Hybrid hosting means all database and bucket content is hosted independently by you, but hosting and provision of the Yola platform (web app) is maintained by us.


System requirements

Software

  • OS: Ubuntu 22.04.3 LTS

  • Web Server: Nginx 1.14.0 or later

  • Docker: 24.0.0 or later

  • Docker Compose: 2.23.0 or later

  • AWS CLI: 2.13.30 or later

  • Python: 3.10 or later
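When checking installed tool versions (e.g. from docker --version output) against the minimums above, comparing raw strings is error-prone because "2.9" sorts after "2.10" lexicographically. A small illustrative helper (not part of the Yola tooling) that compares dotted version strings numerically:

```python
def meets_minimum(version: str, minimum: str) -> bool:
    """True if dotted version string `version` is >= `minimum`.

    Illustrative helper for checking the system requirements above;
    components are compared numerically, not as strings.
    """
    a = [int(p) for p in version.split(".")]
    b = [int(p) for p in minimum.split(".")]
    # Pad the shorter list with zeros so "2.23" compares equal to "2.23.0".
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    return a >= b
```

For example, meets_minimum("2.10.0", "2.9.0") is True even though a plain string comparison would say otherwise.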

Hardware

  • Memory: 32 GB RAM minimum

  • Disk Space: 20 GB required for a safe installation; an additional 500 GB is recommended for database storage.

Network

  • Inbound traffic allowed on ports 80/443 (HTTP/HTTPS)

External Services

  • SendGrid API key

  • OpenAI or Azure OpenAI API key


ChatGPT data privacy

If you have an AI agent with a ChatGPT LLM enabled, we recommend using Azure's OpenAI service instead of OpenAI directly. The benefits of Azure's OpenAI service include:

  • Faster response times

  • 99.9% uptime SLA

  • Stronger data privacy

From Microsoft's legal resource for data, privacy, and security for Azure OpenAI Service:

Your prompts (inputs) and completions (outputs), your embeddings, and your training data:

  • are NOT available to other customers.

  • are NOT available to OpenAI.

  • are NOT used to improve OpenAI models.

  • are NOT used to improve any Microsoft or 3rd party products or services.

  • are NOT used for automatically improving Azure OpenAI models for your use in your resource (The models are stateless, unless you explicitly fine-tune models with your training data).

  • Your fine-tuned Azure OpenAI models are available exclusively for your use.

The Azure OpenAI Service is fully controlled by Microsoft; Microsoft hosts the OpenAI models in Microsoft’s Azure environment and the Service does NOT interact with any services operated by OpenAI (e.g. ChatGPT, or the OpenAI API).
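In practice, choosing between the two providers usually comes down to which credentials are configured. A minimal sketch of a provider-selection helper that prefers Azure when its endpoint is set (the function and the environment variable names mirror common OpenAI/Azure conventions and are illustrative, not Yola configuration keys):

```python
def pick_provider(env: dict) -> dict:
    """Prefer Azure OpenAI when its endpoint and key are configured.

    Hypothetical helper; variable names follow common OpenAI/Azure
    conventions and are not Yola-specific settings.
    """
    if env.get("AZURE_OPENAI_ENDPOINT") and env.get("AZURE_OPENAI_API_KEY"):
        return {
            "provider": "azure",
            "endpoint": env["AZURE_OPENAI_ENDPOINT"],
            "api_key": env["AZURE_OPENAI_API_KEY"],
        }
    # Fall back to calling OpenAI directly.
    return {"provider": "openai", "api_key": env.get("OPENAI_API_KEY", "")}
```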


Installation

Note: These installation steps are specific to on-premise hosting and are for reference only. Yola personnel will provide direct guidance on your host migration process.

There are three components that must be served by your local server:

  1. The backend platform

  2. The Yola platform (web app)

  3. The webchat library

These three components can be served directly from Nginx. As components (2) and (3) are pre-built, they only need to be served as static files from Nginx without any configuration.

You will be provided with the following files to scaffold the on-premise deployment:

  • A Python file: yola_scaffold.py

  • A JSON file: yola_credentials.json

To scaffold:

  1. Place both files in your desired installation directory.

  2. Run python yola_scaffold.py init

The command will not start Yola. It will create a folder containing the relevant service configuration TOML files, authenticate your instance with our private Docker container repository, and add a docker-compose.yml.


Configuration

In the newly created directory, you'll find a subdirectory named backend/config/. This contains all the configurable parameters for the backend platform. While many parameters might not require adjustments, the following are mandatory:

Nginx example

The backend platform runs its services on port 8080 inside the containers, and each service is exposed under the api/ prefix. The web app and the webchat library are served from / and /webchat, respectively.
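A sketch of a matching Nginx server block, assuming the layout described above (the server name, static-file paths, and upstream address are placeholders; your Yola-provided configuration may differ):

```nginx
server {
    listen 80;
    server_name yola.example.com;  # placeholder domain

    # Backend services: containers listen on port 8080, proxied under api/.
    location /api/ {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Pre-built webchat library, served as static files.
    location /webchat/ {
        alias /var/www/yola/webchat/;
    }

    # Pre-built Yola web app, served as static files from the root.
    location / {
        root /var/www/yola/webapp;
        try_files $uri $uri/ /index.html;
    }
}
```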

Managing platform state

  • Start the platform: in backend/ run docker compose up -d

  • Stop the platform: in backend/ run docker compose down

  • Update the platform from remote: python yola_scaffold.py update

  • Connect directly to the database: psql -U postgres -d platform
