
Troubleshooting and Fixing Common Errors in OpenWebUI

This section covers how to troubleshoot and fix common errors when running Open WebUI behind an Nginx proxy over HTTPS in your local browser.

Note: Please check our Getting Started Guide pages to learn how to deploy and connect to the ‘DeepSeek & Llama-powered All-in-One LLM Suite’ solution via the terminal, as well as how to access the Open WebUI interface. Alternatively, check the GPU-supported DeepSeek & Llama-powered All-in-One LLM Suite if you are using the GPU-based variant of the same offer.

A. SyntaxError: JSON.parse: unexpected character at line 1 column 1 of the JSON data, or Network Error

/img/azure/multi-llm-vm/json-parse-error.png

/img/azure/multi-llm-vm/network-error.png

If you encounter the JSON.parse error or a Network Error, follow the steps below to fix it.

  1. Connect via the terminal and edit the default nginx configuration file by running the command below:
   sudo vi /etc/nginx/sites-available/default

/img/azure/multi-llm-vm/edit-default.png

Press ‘i’ to enter insert mode, add the parameters below inside the “location” block, and save the changes by pressing the ESC key followed by :wq

  proxy_read_timeout 300s;
  proxy_send_timeout 300s;

The file should match the format shown in the screenshot below.

/img/azure/multi-llm-vm/add-proxy-timeout-in-default-nginx-file.png
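For reference, the relevant part of the file should look roughly like the sketch below. The proxy_pass target and the other proxy settings shown here are assumptions based on a typical Open WebUI reverse-proxy setup; keep whatever values your file already has and only add the two timeout directives.

  location / {
      # existing proxy settings (example values; keep the ones already in your file)
      proxy_pass http://127.0.0.1:8080;
      proxy_set_header Host $host;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";

      # the two directives added in this step
      proxy_read_timeout 300s;
      proxy_send_timeout 300s;
  }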

  2. Reboot the VM, wait a few minutes for Open WebUI to start, and access it in your local browser using https://public_ip_of_the_vm.

/img/azure/multi-llm-vm/fixed-json-error.png
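If the UI is still not reachable after the reboot, you can confirm that the relevant services came back up. This is a quick sketch; it assumes Open WebUI runs in a Docker container named open-webui, matching the log command used later in this guide.

  sudo nginx -t                              # validate the edited configuration
  systemctl status nginx --no-pager          # confirm nginx is running
  sudo docker ps --filter name=open-webui    # confirm the Open WebUI container is up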


B. 500: Ollama: 500, message='Internal Server Error', url='http://127.0.0.1:11434/api/chat'

/img/azure/multi-llm-vm/internal-server-error.png

  • If you encounter the error “500 Internal Server Error” in Open WebUI, it means that the loaded model requires more RAM than is currently available.

  • To determine how much RAM is needed, you can run the following commands in the terminal.

Check the names of the available models:

ollama list

/img/azure/multi-llm-vm/ollama-list.png

Run the model using:

  ollama run modelname

If the model needs more memory than is available, the run will fail with an error showing how much memory it requires. In that case, resizing your instance to a type with more RAM should resolve the issue.

/img/azure/multi-llm-vm/ollama-memory-error.png
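Before loading a large model, you can also compare the model's reported size against the RAM that is currently free. Both commands below are standard; replace modelname with one of the names returned by ollama list:

  free -h                 # total, used, and available system RAM
  ollama show modelname   # model details, including parameter count and quantization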


C. Multiple models are loaded in memory

  • If you switch models in Open WebUI using the dropdown menu and responses take too long, check whether the previously selected model is still loaded in memory, as this can slow down the newly loaded model.

/img/azure/multi-llm-vm/model-dropdown.png

  • To check, connect via the terminal and run:
ollama ps

/img/azure/multi-llm-vm/loaded-multiple-models.png
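The same information is also available from Ollama's HTTP API, which is handy if you want to script the check (this endpoint is part of the standard Ollama API on its default port):

  curl http://127.0.0.1:11434/api/ps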

  • This command lists the currently running models. If you see the previously loaded model in the output, you can stop it manually by running:
ollama stop modelname
  • This will immediately stop the previously loaded model, freeing memory and speeding up the newly selected one. If several models are loaded, see the sketch after the screenshot below for stopping them all at once.

/img/azure/multi-llm-vm/stop-models.png
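If more than one model is loaded, a small one-liner can stop them all in one go. This is a sketch that assumes the ollama ps output format with a header row and the model name in the first column:

  # stop every model currently loaded in memory
  ollama ps | awk 'NR>1 {print $1}' | xargs -r -n1 ollama stop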

A few commands to troubleshoot the issue and check the logs:

  • To check the Ollama logs:
journalctl -fu ollama
  • To check the Open WebUI container logs:
sudo docker logs open-webui --follow
  • To check the nginx logs:
journalctl -fu nginx
  • To check the RAM status after running a model:
free -h
  • If it is an NVIDIA GPU instance, check GPU usage by running:
watch -n 1 nvidia-smi
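If you need to share diagnostics, a short script like the sketch below collects a recent snapshot of the same logs and memory usage into one file. The service and container names match the commands above; the output path is only an example.

  #!/usr/bin/env bash
  # collect a quick diagnostics snapshot for Open WebUI troubleshooting
  out=/tmp/openwebui-diagnostics.txt
  {
    echo "=== ollama (last 100 lines) ==="
    journalctl -u ollama -n 100 --no-pager
    echo "=== nginx (last 100 lines) ==="
    journalctl -u nginx -n 100 --no-pager
    echo "=== open-webui container (last 100 lines) ==="
    sudo docker logs --tail 100 open-webui 2>&1
    echo "=== memory ==="
    free -h
  } > "$out"
  echo "Diagnostics written to $out"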