
How to Use RAGFlow

This guide provides step-by-step instructions for getting started with RAGFlow.

  1. RAGFlow is a RAG engine and needs to work with an LLM to offer grounded, hallucination-free question-answering capabilities. This VM comes with preinstalled Ollama LLMs (DeepSeek-R1, Qwen 2.5, Mistral, Gemma, Llama, LLaVA), so you can get started quickly with these local models. You can also easily add your own models via Ollama or any other LLM provider, giving you complete control over which models you use and how you run them.

Below is the list of LLMs available on this VM:

llava:latest
llava:7b
llava:13b
llava:34b
qwen2.5:14b
qwen2.5:32b
qwen2.5:72b
qwen2.5:7b
deepseek-r1:70b
deepseek-r1:32b
deepseek-r1:14b
deepseek-r1:8b
nomic-embed-text:latest
gemma2:27b
mistral:latest
llama3.3:latest

  2. To add and configure an LLM:
  • Log in to your RAGFlow web interface.

  • Click your logo at the top right of the page and navigate to the Model providers page.

/img/azure/ragflow-vm/settings.png

  • Click the “Add Model” link under Ollama and add your desired Ollama model. Here we add the deepseek-r1:8b chat model and the llava:7b vision model. The VM comes with BAAI and Youdao text embedding models available by default.

/img/azure/ragflow-vm/navigate-to-model-providers.png

  • In the pop-up window, complete the basic settings for Ollama:

    • Select the type of model from the dropdown menu: chat, embedding, vision, or image2text.
    • Ensure that your model name and type match a model that has been pulled on this VM.
    • For the Ollama base URL, enter http://host.docker.internal:11434/v1.
    • Set the token size, e.g. 1024.
    • OPTIONAL: Switch on the toggle under Does it support Vision? if your model is an image-to-text model, e.g. LLaVA.
    • Click OK to save the model. Saving takes some time; once done, a Modified message appears at the top of the page.
    • You can add more than one model by repeating the above steps.

/img/azure/ragflow-vm/add-llm.png

/img/azure/ragflow-vm/add-llm-vision.png

  • Once added, you can see the list of available models here:

/img/azure/ragflow-vm/view-llms.png
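If a newly added model fails to save, it is worth confirming that Ollama itself is reachable first. Below is a minimal sketch, assuming Ollama listens on its default port 11434; note that host.docker.internal resolves only inside RAGFlow's Docker containers, so from the VM shell itself you would query localhost.

```shell
# Sanity-check the Ollama endpoint before registering models in RAGFlow.
# /api/tags is Ollama's documented REST endpoint for listing pulled models.
check_ollama() {
  base_url="${1:-http://localhost:11434}"
  if curl -sf "${base_url}/api/tags" >/dev/null; then
    echo "ollama reachable"
  else
    echo "ollama unreachable"
  fi
}
```

Usage: run `check_ollama` from the VM shell, or pass a different base URL as the first argument.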

  3. Click Set Default Models to select the default models: Chat model, Embedding model, Image-to-text model, and more.

/img/azure/ragflow-vm/set-default-models.png

/img/azure/ragflow-vm/set-default-models-02.png

  4. Create your first knowledge base:

You can upload files to a knowledge base in RAGFlow and parse them into datasets. A knowledge base is essentially a collection of datasets. Question answering in RAGFlow can be based on a particular knowledge base or on multiple knowledge bases. File formats that RAGFlow supports include documents (PDF, DOC, DOCX, TXT, MD, MDX), tables (CSV, XLSX, XLS), pictures (JPEG, JPG, PNG, TIF, GIF), and slides (PPT, PPTX).
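Before bulk-uploading, it can help to screen files against the supported formats. Here is a small sketch based on the format list above; matching by extension is a simplification, and lowercase extensions are assumed.

```shell
# Return success if the file's extension is in RAGFlow's supported-format
# list (copied from this guide; lowercase extensions assumed).
supported_by_ragflow() {
  case "${1##*.}" in
    pdf|doc|docx|txt|md|mdx|csv|xlsx|xls|jpeg|jpg|png|tif|gif|ppt|pptx)
      return 0 ;;
    *)
      return 1 ;;
  esac
}
```

Usage: `supported_by_ragflow report.pdf && echo "ok to upload"`.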

  • To create your first knowledge base:

    Click the Dataset tab in the top middle of the homepage and click + Create Knowledge base.

    /img/azure/ragflow-vm/dataset.png

    /img/azure/ragflow-vm/create-knowledge-base-01.png

    Input the name of your knowledge base and click OK to save your changes.

    /img/azure/ragflow-vm/create-knowledge-base-02.png

    You are taken to the Configuration page of your knowledge base.

  • Click + Add file > Local files to start uploading a particular file to the knowledge base.

/img/azure/ragflow-vm/add-file.png

  • In the uploaded file entry, click the play button to start file parsing. To parse the uploaded file, RAGFlow uses the default embedding model configured in Set Default Models in Step 3 above.

/img/azure/ragflow-vm/file-parsing.png

  • Once parsing is complete, you can view the chunk results by clicking the file name.

/img/azure/ragflow-vm/parsing-completed.png

/img/azure/ragflow-vm/chunk-result.png

  5. Setting up AI Chat in RAGFlow

Once you have created your knowledge base and finished file parsing, you can go ahead and start an AI conversation based on a particular knowledge base or multiple knowledge bases.

  • Click the Chat tab at the top middle of the homepage > Create Chat. Give your chat assistant a name and click Save. Click the newly created chat assistant to open its Chat Configuration dialogue.

/img/azure/ragflow-vm/chat.png

/img/azure/ragflow-vm/create-chat.png

  • Adjust the chat settings in the settings pop-up: select your knowledge base, provide your Tavily API key, set the temperature, select the model, and so on. Save your changes and start chatting with your AI assistant. Click the + icon in the left pane to start the chat.

Note: RAGFlow offers the flexibility of choosing a different chat model for each dialogue, while allowing you to set the default models in System Model Settings.

/img/azure/ragflow-vm/chat-settings.png

/img/azure/ragflow-vm/ai-chat.png

  6. Setting up AI Search

The key difference between an AI search and an AI chat is that a chat is a multi-turn AI conversation in which you can define your retrieval strategy and choose your chat model, whereas an AI search is a single-turn AI conversation using a predefined retrieval strategy and the system’s default chat model.

To start the search feature:

  • Click the Search option at the top of the page, then click Create Search.

/img/azure/ragflow-vm/create-search.png

/img/azure/ragflow-vm/create-search-02.png

  • Open the Search Settings, select your dataset and other options, and save the changes.
  • Run the search.
  • Click the output to go to the page for your search query.

/img/azure/ragflow-vm/search-settings.png

/img/azure/ragflow-vm/search-result.png

/img/azure/ragflow-vm/search-result-02.png

  7. Create an Agent

Agents and RAG are complementary techniques, each enhancing the other’s capabilities in business applications.

  • Click the Agent tab at the top middle of the page to open the Agent page, then click Create Agent. As shown in the screenshot below, the cards on this page represent the various agent templates, which you can continue to edit. You can also create new agents from scratch.

/img/azure/ragflow-vm/agent.png

/img/azure/ragflow-vm/agent-template.png

  • To create an agent from one of the templates, click the desired card, such as SEO Blog, name your agent in the pop-up dialogue, and click OK to confirm.

  • The no-code workflow editor page opens. Edit the workflow to suit your requirements and save the changes. Run the agent to get the output.

/img/azure/ragflow-vm/no-code-agent.png

  8. To install additional Ollama models on this VM, follow the steps below:
  • Connect to the VM over SSH. Refer to the deployment guide for your cloud platform to launch and connect to the VM.

  • Check the available Ollama models by running:

ollama ls

/img/azure/ragflow-vm/ollama-ls.png

  • To pull a new model, run:
ollama pull <modelname>

e.g. ollama pull bge-m3:latest. Check for more models on the Ollama official site.

/img/azure/ragflow-vm/ollama-pull.png
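To pull several models in one go, a small loop like the following can be used. The model names in the usage line are only examples; check availability on the Ollama site first.

```shell
# Pull each model name passed as an argument, one after another.
pull_models() {
  for model in "$@"; do
    ollama pull "$model"
  done
}
```

Usage: `pull_models bge-m3:latest qwen2.5:7b`.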

  • This model can now be added in your RAGFlow web interface as shown in Step 2 above.

/img/azure/ragflow-vm/ollama-add-new-model-in-lagflow.png

/img/azure/ragflow-vm/ollama-new-model-added.png

  9. If you have deployed this solution on a GPU instance, you can monitor GPU usage by running the command below in a terminal while running chat, search, or agents in the RAGFlow web interface.
watch -n 1 nvidia-smi

/img/azure/ragflow-vm/nvidia-smi.png
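For longer runs, it may be more convenient to log usage to a file than to watch the output interactively. Here is a sketch using nvidia-smi's query flags; the one-second interval and the field list are arbitrary choices.

```shell
# Append one CSV line per second with GPU utilization and memory use.
# -l 1 tells nvidia-smi to repeat every second; stop with Ctrl+C.
log_gpu_usage() {
  outfile="${1:-gpu-usage.csv}"
  nvidia-smi --query-gpu=timestamp,utilization.gpu,memory.used \
             --format=csv -l 1 >> "$outfile"
}
```

Usage: `log_gpu_usage /tmp/gpu-usage.csv`, then inspect the file after your chat or search session.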


For more details, please visit the official documentation page.
