
Setup and installation of 'DeepSeek & Llama powered All-in-One LLM Suite' on AWS


Note: We provide free demo access for the “DeepSeek & Llama-powered All-in-One LLM Suite.” To request a free demo, please reach out to us at marketing@techlatest.net with the subject “Free Demo Access Request - [Your Company Name]”



This section describes how to launch and connect to the ‘DeepSeek & Llama powered All-in-One LLM Suite’ VM solution on AWS.

  1. Open the DeepSeek & Llama powered All-in-One LLM Suite VM listing on the AWS Marketplace.

/img/aws/multi-llm-vm/marketplace.png

  1. Click on View purchase options.
  • Log in with your credentials and follow the instructions.
  • Subscribe to the product and click on the Continue to Configuration button.
  • Select a Region where you want to launch the VM (such as US East (N. Virginia)).

/img/aws/multi-llm-vm/region.png

  • Click on Continue to Launch Button.
  • Choose Action: you can launch the instance through EC2 or from the website. (Let’s choose Launch from Website.)

/img/aws/multi-llm-vm/launch.png

  • Optionally change the EC2 instance type. (The default is the t2.xlarge instance type, with 4 vCPUs and 16 GB RAM.)

Minimum VM specs: 2 vCPUs / 8 GB RAM. However, for swift performance, go with 4 vCPUs / 16 GB RAM or a higher configuration.

  • Optionally change the network name and subnetwork names.

/img/aws/minikube/vpc.png

  • Select the Security Group. Be sure that whichever Security Group you specify has ports 22 (for SSH), 3389 (for RDP), 80 (for HTTP), and 443 (for HTTPS) exposed. Alternatively, you can create a new SG by clicking on the “Create New Based On Seller Settings” button. Provide a name and description, and save the SG for this instance.

/img/aws/desktop-linux-ubuntu2404/create-new-sg.png

/img/aws/multi-llm-vm/SG.png

  • Download the key pair that is selected by default, or create a new key pair and download it.

/img/aws/minikube/key-pair.png

  • Click on Launch.

  • DeepSeek & Llama powered All-in-One LLM Suite will begin deploying.
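If you prefer the command line, the Security Group portion of the steps above can be sketched with the AWS CLI. This is a hedged illustration only: the group name, description, and VPC ID below are placeholders, not values from the listing.

```shell
# Create a security group for the VM (name and VPC ID are placeholders)
# and capture its ID for the ingress rules that follow.
SG_ID=$(aws ec2 create-security-group \
  --group-name llm-suite-sg \
  --description "SG for DeepSeek & Llama LLM Suite VM" \
  --vpc-id vpc-0123456789abcdef0 \
  --query GroupId --output text)

# Expose the ports the solution needs: SSH (22), RDP (3389),
# HTTP (80), and HTTPS (443).
for port in 22 3389 80 443; do
  aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" \
    --protocol tcp --port "$port" --cidr 0.0.0.0/0
done
```

Restricting the `--cidr` to your own IP range instead of 0.0.0.0/0 is safer for SSH and RDP.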

  1. A summary page is displayed. To see this instance in the EC2 Console, click on the EC2 Console link.

/img/aws/multi-llm-vm/deployed.png

  1. To connect to this instance through PuTTY, copy the IPv4 Public IP address from the VM’s details page.

/img/aws/multi-llm-vm/public-ip.png

  1. Open PuTTY and paste the IP address. Browse to the private key you downloaded while deploying the VM under SSH -> Auth -> Credentials, then click on Open. Enter ubuntu as the user ID.

/img/aws/desktop-linux/putty-01.png

/img/aws/nvidia-aiml/putty-02.png

/img/aws/multi-llm-vm/ssh-login.png
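If you are connecting from a Linux or macOS machine, OpenSSH works in place of PuTTY. In this sketch, the key file name and IP address are placeholders for the key pair downloaded at launch and the VM’s public IPv4 address.

```shell
# Tighten permissions on the downloaded private key (ssh refuses
# world-readable keys), then connect as the ubuntu user.
chmod 400 my-key.pem
ssh -i my-key.pem ubuntu@203.0.113.10
```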

  1. Once connected, change the password for the ubuntu user using the command below:
sudo passwd ubuntu

/img/aws/multi-llm-vm/update-passwd.png

  1. Now that the password for the ubuntu user is set, you can connect to the VM’s desktop environment from any local Windows machine using the RDP protocol, or from a Linux machine using Remmina.

From your local Windows machine, go to the Start menu, type “Remote Desktop Connection” in the search box, and select it. In the “Remote Desktop Connection” wizard, enter the public IP address and click Connect.

/img/aws/desktop-linux/rdp.png

  1. This will connect you to the VM’s desktop environment. Provide the username “ubuntu” and the password set in the “Reset password” step above to authenticate, then click OK.

/img/aws/desktop-linux/rdp-login.png

  1. Now you are connected to the out-of-the-box DeepSeek & Llama powered All-in-One LLM Suite VM’s desktop environment via a Windows machine.

/img/aws/multi-llm-vm/rdp-desktop.png

  1. To connect using RDP via a Linux machine, first note the external IP of the VM from the VM details page. Then, from your local Linux machine, go to the menu, type “Remmina” in the search box, and select it.

Note: If you don’t have Remmina installed on your Linux machine, first install Remmina using your Linux distribution’s package manager.

/img/gcp/common/remmina-search.png

  1. In the “Remmina Remote Desktop Client” wizard, select the RDP option from the dropdown, paste the external IP, and press Enter.

/img/gcp/common/remmina-external-ip.png

  1. This will connect you to the VM’s desktop environment. Provide “ubuntu” as the user ID and the password set in the reset password step above to authenticate, then click OK.

/img/gcp/common/remmina-rdp-login.png

  1. Now you are connected to the out-of-the-box DeepSeek & Llama powered All-in-One LLM Suite VM’s desktop environment via a Linux machine.

/img/aws/multi-llm-vm/rdp-desktop.png

  1. To access the Open WebUI interface, copy the public IP address of the VM and paste it into the browser:

The browser will display an SSL certificate warning message. Accept the certificate warning and continue.

/img/aws/multi-llm-vm/browser-warning.png

  1. The VM also comes with the Certbot Nginx plugin preinstalled. If you have a valid domain name (DNS) and that DNS points to this instance’s IP, you can generate free Let’s Encrypt SSL certificates and access the Open WebUI securely over HTTPS using that DNS, avoiding browser warnings. To do so, connect via the terminal and run
sudo certbot --nginx

This command will prompt you for a valid DNS name; provide the DNS name associated with this instance.

/img/aws/multi-llm-vm/certbot-configure.png
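For scripted setups, certbot can also be run without interactive prompts. In this sketch the domain and email address are placeholders you must replace with your own values.

```shell
# Non-interactive variant of the certbot step above:
# -d      the DNS name pointing at this instance (placeholder)
# -m      contact email for Let's Encrypt expiry notices (placeholder)
# --redirect  rewrite the Nginx config to force HTTPS
sudo certbot --nginx \
  -d llm.example.com \
  -m admin@example.com \
  --agree-tos --redirect --non-interactive
```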

  1. Once your Certbot certificates are ready, you can open any browser and access the Open WebUI securely using the DNS name.

/img/aws/multi-llm-vm/access-openwebui-using-dns.png

  1. Click Get Started on the very first page. This will take you to the registration page. Provide your details here and create your first admin account.

/img/aws/multi-llm-vm/open-webui-get-started.png

/img/aws/multi-llm-vm/open-webui-registration-page.png

  1. Now you are logged in to the Open WebUI interface. You can choose among the preinstalled models from the dropdown and ask your queries.

/img/aws/multi-llm-vm/open-webui-home-page.png

/img/aws/multi-llm-vm/preinstalled-models.png

/img/aws/multi-llm-vm/query-in-open-webui.png

  1. You can also run various Ollama models from the VM’s terminal. To list the installed models, run
ollama list

/img/aws/multi-llm-vm/ollama-list.png

  1. To run a specific model, use the command below.

Replace modelname with the actual name of the model, e.g. qwen2.5:7b

ollama run modelname

/img/aws/multi-llm-vm/run-model.png
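Besides the interactive terminal, Ollama exposes a local REST API on port 11434, so models can also be queried programmatically from within the VM. The model name below is an example; use one listed by `ollama list`.

```shell
# Send a one-shot prompt to a local model via Ollama's REST API.
# "stream": false returns the full response as a single JSON object.
curl -s http://localhost:11434/api/generate -d '{
  "model": "qwen2.5:7b",
  "prompt": "Summarize what an LLM is in one sentence.",
  "stream": false
}'
```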

  1. To pull any new Ollama model, run
ollama pull modelname

/img/aws/multi-llm-vm/pull-model.png

Once your model is pulled successfully, you can start using it.

For more details, please visit the Official Documentation page.

For video tutorials on this solution, please visit Free course on ‘DeepSeek & Llama powered All-in-One LLM Suite’
