Deploying an LLM App to AWS using Open Source Tools

In this article, we will explore how to deploy a RAG chatbot app that interacts with large language models (LLMs) to a cloud provider such as AWS using GitHub Actions, OpenTofu, and Digger. We were extremely inspired by Wenqi Glantz's article and thought of creating a version of our own, with OpenTofu & Digger.

3. Generate AWS Access Key and Secret Key: In the AWS IAM Console, select the user you created from the list and click the "Create access key" button to generate an access key and secret key for that IAM user. The "Create access key" button is highlighted with a black box in the screenshot below.
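If you would rather script this step than click through the console, the same key pair can also be generated with the AWS CLI once it is configured (see the next step). A minimal sketch, assuming the IAM user is named `llm-app-deployer` (a placeholder, substitute your own user name):

```bash
# Create an access key / secret key pair for the IAM user.
# "llm-app-deployer" is a placeholder user name, not one from the article.
aws iam create-access-key --user-name llm-app-deployer

# The command prints a JSON document containing "AccessKeyId" and
# "SecretAccessKey". Store the secret immediately: it cannot be
# retrieved again after this call.
```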

4. Set up AWS CLI: Install the AWS CLI on your local machine. For installation instructions specific to your operating system, check this link.
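As a quick reference, the commonly documented install steps for AWS CLI v2 on Linux x86_64 look roughly like the sketch below; other operating systems use a different installer, so follow the linked instructions for your platform.

```bash
# Download and install AWS CLI v2 (Linux x86_64 example).
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

# Confirm the install succeeded.
aws --version
```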

Once the AWS CLI is installed, configure it with your AWS credentials and set the default region you want to work in. You can do this by running the aws configure command in your terminal or command prompt.
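For example, a typical configuration session looks like the following; the region shown is only an example, use whichever region you plan to deploy to.

```bash
# Interactive setup: you will be prompted for the key pair from step 3,
# a default region, and an output format.
aws configure
# AWS Access Key ID [None]: <your access key id>
# AWS Secret Access Key [None]: <your secret access key>
# Default region name [None]: us-east-1
# Default output format [None]: json

# Sanity check: confirm the CLI is authenticated as the expected IAM user.
aws sts get-caller-identity
```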
