Deploying APIs as Microservice with Kubernetes, Docker, and AWS

Overview

This project demonstrates how to deploy a set of APIs as a microservice using Kubernetes, Docker, and AWS. The service is designed to manage application operations, leveraging PostgreSQL for efficient data storage and Flask to create the web API. Here’s a breakdown of each key component:

  • Kubernetes: Used for orchestrating containerized microservices. This allows us to deploy, scale, and manage the APIs in production.
  • Docker: Containers are used to package the microservices (Flask app and PostgreSQL) along with their dependencies, making the deployment process seamless.
  • AWS CodeBuild: Automates the build process of Docker images, pushing them to Elastic Container Registry (ECR).
  • PostgreSQL: Acts as the database for storing all user data, including usage statistics for the application.
  • CloudWatch: Monitors application performance and logs, ensuring that any issues can be diagnosed quickly.

Prerequisites:

  • An AWS account with IAM permissions to create EKS, ECR, CodeBuild, and related resources
  • AWS CLI, eksctl, and kubectl installed and configured locally
  • Docker installed, and the application source (including the Dockerfile and db/ seed files) in a GitHub repository
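A quick way to confirm the command-line tools are available:

aws --version
eksctl version
kubectl version --client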

Step-by-step instructions

  • Ensure AWS CLI is configured

aws sts get-caller-identity

The necessary IAM permissions are needed to create a cluster. Create the EKS cluster with eksctl:

eksctl create cluster --name my-cluster --region us-east-1 --nodegroup-name my-nodes --node-type t3.small --nodes 1 --nodes-min 1 --nodes-max 2

--name my-cluster: Cluster name.
--region us-east-1: Region for the cluster.
--nodegroup-name my-nodes: Name of the worker node group.
--node-type t3.small: EC2 instance type for worker nodes.
--nodes 1, --nodes-min 1, --nodes-max 2: Auto-scaling configuration for the number of worker nodes (1 to 2).

  • Update the kubeconfig

aws eks --region us-east-1 update-kubeconfig --name my-cluster

This allows access to the cluster locally using kubectl.

  • Verify connection

kubectl config current-context
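
Assuming the cluster from the previous step is active, the worker node should also show as Ready:

kubectl get nodes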

Now we need to configure the database for our application.

  • Create a file pv.yaml to define the Persistent Volume (PV), which will be used to store data.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-manual-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gp2
  hostPath:
    path: "/mnt/data"

storageClassName: gp2: Tells Kubernetes to use the AWS EBS (Elastic Block Store) gp2 storage class.

  • Create pvc.yaml to define the Persistent Volume Claim (PVC), which is used to bind the persistent volume to the application.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgresql-pvc
spec:
  storageClassName: gp2
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

The PVC can then be referenced in the database's deployment to mount the storage.

  • Create a file postgresql-deployment.yaml for Postgres Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresql
spec:
  selector:
    matchLabels:
      app: postgresql
  template:
    metadata:
      labels:
        app: postgresql
    spec:
      containers:
      - name: postgresql
        image: postgres:latest
        env:
        - name: POSTGRES_DB
          value: mydatabase
        - name: POSTGRES_USER
          value: myuser
        - name: POSTGRES_PASSWORD
          value: mypassword
        ports:
        - containerPort: 5432
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgresql-storage
      volumes:
      - name: postgresql-storage
        persistentVolumeClaim:     #PVC referenced
          claimName: postgresql-pvc
  • Apply YAML configurations in the following order
kubectl apply -f pv.yaml
kubectl apply -f pvc.yaml
kubectl apply -f postgresql-deployment.yaml
  • Verify Database deployment

kubectl get pods
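
It also helps to confirm that the PersistentVolumeClaim has bound to the PersistentVolume created earlier:

kubectl get pv,pvc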

  • To open a bash shell in the pod

kubectl exec -it <postgresql-name> -- bash

Once inside the pod, you can run psql -U myuser -d mydatabase to have access to the mydatabase database.
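
Inside the psql prompt, \dt lists the tables (they will be empty until the seed step below) and \q exits:

\dt
\q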

  • Create a YAML file, postgresql-service.yaml

A Service needs to be created so that the deployment is exposed to other pods in the cluster.

apiVersion: v1
kind: Service
metadata:
  name: postgresql-service
spec:
  ports:
  - port: 5432
    targetPort: 5432
  selector:
    app: postgresql

This targets the PostgreSQL pods on port 5432, the default PostgreSQL port.

  • Run kubectl apply -f postgresql-service.yaml

  • Verify the service kubectl get svc

  • psql is a lightweight client application for PostgreSQL that enables connection to your Postgres server, and it must be installed locally

apt update
apt install postgresql postgresql-contrib
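
The psql commands below connect through 127.0.0.1 on port 5433, which assumes the PostgreSQL service has first been forwarded to that local port, for example:

kubectl port-forward svc/postgresql-service 5433:5432 &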
  • Run the seed files in db/ in order to create the tables and populate them with data
export DB_PASSWORD=mypassword
PGPASSWORD="$DB_PASSWORD" psql --host 127.0.0.1 -U myuser -d mydatabase -p 5433 < <FILE_NAME.sql>
  • Verify the database is populated

Run PGPASSWORD="$DB_PASSWORD" psql --host 127.0.0.1 -U myuser -d mydatabase -p 5433 to open a psql terminal.

Run the query SELECT * FROM users; to ensure the tables are not empty.

Setting up Continuous Integration with CodeBuild
  • Create an Amazon ECR repository in the AWS console by navigating to the ECR service

  • Create a buildspec.yml file in the root directory of the repository

version: 0.2

phases:
  pre_build:
    commands:
      - echo Logging into ECR
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  build:
    commands:
      - echo Starting build at `date`
      - echo Building the Docker image...          
      - docker build -t $IMAGE_REPO_NAME:$CODEBUILD_BUILD_NUMBER -f analytics/Dockerfile .
      - docker tag $IMAGE_REPO_NAME:$CODEBUILD_BUILD_NUMBER $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$CODEBUILD_BUILD_NUMBER     
  post_build:
    commands:
      - echo Completed build at `date`
      - echo Pushing the Docker image...
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$CODEBUILD_BUILD_NUMBER

This template is what CodeBuild will use to build the Docker image and push it to the ECR repository. The placeholders in this file need to be set as environment variables in the CodeBuild project.
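
For example, the environment variables on the CodeBuild project could look like this (the account ID and repository name are placeholders for your own values):

AWS_DEFAULT_REGION = us-east-1
AWS_ACCOUNT_ID     = <your 12-digit account ID>
IMAGE_REPO_NAME    = <your ECR repository name>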

  • Create an Amazon CodeBuild project:

    • navigate to the CodeBuild service and click to create a new project
    • enter the project name
    • select GitHub as the source code provider
    • authorize AWS to access the project's GitHub repository and to run on every push to the repository
    • configure the environment
    • check the privileged box to enable Docker builds in CodeBuild
    • select a new service role in the absence of an existing service role, but note: ECR permissions must be added to it
    • set environment variables for the placeholders in buildspec.yml, such as $AWS_DEFAULT_REGION, $AWS_ACCOUNT_ID, and $IMAGE_REPO_NAME
    • specify the buildspec file (buildspec.yml; ensure it is in the root of your source repository)
  • Modify the IAM role of the service role newly created by CodeBuild by adding an inline policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecr:*"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
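The same inline policy can also be attached from the CLI, assuming it is saved locally as ecr-policy.json and the role name matches your CodeBuild service role:

aws iam put-role-policy --role-name <codebuild-service-role> --policy-name ecr-access --policy-document file://ecr-policy.json
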
  • Push the buildspec.yml to the GitHub repository; CodeBuild should be triggered and the build should complete successfully


  • Verify the image in ECR

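The pushed image tag can also be confirmed from the CLI (assuming the repository name set in the CodeBuild variables):

aws ecr describe-images --repository-name <IMAGE_REPO_NAME> --region us-east-1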

  • Deploy the Application

Create a configmap.yaml for the external configuration of the app, in this case the database values, and a Secret to store the sensitive environment variables such as DB_PASSWORD. In this example DB_HOST is the ClusterIP of postgresql-service (visible via kubectl get svc); the service's DNS name could be used instead.

apiVersion: v1
kind: ConfigMap
metadata:
  name: postgresql-service
data:
  DB_NAME: "mydatabase"
  DB_USER: "myuser"
  DB_HOST: "10.100.30.162"
  DB_PORT: "5432"
---
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  password: "bXlwYXNzd29yZA=="

Run kubectl apply -f configmap.yaml
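
The Secret's password value is base64-encoded; the string used above can be reproduced (or replaced with your own value) like this:

echo -n 'mypassword' | base64   # prints bXlwYXNzd29yZA==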

  • Create coworking.yaml for the Service and Deployment of the application. The Docker image is the URI of the image we pushed to ECR, and the ConfigMap and Secret are referenced too.
apiVersion: v1
kind: Service
metadata:
  name: coworking
spec:
  type: LoadBalancer
  selector:
    service: coworking
  ports:
  - name: "5153"
    protocol: TCP
    port: 5153
    targetPort: 5153
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coworking
  labels:
    name: coworking
spec:
  replicas: 1
  selector:
    matchLabels:
      service: coworking
  template:
    metadata:
      labels:
        service: coworking
    spec:
      containers:
      - name: coworking
        image: 739244444072.dkr.ecr.us-east-1.amazonaws.com/api-service-img-redo:16
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /health_check
            port: 5153
          initialDelaySeconds: 5
          timeoutSeconds: 2
        readinessProbe:
          httpGet:
            path: "/readiness_check"
            port: 5153
          initialDelaySeconds: 5
          timeoutSeconds: 5
        envFrom:
        - configMapRef:
            name: postgresql-service
        env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: password
      restartPolicy: Always

Run kubectl apply -f coworking.yaml

  • Verify the deployment

kubectl get pods


kubectl get svc

kubectl describe svc <DATABASE_SERVICE_NAME>


kubectl describe deployment <SERVICE_NAME>


The application is running successfully.
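
As a final check, the API can be reached through the LoadBalancer's external hostname shown by kubectl get svc coworking (the /health_check path comes from the liveness probe above):

curl http://<EXTERNAL-IP>:5153/health_check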

  • Monitoring Container Insights logs for the application with CloudWatch

    • navigate to the CloudWatch service on the console
    • go to the Logs menu, then Log groups; the cluster is present there
    • now go to the terminal to update the EKS node group IAM role
aws iam attach-role-policy \
--role-name my-worker-node-role \
--policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy 

To get the name of the node group IAM role, go to your cluster, then Compute, and click on the active node group to see its IAM role; replace my-worker-node-role with that name.
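
The role name can also be looked up from the CLI, assuming the node group name from the eksctl command earlier:

aws eks describe-nodegroup --cluster-name my-cluster --nodegroup-name my-nodes --query 'nodegroup.nodeRole' --output text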

  • Run another command to install the CloudWatch observability add-on for the EKS cluster

aws eks create-addon --addon-name amazon-cloudwatch-observability --cluster-name my-cluster
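
To confirm the add-on was installed and its agent pods are running:

aws eks describe-addon --cluster-name my-cluster --addon-name amazon-cloudwatch-observability
kubectl get pods -n amazon-cloudwatch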

  • Check the log groups on the console again; /aws/containerinsights/<cluster-name>/application should be there. Click on one of the log streams to see the logs.


The logs show the health of the application.

Conclusion

  • Kubernetes, Docker, and AWS enable scalable and reliable microservice deployments.
  • AWS CodeBuild automates building, pushing, and deploying application updates.
  • PostgreSQL provides reliable data storage, while CloudWatch provides effective monitoring.
  • This setup creates a maintainable system ready for production.
