A demo application that showcases MySQL, PostgreSQL, and MongoDB databases, along with monitoring and deployment in Kubernetes.
It is also a great opportunity to learn Go and see how it works with MySQL, PostgreSQL, and MongoDB.
The application is a web-based control panel through which you can manage the load on the databases. Combined with Percona Monitoring and Management (PMM), it makes an excellent tool for demonstrating databases and their capabilities.
The application consists of three components:
- Control Panel - a web application that runs in your browser. It contains load controls and switches for the MySQL, Postgres, and MongoDB databases, a Settings tab for configuring database connections, and a Dataset tab for tracking the dataset state.
- Dataset Loader - a Go application that fetches data from GitHub via the API and loads it into the databases, providing them with data for testing and load simulation. It loads data into an empty database and then periodically loads new data to keep the dataset up to date.
- Load Generator - a Go application that connects to the databases and generates SQL and NoSQL queries based on the Control Panel settings (component 1).
All three components work as a single application and allow you to experiment with databases. The application can be run locally using Docker Compose or in the cloud using Kubernetes.
The application can connect to and generate load on MySQL, Postgres, and MongoDB databases in the cloud or in Kubernetes. You can start the databases with:
- Docker Compose - the configuration is available in the repository.
- Percona Everest or Percona Operators in a Kubernetes cluster. If the databases are not externally accessible, run the application in the same cluster.
- Any other way that is convenient for you. Connection parameters can be set via environment variables or in the Settings tab of the application's Control Panel.
Usage Scenario:
- Start the Control Panel in your browser (for example, on an iPad).
- Open PMM in the browser (for example, on a screen or laptop).
- Install Percona Everest, open it in the browser, and create MySQL, Postgres, and MongoDB databases.
- Connect the databases in the Control Panel settings.
- Change the load in the Control Panel and monitor the changes in PMM.
How it works technically:
- Control Panel - a web application; when you adjust a control or flip a switch, the settings are stored in the Valkey database.
- Dataset Loader - a constantly running script that checks the settings in Valkey every 5 seconds, connects to the databases, and loads the data.
- Load Generator - a constantly running script that can work against one database or all three. Every 5 seconds it checks the load settings in Valkey and, based on them, starts separate goroutines (lightweight parallel threads), each of which connects to the database and runs the SQL and NoSQL queries toggled in the Control Panel. The queries are defined in internal/load/load.go. A simplified sketch of this loop is shown below.
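Here is a minimal sketch of that polling loop, assuming the go-redis client (Valkey is protocol-compatible with Redis). The key name, address, and `runQueries` helper are placeholders for illustration, not the project's actual code:

```go
// Sketch of the Load Generator loop: poll Valkey for settings every 5 seconds
// and fan the load out across goroutines. Key names and addresses are made up.
package main

import (
	"context"
	"log"
	"strconv"
	"sync"
	"time"

	"github.com/redis/go-redis/v9" // Valkey speaks the Redis protocol
)

func main() {
	ctx := context.Background()
	settings := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	for {
		// Read how many concurrent MySQL clients the Control Panel asked for.
		val, err := settings.Get(ctx, "load:mysql:connections").Result()
		if err != nil {
			log.Printf("reading settings from Valkey: %v", err)
			time.Sleep(5 * time.Second)
			continue
		}
		workers, _ := strconv.Atoi(val)

		var wg sync.WaitGroup
		for i := 0; i < workers; i++ {
			wg.Add(1)
			go func(id int) { // one goroutine per simulated client
				defer wg.Done()
				runQueries(ctx, id) // the real queries live in internal/load/load.go
			}(i)
		}
		wg.Wait()

		time.Sleep(5 * time.Second) // re-check the Control Panel settings
	}
}

// runQueries stands in for the SQL/NoSQL statements toggled in the Control Panel.
func runQueries(ctx context.Context, id int) {
	_ = ctx
	_ = id
}
```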
- Clone the project repository:
  `git clone git@github.com:dbazhenov/github-stat.git`
- Copy or rename `.env.example` to `.env` and set the parameters in the `.env` file.
- Run the environment:
  `docker compose up -d`
- Launch the application at `localhost:3000` in your browser.
- Open the Settings tab and create connections to the databases you want to load. You can use any of your databases, such as those running with Percona Everest or Docker.
  If you don't have databases, start them using the command:
  `docker compose -f docker-compose-dbs.yaml up -d`
  This launches three databases (MySQL, Postgres, MongoDB) and PMM with pmm-client. Connection options are available in `docker-compose-dbs.yaml`:
- MySQL: `root:password@tcp(mysql:3306)/dataset`
- Postgres: `user=postgres password='password' dbname=dataset host=postgres port=5432 sslmode=disable`
- MongoDB: `mongodb://databaseAdmin:password@mongodb:27017/`
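As a rough illustration, these connection strings plug straight into standard Go drivers. This is a sketch assuming go-sql-driver/mysql, lib/pq, and the official mongo-driver; the application itself may use different packages. Only the connection strings above come from `docker-compose-dbs.yaml`:

```go
// Sketch: opening the three connections above from Go. Driver choices are
// assumptions; only the connection strings are taken from docker-compose-dbs.yaml.
package main

import (
	"context"
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql" // registers the "mysql" driver
	_ "github.com/lib/pq"              // registers the "postgres" driver
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx := context.Background()

	mysqlDB, err := sql.Open("mysql", "root:password@tcp(mysql:3306)/dataset")
	if err != nil {
		log.Fatal(err)
	}
	defer mysqlDB.Close()

	pgDB, err := sql.Open("postgres",
		"user=postgres password='password' dbname=dataset host=postgres port=5432 sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer pgDB.Close()

	mongoClient, err := mongo.Connect(ctx,
		options.Client().ApplyURI("mongodb://databaseAdmin:password@mongodb:27017/"))
	if err != nil {
		log.Fatal(err)
	}
	defer mongoClient.Disconnect(ctx)

	// Ping each database to verify that the credentials actually work.
	if err := mysqlDB.PingContext(ctx); err != nil {
		log.Printf("mysql: %v", err)
	}
	if err := pgDB.PingContext(ctx); err != nil {
		log.Printf("postgres: %v", err)
	}
	if err := mongoClient.Ping(ctx, nil); err != nil {
		log.Printf("mongodb: %v", err)
	}
}
```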
- On the Settings tab, load the test dataset for each database by clicking the “Create schema” and “Import Dataset” buttons. By default, a small dataset from a CSV file (26 repos and 4,600 PRs) is imported; to import the full dataset, add a GitHub Token to the .env file and change the import type to github (a sketch of such an import is shown below).
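For reference, the github import type boils down to authenticated GitHub REST API calls along these lines. This is a hedged sketch: the endpoint, organization, and field mapping are assumptions for illustration, not the Dataset Loader's actual code (see cmd/dataset/main.go for that):

```go
// Sketch of a "github" import step: fetch repositories with a personal token.
// The organization and fields are placeholders; the real loader fetches the
// full dataset of repositories and pull requests.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"os"
)

type repo struct {
	FullName string `json:"full_name"`
	Stars    int    `json:"stargazers_count"`
}

func main() {
	token := os.Getenv("GITHUB_TOKEN") // the same token you put in .env

	req, err := http.NewRequest("GET",
		"https://api.github.com/orgs/percona/repos?per_page=5", nil) // org is a placeholder
	if err != nil {
		log.Fatal(err)
	}
	if token != "" {
		req.Header.Set("Authorization", "Bearer "+token) // authenticated calls get a higher rate limit
	}

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var repos []repo
	if err := json.NewDecoder(resp.Body).Decode(&repos); err != nil {
		log.Fatal(err)
	}
	for _, r := range repos {
		fmt.Println(r.FullName, r.Stars) // rows like these end up in MySQL, Postgres, and MongoDB
	}
}
```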
- Turn on the Enable Load setting and open the Load Generator tab of the Control Panel.
- Adjust the load and check the results in PMM at `localhost:8081`.
- Run the environment:
  `docker compose -f docker-compose-dev.yaml up -d`
- Run the Control Panel script:
  `go run cmd/web/main.go`
  Open the Control Panel in your browser (`localhost:3000`).
- Run the Dataset Loader script:
  `go run cmd/dataset/main.go`
- Run the Load Generator script:
  `go run cmd/load/main.go`
- Optional: uncomment the configuration for PMM and PMM Client in docker-compose.yaml and run PMM:
  `docker compose up -d`
  Open PMM in your browser (`localhost`).
- Create a Kubernetes cluster, for example in minikube or GKE. For GKE, I use the command:
  `gcloud container clusters create demo-app --project percona-product --zone us-central1-a --cluster-version 1.30 --machine-type n1-standard-16 --num-nodes=1`
  Doc: Create a Kubernetes cluster on Google Kubernetes Engine (GKE)
- Install Percona Everest or Percona Operators to create databases.
  See the Percona Everest documentation and create databases if you don't have any.
- Install PMM, e.g. with Helm:
helm repo add percona https://percona.github.io/percona-helm-charts/
helm install pmm -n demo \
--set service.type="LoadBalancer" \
--set pmmResources.limits.memory="4Gi" \
--set pmmResources.limits.cpu="2" \
percona/pmm
Get the administrator password (admin user)
kubectl get secret pmm-secret -n demo -o jsonpath='{.data.PMM_ADMIN_PASSWORD}' | base64 --decode
Get a public IP to open PMM in a browser
kubectl get svc -n demo monitoring-service -o jsonpath="{.status.loadBalancer.ingress[0].ip}"
Next, you can run the Demo application, either manually or with Helm.
- Set the Helm parameters in the ./k8s/helm/values.yaml file (the parameters are described below).
- Launch the application:
  `helm install demo-app ./k8s/helm -n demo`
- Run `kubectl -n demo get svc` to get the public IP for demo-app-web-service, then open the Control Panel in your browser.
- Open the Settings tab in the Control Panel and set the parameters for connecting to the databases you created with Percona Everest or Percona Operators.
- You may need to restart the dataset pod to speed up loading the dataset into the databases:
  `kubectl -n demo delete pod [DATASET_POD]`
- You can change the allocated resources or the number of replicas by editing the values.yaml file and running:
  `helm upgrade demo-app ./k8s/helm -n demo`
Demo App Helm parameters (./k8s/helm/values.yaml):
- `githubToken` - required to properly load the dataset from the GitHub API. You can create a personal token at https://github.com/settings/tokens.
- `separateLoads` - if true, a separate load pod is started for each database.
- `useResourceLimits` - if true, resource limits are set for the application's pods.
- `controlPanelService.type` - LoadBalancer for a public dashboard address; NodePort for local development.
- Create Secrets and a ConfigMap for the application:
  `kubectl apply -f k8s/manual/config.yaml -n demo`
  Check the k8s/manual/config.yaml file. Be sure to set GITHUB_TOKEN, which is required to properly load the dataset from the GitHub API. You can create a personal token at https://github.com/settings/tokens.
- Run the Valkey database:
  `kubectl apply -f k8s/manual/valkey.yaml -n demo`
- Run the Control Panel script:
  `kubectl apply -f k8s/manual/web-deployment.yaml -n demo`
  Run `kubectl -n demo get svc` to get the public IP, then open the Control Panel in your browser.
  Open the Settings tab, set the connection strings for the databases created in Percona Everest, and click the Connect button.
  The first time you connect to MySQL and Postgres, you will need to create a schema and tables; the buttons for this appear on the Settings tab.
- Run the Dataset Loader script:
  `kubectl apply -f k8s/manual/dataset-deployment.yaml -n demo`
- Run the Load Generator script. A single deployment can load all the databases:
  `kubectl apply -f k8s/manual/load-deployment.yaml -n demo`
  Alternatively, you can run a separate load generator for each database to distribute resources or scale the load:
  - MySQL: `kubectl apply -f k8s/manual/load-mysql-deployment.yaml -n demo`
  - Postgres: `kubectl apply -f k8s/manual/load-postgres-deployment.yaml -n demo`
  - MongoDB: `kubectl apply -f k8s/manual/load-mongodb-deployment.yaml -n demo`
  An environment variable determines which database each load generator targets.
- Control the load in the Control Panel. Change queries using the switches. Track the results on PMM dashboards. Scale or change database parameters with Percona Everest.
Have fun experimenting.
Useful commands for checking the pods:
- `kubectl get pods -n demo`
- `kubectl logs [pod_name] -n demo`
- `kubectl describe pod [pod_name] -n demo`
- Clone the repository.
- Run locally using Docker Compose.
- Make changes to the code and run the scripts to test them.
- The repository contains a workflow to build and publish images to Docker Hub. You can publish your own versions of the containers and run them in Kubernetes.
- Send your changes to the project as a Pull Request.
You are invited to contribute:
- Suggest improvements and create Issues.
- Improve the code or do a review.