Reference
Here you’ll find more detailed documentation.
If you want to have multiple applications in the same repository, i.e. a team monorepo, here is how you do it.
The recommended pattern is to create a tenant for the monorepo, and individual child tenants of that per application. See Tenancy for tenant creation.
Instead of creating a new repository for each of the applications, you can create a single repository and add the applications as sub-directories.
First, create a new root repository:
corectl apps create <new-monorepo-name> --tenant <tenant-name> --non-interactive
This will create an empty repository with necessary variables preconfigured for the P2P to work.
You should first set up an application-specific sub-tenant - this will be a child tenant of the above.
Now create a new application in the sub-directory. You will be prompted for a template to use.
cd <new-monorepo-name>
corectl app create <new-app-name> --tenant <app-specific-child-tenant-name>
Your new application will be created in a new PR against the monorepo. This will give you a chance to review the changes.
Once you’re happy with the changes, merge the PR.
The recommended way to onboard is via corectl tenancy.
The Core Platform is a multi-tenant platform where each tenant gets their own segregated environments and P2P.
A Tenancy is the unit of access to the Core Platform. It contains a readonly and an admin group and gives CI/CD actors (GitHub Actions) access to a namespace and a docker registry for images. Once you have a tenancy, you can add sub-namespaces for all your application testing needs.
Tenants are organized in a tree structure. For each tenant, we create a hierarchical namespace. A tenancy can be used to configure:
corectl does this for you. Only follow this section if you want to manually interact with the environments repo.
To add a tenancy raise a PR to the Environments Repo under tenants/tenants/ in your platform environments repo.
Your tenancy name must be the same as the file name!
For example, if I want to create a tenancy with the name myfirsttenancy, then I will create a file named myfirsttenancy.yaml with the following structure:
name: myfirsttenancy
parent: sandboxes
description: "Go Application"
contactEmail: go-application@awesomestartup.com
environments:
  - gcp-dev
repos:
  - https://github.com/<your-github-id>/go-application
adminGroup: platform-accelerator-admin@awesomestartup.com
readonlyGroup: platform-readonly@awesomestartup.com
cloudAccess:
  - name: ca # Cloud Access. Keep it short so the generated username is also short: the longest is ca-connected-app-functional (27 chars); MySQL 8.0 allows a max of 32, MySQL 5.7 a max of 16.
    provider: gcp
    kubernetesServiceAccounts:
      - <namespace>/<k8s_service_account_name>
infrastructure:
  network:
    projects:
      - name: name
        id: <project_id>
        environment: <platform_environment>
betaFeatures:
  - k6-operator
- repos - GitHub Actions in all listed repos will get permission to deploy to the created namespaces, implementing your application's Path to Production aka CI/CD.
- cloudAccess - generates cloud-provider-specific machine identities for Kubernetes service accounts to impersonate/assume. Note that the kubernetesServiceAccounts are constructed like <namespace>/<kubernetesServiceAccount>, so make sure these match what your application is doing. This Kubernetes Service Account is controlled and created by the app and configured to use the GCP service account created by this configuration.
- infrastructure - allows you to configure projects to be attached to the current one's shared VPC, allowing you to use Private Service Access connections to databases in your own projects. This will attach your project to the one on the environment. This attachment is unique: you can only attach your project to a single other project. If your tenant is in gcp-dev and gcp-prod, for example, it will need 2 GCP projects, one to attach to each environment.
- betaFeatures - enables certain beta features for tenants:
  - k6-operator - allows running tests with the K6 Operator.
To delete a tenancy, remove its file from tenants/tenants/ in the platform environments repo and raise a PR.
Once the PR is merged and the GitHub pipeline has finished running, the tenancy will be deleted.
If the tenant namespace has subnamespaces, the platform will be unable to delete the tenant, and all tenant-related resources will be left in the cluster.
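For example, deleting a tenancy is just a PR that removes its file from the environments repo. A minimal sketch, assuming the myfirsttenancy example above and a git-based workflow (branch name is illustrative):

# In your platform environments repo
git checkout -b remove-myfirsttenancy
git rm tenants/tenants/myfirsttenancy.yaml
git commit -m "Remove myfirsttenancy tenancy"
git push origin remove-myfirsttenancy
# then open a PR with this change and let the pipeline run after merging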
Recommended and opinionated repository structure for applications.
We distinguish between 3 types of repository structure:
Simple structure where there is only one application in the repository.
/
├── .github/
│   └── workflows/
│       ├── app-fast-feedback.yaml
│       ├── app-extended-test.yaml
│       └── app-prod.yaml
├── Dockerfile
├── Makefile
├── src/
├── tests/
│   ├── functional/
│   ├── extended/
│   └── nft/
├── helm-charts/
└── README.md
Key points:
Namespace structure:
app-tenant
├── [s] app-tenant-extended
├── [s] app-tenant-functional
├── [s] app-tenant-nft
└── [s] app-tenant-integration
[s] indicates subnamespaces
A single tenant app-tenant is created as a root for the app.
To isolate resources related to a specific lifecycle stage, we create subnamespaces via hierarchical namespaces.
The app-tenant name is stored in the GitHub repository variable TENANT_NAME.
You can use corectl to create this structure by executing the following commands:
corectl tenant create <tenant-name>
corectl config update
corectl app create <app-name> --tenant <tenant-name>
This structure is designed for projects containing multiple independent applications, each with its own deployment lifecycle.
/
├── .github/
│   └── workflows/
│       ├── app1-fast-feedback.yaml
│       ├── app1-extended-test.yaml
│       ├── app1-prod.yaml
│       ├── app2-fast-feedback.yaml
│       ├── app2-extended-test.yaml
│       └── app2-prod.yaml
├── app1/
│   ├── Dockerfile
│   ├── Makefile
│   ├── src/
│   ├── tests/
│   │   ├── functional/
│   │   ├── extended/
│   │   └── nft/
│   └── helm-charts/
├── app2/
│   ├── Dockerfile
│   ├── Makefile
│   ├── src/
│   ├── tests/
│   │   ├── functional/
│   │   ├── extended/
│   │   └── nft/
│   └── helm-charts/
└── README.md
Key features:
Modular Application Structure: each application resides in its own directory (app1/, app2/), containing its own Dockerfile, Makefile, source code, tests, and Helm charts.
Isolated Lifecycles: each application has its own set of workflows (fast-feedback, extended-test, prod), so it can be built, tested and deployed independently.
Application-Specific Build and Deployment: each application has its own Makefile with tasks specific to that application.
Namespace structure:
parent-tenant
├── app1-tenant
│   ├── [s] app1-extended
│   ├── [s] app1-functional
│   ├── [s] app1-nft
│   └── [s] app1-integration
└── app2-tenant
    ├── [s] app2-extended
    ├── [s] app2-functional
    ├── [s] app2-nft
    └── [s] app2-integration
[s] indicates subnamespaces
Key aspects:
Hierarchical Structure:
Isolated Testing Environments:
Authentication:
This structure provides a clear separation of concerns, allowing each application to be developed, tested, and deployed independently while maintaining a cohesive project structure.
You can create projects like this following the steps below:
corectl tenant create <parent-tenant-name>
corectl app create <monorepo-name> --tenant <parent-tenant-name>
corectl config update
corectl tenant create <app-tenant-name> --parent <parent-tenant-name>
cd <monorepo-name> && corectl app create <app-name> --tenant <app-tenant-name>
This structure is optimized for projects comprising multiple tightly coupled applications that require simultaneous deployment and shared resources.
/
├── .github/
│   └── workflows/
│       ├── coupled-workload-fast-feedback.yaml
│       ├── coupled-workload-extended-test.yaml
│       └── coupled-workload-prod.yaml
├── app1/...
├── app2/...
├── coupled-workload/
│   ├── Makefile
│   ├── app3/
│   │   ├── Dockerfile
│   │   └── src/
│   ├── app4/
│   │   ├── Dockerfile
│   │   └── src/
│   ├── tests/
│   │   ├── functional/
│   │   ├── extended/
│   │   └── nft/
│   ├── helm-charts/
│   │   └── coupled-workload/
│   │       ├── Chart.yaml
│   │       ├── values.yaml
│   │       └── templates/
│   └── resources/
│       └── subns-anchor.yaml
└── README.md
Key features:
Unified Workload Structure: all coupled applications live under the coupled-workload/ directory. Each application (app3/, app4/) contains its own Dockerfile and src/ directory. A single Makefile at the workload level manages build and deployment tasks for all applications.
Consolidated Testing: a single tests/ directory includes subdirectories for the different testing phases: functional/, extended/, integration/ and nft/ (non-functional tests).
Unified Helm Chart: a single Helm chart (helm-charts/coupled-workload/) is used for the entire workload. Its Chart.yaml and values.yaml files define the overall workload configuration.
Shared Lifecycle: the applications are built, tested and deployed together through shared workflows, as they require simultaneous deployment.
Resource Management: the resources/ directory contains shared configuration files, such as subns-anchor.yaml for namespace management (see the sketch after the namespace diagram below).
Namespace structure:
parent-tenant
└── coupled-workload
    ├── [s] coupled-workload-extended
    ├── [s] coupled-workload-functional
    ├── [s] coupled-workload-nft
    └── [s] coupled-workload-integration
[s] indicates subnamespaces
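For reference, the subns-anchor.yaml mentioned above is typically a Hierarchical Namespace Controller SubnamespaceAnchor. A minimal sketch, assuming the coupled-workload tenant namespace shown in the diagram (names are illustrative):

# Creates the coupled-workload-functional subnamespace under coupled-workload
apiVersion: hnc.x-k8s.io/v1alpha2
kind: SubnamespaceAnchor
metadata:
  name: coupled-workload-functional
  namespace: coupled-workload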
Key aspects:
Hierarchical Tenant Structure:
Shared Namespace Environments:
Resource Isolation:
Simplified Access Control:
Corectl doesn’t fully support this structure. We don’t recommend this approach for newly created projects; it should only be used for existing projects that already have coupled workloads.
It is possible to set up the required tenants via corectl.
corectl tenant create <parent-tenant-name>
corectl tenant create <app-tenant-name> --parent <parent-tenant-name>
You need to manually configure the following:
The Helm chart contains a subchart for each app. Ideally, each subchart should point to a Helm chart hosted externally for better versioning support.
Sample Chart.yaml:
apiVersion: v2
name: coupled-workload
description: Helm chart for a coupled workload made up of two services
type: application
version: 0.1.0
appVersion: "1.16.0"
dependencies:
  - name: app
    alias: "app-3"
    version: "1.0.0"
    repository: "https://coreeng.github.io/core-platform-assets"
  - name: app
    alias: "app-4"
    version: "1.0.0"
    repository: "https://coreeng.github.io/core-platform-assets"
Configuration can be done via the values.yaml file, like below:
common:
  resources:
    limits: &limits
      cpu: 500m
      memory: 100Mi
app-3:
  appName: app-3
  resources:
    requests:
      cpu: 300m
      memory: 50Mi
    limits: *limits
app-4:
  appName: app-4
  resources:
    requests:
      cpu: 300m
      memory: 50Mi
    limits: *limits
We need to modify the standard P2P Makefile to accommodate multiple Docker images.
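The targets below reference per-app image name variables and an image tag that are not shown here. A minimal sketch of what the Makefile might declare (variable names assumed to match the targets below; adjust to your repository):

# Assumed variable definitions used by the build, push, promote and deploy targets
tenant_name := parent-tenant
app_3_image_name := app-3
app_4_image_name := app-4
image_tag := $(VERSION)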
Modify the p2p-build target to build the images for each app:
.PHONY: p2p-build
p2p-build: service-build service-push

.PHONY: service-build
service-build:
	@echo $(REGISTRY)
	@echo 'VERSION: $(VERSION)'
	@echo '### SERVICE BUILD ###'
	docker build --platform=linux/amd64 --file ./app-3/Dockerfile --tag $(REGISTRY)/$(FAST_FEEDBACK_PATH)/$(app_3_image_name):$(image_tag) ./app-3
	docker build --platform=linux/amd64 --file ./app-4/Dockerfile --tag $(REGISTRY)/$(FAST_FEEDBACK_PATH)/$(app_4_image_name):$(image_tag) ./app-4

.PHONY: service-push
service-push: ## Push the service images
	@echo '### SERVICE PUSH FOR FEEDBACK ###'
	docker image push $(REGISTRY)/$(FAST_FEEDBACK_PATH)/$(app_3_image_name):$(image_tag)
	docker image push $(REGISTRY)/$(FAST_FEEDBACK_PATH)/$(app_4_image_name):$(image_tag)
Modify the promotion target to promote the images for each app:
.PHONY: p2p-promote-generic
p2p-promote-generic: ## Generic promote functionality
	corectl p2p promote $(image_name):$(image_tag) \
		--source-stage $(source_repo_path) \
		--dest-registry $(REGISTRY) \
		--dest-stage $(dest_repo_path)

promote-app-3-extended: source_repo_path=$(FAST_FEEDBACK_PATH)
promote-app-3-extended: dest_repo_path=$(EXTENDED_TEST_PATH)
promote-app-3-extended: image_name=$(app_3_image_name)
promote-app-3-extended: p2p-promote-generic

promote-app-4-extended: source_repo_path=$(FAST_FEEDBACK_PATH)
promote-app-4-extended: dest_repo_path=$(EXTENDED_TEST_PATH)
promote-app-4-extended: image_name=$(app_4_image_name)
promote-app-4-extended: p2p-promote-generic

# Promote both images
.PHONY: p2p-promote-to-extended-test
p2p-promote-to-extended-test: promote-app-3-extended promote-app-4-extended
Modify the deployment target:
functional-deploy: namespace=$(tenant_name)-functional
functional-deploy: path=$(FAST_FEEDBACK_PATH)
functional-deploy: deploy

deploy:
	helm upgrade --install $(helm_release_name) $(helm_chart_path) \
		-n $(namespace) \
		--set app-3.registry=$(REGISTRY)/$(path) --set app-3.image=$(app_3_image_name) --set app-3.tag=$(VERSION) \
		--set app-4.registry=$(REGISTRY)/$(path) --set app-4.image=$(app_4_image_name) --set app-4.tag=$(VERSION)
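The --set flags above assume each subchart consumes registry, image and tag values. A minimal sketch of how a subchart's deployment template might use them (illustrative only, not the actual chart from core-platform-assets):

# Fragment of a hypothetical subchart deployment template
spec:
  template:
    spec:
      containers:
        - name: {{ .Values.appName }}
          image: "{{ .Values.registry }}/{{ .Values.image }}:{{ .Values.tag }}"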
When deploying an application to the platform we need to make sure that it has enough resources to operate correctly. By the word resources we usually mean CPU and memory.
Kubernetes allows us to set up requests and limits for the resources:
⚠️ WARNING: We recommend every critical workload has CPU and memory requests. Otherwise you aren’t guaranteed any resources.
The Kubernetes scheduler uses resource requests to select a node for a Pod to run on.
Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for Pods.
The scheduler ensures that, for each resource type, the sum of the resource requests of the scheduled containers is less than the capacity of the node.
Defining resource limits helps ensure that containers never use all available underlying infrastructure provided by nodes.
Defining both requests and limits for memory ensures balanced control over consumption. If the application exceeds the memory limit, it is terminated with an Out Of Memory condition (OOMKilled). This means that either the limit is too low or there is a memory leak that needs to be investigated.
resources:
  requests:
    memory: 50Mi
  limits:
    memory: 100Mi
A common approach is to set CPU requests without limits. This results in decent scheduling, high utilization of resources and fairness when all containers need CPU cycles.
The container will have a minimum amount of CPU even when all the containers on the host are under load. If the other containers on the host are idle or there are no other containers then your container can use all the available CPU.
resources:
  requests:
    cpu: 100m
    memory: 50Mi
  limits:
    memory: 100Mi
When running stubbed NFT or extended tests we need to have stable performance so that we can reliably validate Transactions Per Second (TPS) and latency thresholds.
A disadvantage of not having CPU limits is that it is harder to capacity plan your application because the number of resources your container gets varies depending on what else is running on the same host. As a result you can have different results between test runs.
In order to have stable results we set the CPU limits to be the same as requests. Then we scale the deployment to the required number of replicas to handle the load.
replicas: 2
resources:
  requests:
    cpu: 100m
    memory: 50Mi
  limits:
    cpu: 100m
    memory: 100Mi
Make sure that resource requests are defined for the application. Autoscalers use them as a baseline to calculate utilization.
Applications are scaled vertically or horizontally to be able to handle the increasing load. When traffic goes up, you add more resources (CPU and/or memory) and/or deploy more replicas. When traffic goes down, you revert back to the initial state to minimise costs. This can be done automatically with Kubernetes based on resource utilization.
This section describes autoscaling mechanisms and provides some guidelines on how to scale an app with Core Platform.
Kubernetes provides Horizontal Pod Autoscaler (HPA) and Vertical Pod Autoscaler (VPA) as out-of-the-box tools for scaling deployments:
Horizontal scaling responds to increased load by deploying more pods. If the load decreases, HPA instructs the deployment to scale back down.
Below is an example of an HPA configuration for the Reference app that scales based on CPU utilization (other resources, e.g. memory utilization, or a combination of them can also be used). We specify the range for the number of replicas, the resources, and the thresholds at which the HPA should be triggered.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: reference-app
  labels:
    app.kubernetes.io/name: reference-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: reference-app
  minReplicas: 1
  maxReplicas: 30
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60
We can also affect how fast the application scales up by modifying scaling behavior. This policy allows pods to scale up 1000% every 15 seconds.
behavior:
  scaleUp:
    policies:
      - type: Percent
        value: 1000
        periodSeconds: 15
VPA automatically adjusts the amount of CPU and memory requested by pods. It provides recommendations for resource usage over time and works best with long-running homogeneous workloads. It can both scale down pods that are over-requesting resources and scale up pods that are under-requesting resources, based on historical usage.
There are four modes in which VPA operates: Auto, Recreate, Initial and Off. Refer to the VPA docs for more details.
Below is an example of VPA configuration for the reference app running in Off mode:
apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  name: reference-app
  labels:
    app.kubernetes.io/name: reference-app
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: reference-app
  updatePolicy:
    updateMode: "Off"
It is advised to start with Off mode, in which VPA does not automatically change the resource requirements of the pods. The recommendations are still calculated and can be inspected in the VPA object. After validating them, we can switch the updateMode to Auto to allow the recommendations to be applied to resource requests.
VPA should not be used with the HPA on the same resource metric (CPU or memory) at this moment. Due to the independence of these two controllers, when they are configured to optimise the same target, e.g., CPU usage, they can lead to an awkward situation where HPA tries to spin more pods based on the higher-than-threshold CPU usage while VPA tries to squeeze the size of each pod based on the lower CPU usage (after scaling out by HPA).
However, you can use VPA with HPA on separate resource metrics (e.g. VPA on memory and HPA on CPU) as well as with HPA on custom and external metrics.
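A minimal sketch of that combination, assuming the HPA above scales on CPU and the VPA is restricted to memory via controlledResources (manifest is illustrative):

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: reference-app-memory
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: reference-app
  updatePolicy:
    updateMode: "Auto"
  resourcePolicy:
    containerPolicies:
      - containerName: "*"
        controlledResources: ["memory"] # leave CPU to the HPA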
To autoscale, we need to start by defining non-functional requirements for the application.
For example, we require:
We choose which autoscaling mechanism to use:
We prepare NFT scenarios to validate that the application meets the requirements for the load. We need to repeatedly run the tests to adjust the resource requests and fine-tune the thresholds to handle the required traffic patterns.
The following is a list of recommendations that can be applied to improve the results of the test:
After changing the parameters, re-run the test and validate the results. As with any type of performance testing, change only one parameter at a time to assess the impact correctly.
There are several dashboards that can help you better understand the behavior of the system:
Refer to Application Monitoring and Platform Monitoring sections for more details.
Your applications can be configured to be attached to the platform’s shared VPC.
This can be enabled for your tenancy by adding the infrastructure.network section to your tenant definition in your platform environments repository, at the top level:
infrastructure:
  network:
    projects:
      - name: name
        id: <project_id>
        environment: <platform_environment>
This allows you to configure projects to be attached to the current one’s shared VPC, allowing you to use Private Service Access connections to databases in your own projects. This will attach your project to the one on the environment.
This attachment is unique: you can only attach your project to a single other project. If your tenant is configured in the dev and prod environments, for example, it will need 2 GCP projects, one to attach to each environment.
This will share the core platform network with the <project_id> configured. It is then necessary to configure resources in that project to explicitly use the shared network. They will then be reachable from the core platform.
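As an illustration only (project, network and instance names are placeholders, and Private Service Access must already be configured), a Cloud SQL instance in your attached project could reference the shared network like this:

gcloud sql instances create my-private-db \
  --project=<project_id> \
  --network=projects/<platform-host-project>/global/networks/<shared-vpc-network> \
  --no-assign-ip \
  --database-version=POSTGRES_15 \
  --tier=db-custom-1-3840 \
  --region=europe-west2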
Your applications can access Cloud Infrastructure in different Cloud Accounts.
Enable Cloud Access for your tenancy by adding the cloudAccess section at the top level:
cloudAccess:
  - name: ca
    provider: gcp
    environment: all
    kubernetesServiceAccounts:
      - <your_namespace>/sa
- name: use a short name for the cloud access, with letters and -s (32 character limit). For CloudSQL, this will be your IAM SA username.
- provider: only gcp is currently supported.
- kubernetesServiceAccounts: a list of Kubernetes service accounts that will be allowed to access the cloud infrastructure, in the format namespace/name. For example, for the service account sa in the namespace myfirsttenancy using the P2P, this should include myfirsttenancy-functional/sa, myfirsttenancy-nft/sa, myfirsttenancy/sa and whatever other namespaces you need.
- environment: specifies the environment in which this specific Cloud Access configuration will be deployed. To deploy it in all of the environments where the tenant is configured, use the keyword all as the environments value.
In your parent namespace (the one named after your tenancy) run:
TENANT_NAME=myfirsttenancy # your tenant name
NAME=ca # replace this with the name you have configured under `cloudAccess`
kubectl -n $TENANT_NAME get iamserviceaccount $TENANT_NAME-$NAME -o jsonpath='{.status.email}'
For example, for the tenant name myfirsttenancy and the name ca:
kubectl -n myfirsttenancy get iamserviceaccount myfirsttenancy-ca -o jsonpath='{.status.email}'
myfirsttenancy-ca@{{ project-id }}.iam.gserviceaccount.com
This gives us an IAM Service Account that any permissions can be added to in your target Cloud Infra project.
myfirsttenancy-ca@{{ project-id }}.iam.gserviceaccount.com
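For example, to grant that service account access in your target project (the role shown is illustrative; use whichever roles your workload needs):

gcloud projects add-iam-policy-binding <your-infra-project-id> \
  --member="serviceAccount:myfirsttenancy-ca@{{ project-id }}.iam.gserviceaccount.com" \
  --role="roles/cloudsql.client"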
To be able to impersonate the above service account, annotate your service account with the IAM Service Account. For example:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa # (the name of the configured kubernetesServiceAccount, after the /)
  annotations:
    iam.gke.io/gcp-service-account: myfirsttenancy-ca@{{ project-id }}.iam.gserviceaccount.com
You will need a service account in each of the namespaces the app will be deployed to, so if you are using the standard P2P and Helm, it makes sense to configure this as a Helm chart template with the app (the project ID should be parameterised if the app is deployed to multiple environments). This ensures it is created correctly for each sub-namespace (e.g. app-functional, app-nft, etc.).
Your pods should use this service account, then anytime they use a Google Cloud library they will assume the identity of the service account.
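For example, a Deployment's pod template references the annotated service account via serviceAccountName (names match the example above; the image is a placeholder):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      serviceAccountName: sa # the annotated Kubernetes service account
      containers:
        - name: my-app
          image: <registry>/<image>:<tag>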
Software templates provide pre-built, standardized structures for common types of applications. This allows developers to:
This efficiency can significantly reduce development time, especially for common application types, allowing teams to deliver projects faster and with more consistency.
CECG provides software templates that integrate the entire P2P lifecycle, offering a significant advantage in application development.
These templates:
You can find the templates in the CECG Software Templates repository
Corectl leverages the templates to create a new project via the corectl app create command.
To ease authoring of a new software template, there is a command, corectl template render, that will render a template into a new directory.
Software templates can be parameterized to allow for customization of the template.
There are 2 types of parameters: custom parameters, defined in the template.yaml file, and implicit parameters, set by corectl.
Custom parameters are defined in the parameters section of the template.yaml file.
For example:
parameters:
  - name: appName
    description: Name of the application
    type: string
    optional: true
    default: my-app
When rendering a template, corectl will prompt the user for the values of these parameters.
Implicit parameters are set by corectl when creating a new application.
At the moment, there are 4 implicit parameters:
- name: name
  description: application name
  optional: false
- name: tenant
  description: tenant used to deploy the application
  optional: false
- name: working_directory
  description: working directory where application is located
  optional: false
  default: "./"
- name: version_prefix
  description: version prefix for application
  optional: false
  default: "v"
name is the name of the application.
tenant is the tenant used to deploy the application.
All the software templates are published into a single public repository that can be forked and configured to deploy to your Core Platform environments as a quick way of seeing a set of applications running.
For Core Platform reference applications.
After forking, configure the P2P:
corectl p2p env sync <your-fork> <your-tenant>
You can then execute the workflows and the reference applications will be deployed to your environments.
After you have forked the core-platform-reference-applications repo, you need to do the following:
Set the tenant_name variable to match the name of the tenancy you created.