Reference

Here you’ll find more detailed documentation.

Subsections of Reference

Multi-App Repos

If you want to have multiple applications in the same repository, i.e. a team “monorepo”, here is how to do it.

Create an application as part of a monorepo

Note

The recommended pattern is to create a tenant for the monorepo, and individual child tenants of that per application. See Tenancy for tenant creation.

Instead of creating a new repository for each of the applications, you can create a single repository and add the applications as sub-directories.

First create a new root repository

corectl apps create <new-monorepo-name> --tenant <tenant-name> --non-interactive

This will create an empty repository with necessary variables preconfigured for the P2P to work.

Before creating the application, set up an application-specific sub-tenant - this will be a child tenant of the tenant created above.
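
For example, using the same command shown later in the Repository structure section:

corectl tenant create <app-specific-child-tenant-name> --parent <tenant-name>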

Now create a new application in the sub-directory. You will be prompted for a template to use.

cd <new-monorepo-name>
corectl app create <new-app-name> --tenant <app-specific-child-tenant-name>

Your new application will be created in a new PR against the monorepo. This gives you a chance to review the changes.

Once you’re happy with the changes, merge the PR.

Tenancy

The recommended way to onboard is via corectl tenancy.

The Core Platform is a multi-tenant platform where each tenant gets their own segregated environments and P2P.

What is a tenant?

A Tenancy is the unit of access to the Core Platform. It contains a read-only and an admin group and gives CI/CD actors (GitHub Actions) access to a namespace and a Docker registry for images. Once you have a tenancy, you can add sub-namespaces for all your application testing needs.
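
For example, assuming the platform uses hierarchical namespaces (HNC) and you have the kubectl hns plugin installed, a sub-namespace for functional testing could be created like this (names are illustrative):

kubectl hns create myfirsttenancy-functional -n myfirsttenancy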

Tenants are organized in a tree structure. For each tenant, we create a hierarchical namespace. A tenancy can be used to configure:

  • resource quotas for a tenant and its children
  • access control via network policies on a per-tenant basis (e.g. granting another tenancy network access to your tenant)
  • a shared prometheus instance for a tenant and its children

Manually raising a PR for a new tenancy

Note

corectl does this for you. Only follow this section if you want to manually interact with the environments repo.

To add a tenancy, raise a PR to your platform environments repo adding a file under tenants/tenants/.

Note

Your tenancy name must be the same as the file name!

For example, if I want to create a tenancy with the name myfirsttenancy, then I will create a file named myfirsttenancy.yaml with the following structure:

name: myfirsttenancy 
parent: sandboxes
description: "Go Application"
contactEmail: go-application@awesomestartup.com
environments:
  - gcp-dev
repos:
  - https://github.com/<your-github-id>/go-application
adminGroup: platform-accelerator-admin@awesomestartup.com
readonlyGroup: platform-readonly@awesomestartup.com
cloudAccess:
  - name: ca # Cloud Access. Keep it short so the generated username is also short: the longest will be ca-connected-app-functional (27 chars); MySQL 8.0 usernames are 32 chars max, 5.7 are 16 max
    provider: gcp
    kubernetesServiceAccounts:
    - <namespace>/<k8s_service_account_name>
infrastructure:
  network:
      projects:
      - name: name
        id: <project_id>
        environment: <platform_environment>
betaFeatures:
  - k6-operator

  • repos - all the repos whose GitHub Actions will get permission to deploy to the created namespaces, for implementing your application’s Path to Production (P2P), aka CI/CD
  • cloudAccess - generates cloud-provider-specific machine identities for Kubernetes service accounts to impersonate/assume. Note that the kubernetesServiceAccounts are constructed as <namespace>/<kubernetesServiceAccount>, so make sure these match what your application is doing. This Kubernetes Service Account is created and controlled by the app and configured to use the GCP service account created by this configuration.
  • infrastructure - allows you to attach projects of your own to the environment’s shared VPC, allowing you to use Private Service Access connections to databases in your own projects. This will attach your project to the one on the environment.
  • betaFeatures - enables certain beta features for tenants:
    • k6-operator - allows running tests with K6 Operator.

Note

This attachment is unique: you can only attach your project to a single other project.

This means that if you want to have your databases in gcp-dev and gcp-prod, for example, your tenant will need 2 GCP projects, one to attach to each environment.

Deleting a tenancy

To delete a tenancy, you have to:

  1. Delete all the child tenancies.
  2. Delete all the subnamespaces containing the tenancy’s applications.
  3. Delete the tenant configuration file related to the tenancy from the Environments Repo and merge the PR with this change.

Once the PR is merged and the GitHub pipeline has finished running, the tenancy will be deleted.
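
For step 2, assuming subnamespaces are managed via HNC, each subnamespace can be removed by deleting its anchor in the parent namespace, for example (names are illustrative):

kubectl delete subnamespaceanchor <tenancy>-functional -n <tenancy>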

Note

If the tenant namespace has subnamespaces, the platform will be unable to delete the tenant, and all tenant-related resources will be left in the cluster.

Repository structure

Recommended and opinionated repository structure for applications.

We distinguish between 3 types of repository structure:

  1. Single app in a repository
  2. Multiple apps in a repository
  3. Multiple apps in a repository that have to be deployed together

Single App Structure

Simple structure where there is only one application in the repository.

/
├── .github/
│   └── workflows/
│       ├── app-fast-feedback.yaml
│       ├── app-extended-test.yaml
│       └── app-prod.yaml
├── Dockerfile
├── Makefile
├── src/
├── tests/
│   ├── functional/
│   ├── extended/
│   └── nft/
├── helm-charts/
└── README.md

Key points:

  • There is only one p2p lifecycle
  • All application code, tests, and deployment configurations reside in the root directory

Tenant and Namespace Structure

Namespace structure:

app-tenant
├── [s] app-tenant-extended
├── [s] app-tenant-functional
├── [s] app-tenant-nft
└── [s] app-tenant-integration

[s] indicates subnamespaces

A single tenant app-tenant is created as a root for the app. To isolate resources related to a specific lifecycle stage, we create subnamespaces via hierarchical namespaces. The app-tenant name is stored in the GitHub repository variable TENANT_NAME.
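
If you have the kubectl hns plugin installed, the resulting hierarchy can be inspected with:

kubectl hns tree app-tenant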

Corectl support

You can use corectl to create this structure by executing the following commands:

corectl tenant create <tenant-name>
corectl config update
corectl app create <app-name> --tenant <tenant-name>

Multiple Apps with Workflow per App

This structure is designed for projects containing multiple independent applications, each with its own deployment lifecycle.

/
├── .github/
│   └── workflows/
│       ├── app1-fast-feedback.yaml
│       ├── app1-extended-test.yaml
│       ├── app1-prod.yaml
│       ├── app2-fast-feedback.yaml
│       ├── app2-extended-test.yaml
│       └── app2-prod.yaml
├── app1/
│   ├── Dockerfile
│   ├── Makefile
│   ├── src/
│   ├── tests/
│   │   ├── functional/
│   │   ├── extended/
│   │   └── nft/
│   └── helm-charts/
├── app2/
│   ├── Dockerfile
│   ├── Makefile
│   ├── src/
│   ├── tests/
│   │   ├── functional/
│   │   ├── extended/
│   │   └── nft/
│   └── helm-charts/
└── README.md

Key features:

  1. Modular Application Structure:

    • Each application resides in its own directory (app1/, app2/).
    • Applications contain all necessary files and configurations, including Dockerfile, Makefile, source code, tests, and Helm charts.
  2. Isolated Lifecycles:

    • GitHub Actions workflows are defined separately for each application.
  3. Application-Specific Build and Deployment:

    • Each application has its own Makefile with tasks specific to that application.

Tenant and Namespace Structure

Namespace structure:

parent-tenant
├── app1-tenant
│   ├── [s] app1-extended
│   ├── [s] app1-functional
│   ├── [s] app1-nft
│   └── [s] app1-integration
└── app2-tenant
    ├── [s] app2-extended
    ├── [s] app2-functional
    ├── [s] app2-nft
    └── [s] app2-integration

[s] indicates subnamespaces

Key aspects:

  1. Hierarchical Structure:

    • A parent tenant serves as the root for all applications.
    • Each application has its own child tenant.
  2. Isolated Testing Environments:

    • Subnamespaces are created for different testing stages (extended, functional, nft, integration) within each application’s tenant.
  3. Authentication:

    • The parent tenant is used for authenticating all applications’ P2P workflows to GCP.

This structure provides a clear separation of concerns, allowing each application to be developed, tested, and deployed independently while maintaining a cohesive project structure.

Corectl support

You can create projects like this by following the steps below:

  1. Create a parent tenant corectl tenant create <parent-tenant-name>
  2. Create an empty root repository corectl app create <monorepo-name> --tenant <parent-tenant-name>
  3. Fetch tenant changes corectl config update
  4. Create tenant for the app corectl tenant create <app-tenant-name> --parent <parent-tenant-name>
  5. Create an app cd <monorepo-name> && corectl app create <app-name> --tenant <app-tenant-name>

Multiple Apps with Shared Workflow (Coupled Workload)

This structure is optimized for projects comprising multiple tightly coupled applications that require simultaneous deployment and shared resources.

/
├── .github/
│   └── workflows/
│       ├── coupled-workload-fast-feedback.yaml
│       ├── coupled-workload-extended-test.yaml
│       └── coupled-workload-prod.yaml
├── app1/...
├── app2/...
├── coupled-workload/
│   ├── Makefile
│   ├── app3/
│   │   ├── Dockerfile
│   │   └── src/
│   ├── app4/
│   │   ├── Dockerfile
│   │   └── src/
│   ├── tests/
│   │   ├── functional/
│   │   ├── extended/
│   │   └── nft/
│   ├── helm-charts/
│   │   └── coupled-workload/
│   │       ├── Chart.yaml
│   │       ├── values.yaml
│   │       └── templates/
│   └── resources/
│       └── subns-anchor.yaml
└── README.md

Key features:

  1. Unified Workload Structure:

    • All coupled applications reside within a single coupled-workload/ directory.
    • Each application (app3/, app4/) contains its Dockerfile and src/ directory.
    • A shared Makefile at the workload level manages build and deployment tasks for all applications.
  2. Consolidated Testing:

    • Tests are conducted at the workload level, encompassing all applications.
    • The tests/ directory includes subdirectories for different testing phases: functional/, extended/, integration/ and nft/ (non-functional tests).
  3. Unified Helm Chart:

    • A single Helm chart (helm-charts/coupled-workload/) is used for the entire workload.
    • The main Chart.yaml and values.yaml files define the overall workload configuration.
    • Subcharts for individual applications are ideally stored externally.
  4. Shared Lifecycle:

    • GitHub Actions workflows are defined for the entire workload, not individual applications.
  5. Resource Management:

    • The resources/ directory contains shared configuration files, such as subns-anchor.yaml for namespace management.
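
A minimal sketch of what resources/subns-anchor.yaml might contain, assuming subnamespaces are created as HNC SubnamespaceAnchor resources (names are illustrative):

apiVersion: hnc.x-k8s.io/v1alpha2
kind: SubnamespaceAnchor
metadata:
  name: coupled-workload-functional # subnamespace to create
  namespace: coupled-workload       # parent tenant namespace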

Tenant and Namespace Structure

parent-tenant
└── coupled-workload
    ├── [s] coupled-workload-extended
    ├── [s] coupled-workload-functional
    ├── [s] coupled-workload-nft
    └── [s] coupled-workload-integration

[s] indicates subnamespaces

Key aspects:

  1. Hierarchical Tenant Structure:

    • A parent tenant serves as the root for the coupled workload.
    • The coupled workload has a single child tenant, promoting unified resource management.
  2. Shared Namespace Environments:

    • Subnamespaces are created for different testing environments (extended, functional, nft, integration) within the coupled workload tenant.
    • All applications in the workload share these namespaces, facilitating integrated testing and deployment.
  3. Resource Isolation:

    • The use of subnamespaces allows for resource isolation between different testing phases while maintaining a cohesive structure.
  4. Simplified Access Control:

    • The shared tenant structure simplifies access control and authentication mechanisms for the entire workload.

Corectl support

Corectl doesn’t fully support this structure. We don’t recommend this approach for newly created projects; it should only be used for existing projects that already have coupled workloads.

It is possible to set up the required tenants via corectl.

corectl tenant create <parent-tenant-name>
corectl tenant create <app-tenant-name> --parent <parent-tenant-name>

Manual configuration

You need to manually configure the following:

  1. Helm charts
  2. Makefile

Helm chart

The Helm chart contains a subchart for each app. Ideally, each subchart should point to a Helm chart hosted externally for better versioning support.

Sample Chart.yaml

apiVersion: v2
name: coupled-workload
description: Helm chart for a coupled workload made up of two services
type: application
version: 0.1.0
appVersion: "1.16.0"
dependencies:
  - name: app
    alias: "app-3"
    version: "1.0.0"
    repository: "https://coreeng.github.io/core-platform-assets"
  - name: app
    alias: "app-4"
    version: "1.0.0"
    repository: "https://coreeng.github.io/core-platform-assets"

Configuration can be done via the values.yaml file, like below:

common:
  resources:
    limits: &limits
      cpu: 500m
      memory: 100Mi

app-3:
  appName: app-3
  resources:
    requests:
      cpu: 300m
      memory: 50Mi
    limits: *limits

app-4:
  appName: app-4
  resources:
    requests:
      cpu: 300m
      memory: 50Mi
    limits: *limits

Makefile

We need to modify the standard p2p Makefile to accommodate multiple Docker images.

  1. Modify the p2p-build target to build the images for each app

    .PHONY: p2p-build
    p2p-build: service-build service-push
    
    .PHONY: service-build
    service-build:
       @echo $(REGISTRY)
       @echo 'VERSION: $(VERSION)'
       @echo '### SERVICE BUILD ###'
       docker build --platform=linux/amd64  --file ./app-3/Dockerfile --tag $(REGISTRY)/$(FAST_FEEDBACK_PATH)/$(app_3_image_name):$(image_tag) ./app-3
       docker build --platform=linux/amd64 --file ./app-4/Dockerfile --tag $(REGISTRY)/$(FAST_FEEDBACK_PATH)/$(app_4_image_name):$(image_tag) ./app-4
    
    .PHONY: service-push
    service-push: ## Push the service image
       @echo '### SERVICE PUSH FOR FEEDBACK ###'
       docker image push $(REGISTRY)/$(FAST_FEEDBACK_PATH)/$(app_3_image_name):$(image_tag)
       docker image push $(REGISTRY)/$(FAST_FEEDBACK_PATH)/$(app_4_image_name):$(image_tag)
    
  2. Modify promotion target to promote the images for each app

    .PHONY: p2p-promote-generic
    p2p-promote-generic:  ## Generic promote functionality
       corectl p2p promote $(image_name):${image_tag} \
          --source-stage $(source_repo_path) \
          --dest-registry $(REGISTRY) \
          --dest-stage $(dest_repo_path)
    
    promote-app-3-extended: source_repo_path=$(FAST_FEEDBACK_PATH)
    promote-app-3-extended: dest_repo_path=$(EXTENDED_TEST_PATH)
    promote-app-3-extended: image_name=$(app_3_image_name)
    promote-app-3-extended: p2p-promote-generic
    
    promote-app-4-extended: source_repo_path=$(FAST_FEEDBACK_PATH)
    promote-app-4-extended: dest_repo_path=$(EXTENDED_TEST_PATH)
    promote-app-4-extended: image_name=$(app_4_image_name)
    promote-app-4-extended: p2p-promote-generic
    
    
    # Promote both images
    .PHONY: p2p-promote-to-extended-test
    p2p-promote-to-extended-test: promote-app-3-extended promote-app-4-extended
  3. Modify deployment

    functional-deploy: namespace=$(tenant_name)-functional
    functional-deploy: path=$(FAST_FEEDBACK_PATH)
    functional-deploy: deploy
    
    deploy:
       helm upgrade --install $(helm_release_name) $(helm_chart_path) \
          -n $(namespace) \
          --set app-3.registry=$(REGISTRY)/$(path) --set app-3.image=$(app_3_image_name) --set app-3.tag=$(VERSION) \
          --set app-4.registry=$(REGISTRY)/$(path) --set app-4.image=$(app_4_image_name) --set app-4.tag=$(VERSION)
    

Resources: Requests vs Limits

When deploying an application to the platform we need to make sure that it has enough resources to operate correctly. By the word resources we usually mean CPU and memory.

Kubernetes allows us to set up requests and limits for the resources:

  • requests: minimum amount of resources that are guaranteed to be available for the container.
  • limits: maximum amount of resources to be consumed by the container.

⚠️ WARNING
We recommend that every critical workload has CPU and memory requests; otherwise you aren’t guaranteed any resources.

The Kubernetes scheduler uses resource requests to select a node for a Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for Pods. The scheduler ensures that, for each resource type, the sum of the resource requests of the scheduled containers is less than the capacity of the node.

Defining resource limits helps ensure that containers never use all available underlying infrastructure provided by nodes.

Memory

Defining both requests and limits for memory ensures balanced control over consumption. If the application exceeds the memory limit, it is terminated due to an Out Of Memory condition (OOMKilled). This means that either the limit is too low or there is a memory leak that needs to be investigated.

  resources:
    requests:
      memory: 50Mi
    limits:
      memory: 100Mi

CPU Requests

A common approach is to set CPU requests without limits. This results in decent scheduling, high utilization of resources and fairness when all containers need CPU cycles.

The container is guaranteed a minimum amount of CPU even when all the containers on the host are under load. If the other containers on the host are idle, or there are no other containers, your container can use all the available CPU.

  resources:
    requests:
      cpu: 100m
      memory: 50Mi
    limits:
      memory: 100Mi

CPU Limits for Load Testing

When running stubbed NFT or extended tests we need to have stable performance so that we can reliably validate Transactions Per Second (TPS) and latency thresholds.

A disadvantage of not having CPU limits is that it is harder to capacity plan your application, because the amount of CPU your container gets varies depending on what else is running on the same host. As a result, you can have different results between test runs.

In order to have stable results we set the CPU limits to be the same as requests. Then we scale the deployment to the required number of replicas to handle the load.

  replicas: 2
  resources:
    requests:
      cpu: 100m
      memory: 50Mi
    limits:
      cpu: 100m
      memory: 100Mi

Application Autoscaling

Note

Make sure that resource requests are defined for the application. Autoscalers use them as a baseline to calculate utilization.

Applications are scaled vertically or horizontally to handle increasing load. When traffic goes up, you add more resources (CPU and/or memory) and/or deploy more replicas. When traffic goes down, you revert to the initial state to minimise costs. This can be done automatically by Kubernetes based on resource utilization.

This section describes autoscaling mechanisms and provides some guidelines on how to scale an app on the Core Platform.

Autoscalers

Kubernetes provides Horizontal Pod Autoscaler (HPA) and Vertical Pod Autoscaler (VPA) as out-of-the-box tools for scaling deployments:

  • HPA - for stateless workloads
  • VPA - for stateful and long-running workloads

Horizontal Pod Autoscaler (HPA)

Horizontal scaling responds to increased load by deploying more pods. If the load decreases, HPA instructs the deployment to scale back down.

Below is an example HPA configuration for the reference app that scales based on CPU utilization (other resources, e.g. memory utilization, or a combination of them can also be used). We specify the range for the number of replicas and the utilization threshold at which the HPA should be triggered.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: reference-app
  labels:
    app.kubernetes.io/name: reference-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: reference-app
  minReplicas: 1
  maxReplicas: 30
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60

We can also affect how fast the application scales up by modifying the scaling behavior. This policy allows the number of pods to increase by up to 1000% every 15 seconds.

  behavior:
    scaleUp:
      policies:
        - type: Percent
          value: 1000
          periodSeconds: 15

Vertical Pod Autoscaler (VPA)

VPA automatically adjusts the amount of CPU and memory requested by pods. It provides recommendations for resource usage over time and works best with long-running homogeneous workloads. It can both scale down pods that are over-requesting resources and scale up pods that are under-requesting resources, based on historical usage.

There are four modes in which VPA operates: Auto, Recreate, Initial, Off. Refer to VPA docs for more details.

Below is an example of VPA configuration for the reference app running in Off mode:

apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  name: reference-app
  labels:
    app.kubernetes.io/name: reference-app
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: reference-app
  updatePolicy:
    updateMode: "Off"

It is advised to start with Off mode, in which VPA does not automatically change the resource requirements of the pods. The recommendations are calculated and can be inspected in the VPA object. After validation, we can switch updateMode to Auto to allow the recommendations to be applied to resource requests.
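
For example, the current recommendations can be inspected with:

kubectl describe vpa reference-app -n <namespace>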

Combining HPA & VPA

VPA should not be used with HPA on the same resource metric (CPU or memory) at this moment. Because these two controllers are independent, when they are configured to optimise the same target, e.g. CPU usage, they can lead to an awkward situation where HPA tries to spin up more pods based on the higher-than-threshold CPU usage while VPA tries to shrink each pod based on the lower CPU usage (after scaling out by HPA).

However, you can use VPA with HPA on separate resource metrics (e.g. VPA on memory and HPA on CPU) as well as with HPA on custom and external metrics.
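
As a sketch of that split, the VPA can be restricted to memory while the HPA above keeps targeting CPU. This assumes a VPA version that supports resourcePolicy.controlledResources; the names mirror the earlier examples:

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: reference-app
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: reference-app
  updatePolicy:
    updateMode: "Auto"
  resourcePolicy:
    containerPolicies:
      - containerName: "*"
        controlledResources: ["memory"] # leave CPU scaling to the HPA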

Guidelines

To autoscale, we need to start by defining non-functional requirements for the application.

For example, we require:

  • the application to handle 30k TPS with P99 latency < 500 ms.
  • the application to handle spikes where traffic ramps up linearly from 0 to the maximum in 3 minutes.

We choose which autoscaling mechanism to use:

  • To handle traffic spikes with a stateless app we should consider using HPA.
  • We choose VPA for stateful long-running homogeneous workloads.

We prepare NFT scenarios to validate that the application meets the requirements for the load. We need to repeatedly run the tests to adjust the resource requests and fine-tune the thresholds to handle the required traffic patterns.
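
As an illustration, assuming the k6-operator beta feature is enabled for your tenant and the test script is stored in a ConfigMap (hypothetically named nft-scenarios here), a distributed load test could be declared as a k6-operator resource; the exact kind and apiVersion depend on the operator version installed:

apiVersion: k6.io/v1alpha1
kind: TestRun
metadata:
  name: reference-app-nft
spec:
  parallelism: 4            # number of runner pods generating load in parallel
  script:
    configMap:
      name: nft-scenarios   # assumed ConfigMap containing the k6 script
      file: ramp-up.js      # assumed script implementing the 0-to-max ramp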

The following is a list of recommendations that can be applied to improve the results of the test:

  • Ensure the load generator (e.g. K6) has enough resources and connections to generate the load.
  • If Platform Ingress is a bottleneck then ask Platform Operators to check Traefik resources and autoscaling configuration.
  • If pods are stuck in Pending state then ask Platform Operators to check Cluster Autoscaling configuration to make sure it has enough resources.
  • If pods are stuck in Pending state, slowing down the autoscaling, then you may need to over-provision resources. Ask Platform Operators to check Cluster Overprovisioning configuration.
  • If the app is dying due to OOM, you may need to give more memory and/or increase the minimum number of replicas.
  • If the app is not responding to readiness/liveness probes, you need to give more CPU and/or increase the minimum number of replicas.
  • If the app is not scaling-up fast enough, then you may need to lower the thresholds for resource utilization and/or adjust scaling behavior (see HPA configuration).

After you change the parameters, re-run the test and validate the results. As for any type of performance testing, try changing only one parameter at a time to assess the impact correctly.

Dashboards

There are several dashboards that can help you better understand the behavior of the system:

  • Kubernetes / Views
  • Traefik Official Kubernetes Dashboard
  • Reference App Load Testing

Refer to Application Monitoring and Platform Monitoring sections for more details.

Accessing Private Service Access

Your applications can be configured to be attached to the platform’s shared VPC.

This can be enabled in your tenancy by adding the infrastructure.network section at the top level of your tenant definition in your platform environments repository:

infrastructure:
  network:
      projects:
      - name: name
        id: <project_id>
        environment: <platform_environment>

This allows you to attach projects of your own to the environment’s shared VPC and use Private Service Access connections to databases in your own projects. This will attach your project to the one on the environment.

Note

This attachment is unique: you can only attach your project to a single other project.

This means that if you want to have your databases in dev and prod environments, for example, your tenant will need 2 GCP projects, one to attach to each environment.

This will share the core platform network with the <project_id> configured. It is then necessary to configure resources in that project to explicitly use the shared network; they will then be reachable from the core platform.

Accessing Cloud Infrastructure

Your applications can access Cloud Infrastructure in different Cloud Accounts.

Enable Cloud Access in your tenancy by adding the cloudAccess section at the top level:

cloudAccess:
  - name: ca
    provider: gcp
    environment: all
    kubernetesServiceAccounts:
      - <your_namespace>/sa

  • name: Use a short name for the cloud access, with letters and hyphens (32 character limit). For CloudSQL, this will be your IAM SA username.
  • provider: only gcp is currently supported.
  • kubernetesServiceAccounts: a list of Kubernetes service accounts that will be allowed to access the cloud infrastructure, in the format namespace/name. For example, the service account sa in the tenancy myfirsttenancy using the P2P should have myfirsttenancy-functional/sa, myfirsttenancy-nft/sa, myfirsttenancy/sa, and whatever other namespaces you need.
  • environment is used to specify the environment in which this specific Cloud Access configuration will be deployed. To deploy it in all of the environments where the tenant is configured, you can use the keyword all as the environment value.
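
Putting this together, a cloudAccess entry covering the standard P2P namespaces for the tenant myfirsttenancy could look like this (the exact set of namespaces depends on your P2P stages):

cloudAccess:
  - name: ca
    provider: gcp
    environment: all
    kubernetesServiceAccounts:
      - myfirsttenancy/sa
      - myfirsttenancy-functional/sa
      - myfirsttenancy-nft/sa
      - myfirsttenancy-extended/sa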

In your parent namespace (the one named after your tenancy), run:

TENANT_NAME=myfirsttenancy # your tenant name
NAME=ca # replace this with the name you have configured under `cloudAccess`
kubectl get iamserviceaccount $TENANT_NAME-$NAME -n $TENANT_NAME -o jsonpath='{.status.email}'

For example, for the tenant name myfirsttenancy and the name ca:

kubectl -n myfirsttenancy get iamserviceaccount myfirsttenancy-ca -o jsonpath='{.status.email}'
myfirsttenancy-ca@{{ project-id }}.iam.gserviceaccount.com

This gives us an IAM Service Account to which any permissions can be added in your target Cloud Infra project.

myfirsttenancy-ca@{{ project-id }}.iam.gserviceaccount.com
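
Permissions are then granted to that IAM Service Account in your own project as usual, for example (the role below is purely illustrative):

gcloud projects add-iam-policy-binding <your-project-id> \
  --member="serviceAccount:myfirsttenancy-ca@{{ project-id }}.iam.gserviceaccount.com" \
  --role="roles/cloudsql.client"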

Annotate Kubernetes Service Accounts

To be able to impersonate the above service account, annotate your service account with the IAM Service Account. For example:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa # (the name of the configured kubernetesServiceAccount, after the /)
  annotations:
    iam.gke.io/gcp-service-account: myfirsttenancy-ca@{{ project-id }}.iam.gserviceaccount.com

Note

You will need a service account in each of the namespaces the app will be deployed to, so if using the standard p2p and Helm, it would make sense to configure this as a Helm chart template with the app (the project ID should be parameterised if the app is deployed to multiple environments). This ensures it is created correctly for each sub-namespace (e.g. app-functional, app-nft, etc.).

Your pods should use this service account; then, any time they use a Google Cloud library, they will assume the identity of the service account.
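
For example, a minimal sketch of the relevant part of a Deployment’s pod template:

spec:
  template:
    spec:
      serviceAccountName: sa # the annotated Kubernetes Service Account
      containers:
        - name: app
          image: <your-image>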

Software Templates

What is a software template?

Software templates provide pre-built, standardized structures for common types of applications. This allows developers to:

  • Quickly start new projects without building the basic architecture from scratch
  • Ensure consistency across multiple projects
  • Follow best practices and proven patterns built into the template
  • Focus on implementing specific business logic rather than generic boilerplate code

This efficiency can significantly reduce development time, especially for common application types, allowing teams to deliver projects faster and with more consistency.

CECG Software Templates

CECG provides software templates that integrate the entire P2P lifecycle, offering a significant advantage in application development.

These templates:

  • Incorporate the full P2P process out-of-the-box
  • Include built-in deployment pipelines with gated stages
  • Enable developers to start from zero and rapidly create a functional application

You can find the templates in the CECG Software Templates repository.

Integration via corectl

Corectl leverages the templates to create a new project via the corectl app create command.

To ease authoring of a new software template, there is a corectl template render command that will render a template into a new directory.

Parameters

Software templates can be parameterized to allow for customization of the template.

There are 2 types of parameters:

  1. Custom parameters defined in the template.yaml file
  2. Implicit parameters set by corectl when creating a new application

Custom parameters

Custom parameters are defined in the template.yaml file.

They are defined in the parameters section of the file.

For example:

parameters:
  - name: appName
    description: Name of the application
    type: string
    optional: true
    default: my-app

When rendering a template, corectl will prompt the user for the values of the parameters.

Implicit parameters

Implicit parameters are set by corectl when creating a new application.

At the moment, there are 4 implicit parameters:

- name: name
  description: application name
  optional: false

- name: tenant
  description: tenant used to deploy the application
  optional: false

- name: working_directory
  description: working directory where application is located
  optional: false
  default: "./"

- name: version_prefix
  description: version prefix for application
  optional: false
  default: "v"

  • name is the name of the application
  • tenant is the tenant used to deploy the application

Deploying the reference apps

All the software templates are published into a single public repository that can be forked and configured to deploy to your Core Platform environments as a quick way of seeing a set of applications running.

Deploying all the reference apps

Fork the Core Platform reference applications repository.

After forking, configure the P2P:

corectl p2p env sync <your-fork> <your-tenant>

You can then execute the workflows and the reference applications will be deployed to your environments.

Note

After you have forked the core-platform-reference-applications repo, you need to do the following:

  • Manually enable the workflows in your forked repository. To do this, navigate to your repository on GitHub. Click on the ‘Actions’ tab. If you see a notice about workflow permissions, click on ‘I understand my workflows, go ahead and enable them’.
  • In the Makefile of your repository, change the tenant_name variable to match the name of the tenancy you created.