Ktirio Urban Building CI/CD Workflow

The Continuous Integration and Continuous Deployment (CI/CD) pipeline for the Ktirio Urban Building (KUB) project is designed to handle the development, testing, packaging, and deployment of both software and documentation. This document outlines the key components of the pipeline and the workflow steps involved.

1. Key Components

The CI/CD pipeline consists of the following key components:

  • C++ Source Code: The primary programming language for the project.

  • Python Source Code: Used for scripting and testing purposes.

  • AsciiDoc/Antora-based Documentation: For maintaining project documentation.

  • GitHub Actions: Orchestrates the CI/CD pipeline, automating the build and deployment processes.

  • GHCR.io: A container registry service that hosts the project's Docker images (see the pull example after this list).

  • Ktirio-Urban-Building (KUB) Repository: Hosts all components and is responsible for releases.
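The images published to GHCR.io can be pulled with standard Docker tooling, as in the brief sketch below; the organization and image name are placeholders rather than the project's published coordinates.

    # Authenticate against GHCR.io with a GitHub token (required for private images).
    echo "$GITHUB_TOKEN" | docker login ghcr.io -u <username> --password-stdin

    # Pull the KUB image; <org> and the tag are illustrative placeholders.
    docker pull ghcr.io/<org>/ktirio-urban-building:latest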

2. Workflow Steps

The workflow consists of the following major jobs; a sketch of a corresponding GitHub Actions configuration follows the list:

  1. Build Documentation (build_docs):

    • Triggered on push events to any branch.

    • Builds, installs, packages, and deploys documentation.

    • Deploys to the gh-pages branch only on pushes to the master branch.

  2. Build and Package Code (build_code):

    • Triggered on push events, except when commit messages contain 'code skip'.

    • Handles building, testing, and packaging of C++ and Python code.

    • Uploads artifacts and creates releases on version tag pushes.

  3. Docker Image Creation and Push (deliver):

    • Depends on the build_code job.

    • Builds a Docker image and pushes it to GHCR.io.

    • Triggered only on changes to the master branch.
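The following is a minimal sketch of how these jobs could be wired together in GitHub Actions. The job names (build_docs, build_code, deliver) come from this document; all step contents, action versions, and conditions are illustrative assumptions rather than the project's actual configuration.

    # .github/workflows/ci.yml -- illustrative sketch only
    name: CI

    on:
      push:
        branches: ['**']
        tags: ['v*']

    jobs:
      build_docs:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Build, install, and package documentation
            run: echo "build the Antora site here"        # placeholder step
          - name: Deploy to gh-pages
            if: github.ref == 'refs/heads/master'
            run: echo "publish the site to gh-pages here"

      build_code:
        # Skipped when the commit message contains 'code skip'.
        if: ${{ !contains(github.event.head_commit.message, 'code skip') }}
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Build, test, and package C++ and Python code
            run: echo "configure, compile, run tests, build packages here"
          - uses: actions/upload-artifact@v4
            with:
              name: packages
              path: dist/
          - name: Create release
            if: startsWith(github.ref, 'refs/tags/v')
            run: echo "create a GitHub release from the artifacts here"

      deliver:
        needs: build_code
        if: github.ref == 'refs/heads/master'
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Build and push Docker image
            run: echo "docker build and push to ghcr.io here"

The needs: build_code dependency and the master-branch condition on deliver mirror the constraints listed above.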

3. EuroHPC Supercomputing Deployment

After the Docker image has been created and pushed, the pipeline extends to deploying the software on EuroHPC supercomputing platforms (a command-level sketch follows the list below):

  • Apptainer Conversion:

    • Docker images are pulled from GHCR.io and converted into Apptainer containers.

  • Deployment on EuroHPC Supercomputers:

    • The Apptainer containers are deployed on various supercomputing platforms, including PSNC, Lumi, Vega, and Meluxina.

    • Deployments are managed using the Slurm workload manager.
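A shell-level sketch of these two steps, as they might be run on an EuroHPC login node, is shown below. The image path, executable name, and Slurm resource values are placeholders; each site has its own accounts, partitions, and module environment.

    #!/usr/bin/env bash
    # Convert the Docker image from GHCR.io into an Apptainer image (SIF).
    # <org> and the tag are illustrative placeholders.
    apptainer pull kub.sif docker://ghcr.io/<org>/ktirio-urban-building:latest

    # Minimal Slurm batch script running the containerized application.
    # Resource values and the executable name are illustrative only.
    cat > run_kub.sbatch <<'EOF'
    #!/bin/bash
    #SBATCH --job-name=kub
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=4
    #SBATCH --time=01:00:00

    srun apptainer exec kub.sif ktirio-urban-building --help
    EOF

    sbatch run_kub.sbatch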

4. Scalability and Future Expansion

The workflow is designed for scalability and can easily be expanded to include additional supercomputers in the EuroHPC network. This modular approach allows new platforms to be integrated, or existing steps to be changed, without restructuring the rest of the pipeline, as illustrated in the sketch below.
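If site deployments were driven from the workflow itself, one way to express this extensibility is a job matrix over target sites, so that adding a supercomputer amounts to adding one list entry. The fragment below is a hypothetical sketch, not the project's configuration.

      deploy_hpc:
        needs: deliver
        runs-on: ubuntu-latest
        strategy:
          matrix:
            site: [psnc, lumi, vega, meluxina]   # add new EuroHPC sites here
        steps:
          - name: Deploy to ${{ matrix.site }}
            run: echo "trigger Apptainer deployment on ${{ matrix.site }}"   # placeholder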

5. Workflow Diagrams

The current status is as follows:

[Diagram: current Ktirio Urban Building CI/CD workflow]

In the future, the workflow will integrate the Hidalgo2 services as follows:

Figure 1. Hidalgo2 Urban Building workflow