Overview

Devops5d (D5D) is a system for running repetitive tasks, or tasks that need to be executed in a particular order, optionally controlled by trigger points, which can be manual or automated. Tasks can also have conditions that must be true before they are executed, and once a task is done, stakeholders may need to be informed about that action or about the entire pipeline. The D5D system covers all of this through a user interface that lets you create Pipelines, which are a series of blocks, where each block consists of a trigger (optional), conditions (optional), an action (mandatory) and observers (optional). You can connect these blocks together in the UI and run them one or many times. The results of each run are presented back in the UI.

Frontend

The D5D frontend allows you to create users with multiple roles, typically Administrators, pipeline creators and pipeline executors. The system also lets you store sensitive information such as SSH credentials, container registry credentials, cloud credentials etc., which can be used in the blocks of your pipeline.

Each user can create one or more applications, and each application can be made up of many projects. A good way to understand this is to think of an application as a collection of microservices, where each microservice is a project (typically backed by a separate svn/git repository). The projects combine to make up the application. When creating pipelines for the application, each block in the pipeline can indicate which project it operates against. With this arrangement, you can run blocks that pertain to each project and connect them together to cover the entire application. For example, you could write a block to build an Order project and another block to build a User project, and then write further blocks to deploy them into a Kubernetes cluster. There can be any number of blocks in each pipeline, and each block is associated with a single project.

You can also create environments, which are stages in your lifecycle; for example, you might have development, stage, UAT and release stages for your applications. When creating pipelines, you can also set up which stage each block runs against, so you can implement a typical Dev -> Stage -> UAT -> Production lifecycle.

Once a pipeline is created and available, you can create Jobs against it, which are instances of that pipeline executed at that point in time. You can also schedule pipelines to run at specific times so they are executed automatically.

Backend

Once a job is processed and sent from the frontend, it reaches the D5D backend engine.

The backend engine of the Devops pipeline provides a way to create custom Kubernetes pipeline objects that contain tasks. Each task can execute in a variety of ways depending on the nature of the work it needs to do. Each task can also contain attributes that help the action perform its work; for example, a Jenkins task needs to know the Jenkins URL, username, job name and so on. Each action also has Conditions that are evaluated before the action can proceed; if the conditions are not met, the action is skipped. There are also Observers, at the task or pipeline level, that let other systems know when something has happened (Success/Failure), and Triggers, which identify manual approvals that need to happen between tasks.

This is done through a set of Kubernetes Operators, written using the Kubebuilder project. If you want to learn more about Kubebuilder, go through the book at https://book.kubebuilder.io/

There are three basic CRDs (Custom Resource Definitions) that handle the backend:

  • Pipeline: The main object that contains the pipeline attributes, the tasks that make up the pipeline and any observers. For each job that the frontend raises, a new Pipeline object is created.
  • PipelineTask: These are the tasks in the above Pipeline object. For each task in the pipeline, a PipelineTask object is created and executed. The status of the PipelineTask is stored in the PipelineTask itself as well as in the parent Pipeline.
  • PipelineTrigger: A custom object that stores information about approvals. Any time an approval is required, a PipelineTrigger object is created internally and is used to track how tasks are blocked and how the block is resolved.

In addition to these objects, some standard Kubernetes objects are used as below:

  • Pods: Each PipelineTask creates a Pod object that executes the action. This isolates the action and makes sure the environment is protected and sandboxed inside that pod. Anything the action does is isolated inside the pod and has no access outside of it.
  • ConfigMaps: The D5D system creates some configmaps for its own internal purposes -- for example, to store its actions (see Actions). It also creates configmaps to store blocked reasons in case a script decides to block execution because the environment is not ready (TODO: Link to blocks).
  • Secrets: The D5D system creates a few secrets to temporarily store information from processed pods, but removes them at the end of processing.

Basic Concepts

From a user standpoint, there are only a few things to know. The end user, or the developer who creates pipelines in the UI, submits a job. This does the following:

  1. Creates a Pipeline CRD (Custom Resource Definition) object for the pipeline that needs to be executed
  2. In that object, creates a list of Tasks, along with Conditions/Observers to let the user and others know when it is completed
  3. In each of the Tasks, the following elements are available:
     a. Global Pipeline attributes that can be used throughout the pipeline
     b. An Action (indicated by how it needs to be run, either as pod/php/python)
     c. Optional Attributes for the action
     d. Conditions (simple conditions, "AND"-ed conditions, "OR"-ed conditions)
     e. Observers
  4. Provides a pipeline-level observer if the user needs to be informed about the status of the pipeline

This is how a sample pipeline looks (please note that this does not cover all the elements above):

apiVersion: operations.starsystems.in/v1
kind: Pipeline
metadata:
  annotations:
    modified-by-d5d: "true"
  name: t1-1-62
  namespace: test1-2
spec:
  name: t1
  properties:
  - name: T1
    property:
      value: Testing
  tasks:
  - name: she1
    projectId: None
    environmentId: None
    internalName: shell_command_runner
    attributes:
    - configmap:
        value: she1-1-script
      name: script
      type: configmap
    - configmap:
        value: she1-1-pararmeters
      name: pararmeters
      type: configmap
    python:
      configMapName: shell-command-runner
      name: main.py
      packages:
      - requests
    type: python

Internally, the system creates a PipelineTask for each Task mentioned in the pipeline and monitors the status of that task. A task can be in a variety of statuses depending on whether it is yet to run, running, blocked, completed or failed. The overall status of a pipeline is derived from all the tasks that make up the Pipeline.

In the case of blocks where manual input is required, the system creates a PipelineTrigger that is expected to be filled in by the end user (or the program that sits between the backend and the UI). The user then fills out the PipelineTrigger object to "unblock" it, after which the operator checks again whether it is okay to continue past that point. If it is, the task is "unblocked" and the flow continues from there. See the Triggers section for more information.

Pipelines

The Pipeline Kubernetes operator is the basic building block for how users interact with the backend. A pipeline has a name, a namespace and is of kind: Pipeline. The purpose of this object is to serve as the container for all the tasks that run in it. Each execution of a pipeline results in a separate Pipeline object being created, i.e. if there is a pipeline called "Star Basic deployment", every instance of it will create a new Pipeline with

metadata:
  name: star-basic-deploy-1

The next instance of execution will be

metadata:
  name: star-basic-deploy-2

and so on...

Each pipeline has a Spec element under which the name of the pipeline is stored. This is the base name of the pipeline that was created, i.e. for the above example it will be "Star Basic deployment". The Spec element also has an array of Tasks. Details about the Task can be found here.
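Putting the instance name and the base name together, a minimal sketch of such a Pipeline (task details elided) would look like:

metadata:
  name: star-basic-deploy-1
spec:
  name: Star Basic deployment
  tasks:
  - name: task-1
    ....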

Any number of tasks can be listed one after the other, and they are run in succession. If the condition mentioned for a task is not met, that task is skipped and the next task is considered.

The Status section of a Pipeline provides a lot of information about what happened on the Pipeline. It contains the Status of the pipeline, which can be one of

  1. NOT_STARTED: Pipeline has not started the execution. This can happen if there are many pipeline jobs waiting to be executed
  2. IN_PROGRESS: Pipeline is in this status when the execution is happening. At this point, the tasks of that pipeline are running
  3. COMPLETED: Pipeline has completed and all tasks have reached an end-state
  4. BLOCKED: Pipeline has been blocked waiting for a manual trigger to proceed. This would have created a PipelineTrigger object with the same name as the pipeline

The Status block also contains a startedAt field which indicates the time at which the Pipeline started execution.

The Status block also contains a tasksOutput section with information about each task, specifically the following

  1. name: The name of the task as mentioned in the Spec section
  2. fullName: The full name that can be used to query the corresponding PipelineTask object. If you want to query that PipelineTask, you need this field
  3. artifactsDirectory: The directory in a Minio installation where the output information is kept. The Minio server is installed as part of the D5D setup and houses all the output artifacts. Inside an artifactsDirectory you might find a Stdout file containing all the output from that specific task, a Stderr file containing all the error output, and a Parameters file containing the Attributes that are output by that task. It could also contain a BlockedReason file with the reason why that task was blocked. The system can read these files for you and present them in the UI accordingly.
  4. status: This is the status of the task, which can be one of

a. NOT_STARTED: This has been created but not started

b. IN_PROGRESS: This task is now getting executed. You can query the PipelineTask object to find more information about the task

c. SKIPPED: This task is skipped because the condition(s) mentioned in the Task cannot be satisfied

d. BLOCKED: This task has a manual trigger that needs to be addressed before moving on

e. COMPLETED: This task has completed successfully.

Output of the Pipeline for the example given in Basic Concepts:

emerald:testpipeline raja$ kubectl get pipelines -n test1-2 t1-1-62  -o yaml
apiVersion: operations.starsystems.in/v1
kind: Pipeline
metadata:
  name: t1-1-62
  namespace: test1-2
spec:
  ....
status:
  startedAt: "2021-03-28T15:45:01Z"
  status: Completed
  tasksOutput:
  - artifactsDirectory: t1-1-62/t1-1-62-she1-1616946301533
    fullName: t1-1-62-she1-1616946301533
    name: she1
    status: Completed

Pipeline Tasks

The backend creates PipelineTask objects for each of the Tasks mentioned in the main Pipeline. These objects carry all the details about how the task should be executed and are intended for internal use only. We do not support changing PipelineTask objects directly, as their lifecycle is tightly connected to how the Pipeline executes.

When the Pipeline object determines that it is ready to run another task, it creates a PipelineTask object and passes in all the parameters required for its execution. Before creating it, it checks whether any Triggers are satisfied and their blocks removed. Once the task is ready to be executed, the PipelineTaskController takes over executing the Task.

Since the PipelineTask objects are internal, let's look at the Task objects inside the Pipeline, as those are what users create and interact with.

A pipeline task similar to

apiVersion: operations.starsystems.in/v1
kind: Pipeline
....
spec:
  name: t1
  properties:
  - name: T1
    property:
      value: Testing
  tasks:
  - attributes:
    - configmap:
        value: she1-1-script
      name: script
      type: configmap
    - configmap:
        value: she1-1-pararmeters
      name: pararmeters
      type: configmap
    environmentId: None
    internalName: shell_command_runner
    name: she1
    observers:
    - type: slack
    - type: email
    projectId: None
    python:
      configMapName: shell-command-runner
      name: main.py
      packages:
      - requests
    type: python

translates to a PipelineTask as

apiVersion: operations.starsystems.in/v1
kind: PipelineTask
metadata:
  name: t1-1-62-she1-1616946301533
  namespace: test1-2
  ...
  ownerReferences:
  - apiVersion: operations.starsystems.in/v1
    blockOwnerDeletion: true
    controller: true
    kind: Pipeline
    name: t1-1-62
    uid: e042ec71-e1ce-4e60-8da5-b0fc0318ed66
  ...
spec:
  name: she1
  internalName: shell_command_runner
  projectId: None
  environmentId: None
  type: python
  attributes:
  - configmap:
      value: she1-1-script
    name: script
    type: configmap
  - configmap:
      value: she1-1-pararmeters
    name: pararmeters
    type: configmap
  envVariables:
  - key: T1
    value: Testing
  exportVariables:
  - exportparameters:
    - parameter:
        key: T1
        value: Testing
      type: attribute
    internalName: GLOBAL
    taskName: GLOBAL
  inputs:
    code:
    - ""
    configmap:
    - shell-command-runner
    imageName:
    - rajasaur/python-code-exec:latest
    mainName:
    - main.py
    requirements:
    - requests

There are a few bits of information missing from the above YAML as they are internal (Kubernetes related and not necessary for our understanding at this point). You can see that the tasks in the Pipeline object match the PipelineTask object, with some new attributes introduced in the PipelineTask so it can be managed effectively through a Pod object.

There are some interesting elements to understand in the PipelineTask object. The name of the PipelineTask is autogenerated and can be found via the Status block of the corresponding Pipeline. The ownerReferences indicate the Pipeline object that owns this PipelineTask.

In the Spec section, you can see there is a key for envVariables. This contains all the environment variables that are available to this Task from all the previous Tasks; the parameters passed into this task are stored in this attribute. All exported attributes from previous tasks and global parameters are stored in the exportVariables section. This section contains all the exported parameters, with later task parameters overriding any previously written ones of the same name. The internalName and taskName on each of these exported parameters indicate which task and internal name they belong to (for easier reference).

To write a parameter that needs to be available for future use, you can write it into the file denoted by $TMP_DIR. This environment variable points to a temporary file on the system that is used to store parameters that can be exposed. Please note that entries need to be of the format Key=Value. Longer strings can be base64 encoded and will be automatically decoded when they are used in other tasks in the same pipeline. Anything written to this file ($TMP_DIR) is parsed and added to the list of environment variables available to subsequent steps.

In addition to any environment variables that are passed down, any Attributes that are specific to this Task are also passed to it as environment variables, prefixed with PARAM_. For example, if an attribute called JENKINS_URL was passed in, an environment variable called PARAM_JENKINS_URL will be available for use in the Task. The attributes in the envVariables section are prefixed with PARAM_, so actions need to refer to them as PARAM_<attrname>; this keeps them from being mixed up with other variables that may be available in the pipeline.
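As a minimal sketch of both mechanisms, the following hypothetical inline Python task (the task name, attribute value and exported key are illustrative, and optional fields such as packages are omitted) reads an attribute through its PARAM_ prefixed environment variable and exports a Key=Value pair through $TMP_DIR for later tasks:

  tasks:
  - name: read-and-export
    type: python
    attributes:
    - attribute:
        value: https://jenkins.example.com
      name: JENKINS_URL
      type: attribute
    python:
      name: main.py
      code: |
        #!/usr/bin/env python
        import os
        # The JENKINS_URL attribute arrives as PARAM_JENKINS_URL
        print("Jenkins is at", os.environ.get("PARAM_JENKINS_URL"))
        # Export a Key=Value line for later tasks through the $TMP_DIR file
        with open(os.environ["TMP_DIR"], "w") as f:
            f.write("LAST_JENKINS_URL=" + os.environ.get("PARAM_JENKINS_URL", ""))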

The next part of the Spec element is an inputs section, which contains details about the work to be executed. The elements of inputs vary according to the type of Action performed. Please refer to the Actions section to learn what the values will be for the various types.

The Status section of the PipelineTask contains the status of the Task, similar to the Task section mentioned in the Pipeline, and any output emitted as part of this Task.

Pipeline Triggers

A PipelineTrigger object is created whenever manual input is required as part of a pipeline. This object is created automatically by the framework when it detects that a manual trigger is needed. A sample PipelineTrigger looks like this:

apiVersion: operations.starsystems.in/v1
kind: PipelineTrigger
metadata:
  generation: 1
  name: pipeline-pod-condition-13
  ownerReferences:
  - apiVersion: operations.starsystems.in/v1
    blockOwnerDeletion: true
    controller: true
    kind: Pipeline
    name: pipeline-pod-condition-13
    uid: 7103266e-5f14-4f6a-bd7f-99d6117b06bc
  selfLink: /apis/operations.starsystems.in/v1/namespaces/default/pipelinetriggers/pipeline-pod-condition-13
spec:
  block-reliever: raja
  block-type: user
  task: task-2
  unblocked-by: raja

Looking through the above structure, the base metadata is shown, which contains the kind PipelineTrigger and the name pipeline-pod-condition-13. The name is the same as the name of the pipeline, and that is how the pipeline can be unblocked. The idea is for the application to edit this object and add the unblocked-by information so the system can validate it and unblock the pipeline. The input to the application can come through workflow/email/slack/other mechanisms, and the application then edits the object with the user who performed the unblock.

The Spec section also contains the details of why the pipeline is blocked. For example, it indicates the Task that was blocked, the type of block (whether a user or a group is needed to unblock it) and the name of the group/user who can unblock it.

The application should then, after validation, enter in unblocked-by the name of the person/group that unblocked it (through Email/Slack/other mechanisms). The backend will then go through the whole check again; if there are any further blocks, it will block the pipeline again, otherwise it will continue to completion.

The PipelineTrigger, once completed, contains the history of all unblocks and the time at which they were entered into the system. An example based on the above would be:

status:
  blocks:
  - block-reliever: raja
    block-type: user
    task: task-2
    unblock-type: direct
    unblocked-at: "2020-04-03T05:40:58Z"
    unblocked-by: raja
  status: unblocked

The main status indicates the block status, i.e. whether the pipeline is currently blocked or not. The blocks array then contains one element for each block along with who relieved it. It also contains the timestamp of when the block was released.

Actions

Actions are the core workhorses of the application. This is where the various tasks that the DevOps pipeline needs are executed. Actions are custom blocks that D5D developers, or end users/developers, can write and add to the system.

There are many pre-defined actions available out of the box, and the Devops5D system will keep adding to and improving them.

Actions are functions that need to be executed, for example:

  1. Running a Jenkins Job
  2. Creating an image from a Dockerfile
  3. Updating a Deployment
  4. Creating a Firewall rule in Google Cloud
  5. Creating an EBS volume in AWS

For now, we support 5 types of tasks. Each is indicated by a type attribute and has a corresponding type-specific object at the task level that defines how it runs:

  1. Pod -- Write any action as a set of shell commands
  2. PHP -- Write any action as a PHP Script (or set of scripts with composer support)
  3. Python -- Write any action as a Python script (or set of scripts with pip3 support)
  4. Golang -- Write any action as a Golang script (or set of scripts with go.mod support)
  5. Shell -- Mostly internal

Pod

If you want to run commands in an isolated environment, this is the type to use.

You can run this as a shell script (preferred) that is uploaded as a configmap, or as a series of commands inside a Pod.

For example, you can write a script called main.sh which executes a series of commands. This is stored as a configmap (see how_to_write_actions.md TODO) and made available when the task is ready for execution. Pods also provide a way to export information that subsequent steps might need through environment variables: any variables this task needs to export to the pipeline can be written to $TMP_DIR. The framework takes care of reading from TMP_DIR and exporting the values for other tasks to use.

Pods can also use Attributes, which feed information to them through environment variables. Any inputs to these commands can then be accessed as $PARAM_VARIABLE_NAME, where VARIABLE_NAME is an attribute passed to the task.

A sample Pod task is shown below

  - name: action
    type: pod
    environmentId: "1"
    projectId: "2"
    internalName: create_website_in_s3
    attributes:
    - name: credentials
      secret:
        value: google
      type: secret
    - attribute:
        value: star151998
      name: BUCKET
      type: attribute
    - attribute:
        value: ap-south-1
      name: REGION
      type: attribute
    pod:
      configMapName: create-website-in-s3
      image: debian:buster-slim

The above pod runs an action called create_website_in_s3 (denoted by the internalName). There are some attributes (see Attributes) that are passed to it; these are the parameters required for this action to run, and the attribute list changes from action to action depending on what it needs. The task also has a pod section consisting of configMapName and a base image to run on. The configMapName is the configmap in the d5dpipeline-system namespace that contains the logic to create a website in S3, which is the action in this case. The image tag sets up the base image on which the script from configMapName runs, so if you need a specialized image you can specify it here and provide a script that makes use of it. If you don't specify an image, it defaults to an Alpine base image.
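Pods can also be written as a plain list of commands instead of a configmap-backed script (the Triggers example later in this document uses this form). A minimal sketch, assuming the commands are run through a shell and reusing the BUCKET attribute from the sample above (the task name and exported key are illustrative), might look like:

  - name: report-bucket
    type: pod
    attributes:
    - attribute:
        value: star151998
      name: BUCKET
      type: attribute
    pod:
      commands:
      # Attributes arrive as PARAM_ prefixed environment variables
      - echo "Deploying to bucket $PARAM_BUCKET"
      # Export a Key=Value pair for later tasks via the $TMP_DIR file
      - echo "DEPLOY_BUCKET=$PARAM_BUCKET" >> $TMP_DIR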

PHP

If you want to execute an action written as a PHP script, you would use this type.

type: php is used to indicate that this action is a PHP script that needs to be run. We can either run PHP files stored as a zip file in a configmap (preferred) and feed the configmap to the action, or run them as standalone PHP files. We will discuss both below.

You can also install any dependencies that the inline script or the configmap needs through composer. If you specify any elements in the packages array, the backend system generates a composer.json with them and installs them before the actual script (inline or configmap) is executed.

Running as configmap
  tasks:
  - name: task-1
    type: php
    php:
      name: test.php
      packages:
      - "php:>=5.3.0"
      configMapName: php-test    

If there is more than one file that needs to be executed, you can use the configmap setup to hold the files. The set of PHP files needs to be combined into a zip file and stored as a configmap using

kubectl create configmap php-test -n d5dpipeline-system \
    --from-file=<path to php zip>.zip \
    --dry-run=client -o yaml | kubectl annotate \
    action-name=<internal action name> \
    task-details='{"type":"php","configmap":"php-test","name":"test.php"}' \
    --local -f - -o yaml | kubectl apply -f -

The above command creates a Configmap called php-test in the d5dpipeline-system namespace and annotates it with

  1. action-name: Indicates which action that the pipeline should map to
  2. task-details: Indicates what other configuration is required for this action to run.

This setup is common for all types of Actions, so configmaps for every action type are created the same way. Please refer to Writing the Actions (TODO) to learn how to write actions and publish them into the d5dpipeline-system namespace.

The system runs composer install for each of the packages, unpacks the zip at /app and runs the file given in the name attribute.

Running as inline PHP
  - name: task-2
    type: php
    php:
      name: main.php
      packages:
      - "php:>=5.3.0"
      code: |
        <?php
        echo "Hello, this is from a php pod\n";
        echo "Value of environment variable is: " . getenv('T');
        $myfile = fopen(getenv('TMP_DIR'), "w");
        fwrite($myfile, "T=Another value");
        fclose($myfile);

In the above example, you can see a PHP script run inline. The basic parameters of the php element are

  1. name: Contains the name of the PHP script. This can be anything when the script is inline; if a configmap is used, name is the main PHP file that needs to be executed
  2. packages: Contains an array of dependencies that the script might need. Each element of the packages array contains the package and the version constraint in the format <package_name>:<operator><version>, so php:>=5.3.0 means any version of PHP >= 5.3.0
  3. code: Contains a multi-line YAML string with the PHP script that needs to be executed. Environment variables in the script can be obtained through a standard getenv call

Python

If you want to execute an action written as a Python module, you would use this type.

type: python is used to indicate that this action is a Python script that needs to be run. We can either run standalone Python scripts or store them as a zip file in a configmap and feed the configmap to the action. We will discuss both below.

You can also install any dependencies that the inline script or the configmap needs through pip. If you specify any elements in the packages array, the backend system generates a requirements file with them and installs them before the actual script (inline or configmap) is executed. In the case of a configmap, a requirements file inside the zip, if present, is used for installing the required packages.

Running as configmap
  tasks:
  - name: task-1
    type: python
    python:
      name: main.py
      packages:
      - requests
      configMapName: test

This setup allows you to create a zip file containing the Python modules and an optional requirements file, for a multi-module Python setup. The name attribute indicates which main Python file needs to be executed. The configmap is created very much like in the PHP example:

kubectl create configmap python-test -n d5dpipeline-system \
    --from-file=<path to python zip>.zip \
    --dry-run=client -o yaml | kubectl annotate \
    action-name=<internal action name> \
    task-details='{"type":"python","configmap":"python-test","name":"main.py"}' \
    --local -f - -o yaml | kubectl apply -f -

The name of the file to be executed in the zipfile is stored in the name attribute.

The system runs pip install for each of the packages, unpacks the zip at /app and runs the file given in the name attribute.

Running as inline python
  tasks:
  - name: task-1
    type: python
    python:
      name: test.py
      packages:
      - requests
      code: |
        #!/usr/bin/env python
        print("Hello, this is from a python pod")
        print("Some more output")

In the above example, you can see a Python script run inline. The basic parameters of the python element are

  1. name: Contains the name of the Python module. This can be anything when the script is inline; if a configmap is used, name is the main Python file that needs to be executed
  2. packages: Contains an array of dependencies that the script might need. These are installed through pip
  3. code: Contains a multi-line YAML string with the Python script that needs to be executed. Environment variables in the script can be obtained through the standard os.environ dict

Golang

If you want to execute an action written as a Golang binary, you would use this type.

type: golang is used to indicate that this action is a Go module that needs to be run. The module is stored as a zip file in a configmap, and the configmap is fed to the action.

You can also install any dependencies that the module needs through go.mod. Any modules listed in the go.mod file are installed automatically as part of running the main module.

Running as configmap
  tasks:
  - name: checkout-source-24
    type: golang
    internalName: _checkout_source
    projectId: "24"
    environmentId: "24"
    attributes:
    - attribute:
        value: "5"
      name: applicationId
      type: attribute
    - attribute:
        value: "24"
      name: environmentId
      type: attribute
    - attribute:
        value: new-application
      name: applicationName
      type: attribute
    - attribute:
        value: new-application-5
      name: namespace
      type: attribute
    golang:
      configMapName: checkout-source
      name: checkout.go

This setup allows you to create a zip file containing the Go modules and a go.mod file, for a multi-module Go setup. The name attribute indicates which main Go file needs to be executed. The configmap is created very much like in the PHP example:

kubectl create configmap golang-test -n d5dpipeline-system \
    --from-file=<path to golang zip>.zip \
    --dry-run=client -o yaml | kubectl annotate \
    action-name=<internal action name> \
    task-details='{"type":"golang","configmap":"golang-test","name":"main.go"}' \
    --local -f - -o yaml | kubectl apply -f -

The name of the file to be executed in the zipfile is stored in the name attribute.

The system runs go mod tidy on the go.mod file, unpacks the zip at /app and runs the file given in the name attribute.

Shell

This type executes commands in the same pod where the Operators are running. It is only for testing and is NEVER allowed otherwise. To execute this, you would need a setting like

tasks:
  - name: task-1
    type: shell
    shell:
      commands:
        - uname -a

As mentioned earlier, this is purely for testing, so it is best not to use it.

Attributes

The attributes element contains Attributes that can be passed to the actions. Any number of attributes can be passed to an Action. Each attribute belongs to one of the following types:

Attribute

The attribute type contains key/value pairs; any number of them can be passed in. These values are passed into the Action as PARAM_<Attribute Key>, so for example an attribute with a key of "NAME" is passed in as an environment variable called "PARAM_NAME". These are only available to that Task and are not available to any future tasks. Each attribute element has a name and a value, with the type indicating that it is an attribute. Example of an attribute:

    - attributes:
      - attribute:
           value: startest98
        name: BUCKET
        type: attribute
      - attribute:
           reference: 
              taskName: task-1
              internalName: task13
              key: attr2
        name: TESTVAL
        type: attribute

In the example above, the variable called BUCKET will be available in the action as an environment variable called $PARAM_BUCKET with a value of "startest98". The PARAM_ prefix namespaces it so it does not get mixed up with other variables.

In the second example, the variable called TESTVAL does not have a value directly assigned to it but picks up its value from an earlier task: when the previous task finishes, the value it exported is made available to this task. In this case the variable comes from task-1, with an internalName of task13, and the value of the exported variable attr2 from that task is made available as $PARAM_TESTVAL.

ConfigMap

configmap type contains a file or a set of configuration elements that could be passed into an action. Sample of this looks like

    attributes:
    - name: test
      type: configmap
      configmap:
        value: testconfigmap
    - name: attr1
      type: configmap
      configmap:
        reference:
            taskName: task-1
            internalName: task12
            key: django-props

For Attributes of type configmap, there needs to be a parameter called "value" which contains the name of the configmap. This configmap needs to be created in the same namespace as the application that the pipeline is created in; in most cases this is done internally by the system. The configmap is then mounted at /mnt/configmaps/<name of attribute>, so in this case it is mounted at /mnt/configmaps/test. Its contents can then be read as files by the action, i.e. any files in the configmap are available as files under that mount, for example /mnt/configmaps/test/file1.

The second example uses the reference form described in the Attribute section above: everything remains the same except that instead of hardcoding a specific configmap, it is picked up from an earlier task. You still need to provide the taskName where it was made available, the internalName it belongs to and the key that was used to export it in that task.
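As a minimal sketch of how an action might consume such a mounted configmap (the task name is illustrative, and the inline Python form from the Actions section is assumed):

  - name: read-config
    type: python
    attributes:
    - name: test
      type: configmap
      configmap:
        value: testconfigmap
    python:
      name: main.py
      code: |
        #!/usr/bin/env python
        import os
        # The configmap attribute named "test" is mounted at /mnt/configmaps/test
        mount = "/mnt/configmaps/test"
        for entry in os.listdir(mount):
            # Each key in the configmap shows up as a file, e.g. /mnt/configmaps/test/file1
            with open(os.path.join(mount, entry)) as f:
                print(entry, "=>", f.read())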

Secret

secret type contains secret elements that can be passed into an action. Sample of this looks like

    attributes:
    - name: test
      type: secret
      secret:
        value: testsecret
    - name: attr2
      type: secret
      secret:
        reference:
            taskName: task-1
            internalName: task12
            key: secret-file    

For Attributes of type secret, there needs to be a parameter called "value" which contains the name of the secret. This secret is saved in the same namespace as the pipeline, either manually or through the internal D5D system. The secret is mounted at /mnt/secrets/<name of attribute>, so in this case it is mounted at /mnt/secrets/test. It can then be read as files by the action. The secrets are automatically decoded by the backend system and made available to the action.

In the second example, it uses a reference as mentioned in the earlier sections. This is used in cases where the value is not directly available but available through a previous task.

Directory

directory type contains directories that can be passed into an action. Sample of this looks like

    attributes:
    - name: direct
      type: directory
      directory:
        pvcName: test-pvc
    - name: attr1
      type: directory
      directory:
        reference:
            taskName: task-1
            internalName: task11
            key: attr-output5

For Attributes of type directory, the directory that needs to be mounted should be stored as a Persistent Volume Claim, which is simply a volume available in the system. This PVC needs to be created in the same namespace as the pipeline, either manually or through the internal D5D system. The directory is then mounted at /mnt/in_directory/<name of attribute>, so in this case it is mounted at /mnt/in_directory/direct. There is also an environment variable called __DIRECTORY_MOUNTED_<name of attribute> which contains the path where it is mounted; in this case, the environment variable __DIRECTORY_MOUNTED_direct points to /mnt/in_directory/direct. This makes it easy to find the mount point as long as the name of the attribute is known.

In the second example, it uses a reference as mentioned in the earlier sections. This is used in cases where the value is not directly available but available through a previous task.
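A minimal sketch of how an action might locate and read such a mounted directory through the __DIRECTORY_MOUNTED_ variable (the task name is illustrative, and the inline Python form from the Actions section is assumed):

  - name: list-input
    type: python
    attributes:
    - name: direct
      type: directory
      directory:
        pvcName: test-pvc
    python:
      name: main.py
      code: |
        #!/usr/bin/env python
        import os
        # __DIRECTORY_MOUNTED_direct holds the mount path, /mnt/in_directory/direct
        mount = os.environ["__DIRECTORY_MOUNTED_direct"]
        print("Directory mounted at", mount)
        print("Contents:", os.listdir(mount))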

File

file type contains files that can be passed into an action. Sample of this looks like

    attributes:
    - name: fil
      type: file
      file:
        pvcName: test-pvc
    - name: attr1
      type: file
      file:
        reference:
            taskName: task-1
            internalName: task11
            key: attr-output2

For Attributes of type file, the file that needs to be mounted should be stored as a Persistent Volume Claim, which is simply a volume available in the system. This PVC needs to be created in the same namespace as the pipeline, either manually or through the internal D5D system. The file is then mounted at /mnt/in_files/<name of attribute>, so in this case it is mounted at /mnt/in_files/fil. There is also an environment variable called __FILE_MOUNTED_<name of attribute> which contains the path where the file is mounted; in this case, the environment variable __FILE_MOUNTED_fil points to /mnt/in_files/fil. This makes it easy to find the mount point as long as the name of the attribute is known.

In the second example, it uses a reference as mentioned in the earlier sections. This is used in cases where the value is not directly available but available through a previous task.

Conditions

Conditions allow the system to decide whether an action needs to be executed or not. If the conditions do not match at the point of executing an Action, the status of that task becomes Skipped and the Pipeline moves on to the next available Task. The pipeline does not block because conditions do not match; it simply skips that task and moves on.

Conditions can act on exported variables (variables exported by tasks prior to the current task) or on pipeline variables (global variables configured when the pipeline is set up).

D5D supports only three types of condition clauses at this time.

  1. VariableRelated: Checks whether a variable matches a given value. This is the most common type of condition, checking whether, say, a key K matches value V1.
  2. VariableTrue: Checks whether the given variable evaluates to a True value
  3. VariableFalse: Checks whether the given variable evaluates to a False value

More condition types will be added in the future.

Conditions can be grouped in three ways:

Simple Conditions

Type: simple.

This is the simplest case: there is one condition to check, and the backend checks whether it matches. An example:

  - name: task-3
    type: pod
    ....
    conditions:
    - conditiontype: simple
      simple:
        name: "Test if T1 is something"
        conditionclause: variable-related
        variable_related_params:
          variable: T1
          operator: equals
          value: something

As can be seen above, the type is "simple" and the "simple" struct below it contains the information for a simple condition. In this case it checks whether the variable "T1" has the value "something". If this condition matches, the corresponding action is executed; otherwise it is skipped.

And conditions

Type: and.

And conditions are used when you want to nest two conditions into an AND mode. Example is below:

  - name: task-2
    type: pod
    ....
    conditions:
    - conditiontype: and
      and:
        conditions:
        - conditiontype: simple
          simple:
            conditionclause: variable-related
            name: t_equal_value
            variable_related_params:
              operator: equals
              variable: T
              value: Value
        - conditiontype: simple
          simple:
            conditionclause: variable-related
            name: t1_equal_value1
            variable_related_params:
              operator: equals
              variable: T1
              value: Value1

As can be seen above, this checks for two conditions to be true: the variable "T" must be equal to "Value" and the variable "T1" must be equal to "Value1". Both conditions need to be true for this action to be executed.

OR Conditions

Type: or.

OR conditions are used when you want to nest two conditions into an OR mode. Example is below:

  - name: task-2
    type: pod
    ...
    conditions:
    - conditiontype: or
      or:
        conditions:
        - conditiontype: simple
          simple:
            conditionclause: variable-related
            name: t_equal_value
            variable_related_params:
              operator: equals
              variable: T
              value: Valued
        - conditiontype: simple
          simple:
            conditionclause: variable-related
            name: t1_equal_value1
            variable_related_params:
              operator: equals
              variable: T1
              value: Value12

This is very similar to the AND condition except that instead of AND-ing the two conditions, it ORs them: either "T" equals "Valued" or "T1" equals "Value12".

You can also combine AND and OR, so you can have a nesting of AND inside OR inside AND conditions. A sample of AND with OR is shown below

  - name: task-2
    internalName: task13
    conditions:
    - conditiontype: and
      and:
        conditions:
        - conditiontype: simple
          simple:
            conditionclause: variable-related
            name: t_equal_value
            variable_related_params:
              operator: equals
              variable: T
              value: Value
        - conditiontype: or
          or:
            conditions:
            - conditiontype: simple
              simple:
                conditionclause: variable-related
                name: t1_equal_value11
                variable_related_params:
                    operator: equals
                    variable: T1
                    value: Value1
            - conditiontype: simple
              simple:
                conditionclause: variable-related
                name: t2_equal_value2
                variable_related_params:
                    operator: equals
                    variable: T2
                    value: Value2

In the case above, there is an inner OR condition which checks whether "T1" equals "Value1" OR "T2" equals "Value2". The result of this is AND-ed with the check of "T" equals "Value".

Example of OR with AND:

    conditions:
    - conditiontype: or
      or:
        conditions:
        - conditiontype: simple
          simple:
            conditionclause: variable-related
            name: t_equal_value
            variable_related_params:
              operator: equals
              variable: T
              value: Value
        - conditiontype: and
          and:
            conditions:
            - conditiontype: simple
              simple:
                conditionclause: variable-related
                name: t1_equal_value11
                variable_related_params:
                    operator: equals
                    variable: T1
                    value: Value11
            - conditiontype: simple
              simple:
                conditionclause: variable-related
                name: t2_equal_value2
                variable_related_params:
                    operator: equals
                    variable: T2
                    value: Value21

In the above example, the inner AND-ing of "T1" equals "Value11" AND "T2" equals "Value21" is OR-ed with the result of "T" equals "Value".

The operator can be one of equals, not_equals, is, is_not, greater_than, greater_than_or_equal, less_than, less_than_or_equal.
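All of the examples above use equals, but any of the listed operators can be used in variable_related_params. A minimal sketch using less_than (the variable name RETRY_COUNT is illustrative):

    conditions:
    - conditiontype: simple
      simple:
        name: "Run only while RETRY_COUNT is below 3"
        conditionclause: variable-related
        variable_related_params:
          variable: RETRY_COUNT
          operator: less_than
          value: "3"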

Observers

Observers are a way to notify other parties about outputs of the system. Right now, we only support

  1. Slack
  2. Email

for notifications. Only Success/Failed/Blocked information is emitted at this time; we do not emit any other information.

Observers can be at the pipeline level or at the task level. At the task level, the notification is sent after the Task is completed or blocked; at the pipeline level, it is sent after the pipeline is completed or blocked.

For Slack to be configured, we need the token and channel name to which information needs to be sent. Sample posted below:

spec:
  name: pipeline-12
  observers:
  - type: slack
    attributes:
    - key: API_KEY
      value: "xoxb-abcdefgh-gh1v3oNgYt4YgBUC8Agmf3xf"
    - key: CHANNEL
      value: "#monitor"
  tasks:
  - name: task-1
    type: pod

As can be seen above, the observer has a type of slack to indicate that this is a Slack observer, and it takes parameters. The parameters for Slack are API_KEY, which is the Slack token, and CHANNEL, the channel to post to.

For Email to be configured, we need the Sendgrid API key and the email addresses to which notifications need to be sent. Sample posted below:

spec:
  name: pipeline-12
  observers:
  - type: email
    attributes:
    - key: API_KEY
      value: "SG.15_86eMBknIXQcEwsRHhDYaa0rQPpLWEAOKvkG_u7bD6Lw"
    - key: TO_ADDRESS
      value: "rajasaur@gmail.com,raja@starsystems.in"
  tasks:
  - name: task-1
    type: pod

We need the API_KEY from Sendgrid and the TO_ADDRESS, which can contain a comma-separated list of addresses to which emails need to be sent.
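The samples above attach the observers at the pipeline level. A task-level observer sits under the task instead, as in the Task example in the Pipeline Tasks section; a minimal sketch, assuming task-level observers accept the same attributes as pipeline-level ones (the token value is a placeholder):

  tasks:
  - name: task-1
    type: pod
    observers:
    - type: slack
      attributes:
      - key: API_KEY
        value: "xoxb-example-token"
      - key: CHANNEL
        value: "#monitor"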

Triggers

Triggers are very similar to conditions, the only difference being that Triggers block the pipeline and do not proceed until the approvals are obtained. Right now, we only support manual triggers before a task runs, and the approval can be given by a user or by a group. There can only be one trigger per action, and the trigger can be a combination of Simple, AND and OR, as described under Conditions.

A sample trigger is shown below; it is an OR condition that waits for either Shiva's or Raja's approval.

  - name: task-2
    type: pod
    pod:
      commands:
        - date
    triggers:
      - type: manual
        condition:
          conditiontype: or
          or:
            name: "test if Raja OR Shiva needs approval"
            conditions:
              - conditiontype: simple
                simple:
                  name: "Test if Raja as user is approving"
                  conditionclause: approval-user
                  approval_user:
                    username: raja
              - conditiontype: simple
                simple:
                  name: "Test if Shiva as user is approving"
                  conditionclause: approval-user
                  approval_user:
                    username: shiva

The system checks whether a block is required and, if the approval is not yet satisfied, it creates an object of type PipelineTrigger. The frontend application then needs to unblock it by setting unblocked-by after verifying that the user/group has approved. Once the PipelineTrigger object is updated, the system checks again whether the action can be unblocked; if not, it updates the PipelineTrigger again and waits for it to be freed. If, on the other hand, it is unblocked and there are no other triggers, the action proceeds to be executed.
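As a hypothetical sketch of the edit the application would make for the trigger above, reusing the field names from the Pipeline Triggers section (the exact spec the operator generates may differ):

spec:
  block-type: user
  task: task-2
  # Set by the frontend application after verifying Raja's approval
  unblocked-by: raja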