Aggregating results of downstream parameterised jobs in Jenkins

I'll outline the manual solution (as mentioned in the comments), and provide more details if you need them later:

Let P be the parent job and D be a downstream job (you can easily extend the approach to multiple downstream jobs).

  1. An instance (build) of P invokes D via the Parameterized Trigger Plugin as a build step (not as a post-build action) and waits for D to finish. Along with other parameters, P passes to D a parameter - let's call it PARENT_ID - based on P's build's BUILD_ID.
  2. D executes the tests and archives them as artifacts (along with jUnit reports - if applicable).
  3. P then executes an external Python (or internal Groovy) script that finds the appropriate build of D via PARENT_ID (iterate over D's builds and examine the value of their PARENT_ID parameter). The script then copies the artifacts from D to P, and P publishes them.

If using Python (that's what I do), utilize the Python JenkinsAPI wrapper. If using Groovy, utilize the Groovy Plugin and run your script as a system script; you can then access Jenkins via its Java API.
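As a rough illustration of step 3, a system Groovy script along these lines could run as a build step of P. The variables build and listener are bound by the Groovy Plugin in system mode; the job name D and the parameter name PARENT_ID follow the naming above, and copying via artifactsDir is one possible approach - adjust to your setup:

    import jenkins.model.Jenkins
    import hudson.model.ParametersAction
    import hudson.FilePath

    def parentId = build.getEnvironment(listener)['BUILD_ID']
    def d = Jenkins.instance.getItemByFullName('D')

    // Find the build of D that was triggered with our PARENT_ID
    def match = d.builds.find { b ->
      b.getAction(ParametersAction)?.getParameter('PARENT_ID')?.value == parentId
    }

    if (match) {
      // Copy D's archived artifacts into P's workspace so P can publish them
      new FilePath(match.artifactsDir).copyRecursiveTo('**/*', build.workspace)
    } else {
      listener.logger.println("No build of D found for PARENT_ID=${parentId}")
    }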


I came up with the following solution using declarative pipelines.

It requires installation of "copy artifact" plugin.

In the downstream job, set an env variable with the path (or pattern) of the result file:

post {
  always {
    script {
      // Note: must be set before any step that may fail
      env.RESULT_FILE = 'Devices\\resultsA.xml'
    }
    xunit([GoogleTest(
      pattern: env.RESULT_FILE
    )])
  }
}

Note that a post condition such as always contains steps directly; it must not be wrapped in a steps block (that is only valid inside a stage).

Note that I use xunit here, but the same applies with junit.
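For reference, a junit-based variant of the same post block might look like this (the allowEmptyResults flag is my addition, to avoid failing the build when a crashed run produced no report at all):

    post {
      always {
        script {
          env.RESULT_FILE = 'Devices\\resultsA.xml'
        }
        // junit consumes the same Ant-style pattern as xunit
        junit testResults: env.RESULT_FILE, allowEmptyResults: true
      }
    }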

In the parent job, keep a reference to each downstream run, then aggregate the results in the post section with the following code:

def runs=[]

pipeline {
  agent any
  stages {
    stage('Tests') {
      parallel {
        stage('test A') {
          steps {
            script {
              runs << build(job: "test A", propagate: false)
            }
          }
        }
        stage('test B') {
          steps {
            script {
              runs << build(job: "test B", propagate: false)
            }
          }
        }
      }
    }
  }
  post {
    always {
      script {
        currentBuild.result = 'SUCCESS'
        def result_files = []
        runs.each {
          if (it.result != 'SUCCESS') {
            currentBuild.result = it.result
          }
          copyArtifacts(
            filter: it.buildVariables.RESULT_FILE,
            fingerprintArtifacts: true,
            projectName: it.getProjectName(),
            selector: specific(it.getNumber().toString())
          )
          result_files << it.buildVariables.RESULT_FILE
        }
        env.RESULT_FILE = result_files.join(',')
        println('Results aggregated from ' + env.RESULT_FILE)
      }
      archiveArtifacts env.RESULT_FILE
      xunit([GoogleTest(
        pattern: env.RESULT_FILE
      )])
    }
  }
}

Note that the parent job also sets the env variable RESULT_FILE, so its results can themselves be aggregated by a further upstream job.
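To sketch that last point: a further upstream job could consume the parent the same way. Since RESULT_FILE is a comma-separated list of Ant patterns, it can be passed directly to the copyArtifacts filter (the job name "parent" is a placeholder):

    script {
      def run = build(job: 'parent', propagate: false)
      // RESULT_FILE is a comma-separated Ant pattern list,
      // which the copyArtifacts 'filter' parameter accepts as-is
      copyArtifacts(
        filter: run.buildVariables.RESULT_FILE,
        projectName: run.getProjectName(),
        selector: specific(run.getNumber().toString())
      )
    }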

Tags:

Jenkins