How to wait for user input in a Declarative Pipeline without blocking a heavyweight executor
See best practice 7: "Don't: Use input within a node block." In a Declarative Pipeline, node selection is done through the agent
directive.
The documentation here describes how you can define agent none
for the whole pipeline and then use a stage-level agent
directive to run each stage on the required node. I also tried the opposite (define a global agent on some node and then declare none
at stage level for the input stage), but that doesn't work: once the pipeline has allocated a slave, there is no way to release it for one or more specific stages.
This is the structure of our pipeline:
pipeline {
    agent none
    stages {
        stage('Build') {
            agent { label 'yona' }
            steps {
                ...
            }
        }
        stage('Decide tag on Docker Hub') {
            agent none
            steps {
                script {
                    env.TAG_ON_DOCKER_HUB = input message: 'User input required',
                        parameters: [choice(name: 'Tag on Docker Hub', choices: 'no\nyes', description: 'Choose "yes" if you want to deploy this build')]
                }
            }
        }
        stage('Tag on Docker Hub') {
            agent { label 'yona' }
            when {
                environment name: 'TAG_ON_DOCKER_HUB', value: 'yes'
            }
            steps {
                ...
            }
        }
    }
}
Generally, the build stages execute on a build slave labeled "yona", while the input stage runs on the master on a lightweight (flyweight) executor, so no heavyweight executor is blocked while waiting for the answer.
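If you also want to avoid a build that sits paused forever when nobody answers, you can time-box the prompt with a stage-level timeout option. A minimal sketch of the "decide" stage, assuming you are happy for the build to abort once the timeout expires (the one-day limit is just an example):
stage('Decide tag on Docker Hub') {
    agent none
    options {
        // Abort the waiting input (and this stage) if nobody responds within a day
        timeout(time: 1, unit: 'DAYS')
    }
    steps {
        script {
            env.TAG_ON_DOCKER_HUB = input message: 'User input required',
                parameters: [choice(name: 'Tag on Docker Hub', choices: 'no\nyes', description: 'Choose "yes" if you want to deploy this build')]
        }
    }
}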
Another way to do it is to run the input call inside an expression condition of the when block, combined with beforeAgent true; this drops the separate "decide" stage and avoids touching the env global:
pipeline {
    agent none
    stages {
        stage('Tag on Docker Hub') {
            when {
                expression {
                    input message: 'Tag on Docker Hub?'
                    // if the input is aborted, the whole build will fail; otherwise
                    // we must return true to continue
                    return true
                }
                beforeAgent true
            }
            agent { label 'yona' }
            steps {
                ...
            }
        }
    }
}
I know this thread is old, but I believe a solution to the "Edit 2" issue, besides stashing, is to use nested stages.
https://jenkins.io/blog/2018/07/02/whats-new-declarative-piepline-13x-sequential-stages/#running-multiple-stages-with-the-same-agent-or-environment-or-options
According to this page:
... if you are using multiple agents in your Pipeline, but would like to be sure that stages using the same agent use the same workspace, you can use a parent stage with an agent directive on it, and then all the stages inside its stages directive will run on the same executor, in the same workspace.
Here is the example provided:
pipeline {
    agent none
    stages {
        stage("build and test the project") {
            agent {
                docker "our-build-tools-image"
            }
            stages {
                stage("build") {
                    steps {
                        sh "./build.sh"
                    }
                }
                stage("test") {
                    steps {
                        sh "./test.sh"
                    }
                }
            }
            post {
                success {
                    stash name: "artifacts", includes: "artifacts/**/*"
                }
            }
        }
        stage("deploy the artifacts if a user confirms") {
            input {
                message "Should we deploy the project?"
            }
            agent {
                docker "our-deploy-tools-image"
            }
            steps {
                sh "./deploy.sh"
            }
        }
    }
}
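One thing to watch out for: since the deploy stage runs on a different agent (and therefore a different workspace), it has to retrieve the stashed files before it can use them. A minimal adjustment of the deploy stage's steps, assuming deploy.sh reads from the artifacts/ directory:
steps {
    // Restore the files stashed by the "build and test the project" parent stage
    unstash "artifacts"
    sh "./deploy.sh"
}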