Multiple commands during an SSH inside an SSH session
Consider that && is a logical operator. It does not mean "also run this command"; it means "run this command only if the previous one succeeded".
That means if the rm command fails (rm -rf itself exits successfully even when the directories don't exist, but it can still fail, for example on a permissions error), the mkdir won't be executed. This does not sound like the behaviour you want; if the directories don't exist, it's probably fine to create them.
Use ;
The semicolon ; is used to separate commands. The commands run sequentially, each waiting for the previous one to finish, but their success or failure has no effect on one another.
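A quick way to see the difference (a minimal illustration; false here is just a stand-in for any failing command):

```shell
# && short-circuits: the second command runs only if the first succeeds
false && echo "not printed"

# ; separates commands unconditionally: the second runs regardless
false ; echo "printed anyway"
```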
Escape inner quotes
Quotes inside other quotes must be escaped; otherwise the inner quote ends the first quoted string and starts a new one. Your command:
ssh -n $username@$masterHostname "ssh -t -t $username@$line "rm -rf Input Output Partition && mkdir -p EC2_WORKSPACE/$project Input Output Partition""
Becomes:
ssh -n $username@$masterHostname "ssh -t -t $username@$line \"rm -rf Input Output Partition && mkdir -p EC2_WORKSPACE/$project Input Output Partition\""
Because of the unescaped quotes, your current command is actually executing:
ssh -n $username@$masterHostname "ssh -t -t $username@$line "rm -rf Input Output Partition
if that succeeds:
mkdir -p EC2_WORKSPACE/$project Input Output Partition"" # runs on your local machine
You'll notice the syntax highlighting here shows the entire command in red, which means the whole command is the string being passed to ssh. Check your local machine; you may find the directories Input, Output and Partition wherever you were running this.
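You can watch how the shell pairs up the quotes by echoing the command instead of running it (user and host below are made-up placeholder values):

```shell
user=alice; host=node1

# Unescaped: the inner quotes close and reopen the outer string,
# so hostname ends up outside any quotes
echo "ssh $user@$host "hostname""
# → ssh alice@node1 hostname

# Escaped: the inner quotes survive as part of the string
echo "ssh $user@$host \"hostname\""
# → ssh alice@node1 "hostname"
```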
You can always set up multiplexing on your jumpbox.
Multiplexing in OpenSSH
Multiplexing is the ability to send more than one signal over a single line or connection. With multiplexing, OpenSSH can re-use an existing TCP connection for multiple concurrent SSH sessions rather than creating a new one each time.
An advantage with SSH multiplexing is that the overhead of creating new TCP connections is eliminated. The overall number of connections that a machine may accept is a finite resource and the limit is more noticeable on some machines than on others, and varies greatly depending on both load and usage. There is also significant delay when opening a new connection. Activities that repeatedly open new connections can be significantly sped up using multiplexing.
To enable it, put the following in /etc/ssh/ssh_config:
ControlMaster auto
ControlPath ~/.ssh/controlmasters/ssh_mux_%h_%p_%r
ControlPersist 30m
This way, any subsequent connection to the same server within the following 30 minutes will reuse the existing ssh connection.
You can also define it for a single machine or a group of machines, as in this example taken from the link provided:
Host machine1
HostName machine1.example.org
ControlPath ~/.ssh/controlmasters/%r@%h:%p
ControlMaster auto
ControlPersist 10m
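One practical note: OpenSSH will not create the ControlPath directory for you, so it must exist (with restrictive permissions) before the first connection, or the master socket cannot be created:

```shell
# The ControlPath above points into ~/.ssh/controlmasters,
# which has to exist before ssh can bind the socket there
mkdir -p ~/.ssh/controlmasters
chmod 700 ~/.ssh/controlmasters

# Once a master connection is up, you can manage it with:
#   ssh -O check machine1   # is a master running for this host?
#   ssh -O exit machine1    # close the shared connection
```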
You can put all your commands into a separate script on your "master" server.
Master Script
#!/bin/bash
rm -rf Input Output Partition
mkdir -p "EC2_WORKSPACE/$project" Input Output Partition
Then call it from your ssh script like this:
SSH Script
username="ubuntu"
masterHostname="myMaster"
while read line
do
ssh -n $username@$masterHostname "ssh -t -t $username@$line < /path/to/masterscript.sh"
ssh -n $username@$masterHostname "ssh -t -t $username@$line \"rsync --delete -avzh /EC2_NFS/$project/* EC2_WORKSPACE/$project\""
done < slaves.txt
OR if all files must be on the initial machine you could do something like this:
script1
script2="/path/to/script2"
username="ubuntu"
while read line; do
cat $script2 | ssh -t -t $username@$line
done < slaves.txt
script2
#!/bin/bash
rm -rf Input Output Partition
mkdir -p "EC2_WORKSPACE/$project" Input Output Partition
rsync --delete -avzh /EC2_NFS/$project/* EC2_WORKSPACE/$project
ssh script
script1="/path/to/script1"
username="ubuntu"
masterHostname="myMaster"
cat $script1 | ssh -n $username@$masterHostname