This tutorial was created using information from the following sites:
Connecting SSH Without Using A Password
How To Automount SSHFS Filesystems Automatically at Startup
Background
There are two reasons why I did this. The first is that I constantly move files back and forth between systems using the command line, and SSHFS is the easiest way to do it. The second is that I am connecting my XBMCs together using MySQL for the database, but thumbnail files need to stay in the .xbmc/userdata folder on each machine. This also sets up easy rsync transfers without having to establish an SSH connection by hand each time.
SSH Passwordless Login
This could be a whole tutorial by itself, but it's short enough to include here. Exchanging public keys between systems lets you connect without typing a password each time you use the same username. It does require some command line work, but nothing difficult.
- Make sure you have SSH installed on both systems. If you don't, type this in the command line:
sudo apt-get install ssh
- Generate the public and private keys for your computer
ssh-keygen -t rsa
This command creates two files in your ~/.ssh/ directory, id_rsa and id_rsa.pub. id_rsa.pub is your public key, and you can give this to anyone you trust. id_rsa is your private key, which is not for sharing. At all. Ever.
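If you want to double-check what was generated (assuming you accepted the default paths ssh-keygen offered), you can list the files and print the public key's fingerprint:
ls -l ~/.ssh/id_rsa ~/.ssh/id_rsa.pub  # private key and public key
ssh-keygen -lf ~/.ssh/id_rsa.pub       # show the public key's fingerprint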
- Copy your public key to the remote computer you want to connect to. I used FileZilla because I had it open, but you can use SCP if you want:
scp ~/.ssh/id_rsa.pub [username]@[remotecomputer]:~/.ssh/id_rsa_[localcomputer].pub
Obviously replace [username], [remotecomputer], and [localcomputer] with the appropriate names.
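As an aside, OpenSSH also ships a helper called ssh-copy-id that copies your key and appends it to the remote authorized_keys file in one step; if you use it, you can skip the manual append in the next step:
ssh-copy-id [username]@[remotecomputer]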
- Log into the remote host (this is the last time you will have to use your password!) and append your public key to authorized_keys (older guides call this file authorized_keys2, but current versions of OpenSSH read authorized_keys):
cd ~/.ssh && cat id_rsa_[localcomputer].pub >> authorized_keys && rm id_rsa_[localcomputer].pub
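One thing that can trip this up: OpenSSH will silently ignore your key if the permissions on the remote side are too loose. While you're still logged in, it's worth locking down the directory and file:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys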
That's it! Do the same from the remote computer side if you want to have a two-way passwordless authentication between computers.
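To confirm the key is actually being used, you can force SSH to refuse password authentication for one test connection; if this logs you in, the key exchange is working:
ssh -o PasswordAuthentication=no [username]@[remotecomputer]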
Automounting SSHFS
For most Linux systems, you mount shares using fstab. This is a tried and true way of mounting local and remote filesystems; I've done it a couple of times with Samba, and very often with local drives. The problem with this method is that it won't wait until the network is up and running, so your SSHFS mounts will fail at boot. The easy fix is to delay the mount until after the network comes up, and the script below does just that.
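For reference, an fstab-based SSHFS mount (the approach this script replaces) would be a single line in /etc/fstab along these lines; the paths and options here are illustrative:
[username]@[remotecomputer]:/path/on/remote/computer /path/to/local/folder fuse.sshfs defaults,IdentityFile=/home/user/.ssh/id_rsa 0 0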
- Create a new shell script
sudo nano connectsshfs.sh
- In the shell script, paste the following code:
#!/bin/sh
ip=XXX.XXX.XXX.XXX           # Replace with your IP address or DNS name
user=user                    # Replace with your username on the remote computer
dir=/path/on/remote/computer # Replace with the directory on the remote computer

# Wait for the remote host to answer a ping, giving up after 120 tries
# (a little over ten minutes at five seconds per try)
r=1
c=0
until [ "$r" -eq 0 ]
do
    if [ "$c" -gt 120 ]
    then
        exit 1
    fi
    ping -c 2 "$ip" > /dev/null 2>&1
    r=$?
    c=$((c + 1))
    sleep 5
done

# The network is up, so mount the remote directory
sshfs "$user@$ip:$dir" /path/to/local/folder # Replace with your local folder path
- Change the information in the script to match your network setup
- Save the script (in nano, press Ctrl+X, then Y to confirm, then Enter)
- Set the script as executable
sudo chmod a+x connectsshfs.sh
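Before wiring it into your startup programs, you can sanity-check the script by running it by hand and confirming the mount shows up (the grep is just a quick way to spot it):
./connectsshfs.sh
mount | grep sshfs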
- Move the script to /usr/bin
sudo mv connectsshfs.sh /usr/bin/
- Back in the GUI, click on Menu>Preferences>Startup Programs (this is for Linux Mint, for Ubuntu go to System>Preferences>Sessions)
- Click the add button and fill in any name for the title, and /usr/bin/connectsshfs.sh as the program to run
- Click Close
Your SSHFS drives are now mounted at startup, and because of the public key authentication between computers you will not be prompted for a password. Simple, huh?
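If you ever need to detach one of these mounts without rebooting, SSHFS mounts are unmounted through FUSE:
fusermount -u /path/to/local/folder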