Deploying Dradis Pro to a Linode VPS

This guide covers how to deploy your Dradis Professional VM into a Linode virtual private server (VPS).

The officially supported platforms for running Dradis Professional are VirtualBox and VMware as described in the FAQ. If you deploy your Dradis Pro appliance on Linode (or any other unsupported platform), we may not be able to help you.

The Dradis Pro appliance was never intended to be exposed to the internet, but to be deployed in a secure internal network. Make sure you apply firewalling, access controls and general hardening before exposing your appliance.
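
For example, if the appliance only needs to be reachable over SSH and the web interface from a known network, a minimal iptables policy along these lines is a reasonable starting point. Note that 203.0.113.0/24 is just a placeholder for your own trusted range, port 443 assumes the web interface is served over HTTPS, and you’ll want to persist the rules (e.g. with the iptables-persistent package):


# Allow loopback and established traffic, then SSH and HTTPS from the
# trusted range only (203.0.113.0/24 is a placeholder), and drop the rest.
root@dradispro:~# iptables -A INPUT -i lo -j ACCEPT
root@dradispro:~# iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
root@dradispro:~# iptables -A INPUT -p tcp --dport 22 -s 203.0.113.0/24 -j ACCEPT
root@dradispro:~# iptables -A INPUT -p tcp --dport 443 -s 203.0.113.0/24 -j ACCEPT
root@dradispro:~# iptables -P INPUT DROP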

1 A quick summary of operations

In order to provide data-at-rest security we’re going to create encrypted volumes in our VPS and then copy across (via rsync) the contents of the original Dradis Pro appliance onto the encrypted root volume.

So here is a breakdown of the tasks:

  • Launch a new VPS (large enough to support our appliance)
  • Attach disk images for boot, root and swap partitions
  • Boot in rescue mode and encrypt those images
  • Install a minimal OS and configure boot sequence
  • Boot in normal mode
  • Copy our data across
  • Verify we’re up and running

A great reference on the encryption / LUKS side of things is Linode’s own guide to disk encryption, which the next few sections follow closely.

2 Add a new Linode to your account

We’re going for the Linode 4096 plan to be on the safe side.

We’re going to attach three disk images. Do not select a file system for any of them, just leave them as raw:

  • Boot (256 MB, raw)
  • Root (96000 MB, raw)
  • Swap (2048 MB, raw)

3 Boot in Rescue Mode

This is the 4th tab in the Linode application (i.e. “Rescue”).

Map the Boot drive to /dev/xvda, Root to /dev/xvdb and Swap to /dev/xvdc.

Now you can click on Reboot in Rescue Mode.

In order to access the VM in this mode we’re going to use Lish, found in the second tab of the interface (i.e. “Remote Access”).

4 Remote Access via Lish

You can skip this section if this is not your first Linode VPS and you are familiar with Lish.

Linode provides a way to access your server’s console for management called Lish. You’ll need to add your SSH public key to your Linode user profile, under the Lish settings.
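
If you don’t have an SSH key pair on your laptop yet, you can generate one and paste the public half into that settings page (the email is just a comment/placeholder, use whatever you like):


# Generate a 4096-bit RSA key pair (the email is only a label)
$ ssh-keygen -t rsa -b 4096 -C "you@example.com"
# This is the public half you paste into the Lish settings
$ cat ~/.ssh/id_rsa.pub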

After that, you can go back to the “Remote Access” tab of your Linode to find the command line you need to run from your laptop to access the newly created server. Note that this command will depend on the data center your VPS is hosted in. For me it was:


$ ssh -t user@lish-newark.linode.com dradis_cloud

Find your command in the Lish via SSH section, at the bottom of the “Remote Access” page.

5 Configure LUKS volumes and format disk images

Connect to your VPS via Lish using a terminal window. The server is running in Rescue Mode, so you should see the rescue-mode prompt (root@hvc0 in the examples below).

The next few steps mimic exactly what you can find in Linode’s own guide to disk encryption, but for completeness, here we go.

First encrypt the Root image:


root@hvc0:~# cryptsetup luksFormat /dev/xvdb

WARNING!
========
This will overwrite data on /dev/xvdb irrevocably.

Are you sure? (Type uppercase yes): YES
Enter passphrase:
Verify passphrase:

It is imperative that you don’t forget this passphrase, or you won’t be able to access your data again. I’d recommend you use XKCD’s pass-phrase generation algorithm.
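
As a rough sketch of that idea, on a machine that has a word list installed (for instance /usr/share/dict/words on your laptop) you can pull a few random words and join them:


# Four random dictionary words make a long, memorable pass-phrase
$ shuf -n 4 /usr/share/dict/words | tr '\n' ' '; echo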

Open this encrypted device for access by entering the following command:


root@hvc0:~# cryptsetup luksOpen /dev/xvdb crypt-xvdb

Now we can format it using ext4 with:


root@hvc0:~# mkfs.ext4 /dev/mapper/crypt-xvdb
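
If you want to double-check the result before moving on, you can dump the LUKS header (this makes no changes); you should see the cipher details and a single key slot in use:


# Read-only inspection of the LUKS header on the Root image
root@hvc0:~# cryptsetup luksDump /dev/xvdb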

Next we move on to the encrypted swap partition. Because we don’t care about the data in this partition, we’ll use /dev/urandom as the key for the encrypted volume:


root@hvc0:~# cryptsetup -d /dev/urandom create crypt-swap /dev/xvdc

We can activate the swap space now:


root@hvc0:~# mkswap /dev/mapper/crypt-swap
mkswap: /dev/mapper/crypt-swap: warning: don't erase bootbits sectors
        on whole disk. Use -f to force.
Setting up swapspace version 1, size = 2097148 KiB
no label, UUID=1092a9dc-dce6-4517-a3f8-1663e57e7253
root@hvc0:~# swapon /dev/mapper/crypt-swap
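
To confirm the encrypted swap is active you can list the swap areas; crypt-swap should show up as a device of roughly 2 GB:


# crypt-swap should appear as an active swap device
root@hvc0:~# swapon -s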

Finally, the /boot partition. This one will be left unencrypted and formatted using ext2:


root@hvc0:~# mkfs.ext2 /dev/xvda

That’s it, now we have our three disk images ready to use. Next step, providing a minimal OS to boot the server.
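
As a final sanity check (assuming lsblk is available in the rescue environment; blkid works as an alternative), the block device listing should show /dev/xvda formatted as ext2, the crypt-xvdb mapping on top of /dev/xvdb, and crypt-swap on top of /dev/xvdc:


# Overview of block devices, filesystems and device-mapper entries
root@hvc0:~# lsblk -f
root@hvc0:~# blkid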

6 Bootstrapping the server OS

We’re going to mount the disk images under the newsystem/ mount point and bootstrap a Debian Wheezy operating system on them.

First we mount the root partition and use debootstrap to install the bare minimums:


root@hvc0:~# mkdir newsystem
root@hvc0:~# mount /dev/mapper/crypt-xvdb newsystem
root@hvc0:~# debootstrap --arch amd64 --include=openssh-server,vim,cryptsetup wheezy newsystem/
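
debootstrap will take a few minutes to download and unpack the base system. Once it finishes, newsystem/ should contain the usual Debian top-level directories (bin, etc, lib, usr, var and so on):


# Quick check that the bootstrap produced a Debian root layout
root@hvc0:~# ls newsystem/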

Now we mount /boot and the other required pseudo-filesystems and chroot into the new environment:


root@hvc0:~# mount /dev/xvda newsystem/boot/
root@hvc0:~# mount -o bind /dev newsystem/dev
root@hvc0:~# mount -o bind /dev/pts/ newsystem/dev/pts
root@hvc0:~# mount -t proc /proc/ newsystem/proc/
root@hvc0:~# mount -t sysfs /sys newsystem/sys/
root@hvc0:~# chroot newsystem/ /bin/bash

7 Configuring the new server

7.1 Hostname and root password


root@finnix:/# echo dradispro > /etc/hostname
root@finnix:/# hostname -F /etc/hostname

root@finnix:/# passwd
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully

7.2 /etc/crypttab and /etc/fstab

The /etc/crypttab file maps encrypted volumes to their keys and to the resulting unencrypted device names. Here we have two entries:

  • The encrypted /dev/xvdb volume doesn’t have a key (so the pass-phrase will be required at boot time), and will be opened as /dev/mapper/crypt-xvdb.
  • The encrypted swap (in /dev/xvdc) will use a random key that is different on each boot; that’s fine because we don’t care about the old data on it.
/etc/crypttab

# [target name] [source device]         [key file]      [options]
crypt-xvdb              /dev/xvdb       none            luks
crypt-swap              /dev/xvdc       /dev/urandom    swap

In /etc/fstab we map the unencrypted devices to mountpoints in the file system:

/etc/fstab

/dev/xvda       /boot   ext2    defaults        0 2
/dev/mapper/crypt-xvdb  /       ext4    noatime,errors=remount-ro       0 1
/dev/mapper/crypt-swap  none    swap    sw      0 0
proc            /proc   proc    defaults        0 0

One final operation on the filesystem front is to make sure our /etc/mtab is up to date with:


root@finnix:/# cat /proc/mounts > /etc/mtab

7.3 /etc/inittab

Open /etc/inittab and locate this line:

/etc/inittab

1:2345:respawn:/sbin/getty 38400 tty1

Replace it with:


1:2345:respawn:/sbin/getty 38400 hvc0

7.4 Grub and kernel

Next install Grub 1 (a.k.a. Grub legacy) and the latest kernel image:


root@finnix:/# apt-get install grub-legacy linux-image-amd64

We need to prepare the /boot file system for Grub use:


root@finnix:/# mkdir /boot/grub
root@finnix:/# update-grub

Now we need to make a few tweaks to /boot/grub/menu.lst.

First, bump the timeout setting from 5 to 10:

/boot/grub/menu.lst

timeout         10

Next, the kopt setting. Since we’re using an encrypted volume for our root partition, we need to let the kernel know. Note that you should keep the leading hash symbol (#):

/boot/grub/menu.lst

# kopt=root=/dev/mapper/crypt-xvdb cryptdevice=/dev/xvdb:crypt-xvdb console=hvc0 ro

Finally, make sure that the groot setting matches the following line:

/boot/grub/menu.lst

# groot=(hd0)

Save the changes and run:


root@finnix:/# update-grub
root@finnix:/# update-initramfs -u
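
It doesn’t hurt to confirm at this point that menu.lst picked up both the installed kernel and your kopt changes (the exact paths and kernel versions will vary):


# List the boot entries Grub generated from our settings
root@finnix:/# grep -E "^(title|kernel|initrd)" /boot/grub/menu.lst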

You’ll need to make some changes to the structure in /boot due to the way Linode’s pvgrub expects to see your boot disk:


root@finnix:~# cd /boot/
root@finnix:/boot# mkdir boot
root@finnix:/boot# mv grub boot/
root@finnix:/boot# ln -nfs boot/grub/ grub
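
The end result is that the boot partition now contains boot/grub/menu.lst, which is where pvgrub will look for it, plus a grub symlink for compatibility. A quick check:


# grub should now be a symlink pointing at boot/grub/
root@finnix:/boot# ls -la /boot/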

7.5 Exit and shut down

Let’s exit the chroot environment and unmount all the filesystems:


root@finnix:~# exit
root@hvc0:~# umount -l newsystem/

And we can shut down the server:


root@hvc0:~# shutdown -h now

8 Create a new Configuration Profile

Now that our disk images are ready, we need to create the final configuration profile.

Up until this point we have only run our server in “Rescue Mode”, but now our disk images contain a minimal Debian OS that we can use to boot the instance in normal mode.

Go back to the first tab in the Linode app (i.e. “Dashboard”), and at the very top, in the “Dashboard” section find the link to Create a new Configuration Profile.

  • Choose a name for this profile, for instance: Dradis Professional Edition
  • Make sure that you select the pv-grub-x86_64 setting from the Kernel dropdown list.

Then make sure the disk images are mapped as they should be:

  • /dev/xvda: Boot
  • /dev/xvdb: Root
  • /dev/xvdc: Swap

Down at the bottom, in the “Filesystem/Boot Helpers” section, make sure to deselect “Xenify Distro” and “Automount devtmpfs”.

Save the changes and boot the server. Don’t close the terminal window.

If all goes according to plan, you should be prompted for your LUKS passphrase, and the system should start just fine.

Once you verify the system is bootable and that you can access it via the Lish console, we’re good to go. The only thing that’s left is to copy the Dradis Pro data across.

9 Copying the Dradis Pro data across

9.1 Restarting in Rescue Mode

Because we’re going to overwrite some critical files in the system, we need to shut down the server and start in rescue mode.

If the system is still running, shut it down. Go to the “Rescue” tab and Reboot into Rescue Mode.

9.2 Mount the root partition

The next thing we need to do is mount the root partition so we can copy data across.

First, unlock the LUKS volume:


root@hvc0:~# cryptsetup luksOpen /dev/xvdb crypt-xvdb
Enter passphrase for /dev/xvdb:

Then, mount the opened volume at a temporary location:


root@hvc0:~# mount /dev/mapper/crypt-xvdb /mnt/

And we’re good to go.

9.3 SSH access

Because we’re going to SSH into the rescue-mode server, we need to do some minimal configuration: a) set a temporary root password and b) enable SSH access.


root@hvc0:~# passwd
root@hvc0:~# /etc/init.d/ssh start
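
Before leaving the console you can confirm that sshd is actually listening (netstat or ss, whichever is available in the rescue image):


# sshd should appear listening on port 22
root@hvc0:~# netstat -tlnp | grep ssh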

Now, make a note of the server’s IP address:


root@hvc0:~# /sbin/ifconfig eth0

For this example we’ll use 66.11.22.33.

9.4 Use rsync to send the data

We’re going to copy across the entire file system from your local instance of Dradis Pro into the cloud server.

Make sure your local instance is running, and open an SSH connection to it (as root).

Now for the moment of truth: rsync from the Dradis Pro console to the remote server:


root@dradispro:~# cd /
root@dradispro:/# rsync -avh --progress --exclude={/boot/*,/etc/fstab,/etc/crypttab,/etc/inittab,/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/mnt/*,/media/*,/lost+found} . root@66.11.22.33:/mnt/

Note that we’re excluding the /boot directory, /etc/fstab, /etc/crypttab, and /etc/inittab. This is because the server layout (kernel, disk images, LUKS and LVM configuration, etc.) is significantly different in your local appliance from what it will be in your cloud instance.

This operation will take a few minutes, so maybe you want to grab a cup of tea now :)
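
If you want extra reassurance once it finishes, you can repeat the same command with rsync’s -n (--dry-run) flag; it should report little or nothing left to transfer, which is a cheap way to confirm the copy completed:


# -n makes this a dry run: nothing is actually copied
root@dradispro:/# rsync -avhn --progress --exclude={/boot/*,/etc/fstab,/etc/crypttab,/etc/inittab,/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/mnt/*,/media/*,/lost+found} . root@66.11.22.33:/mnt/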

9.5 Unmount the partition and reboot the system

After the rsync operation is complete, you can go back to your cloud server and unmount the /mnt partition:


root@hvc0:~# ls /mnt/opt/
dradispro  rbenv
root@hvc0:~# umount /mnt/

If you see a dradispro directory under /mnt/opt/ (as above), it looks like things worked out as they should.

You can now shut down the server:


root@hvc0:~# shutdown -h now

10 Verify you’re up and running

Boot up using your normal Configuration Profile, and the Lish console should present you with the familiar Dradis Professional banner.
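
Once you can log in, a couple of quick checks should confirm the encrypted layout is in place: the root filesystem should live on /dev/mapper/crypt-xvdb and crypt-swap should be the active swap device:


# Root filesystem should be on the encrypted mapping
root@dradispro:~# df -h /
# Swap should be the crypt-swap device
root@dradispro:~# swapon -s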

Download resources

Our users can download the resources used in this guide here.