Thank god for password managers.

Ever since LastPass came into my life in 2013, I have not manually created a password since. Well, except for my master password of course.

LastPass captivated me with features such as encrypted passwords synced through the cloud, accessible anywhere and everywhere, and auto-filling credentials on most websites.

Before migrating to LastPass, I was using another password manager, KeePass, which I really liked. What fascinated me in the beginning about KeePass, or password managers in general, was how simple of an idea it is: an encrypted file of my usernames and passwords that I can back up to Google Drive, email, and even a USB drive.

When I found out about Bitwarden, I realized it was the perfect combination of LastPass and KeePass: open-source and self-hostable.

Rather than deploying the official Bitwarden Docker container, I opted to deploy Bitwarden_rs instead. Bitwarden_rs is a lightweight implementation of Bitwarden that is written in Rust and uses SQLite as the backend database. It is available as a Docker container and uses roughly 45 MB of memory.

Check out some sample stats (running on an AWS free tier EC2 instance):

bash# sudo docker stats bitwarden

CONTAINER ID        NAME                CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
c49130c798f8        bitwarden             0.00%               43.14MiB / 990.1MiB   4.36%               75.2MB / 181MB      8.06GB / 44.8MB     16

By default, data created in a Docker container is non-persistent: it is lost when the container is removed, which in my case (running with --rm) happens whenever the container stops. On a VPS, this will most likely occur when I'm testing containers or rebooting the server; aside from that, the Bitwarden container will be running most of the time. So, rather than bind-mounting a directory on my AWS EC2 instance into the Docker container, I created a Docker volume. I won't go into the details of the differences between the two, but the former link lists general use cases of the two.

  • docker volume create bw_data
  • docker volume list
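
If you're curious where Docker actually keeps the volume on the host, docker volume inspect will tell you (the mountpoint below assumes a default Docker setup):

bash# docker volume inspect bw_data
[
    {
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/bw_data/_data",
        "Name": "bw_data",
        ...
    }
]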

Getting Started

Running Bitwarden_rs in the background

  • docker run -d --rm --name bitwarden --mount src=bw_data,dst=/data/ -p 80:80 mprasil/bitwarden:latest

Docker run flags:

  • -d: run the container in detached mode (in the background)
  • --rm: remove the container automatically when it stops running
  • --mount: mount the Docker volume (src) at the directory (dst) inside the container
  • -p: publish the container's port on the host (HOST_PORT:CONTAINER_PORT)

In order to back up Bitwarden data, we need to mount the data volume and copy its contents to our local machine.

Launching a Shell with the Data Volume Mounted and a Local Directory Bound

  • docker run -it --rm --mount src=bw_data,dst=/bw_data/ -v /tmp/bw_data/:/local/ alpine /bin/ash

In this command, we are doing two things:
1) Mounting the Docker data volume bw_data at the mountpoint /bw_data/
2) Mapping /tmp/bw_data/ on the local machine to /local/ in the container

We are doing this in order to tar and gzip the Docker data volume files and copy the archive over to /local/, which can then be accessed from the local machine for archival purposes. To accomplish this, we are using a very lightweight Linux distro called Alpine and using tar to compress our data.

Backing up Bitwarden data

  • docker run --rm --mount src=bw_data,dst=/bw_data/ -v /tmp/bw_data/:/local/ alpine tar -zcf /local/bw-data-backup.tgz /bw_data/

In English: the Docker data volume "bw_data" will be mounted into the container as "/bw_data/". Next, our local machine's directory "/tmp/bw_data/" will be bound to "/local/" in the container. Files created locally in /tmp/bw_data/ will appear in "/local/" in the container and vice-versa. Lastly, we compress the "/bw_data/" directory (the Docker data volume which holds our Bitwarden files) into an archive in "/local/", which can then be accessed locally on our server.
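
To sanity-check the backup afterwards, listing the archive's contents from the server is enough (same /tmp/bw_data/ path as above; you should see entries such as bw_data/db.sqlite3):

bash# tar -tzf /tmp/bw_data/bw-data-backup.tgz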

Now that I've taken care of backing up my Bitwarden data, I can set up secure communication (HTTPS) to my Bitwarden server.

First, I installed Certbot. Certbot allows me to request an SSL certificate from Let's Encrypt, a Certificate Authority. Before I can install Certbot, though, I need to install EPEL (Extra Packages for Enterprise Linux).

cd /tmp
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
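
The remaining install steps look roughly like this (a sketch, assuming a yum-based distro such as the CentOS/Amazon Linux image I was on):

sudo yum install -y ./epel-release-latest-7.noarch.rpm   # enable the EPEL repository
sudo yum install -y certbot                               # certbot is packaged in EPEL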

Generate SSL Certificate from CA Let’s Encrypt

certbot certonly --standalone -d bitwarden.erikxu.com

 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/bitwarden.erikxu.com/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/bitwarden.erikxu.com/privkey.pem
   Your cert will expire on 2019-05-02. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot
   again. To non-interactively renew *all* of your certificates, run
   "certbot renew"

Remember to backup your SSL certs located in /etc/letsencrypt/!
There is a rate limit of 50 Certificates per Registered Domain per week in case you “forget”.
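
A one-liner in the same spirit as the Bitwarden backup above works (the destination path is just an example):

sudo tar -zcf /tmp/letsencrypt-backup.tgz /etc/letsencrypt/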

How to redirect bitwarden.erikxu.com to my Docker Instance Running Bitwarden?
My domain is registered on Namecheap. I configured Namecheap to use Cloudflare's nameservers.
On Cloudflare, I created a CNAME record (bitwarden.erikxu.com) pointing to my EC2 public IP.
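
A quick way to confirm the record resolves (note that if the record is proxied through Cloudflare, you'll see Cloudflare's IPs rather than the EC2 IP):

dig +short bitwarden.erikxu.com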


Update (2019-04-26)

For the life of me, I was not able to renew my SSL certificate.

Every time I executed sudo certbot renew --dry-run or sudo certbot certonly --standalone -d bitwarden.erikxu.com --dry-run, I got a variation of the following error:

$ sudo certbot certonly --standalone -d bitwarden.erikxu.com --dry-run
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Starting new HTTPS connection (1): acme-staging-v02.api.letsencrypt.org
Cert is due for renewal, auto-renewing...
Renewing an existing certificate
Performing the following challenges:
http-01 challenge for bitwarden.erikxu.com
Waiting for verification...
Cleaning up challenges
Failed authorization procedure. bitwarden.erikxu.com (http-01): urn:ietf:params:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from https://bitwarden.erikxu.com/.well-known/acme-challenge/JxPkROXbgo_EZHBnzZltwuTCNDhHOP9pdy1QVMwe3_g [2606:4700:30::6818:61d2]: "<!DOCTYPE html>\n<!--[if lt IE 7]> <html class=\"no-js ie6 oldie\" lang=\"en-US\"> <![endif]-->\n<!--[if IE 7]>    <html class=\"no-js "

IMPORTANT NOTES:
 - The following errors were reported by the server:

   Domain: bitwarden.erikxu.com
   Type:   unauthorized
   Detail: Invalid response from
   https://bitwarden.erikxu.com/.well-known/acme-challenge/JxPkROXbgo_EZHBnzZltwuTCNDhHOP9pdy1QVMwe3_g
   [2606:4700:30::6818:61d2]: "<!DOCTYPE html>\n<!--[if lt IE 7]>
   <html class=\"no-js ie6 oldie\" lang=\"en-US\">
   <![endif]-->\n<!--[if IE 7]>    <html class=\"no-js "

   To fix these errors, please make sure that your domain name was
   entered correctly and the DNS A/AAAA record(s) for that domain
   contain(s) the right IP address.

After screwing around with the A record and CNAME entries on Cloudflare and watching Avengers: Infinity War on the side, I decided to take a shower to clear my mind and because it was getting late. By changing my environment and taking a moment to think about the issue, I realized that I didn't understand exactly how Let's Encrypt generates an SSL certificate. In a nutshell, certbot temporarily spins up a webserver to host a file with some special content (the http-01 challenge). Let's Encrypt's validation servers then query your domain for that file and compare the response against what certbot is hosting. If they match, certbot will generate or renew your certificate. See the official documentation for greater detail.
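
When debugging this kind of failure, it helps to fetch the challenge URL yourself and see what actually comes back, i.e. certbot's token versus someone else's HTML (the token below is a placeholder):

curl -i http://bitwarden.erikxu.com/.well-known/acme-challenge/SOME_TOKEN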

Having understood this, I realized that certbot was successfully hosting the file, but the response to the validation query was not matching up, as we can see above. Browsing around Cloudflare, I realized that it may have been caching requests to bitwarden.erikxu.com, and as a result certbot was failing to complete its "challenges", the queries. I enabled Development Mode ("Temporarily bypass our cache. See changes to your origin server in realtime."), ran certbot again and, voilà, it succeeded.

Along this journey, I re-solidified my knowledge of A records vs. CNAMEs with help from this article. I realized that there was no need to configure a CNAME for bitwarden.erikxu.com and that I can directly configure an A record.


Putting Everything Together

After generating the SSL certs, let’s configure Bitwarden to use it!

sudo docker run -d --name bwarden \
    -e ROCKET_TLS={certs='"/ssl/live/bitwarden.erikxu.com/fullchain.pem",key="/ssl/live/bitwarden.erikxu.com/privkey.pem"'} \
    -e ROCKET_PORT='8000' \
    -e SIGNUPS_ALLOWED=false \
    -e DOMAIN=https://bitwarden.erikxu.com \
    --mount src=bw_data,dst=/data/ \
    -v /etc/letsencrypt/:/ssl/ \
    -v /icon_cache/ \
    -p 443:8000 \
    mprasil/bitwarden:latest

Source: https://blog.codesplice.net/2018/09/bitwarden-self-hosted-on-free-google_9.html?m=1
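
With the container up, a quick sanity check from the server (assuming DNS is already pointing at the box):

sudo docker ps --filter name=bwarden                 # should show the container Up, with 0.0.0.0:443->8000/tcp
curl -sI https://bitwarden.erikxu.com | head -n 1    # expect an HTTP 200 from the web vault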

My original docker run command was exiting for some reason. I had originally mounted /etc/letsencrypt/live/bitwarden.erikxu.com/ to /ssl/, but it turns out that the files in ./live/ are symlinks into /etc/letsencrypt/archive/bitwarden.erikxu.com/, which was inaccessible because only /etc/letsencrypt/live/bitwarden.erikxu.com/ was mounted; everything above it, such as /etc/letsencrypt/archive/ and /etc/letsencrypt/ itself, was invisible to the container. docker logs bwarden was helpful in narrowing down this issue.
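
You can see the symlinks directly on the host; every file in live/ points into archive/, roughly like this (output trimmed, and the numeric suffix grows with each renewal):

bash# ls -l /etc/letsencrypt/live/bitwarden.erikxu.com/
fullchain.pem -> ../../archive/bitwarden.erikxu.com/fullchain1.pem
privkey.pem -> ../../archive/bitwarden.erikxu.com/privkey1.pem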

For future reference: if I have multiple services self-hosted on the same machine, how do I point each custom sub-domain at the right one?
The answer is setting up an Nginx reverse proxy!

For example,

  • A.domain.com:443 -> localhost:A
  • B.domain.com:443 -> localhost:B

Nginx listens for requests to each sub-domain and proxies the traffic to the corresponding localhost port.
Sample Nginx configuration:

server {
    listen 80;
    server_name myapp.domain.com;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
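
If I ever move TLS termination from Rocket into Nginx, the Bitwarden server block would look roughly like this (a sketch; it assumes the container publishes plain HTTP on localhost:8000 and reuses the Let's Encrypt paths from earlier):

server {
    listen 443 ssl;
    server_name bitwarden.erikxu.com;

    ssl_certificate     /etc/letsencrypt/live/bitwarden.erikxu.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/bitwarden.erikxu.com/privkey.pem;

    location / {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}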

Using RClone to back up Bitwarden Data to Google Drive

Rather than mounting Google Drive as a FUSE filesystem, I decided to go with rclone to sync my data. I tried FUSE in the past, but the performance was quite spotty. After installing rclone, I navigated to the Google Cloud Platform API Dashboard to create credentials for rclone.

During the initial rclone config, I set the scope to drive.file and changed the root folder to a test folder because I didn't want to accidentally wipe out my personal files. Next, I navigated to the website to create a "Service account key", downloaded the JSON, and uploaded it to my box. After creating a directory with rclone mkdir TEST, I was unable to see it on Google Drive… I tried rclone -v --drive-impersonate with my email, but received the following error:

Client is unauthorized to retrieve access tokens using this method, or client not authorized for any of the scopes requested.

I decided to change two things at once (not a great troubleshooting practice, btw):
1) switched from the service account to OAuth
2) changed the scope to "drive"
and was then able to see the files in the GUI.
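
The same check works from the CLI, using the remote name that shows up in the backup script further down:

rclone lsd GOOGLE_DRIVE:        # list top-level directories visible to the remote
rclone ls GOOGLE_DRIVE:TEST     # list files under the TEST folder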

So, what is the difference between OAuth and Service Account?

The Google OAuth 2.0 system supports server-to-server interactions such as those between a web application and a Google service. 
For this scenario you need a service account, which is an account that belongs to your application instead of to an individual end user

Still, I wanted to change the scope from drive back to drive.file so that rclone could only modify files created through the app. I checked the GUI again, but the files didn't show up. Now I was puzzled why the files weren't showing up.

Description of drive.file says:

View and manage Google Drive files and folders that you have opened or created with this app

If OAuth2 is making requests on behalf of my user account, I didn't understand why I couldn't view the files in the GUI. I refreshed the page and saw the changes reflected… derp. Navigating to a different folder hadn't picked up the changes made by rclone because navigation is a client-side change.

So, the final configuration that allows me to 1) upload files to a specific directory (by default), 2) see them in the GUI, and 3) modify only files created using the OAuth key is:

1) Set the root_folder_id
2) OAuth2 authentication using application client ID and secret
3) Scope: drive.file

I tried changing the root_folder_id value back to the default, "", to see if I could ls my personal files/folders. I was unable to see anything, which is what I expected!
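
For reference, the resulting remote ends up looking roughly like this in rclone.conf (the client ID, secret, folder ID, and token below are placeholders):

[GOOGLE_DRIVE]
type = drive
client_id = 1234567890-abcdefg.apps.googleusercontent.com
client_secret = YOUR_CLIENT_SECRET
scope = drive.file
root_folder_id = YOUR_TARGET_FOLDER_ID
token = {"access_token":"...","token_type":"Bearer","refresh_token":"...","expiry":"..."}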

At last, let’s automate things!!!
I’m going to create a cronjob that compresses Bitwarden data and another cronjob that backs up to Google Drive.
Trying to figure out cron… what's the difference between /etc/crontab and crontab -e? (TL;DR: use the crontab command.) We don't need to restart the cron service after editing the crontab, if that's what you're wondering (source).

Final results (BW_backup.sh):

#!/bin/bash

# Compress the Bitwarden data volume into /tmp/bw_data/bw-data-backup.tgz
docker run --rm --mount src=bw_data,dst=/bw_data/ -v /tmp/bw_data/:/local/ alpine tar -zcf /local/bw-data-backup.tgz /bw_data/
# Sync the archive up to the Google Drive remote configured in rclone
rclone sync /tmp/bw_data/bw-data-backup.tgz GOOGLE_DRIVE:/

Let's edit our crontab to run the shell script every hour. The rate limit for the Google Drive API is 1,000,000,000 queries per day, so we are good. Note: don't forget to give your script executable permissions!
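
In other words, something along these lines (using the script path from the crontab entry below):

chmod +x /home/ec2-user/backup-bw.sh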

bash# crontab -e

0 * * * * /home/ec2-user/backup-bw.sh > /dev/null 2>&1

Standard output is redirected to /dev/null, and standard error (2) is redirected to the same place as standard output (1).

Notice that we redirected standard output and error to /dev/null. If we don't, cron will mail the job's output to us, and we'll see this message in the terminal every time we log in: "You have new mail in /var/spool/mail/ec2-user".

The end!


PS. Need to re-visit restoring Bitwarden data on a new machine.

Update (2019-12-09)

My AWS free tier was approaching expiration, and I had to look for an alternative VPS provider.
I stumbled upon UpCloud months ago after researching the differences between DigitalOcean, Vultr, and Linode.
The pricing is very similar across the four providers, but UpCloud claims to be faster.
Anyway, I spun up an instance using their free trial and began the preparation work for migrating my services over.

Preparations included:
1) Installing Rclone
2) Copying over my Rclone configuration (rclone config dump; see the sketch after this list)
3) Installing Certbot
4) Copying over my existing LetsEncrypt certificates (sudo rsync -rl /etc/letsencrypt/ DST:/etc/letsencrypt/)
5) Downloading my data volume backup using Rclone (rclone copy REMOTE_DIR:/bw-data-backup.tgz /tmp/ && tar -xf /tmp/bw-data-backup.tgz -C /tmp/)
6) Extracting my backup into the Docker data volume (docker run -it --rm --mount src=bw_data,dst=/bw_data/ -v /tmp/bw_data/:/local/ alpine /bin/sh -c 'cp -R /local/* /bw_data/')
7) Re-creating my backup script (BW_backup.sh)
8) Setting an hourly cron job to execute the backup script
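
For step 2, rather than re-typing the output of rclone config dump, copying the config file itself also works (a sketch; paths assume rclone's default config location and NEW_SERVER is a placeholder):

rclone config file                           # prints the config path, e.g. /home/ec2-user/.config/rclone/rclone.conf
ssh NEW_SERVER 'mkdir -p ~/.config/rclone'
scp ~/.config/rclone/rclone.conf NEW_SERVER:~/.config/rclone/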

Afterward, I created a new Docker instance on the server to validate the existence of my credentials.
Indeed, they were all there. Lastly, I updated my Cloudflare A record to point to my new server’s IP address.