Category Archives: Linux

Modify the name of the Desktop and Downloads folders on Ubuntu

To change the names of your Desktop and Downloads folders, you just need to edit ~/.config/user-dirs.dirs:

XDG_DESKTOP_DIR="$HOME/desktop"
XDG_DOWNLOAD_DIR="$HOME/downloads"
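
The new paths also need to exist. A minimal sketch, assuming you want to rename the existing folders and keep their contents (this step is my assumption, not part of the original note):

# Linux filesystems are case-sensitive, so the lowercase names are brand new paths
mv ~/Desktop ~/desktop
mv ~/Downloads ~/downloads

The change is usually picked up the next time you log in.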

Disable SSLv3 in HAProxy

I just learned that my load balancer is vulnerable to the POODLE attack because it still supports SSLv3. The recommended solution is to disable SSLv3.

I explained my HAProxy setup in a previous post, and also how I do SSL termination.

The section from my configuration I care about is:

frontend https-in
        bind *:443 ssl crt /certs/ncona.pem

        acl ncona-web-frontend hdr(host) -i ncona.com www.ncona.com

        use_backend ncona-web if ncona-web-frontend

This mode is called SSL offloading in HAProxy terms. Fixing it is as simple as adding a keyword (no-sslv3):

        bind *:443 ssl crt /certs/ncona.pem no-sslv3
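
To confirm the fix, you can try to force an SSLv3-only handshake from another machine; a quick check along these lines (the host name is just an example):

# After the change, HAProxy should reject this handshake
openssl s_client -connect ncona.com:443 -ssl3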

Getting Rails to run in an Alpine container

I’m trying to get a little Rails application ready for production, and just for fun I’m trying to make the image a little slimmer than it currently is (860 MB).

There is an official Docker image of Ruby on Alpine (ruby:alpine), but because of the libraries that Rails uses (with native bindings), it is a little more challenging than just referencing that image at the top of the Dockerfile.

Solving the issues

I added FROM ruby:2.4.1-alpine at the top of my Dockerfile and tried to build the image. The first problem I faced was with mysql2. For the mysql2 gem to work on Alpine it is necessary to have a compiler (build-base) and the MySQL development libraries (mariadb-dev). I added this to my Dockerfile:
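
Roughly something like this (a sketch; only build-base, mariadb-dev and the base image come from this post, the rest is an assumption):

FROM ruby:2.4.1-alpine

# build-base brings in a C compiler for gems with native extensions;
# mariadb-dev provides the MySQL client headers the mysql2 gem needs
RUN apk add --no-cache build-base mariadb-dev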

Read more »

Update let’s encrypt certificate without restarting your server

I started using HTTPS on my blog a few months ago, and today came the time to renew my certificate. I thought I had automated the process correctly, but it turns out that for my configuration I have to take some extra steps.

In my previous post I suggested using this command:

21 7,19 * * * /home/user/certbot-auto renew --quiet --no-self-upgrade

But that command tries to spin up a server on port 80, and I’m already using port 80 for my blog, so the server fails to start.

There is another approach that allows you to renew your certificate without having to free port 80. It works by writing a file to a folder in your webroot and having Let’s Encrypt’s servers read that file. This sounds pretty straightforward, but it was actually a little tricky for me, since I’m using Docker.

My blog runs WordPress inside a Docker container. Inside the container the webroot is /var/www/html and this folder contains all the WordPress files. I can’t write directly to this folder because it is inside the container, so I had to use a volume. I also can’t mount the whole /var/www/html folder because there are already files in that location inside the container. To make it work I had to mount to /var/www/html/.well-known, which is the folder certbot-auto creates.
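
A sketch of how the pieces might fit together (the host paths and container name are placeholders, not my exact setup):

# Mount a host folder over .well-known inside the WordPress container
docker run -d --name blog \
  -v /home/user/webroot/.well-known:/var/www/html/.well-known \
  -p 80:80 wordpress

# Have certbot-auto drop its challenge files into that same host folder
/home/user/certbot-auto renew --quiet --no-self-upgrade \
  --webroot -w /home/user/webroot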

Read more »

Securing your network with iptables

There comes a time in every system administrator’s life when they need to start being a little more conscious about security. That time has finally come for me.

I have a couple of servers in DigitalOcean where I run various sites and services. Some of these need to communicate with each other to do their job, for example, this blog runs in a server with Apache and PHP and communicates with another server that is running a MySQL database.

This is all good, but one of the most important rules of security is to only allow access to resources on a per-need basis. What this means is that, from a security standpoint, nobody should be able to access a resource unless explicitly allowed. This rule applies to almost all scenarios that require some kind of access control, and it is a good idea to follow it whenever possible.
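
As an illustration of this idea with iptables, the database server could accept MySQL connections only from the web server (the IP address below is a placeholder):

# Accept MySQL traffic from the web server only, drop everything else on that port
iptables -A INPUT -p tcp --dport 3306 -s 10.0.0.5 -j ACCEPT
iptables -A INPUT -p tcp --dport 3306 -j DROP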

Read more »

Simple strategy for MySQL backups

I now have a good amount of data in my blog that I would be very sad to lose. As a precautionary measure I decided to build a little system that backs up my data regularly so I’m prepared in case of a disaster.

The strategy

The strategy is going to be very simple. I’m going to create a user in my database that has read permissions on the tables I want to back up. This user will run mysqldump from a different machine and the backups will be saved there. I will create a cron job that does this once a day.
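
In rough strokes, the pieces would look something like this (database name, host names, password and schedule are all placeholders):

# On the database server: create a user with read-only access to the blog tables
mysql -u root -p -e "GRANT SELECT, LOCK TABLES ON blog.* TO 'backup'@'backup-host' IDENTIFIED BY 'secret';"

# On the backup machine: a daily crontab entry that dumps the database to a dated file
0 3 * * * mysqldump -h db-host -u backup -psecret blog > /backups/blog-$(date +\%F).sql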

Read more »

Free centralized log management with Loggly

I’m looking for a centralized log management system that I can plug into some of my hobby projects. While I was about to spin up my own ELK server (Elasticsearch, Logstash, Kibana), I found that Loggly has a free tier. I have used Loggly before and it is pretty good, so I decided to give it a try.

Before we start setting things up in Loggly, we need to decide which logs we want to send. Here are a few that apply to me:
– Apache logs for ncona.com (running inside a Docker container)
– MySQL logs (running inside a DigitalOcean droplet)
– Cron logs (also inside a DigitalOcean droplet)

Before we start configuring our system we need to create a Loggly account.

Read more »

Host your Docker images for free with canister.io

I’m slowly increasing the number of projects I host on my personal servers, and as the number grows I find the need to standardize the way I deploy each service. Currently each service has a different way of running, and I have to try to remember how to do it each time I have an update. As one of the steps toward a more streamlined deploy process, I decided that each service should have a production-ready image hosted in a Docker registry. Deploying will then just be a matter of downloading and running the image on the production machine (not perfect, but a step forward).

My first idea was to host a Docker registry myself, but luckily I found a service that offers 20 private repositories for free. To start using canister.io, you just need to register for the basic plan and create a new repo.

To push images you can use the command line. Start by logging in:

docker login --username=username cloud.canister.io:5000
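
After logging in, pushing an image is the usual tag-and-push sequence (the image and repository names below are placeholders):

# Tag a local image with the canister.io registry, your username and the repo name
docker tag my-app:latest cloud.canister.io:5000/username/my-app:latest

# Push it to the private repository
docker push cloud.canister.io:5000/username/my-app:latest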

Read more »

SSL termination with HA-Proxy

SSL termination refers to the process of terminating the encrypted connection at the load balancer and handling all internal traffic in an unencrypted way. This means that traffic between your load balancer and internal services (or between internal services) will not be encrypted, so you should make sure your network is secure. If you have your own data center you can trust your network; otherwise, you should set up a VPN so traffic can’t be sniffed.

Terminating SSL at the load balancer has a few advantages:

  • Single place responsible for managing encryption and decryption of traffic
  • Centralized place to store certificates
  • The load balancer can analyze the traffic and take special actions based on it
  • The load balancer can modify the request and response if necessary

A somewhat common scenario where you want the load balancer to modify the request is adding headers to HTTP requests. More specifically, it is common to have the load balancer add an X-Forwarded-For header, which includes the IP address where the request originated. Without this header, all requests would look like they originated at the load balancer.
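
In HAProxy this only takes one directive; a sketch reusing the frontend shown earlier on this page (option forwardfor is standard HAProxy configuration, though not necessarily how the full post does it):

frontend https-in
        bind *:443 ssl crt /certs/ncona.pem
        # Append an X-Forwarded-For header with the client's IP to forwarded requests
        option forwardfor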

Read more »

Free SSL certificates with Let’s encrypt

This blog and a few of my other personal projects are not using HTTPS at the time of this writing. Using plain HTTP has a couple of disadvantages that could result in catastrophic consequences:

  • Traffic can be sniffed – If somebody monitored the traffic on your network, they would be able to see everything you are sending and receiving (including usernames and passwords).
  • Traffic can be modified – When using plain HTTP, there is no guarantee that who you are talking to is who they say they are. Because of this, somebody could intercept your traffic and give you a response of their own. For example, they could give you a log-in form to trick you into entering your credentials.

Read more »