servers

Magic thingies.

Networking on the cheap

It is not hard to get your hands dirty with computer networking basics and operating networking equipment. I've been running my own home network for the past 4 years, messing around with IPv6, VLANs and multiple networks, all without expensive racked routers or switches.

Router

The most important device you'll need is a router. Any computer with at least one Ethernet port can act as a router. You can even use an old laptop. (It comes with a free keyboard and mouse too!)

Alternatively, if you have an old consumer router and access point combination (wireless router) that came from your ISP, it too can be turned into a more capable router.

Usually, your computer or wireless router won't support advanced features like VLANs out of the box and will have restrictive configuration options, so software is the next step in bending these devices to your will. There are a ton of router operating systems that you can install onto these devices, or if you want to spend the extra effort, you can also use a plain Linux distribution.
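If you go down the plain Linux route, the core of a router is just IP forwarding plus NAT. Here's a minimal sketch (the interface names eth0 and eth1 are assumptions; substitute your own WAN and LAN interfaces):

# eth0 = WAN, eth1 = LAN (assumed names)
sysctl -w net.ipv4.ip_forward=1

# NAT all LAN traffic out of the WAN interface
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT

You'd still want DHCP and DNS on the LAN side, which is what the dedicated router operating systems bundle for you.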

Infrastructure 2017: Configuring CoreOS and Kubernetes

With the new infrastructure, I wanted the entire setup to be reproducible from a set of configuration files and scripts. This means I can restore a fresh state anytime in the future, and it gives me a way to track changes in configuration. It also means I can quickly spawn another Kubernetes setup to test new features safely.

Before I created installation automation scripts, I spent a while learning how Kubernetes works by manually running the generic multi-node scripts Kubernetes provides, and failing repeatedly. A while back Sudharshan gave me one of his old OEM desktops, and it became really useful for testing Kubernetes installs. CoreOS is best installed using an Ignition configuration file: a JSON file with a specific format that CoreOS reads on first boot to put in place the files and configuration specified. Usually, it is preferred to write these files in YAML and then transpile them to JSON with a special tool. A great thing about CoreOS
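As a sketch, a minimal Container Linux Config in YAML might look like this (the SSH key is a placeholder):

passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - "ssh-rsa AAAA... you@example.com"

The Config Transpiler (ct) then turns it into the Ignition JSON that CoreOS consumes:

ct < config.yml > config.ign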

Quick start to HTTPS with Caddy

Caddy is an easy-to-use web server and reverse proxy. You can use it to enable HTTPS on your self-hosted app with little effort.

To start off, download Caddy for your platform. Place the executable in a nice folder; we'll call that your working directory.

Now in the same folder, you will need to write a Caddyfile, which is just a text file. Open your text editor and paste this:

:2015 {
    proxy / localhost:8080
}

Save it as Caddyfile without any file extension.

This will start a normal HTTP server at port 2015 and proxy all requests to your app at port 8080.

Now, in the terminal or command prompt, cd into the working directory and run Caddy: caddy.exe on Windows, or ./caddy on macOS or Linux.

You can now visit localhost:2015 to check that everything works.

Next, you have to get a domain to point to your server. You can get free domains from freenom.tk, or use your dynamic DNS provider. You can test that your domain works by visiting your-domain.com:2015 over mobile data.
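Once the domain resolves to your server, Caddy can take care of HTTPS automatically. A minimal sketch, assuming your app still listens on port 8080:

your-domain.com {
    proxy / localhost:8080
}

On startup, Caddy requests a certificate from Let's Encrypt, so ports 80 and 443 on your server need to be reachable from the internet.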

Infrastructure 2017: A new DNS setup

Previously, DNS was a pain to maintain. I was using a cloud DNS service, so for every subdomain I had to log on to CloudNS and use their web interface to update DNS records. This made it hard to switch DNS providers and easily edit records, and their three-domain limit made it impossible to add my other domains, so I had to run another DNS server anyway. I made the decision to run BIND, but because I hadn't purchased a static IP, I also had to run a service to update my zone files whenever my IP address changed.

To simplify my complex setup, I decided to host all my DNS locally and solve the dynamic IP problem at the same time. I made the switch from BIND to CoreDNS due to its extensibility. To ensure that parent DNS servers always point to the right IP, I got a couple of free domains (an example is "ns.makerforce-infrastructure.gq") and pointed them to HE's DNS service, where I set up dynamic DNS updates. In my local zone files, I could also use CNAME records to point to the same domain.
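To make this concrete (the zone name and file path here are illustrative, not my actual config), a record in a local zone file can alias a host to the dynamically updated domain:

www    IN    CNAME    ns.makerforce-infrastructure.gq.

and a minimal Corefile tells CoreDNS to serve that zone:

example.com {
    file /etc/coredns/db.example.com
}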

Infrastructure 2017: Router Setup

After getting the hardware and installing embedded pfSense onto a flash drive comes configuration. My initial intention was to have my servers and home network (which guests use) on separate VLANs. However, I quickly realised that enabling VLANs caused poorer network performance, so I went back to a single network and used static DHCP allocations.

But I was still getting lower than 500Mbps speeds, with the CPU running at 100%. While messing with settings, I found an odd solution: enabling PowerD under System > Advanced > Miscellaneous. With it enabled, I could finally get close-to-gigabit speeds on wired clients!

My guess would be that PowerD allowed the CPU to run at higher clock rates.
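For context, pfSense's PowerD option corresponds to FreeBSD's powerd daemon, which scales CPU frequency with load. On a plain FreeBSD box, the equivalent (pfSense toggles this for you in the GUI) would be:

# enable and start the CPU frequency scaling daemon
sysrc powerd_enable="YES"
service powerd start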

After using this machine for a few days, I'm satisfied with the performance it delivers, for only $120 SGD! I'd recommend a setup like this for a low-cost, low-power home server or router.

The case for this PC is not rackmountable, so I created a small shelf using a slab of wood and L brackets.

Infrastructure 2017: Router Hardware

October last year, I switched my routing to a virtual machine running pfSense, in the hopes of having better control over my home network. As it turns out, many hiccups have occurred since the move: issues with OpenVPN (which I have since disabled), Linux bridges being reassigned after software updates, and other seemingly random problems. The virtual network card also reduced maximum throughput, saturating at 200Mbps instead of the 800Mbps I previously got on the RT-N56U.

Since then, I've also been wanting to make the switch from services (like this blog and GitLab) running in user accounts and virtual machines to containers. Containers are isolated environments for processes to run in, providing much of the isolation of a virtual machine at close-to-native performance.
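As a taste, with a container runtime like rkt, fetching and starting a containerised service is a single command (the image is just an example):

rkt --insecure-options=image run docker://nginx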

So here's the start to a series on the upgrade of our infrastructure to a new setup powered by containers! I'll also be documenting progress and code on GitHub.

To start off the upgrade, I needed a better router (because my sibl

Moving to pfSense

Quite a while back, I was introduced to pfSense by a friend. At first look, I didn't quite see the benefits of using pfSense over manual iptables or a hardware router. The MakerForce server had been behind a consumer RT-N56U (quite a rock-solid access point cum router) with port forwards, and for a while fully 1:1 NATed (or "DMZ"). It served us well for quite a while, but recently I noticed some occasional freezes. So, I decided to jump ship and revamp the networking at home.

I began by demoting the RT-N56U to just an access point. I have two NICs on my main box, and since I didn't have another machine and was already running QEMU/KVM, virtualising pfSense was the best (and only) option.

The two NICs are important, because they allowed me to use one NIC for WAN and the other for LAN. In virt-manager, I configured the WAN NIC to use macvtap to pass through into pfSense, then set up a bridge on the host networking side (I had problems when QEMU managed the bridge). The host bri
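For reference, a macvtap passthrough NIC in the libvirt domain XML looks roughly like this (the host device name enp1s0 is an assumption):

<interface type='direct'>
  <source dev='enp1s0' mode='passthrough'/>
  <model type='virtio'/>
</interface>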

Our server setup

First, some history!

I started off my sysadmin adventures back in 2012. After learning Python, PHP and the web for one and a half years, I discovered Node.js. Back then, there was quite a bit of media hype around Node.js and its potential. Having stronger JavaScript experience and being used to the event-driven style of writing code, I picked it up in a breeze.

Now, that was at the age of 12: I had no credit card, I didn't know of any free VPSes, and I couldn't explain to my parents what a VPS was. I had been using 000webhost to run a couple of sites written in PHP or static HTML, but I wanted to host Node.js apps.

I had a laptop. Or rather, I shared a family laptop, but no one else used it, so I had Ubuntu installed on the HP520 (with a long backstory on how I accidentally switched to Ubuntu). My home network was a "3G WiFi Router" that had horrible reception issues and was unstable, so I switched to a USB dongle attached to my laptop, and left the laptop on its side on my desk.

So t

Daydreams

daydreams are nice
you get to look at the datacenter you are gonna build
and all the twisted pairs you imagine yourself routing
and the switches and patch panels
then you fill each 4U slot with those hotswap storage servers
and 2U slots with application servers
and then you look at the SFP+ connectors
with fiber cables stuck inside
and all the link aggregation
with 20Gbps throughput from a single node
then you imagine the processors in each node
probably a dual socket motherboard
with two Xeon 8 core processors
and that low clock rate
the SSDs that are the drives
epic cable management inside too
dont forget the router running pfSense
and the 10Gbps connection to your ISP
all the application servers running CoreOS
all the management and data storage servers running Ubuntu
and so your storage nodes
are all btrfs
in the btrfs RAID10 configuration
exposed using NFS to your app servers
your app servers
all would run rkt containers
with 4 designated as etcd leaders
but due to cost constraint

NGINX vs nghttpx

Recently in version 1.9.5, NGINX introduced experimental HTTP/2 support, purposefully dropping SPDY protocol support in favour of HTTP/2. Having no SPDY support in NGINX would leave the roughly 60% of users on SPDY/3.1-capable browsers falling back to HTTP/1.1 connections, lowering performance.

So, being interested in web server performance, I decided to do some small benchmarks as a rough guide to how NGINX and nghttpx may perform in a deployed environment. Of course, my tests are an inaccurate representation of real-life performance: I'm only serving a single 100kB text file through NGINX.

Setup

The tests are done on my Ubuntu machine, the server this very site runs on: an i7-4790 desktop with 20GB of RAM, running Ubuntu 15.10 Desktop.

I'm running three tests:

  1. NGINX 1.9.7 with HTTP/2
  2. nghttpx using the above as a backend over HTTP (invocation sketched below the list)
  3. NGINX 1.9.7 with SPDY/3.1
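For test 2, nghttpx sat in front of NGINX and terminated TLS. A sketch of that kind of invocation (the ports, certificate paths and benchmark parameters are assumptions, not my exact flags):

# HTTPS/HTTP2 frontend on 443, plain HTTP backend on 8080
nghttpx -f'*,443' -b'127.0.0.1,8080' /path/to/key.pem /path/to/cert.pem

# drive the benchmark with h2load from the nghttp2 project
h2load -n100000 -c100 https://localhost/100kb.txt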

And here are my compile flags:

nginx version: nginx/1.9.7
built by gcc 5.2.1 20151010 (Ubuntu 5.2.1-22ubuntu2) 
built wit