Continuing the “Router rumble” with pfSense 2.3.2 and a FW-7540

Following up from my previous round of router testing, I managed to get a spare Lanner FW-7540 with an Intel Atom D525 CPU to test how my current pfSense 2.3.2 setup compared to an EdgeRouter Lite. The results were well below what I was expecting: the pfSense box topped out at 490Mbit in the 1MB test, and its throughput was very spiky in the netdata graphs.

The results file is also available if you’d like to look directly at the ab output.


Filesize | Average Mbit/s | Total Failed Requests | Notes
10K      | 145.07         | 87                    | 10K concurrency test only resulted in 49Mbit. No failed requests in 10, 100 and 1000 concurrency tests.
100K     | 421.71         | 4896                  | No failed requests in 10, 100 and 1000 concurrency tests.
1MB      | 489.96         | 3341                  | No failed requests in 10, 100 and 1000 concurrency tests.

This test fairly clearly shows a ceiling. For WAN connections over 500Mbit, it looks like something beefier than an Atom D525 is necessary to handle NAT at the expected rate.

I also ran some more informal WAN to LAN iPerf3 testing on a direct connection (MDI-X), the EdgeRouter Lite and the pfSense/7540 combination to get some synthetic numbers (a minimal iperf3 invocation is sketched after the table):

Connection      | iPerf Result
Direct          | 941Mbit with no retries
EdgeRouter Lite | 939Mbit with retries
pfSense/7540    | 829Mbit with no retries
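
I didn’t record the exact iperf3 flags, so treat this as a minimal sketch – <server-ip> is a placeholder for the “server” laptop’s address:

# on the "server" laptop
iperf3 -s

# on the "client" desktop; traffic flows through whatever sits in between
iperf3 -c <server-ip> -t 60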

Given how well the EdgeRouter Lite seems to perform for its price, and since it beats out the more general purpose hardware, I suspect I will be swapping out for an ERL or ER-Pro very shortly.

Replicating the Ars Technica “Router rumble” with a Ubiquiti EdgeRouter Lite

A friend and colleague of mine (Matt) and I have an ongoing discussion about over-specced gear for our home networks. Our core routers have been FW-7540s running pfSense (Atom D525, 4GB RAM, 4 Intel NICs) since 2013. pfSense offers a huge advantage over commercial-grade routers – I run dual WAN with failover based on ping, link, and packet loss, have extremely customizable DNS and DHCP, and can set up an OpenVPN server in just a few minutes. Matt and I have also recently had 500Mbit+ downstream connections installed, so it’d be good to know what hardware and software combination is “for sure” capable of utilizing the full pipe.

There has been a series of excellent articles at Ars Technica this year by Jim Salter that constantly comes up in our discussions.

The first two articles were mildly interesting – we do plenty of Linux-based routing at the office, but I don’t really want to build a router from scratch at home if there is a distribution that works as well. The results in Jim’s latest Router rumble article, with pfSense 2.3.1 on the homebrew Celeron J1900, were described as “tweaky” and didn’t seem to hold up against the same homebrew hardware running Linux. I found this a bit odd: FreeBSD is widely assumed to have a hardened, robust and performant network stack, and the general impression amongst networking folks I’ve talked to is that Linux isn’t quite as good for this use case.

Coming from 2.2, the 2.3 series of pfSense is not exactly everything I’m looking for. I had to ‘factory reset’ the unit shortly after the 2.2 to 2.3 upgrade to avoid firewall rules displaying errors in the web configuration UI. As a personal irritation, the development team also took out the RRD-style graphs and replaced them with a “Monitoring” page, which I am not a fan of.


The Router rumble article, though, tested the UniFi Security Gateway but not the 3-port EdgeRouter Lite, which is my preferred option for users who need more capability than their ISP-provided modem/router combination. Jim did mention that neither was up to routing gigabit from WAN to LAN, so I figured I’d see if I could replicate the results and find out whether the ERL fared any better than the USG.

Configuration and Setup

Following the posts, I configured two machines to act as client and server. Both were booted to Ubuntu 16.04.1 live USB sticks and had ‘apt-get update; apt-get upgrade’ run before any tests were performed. I also had to run “rm -rf /var/lib/apt/lists” to get apt to start working.

  • The “client” machine at 192.168. running the test script and the netdata graphing and collection system is a Core i7 4770K, 16GB RAM and a PCI-Express Intel 82574L gigabit network card.
  • The “server” machine with nginx and the sample files is a Lenovo X230, Core i5 3320M, 16GB RAM and an onboard Intel 82579LM gigabit NIC.

A few changes from the Ars Technica article were needed to suit my configuration and testing. On Ubuntu 16.04, the command to install ab and nginx is apt-get install apache2-utils nginx (there is no standalone ‘ab’ package). I made the same configuration changes to /etc/nginx/nginx.conf, /etc/default/nginx and /etc/sysctl.conf as suggested in the article:


# /etc/nginx/nginx.conf
events {
    # The key to high performance - have a lot of connections available
    worker_connections  19000;
}

# Each connection needs a filehandle (or 2 if you are proxying).
# This directive belongs at the top level of nginx.conf, outside events {}.
worker_rlimit_nofile    20000;

http {
    # ... existing content
    keepalive_requests 0;
    # ... existing content
}


# /etc/default/nginx
# Note: You may want to look at the following page before setting the ULIMIT.
# Set the ulimit variable if you need defaults to change.
#  Example: ULIMIT="-n 4096"
ULIMIT="-n 65535"


# Additions to /etc/sysctl.conf
kernel.sem = 250 256000 100 1024
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 262144
net.ipv4.tcp_wmem = 262144 262144 262144
net.ipv4.tcp_rmem = 4194304 4194304 4194304
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_mem = 1440715 2027622 3041430
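
After editing /etc/sysctl.conf, the new values can be applied without a reboot:

sudo sysctl -p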

The testing script was modified to use a -s 20 parameter as indicated in the latest article, as well as sleeping for 10 and 20 seconds at appropriate times to distinguish each test run in the graphs:

#!/bin/bash
# Usage: pass a test name as $1; results land in ~/tests/$1.
# SERVER is a placeholder for the nginx box's address (omitted in this post).
SERVER=${SERVER:-192.168.1.10}

mkdir -p ~/tests/$1
ulimit -n 100000

ab -rt180 -c10 -s 20 http://$SERVER/10K.jpg 2>&1 | tee ~/tests/$1/$1-10K-ab-t180-c10-client-on-LAN.txt; sleep 10
ab -rt180 -c100 -s 20 http://$SERVER/10K.jpg 2>&1 | tee ~/tests/$1/$1-10K-ab-t180-c100-client-on-LAN.txt; sleep 10
ab -rt180 -c1000 -s 20 http://$SERVER/10K.jpg 2>&1 | tee ~/tests/$1/$1-10K-ab-t180-c1000-client-on-LAN.txt; sleep 10
ab -rt180 -c10000 -s 20 http://$SERVER/10K.jpg 2>&1 | tee ~/tests/$1/$1-10K-ab-t180-c10000-client-on-LAN.txt
sleep 20
ab -rt180 -c10 -s 20 http://$SERVER/100K.jpg 2>&1 | tee ~/tests/$1/$1-100K-ab-t180-c10-client-on-LAN.txt; sleep 10
ab -rt180 -c100 -s 20 http://$SERVER/100K.jpg 2>&1 | tee ~/tests/$1/$1-100K-ab-t180-c100-client-on-LAN.txt; sleep 10
ab -rt180 -c1000 -s 20 http://$SERVER/100K.jpg 2>&1 | tee ~/tests/$1/$1-100K-ab-t180-c1000-client-on-LAN.txt; sleep 10
ab -rt180 -c10000 -s 20 http://$SERVER/100K.jpg 2>&1 | tee ~/tests/$1/$1-100K-ab-t180-c10000-client-on-LAN.txt
sleep 20
ab -rt180 -c10 -s 20 http://$SERVER/1M.jpg 2>&1 | tee ~/tests/$1/$1-1M-ab-t180-c10-client-on-LAN.txt; sleep 10
ab -rt180 -c100 -s 20 http://$SERVER/1M.jpg 2>&1 | tee ~/tests/$1/$1-1M-ab-t180-c100-client-on-LAN.txt; sleep 10
ab -rt180 -c1000 -s 20 http://$SERVER/1M.jpg 2>&1 | tee ~/tests/$1/$1-1M-ab-t180-c1000-client-on-LAN.txt; sleep 10
ab -rt180 -c10000 -s 20 http://$SERVER/1M.jpg 2>&1 | tee ~/tests/$1/$1-1M-ab-t180-c10000-client-on-LAN.txt

I also generated ‘JPEG’ files with /dev/urandom and placed them in /var/www/html (default nginx directory):

dd if=/dev/urandom of=/var/www/html/10K.jpg bs=1024 count=10
dd if=/dev/urandom of=/var/www/html/100K.jpg bs=1024 count=100
dd if=/dev/urandom of=/var/www/html/1M.jpg bs=1024 count=1024

Finally, installing netdata on the client needed a different set of dependencies (16.04 may have changed some of them):

sudo apt-get install zlib1g-dev uuid-dev libmnl-dev gcc make git autoconf libopts25-dev libopts25 autogen-doc automake pkg-config curl
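
For reference, the clone-and-install steps at the time looked roughly like this (the netdata repository has moved around since, so treat the URL as an assumption):

git clone https://github.com/firehol/netdata.git
cd netdata
sudo ./netdata-installer.sh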

After cloning the Git repository and running the suggested install steps, you may also need to edit /etc/netdata/netdata.conf and add the following sections (replacing enp5s0 with your network interface from ifconfig) in order to get the same graphs:


  enabled = yes

  enabled = yes


You can download the test runs in a ZIP file, which contains the ‘ab’ output from the tests. Note that some of the graphs show a larger separation between the ab runs with different filesizes; this was due to different ‘sleep’ values being tested in the script.

Direct Connection (Auto MDI-X)

Many NICs support auto MDI-X, which allows a standard Ethernet cable to act like a crossover cable if both network cards support it. I ran a test with the server directly connected to the client, and the graph came out very clean.



Filesize | Average Mbit/s | Total Failed Requests | Notes
10KB     | 700.34         | 3117                  | 10K concurrency test only resulted in 308Mbit. Failed requests only in 10K concurrency test.
100KB    | 785.03         | 3368                  | 10K concurrency test only resulted in 417Mbit. Failed requests only in 10K concurrency test.
1MB      | 912.16         | 5533                  | All tests had a similar speed. Failed requests only in 10K concurrency test.

Switched Connection

With both systems connected to a Netgear GS108T switch, the graphs were fairly consistent with one unexplained valley in the 1MB/-c 100 test – but there were no failed requests to nginx noted in the ab results. This seemed to be a fluke; I wasn’t able to reproduce the problem in the exact same spot later. However, the valley did appear during other tests, lending suspicion that the GS108T may be causing a problem.


Filesize | Average Mbit/s | Total Failed Requests | Notes
10KB     | 651.75         | 3939                  | 10K concurrency test only resulted in 131Mbit. No failed requests in 10, 100 and 1000 concurrency tests.
100KB    | 760.61         | 1085                  | 10K concurrency test only resulted in 319Mbit. No failed requests in 10, 100 and 1000 concurrency tests.
1MB      | 908.38         | 6690                  | All tests had a similar speed. Failed requests only on 1000 and 10K concurrency tests.

EdgeRouter Lite

The ERL was flashed with 1.9.0 firmware and configured using the “Basic Setup” wizard, which sets the configuration back to default values. The eth0 port acts as the WAN interface and provides NAT to the eth1 (LAN) interface; the wizard also configures some default firewall rules. I set up the WAN interface with a static IP and plugged the laptop into eth0, while the LAN interface (eth1) handed the desktop an address via DHCP. The resulting config.boot file is also available for inspection. A rough CLI equivalent of the wizard’s output is sketched below.
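
This is only an approximation of what the wizard produces in the EdgeOS CLI – the addresses and NAT rule number are example values standing in for mine, and the wizard’s actual firewall entries are omitted:

configure
set interfaces ethernet eth0 description WAN
set interfaces ethernet eth0 address 203.0.113.2/24
set interfaces ethernet eth1 description LAN
set interfaces ethernet eth1 address 192.168.1.1/24
# Masquerade LAN traffic out the WAN interface
set service nat rule 5010 outbound-interface eth0
set service nat rule 5010 type masquerade
# Hand out DHCP leases on the LAN side
set service dhcp-server shared-network-name LAN subnet 192.168.1.0/24 default-router 192.168.1.1
set service dhcp-server shared-network-name LAN subnet 192.168.1.0/24 start 192.168.1.100 stop 192.168.1.200
commit
save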


Unfortunately the scale and size of this image are slightly off from the direct and switched tests, but the peaks and dips in the graph should be sufficient to demonstrate the differences in performance. We can see that the 10KB test is particularly brutal on the EdgeRouter Lite, with speeds topping out at about 215Mbit/s. The 100KB test is slightly better in terms of bandwidth, with the lowest test result at 626.82Mbit, but the top of the graph is not smooth on each test. Finally, the ERL with this firmware pulls out a great performance on the 1MB test, with only the last 10K concurrency run showing a few dips in the graph; the lowest result from ab sits at 904.73Mbit.

Filesize | Average Mbit/s | Total Failed Requests | Notes
10KB     | 153.81         | 55                    | 10K concurrency test was especially terrible at 51.25Mbit/s. No failed requests in 10, 100 and 1000 concurrency tests.
100KB    | 800.28         | 48                    | 10K concurrency test only resulted in 626.82Mbit/s. Failed requests in 1000 (3) and 10K (45) concurrency tests.
1MB      | 908.81         | 23723                 | 10K concurrency test failed more requests than completed.

Followup and Further Testing

These test runs raised some additional questions. For now, they’ve convinced me not to immediately run out and get an EdgeRouter Pro: according to these results, at 100KB to 1MB filesizes I’d still be able to utilize my full download bandwidth on an ERL. What I really need to do is pull my pfSense box out of line and run it through this test scenario to compare it directly to the EdgeRouter Lite and a direct connection.

Performance and Bandwidth

  • I am surprised at the performance difference between the Ars tests of the UniFi Security Gateway and the EdgeRouter Lite in this configuration. Since they have similar specs (512MB RAM, promised 1 million packets per second at 64 bytes, promised line rate at >=512 byte packets), I would expect to see similar results. I’m wondering whether the USG was not using Cavium hardware offload support or if there were significant changes in the 1.9.0 firmware from the tested 1.8.5 version.
  • The 100KB test in all configurations had its average bandwidth brought down significantly by the 10K concurrency run.  It is not very clear what the ‘receive’ and ‘exceptions’ fields in the ab output indicate, but I suspect these are contributing factors. During further testing I would be curious to find out if there is a concurrency parameter between 1000 and 10,000 that would result in no errors in the output.
  • The 1MB/10K concurrency test through the ERL, while it returned >900Mbit in throughput, failed more HTTP requests than it completed. What is interesting is that there is nothing in the nginx error log on the laptop to indicate a failed response on the server side, and a brief packet capture didn’t return any non-200 status codes for responses.

Tweaking and Tuning the Test

  • sysctl parameters could likely use some additional tweaking for the two systems. The original Ars article didn’t document each option and while I trust Jim’s parameters, there may be something more we can do with the 16GB of RAM in the test clients.
  • Consider changing the nginx web root where the .jpg files are stored to a ramdisk, to avoid the risk of the webserver process having to repeatedly read from the SSD (a sketch follows this list). Of course, nginx may already be caching these files in memory; I could look at iotop during the ab run to see what disk access patterns look like.
  • Consider whether there is a better way to simulate NNTP and BitTorrent downloads rather than plain HTTP traffic, because that’s really what people are doing with gigabit-to-the-home on the downstream end. NNTP traffic, for example, generally looks like TLS inside TCP. For most copyright-infringing purposes, it also requires the client to reassemble yEncoded chunks – so there is a CPU impact on the client that is not necessarily present with straight TCP + HTTP. It would be interesting to come up with a “minimum system requirement” to be able to download and reasonably process NNTP data at 1000Mbit line rate.
  • Consider varying the contents of the data in each file downloaded – that is, a performant enough server should be able to spew out different data content for each request.
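
On the ramdisk idea above, a minimal sketch would look like this – 64MB is an arbitrary size, and the mount does not survive a reboot:

sudo mount -t tmpfs -o size=64m tmpfs /var/www/html
# tmpfs starts out empty, so regenerate the test files on it
sudo dd if=/dev/urandom of=/var/www/html/10K.jpg bs=1024 count=10
sudo dd if=/dev/urandom of=/var/www/html/100K.jpg bs=1024 count=100
sudo dd if=/dev/urandom of=/var/www/html/1M.jpg bs=1024 count=1024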

Outstanding Questions

  • The netdata graphs presented in the latest Ars article do not seem to match mine with respect to width of each segment. Given that the filesizes are changing during each test (so obviously there will be more data and packets transferred in the 1MB test, which will take more time on the horizontal axis), I’m curious as to what causes this difference.
  • I have concerns about the GS108T and whether it is causing drops during the testing; I’ll have to bring in several switches and re-run the tests.
  • Unrelated, but I also happened to notice the netdata statistics indicating TCP errors and handshakes when the desktop was plugged into a different switch on my main home network segment, despite ethtool and ifconfig not indicating any issues on the interface. This concerns me; I’m wondering if there is a misbehaving device on the LAN and whether I can isolate it with packet captures or by unplugging sections of the network until the problems disappear.

Office 365 and Exchange Migration Notes

This post is a collection of my recent Windows/Exchange administrative work.

Run AD Directory Sync Manually (New Version of Start-OnlineCoexistenceSync)



# Run a delta (incremental) sync:
Import-Module ADSync
Start-ADSyncSyncCycle -PolicyType Delta

# To force a full (initial) sync instead:
Start-ADSyncSyncCycle -PolicyType Initial

How do I check total mailbox sizes for Office 365/Exchange Online mailboxes?



# In PowerShell:
$LiveCred = Get-Credential
# ConnectionUri below is the standard Exchange Online endpoint
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $LiveCred -Authentication Basic -AllowRedirection
Import-PSSession $Session

Get-Mailbox | Get-MailboxStatistics | Format-Table DisplayName, TotalItemSize

# When done:
Remove-PSSession $Session

Error during migration: “MigrationPermanentException: Cannot find a recipient that has mailbox GUID <GUID>” error message when you try to move a mailbox in an Exchange hybrid deployment


  • Ensure the local user object doesn’t have an Exchange GUID set. From the local Exchange Management Shell:
    Get-RemoteMailbox <MailboxName> | Format-List ExchangeGUID
  • Get the GUID from the error message, or retrieve it from the O365/Exchange Online shell (connect as above):
    Get-Mailbox <MailboxName> | Format-List ExchangeGUID
  • Set the Exchange GUID for the user from the local Exchange Management Shell:
    Set-RemoteMailbox <MailboxName> -ExchangeGUID <ExchangeGUID>
  • Force directory sync. Using the latest Azure AD Connect commands, on the server with the directory sync tool installed:
    Import-Module ADSync
    Start-ADSyncSyncCycle -PolicyType Delta
  • Monitor with “Azure AD Connect Synchronization Service Manager” GUI application if needed.


Error during migration: MigrationPermanentException: Mailbox size 12.56 GB (13,489,367,463 bytes) exceeds target quota 2.3 GB (2,469,396,480 bytes).


  • If applicable to a single user, use ADSI Edit to set the “mDBUseDefaults” property to False on the applicable user object, then try again.
  • If database or organization-wide, use the Exchange Administrative Center to remove quotas for the database.

I have a migration batch that partially failed. Now I can’t get those mailboxes to migrate.


Scenario: A migration batch was partially successful (one or more mailboxes in the batch migrated properly), and the errors for the remaining mailboxes have been corrected. I’d like to start a new migration batch containing the failed mailboxes, but the batch bombs out with an email to the Exchange Online administrator. In the portal, the batch looks like it’s still migrating, but the results CSV that was emailed contains the following error message for each account:

The user "" is already included in migration batch "My Migration Batch Name."  Please remove the user from any other batch and try again.

In this case you need to remove the user from the migration batch using the Remove-MigrationUser cmdlet while connected to the Exchange Online PowerShell session (the corresponding cmdlets are sketched after this list):

  • Get the details of all users in migration batches, or the details for the specific user being migrated.
  • Remove the user from the migration batch. Use the additional -Force parameter if you aren’t running interactively.
  • Clean up any migration batches that may still be in progress with the ‘already included’ error.
  • Create a new migration batch containing the affected mailboxes.
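
Assuming you’re connected to the Exchange Online session shown earlier, the sequence maps to these cmdlets – <UserEmail> and <BatchName> are placeholders:

# List every user attached to a migration batch, or inspect one user:
Get-MigrationUser
Get-MigrationUser -Identity <UserEmail>

# Remove the stuck user; -Force skips the confirmation prompt:
Remove-MigrationUser -Identity <UserEmail> -Force

# Find and remove any batch still stuck with the 'already included' users:
Get-MigrationBatch
Remove-MigrationBatch -Identity "<BatchName>"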

Fix: trying to overwrite ‘/usr/share/accounts/services/google-im.service’ installing kubuntu-desktop

I have an Ubuntu 16.04 desktop installation with Unity and wanted to try KDE, so I ran sudo apt-get install kubuntu-desktop. apt failed with the following message:

trying to overwrite '/usr/share/accounts/services/google-im.service', which is also in package account-plugin-google [...]

The original issue at Ask Ubuntu has several suggestions, but none of them worked – any apt command returned the same requirement to run apt-get -f install, which in turn gave the original “trying to overwrite” error message. Synaptic also wasn’t installed, so I couldn’t use it (or install it, as all other apt installation commands failed.)

I was able to get the dpkg database out of its bad state and continue to install kubuntu-desktop by running the following:

dpkg -P account-plugin-google unity-scope-gdrive
apt-get -f install

(Link to original Kubuntu bug for posterity:

This post was cross-posted to The Linux Experiment, where I haven’t written anything for months.

RiteBite and Invisalign, just over a year in

I’m just over a year in since starting Invisalign treatment with RiteBite Orthodontics – and here’s how things stand.

Positive Experience

I want to reiterate that I’m quite pleased with the experience I’ve had with Dr. Luis and RiteBite. Everyone at the Waterloo office has been friendly and professional, and my appointments have always started on time. I feel like Invisalign was definitely a better option than braces. Even under the perpetually ticking clock of their Terminal Services-hosted dental software, everyone that’s put their hands in my mouth has done a great job.

Don’t you just love the graphics?

One of the best improvements RiteBite has made since I signed up has been the addition of the Case Graphics / Patient Records section to their patient portal. Despite its dated “win a 4th-gen iPod” banner on the landing page, it has X-rays and full sets of mouth and jaw pictures from every appointment where the digital camera comes out.

These photos are perhaps the most persuasive tool they have to convince customers that money spent on orthodontic/Invisalign treatment is worth it. The progress made after just eight months of trays was phenomenal. Teeth are shifting into their proper positions and I have much higher confidence in a successful result.

Social Media Milling, aka Poisoning the Well

RiteBite’s Internet presence/social media strategy is intended to attract new customers. They have a decent website and the usual Twitter / Facebook / Pinterest / Instagram accounts. Current patients are enrolled in the Patient Rewards Program, where 10 points = $1 in gift card value, redeemable with a minimum 100 points.

Straight out of 2001, the RiteBite Rewards Hub.

At a typical appointment (6-8 weeks apart) you might get 2 points for “brushing after signing in”, another 2 for “being on time”, and 3 for “wearing appliance as instructed”. The higher point values in this program are designed to encourage social media interaction – a YouTube video testimonial will get you 20 points, and 10 points goes to the author of a Google Maps review.

Given these values, it’s a bit of a grind to make it to your $10/100 point Tim Card: at roughly seven points per appointment, every 6-8 weeks, that’s around 14 visits – close to two years of treatment.

Online review and social media activity for RiteBite is inevitably going to skew on the positive side, because there’s a reward for doing so. As a cynical tech worker, I’m also highly allergic to anything like a “selfie contest”. Occasionally I’ll get an email promoting one and I scowl before remembering that a large portion of RiteBite’s patients are teenagers with nothing better to do than hashtag.

Full disclosure: I was credited with a whopping 250 points for referring a friend to the practice shortly after I signed up, but I have yet to exchange them for anything.

Align Technology, Inc.

Be aware that Align Technology, Inc. is also very heavily involved in managing their online presence and regularly comps “mommy bloggers” with treatment either for themselves or their kids. You can usually find these disclosures at the bottom of the page or post in question in FTC-compliant language. These posts exclusively skew positively for Invisalign over other types of treatment, and hammer home the main marketing points (can remove trays, easy to use, comparable in cost to braces, no metal mouth.)

They also appear to engage in patent-troll like behaviour, but I don’t currently have any solid opinion on the merits of their legal maneuvering.

Invisalign Drawbacks

I’d still pick Invisalign if I had to choose between it and conventional braces again, but consider the following:

  • For best results, trays have to be in for 20-22 hours per day, and you’re not supposed to drink anything other than water with the trays in. So it’s really only for meals that removing the trays is practical. I can’t just try a drink or have a bite of food – it becomes a whole ordeal to remove them, and then they’re supposed to be replaced as soon as possible. In what might be seen as a net positive for my health, I’ve switched to drinking soda water (rather than cola or coffee) during the work day because of this inconvenience.
  • Plastic in my mouth during the night sucks. I tend to drool overnight with the trays in, and even through a pillow protector I’ve ruined at least one pillow.
  • You still have to have attachments bonded to your teeth, which are initially rough on the inside of the mouth. The process of affixing them is also unpleasant, as it requires your jaw to remain open in an odd position for several minutes per attachment.
  • A surprise to me – and not really fully described at my initial appointment – is that my second set of trays required installation of a “button” (a metal protrusion cemented to a tooth) and use of an elastic. This also complicates insertion and removal. More complicated cases are likely to have more elastics and buttons.
  • It’s not completely painless. Switching to a new set of trays causes pressure and occasional tooth pain. I find popping two Advil is necessary on the first day of a new set, or otherwise I can’t concentrate at work.
  • Don’t lose or break your aligners; it’s a $150 replacement fee per set. I have heard that depending on where you are in your treatment process or cycle, you may be able to skip to the next set instead. With braces you have to be cautious about breaking brackets or loose wires, but with a set of trays it’s incredibly easy to leave them in a napkin at a restaurant or misplace the Invisalign case.

“4 Strikes and They’re Off!”

At RiteBite, apparently you can get kicked out of braces (or Invisalign) if you don’t have decent oral hygiene at four appointments. According to the initial contract I received, RiteBite also can “rat you out” to your dentist with a letter and won’t perform whatever orthodontic process was scheduled for the day.

Since I haven’t heard about this system since the initial package of paperwork, I think this is more of a way for parents to threaten their kids into compliance for orthodontic treatment – “if you get a 3 or less on this arbitrary grading system, you’re in trouble!” I suspect it’s not frequently employed to its full extent.

With this in mind, Invisalign itself does encourage better oral hygiene. You won’t want to put the aligners back on without cleaning your teeth well – a fragment of steak in between molars becomes very painful when compressed with plastic trays.

Aligner Use and Abuse: Beer, Whisky and Vodka

One of the really frequent questions online is “can I drink (beer) with Invisalign in?” I’ll refer you to “Another Invisalign Blog”, where the author has written specifically about Drinking Beer With Invisalign and Pros And Cons of Invisalign: Revised After 2+ Years of Wearing Aligners. Although the author’s recent posts have gone into the realm of what I’d consider unnecessary surgery, her writing was crucial for me in my early research.

I am sure that Dr. Luis would not approve of drinking anything other than water, but here’s my experience:

  • It is definitely possible to drink light-coloured beer with aligners in. In 2014 I tried this at Oktoberfest, matching each beer with a glass of water, and didn’t notice any discolouration afterwards. In contrast, for 2015’s festival of German-style debauchery I decided to remove them entirely for the evening.
  • Do not drink dark-coloured liquors with aligners in – it will absolutely stain the trays.
  • I will drink cider (Grower’s 1927 Premium Dry), vodka/soda or vodka/7Up with a set of trays still in, but am meticulous about removing, cleaning, and reinstalling them for the night and going to bed. The trays don’t seem to be any worse for wear as long as they are cleaned with a separate toothbrush dedicated to this task.

What’s Next

Since initially drafting this post, I’ve moved on to waiting for a third set of trays, targeting completion in August 2016. The “button” remains on in between sets, but the attachments get taken off.

I’ll follow up in a few months with progress on the next set.

Another “Let’s Encrypt” post for nginx

I’ve replaced the certificate on this site with one issued by Let’s Encrypt and plan to do so for all clients (or enable SSL in the first place) as their domains come up for renewal, or other maintenance work is contracted. The big downside is a 90 day expiry time, which means renewing the certificate and running service nginx reload at least that often.

I had no end of issues using the official client as it wouldn’t create the .well-known/acme-challenge files necessary to get the domain to validate (yes, I checked directory permissions.) Instead, Vincent Composieux has some excellent instructions on just using the certonly parameter inside a script. Rundown, including my changes in case the article disappears:

  • clone letsencrypt repository to /opt/letsencrypt
  • create /usr/local/etc/
# We use a 4096 bit RSA key instead of 2048
rsa-key-size = 4096

email =
domains =,

authenticator = webroot

# This is the webroot directory of your domain in which
# letsencrypt will write a hash in /.well-known/acme-challenge directory.
webroot-path = /var/www/
  • Create a shared nginx SSL include (referenced later as global/ssl.conf) containing:
ssl_session_timeout 5m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:50m;
ssl_dhparam /etc/ssl/dhparams.pem;

if ($myroot = false) {
        set $myroot $realpath_root;
}

location '/.well-known/acme-challenge' {
    root $myroot/;
    try_files $uri /$1;
}
  • For each site in /etc/nginx/sites-enabled, update the SSL definition to store the webroot in the $myroot variable, then have the root directive (and ssl.conf) reference it:
server {
        listen 443 ssl;
        # [...]
        set $myroot /var/www/;
        root $myroot;
        include global/ssl.conf;
        # [...]
}
  • Create the certificate: sudo /opt/letsencrypt/letsencrypt-auto certonly --config /usr/local/etc/
  • Add the certificate paths to each site in sites-enabled:
server {
        # [...]
        include global/ssl.conf;
        ssl_certificate /etc/letsencrypt/live/;
        ssl_certificate_key /etc/letsencrypt/live/;
        # [...]
}
  • To automatically renew certificates 30 days before expiry, checking each day: ln -snf /usr/local/bin/ /etc/cron.daily/ – a sketch of what such a renewal script can contain follows.
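
A minimal sketch of a renewal script, assuming the same config file used for the initial certonly run (the .ini path here is a placeholder) and the client’s --renew-by-default flag to skip the confirmation prompt:

#!/bin/sh
# Re-issue the certificate non-interactively, then reload nginx so it
# starts serving the new one.
/opt/letsencrypt/letsencrypt-auto certonly --renew-by-default --config /usr/local/etc/le-example.ini
service nginx reload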

Some adjustments are obviously necessary for multiple sites but this got me past the point where site validation failed.

Invisalign on my own dime: orthodontics in Kitchener-Waterloo

I’ve been meaning to write about my experiences with Invisalign and the orthodontic consultation and treatment process since I started investigating various options in July 2014. On advice from my dentist Dr. Reddy and her staff at King Street Dental, I received several referrals to orthodontists in the KW area, and did my own research into reputation, pricing and treatment options.

Before getting into the orthodontic part of the piece, I would definitely recommend Dr. Reddy. In my experience, she handles both routine and emergency dental work to a very high standard.

Initially, Dr. Reddy suggested that she could extract one or more teeth to correct crowding in my lower jaw, but also indicated that I should look at orthodontic treatment as an alternative.


I received evaluations from three orthodontic practices in the Kitchener/Waterloo area:

TL;DR: Out of these three options, I opted for Invisalign treatment with RiteBite in August 2014 and began wearing the aligner trays in October 2014. As of October 2015 I am on a second box of trays, but from everything I’ve heard, I am on target to finish within 24 months.

My main concerns with orthodontic work were the following:

  • What is the cost? Despite the fact that I have health coverage through my employer, orthodontic coverage is generally limited to dependents under 19 years of age, so I’m on the hook for the whole bill. This is a common theme with corporate health benefits – even if you have 100% dental, orthodontic work is generally provided for your kids only.
  • Is it going to be a gigantic pain in the ass? I have heard horror stories of people breaking brackets and popping wires, unable to eat anything but soup after getting braces tightened, and having to use crappy plastic mouthguards or slimy retainers for the rest of their natural life. I also didn’t want things to drag on for months or years past the quoted timeframe.

I received significantly different options and opinions from each practice, so I’d highly recommend getting multiple evaluations performed. Payments are typically 0% financing with monthly installments over the expected course of the treatment, plus an upfront deposit. (You get a 2% to 5% discount at these practices for a lump sum payment, which I didn’t find to be worth it.)


I first went to Nicolucci Orthodontics, based on the initial recommendation by Dr. Reddy. I’ll note that my experience may have been negatively biased by the fact that it was my first consultation, and I wasn’t quite ready to make a decision on braces vs. tooth yanking.

Dr. Mai performed an initial evaluation, and the results complicated the situation. Before they would perform any orthodontic work, they’d want one lower tooth extracted. She and her assistant also indicated that I would likely require gum grafts, and that they wouldn’t begin treatment until my gum health improved. Treatment time was 24 to 30 months and the only option available was traditional braces.

Pricing was the least expensive of the three practices (not by much) at $5500; reviewing the documentation in preparation for this post, there was also a “diagnostic records fee” of $300 on top of that. The initial deposit requested was $2200 (so really $2500), then $3300 spread out over 24 months.

Even though I knew my oral health wasn’t especially great, I didn’t think it was bad enough to warrant a hard stop. The experience was really discouraging. I wasn’t impressed that tooth extraction and additional procedures were going to be needed on top of braces.


After the initial experience at Nicolucci, I wanted to price compare and see if there were other options available. TriCity was one of two additional referrals from Dr. Reddy’s office.

When I initially called to schedule the consultation, the receptionist indicated that there would be a $50 initial examination fee (which wasn’t listed on their website or referral card.) I balked a bit, and they were willing to waive the fee because “my dentist hadn’t mentioned it.” This was the only practice I went to that wanted to charge for the evaluation.

I was very impressed with Dr. Phan. He addressed all my questions, explained everything in a satisfactory fashion, and was upfront about the timeline (24 months) and expected results. He and his assistant had no issues with my oral health, and he was willing to begin treatment immediately, using clear braces for the upper teeth and conventional metal brackets for the lower ones. He indicated this would give a better result in fixing the bottom crowding.

More critically, Dr. Phan did not want to extract any teeth, and suggested that if I did go ahead with any extraction operations, it could put me into a situation where I’d need up to four upper and lower teeth removed in order to get the results he wanted. His recommendation was to complete my other evaluations and then make a decision, but not to have any teeth pulled in the meantime.

Cost was the highest of the three options, at $6300. I don’t have a precise payment plan breakdown available but it also involved an upfront deposit followed by 24 months of equal payments.

Dr. Phan and TriCity ended up being a really close #2 in my evaluation – only beaten out by the later option of Invisalign with RiteBite.


My last stop was at RiteBite, which is Dr. Luis’ practice. They have three locations: Waterloo, Cambridge and Listowel. I’ve only ever been to the Waterloo location but apparently you can book appointments and receive treatment at any one of the offices.

Going into the office was a stark contrast to the other practices I had visited. All the chairs in the lobby were occupied by children and their parents, and this has been consistent at nearly every appointment I’ve been to since. It’s a bit of a zoo compared to the other options – TriCity was completely serene and had a very upscale waiting room, and Nicolucci had much more of a high-end surgical practice feel.

I was seen promptly, though, and one of the treatment coordinators took digital photos of various angles of my face and teeth, rather than having the orthodontist examine my mouth directly. I thought this was a novel and sensible approach. After a bit of evaluation of the pictures, Dr. Luis came in for a short discussion. The takeaway was his claim that “whatever I can do with braces, I can do with Invisalign” and that he also recommended against any tooth extraction prior to orthodontic work. Timeframe was quoted at 24 months as well.

Initially I wasn’t as comfortable with Dr. Luis as I was with Dr. Phan. Dr. Luis has a Bluetooth earpiece perpetually attached, and he seems in quite a hurry to get from patient to patient. The treatment room in the Waterloo office is an assembly line – there are PCs at each station precisely timing the length of the visit based on the treatment plan for that appointment.

During later sessions, despite him clearly being torn in many directions, Dr. Luis has been quite friendly and given me his full attention when I posed questions. I also have to compliment the treatment staff: they are clearly on a tight schedule but are professional and perform tasks the right way, not just the fast way.

Pricing was the middle option – $5880 total (equal cost, regardless of braces or Invisalign choice), and the financial coordinator was easy to work with. My initial deposit could have been as low as $500, with the remainder of the balance spread out into payments over 24 months. They were also willing to charge my credit card on a recurring monthly basis for the instalments.

After a few email followups with my treatment coordinator and some research about Invisalign versus conventional braces, I ended up signing the treatment forms and going with Dr. Luis and RiteBite for this work.

What’s next?

A subsequent post will provide further details on Invisalign and RiteBite, having spent a year living with the trays and treatment. As a preview, though, I definitely recommend RiteBite/Dr. Luis and Invisalign as an orthodontic treatment option if it’s available.

Review: Roam Mobility in Las Vegas with a Nexus 5

I recently returned from a five day trip to Las Vegas, to once again play the low-limit blackjack at Hooters Casino Hotel, enjoy the complimentary drinks and see a few shows. I’ve done this before with friends, but the first major change is that this is the first year I’ve had cell coverage in the US thanks to Roam Mobility. I’d used them on a conference trip to San Francisco earlier this year and it was quite handy.

The general principle is that you pay $4/day for unlimited talk/text (including voice/SMS back to Canadian numbers), and also get a 300MB allotment of 4G/LTE data per day of the plan. Thus, if you buy three days you get 900MB to use at any time during the total plan. If you go through the allotment, it degrades to “unlimited” data at EDGE/128kbps speeds.

To get started, you have to buy a Roam SIM card ($20; comes in regular+micro combo pack or nano form factor) plus a $2 “LTE upgrade” fee as a one-time purchase. The SIM stays active for up to 365 days between plan purchases, so it’s worthwhile if you plan to use the service in the States at least once a year or can loan the SIM to someone who will use it. You also need an unlocked phone, and preferably one that supports the 1700MHz and 2100MHz bands used by T-Mobile US (Roam acts as a MVNO on T-Mobile service.)

I wasn’t entirely satisfied with how Roam Mobility worked with my Nexus 5 device during the trip. Some of the issues I encountered are specific to the Android device and some seem to be network/service related on Roam/T-Mobile US’ part. Here are a few caveats to consider.

Scheduling: Plan Early

Roam’s website lets you schedule plans in advance to start at a specific 15-minute increment. I had scheduled a plan to start at 10:30PM Pacific in preparation for our flight landing shortly before that, and had switched my APN to ‘roam’ beforehand. When we landed, I was unable to get voice, data or SMS service in Terminal 3 despite having “five bars” of coverage. While power cycling the phone, I received SMSes advising that I had fewer than 30 voice minutes remaining. There was also a stuck voicemail indicator that simply redirected me to the Roam customer service line (which is closed at that time of night.)

Example messages I received while waiting for the Roam Mobility plan to activate. In this screenshot the phone is set to Eastern time, so subtract three hours.


Service did not activate until 11:08PM, after we’d reached the hotel. I received another SMS from the 7850 shortcode advising that the 4-day plan was now active with 1200MB of data. Two friends of mine who were using Roam as well had no such difficulties – they had bought their plans shortly before boarding the plane and started service immediately. Their service became active on first power-up while waiting to arrive at the gate in Vegas. So in short, if you want your Roam plan to be active right as you land, be generous with the start time.

From perusing the Roam support site after the fact, if you run into a similar problem, you may want to try messaging ‘start’ to *7850 to see if this kicks off the account provisioning process right away. I suspect this is what customer service would have told me to try if I had called within business hours, but there wasn’t an easy way to get this information. I’d suggest that Roam add this trick as a message in the ‘activate service’ option of their IVR.

LTE Intermittent on Nexus 5

The coverage near the hotel and on the south end of the Strip (specific locations I checked included MGM Grand, Excalibur, and as far up as Bellagio) was very good, consistently displaying 4 to 5 bars of LTE. The phone sent and received Hangouts and BBM messages promptly. Coverage was also good near the north end by the Neon Museum and the Fremont Street Experience area.

However, when we headed slightly off-Strip, further down E. Tropicana to the Pinball Hall of Fame, my phone lost connectivity with no data symbol near the network indicator. The workaround was to back the network type down to 3G/HSPA. On the Nexus 5 this option is under Settings > (Wireless & Networks) > More… > Mobile networks > Preferred network type. Immediately after I forced 3G coverage, data began flowing again.

My travelling companions also didn’t suffer this issue: one was using an unlocked iPhone 5S and didn’t do anything special with his network settings throughout the trip, and the other had a Nexus 4 without LTE capabilities. For me, this problem negated the advantage of the $2 LTE upgrade to the SIM: I had to keep the phone in 3G-only mode for the rest of the trip to ensure I could get coverage everywhere.

No Mobile Hotspot on Nexus 5

Due to some apparent shenanigans with Android 4.4, the phone tries to use a different APN when in tethering/mobile hotspot mode. This is documented on the Roam Mobility site as an issue specific to the Nexus 5, but I believe it could affect all stock Android devices. A fix has been promised, but the issue had definitely not been resolved as of September 2014. Hooters has (slightly pokey) WiFi in room for hotel guests, and that was good enough to get my laptop online for flight check-in and researching shows.

As a note, though, it’s not just Android devices with an APN issue; once again the support site advises that iOS devices may need to get a custom configuration for hotspot use, and that BlackBerry 10 users could also encounter issues getting mobile hotspot capabilities to work.


Overall, compared to the data roaming gouge-fest from the usual Canadian carriers, Roam can’t be beat despite the issues I ran into. Next time, I’ll look at bringing a different or secondary phone to compare network and activation behaviours.

For a longer trip, I’d also consider a prepaid T-Mobile SIM ($10 SIM plus $30/month, 5GB LTE data.) The main issue with a T-Mo US SIM is that you’ll have to have it shipped to your destination or get one at a convenience store or dealer in the States, whereas Roam has retail presence at a variety of stores in Canada or will ship you a SIM for free.

If you’re currently with WIND Mobile, they have a $15/month US roaming option that can be purchased for 30 days and then removed from your account, which makes it a less expensive option than even buying the Roam SIM the first time. It also uses T-Mobile US towers so coverage should be the same.

Update, 2014-10-15: Looks like the Roam support team on Twitter is advising use of the ‘wholesale’ APN now rather than ‘roam’ to resolve issues with data connectivity. Something to try out if you ran into similar issues. I expect I’ll be using Roam early next year again and will report back.

WordPress file permissions and upgrades with

(Post updated 2015-05-07 with the results of some helpful feedback from mbrowne. Comments, GitHub issues and pull requests are always welcome!)

I maintain a Github repository of small scripts that are useful (at least to me) and occasionally get comments or email about them. I received an email yesterday asking about the WordPress file permissions applied by the script, which is a simple Python wrapper around a few common filesystem operations. I’d initially written about it a few years ago as a utility to allow sites to auto-update.

Since the script was written, it appears that there have been some changes in the way that WordPress performs upgrades. I’ll excerpt the issue from the original email:

I have recently ran your script on our wordpress website to fix permission issue.

But we are getting below error while we try to upgrade wordpress from admin panel.


“This is usually due to inconsistent file permissions.: wp-admin/includes/update-core.php”


When i look the permission I could see update-core.php file have only read permission for webserver user “www-data”. Is your script designed to set 644 for files in this folder ?

-rw-r--r-- 1 username www-data  47326 Aug  1 06:09 update-core.php


I took it upon myself to read some of the WordPress code that performs core updates, as well as some of the documentation. To answer the original question: the script does set 644 permissions on all WordPress files in the directory tree, then goes through the wp-content directory and adds group write permissions only where necessary.

The WordPress auto-update documentation states:

When you tell WordPress to perform an automatic update, all file operations are performed as the user that owns the files, not as the web server’s user. All files are set to 0644 and all directories are set to 0755, and writable by only the user and readable by everyone else, including the web server.

Unfortunately this doesn’t seem to match with the behavior in the code – when a direct FS_METHOD is used for manipulating files rather than through FTP or SSH, operations get performed as the web server user (www-data). Therefore, the 644 permissions on wp-admin are too restrictive to allow core upgrades.

There are a few solutions to this problem:

  • If you do not accept the risks of the webserver (www-data) user having write access to your WordPress contents, use the wp-cli ‘core update’ command, running as the user that owns the WordPress files. This is my preferred method and it can be scripted to batch-update sites (see the sketch after this list).
  • If you completely control the webserver and can be assured that nobody will upload a potentially malicious plugin or execute code that traverses the filesystem, set the permissions to 664 for all files (not directories) under wp-admin and wp-includes directories and have the group set to www-data:

    • find $WORDPRESS_DIR/wp-admin -type f -exec chmod 664 {} \;
      find $WORDPRESS_DIR/wp-includes -type f -exec chmod 664 {} \;
      chgrp -R www-data $WORDPRESS_DIR/wp-{admin,includes}
    • I would not recommend this in a shared hosting environment. When you upgrade, the more permissive group write flag will be preserved on these files (see the WP_Filesystem function in wp-admin/includes/file.php for details on how FS_CHMOD_DIR and FS_CHMOD_FILE are set.)
  • If you have FTP or SSH access to the server, and want to upgrade using this technique, remove the define('FS_METHOD', 'direct'); line from wp-config.php. This ensures that file delete, write and move operations are performed as the FTP/SSH user.
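
As mentioned in the first bullet, the wp-cli approach scripts nicely. A hypothetical batch updater – the web root layout and ownership scheme here are assumptions, not part of the script above:

#!/bin/bash
# Run 'wp core update' as each site's owning user so that file
# operations never happen as www-data.
for site in /var/www/*/; do
    owner=$(stat -c '%U' "$site")
    sudo -u "$owner" -- wp core update --path="$site"
done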

I will be adding parameters to the script shortly to address the last two points, and allow users to either set more permissive permissions on the wp-admin/wp-includes directories or remove the FS_METHOD define.

Fixing SYSVOL DFS replication on Server 2012

Huge thanks to Matt Hopton at “How Do I Computer?” for this informative article on fixing DFS replication issues with the SYSVOL directory. In my case, the symptoms were similar – AD group policies weren’t being successfully updated at a remote site with its own read-only domain controller. The problem showed up in gpresult /h output.html: logon scripts that had been added to the main office DC earlier in the day could not be found on the branch domain controller.

Some additional notes:

  • Look in Event Viewer under Applications and Services Logs > DFS Replication for a warning with ID 2213, which provides the wmic command needed to resume replication (the documented form is sketched at the end of this post)
  • If the DC has been out of sync for too long, there will be an error with ID 4012; use:

    wmic.exe /namespace:\\root\microsoftdfs path DfsrMachineConfig set MaxOfflineTimeInDays=65

    and replace 65 with a number that is above the “server has been disconnected from other partners” value. Then, rerun the wmic command from the first event. Be patient; after a few minutes, if all goes well, another event will pop into the log indicating successful initialization of the SYSVOL folder.
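
For reference, the resume command provided by event 2213 takes this documented form – the volume GUID comes from the event text itself:

wmic /namespace:\\root\microsoftdfs path dfsrVolumeConfig where volumeGuid="<GUID>" call ResumeReplication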