Experimenting with AWS Control Tower and Lightsail

I’ve been trying desperately to catch up on my personal email these past couple months, since it’s rare (with the addition of two babies to our family) to have large uninterrupted blocks of time in which to hack. One of the recurring messages has been a “high CPU” notice from Linode every few days. In my experience this can mean a variety of things, ranging from “your site got quite a few visitors in a short timeframe” to “the backup process is going wonky” to “someone hacked your box and is trying to use it to mine cryptocurrency.”

Rather than put a whole bunch of time into investigating the root cause, I'll note that the system needs an entire OS upgrade anyway, and that we're running a bunch of services that are no longer in use, like IRC and Jabber servers – these have been replaced, at the cost of our freedom-as-in-speech, by Slack.

So, in the spirit of “cattle, not pets”, my goal is to decommission the Linode VM and move into AWS, automating as much as I can while doing it. Having the suite of services all in one place is ideal even on a $20/month budget, and there are a number of services – Lambda, IAM, Parameter Store, DynamoDB – that I could make good use of without ever paying anything directly.

Many of the people I support with web hosting aren’t willing or able to give up WordPress, so we’ll have to maintain that capability, but I’d also like a migration path for myself to a static site generator that publishes to S3/CloudFront. The best server is one you don’t have to run yourself.

On one hand, Control Tower

Enter Control Tower, which is one of the more enterprise-y services AWS offers, but surprisingly not at an exorbitant price. It’s a managed way to ensure account compliance and heavily leans on Organizations, SSO, Config, CloudTrail, Service Catalog and CloudFormation (stacks and StackSets) to actually carry out its work.

The biggest cost to me so far has been Config rule creations and evaluations, which added up to $4.76US for my first partial month in June (with four account creations and a couple missteps) but is sitting at $0.12US for July.

SSO, Organizations and the CloudFormation pieces are effectively free. If you’ve never played with AWS SSO, I also highly recommend it – it gives you a landing page similar to Okta where you can assume roles into your authorized AWS accounts for both console and CLI/API access.
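If you do try SSO from the CLI, the setup is pleasantly short with CLI v2’s aws configure sso. A sketch of the resulting profile in ~/.aws/config – the profile name, start URL and account ID here are made-up examples, not my real setup:

```
[profile tenant-admin]
sso_start_url  = https://d-example.awsapps.com/start
sso_region     = us-east-1
sso_account_id = 111111111111
sso_role_name  = AWSAdministratorAccess
region         = ca-central-1
```

After that, aws sso login --profile tenant-admin opens the browser flow, and any subsequent CLI call with --profile tenant-admin runs under the assumed role.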

Since having separate accounts is effectively the highest level of separation you can get, my idea with Control Tower is to run separate accounts for each tenant that may want to take control of their own services and billing eventually.

I’m not sold on what Config does for its price, but service control policies, CloudFormation-managed VPC definitions, the Account Factory strategy and CloudTrail everywhere are effectively the only way to maintain a secure, multi-account lifecycle. If anyone from AWS is listening, I’d love a way to use Control Tower with only SCP-based guardrails and just accept the lack of Config. I may try to hack this solution together myself at some point, even though it’s definitely not best practice.

If you decide to experiment with Control Tower yourself, I recommend disabling the public subnet, private subnet, and “regions for VPC creation” options in the Account Factory setup:

These settings should prevent the creation of managed NAT Gateways (with an hourly usage charge of $0.45US, not including any data you put through them), which are created in each region you select for VPC creation. I missed this when provisioning my first couple of accounts and caught it after a couple of hours, but even after updating the StackSet in the master account with what I thought were appropriate parameters to remove the resources, the gateways still remained.
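If you suspect you’ve been bitten by the same thing, a quick audit across the regions you enabled for VPC creation will confirm it. A sketch – the region list is only an example of ones you might have selected, and it needs to be run per account with appropriate credentials:

```shell
# List any NAT Gateways still alive in each Account Factory region.
for region in us-east-1 us-east-2 us-west-2 eu-west-1 ap-southeast-2; do
  echo "== $region =="
  aws ec2 describe-nat-gateways \
    --region "$region" \
    --filter Name=state,Values=available \
    --query 'NatGateways[].[NatGatewayId,VpcId]' \
    --output text
done
```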

If you want them back, you’ll have to go in and re-create default VPCs in each of the above regions for each new account – the provisioning process removes them from those five regions, but not from others, like Canada (Central).
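Fortunately, re-creating a default VPC is a supported one-liner per region (with the account selected via whatever profile you’re using):

```shell
# Put the default VPC back in a region where provisioning removed it.
aws ec2 create-default-vpc --region ca-central-1
```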

Low-end Lightsail

In what might be considered the polar opposite of service offerings to Control Tower, Amazon Lightsail is the other piece of the puzzle I’ve been experimenting with. It’s AWS’ answer to Linode, Digital Ocean, OVH and other VPS providers. In this market, you pay for a VM with a certain amount of disk, a static IPv4 address, some included bandwidth, and perhaps some DNS management capabilities.

Linode and Digital Ocean are reputable providers in this space and have expanded their offerings beyond VMs to include things like block storage, load balancers or a managed Kubernetes control plane at additional cost. Assume you’re probably spending $5/month for a Linux VM with a gig of RAM, 25GB storage, 1 vCPU of whatever their v measurement is and 1TB Internet data transfer.

For those familiar with AWS capabilities and pricing, Lightsail is interesting because it has some inclusions over vanilla EC2 instances to bring it in line with the above “developer cloud” providers. This makes the pricing much more predictable and transparent compared to the “simple” monthly calculator.

Ignoring the 12-month free tier, you could run a t3a, t3 or t2.micro instance in EC2, but those are already $6.80 to $8.47 monthly without reserving them or committing to a Savings Plan. You’re then paying $0.10US/GB/month for a gp2 SSD-based EBS volume, so kick in another $2.00 monthly on your bill for 20GB of disk.

AWS’ outbound data charges are also well-known to be entirely convoluted, but for the sake of this argument let’s assume you’re running the instance in a public subnet and sending more than 1GB but less than 10TB to the Internet in a month. Starting at $0.09/GB in the cheapest regions, 1TB of transfer comes out to just over $92US in AWS – and that would be included in the $5 monthly fee for both Linode and Digital Ocean.
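The arithmetic behind that number, assuming the first GB out each month is free and the remainder bills at the flat $0.09/GB tier:

```shell
# 1TB out of EC2 in a month: first GB free, remaining 1023GB at $0.09/GB.
awk 'BEGIN { printf "$%.2f\n", (1024 - 1) * 0.09 }'
```

which prints $92.07 – versus zero incremental cost at Linode or Digital Ocean.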

Lightsail has a much more user-friendly pricing structure if you’re willing to live with some limitations. You give up per-second billing for hourly granularity, but get 20GB EBS storage, 1TB bandwidth and three DNS zones (which would be $1.50/month in Route53 proper) with the smallest “nano” $3.50US/month plan.

A quick tour

Lightsail also uses a drastically different console interface than the rest of AWS, even when compared to the “new” and “old” designs that you might see in the EC2 or VPC consoles:

There’s quite a friendly interface for visualizing CPU usage and how the burst capacity works. Select the three-dot menu for your instance, choose Manage and then pick the Metrics tab:

There are also network and status check views, which is good because even though these are clearly CloudWatch metrics for an instance, you don’t have any access to them through the CloudWatch console. Another interesting capability here is the ability to create up to two CloudWatch alarms per metric – again within the Lightsail console only, and with no direct SNS access:
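You can at least pull the raw numbers behind those graphs through the Lightsail API rather than the CloudWatch one. A sketch – the instance name is a placeholder from my own setup:

```shell
# Fetch the last hour of CPU data that the Lightsail console graphs.
aws lightsail get-instance-metric-data \
  --instance-name my-nano-instance \
  --metric-name CPUUtilization \
  --statistics Average \
  --unit Percent \
  --period 300 \
  --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
```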

A real t2, but not in your own account

AWS obscures the true underpinnings of Lightsail a little bit. I assume this is an effort to distinguish the service from EC2, as well as to capture the set of developers who don’t want to learn about the nuances of burst credits and ephemeral vs. block storage, and just want a VM with some disk and network. Indeed, the Lightsail FAQ explicitly mentions “burstable performance instances” with similar language to the EC2 FAQ, but is never clear as to whether it’s really a t2 behind the scenes.

If you compare the RAM and vCPU specs on each of the plans, they line up fairly closely with the same t2-class instances – so the lowest end $3.50/month plan maps to a t2.nano, the $5/month plan a t2.micro, and so on from there – culminating in $160/month for a Lightsail-badged t2.2xlarge. Indeed, if you poke around with the Lightsail API, the GetBundles call returns a response with an instanceType element that reflects this mapping as well.
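You can see that mapping for yourself without reading the raw JSON:

```shell
# Show the plan-to-EC2-instance-type mapping straight from the API.
aws lightsail get-bundles \
  --query 'bundles[].[bundleId,instanceType,price]' \
  --output table
```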

But to truly prove whether a Lightsail instance walks, talks and quacks like an EC2 instance, you can query the instance metadata service once you have shell access:

curl http://169.254.169.254/latest/meta-data/instance-type

dutifully comes back with t2.nano on the $3.50 per month plan.

But now that we know it’s an EC2 instance, we can get other attributes like ami-id. In ca-central-1 on an Ubuntu 18.04 blueprint, my sample instance comes back with ami-0427e8367e3770df1 – the public, official build from 2018-09-12, so it’s not using “special” AMIs.

Reviewing the security-groups metadata returns a ps-adhoc-22_443_80 entry reflecting the ports I’ve allowed in from the Internet at large, as well as two more: Your Parkside Resources and Ingress from Your Parkside LoadBalancers. Perhaps “Parkside” is the Lightsail service codename?

The last metadata item I poked at was iam/info, which returned the following:

{
  "Code" : "Success",
  "LastUpdated" : "2020-07-26T20:26:24Z",
  "InstanceProfileArn" : "arn:aws:iam::956326628589:instance-profile/AmazonLightsailInstanceProfile",
  "InstanceProfileId" : "AIPA55KMCJTWZI2WL3WI5"
}

For good measure, I then ran aws sts get-caller-identity (which is an STS API call that always goes through) from the AWS CLI and got back:

{
    "UserId": "AROA55KMCJTWRW2HIHLEG:i-00b79bdc9e2156f8f",
    "Account": "956326628589",
    "Arn": "arn:aws:sts::956326628589:assumed-role/AmazonLightsailInstanceRole/i-00b79bdc9e2156f8f"
}

Note that the account ID 956326628589 is not one of the accounts in my organization – so what I think is actually happening here is that a Lightsail instance is indeed just a t2-class EC2 instance running in an Amazon-managed account. I also started Lightsail instances in us-east-1 and us-east-2 and confirmed that this account ID stays the same.

This has got to be really interesting to sort out on the AWS side in terms of how they bill customers (perhaps also contributing to the hourly granularity), but also makes sense as to why you can’t actually see the true instance, IAM resources, CloudWatch metrics or ENIs from console, API or CLI.

It’s also in the same vein as authorizing Classic|Application Load Balancer logs to be written to an S3 bucket in your account – you have to allow one of the predefined AWS account IDs (127311923021 for us-east-1, 797873946194 for us-west-2, 985666609251 for ca-central-1) in the bucket policy to be able to have logs written to your own bucket. They keep adding account IDs for new regions to the ELB list, so I expect this is a pattern that persists despite the introduction of service-linked roles or the service principal used in NLB logging.
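For reference, the relevant statement in such a bucket policy looks roughly like this – the bucket name and prefix are placeholders, while 985666609251 is the documented ELB log-delivery account for ca-central-1:

```shell
# Write out a sketch of the log-delivery bucket policy for ca-central-1.
cat > /tmp/elb-bucket-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::985666609251:root" },
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::example-elb-logs/AWSLogs/*"
  }]
}
EOF
cat /tmp/elb-bucket-policy.json
```

It would then be applied with aws s3api put-bucket-policy --bucket example-elb-logs --policy file:///tmp/elb-bucket-policy.json.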

Next up

I did successfully manage to move my own website to Lightsail – and honestly, the nano size is performing quite well despite its 512MB footprint. The next step is to document the relevant Terraform and Ansible code needed to provision a new instance from scratch, and to restore its content if it were ever terminated.

While I’m experienced with CloudFormation, Terraform is somewhat necessary for doing anything close to Infrastructure as Code with Lightsail – the only CloudFormation reference with Lightsail is when you go to export a snapshot to EC2. I have several complaints about Terraform, specifically around a couple of fairly-reasonable pull requests for Lightsail resources that have just sat there for a while, but that’s a whole separate post best illustrated with code examples.

I’d also like to poke around a little bit more with VPC peering – another area where I assume the service limits have been raised on the AWS side – and find out exactly how isolated my Lightsail instances are from other people’s.

With this move, I also have a few feature requests for the Lightsail team:

  • CloudFormation support for all resources! Please!
  • Ability to customize the policy attached to the role/instance profile. The use case is to grant the instance direct access to read from/write to an S3 bucket rather than using an IAM access key – or to use DynamoDB or Parameter Store, or to invoke Lambda functions directly.
    • I recognize this might be difficult, especially in light of the underlying instance running in a different account…
    • Maybe one could allow the Lightsail instance to assume a role in our own accounts for this purpose…
  • Ubuntu 20.04 and Amazon Linux 2 “blueprints”
  • Move to, or choice of t3 or t3a-backed instances which have much better network performance
  • S3 and DynamoDB gateway VPC endpoints
  • Graviton (ARM) instance support would be absolutely fascinating for this use case (Linux server running nginx/PHP/MySQL), although likely dependent on an entirely theoretical “t4g.nano” EC2 instance
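That role-assumption idea from the second bullet could, in theory, look something like this from on the instance – the role ARN and bucket are made up, and the whole thing hinges on being able to name the Lightsail-managed role (in Amazon’s 956326628589 account) in your role’s trust policy, which I haven’t verified is permitted:

```shell
# Assume a role in one of my own accounts from the Lightsail instance,
# then use its temporary credentials for an S3 call.
creds=$(aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/LightsailS3Access \
  --role-session-name lightsail-web \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)
read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN <<< "$creds"
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
aws s3 ls s3://my-tenant-bucket
```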

But for now, I leave the Internet to see if we can exhaust the burst credit balance on this thing!

Repurposing a Lanner FW-8758 as a Linux server

My employer recently divested themselves of some end-of-life hardware and several of my coworkers and I came into the possession of Lanner FW-8758 1U “network appliances”. These seemed like they’d make pretty good Linux servers, and I figured I’d document a little bit about the platform and process.

My other home servers are currently a Supermicro 5017C-MF and an IBM x3650 M2, which are both quite noisy. The FW-8758 has four small system fans plus a PSU fan, which together still seem somewhat quieter than the Supermicro. I haven’t put the system under any serious load yet though.

Hardware specs

The Lanner box can be configured with a variety of options. The one I received has a Core i7-3770 CPU, 8GB of DDR3 1333MHz RAM, and a single power supply. The x8 PCI-E expansion slot is not populated, and requires a card in a “Golden Finger” form factor. In the real world, this slot might be occupied with an expander that provides more copper or fibre network ports on the front of the unit.

Since there are only two memory slots on this board, I will have to debate whether it’s worth buying 2x8GB sticks to max out the 16GB capacity in the future. The unit also came with the optional VGA header and port, which was very useful in getting the OS up and running.

For local storage, I installed a 2.5″ Crucial MX100 256GB solid state drive that I had available from a decommissioned laptop. The unit could accommodate a second 2.5″ drive – there is physical room in the bracket, the motherboard has another SATA port and the 220W PSU also has a second SATA power connector that can be routed to the correct side of the case.

OS installation over serial

I attempted to install Ubuntu 18.04 using the server ISO written to a USB stick, making attempts with Rufus and Unetbootin. Rufus seemed better at generating a functional UEFI-compatible stick.

Since not all 8758s have the VGA port hardware, my first set of attempts involved kicking off the installation over an RJ45-to-serial cable, followed by a serial-to-USB adapter. Once I got PuTTY connected to the correct COM port at 115200 baud, the BIOS came up no problem, but the default Ubuntu install options didn’t display anything.

For serial output, there are isolinux/syslinux/grub directives that need to be added to the appropriate config files. ynkjm at GitHub has instructions for 16.04 and Unetbootin at https://github.com/ynkjm/ubuntu-serial-install that I managed to adapt and get to the text-mode installer. (Also see: http://www.sundby.com/index.php/install-ubuntu-16-04-with-usb-stick-and-serial-console/)
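The gist of those adaptations, on the grub side of a UEFI stick, is to point both the bootloader and the kernel at the serial port. A sketch matching my COM1/115200 settings – the kernel and initrd paths are from the classic 18.04 server ISO and may differ on yours:

```
serial --unit=0 --speed=115200
terminal_input serial console
terminal_output serial console

menuentry "Install Ubuntu Server (serial console)" {
    linux /install/vmlinuz console=ttyS0,115200n8 ---
    initrd /install/initrd.gz
}
```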

I was thwarted later on in the installation process by what I think is Ubuntu bug 1820604, where after the partitioning phase the installer dies with an OSError: Unable to find helpers or 'curtin' exe to add to path error. Dropping back to the shell, I couldn’t log in with ubuntu, root or Live System User – although I didn’t try liveuser, which apparently can be added by USB stick generator tools.

A workaround might be to have a completely blank disk with no partitions, but at this point I resigned myself to dragging out a VGA cable to complete the install. Another approach might have been to install Ubuntu to the target SSD from another machine, and then simply move the SSD back into the FW-8758 chassis. Or, maybe, by the time you read this, the installation images will be fixed.

Of note, I chose to install the HWE kernel, and made sure to select the OpenSSH server during the installation process. There were also quite a few pending updates, some of which addressed kernel vulnerabilities, so an apt update / apt upgrade / reboot cycle was the first thing I ran.

Software installation

The FW-8758 (without expansion card) has six Intel Ethernet ports on the motherboard; in 18.04 the leftmost one appears as enp2s0, and the rest map in order up through enp7s0.

I debated a bit whether I wanted to Chef-ify or Ansible-ize the configuration for this host, but figured that most of it would be run through Docker anyway, so not much gain there.

I installed Docker CE using the official instructions, then added and ran a Plex Media Server image with my own scripts. Besides these, I also installed unzip.
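For the curious, my Plex invocation boils down to something like the following – plexinc/pms-docker is the official image, but the paths, timezone and claim token here are placeholders rather than my real values:

```shell
# Run Plex Media Server with host networking so DLNA/discovery works.
docker run -d \
  --name plex \
  --network host \
  --restart unless-stopped \
  -e TZ="America/Toronto" \
  -e PLEX_CLAIM="claim-XXXXXXXX" \
  -v /opt/plex/config:/config \
  -v /mnt/drobo5n:/media/drobo5n:ro \
  plexinc/pms-docker
```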

For my CIFS mounts, I did the following:

  • sudo apt install cifs-utils
  • edited /etc/fstab to add servers with the following parameters. _netdev is supposed to avoid mounting before network is live and x-systemd.automount is supposed to make things play a bit better with systemd.
# Drobo 5N/5N2 with 'guest' access turned on
//      /mnt/drobo5n    cifs    rw,guest,uid=1000,file_mode=0777,dir_mode=0777,_netdev,x-systemd.automount 0       0

# Direct-attached disk shared from a Server 2019 system with SMB3
// /mnt/seagate8tb     cifs    rw,vers=3.0,uid=1000,forceuid,file_mode=0777,dir_mode=0777,_netdev,x-systemd.automount,user=nas,pass=supersecret 0     0

You could probably also use a credentials=/path/to/secretfile parameter instead of the user and pass parameters, but the nas login in my setup isn’t super sensitive.
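If you did want to go the credentials-file route, a sketch looks like this – I’m using /tmp purely for illustration, where /root/.smbcredentials with the same 600 permissions would be more typical:

```shell
# Keep the SMB login out of /etc/fstab by pointing at a root-only file.
cred=/tmp/.smbcredentials
cat > "$cred" <<'EOF'
username=nas
password=supersecret
EOF
chmod 600 "$cred"
# The matching fstab entry would then reference it:
echo "// /mnt/seagate8tb cifs rw,vers=3.0,uid=1000,credentials=$cred,_netdev,x-systemd.automount 0 0"
```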

Rack mounting/ears

There is an available rack/rail set for this appliance, but it’s a separate line item to order and I don’t have a full-depth cabinet in the basement (…yet.) Instead, I have a 4U vertical wallmount rack that currently contains the aforementioned 5017C-MF server, a Cisco SG500 52-port PoE switch (surplus gear), an Ubiquiti EdgeSwitch 24 and a 12-port Monoprice patch panel. Because the server isn’t too weighty, I was hoping to get away with just “ears”.

The left and right sides of the FW-8758 do have a rectangular pattern of screw holes, 13mm wide (horizontal) by 10mm high (vertical), plus a fifth hole closer to the front of the unit. I haven’t found a “universal” set of rack ears that fits this pattern, and the spares I had from a Netgear switch are not compatible.

I’ll update this post if I do find something that works. Warren suggested he may look into 3D-printing a bracket that matches the pattern, but for now the machine is perched on a resin rack.

Cruise review: NCL Sky to Florida and Bahamas, February 2019

Two cruises in less than sixty days? Not entirely unusual for us, but this experience on the Norwegian Sky was a departure from the usual in numerous, positive ways.

After being subjected to constant tales of delightful experiences aboard an NCL ship, our good friends Jon and Steph expressed interest in taking a break from winter weather. (I highly suggest you also read Steph’s first-timer review over on CruiseCritic, as well as peruse her copious collection of dailies and dining menus.)

We eventually settled on a 5-day February 2019 voyage that met both timing and budget requirements, and surrounded it by two days in Miami – one day before and one after the cruise.

Sky also offered a unique opportunity to compare our recent experiences with the newfangled, race-track-equipped Bliss. The Sky is one of Norwegian’s oldest ships in service – possibly the oldest depending on how you calculate Spirit’s age. Fortunately for us, our sailing was the second to happen after a dry dock from January 22 to February 7, which meant that a good portion of the ship would be newly refurbished and ready for us to enjoy.

The age and smaller size of the ship did not diminish our enjoyment, and we had a number of “Vacation Hero” experiences where staff and crew went above and beyond to make things stress-free and provide excellent service. It’s a tough decision as to whether this takes the title for “best cruise” for me, since other NCL cruises we’ve taken have their unique high points. If you’re debating Sky, though, assume that any review prior to February 2019 is prior to refurbishment, and give this ship a fair chance. My only regret is that we didn’t have a longer cruise.


Windows 7 – missing desktop icons hotfix

Apparently Microsoft has pulled down the hotfix (KB2642357) needed to disable automatic scheduled-maintenance shortcut deletion (e.g. if you have multiple unused or “broken” desktop shortcuts). This has affected me in an environment where a number of users link to applications, folders or files on network drives.

I’ve republished the x64 version of the hotfix below so you can use it where necessary; after installing it, set the “IsUnusedDesktopIconsTSEnabled” and “IsBrokenShortcutsTSEnabled” DWORDs to 0x0 in the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\ScheduledDiagnostics Registry key to avoid this spurious behaviour that I’d once written off as user error.

Windows6.1-KB2642357-x64.msu (enclosed in ZIP file)

Cruise review: NCL Bliss to Eastern Caribbean, December 2018

As summer started disappearing in Southwestern Ontario, Kayla and I began to pine for another week on the ocean featuring better temperatures. With some finagling of work schedules and liberal use of credit card travel points, we secured an inside cabin on the new Norwegian Bliss for the week before Christmas.

This was our fifth NCL cruise, and the combination of ship and staff made it arguably the best sailing we’ve been on. We’ve gotten into a good position with pre-trip planning and now have a decent handle on Norwegian’s processes and amenities. Bliss is a decent refinement of the Breakaway class, so it was fairly easy to navigate having been on similar ships.


Cruise review: Celebrity Infinity to Alaska, August 2018

After finally publishing the 11,000-word epic that was the NCL Getaway review, I found there was far too much content for friends, well-wishers and Reddit stalkers to tolerate, even when split into seven parts. Perhaps I should have called it – BuzzFeed clickbait style – “7 Weird Things You Must Know About Cruising Or You’ll Fall Off The Ship!” Of course, then I’d have to pad the content with ads and create a quiz to find out which ship best represents you.

This review of our cruise on Celebrity Infinity to Alaska is about half the size, and involves a number of comparisons between Norwegian Cruise Line and Celebrity Cruises (since those are the two lines we’ve sailed on so far.) I’m definitely glad we did the cruise with Celebrity, at the very least so I have a better idea of a premium RCCL product, as well as an understanding of what older, slightly smaller “hardware” has to offer.

I would likely sail Celebrity again if the right opportunity presented itself, but with a few changes based on this experience. My wife Kayla was a bit more negative on Infinity, mainly due to the constant upsell of specialty dining. 


Cruise review: NCL Getaway – February 18, 2018 [Part 7 – At Sea and Return to Miami]

This post is part 7 in a series of 7 about our vacation on the NCL Getaway, from February 18-25, 2018. You can read the other parts here:

Day 7: From buffet to steak

Our last full day was at sea, involving a trip to the buffet for both breakfast and lunch. Again, there was nothing exceptional to point out at either meal, but neither of us had any complaints about the food. We always seem to find quite a few things we like, and the buffet has no shortage of options. It seemed like the bar stock at the Garden Cafe had deteriorated by lunchtime, as there was a much more limited selection of beer available. Other bars didn’t seem to have the same issue throughout the day, but it was a noticeable change upstairs, possibly indicating the impending end of the trip.

In the early afternoon we did a circuit of the Waterfront on deck 8, finding the Sugarcane Mojito Bar to be too windy, and the Sunset Bar to be less of a sunset and more of an oven-like heat and light experience. Kayla went to try and find a seat with a happy medium between the two extremes, while I milled around the Sunset Bar. Another indicator that the cruise was wrapping up was that the bartenders were actively soliciting people to fill out comment cards.


Cruise review: NCL Getaway – February 18, 2018 [Part 6 – Cozumel]

This post is part 6 in a series of 7 about our vacation on the NCL Getaway, from February 18-25, 2018. You can read the other parts here:

Day 6: Fishin’ in Cozumel

One of our more in-depth excursions this trip was to take a fishing charter while in Cozumel. We’d done some research and settled on Cozumel Charters, selecting a 4-hour bottom fishing tour on an economy-class boat good for up to 4 people. We picked the bottom fishing option over deep-sea fishing, again mainly due to online reviews claiming that there was a higher likelihood of catching something. I am pleased to report that the collective knowledge of the Internet did not disappoint and we had a great time.

After submitting our details and 30% deposit by credit card, we got a confirmation email shortly afterward, containing a list of detailed instructions including where to meet the charter, what to bring, keeping the fish (they’re yours) and where to get them cooked if you’d like to eat your catch. There was also a handy PDF acting as confirmation and an invoice. The rest of the payment is made in USD at the port when you get picked up.

Our instructions were to take a taxi to Puerto Abrigo after disembarking the ship. There’s a bit of up the stairs, dodging the shops, and down the stairs to get to the taxi pickup at the port, but the first person who asked us if we needed a cab was in fact a legitimate port representative. The 10-minute ride there cost $10 US plus tip; there is a whole conversion racket and they don’t take credit cards, so you might do better with pesos if you already have them. As of May 2018, apparently the standard rate was $15 US, so I don’t feel like we did too badly.


Cruise review: NCL Getaway – February 18, 2018 [Part 5 – Harvest Caye and Roatan]

This post is part 5 in a series of 7 about our vacation on the NCL Getaway, from February 18-25, 2018. You can read the other parts here:

Day 4: Harvest Caye (vs. Great Stirrup Cay)

Awoken by the rattling of the VoIP/PoE phone across the desk, and aided by the time change of minus one hour, Kayla and I were able to rouse ourselves in enough time for a full-service breakfast at Savor. She selected the Eggs Benedict, and I chose the eggs to order (over easy) with a side of link sausage. It was a fairly standard breakfast offering, but nothing to complain about.


Cruise review: NCL Getaway – February 18, 2018 [Part 4 – Costa Maya]

This post is part 4 in a series of 7 about our vacation on the NCL Getaway, from February 18-25, 2018. You can read the other parts here:

Day 3: A lovely pile of rocks in Costa Maya

The title of this section comes from a TripAdvisor review (filter by 3 star/Average) in which the reviewer is unimpressed with the Chacchoben Mayan ruins, declaring them “a pile of rocks”. I mean, points for calling it like you see it, but they’re historic rocks – what exactly were you expecting?

The docking process this morning seemed unreasonably lengthy and loud, but I’m only an amateur and any loud noises in the morning have been a subject of contention since a very early age.

Before disembarking, we went to the buffet and acquired some food. I’m not typically a breakfast person, but made a good attempt as it wasn’t clear when lunch would be offered on our tour. One noticeable omission from the morning buffet was bananas, which I’d figured would be a standard and highly available breakfast item, but none were to be seen. Of course, I didn’t actually ask anybody, so this could just be chalked up to early-morning grogginess.

Keep in mind that in general, you can’t take food off the ship into the ports lest ye incur the wrath of vessel security and foreign customs officers, so that “apple to go” better be down to the core and ready to be pitched by the time you’re on the lower decks.
