I've recently become interested in setting up a minimal home server using some old hardware I have. These are some notes documenting the process.
I actually don't have a great reason (excuse) yet; right now this is a proof of concept. Also, why not? I already own the hardware and the energy costs are minimal.
I am using an old netbook, a Dell Inspiron Mini that has been gathering dust in a closet. Under-powered by today's standards for personal computers, it isn't on par with even the smaller VPSs I typically use.
CPU : Intel(R) Atom(TM) CPU N270 @ 1.60GHz
L2 cache : 512KB
RAM : 1GB
But that doesn't mean it isn't useful: basic file-sharing within the network and some simple automation are well within its capabilities.
I've wiped the old installations and taken it down to a minimal install of Debian, then manually configured the Broadcom Wi-Fi (which probably deserves its own post for the amount of searching it took). The process is a pain, but once you identify the correct non-free repositories you can use apt to install the right drivers and configure wpa_supplicant so that systemd joins the network on boot. As a note to myself in the future:
Broadcom Corporation BCM4312
deb http://httpredir.debian.org/debian jessie main contrib non-free
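For future-me, the rough shape of those steps is sketched below. It is one way to wire things together rather than a record of exact commands: the driver package depends on the chip revision (firmware-b43-installer, or broadcom-sta-dkms for the wl driver), it assumes the wpa_supplicant@.service unit shipped with the Debian package plus systemd-networkd for DHCP, and the interface name wlan0, SSID and passphrase are placeholders.

# enable the non-free line noted above, then pull in the driver and tools
apt-get update
apt-get install firmware-b43-installer wpasupplicant   # or broadcom-sta-dkms, depending on revision

# write the credentials where wpa_supplicant@wlan0.service expects them
wpa_passphrase "my-ssid" "my-passphrase" > /etc/wpa_supplicant/wpa_supplicant-wlan0.conf
systemctl enable wpa_supplicant@wlan0.service

# a minimal systemd-networkd stanza to pick up a DHCP lease on that interface
cat > /etc/systemd/network/wlan0.network <<'EOF'
[Match]
Name=wlan0

[Network]
DHCP=yes
EOF
systemctl enable systemd-networkd.service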
The running joke of "wireless on Linux" is actually not such a sorry state of affairs. The real problem I found in configuring old hardware for wireless was that all of the information is circa 2010 (when these netbooks were relevant) and contained in forum and blog posts that are succumbing to link-rot. Even the inherent differences introduced by systemd are reasonably well documented.
The next step for me was figuring out how to expose a home computer to the greater internet. It turns out this is a simple process, made annoying only by the sub-par router software provided by the ISP. The hardware in my case is an AT&T 2Wire series router.
Once you navigate past a confusing series of logins and settings, there is a firewall configuration per-device. The server is automatically registered by hostname and it was a simple matter to add a single port exception (for SSH access in this case).
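Confirming the exception took effect is easiest from a machine outside the network (a VPS or a phone hotspot will do); the address below is just a stand-in for the public IP.

# run from outside the home network; 203.0.113.10 stands in for the public IP
nc -vz 203.0.113.10 22        # "succeeded" means the forward is working
ssh user@203.0.113.10         # or simply attempt the login directly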
Reaching the newly opened port is possible[1] directly via the IP address, but in my case my home has only a dynamic IP address, subject to change at boot-time. The domain registrar I use provides dynamic DNS as a free service, provided you use their name servers. To configure this I just had to opt into the service and create an A record pointing the subdomain at the desired IP address.
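Verifying the record is a one-liner with dig; "home" here stands in for whatever subdomain is actually configured.

dig +short home.nprescott.com             # what the local resolver returns
dig +short home.nprescott.com @8.8.8.8    # and a public resolver, once propagated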
The standard client for dynamic DNS on Linux seems to be ddclient, but I was pleasantly surprised to find Namecheap provides a single-URL interface to updating an IP address using an auth token. The result is that I run the following command from a crontab and don't have to install a separate client:
0 * * * * wget -qO- "https://dynamicdns.park-your-domain.com/update?host=[subdomain]&domain=nprescott.com&password=[token]"
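One refinement I may adopt is only hitting the update URL when the address has actually changed, so the job isn't pestering Namecheap every hour. A sketch of what that might look like (the ipify endpoint and cache file are arbitrary choices of mine):

#!/bin/sh
# update-dns.sh: call the Namecheap update URL only when the public IP changes
CACHE=/var/tmp/last-ip
CURRENT=$(wget -qO- https://api.ipify.org)

if [ -n "$CURRENT" ] && [ "$CURRENT" != "$(cat "$CACHE" 2>/dev/null)" ]; then
    wget -qO- "https://dynamicdns.park-your-domain.com/update?host=[subdomain]&domain=nprescott.com&password=[token]"
    echo "$CURRENT" > "$CACHE"
fi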
The one downside I have found in using Namecheap's nameservers, as compared to afraid.org, is the latency in propagating changes. Afraid.org was nearly instant; Namecheap can take 45 minutes.
I have done my best so far to harden the server against outside access: a firewall (ufw), automated banning of repeated failed logins (fail2ban), and restricted ssh remote logins. Only time will tell whether this is sufficient; I think I may hold off on configuring a webserver in the meantime.
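For reference, those pieces amount to a handful of commands and a couple of sshd_config lines. This is a sketch of the general shape rather than my exact configuration:

# firewall: deny inbound by default, allow only ssh
apt-get install ufw fail2ban
ufw default deny incoming
ufw default allow outgoing
ufw allow ssh
ufw enable

# fail2ban: the Debian package ships an ssh jail watching auth.log;
# local tweaks (ban time, retries) belong in /etc/fail2ban/jail.local

# ssh: restrict remote logins in /etc/ssh/sshd_config
#   PermitRootLogin no
#   PasswordAuthentication no
systemctl restart ssh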
[1] At least from outside the home network; from within it you will need an entry in /etc/hosts or to run your own domain name server to circumvent this. As I understand it, it is a failing of the router to identify loopbacks to within its own network after fetching from a remote DNS.
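The /etc/hosts workaround is a single line on each client inside the network, pointing the name at the server's LAN address (both the address and subdomain here are placeholders):

# /etc/hosts on machines inside the home network
192.168.1.50    home.nprescott.com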