nullroute | hosts | NFS

[Summary: Showing off my setup.]

With some inspiration from Windows and Plan 9, I have Kerberos-authenticated file access available across nearly all my machines. Most of them use NFSv4, with SMBv3 provided for my desktop PC and SMBv1 for a few retro laptops.


On the Linux systems, /net is an autofs map with entries for the machines I want to access; e.g. /net/ember/srv leads to the /srv directory on Ember, my workspace server. Usually the map looks like this, with some variations:

*      -nfsvers=4.2,proto=tcp,soft,softreval,nosuid,nodev,sec=krb5p &:/
ember  -nfsvers=4.2,proto=tcp,soft,softreval,nosuid,nodev,sec=krb5i &:/
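
For reference, the master-map entry that attaches this map at /net might look roughly like this (the map file name and timeout are my guess, not necessarily what I use):

```
# /etc/auto.master – attach the above map at /net
# (the map path /etc/auto.net is hypothetical)
/net  /etc/auto.net  --timeout=120
```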

(This does not use the NFSv4-style /exports; all recent Linux versions are capable of directly exporting individual parts of the filesystem with the server kernel providing a virtual / root as necessary, so it works exactly as in earlier NFS versions.)
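
The server-side counterpart is then just a plain exports entry; a minimal sketch, assuming /srv is the shared directory (the client host pattern and exact option list are illustrative):

```
# /etc/exports – a subdirectory exported directly; no fsid=0 pseudo-root needed
/srv  *.nullroute.test(rw,sec=krb5p:krb5i,no_subtree_check)
```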


As a shortcut, a tiny fuse.slashn filesystem implements automatic links from /n/<host> to /net/<host>/home/grawity; something that could have been achieved using static symlinks but is instead done by dynamically looking up the home directory of the accessing user. This allows me to access /n/myth/Dropbox, for example, or /n/wind/src/systemd, without typing long paths.
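
A static-symlink approximation of what fuse.slashn does dynamically could look like this (the host list and the helper name are made up for illustration; unlike the FUSE version, it cannot adapt to whichever user is accessing the links):

```shell
#!/bin/sh
# make_slashn_links: create <dir>/<host> -> /net/<host>/home/<user> symlinks.
# Static equivalent of fuse.slashn, for a fixed user.
make_slashn_links() {
    dir=$1    # normally /n
    user=$2
    for host in ember myth wind frost; do
        ln -sfn "/net/$host/home/$user" "$dir/$host"
    done
}
```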


The on tool runs a command remotely over SSH but within the local working directory, automatically mapping it to an NFS path. If I'm working on a package and decide that I want to build it on a larger server, I can do that – and then install it on yet another machine – without having to scp all files across:

$ on wind makepkg
$ on frost pacman -U *.pkg.tar.gz
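
In essence, on only needs to translate the local $PWD into its /net equivalent before invoking ssh; a minimal sketch (not the real implementation, function names are mine):

```shell
#!/bin/sh
# net_path: translate a local absolute path into the /net/<this-host> view
# that other machines see over NFS.
net_path() {
    printf '/net/%s%s\n' "$(hostname -s)" "$1"
}

# on: run a command on a remote host, chdir'd into the NFS view of the
# local working directory.
on() {
    host=$1; shift
    ssh -t "$host" "cd '$(net_path "$PWD")' && $*"
}
```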

Invoked as @ or as @<host>, the same tool instead maps the local directory to its remote equivalent, which is mostly useful with ~/Dropbox or other folders that Syncthing replicates (or with those managed by git-annex); instead of downloading a file locally and waiting for it to sync, it's faster to have it downloaded remotely.

(frost ~/Dropbox/memes)
$ @ember yt-dlp

(This is just as frequently used without a command, e.g. to jump from one host's /etc to another.)
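
The @ variant can be sketched the same way, except the path is taken relative to the local home directory, so that it lands in the remote user's own replica of the folder (again a simplified guess at the real tool, with invented helper names):

```shell
#!/bin/sh
# home_rel: strip a home-directory prefix, leaving the home-relative part.
# $1 = home directory, $2 = absolute path under it.
home_rel() {
    printf '%s\n' "${2#"$1"}"
}

# at: run a command (or an interactive shell, if no command is given) on
# the remote host, in the directory corresponding to the local $PWD
# relative to $HOME.
at() {
    host=$1; shift
    ssh -t "$host" "cd \"\$HOME$(home_rel "$HOME" "$PWD")\" && ${*:-\$SHELL}"
}
```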

Finally, invoked as just <host>, it makes a regular SSH connection to the remote home directory, with the small difference that tty allocation is enabled even for direct commands.

frost$ ember
ember> land htop
ember> wind
wind$ star tmux a -t irc

Sometimes this accidentally leads to SSH sessions 5+ deep.


For the Windows desktop (or the retro XP laptop), autofs is not necessary as the OS has built-in support for UNC paths, though I usually have my Dropbox mounted on X:\.
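
Mapping that drive persistently is just the classic net use (the server and share names here are my assumption):

```
net use X: \\myth\Dropbox /persistent:yes
```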

Since I do actually use Windows on a daily basis (i.e. not only as a gaming PC), it has similar PowerShell aliases, such as ember to quickly SSH into hosts, as well as !ember to run things on the remote equivalent of a local Syncthing folder – though, so far, there is no equivalent of on over SMB.



Sun was not the only UNIX vendor to implement a network file system; a considerable number of them are described in various texts from the 1980s, most of them USENIX papers.

One of the more memorable names was The Newcastle Connection (1982) from (of course) the University of Newcastle upon Tyne, with a "super-root" implemented as /.. (one level above the root!) through which one could escape from the local filesystem and reach adjacent hosts; the resulting paths looked like /../foonix/usr/brian/file.c. As the system supported hierarchical grouping of hosts (with UK/NEWCASTLE/DAYSH/REL/U5 given in the paper as an example), one could go even further up to access hosts in a different organizational unit – leading to paths such as /../../REL/U5/usr.

One of the core ideas in the paper describing the Newcastle Connection (subtitled "UNIXes of the World, Unite!") was that it wasn't merely "file sharing": the goal was to join all hosts into a single, transparent system, with the ability to access even machines in other organizations without ever needing to log in again – predating Plan 9 by several years.

Another similar project, FREEDOMNET ("A State-Wide UNIX Distributed Computing System", USENIX 1986 Summer p.499) from RTI, likewise used /..-style paths but also had the interesting property of making program execution remote as well. As per the example in the paper, invoking /../convex1/usr/cad/spice would not merely transfer the binary to the local system – it would actually run on the host convex1 with its stdout transparently going to the local system.

Many other implementations existed: TRFS (1985), which used /@<host>/foo style paths; IBIS (1984) which extended the path syntax to host:/path at kernel level; Locus (1984) which allowed fork() to be remote; AT&T RFS (1986) which gave us EDOTDOT, and can be found in SunOS sources; Masscomp EFS (1985) which seemingly invented "mount points"; and of course Sun NFS.


AFS, the Andrew File System (of the same Andrew project that resulted in MIME and IMAP), is still in use today despite showing its age; it was probably the only major project that actually succeeded in providing a global, unified namespace, namely /afs. Any AFS client host can access any file on any cell using the same /afs/<cell>/<path> syntax, with DNS (or the large, manually maintained CellServDB) resolving the cell name to file server names.

Surprisingly many AFS cells are still up, with many having their user home directories largely open for public access; one can find an MIT student's AFS directory collecting nearly five decades of their studies, employment, and other projects.

Plan 9

Plan 9 was not the first, but certainly one of the more famous examples, as it did make everything a file – /net was the interface to the host's IP stack, for example, so instead of connecting to a remote system (a 'cpu server') via terminal interface or some VPN protocol, one would mount that system's /net locally via 9P.

Plan 9 also had the /n directory where known filesystems would be mounted, often across the network via 9P, such as /n/sources, mounted from the remote host by default – quite the difference from Linux, where having a distro serve package updates via NFS would be seen as a pain in the ass, or Windows, where it would be blocked by half the world's firewalls. (Sysinternals did have something similar, but that was WebDAV.)

Windows, meanwhile, did have a very important feature – UNC paths – built right into the operating system. You don't have to mount an SMB share on Windows; it's done automatically as soon as a \\host\share-like path is accessed. There is no manual automounter setup, nothing. To some extent it is even protocol-agnostic; the same syntax works for SMB, NFS (with Interix installed), NetWare, &c.

(It did of course have the issue that NTLM authentication was not well-suited for use outside a trusted environment – and Windows didn't have a concept of disjoint authentication domains for a long time, leading to it simply sending your credentials to any server that asked – which made it incredibly risky for organizations to permit SMB access across their network boundary. Kerberos would've been fine, but even to this day that's a pain to set up without having a full Active Directory environment.)


At first, all NFSv4 connections used krb5p – Kerberos with data encryption (privacy) – and simply ran across the public Internet. This is supposed to be fairly secure, although I did not have 100% trust in the in-kernel Linux NFS service implementation (as far as things like buffer overflows go; though I believe the recent XDR rewrites have made it much more robust), so the servers had a list of trusted IP addresses to accept connections from – but otherwise the NFSv4 port was even exempt from IPsec between the servers.

Likewise, the SMBv3 connection to \\ember initially went straight over the Internet and used its built-in AES encryption feature – which is secure when coupled with Kerberos (not so much with NTLM). As conditions changed, it eventually became a LAN connection to \\myth, my new home server. (SMBv3 is surprisingly usable across the Internet, as long as latency is reasonably low – the path to Ember at my workplace is typically ~30 ms; the lack of upload throughput on my home Internet connection was the main driver for setting up a file server at home.)

Eventually, however, my home Internet access changed significantly (better performance at the cost of no longer having even a remotely static IP address) so this switched to IPsec and later WireGuard tunnels.


Windows systems are able to use Kerberos with SMB, even without Active Directory on either side; all that's necessary is to store credentials using cmdkey (and to use ksetup to define the Kerberos realm as non-AD). With SMBv3 this leads to security comparable to NFSv4 (although, I believe, with better performance – in reality it should be comparable to NFS-over-TLS).
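
From an elevated prompt, the setup amounts to roughly the following (realm, KDC, and server names are placeholders; cmdkey with a bare /pass prompts for the password):

```
ksetup /addkdc NULLROUTE.TEST kdc.nullroute.test
cmdkey /add:myth.nullroute.test /user:grawity@NULLROUTE.TEST /pass
```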

I also have a few "retro" hosts that run XP or older Windows versions; those cannot speak SMBv3 or anything else securely, so they instead access \\lanman which is a Debian 10 container duplicated across both Ember and Myth (with their ~/Dropbox and other synchronized folders being directly mounted into the respective container).


Unfortunately, our home Internet access hasn't kept up with the increasing demands, so it's unavoidable that my workspace has to be replicated across both locations. At first this was only ~/Dropbox, using (as the name implies) Dropbox to synchronize it, but eventually the service had degraded to the point where it was no longer worth the cost. Now, the machines run Syncthing.

I also rely on git-annex to keep track of large files (e.g. my video or software collections) that wouldn't be practical to fully replicate, but it's a bit of a chore. (Perhaps I should try the built-in assistant app, but it relies far too much on my ability to precisely describe upfront what is wanted where – my typical usage is the complete opposite, with ad-hoc annex get.)