

Dropbox and filesystem compatibility

On the 10th of August, Dropbox announced that their Linux client will "soon" stop working on any filesystem that's not ext4. Their argument for this new limitation (thereby excluding XFS, Btrfs, etc.) was:

One of our requirements is that the Dropbox folder must be located on a filesystem that supports extended attributes.

Dropbox support article

A supported file system is required as Dropbox relies on extended attributes (X-attrs) to identify files in the Dropbox folder and keep them in sync. We will keep supporting only the most common file systems that support X-attrs, so we can ensure stability and a consistent experience.

Dropbox forum moderator

This, of course, is pure bullshit for several reasons. These days nearly all popular Linux filesystems support xattrs and so far Dropbox has worked without any problems on them, littering its com.dropbox.attributes everywhere. (And—as far as I know—Linux xattrs as a feature came along with XFS in the first place, which makes this argument doubly ridiculous.)

Instead, I have reasons to believe that the cause of this new policy was other filesystems' incompatibility with the way Dropbox attempts to encrypt (obfuscate) its on-disk configuration, including the network credentials.

The origins

Dropbox started encrypting its configuration in 2011, after one Derek Newton posted an article on his blog complaining about how someone could, having gained access to your computer, simply make a copy of your config.db and forever have access to your Dropbox that way.

[…] Here’s the problem: the config.db file is completely portable and is *not* tied to the system in any way.

Derek Newton's 2011 blog post

The blog post was rather alarmist, having no concrete suggestions other than Don’t use Dropbox and/or allow your users to use Dropbox, and somewhat implying that other products weren't vulnerable to this and it was all Dropbox's fault. In other words, it had all the right properties to cause public outrage. (Perhaps unusually, HN did not take the bait.)

Soon after that happened, Dropbox pushed out a new version which encrypted the configuration file with a host-specific identifier. This was easy on Windows, as DPAPI already had a function called CryptProtectData() just for this purpose. However, the available methods on Linux were far more limited – no decent account-specific credential storage (libsecret wasn't around until the following year, and is of very limited use outside GNOME even today); no decent host identifier (/etc/machine-id didn't exist, and they probably didn't want to depend on dbus-daemon being present). Fortunately they didn't do something quite as stupid as binding to the MAC address, but the method they ended up choosing turned out to be almost as bad.

The (possible) actual reason

The Dropbox client uses two filesystem properties to generate a per-host encryption key: the inode number of ~/.dropbox/instance1/ and the fsid of the filesystem that the directory resides on. The first part is not too bad (nearly all Linux filesystems have permanent inode numbers) and probably would have been sufficient on its own to avoid simple copy&pasting. The second part is where the trouble lies.
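For illustration, both values are readable from Python's os module. The sketch below is hypothetical – it is not Dropbox's actual code, and the SHA-256 step in particular is my own invention – but it shows the general shape of deriving a key from those two filesystem properties:

```python
import hashlib
import os

def hostkey_from_fsid(path):
    """Derive a per-host key from a directory's inode number and the
    fsid of the filesystem it resides on. Hypothetical sketch of the
    scheme described above, not Dropbox's actual code."""
    inode = os.stat(path).st_ino    # stable on nearly all Linux filesystems
    fsid = os.statvfs(path).f_fsid  # not guaranteed to stay stable
    return hashlib.sha256(f"{inode}:{fsid}".encode()).digest()
```

If either input changes, the derived key changes, and everything encrypted under the old key becomes unreadable.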

Calling statvfs() on a path will give you struct statvfs with various filesystem parameters, mainly disk space currently used and available (this is used by df). One of the remaining fields is f_fsid, described vaguely as "Filesystem ID". As seen in the documentation of statfs(2) (which is the underlying Linux syscall):

The general idea is that f_fsid contains some random stuff such that the pair (f_fsid,ino) uniquely determines a file. Some operating systems use (a variation on) the device number, or the device number combined with the filesystem type. Several operating systems restrict giving out the f_fsid field to the superuser only (and zero it for unprivileged users), because this field is used in the filehandle of the filesystem when NFS-exported, and giving it out is a security concern.

Linux statfs(2) manual

The problem is, nowhere is it specified that this filesystem ID will remain static through the lifetime of that filesystem. An encryption key has to remain fixed, otherwise of course it will fail to decrypt the data – so if the fsid ever changes on its own, all Dropbox configuration suddenly becomes inaccessible and you get prompted to reauthenticate, re-link the computer, reindex and so on.

And that's exactly what happens with, say, XFS. If you check the fsid of an XFS filesystem, you'll notice that it's not a random code as with ext4; instead it's only based on the underlying device node's "major/minor" numbers:

$ stat -f ~/.dropbox/instance1
  File: "/home/grawity/.dropbox/instance1"
    ID: 80300000000 Namelen: 255     Type: xfs
Block size: 4096       Fundamental block size: 4096
Blocks: Total: 242021853  Free: 132274009  Available: 132274009
Inodes: Total: 484083200  Free: 481465381

Aside from the obvious problem of not actually being different between hosts, it simultaneously has the opposite problem of (not necessarily, but often) changing across reboots on the exact same host. For example, when my laptop had its Linux root disk in the CD-tray slot (with the old HDD being internal), the ID was 80400000000. As soon as I switched the disks around, the filesystem ID changed and I had to reconfigure Dropbox anew and wait for it to reindex tons of files.
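You can check the correlation yourself with a few lines of Python (the exact fsid encoding varies by filesystem, but on XFS it visibly tracks the device numbers, while on ext4 it doesn't):

```python
import os

def fs_identity(path):
    """Return the (major, minor) device numbers and the fsid of the
    filesystem containing `path`, for comparison across reboots."""
    st = os.stat(path)
    vfs = os.statvfs(path)
    return os.major(st.st_dev), os.minor(st.st_dev), vfs.f_fsid

# Record this before and after a reboot (or a disk swap) and compare:
print(fs_identity(os.path.expanduser("~")))
```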

(At first I thought it was the database that got corrupted, but no matter what backups I'd restore the client would just quietly delete all configuration upon starting it. Since I really wanted to avoid the long reindex, that's what prompted me to dig deeper into this. However, I'm certainly not the first to discover the mechanism – indeed, I simply got all information from a few GitHub repositories aimed at "Dropbox forensics", one of which even had the actual Dropbox Python code implementing the 'hostkey' storage.)

The fun part

What happened somewhere around June: While waiting for Dropbox to reindex my 80 GB of files, I was browsing the net looking to see whether anyone else had the same problem, and found a Dropbox Forum thread with someone's very similar complaint: every few reboots, all Dropbox settings would simply disappear. When I asked to compare stat -f -c %i outputs across several reboots, the original poster soon confirmed that it was the exact same problem, and went to directly contact the Dropbox support team. Unfortunately, they got a discouraging reply:

I have received a reply from our specialized team,

Unfortunately, this does not meet the minimum requirements for the Dropbox application. OpenSUSE along with the file system is not supported.

Please review our recommended minimum requirements on the following page:

We are of course always looking for user input when creating the next version of the Dropbox app. I will make sure your comments are passed along to our development team.

Dropbox support quoted in a forum post

And pass to their development team they did. Come 9th of August, Dropbox staff announced this:

Hi everyone, on Nov. 7, 2018, we’re ending support for Dropbox syncing to drives with certain uncommon file systems. […]

official post in Dropbox forums

So instead of looking into other ways to protect the credentials, Dropbox chose to just continue relying on undocumented, ext4-specific behavior and hope that if users go away, the problem also goes away. Dickbags.

The alternatives

It would be unfair to just rant about what was done without trying to come up with good alternatives for the future. (On the off chance that Dropbox developers are looking into alternative methods for protecting the hostkeys, perhaps they would welcome suggestions.)

Personally, if I were pressed to implement per-host obfuscation somehow, I would combine the inode number and /etc/machine-id. There just isn't much else to go with.
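A sketch of that alternative (again hypothetical, and assuming /etc/machine-id exists – which it does on any systemd-based distro, and can be created elsewhere with systemd-machine-id-setup or dbus-uuidgen):

```python
import hashlib
import os

def hostkey_from_machine_id(path, machine_id_file="/etc/machine-id"):
    """Hypothetical alternative: derive the per-host key from the
    directory's inode number and the machine ID, both of which survive
    reboots and disk reshuffling (unlike the fsid)."""
    with open(machine_id_file) as f:
        machine_id = f.read().strip()
    inode = os.stat(path).st_ino
    return hashlib.sha256(f"{machine_id}:{inode}".encode()).digest()
```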


If I had a nickel for every time some keyboard cowboy said "just use rsync" or "just use nextcloud", I could make a giant "You are missing the point" sculpture out of the nickels.

Shutting down

Long overdue, I have finally shut down the remaining core infrastructure of Cluenet, namely the website, LDAP directory, and Kerberos KDCs.

I joined Cluenet almost 10 years ago (Aug 2008), and back then it was a popular shell account provider – certainly not the largest, but the only place where people were more interested in system administration than in running IRC bouncers and Eggdrop bots. (I mean, have you seen Kerberos outside a large corporate installation recently?) Besides that, it also had somewhat unusual IRC policies and a signup process that involved a questionnaire/introduction letter of sorts, which I somehow memed my way through.

A few years later, the number of people had dwindled, the original owners had left for greener pastures, and I had somehow inherited root access to the central servers. I think by 2012 I was the only one to run the place?…

I was planning to keep the services around for as long as there was even a single host or person using it, but I'm sure that six years is enough. I mean, it's not a technical burden (indeed the same servers now run my own Nullroute LDAP directory anyway), but just seeing it gets me down every time. And on top of that, I still have no control over the DNS domain – nobody knows if the original owners will renew it next January, or whether it will expire again.

So I deleted 481 user accounts, wiped clean all the server ACLs, old signup essays, Kerberos principals, et cetera. It now exists only as an IRC channel and a couple of tarballs in my backup disks. Nobody's going to miss it.

Checking media [fail]

My Dell laptop has recently started showing “Checking media” on every boot, before reaching the bootloader. At first I thought it had something to do with the EFI system partition, but that's not the case.

This message is shown when the Dell firmware is trying to PXE-boot from the network, for which the first step is of course to verify the Ethernet connection – the 'media'. This can take up to several seconds when there's no cable, and Dell does it once for the IPv4-capable loader, and again for the IPv6-capable one.

So its presence means the UEFI boot order got reshuffled and somehow the built-in “Onboard LAN IPv4” and “Onboard LAN IPv6” entries have the highest priority. (Usually the boot order on Dell UEFI systems looks like this: custom OS entry; built-in “PXE” entries; built-in fallback “HDD” entries; built-in “BIOS mode” (CSM) entries.)

What it also very likely means is that the custom OS-prepared boot entry has disappeared completely, and the computer only boots thanks to the fallback \EFI\BOOT\BOOTX64.EFI loader that happens to be present.

Fortunately this is easy to repair. On my Arch Linux system (which uses systemd-boot), the convenient way is “bootctl install”. Windows 10 systems can be repaired by running “bcdboot c:\windows”. The rest – by very carefully using efibootmgr.

(Alternatively, of course, the fallback “HDD1-1” boot entry could be moved to the front, before PXE, but that's just lazy.)


More from the series of “grawity is hopelessly obsessed with networks”, and because this journal is looking a bit sad & empty at the moment, here's what I've been into recently.

A while ago (two years ago) I signed up on the dn42 network to play around with routing and BGP for a bit. It's a large overlay network that simulates the Internet – in the sense that participants set up their own “autonomous systems”, create WHOIS entries, and set up BGP peerings; although the nodes are connected using just about anything except physical links (IPsec, GRE, IPsec/GRE, OpenVPN, Wireguard, L2TP)... There are also seriously flaky links to parts of freifunk and ChaosVPN.

The second network is the Internet itself – after switching IPv6 tunnels many times, I have obtained an AS number and an IPv6 prefix of my own. (Technically it's still a provider-aggregated prefix but that doesn't stop me from announcing it, primarily via Tunnelbroker and NetAssist.) So now I'm building my own IPv6 tunnels with GRE, OSPF, ZeroTier, and wet string.

(Actually the first time I joined dn42 was three years ago – it seems I have a habit of picking something up, forgetting it after a month or two, and a year later picking it up again for reals. Which, incidentally, is also what happened with the LISP beta network.)

Finally, I'm experimenting with LISP – the “Locator/Identifier Separation Protocol”, a relatively new protocol and one of the proposals brought to the IETF for improving the scalability of the global routing table. (It turns out that allowing every nerd to announce their own /48 over shitty ADSL doesn't work all that well.) The current LISP Beta Network is similar in purpose to e.g. the 6BONE network; it's built by several big-name companies, but anyone is allowed to join and obtain a range of EIDs for themselves.

(Interestingly, the IPv4 EIDs provided by LISPnet belong to “Usenix/UUNET Technology Inc.” according to ARIN WHOIS lookup – which is not trivial to perform, as IANA's WHOIS server thinks the addresses belong to APNIC, and APNIC thinks you should ask IANA. The IPv6 EIDs, meanwhile, are just an ordinary Cisco netblock.)

I bought another 30-year-old book about computer networks. This one is called The Matrix: Computer Networks and Conferencing Systems Worldwide – released in 1989 by Digital Equipment Corporation, and yes, the first page out of ~700 refers to “the Matrix, a worldwide metanetwork”. What more could you want from life. Oh yeah, a soundtrack.