
Nov 23

Crashplan and Self-Help

I’ve recently reviewed my backup plan. It turns out, I’ve been spending too much money.


I’ve been very happy with datastorageunit.com, and in principle, I love it. I love the owner, John Wooten, who gives great support. I love the fact that it’s a small business, which means that you can negotiate—John gave me a bunch of extra free weeks on my trial just so I could get my data uploaded.


The problem is, at $150 US a year (for 300 GB, of which I’m using about 270 GB), it’s getting overpriced. Not to say that it’s expensive: it isn’t. It’s a good deal, and a fraction of what I was paying to store the same data using Jungledisk on the Rackspace storage network. And it’s a DIY kind of solution: you use standard software (like rsync) to connect to it. I initially chose it over Crashplan because Crashplan was only marginally cheaper and forced me to constantly be running their client.


However, in the last 10 months, the cost of Crashplan has dropped. They now offer an unlimited subscription for less than $3 US a month (that’s less than $36 a year) per machine. That can save me a lot of money, so unfortunately, I will allow my datastorageunit plan to expire this coming March. I’ve installed the Crashplan client again.


I am paying for the unlimited plan for one machine. That one machine is my homebrew NAS, from which I will back up everything. With this new unlimited plan, I will not only be able to back up my photos to a remote server, but also all my videos (the ones I make myself).


I will be getting rid of my main documents backup set as well (the one I’ve been synchronizing via Jungledisk to Rackspace). My Rackspace storage is only costing me $4.50 a month right now, but that’s more than the entire Crashplan subscription. I’ve decided to synchronize my main documents folder using Windows Live Mesh, since 90% of my syncing happens on my LAN anyway, and to back up that folder to a folder on my NAS (using a scheduled Beyond Compare script), which will (surprise!) be backed up remotely to Crashplan’s servers. I should be able to bring my Rackspace and Amazon S3 accounts down to minimal amounts (I’ll probably leave my wife’s documents on them for the time being) and pay under $0.40 US per month (since payment is based only on usage), using them only when I need to post something publicly that’s too big for Dropbox or my webhosting company.
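For what it’s worth, the scheduled Beyond Compare job can be driven by a small script file along these lines (the folder paths here are placeholders, not my real ones):

```
load "C:\Users\me\Documents" "\\NAS\backup\Documents"
sync mirror:left->right
```

Windows Task Scheduler can then invoke it with something like `BCompare.exe @mirror-docs.txt /silent`, so the mirror runs unattended.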


One frustrating issue, though, is Crashplan’s support.


After subscribing (I did my 30-day trial back in February, when I was also evaluating datastorageunit), I found my upload speeds very slow: something on the order of 200–300 kbps, which might sound fast, but it meant that my photo backups would take more than three months to upload. After browsing the support forums on Crashplan and trying all the tweaks, I came across several threads suggesting that while this was a widespread problem, it seemed to apply mostly to users assigned to a datacenter in Atlanta. I had already opened a support ticket after doing all the troubleshooting I could think of (still unanswered as of this writing) when I came across a forum thread that suggested resetting the backup machine’s ID.


This meant losing three days of backup, but at the speeds I was getting, that wasn’t a big loss. I did it. I checked my settings. This time, I was assigned a server in another datacenter. I started a backup. 7000 kbps!


The interesting thing is this: Crashplan seems to refuse to acknowledge in the forum that there is a problem. The issue seems widespread, and the fix is to switch datacenters, which suggests the problem is with them and not their users. I do wonder what I’ve gotten myself into by signing up with them, but now that I’ve got decent speeds again (usually 1000–7000 kbps, seemingly limited more by my hardware than by bandwidth on their side), the low cost is encouraging me to continue with them.


However, as I mentioned before, I am starting a backup consulting business, and I am now considering taking Crashplan off the menu for my clients due to this serious issue with support. I’m just waiting to see what happens with my support ticket.

Mar 04

Backups and Data

I am no expert on this at all, but I felt like writing briefly about how I try to keep my data relatively safe.


There’s almost no such thing as a perfect backup strategy, but everyone should have one.


And each backup strategy should have three components:


  • Live data (this is the data where it lives when you work with it)
  • Offsite replica (this is a copy of the live data that is stored somewhere physically removed from the live data; in Japan, that means far enough away not to be destroyed in the same earthquake as the live data and offline replica)
  • Offline replica (this is a copy of the live data that is stored on a device that is only activated when data is being copied; this copy protects against something like a virus or other forms of data corruption)


Here is my solution.


All my data is copied incrementally, once a month, to an external 2TB hard disk, which I then unplug from the system. This is my offline backup.


Documents: Live data is automatically synchronized via Jungledisk software to an encrypted location on a Rackspace server in the U.S. The same program also creates a local copy on each of my three main computers. Documents that require high availability (instant replication across all PCs, like my password file) are synchronized using a free dropbox.com account.


Photos: The live data lives on a 2.5” encrypted USB HDD, which I can bring with me when I’m on the road. I synchronize that data manually (using Beyond Compare) to my Nexenta server. Nexenta is a variant of OpenSolaris that is focused on being a NAS (Network Attached Storage) server. Like OpenSolaris, it uses the ZFS filesystem (yes, I know that’s like saying “the HIV virus”), which, to put it simply, handles large amounts of data very well. At the moment, I also synchronize these files manually to the Rackspace server as my offsite.


However, with 150GB of photos and around 50GB of video, the Rackspace charges ($0.15 USD/GB/month) are starting to get high. So I’ve decided to move my photos to datastorageunit in order to save money. I am currently using 160GB of data on Rackspace, which is costing me about $24 USD a month ($288 USD per year). By contrast, datastorageunit costs $150 USD per year ($12.50 per month) for 300 GB, which means that I can add video backups to that as well.


The advantages of Rackspace (via Jungledisk) are that it is easy to use and sync, and it is encrypted, both in transmission, and on the server.  Which is why I am leaving ~25GB of my live documents there.


Datastorageunit’s philosophy is more homebrew in that the user can (must) decide how to connect and transfer files.  This means that while transmission is encrypted, the remote filesystem is not.  However, there are options available to the user to encrypt that data, though I’ve decided not to in order to keep transfer times lower and avoid a massive headache.  When it comes down to it, though, my photos do not need encryption.
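For reference, one of those options is client-side encryption before upload. A sketch with OpenSSL (filenames and the passphrase file are placeholders; again, I decided against this to keep transfers simple):

```shell
#!/bin/sh
# Illustrative client-side encryption before upload (not what I
# actually do for my photos). The passphrase lives in a local file
# that never leaves the machine, so the remote server only ever
# sees ciphertext.
tar -czf photos.tar.gz photos/
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -pass file:"$HOME/.backup_pass" \
    -in photos.tar.gz -out photos.tar.gz.enc
# photos.tar.gz.enc is what would get rsynced to the remote server.
```

The obvious downsides are exactly the ones I wanted to avoid: every change means re-encrypting and re-uploading a large archive, and losing the passphrase file means losing the backup.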

I will probably never move my main documents folder there, because I need the multiple-machine synchronization features that JungleDisk offers me.  It’s really nice turning on my laptop and having it automatically download all the recent changes to my files.


Once the data has been transferred over, I need to figure out how to automate my rsync job so the data gets mirrored to datastorageunit every night to preserve changes I’ve made throughout the day.


A quick Google search reveals that this may not be as straightforward as I originally thought…