Elsewhere

Joey Hess: Kite: a server's tale

Planet Debian - Thu, 10/04/2014 - 17:17

My server, Kite, is finishing its 20th year online.

It started as kite.resnet.cornell.edu, a 486 under the desk in my dorm room. Early on, it bounced around the DNS -- kite.ithaca.ny.us, kite.ml.org, kite.preferred.com -- before landing on kite.kitenet.net. The hardware has changed too: from a succession of desktop machines, it eventually turned into a 2u rack-mount server in the CCCP co-op. And then it went virtual, and international, spending a brief time in Amsterdam before relocating to England and the kvm-hosting co-op.

Through all this change, and no few reinstalls from scratch, it's had a single distinct personality. This is a multi-user unix system, of the old school, carefully (and not-so-carefully) configured and administered to perform a grab-bag of functions. Whatever the users need.

I read the olduse.net hacknews newsgroup, and I see, in their descriptions of their server in 1984, the prototype of Kite and all its ilk.

It's consistently had a small group of users, a small subset of my family and friends. Not quite big enough to really turn into a community, and we wall and talk less than we once did.

Exhibit: Kite as it appeared in the 90's

[Intentionally partially broken, being able to read the cgi source code is half the fun.]

Kite was an early server on the WWW, and garnered mention in books and print articles. Not because it did anything important, but because there were few enough interesting web sites that it slightly stood out.

Many times over these 20 years I've wondered what will be the end of Kite's story. It seemed like I would either keep running it indefinitely, or perhaps lose interest. (Or funding -- it's eaten a lot of cash over the years, especially before the current days of $5/month VPS hosting.) But I failed to anticipate what seems to really be happening to it. Just as I didn't fathom, when Kite was perched under my desk, that it would one day be some virtual abstract machine in an unknown computer in another country.

Now it seems that what will happen to Kite is that most of the important parts of it will split off into a constellation of specialized servers. The website, including the user sites, has mostly moved to branchable.com. The DNS server, git server and other crucial stuff is moving to various VPS instances and containers. (The exhibit above is just one more automatically deployed, soulless container.) A large part of Kite has always been about me playing with bleeding-edge stuff and installing random new toys; that has moved to a throwaway personal server at cloudatcost.com, which might be gone tomorrow (or might keep running for free for years).

What it seems will be left is a shell box, with IMAP access to a mail server, and a web server for legacy /~user/ sites, and a few tools that my users need (including that pine program some of them are still stuck on.)

Will it be worth calling that Kite?

[ Kite users: This transition needs to be done by December when the current host is scheduled to be retired. ]

Categories: Elsewhere

Acquia: DrupalCon Training with the Acquia team! UX and Security

Planet Drupal - Thu, 10/04/2014 - 16:02

The Acquia team is getting ready for DrupalCon Austin. We're excited about the official announcement of training at DrupalCon, which is held on the Monday before the conference, June 2. We have two courses on offer which we think you'll love. One will get you started evaluating your designs with users and conducting usability testing; the other will help ensure your site or application is secure, which is increasingly important for sites of all sizes.

Categories: Elsewhere

Craig Small: WordPress update needed for stable too

Planet Debian - Thu, 10/04/2014 - 15:10

Yesterday I mentioned that WordPress had an important security update to 3.8.2. The particular security bugs also impact the stable Debian version of wordpress, so those patches have been backported. I've uploaded the changes to the security team, so hopefully there will be a new package soon.

The version you are looking for will be 3.6.1+dfsg-1~deb7u2 and will be on the Debian security mirrors.
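Once it reaches the mirrors, picking it up is the usual routine; a minimal sketch for a Wheezy machine that already has the security archive in its sources.list:

apt-get update
apt-get install wordpress
dpkg -s wordpress | grep '^Version:'    # should then show 3.6.1+dfsg-1~deb7u2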

Categories: Elsewhere

Code Karate: Responsive Navigation Module

Planet Drupal - Thu, 10/04/2014 - 14:22
Episode Number: 142

In this episode of the Daily Dose of Drupal we go over the Responsive Navigation module. This module can be used to make your Drupal menu "responsive" so that it displays nicely on mobile and tablet devices. If you are trying to build a responsive Drupal website or a responsive Drupal theme, this module can help.

Tags: Drupal, Contrib, Drupal 7, Drupal Planet, UI/Design, Responsive Design
Categories: Elsewhere

Blair Wadman: Top 10 Drush commands - follow up

Planet Drupal - Thu, 10/04/2014 - 13:54

I recently posted my top 10 Drush commands. In the blog post comments, over email, and on Twitter, a bunch of people let me know their top commands and tips. This was such great feedback that I decided to write up the list as another post. Here are the top 10 Drush commands from the fine folk who contributed.

Tags: Drush, Planet Drupal
Categories: Elsewhere

Michal Čihař: Heartbleed fun

Planet Debian - Thu, 10/04/2014 - 10:00

You probably know about the Heartbleed bug in OpenSSL, as it is so widespread that it made the mainstream media as well. As I'm running Debian Wheezy on my servers, they were affected too.

The updated OpenSSL library was installed immediately after it was released, but there was still the possibility that somebody had got private data from the server before that (especially as the vulnerability existed for quite some time). So I've revoked and reissued all SSL certificates. This has the nice benefit that they now use a SHA-256 intermediate CA, compared to the SHA-1 one used on some of them before.
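For reference, reissuing with a fresh key boils down to a new key and CSR per certificate; a minimal sketch with placeholder file names and CN (the submission step depends on your CA):

# generate a brand new private key, since the old one may have leaked
openssl genrsa -out example.com.key 2048
# create a SHA-256 signed CSR to hand to the CA for the reissue
openssl req -new -sha256 -key example.com.key \
  -subj '/CN=example.com' -out example.com.csr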

Though there is no way to figure out whether there was an information leak or not, I have decided to reset all OAuth access tokens (e.g. GitHub), so if you have used GitHub login for Weblate, you will have to reauthenticate.

Filed under: English, phpMyAdmin, Weblate

Categories: Elsewhere

Andrew Pollock: [life] Day 72: The Workshops, and zip lining into a pool

Planet Debian - Thu, 10/04/2014 - 06:38

Today was jam packed, from the time Zoe got dropped off to the time she was picked up again.

I woke up early to go to my yoga class. It had moved from 6:15am to 6:00am, but was closer to home. I woke up a bunch of times overnight because I wanted to make sure I got up a little bit earlier (even though I had an alarm set), so I was a bit tired.

Sarah dropped Zoe off, and we quickly inspected our plaster fish from yesterday. Because the plaster had gotten fairly thick, it didn't end up filling the molds completely, so the fish weren't smooth. Zoe was thrilled with them nonetheless, and wanted to draw all over them.

After that, we jumped in the car to head out to The Workshops Rail Museum. We were meeting Megan there.

We arrived slightly after opening time. I bought an annual membership last time we were there, and I'm glad we did. The place is pretty good. It's all indoors, and it's only lightly patronised, even for school holidays, so it was nice and quiet.

Megan and her Dad and sister arrived about an hour later, which was good, because it gave Zoe and me a bit of time to ourselves. We had plenty of time on the diesel engine simulator without anyone else breathing down our necks wanting a turn.

The girls all had a good time. We lost Megan and Zoe for a little bit when they decided to take off and look at some trains on their own. Jason and I were frantically searching the place before I found them.

There was a puppet show at 11am, and the room it was in was packed, so we plonked all three kids down on the floor near the stage, and waited outside. That was really nice, because the kids were all totally engrossed, and didn't miss us at all.

After lunch and a miniature train ride we headed home. Surprisingly, Zoe didn't nap on the way home.

Jason was house sitting for some of his neighbours down the street, and he'd invited us to come over and use their pool, so we went around there once we got back home. The house was great. They also had a couple of chickens.

The pool was really well set up. It had a zip line that ran the length of the pool. Zoe was keen to give it a try, and she did really well, hanging on all the way. They also had a little plastic fort with a slippery slide that could be placed at the end of the pool, and the girls had a great time sliding into the pool that way.

We got back home from all of that fun and games about 15 minutes before Sarah arrived to pick Zoe up, so it was a really non-stop day.

Categories: Elsewhere

Christine Spang: a tutorial about search

Planet Debian - Thu, 10/04/2014 - 05:46

Today I gave a tutorial at PyCon 2014 entitled Search 101: An Introduction to Information Retrieval.

It was an experiment of sorts: the first workshop I've run primarily by myself, my first tutorial at PyCon, my first paid teaching gig. It was an opportunity to take some of the lessons I learned from teaching the Boston Python Workshop and apply them to a new situation.

The material itself is a distillation of many hours of frustration with the documentation for various open source search engine libraries: frustration that they didn't tell me where to start or give me the big picture, but jumped straight into the details.

Here's what worked:

  • IPython Notebook. Oh em gee. I started writing the class's handout using IPython Notebook because it was a simple way to easily embed syntax-highlighted code into a markdown document that was viewable in a browser. Not only was it a super quick and fun way to write the handout, but many students used the interactive execution features to play around with the example code.
  • Not having a paper handout. Saved trees, printing hassle, and no one seemed to mind.
  • Putting everything in a git repo... git is sufficiently ubiquitous these days that students didn't really have trouble getting a copy, and appreciated having everything in one place, with simple setup instructions. I brought a clone of the repo on a USB stick as a backup plan.

Here's what caused problems:

  • Mostly, the IPython dependency pyzmq, which requires compilation. I don't know what the current landscape is for Python distribution, but installing these libraries through pip is still a pain. I've heard rumour that more ubiquitous wheels may solve this in the future.
  • Some people aren't used to using virtualenv everywhere. Even so, I still think it's worth the confusion to put it forth as the recommended setup method (see the sketch below).
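For completeness, a minimal sketch of that recommended setup (the package list is an assumption based on the tools mentioned above):

# create and activate an isolated environment, then install the tutorial deps
virtualenv search-tutorial
. search-tutorial/bin/activate
pip install 'ipython[notebook]'    # pulls in pyzmq, the compiled dependency discussed above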

Intermediate students are a different crowd than beginners. There was less of an air of discovery in the room, though I organized the class around open-ended tasks. Since the material allowed for folks to take it in the direction of their interest, I found it a bit difficult to gauge whether people were following or not. Overall though, everyone was attentive and studious. I had fun.

Ruben and Stuart, the PyCon tutorial organizers, had logistics running super smoothly, AV, lunch, everything. Thanks for that you guys, you rock. And thanks as well to my helpers: Leo, the tutorial host, Eben, my TA, and Roberto, on AV. It's impossible to pay adequate attention to 20+ people as a single person, couldn't have done a decent job without y'all.

Categories: Elsewhere

Steinar H. Gunderson: Movit 1.1 released

Planet Debian - Thu, 10/04/2014 - 01:15

I just released version 1.1 of Movit, my GPU-based video filter library. This is basically for two things: a bunch of accumulated small fixes and tweaks, and support for GLES 3.0 (think mobile).

So, what now? Well, perhaps unsurprisingly, releasing a library does not bring an army of interested developers to your door, so as a library writer, most of my time actually goes into projects further up in the hierarchy. In particular, when you start imposing unreasonable demands such as “working OpenGL” onto end users who like to use Gentoo but don't know how to install a package, there is some fallout.

However, it also exposes you to a lot of scenarios you never really thought about, which can be frustrating, but in the end also increases the quality and robustness of your code. In particular, I know there's some issue (probably in Kdenlive's Movit support and not Movit itself, though) where NVIDIA's OpenGL drivers are much stricter than Mesa's with regards to multithreading, and it's damn near impossible to track down without having one in a desktop machine myself. (I have one in my HTPC, but it's Atom-based and only has the TV for a monitor, so debugging there is something I'd rather not do.)

So, what's next? The answer is pretty simple: probably a break. I have to go travelling now (vacation and work) for the next month or so, so I fear Movit will get less attention for a little while. Then again, it's in fairly good shape, so I'm not that worried that the world will be screaming for me when I come back. :-)

Categories: Elsewhere

Drupal Association News: Building the Future of Drupal

Planet Drupal - Thu, 10/04/2014 - 00:18

If Drupal adoption is going to increase, we'll need to grow the community, and that means continuing to bring developers, web designers, and digital experts into the Drupal fold. For the finale of our series on Drupal training options, we spoke to several of the many experts in Drupal training, and wanted to share their thoughts with the community.

When it comes to increasing the amount of Drupal talent in the market, there are more options to learn the platform than ever before.

Categories: Elsewhere

Thorsten Glaser: Heartbleed vs. Startcom / StartSSL

Planet Debian - Wed, 09/04/2014 - 21:51

First of all, good news, MirBSD is not vulnerable to The Heartbleed Bug due to my deliberate choice to stick to an older OpenSSL version. My inquiry (in various places) as to what precisely could leak when a vulnerable client connected to a nōn-vulnerable server has yet to be answered, though we can assume private key material is safe.

Now the bad news: while the CA I use¹ and a CA I don’t use offer free rekeying (in general), a CA I also use occasionally² refuses to do that. The ugly: they will not even revoke the certificates, so any attacker who gained your key, for example when you have been using a certificate of theirs on a Debian system, will be able to use it (e.g. to MITM your visitors’ traffic) unless you shell out lots of unreasonable money per certificate. (Someone wrote they got the fee waived, but others don’t, nor do I.) (There’s also a great Twitter discussion-thingy about this involving Zugschlus, but I won’t link Twitter because they are not accessible to Lynx users like me and other Planet Debian authors.)

① I’ve been using GoDaddy privately for a while, paid for a wildcard certificate for *.mirbsd.org, and later also at work. I’ve stopped using it privately due to current lack of money.

② Occasionally, for nōn-wildcard gratis SSL certificates for HTTP servers. Startcom’s StartSSL certificates are unusable for real SSL as used in SMTP STARTTLS anyway, so usage isn’t much.

Now I’ve got a dilemma here. I’ve created a CA myself, to use with MirBSD infrastructure and things like that – X.509 certificates for my hosts (especially so I can use them for SMTP) and possibly personal friends (whose PGP key I’ve signed with maximum trust after the usual verification) but am using a StartSSL certificate for www.mirbsd.org as my GoDaddy wildcard certificate expires in a week or so (due to the aforementioned monetary issues), and I’d rather not pay for a limited certificate only supporting a single vhost. There is absolutely no issue with that certificate and key (only ever generated and used on MirBSD, only using it in Apache mod_ssl). Then, there’s this soon-to-be tax-exempt non-profit society of public utility I’m working with, whose server runs Debian, and which is affected, but has been using a StartSSL certificate for a while. Neither the society nor I can afford to pay for revocation, and we do not see any possible justification for this especially in the face of CVE-2014-0160. I expect a rekey keeping the current validity end date, and would accept a revocation even if I were unable to get a new certificate, since even were we to get a certificate for the society’s domain from someplace else, an attacker could still MITM us with the previous one from Startcom.

The problem here is: I’d really love to see (all of!) Startcom dropped from the global list of trustworthy CAs, but then I’d not know from where to get a cert for MirBSD; Globalsign is not an option because I will not limit SSL compatibility to a level needed to pass their “quality” test… possibly GoDaddy, ISTR they offer a free year to Open Source projects… no idea about one for the society… but it would solve the problem of not getting the certificates revoked. For everyone.

I am giving Startcom time until Friday after $dayjob (for me); after that, I’ll be kicking them off MirBSD’s CA bundle and will be lobbying for Debian and Mozilla to do the same.
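In the meantime, anyone on Debian can locally distrust them without waiting for the distribution; a hedged sketch (the exact StartCom entry names in /etc/ca-certificates.conf may differ, so check yours first):

# a '!' prefix in /etc/ca-certificates.conf deselects a certificate
sed -i 's|^mozilla/StartCom|!mozilla/StartCom|' /etc/ca-certificates.conf
update-ca-certificates    # rebuilds /etc/ssl/certs without the distrusted CA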

Any other ideas of how to deal with that? I’d probably pay 5 € for a usable certificate accepted by people (including old systems, such as MSIE 5.0 on Win2k and the likes) without questioning… most of the time, I only serve public content anyway and just use SSL to make the NSA’s job more difficult (and even when not I’m not dealing with any payment information, just the occasional login protected area).

By the way, is there any way to access the information that is behind a current-day link to groups.google.com with Lynx or Pine? I can’t help but praise GMane for their NNTP interface.

ObFunfact: just when I was finished writing this wlog entry, I got a new eMail “Special offer just for you.” from GoDaddy. Sadly, no offer for a 5 € SSL certificate, just the usual 20-35% off coupon code.

Categories: Elsewhere

Lucas Nussbaum: speedtest.net, or how not to do bandwidth tests

Planet Debian - Wed, 09/04/2014 - 21:23

While trying to debug a bandwidth problem on a 3G connection, I tried speedtest.net, which ranks fairly high when one searches for “bandwidth test” on various search engines. I was getting very strange results, so I started wondering if my ISP might be bandwidth-throttling all traffic except that from speedtest.net tests. After all, it's a 3G network, and another French 3G ISP (SFR) apparently uses Citrix ByteMobile to optimize the QoE by minifying HTML pages and recompressing images on the fly (amongst other things).

So, I fired up wireshark and discovered that no, it's just speedtest being a bit naive. Speedtest uses its own text-based protocol on port 8080. Here is an excerpt of a download speed test:

> HI
< HELLO 2.1 2013-08-14.01
> DOWNLOAD 1000000
< DOWNLOAD JABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFG

Yeah, right: sequences of “ABCDEFGHIJ”. Of course, that's extremely easy to compress, which apparently happens transparently on 3G (or is it PPP? I tried to disable PPP compression, but did not see any change).
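It is easy to see locally just how compressible that payload is; a quick sketch (gzip standing in for whatever compression the link applies):

yes ABCDEFGHIJ | head -c 1000000 | gzip -c | wc -c    # the test pattern shrinks to a few kB
head -c 1000000 /dev/urandom | gzip -c | wc -c        # random data stays around 1 MB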

It’s funny how digging into problems that look promising at first sight often results in big disappointments :-(

Categories: Elsewhere

Acquia: Sensio Labs UK - Lessons and chances from Drupal 8 early adoption

Planet Drupal - Wed, 09/04/2014 - 20:02

Part 2 of 2 - I spoke with Richard Miller and Tom Kitchin, software engineers at SensioLabs UK and its parent company Inviqa respectively, via a Google Hangout on Air recently. Here, I learn the inside story on one of the first Drupal 8 sites online, www.sensiolabs.co.uk, what their goals were, how they built it and have kept it running since May 2013, and how Drupal 8 will change the way they design applications for clients going forward.

Categories: Elsewhere

Drupal core announcements: Help unblock Drupal 8.0-beta1 at the NYC Camp Drupal 8 sprint

Planet Drupal - Wed, 09/04/2014 - 18:42
Start: 2014-04-10 (All day) to 2014-04-12 (All day), America/New_York
Sprint Organizers: xjm
Event URL: http://www.nyccamp.org/event/d8-core-sprint

After three years of Drupal 8 development, we are finally closing in on a Drupal 8.0-beta1 release. Of about 150 critical issues that have blocked the first Drupal 8 beta release, only 32 beta blockers remain. Most of these remaining issues are too complex for any one developer to resolve alone, but we need help on numerous tasks that will accelerate them. Join us at the NYC Camp D8 Core Sprint to see firsthand the work that's in progress and contribute to our momentum. Look for the "IRL issue queue" on colored construction paper at the sprint. :)

(New to Drupal 8 or core contribution? Check out the Get Involved with Core sprint instead.)

Categories: Elsewhere

Ben's SEO Blog: Blue Drop Awards [Infographic]

Planet Drupal - Wed, 09/04/2014 - 17:44

We'd like to share this infographic we've made depicting interesting facts about the Blue Drop Awards. Without the community's wonderful support, the Blue Drop Awards simply would not and could not exist; we appreciate it.

Celebrating Drupal Innovation
Tags: blue drop award, drupal, Planet Drupal
Categories: Elsewhere

Open Source Training: Add Links to Fields in Views

Planet Drupal - Wed, 09/04/2014 - 15:44

This blog post is the answer to a common request we get from people learning how to use Views.

The question is: "How do I automatically add a link to a field"?

The answer is straightforward ... once you know how.

Categories: Elsewhere

Acquia: Live tutorial on using Bootstrap with Drupal today!

Planet Drupal - Wed, 09/04/2014 - 15:38

Our old training site was looking a bit long in the tooth. It was not only still on Drupal 6, but also had an old Acquia design several versions behind the current main site. It was time for a major update.

Step by step tutorials

Dave Myburgh, Lead Developer for Acquia.com, recently gave two webinars about the experience. He shares specific tips on what modules he used to keep the development lightweight and flexible.

Categories: Elsewhere

Craig Small: Important WordPress update

Planet Debian - Wed, 09/04/2014 - 15:08

WordPress 3.8.2, which contains some important security fixes, was released yesterday. This is an important security release, and the Debian packages were uploaded to ftp-master a few minutes ago.

Besides fixing Debian Bug #744018, the release fixes the following two vulnerabilities (as mentioned in the bug report):

  • CVE-2014-0165 WordPress privilege escalation: prevent contributors from publishing posts
  • CVE-2014-0166 WordPress potential authentication cookie forgery

If you use the Debian package, I recommend upgrading as soon as it is available.

 

Categories: Elsewhere

Julian Andres Klode: ThinkPad X230 UEFI broken by setting a setting

Planet Debian - Wed, 09/04/2014 - 14:57

Today, I decided to set my X230 back to UEFI-only boot, after having changed that for a BIOS upgrade recently (to fix a resume bug). I then chose to save the settings and received several error messages telling me that the system had run out of resources (probably storage space for UEFI variables).

I rebooted my machine, and saw no logo appearing. Just something like an underscore on a text console. The system appears to boot normally otherwise, and once the i915 module is loaded (and we’re switching away from UEFI’s Graphical Output Protocol [GOP]) the screen works correctly.

So it seems the GOP broke.
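If it really is exhausted UEFI variable storage, Linux can at least show what is stored there; a hedged sketch assuming efivarfs is mounted at the usual path:

# count the variables and estimate the space they occupy
ls /sys/firmware/efi/efivars | wc -l
du -sh /sys/firmware/efi/efivars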

What should I do next?


Filed under: General
Categories: Elsewhere

Petter Reinholdtsen: S3QL, a locally mounted cloud file system - nice free software

Planet Debian - Wed, 09/04/2014 - 11:30

For a while now, I have been looking for a sensible offsite backup solution for use at home. My requirements are simple: it must be cheap and locally encrypted (in other words, I keep the encryption keys, and the storage provider does not have access to my private files). One idea my friends and I had many years ago, before the cloud storage providers showed up, was to use Google mail as storage: write a Linux block device storing blocks as emails in the mail service provided by Google, and thus get heaps of free space. On top of this one could add encryption, RAID and volume management to have lots of (fairly slow, I admit that) cheap and encrypted storage. But I never found time to implement such a system. In the last few weeks, though, I have looked at a system called S3QL, a locally mounted network-backed file system with the features I need.

S3QL is a FUSE file system with a local cache and cloud storage, handling several different storage providers: anything with an Amazon S3, Google Drive or OpenStack API. There are heaps of such storage providers. S3QL can also use a local directory as storage, which combined with sshfs allows for file storage on any ssh server. S3QL includes support for encryption, compression, de-duplication, snapshots and immutable file systems, allowing me to mount the remote storage as a local mount point and use the files as if they were local, while the content is stored in the cloud as well. This allows me to have a backup that should survive a fire. The file system can not be shared between several machines at the same time, as only one can mount it at a time, but any machine with the encryption key and access to the storage service can mount it if it is unmounted.
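As an aside, the sshfs combination mentioned above is straightforward; a hedged sketch with placeholder host and paths (local:// is S3QL's local-directory backend):

# expose a directory on any ssh server as local storage for S3QL
sshfs backup@server:/srv/s3ql-data /mnt/sshfs
mkdir -p /mnt/sshfs/bucket
mkfs.s3ql local:///mnt/sshfs/bucket
mount.s3ql local:///mnt/sshfs/bucket /s3ql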

It is simple to use. I'm using it on Debian Wheezy, where the package is already included. So to get started, run apt-get install s3ql. Next, pick a storage provider. I ended up picking Greenqloud, after reading their nice recipe on how to use s3ql with their Amazon S3 service, because I trust the laws in Iceland more than those in the USA when it comes to keeping my personal data safe and private, and thus would rather spend money on a company in Iceland. Another nice recipe is available in the article S3QL Filesystem for HPC Storage by Jeff Layton in the HPC section of Admin magazine. When the provider is picked, figure out how to get the API key needed to connect to the storage API. With Greenqloud, the key did not show up until I had added payment details to my account.

Armed with the API access details, it is time to create the file system. First, create a new bucket in the cloud. This bucket is the file system storage area. I picked a bucket name reflecting the machine that was going to store data there, but any name will do. I'll refer to it as bucket-name below. In addition, one needs the API login and password, and a locally created password. Store it all in ~root/.s3ql/authinfo2 like this:

[s3c]
storage-url: s3c://s.greenqloud.com:443/bucket-name
backend-login: API-login
backend-password: API-password
fs-passphrase: local-password
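The fs-passphrase value is one you make up locally. A minimal sketch of generating a random one in the shell (just one generic approach; see below for what I actually use):

# 50 random alphanumeric characters from the kernel's entropy pool
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 50; echo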

I create my local passphrase using pwget 50 or something like the above; any sensible way to create a fairly random password should do. Armed with these details, it is now time to run mkfs, entering the API details and password to create the file system:

# mkdir -m 700 /var/lib/s3ql-cache
# mkfs.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl s3c://s.greenqloud.com:443/bucket-name
Enter backend login:
Enter backend password:
Before using S3QL, make sure to read the user's guide, especially
the 'Important Rules to Avoid Loosing Data' section.
Enter encryption password:
Confirm encryption password:
Generating random encryption key...
Creating metadata tables...
Dumping metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Compressing and uploading metadata...
Wrote 0.00 MB of compressed metadata.
#

The next step is mounting the file system to make the storage available.

# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
Using 4 upload threads.
Downloading and decompressing metadata...
Reading metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Mounting filesystem...
# df -h /s3ql
Filesystem                              Size  Used Avail Use% Mounted on
s3c://s.greenqloud.com:443/bucket-name  1.0T     0  1.0T   0% /s3ql
#

The file system is now ready for use. I use rsync to store my backups in it, and as the metadata used by rsync is downloaded at mount time, no network traffic (and storage cost) is triggered by running rsync. To unmount, one should not use the normal umount command, as it will not flush the cache to the cloud storage; instead, run the umount.s3ql command like this:

# umount.s3ql /s3ql
#
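My rsync backups thus boil down to a mount, sync, unmount cycle; a sketch of it as a script, using the same paths as above (the rsync source is a placeholder for whatever needs backing up):

#!/bin/sh
# backup-to-s3ql.sh: mount, sync, unmount (the unmount flushes the cache)
set -e
mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
rsync -a --delete /home/ /s3ql/home/
umount.s3ql /s3ql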

There is a fsck command available to check the file system and correct any problems detected. It can be used if the local server crashes while the file system is mounted, to reset the "already mounted" flag. This is what it looks like when processing a working file system:

# fsck.s3ql --force --ssl s3c://s.greenqloud.com:443/bucket-name
Using cached metadata.
File system seems clean, checking anyway.
Checking DB integrity...
Creating temporary extra indices...
Checking lost+found...
Checking cached objects...
Checking names (refcounts)...
Checking contents (names)...
Checking contents (inodes)...
Checking contents (parent inodes)...
Checking objects (reference counts)...
Checking objects (backend)...
..processed 5000 objects so far..
..processed 10000 objects so far..
..processed 15000 objects so far..
Checking objects (sizes)...
Checking blocks (referenced objects)...
Checking blocks (refcounts)...
Checking inode-block mapping (blocks)...
Checking inode-block mapping (inodes)...
Checking inodes (refcounts)...
Checking inodes (sizes)...
Checking extended attributes (names)...
Checking extended attributes (inodes)...
Checking symlinks (inodes)...
Checking directory reachability...
Checking unix conventions...
Checking referential integrity...
Dropping temporary indices...
Backing up old metadata...
Dumping metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Compressing and uploading metadata...
Wrote 0.89 MB of compressed metadata.
#

Thanks to the cache, working on files that fit in the cache is very quick, about the same speed as local file access. Uploading large amounts of data is, for me, limited by the bandwidth out of and into my house. Uploading 685 MiB with a 100 MiB cache gave me 305 kiB/s, which is very close to my upload speed, and downloading the same Debian installation ISO gave me 610 kiB/s, close to my download speed. Both were measured using dd. So for me, the bottleneck is my network, not the file system code. I do not know what a good cache size would be, but suspect that the cache should be larger than your working set.

I mentioned that only one machine can mount the file system at a time. If another machine tries, it is told that the file system is busy:

# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
Using 8 upload threads.
Backend reports that fs is still mounted elsewhere, aborting.
#

The file content is uploaded when the cache is full, while the metadata is uploaded once every 24 hours by default. To ensure the file system content is flushed to the cloud, one can either unmount the file system, or ask S3QL to flush the cache and metadata using s3qlctrl:

# s3qlctrl upload-meta /s3ql
# s3qlctrl flushcache /s3ql
#
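If you would rather push the metadata more often than the 24-hour default, a cron entry can do it; a sketch (the hourly schedule and the /etc/cron.d location are my choices, not from S3QL):

# /etc/cron.d/s3ql-meta: upload S3QL metadata at the top of every hour
0 * * * * root s3qlctrl upload-meta /s3ql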

If you are curious about how much space your data uses in the cloud, and how much compression and deduplication cut down on the storage usage, you can use s3qlstat on the mounted file system to get a report:

# s3qlstat /s3ql
Directory entries:    9141
Inodes:               9143
Data blocks:          8851
Total data size:      22049.38 MB
After de-duplication: 21955.46 MB (99.57% of total)
After compression:    21877.28 MB (99.22% of total, 99.64% of de-duplicated)
Database size:        2.39 MB (uncompressed)
(some values do not take into account not-yet-uploaded dirty blocks in cache)
#

I mentioned earlier that there are several possible suppliers of storage. I did not try to locate them all, but am aware of at least Greenqloud, Google Drive, Amazon S3 web services, Rackspace and Crowncloud. The latter even accepts payment in Bitcoin. Pick one that suits your needs. Some of them provide several GiB of free storage, but the price models are quite different and you will have to figure out what suits you best.

While researching this blog post, I had a look at research papers and posters discussing the S3QL file system. There are several, which tells me that the file system is getting critical scrutiny from the science community, and that increased my confidence in using it. One nice poster is titled "An Innovative Parallel Cloud Storage System using OpenStack’s SwiftObject Store and Transformative Parallel I/O Approach" by Hsing-Bung Chen, Benjamin McClelland, David Sherrill, Alfred Torrez, Parks Fields and Pamela Smith. Please have a look.

Given my problems with different file systems earlier, I decided to check out the mounted S3QL file system to see if it would be usable as a home directory (in other words, that it provides POSIX semantics when it comes to locking, umask handling, etc.). Running my test code to check file system semantics, I was happy to discover that no error was found. So the file system can be used for home directories, if one chooses to do so.

If you do not want a locally mounted file system, and want something that works without the Linux FUSE layer, I would like to mention the Tarsnap service, which also provides locally encrypted backup using a command line client. It has a nicer access control system, where one can split out read and write access, allowing some systems to write to the backup and others only to read from it.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Categories: Elsewhere
