This weekend I moved my blog to a different server. This meant I could:
- Enable IPv6
- Enable SSL
- Set up a pump.io profile
I've tested it, and it's working. I'm hoping that I can swap out the Node.js modules one-by-one for the Debian-packaged versions.
Another thing that was developed as a result of my big Commerce project (see my previous blog post for the run-down of the various modules this contributed back to Drupal) was a bit of code for generating a graph that represents the relationships between entity types.
For a site with a lot of entityreference fields it's a good idea to draw diagrams before you get started, to figure out how everything ties together. But it's also nice to have a diagram that's based on what you've built, so you can compare it, and refer back to it (not to mention that it's a lot easier to read than my handwriting).
The code for this never got released; I tried various graph engines that work with Graph API, but none of them produced quite what I was hoping for. It just sat in my local copy of Field Tools for the last couple of years (I didn't even make a git branch for it, that shows how rough it was!). Then yesterday I came across the Sigma.js graph library, and that inspired me to dig out the code and finish it off.
To give the complete picture, I've added support for the relationships that are formed between entity types by their schema fields: things like the uid property on a node. These are easily picked out of hook_schema()'s foreign keys array.
In the end, I found Sigma.js wasn't the right fit: it looks very pretty, but it expects you to dictate the position of the nodes on the canvas, which for a generated graph doesn't really work. There is a plugin for it that allows the graph to be force-directed, but that was starting to be too fiddly. Instead, I found Springy, which, while maybe not quite as versatile, lays out the graph nodes automatically out of the box. It didn't take too long to write a library module for using Springy with Graph API.
Here's the result:
The next stage of development for this tool is figuring out a nice way of showing entity bundles. After all, entityreference fields are on specific bundles, and may point to only certain bundles. However, they sometimes point to all bundles of an entity type. And meanwhile, schema properties are always on all bundles and point to all bundles. How do we represent that without the graph turning into a total mess? I'm pondering adding a form that lets you pick which entity types should be shown as separate bundles, but it's starting to get complicated. If you have any thoughts on this, or other ways to improve this feature, please share them with me in the Field Tools issue queue!
After the talk, various DebConf participants approached me and started hacking on Debsources, which is awesome! As a result of their work, new shiny features will probably be announced shortly. Stay tuned.
When discussing with new contributors (hi Luciano, Raphael!), though, it quickly became clear that getting started with Debsources hacking wasn't particularly easy. In particular, doing a local deployment for testing purposes might be intimidating, due to the need to have a (partial) source mirror and whatnot. To fix that, I have now written a HACKING file for Debsources, which you can find at the top level of the Git repo.
Happy Debsources hacking!
By pure chance I was able to accept 237 packages, the same number as last month. 33 times I contacted the maintainer to ask a question about a package and 55 times I had to reject a package. The reject number increased a bit as I also worked on packages that already got a note but had not been fully processed. In contrast I only filed three serious bugs this month.
Currently there are about 200 packages still waiting in the NEW queue. As the freeze for Jessie comes closer every day, I wonder whether all of them can be processed in time. So I don’t mind if every maintainer checks their package again and maybe uploads an improved version that can be processed faster.
All in all I got assigned a workload of 16.5h for August. I spent these hours uploading new versions of:
- [DLA 32-1] nspr security update
- [DLA 34-1] libapache-mod-security security update
- [DLA 36-1] polarssl security update
- [DLA 37-1] krb5 security update
- [DLA 39-1] gpgme1.0 security update
- [DLA 41-1] python-imaging security update
As last month, I prepared these uploads on the basis of the corresponding DSAs for Wheezy. For these packages, backporting the Wheezy patches to Squeeze was rather easy.
I also had a look at python-django and eglibc. Although the python-django patches apply now, the package fails some tests and these issues need some further investigation. In case of eglibc, my small pbuilder didn’t have enough resources and trying to build the package resulted in a full disk after more than three hours of work.
For PHP5, Ondřej Surý (the real maintainer) suggested using upstream point releases instead of only applying patches. I am curious how much effort this approach will need. Stay tuned, next month you will be told more details!
Anyway, this is still a lot of fun and I hope I can finish python-django, eglibc and php5 in September.
This month my meep packages plus mpb have been part of a small hdf5 transition. All five packages needed a small patch and a new upload. As the patch was already provided by Gilles Filippini, this was done rather quickly.
If you would like to support my Debian work you could either be part of the Freexian initiative (see above) or consider to send some bitcoins to 1JHnNpbgzxkoNexeXsTUGS6qUp5P88vHej. Contact me at firstname.lastname@example.org if you prefer another way to donate. Every kind of support is most appreciated.
apt-offline 1.4 has been released. This is a minor bug fix release. In fact, one feature, offline bug reports (--bug-reports), has been dropped for now.
The Debian BTS interface seems to have changed over time and the older debianbts.py module (that used the CGI interface) does not seem to work anymore. The current debbugs.py module seems to have switched to the SOAP interface.
There are a lot of changes going on personally, and I just haven't had the time to spend on this. If anyone would like to help, please reach out to me. We need to switch to the new debbugs.py module. And it should be cross-platform.
Also, thanks to Hans-Christoph Steiner for providing the bash completion script.
Drupal 8, Plugins, Guzzle, CMI, Caching... If those buzzwords trigger your interest, you should keep reading this article. We will cover those topics as we build one of our first Drupal 8 modules. Recently one of our clients requested a solution to integrate a custom feed called IBP Catalog. The IBP Catalog is a filterable XML feed which makes it easy to collect web components like banners, documents or even audio files. Those components are selected by the broker through a dedicated website.
Sociological Images has an interesting article making the case for phasing out the US $0.01 coin. The Australian $0.01 and $0.02 coins were worth much more when they were phased out.
Multiplicity is a board game that’s designed to address some of the failings of SimCity type games. I haven’t played it yet but the page describing it is interesting.
Adam Bryant wrote an interesting article for NY Times about Google’s experiments with big data and hiring. Among other things it seems that grades and test results have no correlation with job performance.
Jennifer Chesters from the University of Canberra wrote an insightful article about the results of Australian private schools. Her research indicates that kids who go to private schools are more likely to complete year 12 and university but they don’t end up earning more.
Wired has an interesting article about the Bittorrent Sync platform for distributing encrypted data. It’s apparently like Dropbox but encrypted and decentralised. Also it supports applications on top of it which can offer social networking functions among other things.
The AbbottsLies.com.au site catalogs the lies of Tony Abbott. There’s a lot of work in keeping up with that.
This is part one in a series of blog posts about the Drupal Community. There is NO SHORTAGE of posts on this topic, but I wanted to take the time to tell my story of how I got here and what the Drupal community means to me.
If you have ever attended one of my private or public trainings then chances are good that you have heard me utter the phrase that titles this blog post. You can also hear me saying this on a recent Podcast I did with the good folks at LightSky.com: http://www.lightsky.com/podcasts/drupal-community
Here is that quote again in longer form: “Drupal is a community and there happens to be a piece of software by the exact same name, and that can be confusing for some.”
If you read that statement slowly enough, or maybe a few times, I believe you will agree that this is a VERY loaded statement, a provocative one even. How does it make you feel when you read it? Do you instantly agree? Do you instantly disagree? Do you wonder if it is hyperbole or sensationalism at some level? I think all these reactions, and more, are well within the realm of expected, and acceptable, responses.
You see, my early exposure to “Drupal” started with a rather humongous dose of the Drupal Community. Therefore, it stands to reason that I know it well, love it dearly, and engage and describe it as often as I do. But it wasn’t just my early exposures that set me on a path of life long Drupal Community advocacy. It was the opportunities for continued exposures that were afforded to me by the very members of the community. It fed me, equipped me, and empowered me which, in turn, motivated me to energetically continue on in my role as an active Drupal Community member.
How it started:
For the LONGER version of this story, go listen to my 2009 DrupalEasy Podcast Interview.
Suffice it to say that I discovered Drupal in December of 2007, and after becoming convinced that Drupal ROCKED, I discovered that there was a training in Portland, Oregon. This outfit with a real funny name was doing a 5-day training on module development for Drupal 5. What was that funny-named company? Well, it was Lullabot, of course! :-) There I met MANY of the people who I count as good friends, partners, and colleagues to this day.
Let me keep this simple with a visual timeline of just how much Drupal Community interaction I had right out of the gate:
- December | Discover Drupal
- Jan, Portland | 5 days of Drupal5 Module Development Training with Lullabot & 2 dozen other [soon to be] friends.
- Jan, Indianapolis | I start the local Indy Drupal Users Group. Why? Because in Portland, Addi Berry told me to!
- Feb, Los Angeles | 5 days of Drupal5 Theme Development Training with Lullabot & some of my new friends from Portland PLUS some brand new friends.
- Mar, Boston | 4 days at DrupalCon with 850 Drupalers, so many of which I already knew from the 2 Lullabot classes
- May, Minneapolis | 5 days of Drupal6 Module/Theming training AGAIN with Lullabot & many familiar faces & new ones.
- June, Toronto | 5 days of Drupal6 Intensive Training AGAIN with Lullabot & many familiar faces & new ones.
- July, Chicago | 2 days helping to man the Drupal booth at HostingCon. Kieran Lal had put out a request for people to take shifts. I showed up and never left the booth. I was an animal doing everything I could to educate ppl on how awesome I thought Drupal was. I COULD NOT [would not?] shut up. I impressed the local Chicago Users Group members and they asked me if I would come speak at their first ever DrupalCamp Chicago. I AGREED! [Still didn’t understand what a DrupalCamp was!?!?!]
- Oct, Chicago | DrupalCamp Chicago is my 1st ever DrupalCamp! I wound up delivering over 8 sessions and leading a couple BoFs as I discovered my new title, King Of The N00bs!
- Nov, Indianapolis | I become aware of an event called IndyBANG [Indy Business & Arts Networking Get-together] I pay for booth space, print up a huge banner, and enjoyed some local entertainment, beverages, and got to tell my own city about this awesome thing called Drupal!
- May, Chicago | My First PAID Gig! I am invited to deliver a workshop at the 1st annual CMS Expo in Evanston IL. Local community leader Matthew Lechleider and I wow a good sized crowd for a 1/2 day Drupal intro workshop. I end up meeting many ppl who will play important, longterm roles in my professional life.
You get the idea! right? :-)
So if you do the math, my first 90 days in Drupal included 80 hours of Lullabot workshops and the first “solo” DrupalCon in North America. That’s pretty intense! It only stands to reason that my perspective on Drupal is one that is Community driven. When I think of Drupal, I think of the Drupal community.
Other upcoming topics include:
- Why it's important to distinguish the Drupal Community as its own entity and appreciate its value and power.
- How companies have leveraged the Drupal Community and how they've achieved measurable ROI from doing so.
- How the Drupal Community is a "Value Added" consideration in the sales process and why the Drupal Community matters when businesses consider which CMS to use for their organization.
- The evolution of DrupalCamps across the years. Many things have changed!
- Other topics? Leave a comment on this post if you have an idea for a future blog post! :-)
Today we have a little diversion to talk about the National Health Service. The NHS is the publicly funded healthcare system in the UK.
Actually there are four such services in the UK, only one of which has this name:
- The National Health Service (England)
- Health and Social Care in Northern Ireland
- NHS Scotland
- NHS Wales
In theory this doesn't matter: if you're in the UK and you break your leg, you get carried to a hospital and you get treated. There are differences in policy because different rules apply, but the basic "free health care" stuff applies in all locations.
(Differences? In Scotland you get eye-tests for free, in England you pay.)
My wife works as an accident & emergency doctor, and has recently changed jobs. Hearing her talk about her work is fascinating.
The hospitals she's worked in (Dundee, Perth, Kirkcaldy, Edinburgh, Livingstone) are interesting places. During the week things are usually reasonably quiet, and during the weekend things get significantly more busy. (This might mean there are 20 doctors to hand, versus three at quieter times.)
Weekends are busy largely because people fall down hills, get drunk and fight, and are at home rather than at work - where 90% of accidents occur.
Of course even a "quiet" week can be busy, because folk will have heart-attacks round the clock, and somebody somewhere will always be playing with a power tool, a ladder, or both!
So what was the point of this post? Well, she's recently transferred to working at a children's hospital (still in A&E) and the patients are so very different.
I expected the injuries/patients she'd see to differ. Few 10-year-olds will arrive drunk (though it does happen), and few adults fall out of trees, or eat washing machine detergent. But when she returns home and talks about her day, it's fascinating how many things are completely different from what I expected.
Adults come to hospital mostly because they're sick, injured, or drunk.
Children come to hospital mostly because their parents are paranoid.
A child has a rash? Doctors are closed? Let's go to the emergency ward!
A child has fallen out of a tree and has a bruise, a lump, or complains of pain? Doctors are closed? Let's go to the emergency ward!
I've not kept statistics, though I wish I could, but it seems that she can go 3-5 days between seeing an actually injured or chronically sick child. It's the first-time parents who bring kids in when they don't need to.
Understandable, completely understandable, but at the same time I'm sure it is more than a little frustrating for all involved.
Finally, one thing I've learned, which seems completely stupid, is the NHS-Scotland approach to recruitment. You apply for a role, such as "A&E doctor", and after an interview, etc., you get told "You've been accepted - you will now work in Glasgow".
In short, you apply for a post and then get told where it will be based afterwards. There's no ability to say "I'd like to be a doctor in city X - where I live"; you apply, and get told where the post is after acceptance. If it is 100+ miles away you either choose to commute, or decline and go through the process again.
This has led to Kirsi working in hospitals within a radius of about 100km from the city we live in, and has meant she's had to turn down several posts.
And that is all I have to say about the NHS for the moment, except for the implicit pity for people who have to pay (inflated and life-changing) prices for things in other countries.
After an intensive evening of brainstorming by the 5th floor cabal, I am happy to release the very first version of the Debian Trivia, modeled after the famous TCP/IP Drinking Game. Only the questions are listed here — maybe they should go (with the answers) into a package? Anyone willing to co-maintain? Any suggestions for additional questions?
- What was the first release with an “and-a-half” release?
- Where were the first two DebConf held?
- What are Debian releases named after? Why?
- Give two names of girls that were originally part of the Debian Archive Kit (dak), that are still actively used today.
- Swirl on chin. Does it ring a bell?
- What was Dunc Tank about? Who was the DPL at the time? Who were the release managers during Dunc Tank?
- Cite 5 different valid values for a package’s urgency field. Are all of them different?
- When was the Debian Maintainers status created?
- What is the codename for experimental?
- Order correctly lenny, woody, etch, sarge
- Which one was the Dunc Tank release?
- Name three locations where Debian machines are hosted.
- What does the B in projectb stand for?
- What is the official card game at DebConf?
- Describe the Debian restricted use logo.
- One Debian release was frozen for more than a year. Which one?
- Name the kernel version for sarge, etch, lenny, squeeze, wheezy. Bonus for etch-n-half!
- What happened to Debian 1.0?
- Which DebConfs were held in a Nordic country?
- What does piuparts stand for?
- Name the first Debian release.
- Order correctly hamm, bo, potato, slink
- What are most Debian project machines named after?
Recently I was doing some work on the Alioth infrastructure, fixing and cleaning things up.
One of the more visible things I did was the switch from gitweb to cgit. cgit is a lot faster and looks better than gitweb.
The list of repositories is generated every hour. The move also has the nice effect that user repositories are available via the cgit index again.
I don’t plan to disable the old gitweb, but I created a bunch of redirect rules that - hopefully - redirect most use cases of gitweb to the equivalent cgit url.
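As an illustration, such redirect rules might look something like this in Apache (a hypothetical sketch; the actual gitweb and cgit URL layouts on Alioth may well differ):

```apache
# Map gitweb summary URLs like /git/?p=collab-maint/foo.git;a=summary
# to a cgit equivalent like /cgit/collab-maint/foo.git/
# (both path layouts here are assumptions for the example).
RewriteEngine On
RewriteCond %{QUERY_STRING} (^|;)p=([^;]+)
RewriteRule ^/git/?$ /cgit/%2/? [R=301,L]
```

The trailing `?` in the substitution drops the old gitweb query string so the redirect target is a clean cgit URL.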
If I broke something, please tell me; if I missed a common use case, please tell me. You can usually reach me on #alioth@oftc or via mail (email@example.com).
People also asked me to upload my cgit package to Debian; the package is now waiting in NEW. Thanks to Nicolas Dandrimont (olasd) we also have a patch included that generates proper HTTP return codes if a repo doesn’t exist.
Modern web applications are much more complicated than the simple Perl CGI scripts or PHP pages of the past. They usually start with a framework and include lots of external components both on the front-end and on the back-end.
Here's an example from the Node.js back-end of a real application:

$ npm list | wc -l
256
What if one of these 256 external components has a security vulnerability? How would you know, and what would you do if one of your direct dependencies had a hard-coded dependency on the vulnerable version? It's a real problem and of course one way to avoid it is to write everything yourself. But that's neither realistic nor desirable.
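To make the problem concrete, here's a toy sketch of my own (not anything npm or this post actually ships) that walks the kind of tree `npm ls --json` produces and flags packages pinned at versions you already know to be vulnerable:

```python
def find_vulnerable(tree, known_bad):
    """Recursively collect (name, version) pairs from an `npm ls --json`
    style tree, where known_bad maps package name -> set of bad versions."""
    hits = []
    for name, info in tree.get("dependencies", {}).items():
        version = info.get("version")
        if version in known_bad.get(name, set()):
            hits.append((name, version))
        hits.extend(find_vulnerable(info, known_bad))  # walk transitive deps
    return hits
```

The hard part, of course, is keeping `known_bad` up to date, which is exactly the work that distribution security teams already do for you.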
However, it's not a new problem. It was solved years ago by Linux distributions for C and C++ applications. For some reason though, this learning has not propagated to the web where the standard approach seems to be to "statically link everything".
What if we could build on the work done by Debian maintainers and the security team?

Case study - the Libravatar project
As a way of discussing a different approach to the problem of dependency management in web applications, let me describe the decisions made by the Libravatar project.

Description
Libravatar is a federated and free software alternative to the Gravatar profile photo hosting site.
From a developer point of view, it's a fairly simple stack:
The service is split between the master node, where you create an account and upload your avatar, and a few mirrors, which serve the photos to third-party sites.
Like with Gravatar, sites wanting to display images don't have to worry about a complicated protocol. In a nutshell, all that a site needs to do is hash the user's email and add that hash to a base URL. Where the federation kicks in is that every email domain is able to specify a different base URL via an SRV record in DNS.
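The lookup scheme just described can be sketched in a few lines of Python. This is my own illustration, not Libravatar's actual code, and the md5 choice and email normalization are assumptions inferred from the 32-character hex digests in the examples below:

```python
import hashlib

def avatar_url(email, base="http://cdn.libravatar.org/avatar/"):
    """Hash the trimmed, lowercased email and append the hex digest to a
    base URL. In the real federated scheme, the base URL would be swapped
    out per email domain via an SRV record lookup (not shown here)."""
    digest = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
    return base + digest
```

Treat the normalization details (trimming, lowercasing) as assumptions; the point is only that a site needs nothing more than a hash function and a base URL.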
For example, firstname.lastname@example.org hashes to 7cc352a2907216992f0f16d2af50b070 and so the full URL is:

http://cdn.libravatar.org/avatar/7cc352a2907216992f0f16d2af50b070
whereas email@example.com hashes to 0110e86fdb31486c22dd381326d99de9 and the full URL is:

http://fmarier.org/avatar/0110e86fdb31486c22dd381326d99de9
due to the presence of an SRV record on fmarier.org.

Ground rules
The main rules that the project follows are to:
- only use Python libraries that are in Debian
- use the versions present in the latest stable release (including backports)
In addition to these rules around dependencies, we decided to treat the application as if it were going to be uploaded to Debian:
- The Makefile includes a test target which runs the unit tests and some lint checks (pylint, pyflakes and pep8).
- Debian packages are produced to encode the dependencies in the standard way as well as to run various setup commands in maintainer scripts and install cron jobs.
- The project runs its own package repository using reprepro to easily distribute these custom packages.
- In order to update the repository and the packages installed on servers that we control, we use fabric, which is basically a fancy way to run commands over ssh.
- Mirrors can simply add our repository to their apt sources.list and upgrade Libravatar packages at the same time as their system packages.
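For instance, a mirror's apt configuration might contain a line along these lines (the repository URL and suite name here are made up for illustration):

```
deb http://apt.libravatar.org/repo squeeze main
```

After that, a plain apt-get upgrade pulls in new Libravatar packages alongside regular system updates.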
Overall, this approach has been quite successful and Libravatar has been a very low-maintenance service to run.
The ground rules have, however, limited our choice of libraries. For example, to talk to our queuing system, we had to use the raw Python bindings to the C Gearman library instead of being able to use a nice pythonic library which wasn't in Debian squeeze at the time.
There is of course always the possibility of packaging a missing library for Debian and maintaining a backport of it until the next Debian release. This wouldn't be a lot of work considering the fact that responsible bundling of a library would normally force you to follow its releases closely and keep any dependencies up to date, so you may as well share the result of that effort. But in the end, it turns out that there is a lot of Python stuff already in Debian and we haven't had to package anything new yet.
Another thing that was somewhat scary, due to the number of packages that were going to get bumped to a new major version, was the upgrade from squeeze to wheezy. It turned out however that it was surprisingly easy to upgrade to wheezy's version of Django, Apache and Postgres. It may be a problem next time, but all that means is that you have to set a day aside every 2 years to bring everything up to date.

Problems
The main problem we ran into is that we optimized for sysadmins and unfortunately made it harder for new developers to setup their environment. That's not very good from the point of view of welcoming new contributors as there is quite a bit of friction in preparing and testing your first patch. That's why we're looking at encoding our setup instructions into a Vagrant script so that new contributors can get started quickly.
One thing worth noting is that while the Django project is very good at issuing backwards-compatible fixes for security issues, sometimes there is no way around disabling broken features. In practice, this means that we cannot run unattended-upgrades on our main server in case something breaks. Instead, we make use of apticron to automatically receive email reminders for any outstanding package updates.
On that topic, it can occasionally take a while for security updates to be released in Debian, but this usually falls into one of two cases:
- You either notice because you're already tracking releases pretty well and therefore could help Debian with backporting of fixes and/or testing;
- or you don't notice because it has slipped through the cracks or there simply are too many potential things to keep track of, in which case the fact that it eventually gets fixed without your intervention is a huge improvement.
Finally, relying too much on Debian packaging does prevent Fedora users (a project that also makes use of Libravatar) from easily contributing mirrors. Though if we had a concrete offer, we would certainly look into creating the appropriate RPMs.

Is it realistic?
It turns out that I'm not the only one who thought about this approach, which has been named "debops". The same day that my talk was announced on the DebConf website, someone emailed me saying that he had instituted the exact same rules at his company, which operates a large Django-based web application in the US and Russia. It was pretty impressive to read about a real business coming to the same conclusions and using the same approach (i.e. system libraries, deployment packages) as Libravatar.
Regardless of this though, I think there is a class of applications that are particularly well-suited for the approach we've just described. If a web application is not your full-time job and you want to minimize the amount of work required to keep it running, then it's a good investment to restrict your options and leverage the work of the Debian community to simplify your maintenance burden.
The second criterion I would look at is framework maturity. Given the 2-3 year release cycle of stable distributions, this approach is more likely to work with a mature framework like Django. After all, you probably wouldn't compile Apache from source, but until recently building Node.js from source was the preferred option as it was changing so quickly.
While it goes against conventional wisdom, relying on system libraries is a sustainable approach you should at least consider in your next project. After all, there is a real cost in bundling and keeping up with external dependencies.
Monday morning, 1:45AM.
Laura and I walk into the boys’ room. We turn on the light. Nothing happens. (They’re sound sleepers.)
“Boys, it’s time to get up to go get on the train!”
Four eyes pop open. “Yay! Oh I’m so excited!”
And then, “Meow!” (They enjoy playing with their stuffed cats that Laura got them for Christmas.)
Before long, it was out the door to the train station. We even had time to stop at a donut shop along the way.
We climbed into our family bedroom (a sleeping car room on Amtrak specifically designed for families of four), and as the train started to move, the excitement of what was going on crept in. Yes, it’s 2:42AM, but these are two happy boys:
Jacob and Oliver love trains, and this was the beginning of a 3-day train trip from Newton to Seattle that would take us through Kansas, Colorado, the Rocky Mountains of New Mexico, Arizona, Los Angeles, up the California coast, through the Cascades, and on to Seattle. Whew!
Here we are later that morning before breakfast:
Here’s our train at a station stop in La Junta, CO:
And at the beautiful small mountain town of Raton, NM:
Some of the passing scenery in New Mexico:
Through it all, we found many things to pass the time. I don’t think anybody was bored. I took the boys “exploring the train” several times — we’d walk from one end to the other and see what all was there. There was always the dining car for our meals, the lounge car for watching the passing scenery, and on the Coast Starlight, the Pacific Parlor Car.
Here we are getting ready for breakfast one morning.
Getting to select meals and order in the “train restaurant” was a big deal for the boys.
Laura brought one of her origami books, which even managed to pull the boys away from the passing scenery in the lounge car for quite some time.
Origami is serious business:
They had some fun wrapping themselves around my feet and challenging me to move. And were delighted when I could move even though they were trying to weight me down!
Several games of Uno were played, but even those sometimes couldn’t compete with the passing scenery:
The Coast Starlight features the Pacific Parlor Car, which was built over 50 years ago for the Santa Fe Hi-Level trains. They’ve been updated; the upper level is a lounge and small restaurant, and the lower level has been turned into a small theater. They show movies in there twice a day, but most of the time, the place is empty. A great place to go with little boys to run around and play games.
The boys and I sort of invented a new game: roadrunner and coyote, loosely based on the old Looney Tunes cartoons. Jacob and Oliver would be roadrunners, running around and yelling “MEEP MEEP!” Meanwhile, I was the coyote, who would try to catch them — even briefly succeeding sometimes — but ultimately fail in some hilarious way. It burned a lot of energy.
And, of course, the parlor car was good for scenery-watching too:
We were right along the Pacific Ocean for several hours – sometimes there would be a highway or a town between us and the beach, but usually there was nothing at all between us and the coast. It was beautiful to watch the jagged coastline go by, to gaze out onto the ocean, watching the birds — apparently so beautiful that I didn’t even think to take some photos.
Laura’s parents live in California, and took a connecting train. I had arranged for them to have a sleeping car room near ours, so for the last day of the trip, we had a group of 6. Here are the boys with their grandparents at lunch Wednesday:
We stepped off the train in Seattle into beautiful King Street Station.
Our first day in Seattle was a quiet day of not too much. Laura’s relatives live near Lake Washington, so we went out there to play. The boys enjoyed gathering black rocks along the shore.
We went blackberry picking after that – filled up buckets for a cobbler.
The next day, we rode the Seattle Monorail. The boys have been talking about this for months — a kind of train they’ve never been on. That was the biggest thing in their minds that they were waiting for. They got to ride in the very front, by the operator.
Nice view from up there.
We walked through the Pike Market — I hadn’t been in such a large, crowded place since I was in Guadalajara:
At the Seattle Aquarium, we all had a great time checking out all the exhibits. The “please touch” one was a particular hit.
Walking underneath the salmon tank was fun too.
We spent a couple of days doing things closer to downtown. Laura’s cousin works at MOHAI, the Museum of History and Industry, so we spent a morning there. The boys particularly enjoyed the old periscope mounted to the top of the building, and the exhibit on chocolate (of course!).
They love any kind of transportation, so of course we had to get a ride on the Seattle Streetcar that comes by MOHAI.
All weekend long, we had been noticing the seaplanes taking off from Lake Washington and Lake Union (near MOHAI). So finally I decided to investigate, and one morning while Laura was doing things with her cousin, the boys and I took a short seaplane ride from one lake to another, and then rode every method of transportation we could except for ferries (we did that the next day). Here is our Kenmore Air plane:
The view of Lake Washington from 1000 feet was beautiful:
I think we got a better view than the Space Needle, and it probably cost about the same anyhow.
After splashdown, we took the streetcar to a place where we could eat lunch right by the monorail tracks. Then we rode the monorail again. Then we caught a train (it went underground a bit so it was a “subway” to them!) and rode it a few blocks.
There is even scenery underground, it seems.
We rode a bus back, and saved one last adventure for the next day: a ferry to Bainbridge Island.
Laura and I even got some time to ourselves to go have lunch at an amazing Greek restaurant to celebrate a year since we got engaged. It’s amazing to think that, by now, it’s only a few months until our wedding anniversary too!
There are many special memories of the weekend I could mention — visiting with Laura’s family, watching the boys play with her uncle’s pipe organ (it’s in his house!), watching the boys play with their grandparents, having all six of us on the train for a day, flying paper airplanes off the balcony, enjoying the cool breeze on the ferry and the beautiful mountains behind the lake. One of my favorites is waking up to high-pitched “Meow? Meow meow meow! Wake up, brother!” sorts of sounds. There was so much cat-play on the trip, and it was cute to hear. I have the feeling we won’t hear things like that much more.
So many times on the trip I heard, “Oh dad, I am so excited!” I never get tired of hearing that. And, of course, I was excited, too.
I’m writing this blog post on the plane from Portland towards Europe (which I can now do!), using the remaining battery life after having watched one of the DebConf talks that I missed. (It was the systemd talk, which was good and interesting, but maybe I should have watched one of the power management talks instead, as my battery is running down faster than it should, I believe.)
I mostly enjoyed this year’s DebConf. I must admit that I did not come very prepared: I had neither something urgent to hack on, nor important things to discuss with the other attendees, so in a way I had a slow start. I also felt a bit out of touch with the project, both personally and technically: in previous DebConfs, I had more interest in many different corners of the project, and also came with more naive enthusiasm. After more than 10 years in the project, I see things more realistically and in a more relaxed way, and don’t react to “wouldn’t it be cool to have [crazy idea]” very easily any more. These days I mostly focus on Haskell packaging (and related tooling, which sometimes is also relevant and useful to others), which is not very interesting to most others.
But in the end I did get to do some useful hacking, heard a few interesting talks and even got a bit excited: I created a new tool to schedule binNMUs for Haskell packages which is quite generic (configured by just a regular expression), so that it can and will be used by the OCaml team as well, and who knows who else will start using hash-based virtual ABI packages in the future... It runs via a cron job on people.debian.org to produce output for Haskell and for OCaml, based on data pulled via HTTP. If you are a Debian developer and want up-to-date results, log into wuiet.debian.org and run ~nomeata/binNMUs --sql; it then uses the projectb and wanna-build databases directly. Thanks to the ftp team for opening up incoming.debian.org, by the way!
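I won’t reproduce the actual tool here, but the core idea — a team-specific regular expression that recognizes the hash-based virtual ABI packages, so that a binary whose ABI dependencies have vanished from the archive gets flagged for a binNMU — can be sketched in a few lines. Everything below (the regex, the package names, the function) is a made-up illustration of that idea, not the real implementation:

```python
import re

# Hypothetical pattern for Haskell's hash-based virtual ABI packages,
# e.g. "libghc-mtl-dev-2.1.2-3bdc1" (the exact naming scheme here is an
# assumption for illustration). Because this pattern is just configuration,
# swapping in an OCaml-style regex makes the same logic work for that team.
ABI_PACKAGE_RE = re.compile(r"^libghc-.*-[0-9a-f]{5}$")

def needs_binnmu(dependencies, provided):
    """A binary package needs a binNMU if it depends on a hash-based
    virtual ABI package that nothing in the archive provides any more."""
    return any(
        dep for dep in dependencies
        if ABI_PACKAGE_RE.match(dep) and dep not in provided
    )

# A rebuilt dependency now carries a new hash, so the old virtual
# package is gone and a rebuild is due:
print(needs_binnmu(["libghc-mtl-dev-2.1.2-3bdc1"],
                   {"libghc-mtl-dev-2.1.2-9f2e0"}))  # True
# If the old hash is still provided, nothing needs to happen:
print(needs_binnmu(["libghc-mtl-dev-2.1.2-3bdc1"],
                   {"libghc-mtl-dev-2.1.2-3bdc1"}))  # False
```

The real tool of course also has to gather the dependency and Provides data first (via HTTP, or from projectb and wanna-build when run with --sql, as described above) before doing this comparison.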
Unsurprisingly, I also held a talk on Haskell and Debian (slides available). I talked a bit too long and we had too little time for discussion, but in any case not all of the discussion would have fit in 45 minutes. The question of which packages from Hackage should be added to Debian and which should not is still undecided (which means we carry on packaging whatever we happen to want in Debian, for whatever reason). I guess the better our tooling gets (see the next section), the more easily we can support more and more packages.
I am quite excited by and supportive of Enrico’s agenda to remove boilerplate data from the debian/ directories and rely on autodebianization tools. We have such a tool for Haskell packages, cabal-debian, but it is unofficial, i.e. neither created by us nor fully endorsed. I want to change that, so I got in touch with the upstream maintainer and we want to get it into shape for producing perfect Debian packages, as long as the upstream-provided metadata is perfect. I’d like to see the Debian Haskell Group follow Enrico’s plan to its extreme conclusion, and this way drive innovation in Debian in general. We’ll see how that goes.
Besides all the technical program I enjoyed the obligatory games of Mao and Werewolves. I also got to dance! On Saturday night, I found a small but welcoming Swing-In-The-Park event where I could dance a few steps of Lindy Hop. And on Tuesday night, Vagrant Cascadian took us (well, three of us) to a blues dancing night, which I greatly enjoyed: the style was so improvisation-friendly that despite having missed the introduction and never having danced blues before, I could jump right in. And in contrast to social dances in Germany, where it is often announced that the girls are also invited to ask the boys, but then it is still mostly the boys who have to ask, here it took only half a minute of standing at the side until I got asked to dance. In retrospect I should have skipped the HP reception and gone there directly...
I’m not heading home at the moment, but will travel directly to Göteborg to attend ICFP 2014. I hope the (usually worse) west-to-east jet lag will not prevent me from enjoying that as much as I could.