Some package maintainers (including me ;) are lazy: they forget about changes in their package once it has been pushed to a repo (put it there and forget about it). And this "last spurt" editing is hard for translators. We translators want to finish in time for the Debian release, but it's a really hard thing.
How wonderful it would be if release notes were generated automatically! The system should help them and us. So, how about adding a [releasenote] section to debian/NEWS?
foobar (0.2.0-1) unstable; urgency=medium

  * update debian/NEWS file

 -- Hideki Yamane <email@example.com>  Wed, 20 Aug 2014 07:12:51 +0900
and in the debian/NEWS file:
foobar (0.2.0-1) unstable; urgency=medium [releasenote: Stretch]

  * "buz" package users should migrate to other packages, since this
    package doesn't provide the buz package anymore.

 -- Hideki Yamane <firstname.lastname@example.org>  Wed, 20 Aug 2014 07:12:51 +0900
Then, parse all debian/NEWS files and generate release notes automatically.
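A first sketch of such a parser could be as simple as the following (this is purely my illustration of the idea, not an existing tool; the sample NEWS file below is the example entry from above):

```shell
# Hypothetical sketch: collect [releasenote: Stretch] entries from all
# debian/NEWS files. Here we create one sample NEWS file to run against.
mkdir -p foobar/debian
cat > foobar/debian/NEWS <<'EOF'
foobar (0.2.0-1) unstable; urgency=medium [releasenote: Stretch]

  * "buz" package users should migrate to other packages, since this
    package doesn't provide the buz package anymore.

 -- Hideki Yamane <email@example.com>  Wed, 20 Aug 2014 07:12:51 +0900
EOF

# Print every entry body between a "[releasenote: Stretch]" header and the
# trailer line (" -- maintainer ...").
find . -path '*/debian/NEWS' | while read -r news; do
  awk '/\[releasenote: Stretch\]/ {collect=1; next}
       /^ -- /                    {collect=0}
       collect && NF              {print}' "$news"
done
```

Run over a whole archive checkout, the collected bodies would form the raw material for the release notes.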
It's just an idea, not well thought through. But you'll probably get the point. The "Big Bang release" style is not good; a CI style is better - don't you think so?
Deployments are often one of the most important aspects of the Drupal development cycle. But sometimes, due to time and/or budget constraints (or the maturity of your company), developers clone databases downstream, manually reproduce content on production environments, and rely on other bad practices on a regular basis.
Today we will show you how we manage small (but critical) changes in module dependencies for our custom modules here at www.DrupalOnWindows.com.
Google publishes Linux software repositories for several of their products, including Google Chrome, which is available from the following apt source:

deb http://dl.google.com/linux/chrome/deb/ stable main
These repositories are signed with an 8-year-old 1024-bit DSA key:

pub   1024D/7FAC5991 2007-03-08
      Key fingerprint = 4CCA 1EAF 950C EE4A B839  76DC A040 830F 7FAC 5991
uid   Google, Inc. Linux Package Signing Key <email@example.com>
sub   2048g/C07CB649 2007-03-08
Asymmetric 1024-bit keys are not considered strong enough and were, for instance, aggressively retired from Google's SSL frontends almost two years ago. Such short keys should not be used to protect the integrity of software package repositories.
Note that this key has a longer 2048-bit ElGamal subkey, which is not actually used to produce signatures, but only for encryption. In fact, only a signing key is needed to sign the files in a secure apt repository, and, for instance, the archive keys used to sign official debian.org repositories do not contain an encryption subkey.
For years, many users have reported an error message like the following when running apt-get update:

W: GPG error: http://dl.google.com stable Release: The following signatures were invalid: BADSIG A040830F7FAC5991 Google, Inc. Linux Package Signing Key <firstname.lastname@example.org>
This error might resolve itself if apt-get update is run again. Apparently, this is due to "bad pushes" occurring in the Google infrastructure. An example of this can be seen in the following curl output:

$ curl -v http://dl.google.com/linux/chrome/deb/dists/stable/Release \
       http://dl.google.com/linux/chrome/deb/dists/stable/Release.gpg
* Hostname was NOT found in DNS cache
*   Trying 126.96.36.199...
* Connected to dl.google.com (188.8.131.52) port 80 (#0)
> GET /linux/chrome/deb/dists/stable/Release HTTP/1.1
> User-Agent: curl/7.38.0
> Host: dl.google.com
> Accept: */*
>
< HTTP/1.1 200 OK
< Accept-Ranges: bytes
< Content-Length: 1347
< Content-Type: application/octet-stream
< Etag: "518b8"
< Expires: Sun, 22 Mar 2015 18:55:19 PDT
< Last-Modified: Fri, 20 Mar 2015 04:22:00 GMT
* Server downloads is not blacklisted
< Server: downloads
< X-Content-Type-Options: nosniff
< X-Frame-Options: SAMEORIGIN
< X-Xss-Protection: 1; mode=block
< Date: Sun, 22 Mar 2015 01:55:19 GMT
< Alternate-Protocol: 80:quic,p=0.5
<
Origin: Google, Inc.
Label: Google
Suite: stable
Codename: stable
Version: 1.0
Date: Thu, 19 Mar 2015 22:55:29 +0000
Architectures: amd64 i386
Components: main
Description: Google chrome-linux repository.
MD5Sum:
 53375c7a2d182d85aef6218c179040ed 144 main/binary-i386/Release
 c556daf52ac818e4b11b84cb5943f6e0 4076 main/binary-i386/Packages
 867ba456bd6537e51bd344df212f4662 960 main/binary-i386/Packages.gz
 2b766b2639b57d5282a154cf6a00b172 1176 main/binary-i386/Packages.bz2
 89704f9af9e6ccd87c192de11ba4c511 145 main/binary-amd64/Release
 fa88101278271922ec9b14b030fd2423 4082 main/binary-amd64/Packages
 1ba717117027f36ff4aea9c3ea60de9e 962 main/binary-amd64/Packages.gz
 19af18f376c986d317cadb3394c60ac5 1193 main/binary-amd64/Packages.bz2
SHA1:
 59414c4175f2cc22e67ba6c30687b00c72a7eafc 144 main/binary-i386/Release
 1764c5418478b1077ada54c73eb501165ba79170 4076 main/binary-i386/Packages
 db24eafac51d3e63fd41343028fb3243f96cbed6 960 main/binary-i386/Packages.gz
 ad8be07425e88b2fdf2f6d143989cde1341a8c51 1176 main/binary-i386/Packages.bz2
 153199d8f866350b7853365a4adc95ee687603dd 145 main/binary-amd64/Release
 7ce66535b35d5fc267fe23af9947f9d27e88508b 4082 main/binary-amd64/Packages
 a72b5e46c3be8ad403df54e4cdcd6e58b2ede65a 962 main/binary-amd64/Packages.gz
 dbc7fddd28cc742ef8f0fb8c6e096455e18c35f8 1193 main/binary-amd64/Packages.bz2
* Connection #0 to host dl.google.com left intact
* Found bundle for host dl.google.com: 0x7f24e68d06a0
* Re-using existing connection! (#0) with host dl.google.com
* Connected to dl.google.com (184.108.40.206) port 80 (#0)
> GET /linux/chrome/deb/dists/stable/Release.gpg HTTP/1.1
> User-Agent: curl/7.38.0
> Host: dl.google.com
> Accept: */*
>
< HTTP/1.1 200 OK
< Accept-Ranges: bytes
< Content-Length: 198
< Content-Type: application/octet-stream
< Etag: "518f4"
< Expires: Sun, 22 Mar 2015 18:55:19 PDT
< Last-Modified: Fri, 20 Mar 2015 04:05:00 GMT
* Server downloads is not blacklisted
< Server: downloads
< X-Content-Type-Options: nosniff
< X-Frame-Options: SAMEORIGIN
< X-Xss-Protection: 1; mode=block
< Date: Sun, 22 Mar 2015 01:55:19 GMT
< Alternate-Protocol: 80:quic,p=0.5
<
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)

iEYEABECAAYFAlULm7YACgkQoECDD3+sWZFyxACeNPuK/zQ0v+3Py1n2s09Wk/Ti
DckAni8V/gy++xIinu8OdUXv7c777V9H
=5vT6
-----END PGP SIGNATURE-----
* Connection #0 to host dl.google.com left intact
Note that both the Release and Release.gpg files were fetched with the same HTTP connection, so the two files must have come from the same web frontend. (Though, it is possible they were served by different backends.) However, the detached signature in Release.gpg does not match the content in Release:

gpgv: Signature made Fri 20 Mar 2015 12:01:58 AM EDT using DSA key ID 7FAC5991
gpgv: BAD signature from "Google, Inc. Linux Package Signing Key <email@example.com>"
Performing the same pair of fetches again, the same Release.gpg file is returned, but the Release file is slightly different:

$ curl -v http://dl.google.com/linux/chrome/deb/dists/stable/Release \
       http://dl.google.com/linux/chrome/deb/dists/stable/Release.gpg
* Hostname was NOT found in DNS cache
*   Trying 220.127.116.11...
* Connected to dl.google.com (18.104.22.168) port 80 (#0)
> GET /linux/chrome/deb/dists/stable/Release HTTP/1.1
> User-Agent: curl/7.38.0
> Host: dl.google.com
> Accept: */*
>
< HTTP/1.1 200 OK
< Accept-Ranges: bytes
< Content-Length: 1347
< Content-Type: application/octet-stream
< Etag: "518f3"
< Expires: Sun, 22 Mar 2015 18:55:04 PDT
< Last-Modified: Fri, 20 Mar 2015 04:05:00 GMT
* Server downloads is not blacklisted
< Server: downloads
< X-Content-Type-Options: nosniff
< X-Frame-Options: SAMEORIGIN
< X-Xss-Protection: 1; mode=block
< Date: Sun, 22 Mar 2015 01:55:04 GMT
< Alternate-Protocol: 80:quic,p=0.5
<
Origin: Google, Inc.
Label: Google
Suite: stable
Codename: stable
Version: 1.0
Date: Fri, 20 Mar 2015 04:02:02 +0000
Architectures: amd64 i386
Components: main
Description: Google chrome-linux repository.
MD5Sum:
 89704f9af9e6ccd87c192de11ba4c511 145 main/binary-amd64/Release
 fa88101278271922ec9b14b030fd2423 4082 main/binary-amd64/Packages
 1ba717117027f36ff4aea9c3ea60de9e 962 main/binary-amd64/Packages.gz
 19af18f376c986d317cadb3394c60ac5 1193 main/binary-amd64/Packages.bz2
 53375c7a2d182d85aef6218c179040ed 144 main/binary-i386/Release
 c556daf52ac818e4b11b84cb5943f6e0 4076 main/binary-i386/Packages
 867ba456bd6537e51bd344df212f4662 960 main/binary-i386/Packages.gz
 2b766b2639b57d5282a154cf6a00b172 1176 main/binary-i386/Packages.bz2
SHA1:
 153199d8f866350b7853365a4adc95ee687603dd 145 main/binary-amd64/Release
 7ce66535b35d5fc267fe23af9947f9d27e88508b 4082 main/binary-amd64/Packages
 a72b5e46c3be8ad403df54e4cdcd6e58b2ede65a 962 main/binary-amd64/Packages.gz
 dbc7fddd28cc742ef8f0fb8c6e096455e18c35f8 1193 main/binary-amd64/Packages.bz2
 59414c4175f2cc22e67ba6c30687b00c72a7eafc 144 main/binary-i386/Release
 1764c5418478b1077ada54c73eb501165ba79170 4076 main/binary-i386/Packages
 db24eafac51d3e63fd41343028fb3243f96cbed6 960 main/binary-i386/Packages.gz
 ad8be07425e88b2fdf2f6d143989cde1341a8c51 1176 main/binary-i386/Packages.bz2
* Connection #0 to host dl.google.com left intact
* Found bundle for host dl.google.com: 0x7ffa33d8b6a0
* Re-using existing connection! (#0) with host dl.google.com
* Connected to dl.google.com (22.214.171.124) port 80 (#0)
> GET /linux/chrome/deb/dists/stable/Release.gpg HTTP/1.1
> User-Agent: curl/7.38.0
> Host: dl.google.com
> Accept: */*
>
< HTTP/1.1 200 OK
< Accept-Ranges: bytes
< Content-Length: 198
< Content-Type: application/octet-stream
< Etag: "518f4"
< Expires: Sun, 22 Mar 2015 18:55:05 PDT
< Last-Modified: Fri, 20 Mar 2015 04:05:00 GMT
* Server downloads is not blacklisted
< Server: downloads
< X-Content-Type-Options: nosniff
< X-Frame-Options: SAMEORIGIN
< X-Xss-Protection: 1; mode=block
< Date: Sun, 22 Mar 2015 01:55:05 GMT
< Alternate-Protocol: 80:quic,p=0.5
<
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)

iEYEABECAAYFAlULm7YACgkQoECDD3+sWZFyxACeNPuK/zQ0v+3Py1n2s09Wk/Ti
DckAni8V/gy++xIinu8OdUXv7c777V9H
=5vT6
-----END PGP SIGNATURE-----
* Connection #0 to host dl.google.com left intact
Note that the Date line in the Release file is different:

@@ -6 +6 @@
-Date: Thu, 19 Mar 2015 22:55:29 +0000
+Date: Fri, 20 Mar 2015 04:02:02 +0000
The file hashes listed in the Release file are in a different order, as well, though the actual hash values are the same. This Release file does have a valid signature:

gpgv: Signature made Fri 20 Mar 2015 12:01:58 AM EDT using DSA key ID 7FAC5991
gpgv: Good signature from "Google, Inc. Linux Package Signing Key <firstname.lastname@example.org>"
Note that the Release.gpg files in the good and bad cases are the same, and the same signature cannot cover two files with different content. Also note that the same mis-signed content is available via HTTPS, so it is probably not caused by a MITM attack.
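As a minimal illustration of why this must fail (my own sketch with invented file contents, using sha1sum as a stand-in for the signature check): a detached signature binds to the exact bytes of the signed file, so two Release files that differ only in their Date line can never both match one Release.gpg.

```shell
# Two hypothetical Release files, identical except for the Date line:
printf 'Origin: Google, Inc.\nDate: Thu, 19 Mar 2015 22:55:29 +0000\n' > Release.a
printf 'Origin: Google, Inc.\nDate: Fri, 20 Mar 2015 04:02:02 +0000\n' > Release.b

# Different bytes -> different digests -> at most one of the two files can
# verify against a given detached signature:
sha1sum Release.a Release.b
```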
The possibility of skew between the Release and Release.gpg files is precisely why inline signed Release files were introduced, but Google's repositories use only the older format with a detached signature.
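For reference, an inline-signed InRelease file wraps the repository metadata and its signature in a single clearsigned document, so the data and the signature can never be served out of sync (a schematic sketch, not the actual Google file):

```
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Origin: Google, Inc.
Suite: stable
Codename: stable
[... remaining Release fields and file hashes ...]
-----BEGIN PGP SIGNATURE-----

[... signature data ...]
-----END PGP SIGNATURE-----
```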
It would be nice if Google could fix the underlying bug in their infrastructure that results in mis-signed repositories being published frequently, because it trains users to ignore cryptographic failures.
Earlier today, I gave a presentation on Ansible and Drupal 8 at MidCamp in Chicago. In the presentation, I introduced Ansible, then deployed and updated a Drupal 8 site on a cluster of 6 Raspberry Pi computers, nicknamed the Dramble.
My slides from the presentation are embedded below, and I'll be posting a video of the presentation as soon as it's available.
PSA: If you are a web professional, work in a digital agency or build mobile apps, please read this article now: Taking the social model of disability online
"The social model of disability reframes discussion of disability as a problem of the world, rather than of the individual. The stairs at the train station are the problem, rather than using a wheelchair."
El Gibbs has reminded me of question time during Gian Wild's keynote at Drupal Downunder in 2012. Gian asserted that accessibility guidelines are a legal requirement for everyone, not just government. There was an audible gasp from the audience.
It's true that our physical environment needs to include ramps, lifts, accessible toilets, reserved parking spaces, etc in order to accommodate those with mobility needs. Multi-lingual societies require multi-lingual signage. There are hearing loops - but for some reason, this "social model" of accessibility doesn't seem to have extended online.
Making the digital world accessible, and counteracting the systemic discriminatory impact of failing to do so, is something we must take seriously. We must build this in during planning and design, and we must make it easy for content editors to maintain WCAG compliance AFTER a site or app is delivered.
Building accessibility features in from the beginning also means it costs less to implement, and delivers a double win of making the whole team more mindful of these issues to begin with. It should be part of the acceptance criteria, it should be part of the definition of done.
I'd like to see us tackle these issues directly in Drupal core. If you're interested in keeping track of accessibility issues in Drupal, you might like to follow drupala11y on twitter, and check out issues on drupal.org that have been tagged with "accessibility".
Accessibility traps might not affect you now, but they will. They are probably affecting people you know right now: people who silently struggle with small font sizes, poor contrast, cognitive load, keyboard traps, video without captions.
My own eyesight and hearing are not what they were. My once able parents now require mobility aids. My cousin requires an electric wheelchair. A friend uses a braille reader, and yet I still forget. It's not front and centre for me, but it should be. Let's all take a moment to think about how we can focus on making our online and digital world more accessible for everyone. It really does benefit us all.
On March 21, 2015, there was a fairly well-attended Camp Organizers BoF at MidCamp in Chicago. I took notes during the BoF and am simply publishing them here for the benefit of camp organizers in the Drupal Community. They're fairly raw, but hopefully they'll be helpful for you!
Got some unexpected results from a hardware upgrade.

First, GPU upgrade

Old videocard
My current video card was getting a bit long in the tooth. I kept delaying the upgrade, because newer Radeon cards are pretty inefficient, energy-wise, and I didn't want to upgrade my PSU as well.
My old card had a TDP of 150W, and I was looking to upgrade to something in the same ballpark. While there were more current similar cards, the performance benefit was not that great - to get a real boost, I'd need to upgrade to something 200W+, if not 250W.
Additionally, I was focused on AMD-only cards because of Linux open-source support, even though newer AMD cards don't support EXA anymore (plain 2D).

Surprised to learn about Nvidia Maxwell
While looking at which AMD cards to upgrade to, I happened to learn about the now ~1-year-old Nvidia Maxwell architecture, which is - surprisingly - much more energy efficient. So efficient that I could upgrade to a top-of-the-line card, with around 6× the performance on most benchmarks compared to my current card, for only a 25W TDP increase.
I couldn't believe I missed this for almost a year, just because I was focused only on AMD cards.
I researched some more and tried to console myself about going back to Nvidia's binary blobs until Nouveau supports GM20x cards well, but in the end the results seemed too good to ignore.

Upgrade: in-game performance and noise
For the card I bought, Nvidia says a PSU with 500W output is the minimum. That matched exactly the PSU I had, and it was from a quality producer (Seasonic), so I bought the new videocard and installed it.
Performance was, surprisingly, as expected: my new card is faster at maximum settings than my old card was on low settings in two or three games that I tested. So all good from this side.
On Linux, moving to the non-free Nvidia driver was a walk in the park, thanks to the maintainers of all things Nvidia: thanks! The last time I used an Nvidia card, many years ago, it was a bit more painful. And yes, Nvidia doesn't enable all monitors upon boot, requiring some reshuffling of the outputs for multi-monitor work. Finding that I still had an .nvidia-settings-rc in my homedir from ages ago was fun.
The downside was that the system was noisier under load: from slightly noisier in some games to much noisier in others. This didn't match my expectations, since the specific version of the card I bought was not overclocked and had extra large fans, and with only a +25W TDP it shouldn't have been significantly noisier. Well, that's it, I said, not all marketing/reviews should be believed.
One interesting thing was that I wasn't clearly able to pinpoint what was generating the additional noise.

PSU upgrade
I was thinking anyway about doing a PSU upgrade as well, since my current PSU was even older than my videocard, and was at its limit.

Dust…
So I bought a PSU as well, and spent about half a day installing it. Why half a day? Because the new PSU is modular, and in combination with the case I have, that meant I could significantly redo the cabling inside my case.
In the process, I found a lot of accumulated dust, which I cleaned. I also found out that parts of the CPU cooler fins were blocked by dust, so the fan was not as effective as when new. I also realised that one case fan was no longer effective in its position, since I have no HDDs that need cooling (this case is split between MB and HDD/PSU areas), so I could move it to a place where it better cools the various PCIe devices.

… and silence!
After all was said and done, the PC booted up just fine. Everything seemed correct, the new position of the fan was drawing in cold air and pushing it over the PCIe cards, so it was time to see if all the cleanup had any effect on the behaviour under load.
So I start a game, the card gets slightly noisier compared to idle, and stays there. I go on playing for 10 minutes, which would have been more than enough to heat up the whole system to the point where it becomes noisy, but nothing: just slightly above the normal "PC is on" noise. Before all the upgrades, my old card was definitely noisier when playing…
I don't know if there is a single, key factor, or if it's a combination of all of:
- better CPU cooling
- PSU with higher wattage, which means it has to work less for the same load; at idle these PSUs are very silent, but not so much at 80-90% of the maximum
- better cooling of the video card, since it no longer just recycles the air inside the case, but actually has cold air pushed over it.
In any case, I'm happy now. I got much better performance (5-6× is nothing to laugh at) for a slight increase in energy consumption at load (~+25W). If I had stopped here, it would have been good enough. But spending those hours cleaning and simplifying the cabling means I also got a much quieter PC.
The only downside is Linux with binary drivers. Waiting now for Nouveau…
One of the fresh additions to Debian, showing Debian's commitment to diversity in all fields, is Laura Arjona Reina. A helpful hand on channels, she brings a great flux of FLOSS energy with her. Although she applied for non-packaging Debian Developer status, Laura recognizes that there are still some technical aspects she must grasp. Her dedication to FLOSS, and to trying to solve some of its issues, is astonishing: she does a lot of self-hosting and system administration. Yes, you read that right - she does all of that and still applied for non-packaging Debian Developer status. She is a perfect example of how FLOSS enhances humans in many ways. Hello, Laura.
Who are you?
I am Laura Arjona, I work as an IT assistant at the Technical University of Madrid, I am married and I have a son, and I use and promote free (libre) software at work, at home, and with friends. I have a nice time contributing to Debian and other FLOSS projects, but I always want to do more than what I actually manage to do (I hope I can improve that as time goes by... and maybe when I get retired I will be the SuperLArjona that you wrote about!).
What parts of the FLOSS community are you engaged in?
I use Debian, and CyanogenMod + F-Droid on my phone. I coordinate the translation of the Debian website into Spanish, help with FSFE website translation too, and translate some other free software (GNU MediaGoblin, F-Droid, Android apps that I use, web services that we use at work...). I use and promote some free social networks: pump.io, GNU Social, and XMPP. My work/friends environment is mostly Windows/Android, so I try to find/promote libre software replacements or interesting applications for them. I give free FLOSS stickers to everybody showing interest in libre software, and a nice Debian sticker if they finally install it on their computer.
Setup of your main machine?
My machine is a humble Compaq Mini 110C laptop (32bits, Atom N270@1.60GHz, 1GB RAM) and I have Debian Jessie (future-stable ATM) with xfce on it. I'm not tied to particular tools, for example I use Mousepad for editing here in my xfce, Kate in my desktop at work, nano in the server. The only "tuning" that I always do is to set a dark background for terminals and text editors, but I don't even switch to a desktop dark theme... (BTW I love Jessie's theme, "Lines"!). I know there are awesome pieces of software out there (hey emacs-org-mode!), but I just don't have fun having to learn them by myself (no LUG near, I'm afraid...).
Some memorable moments from Debian conferences?
I've only been to the Barcelona MiniDebConf Women 2014 and it was great. It was memorable that I promoted keysigning for that MiniDebConf, and came home with lots of signatures and papers to verify and sign... and then I could not remember my GPG main key passphrase! So I hid under my desk for two months, and then decided to start again (I created a new key and tried to meet some Debian people in Madrid...). So I guess I should go to some (Mini)DebConf again.
You are currently involved in process of becoming non-packaging Debian Developer - what made you take that step?
I began translating in 2011, and since then I have enjoyed contributing to Debian (women, l10n-es, website, publicity). I'm quite regular with the translation work, and applying for DD is a plan to 'force' myself to find chunks of time to contribute more in the other areas too. I also believe that by applying I may help other people to apply too, or get more involved, or become more visible. So here I am.
Although you applied for non-packaging Debian Developer you recognize that there is still a technical learning curve in Debian even for that - what are the technical aspects a non-Developer should grasp?
Well, I suppose it depends on the area you are contributing to. In Publicity you find repositories in git (bits.debian.org) and subversion (Debian Project News). The website uses CVS (www.debian.org)... So you need the basics of 3 different version control systems to commit your changes (or you send them to the mailing list and wait for somebody to commit them). We use a mail robot to coordinate the translation work, so you need to write the subject with a certain format, and some people complain when somebody sends mail in HTML (plain text is preferred). There are some other tools, such as IRC and GPG, that I began using just for contributing to Debian. Once you learn them a bit and you learn how Debian works, you understand that they are great tools and you fall in love (hey meetbot and KGB! hey i18n.debian.org!), but I wonder how people with no technical background, or even Computer Engineering students nowadays, accustomed to instant messaging on their phones, fancy web interfaces and so on, look at these tools and just don't even try.
How do you see the future of Debian development?
I don't know, Debian is huge... Some areas in which I hope we, as a community, find the way to work more: packaging (or help configuring) web applications or network services, provide LTS support, and keep on improving outreach/diversity.
What are your future plans in Debian, what would you like to work on?
In the Spanish team area, my plan is to go on translating the website, jump more often into translating package descriptions too, and help first-time contributors to stay involved. In the website and publicity teams, I hope I manage to put in some weekly time to help with pending tasks/bugs, and serve as liaison with the other areas in which I'm involved (women, l10n, contributors...). If I become a DD, I would like to create/adopt some data sources for contributors.debian.org, or convince people to do it. I'm not sure if I will be able to attend DebConf some year; meanwhile, at least, I'll continue trying to help with the blog and promotion (as a -publicity-team member).
Why should developers and users join Debian community? What makes Debian a great and happy place?
For using it: the desktop experience has improved very much in the last years, there's a clear separation between free and non-free software so Debian users always know where they stand, and there is wide documentation (in English, at least; probably in other languages too). For getting involved: I like very much that you can lurk on what almost everybody does: just join some mailing lists or IRC channels; the Debian people work in the open. So you know a bit about where you are jumping in. Later you learn that everything is easier than it looked from the outside, because you make friends, and with friends everything is better.
Contributions made to Debian have many chances to reach a very wide community: Debian users, upstream projects, and the hundreds of derivatives. It's a quite horizontal, decentralized organization (that has its downsides too, but I can live with them).
Is there something you would change in FLOSS ecosystem?
We need much more internationalization and localization effort. People don't need English to use libre software on their desktop nowadays - it's one of our big strengths - but they definitely need English to use libre software on Android, or to solve problems with the libre software they use on their computers/devices, or to contribute to any community. I think we need more local groups for user support/outreach, more libre-software-based translation tools and online services (replacements for Google Translator, for Transifex...), and more internationalization and localization efforts (manuals, websites... not only the software itself). If we work hard in this area, we'll gain many more users and many more contributors.
Why does privacy matter to you?
I have a son; in our family we exchange photos, and sometimes I have private conversations using the smartphone, mail or other internet services. I want to have the chance that, the day that I (or my family) need privacy, we can have it easily. And I want the people that really need privacy today to have proper tools at hand. So I try to use PGP, self-host my multimedia website, and use decentralized, free-software-based networks and XMPP mobile apps... to help those projects thrive. I try to do my part for the network effect!
You are upset with the rise of GitHub - why is that, and what change would you like to see?
I totally agree with Mako's essay "Free Software Needs Free Tools". Yes, many non-free web services are easier and have better features right now, but the key is that only dogfooding can change that, just as it changed things for the GNU/Linux desktop and Open/LibreOffice, for example...
I would like to see more people trying to self-host, use and promote libre-software-based forges, so that, in addition to avoiding vendor lock-in and gaining consistency in our discourse, we polish the available tools and eventually win the battle on the technical side as well.
You are hosting your own instance of MediaGoblin - as it is not officially packaged in Debian yet, how do you manage it, and how would you encourage others to do the same?
I followed the MediaGoblin documentation for its last stable release, and hung out on the IRC channel when in doubt or in trouble. It was not so hard, because it's well documented for Debian systems, and most of the dependencies are already packaged (in stable and testing). MediaGoblin is on its way to stretch too (thanks simonft and the rest of the people working on this!). I'm documenting my adventures with self-hosting in my blog (http://larjona.wordpress.com), but I need to write more often, and put more time into my small server (now I'm trying to self-host my git projects with cgit, and I want to set up an XMPP server and Etherpad Lite too).
With self-hosting, you are trying to resolve personal issues with services such as WhatsApp and other non-free parts of our everyday lives - what issues are you hitting along the way, and how do you resolve them?
"#iloveemail", but people don't love it anymore, it seems... I've researched a bit about instant messaging to try to propose alternatives to WhatsApp to my family and friends. It seems Conversations, with a community XMPP server where we can create multi-user chat rooms, can be a replacement, so my plan is to try it during this year. Meanwhile, I've set up the MediaGoblin site so I upload photos there instead of sending them with the phone, and for 1-to-1 chat I try to move people to Kontalk (instant messaging, GPG, photos, voice notes...). For videocalls, I promote Jitsi or just point people to meet.jit.si/FancyNameofChatRoom. I have an account at owndrive.com and will try to host Owncloud too. We'll see what happens.
You are interested in radio shows - what draws you to that field, and will we see any podcasts from Laura soon?
I like to talk and I'm not a shy person, so the few times that any of my social groups had a chance to "talk on the radio", I volunteered and enjoyed it. This has happened 6 or 7 times in my life (in Spanish, talking about social activism or politics; no recordings, though!). Some months ago the people of "El Binario" invited me to talk at "findenegro" about pump.io and free networks (in Spanish); I accepted and had a very nice time (audios in my MediaGoblin). I wish I had more free time to listen to podcasts, and maybe to join some other people to participate in a program on a regular basis. OTOH, my son asks for a tale almost every day... I follow one of the approaches from Gianni Rodari's "The Grammar of Fantasy": take some day-to-day facts, add something unexpected and crazy, and tailor a short story. Maybe I could record them and publish them on my MediaGoblin... Of course their literary quality is not even near Gianni Rodari's, but the people that listened to them when I was storytelling (in the metro, my mother at home, some friends...) say they are fun and interesting. Who knows!
Yesterday I was hacking on some Ruby code and getting a weird error which I thought was caused by mutually recursive require statements (i.e. A requires B, and B requires A). Later I realized that this is not an issue in Ruby, since the interpreter keeps track of what has already been required and will not enter a loop. But during the investigation I came up with something that turned out to be useful.
rrg will read the source code of a Ruby project and generate a graph based on the require statements in the code; nodes represent the source files and an arrow from A to B means that A contains a `require ‘B’` statement.
From the README:
Just run rrg at the root of your project. rrg will parse the code inside lib/, and generate a graph description in the Graphviz format. You can pipe the output to Graphviz directly, or store it in a file and process it to generate an image.
If you call rrgv instead, it will automatically process the graph with Graphviz, generate a PNG image, and open it.
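The core idea fits in a few lines of Ruby. The sketch below is a hypothetical simplification, not rrg's actual code: it walks a directory tree, extracts require statements, and prints a Graphviz DOT description:

```ruby
require 'find'

# Build a DOT graph description from the `require` statements found in
# the .rb files under `root`. Each node is a source file (without the
# .rb extension); an edge A -> B means A contains `require 'B'`.
def require_graph(root)
  edges = []
  Find.find(root) do |path|
    next unless path.end_with?('.rb')
    node = path.sub(%r{\A#{Regexp.escape(root)}/?}, '').sub(/\.rb\z/, '')
    File.readlines(path).each do |line|
      if line =~ /^\s*require\s+['"]([^'"]+)['"]/
        edges << [node, Regexp.last_match(1)]
      end
    end
  end
  lines = ['digraph requires {']
  edges.each { |from, to| lines << %(  "#{from}" -> "#{to}";) }
  lines << '}'
  lines.join("\n")
end
```

The output can then be piped to Graphviz, e.g. `ruby sketch.rb | dot -Tpng > requires.png`. The real tool also handles bin/ scripts and other details this sketch ignores.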
Let’s see some examples. First the classical “analysing itself” example, the require graph for rrg itself:
Not very interesting, since all of the logic is currently in the main binary and not in library code. But 1) when I do the refactorings I want to, there will be more library code, and 2) while writing this I also implemented parsing of scripts in bin/.
Now chake, which is a slightly larger project:
An even larger (but still not that big) project, gem2deb:
Note that these visualizations may not be accurate representations of the actual source code. In Ruby, nothing stops you from implementing class A::B in lib/x/y.rb, but most reasonable code will make sure that filenames and class namespaces actually match.
If you are working on a sane codebase, though, visualizing graphs like this helps you understand the general structure of the code and spot possible improvements. The gem2deb graph gave me some ideas already, and I haven't even paid much attention to it yet.
Notice: there were several requests for me to elaborate more on my path to Debian and its impact on my life, so here it is. It's going to be a bit long, so anyone who isn't interested in my personal Debian journey should skip it. :)
In 2007 I enrolled in the Faculty of Mechanical Engineering (at first in the Department of Industrial Management, later transferring to the Department of Mechatronics - this was possible because the first 3 semesters are the same for both departments). By the end of that year I was finishing my tasks (consisting primarily of calculations, some small graphical designs and write-ups) when a famous virus, called "RECYCLER" by users, sent my Windows XP machine into oblivion. Not only did it take control of the machine and spawn so many processes that the system would crash itself, it actually deleted everything from the hard disk before it killed the system entirely. I raged - my month-old work, full of precise calculations and a lot of design details, was just gone. I started cursing, which always ended in weeping: "Why isn't there an OS that can withstand all viruses, even if it looks like old DOS!". At that time my roommate was my cousin, who had used Kubuntu in the past and currently had SUSE dual-booted on his laptop. He called me over and started talking about this thing called Linux, and how it's different but de facto has no viruses. Well, show me this Linux - my thought was that it's probably so ancient and little-used that it looks like something from the pre-Windows 3.1 era, but when SUSE booted up it had a much more beautiful UI (it was KDE, and compared to XP it looked like the most professional OS ever).
So I was thrilled, installed openSUSE, found some rough edges (I knew immediately that my work with professional CAD systems would not be possible on Linux machines), but overall I was sold. After that he even told me about distros. Wait, WTF are distros?! So, he showed me distrowatch.com. I was amazed. There was not only a better OS than Windows - there were dozens, hundreds of them. After some poking around I installed Debian KDE - and it felt great, working better than openSUSE, but now, like most newbies, I was on fire to try more distros. So I went around with Fedora, Mandriva, CentOS, Ubuntu, Mint, PCLinuxOS, and at the beginning of 2008 I stumbled upon Debian docs which talked about GNU and the GNU Manifesto. To be clear, as a high-school kid I was always very much attached to the idea of freedom, but I had started losing faith by the time I got to university (the Internet still wasn't taking up too much of our time here; youth still spent most of the day outside). So the GNU Manifesto was really a big thing for me, and Debian is a social bastion of freedom. Debian (now with GNOME2) was installed on my machine.
With all that hackerdom around Debian, I started trying to dig into some code. I had never read a book on coding (to this day I still haven't started and finished one), so after a few days I decided to code Tetris in C++, thinking I would finish it in two days at most (that feeling that you are a powerful and very bright person) - I finished it after one month, in much pain. So instead I learned about keeping a Debian system running and explored some new packages. I got thrilled over radiotray and slimvolley (I even held a tournament in my dorm room), started helping on #debian, was very active in conversations with others about Debian, and even installed it on a few laptops (I became the de facto technical support for the users of those laptops :D ).
Then came 2010, which, together with the negative flow that started in the second half of 2009, began to crush me badly. I had been promised a trip to Norway to continue my studies in robotics, and the professor lied (that same professor is still on the faculty, even after he was caught in a big corruption scandal over buying robots - he bought 15-year-old robots from the UK, although he got money from Norway to buy new ones). My relationship came to a hard end and had a big emotional impact on me. I failed a year at the faculty. My father stopped financing me and stopped talking to me. My depression came back. Alcohol took over me. I was drunk every day just not to feel anything. Then came the end of 2010, and I somehow learned that DebConf would be in Banja Luka. WHAT?! DebConf in the city where I live. I got onto #debconf, and in December 2010/January 2011 I became part of the famous "local local organizers". I was still getting hammered by alcohol, but at least I was getting out of depression. IIRC I met Holger and Moray in May and had a great day (a drop of rakia that was too much for all of us), and in their way of behaving there was something strange. Beautiful, but strange. Both were radiating a unique energy of liberty, although I am not sure they were aware of it. Later, during DebConf, I felt that energy from almost all Debian people, which I can't explain. I don't feel it today - not because it's not there, but because I think I have integrated so much into the Debian community that it's now a natural feeling; people close to me say they feel it when I talk about Debian.
DebConf time in Banja Luka was awesome. First I met Phil Hands and Andrew McMillan, who were a crazy team, and the local local team was working hard (I even threw up while working in Banski Dvor, because of all the heat and probably too little sleep due to excitement). I also met the crazy Mexican Gunnar (aren't all Mexicans crazy?), played Mao (never again, thank you), and hung around smart but crazy people (love you all), among whom I must mention Nattie (a bastion of positive energy), Christian Perrier (who coordinated our Serbian translation effort), Steve Langasek (who asked me to find a physiotherapist for his co-worker Matthias Klose, IIRC), Zack (not at all an important guy at that time), Luca Capello (who gifted me a swirl on my birthday) and so many others that just naming them all would be a post in itself. During DebConf there were also some hard times - my grandfather died on the 6th of July and I couldn't attend the funeral, so I still had that sadness in my heart, and Darjan Prtic, a local team member who came from Vienna, committed suicide on my birthday (23 July). But DebConf as a conference was great, and more importantly the Debian community felt like a family - Meike Reichle told me that it was one. The night it finished, Vedran Novakovic and I cried. A lot. Even days after, I would get up in the morning with the feeling that I needed to do something for DebConf. After a long time, I felt alive. By the end of the year, I adopted a package from Clint Adams, and Moray became my sponsor. In the last quarter of 2011 and the beginning of 2012, I (as part of a LUG) held talks about Linux, ran the first-ever Linux installation event in the Computer Center, and installed Debian on more machines.
Now, fast-forwarding with some details: I was also at DebConf13 in Switzerland, where I met some great new friends such as Tincho and Santiago (and many, many more); Santiago was also my roommate in Portland at the following DebConf. In Switzerland I had a really great and awesome time. In 2014 I was at DebConf14, by then maintaining a few more packages and having applied for DD, and I met some new friends, among whom I must single out Apollon Oikonomopoulos and Costas Drogos, whose friendship is already deep for such a short time - I already know they are life-long friends. Also thanks to Steve Langasek, because without his help I wouldn't have been in Portland with my family, and he also gave me an Arduino. :) In 2015 I am currently at my village residence; thanks to Debian I have 5 years of working experience as a developer, and there is still a lot to go through, learn and do, but my love for the Debian community is magnitudes bigger than when I thought I loved it the most. I am also going through my personal evolution, and people from Debian showed me to fight for what I care about, so I plan to do so.
I can't write everything or name all the people I met, but believe me when I say that I remember most of it, and all of you impacted my life, for which I am eternally grateful. Debian and its community literally saved my life, sprang new energy into me, and changed me for the better. Debian's social impact is far bigger than its technical one - and when you know that Debian is a bastion of technical excellence, you can maybe picture the greatness of Debian. Some of the greatest minds are in Debian, but the most important thing isn't the sheer amount of knowledge; it's the enormous empathy. I just hope that in the future I can show more people what Debian is, and find lost souls like I was, to give them hope and show them that we can make the world a better place, and that everyone is capable of living and doing what they love.
P.S. To this day I am still hoping and waiting to see Bdale write a book about Debian's history - one in which, I think, many of us would admire the work done by project members, laugh about many situations, and have fun reading about a project that seemed to have nothing to do but fail, and yet stands stronger than ever, with roots deep in our minds.
Not so long ago, many of us were satisfied handling deployment of our projects by uploading files via FTP to a web server. I was doing it myself until relatively recently and still do on occasion (don’t tell anyone!). At some point in the past few years, demand for the services and features offered by web applications rose, team sizes grew and rapid iteration became the norm. The old methods for deploying became unstable, unreliable and (generally) untrusted.
So was born a new wave of tools, services and workflows designed to simplify the process of deploying complex web applications, along with a plethora of accompanying commercial services. Generally, they offer an integrated toolset for version control, hosting, performance and security at a competitive price.
Platform.sh is a newer player on the market, built by the team at Commerce Guys, who are better known for their Drupal eCommerce solutions. Initially, the service only supported Drupal based hosting and deployment, but it has rapidly added support for Symfony, Wordpress, Zend and ‘pure’ PHP, with node.js, Python and Ruby coming soon.
It follows the microservice architecture concept and offers an increasing amount of server, performance and profiling options to add and remove from your application stack with ease.
I tend to find these services make far more sense with a simple example. I will use a Drupal platform as it’s what I’m most familiar with.
Platform.sh has a couple of requirements that vary for each platform. In Drupal’s case they are:
- An id_rsa public/private key pair
- The Platform.sh CLI
I won’t cover installing these here; more details can be found in the Platform.sh documentation section.
I had a couple of test platforms created for me by the Platform.sh team, and for the sake of this example, we can treat these as my workplace adding me to some new projects I need to work on. I can see these listed by issuing the platform project:list command inside my preferred working directory.
Continue reading First Look at Platform.sh – a Development and Deployment SaaS.
The UDD bugs interface currently knows about the following release critical bugs:
- In total: 155 bugs
- Affecting Jessie: 87 (key packages: 61). That's the number we need to get down to zero before the release. They can be split into two big categories:
  - Affecting Jessie and unstable: 71 (key packages: 52). Those need someone to find a fix, or to finish the work to upload a fix to unstable:
    - 15 bugs are tagged 'patch' (key packages: 12). Please help by reviewing the patches, and (if you are a DD) by uploading them.
    - 1 bug is marked as done, but still affects unstable (key packages: 0). This can happen due to missing builds on some architectures, for example. Help investigate!
    - 55 bugs are neither tagged patch, nor marked done (key packages: 40). Help make a first step towards resolution!
  - Affecting Jessie only: 16 (key packages: 9). Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
How do we compare to the Squeeze and Wheezy release cycles?

| Week | Squeeze       | Wheezy        | Jessie        |
|------|---------------|---------------|---------------|
| 43   | 284 (213+71)  | 468 (332+136) | 319 (240+79)  |
| 44   | 261 (201+60)  | 408 (265+143) | 274 (224+50)  |
| 45   | 261 (205+56)  | 425 (291+134) | 295 (229+66)  |
| 46   | 271 (200+71)  | 401 (258+143) | 427 (313+114) |
| 47   | 283 (209+74)  | 366 (221+145) | 342 (260+82)  |
| 48   | 256 (177+79)  | 378 (230+148) | 274 (189+85)  |
| 49   | 256 (180+76)  | 360 (216+155) | 226 (147+79)  |
| 50   | 204 (148+56)  | 339 (195+144) | ???           |
| 51   | 178 (124+54)  | 323 (190+133) | 189 (134+55)  |
| 52   | 115 (78+37)   | 289 (190+99)  | 147 (112+35)  |
| 1    | 93 (60+33)    | 287 (171+116) | 140 (104+36)  |
| 2    | 82 (46+36)    | 271 (162+109) | 157 (124+33)  |
| 3    | 25 (15+10)    | 249 (165+84)  | 172 (128+44)  |
| 4    | 14 (8+6)      | 244 (176+68)  | 187 (132+55)  |
| 5    | 2 (0+2)       | 224 (132+92)  | 175 (124+51)  |
| 6    | release!      | 212 (129+83)  | 161 (109+52)  |
| 7    | release+1     | 194 (128+66)  | 147 (106+41)  |
| 8    | release+2     | 206 (144+62)  | 147 (96+51)   |
| 9    | release+3     | 174 (105+69)  | 152 (101+51)  |
| 10   | release+4     | 120 (72+48)   | 112 (82+30)   |
| 11   | release+5     | 115 (74+41)   | 97 (68+29)    |
| 12   | release+6     | 93 (47+46)    | 87 (71+16)    |
| 13   | release+7     | 50 (24+26)    |               |
| 14   | release+8     | 51 (32+19)    |               |
| 15   | release+9     | 39 (32+7)     |               |
| 16   | release+10    | 20 (12+8)     |               |
| 17   | release+11    | 24 (19+5)     |               |
| 18   | release+12    | 2 (2+0)       |               |
One reason no-one listens to Nedjo Rogers on this subject is that what he's saying is not that simple to understand. But I assure you it's well worth the effort. He's saying that the Drupal 8 Configuration Management system is built around a single use case that favors a certain enterprise need, namely that of single site configuration stabilization and propagation to other environments, principally live.
In his initial article on this subject (Bibliography #4, Nedjo Rogers) Nedjo wrote that the fact that “Sites own their configuration, not modules” (as stated in Bibliography #3, Alex Pott) constitutes nothing less than “a seismic shift in Drupal that's mostly slipped under the radar”. Nedjo first reviews the history of exportable configuration in Drupal, and correctly highlights the fact that there are two main use cases involved:
To share and distribute configuration among multiple sites.
To move configuration between multiple versions of a single site.
“By and large, the two use cases serve different types of users. Sharing configuration among multiple sites is of greatest benefit to smaller, lower resourced groups, who are happy to get the benefits of expertly developed configuration improvements, whether through individual modules or through Drupal distributions. Moving configuration between different instances of the same site fits the workflow of larger and enterprise users, where configuration changes are carefully planned, managed, and staged....”
“If anything, the multiple site use case was a driving force behind the development and management of configuration exports. The Features module and associated projects - Strongarm, Context, and so on - developed configuration exporting solutions specifically for supporting distributions, in which configuration would be shared and updated among tens or hundreds or thousands of sites.”
“For Drupal 8, however, the entire approach to configuration was rewritten with one use case primarily in mind: staging and deployment. The configuration system "allows you to deploy a configuration from one environment to another, provided they are the same site."
If this is the case, then we really need to get to the bottom of this issue. The objective of this article is to briefly summarize the whole debate (see Bibliography), remove any items that are blurring or clouding the issue, and then underline three times those points that really deserve not being kept “off the radar” and which I hope others will delve into so that we can get a clear picture of perspectives and solutions (many of which Nedjo himself, and others, are spearheading already in third party modules; see below). It's an important question: what's in store for us in terms of industry-wide best practices for Configuration Management in Drupal 8, taking into account all important use cases? And it's a question that Nedjo took the trouble to raise in the Drupal Community as far back as January, 2012. But no-one listened.
Today is an exciting day for the Drupal community! Collectively, we’re all moving a few steps closer to a full release of Drupal 8 with the help of a program called Drupal 8 Accelerate. This is a pilot program from the Drupal Association designed to put $250,000 of community funds towards eliminating the last 50 critical issues between us and release.
The Drupal Association has been an incredible leader in the effort to release Drupal 8, pledging to set aside $62,500 to match every dollar donated to provide Drupal 8 Acceleration Grants.

What’s the latest with Drupal 8 Accelerate?
But we knew we could do even more to turbocharge this project. Today we are announcing that D8 Accelerate is now getting a huge boost from seven anchor sponsors, who have pledged to “match the match,” amplifying every donation made and accelerating the community’s investment in Drupal 8.
Phase2, Acquia, Appnovation, Lullabot, Palantir, PreviousNext, and Wunderkraut have collectively pledged another $62,500 to match the Drupal Association’s matches of community donations. This is an all-out, everyone-in community effort to move D8 from beta to release. Our goal is to bring the total to $250,000 available for grants by September. We are now more than half way there.

Why should we all want Drupal 8 to succeed?
The answer is simple: D8 will empower us to use Drupal the way many of us have wanted to for a long time. D8 improves the API layer, multi-lingual capabilities, theming and the editor experience. It also makes it much more powerful for developers (which matters a lot to us at Phase2).
Historically, it has been a challenge to integrate new libraries or different front-end elements without a lot of leg work. Imagine, for example, how the availability of Twig theming will enhance your projects. Or how flexible implementations can be with dependencies on meaningful external software integrated through Symfony routing. We will even be able to more seamlessly incorporate mobile apps into the digital strategies we develop, correcting one of the main weak points of previous Drupal releases.
Put simply, Drupal 8 is a win for our collective clients, and therefore it is a win for all of us.

Phase2 & Drupal 8
At Phase2, we want Drupal 8 to succeed because our clients have increasingly big needs and major challenges, and we believe that Drupal 8 is moving in the direction to address those. For that reason, we’ve made investing in Drupal 8 a priority, not only by way of the Drupal 8 Accelerate program, but also in the form of contributed code and shared knowledge gleaned from major enterprise Drupal 8 implementations.
Taking on early Drupal 8 implementations enables us to commit our people to the D8 cause, while directly supporting our client’s mission. It also provides us with a group of advanced scouts to report back from the front lines and develop training for the rest of our team.
Principal among these scouts was Software Architect Jonathan Hedstrom, whose contributions to D8 include Drush support, core patch reviewing, testing and re-rolling, writing tests, module upgrades (Redis), and more. In addition to Jonathan, Senior Developer Brad Wade made important front-end contributions, while Software Architect Mike Potter has been a significant part of Features development.
We’ll be sharing a lot of what we’ve learned from our D8 work so far at DrupalCon Los Angeles, so stay tuned for our session announcements!

An all-out, everyone-in effort
It took the whole Drupal community – including individuals, companies, the Drupal Association – to get D8 to the place it is now. We are honored to have contributed alongside everyone involved. It has certainly been a heavy lift for many community members, so to each of these people and organizations, we say thank you. The success of Drupal 8 is the most important priority of our community.
However, Drupal 8 still needs a strong push to get over the finish line. So we must ask one more time for the support of our fellow Drupalers. We all have a major stake in the success of the project, and everyone can play an instrumental role getting it out the door. Even the smallest donation makes a difference when every dollar you donate is now matched, compounding your impact. You can read more about how the funds actually support the grant program to achieve the work on the Drupal Association D8 Accelerate page.
If you would like to donate, please visit the D8 Accelerate Fundraising site and please consider using my profile as a way to easily make your contribution so we can start enjoying those launch parties!
Last November we launched Drupal 8 Accelerate, a grant program designed to eliminate Drupal 8 release blockers. Through the program, we’ve made a small number of grants that have had a huge impact. In fact, we only have about 50 release blockers left between us and release. So now the Association is going to take it to the next level. We've already pledged $62,500 of our general operating budget in 2015 as matching funds for your donations. Now we are announcing that the board has partnered with 7 outstanding community supporters to “match the match” and provide another $62,500 for the program, bringing us to $125,000 available for grants.
Now it's your turn! We're asking you to help us raise another $125,000 to make the total amount available for these grants $250,000. You can give knowing that every dollar you contribute is already matched by the Association and these anchor donors, doubling your impact. Your donations will allow us to make more grants, faster, increasing our impact and getting D8 out the door!
This is an all-out, everyone-in effort to raise $250,000 to kill the last release blockers in our way. This is our moment - together, we are going to move Drupal 8 from beta to release with the Drupal 8 Accelerate program. We already know it works. Drupal 8 Accelerate grants have already tackled release-blocking issues related to menus, entity field validation, and caching. As a donor, you will always know exactly what you're funding, because we're making it all public.
Join us today and make your donation. The sooner we get this done, the sooner we can all enjoy those launch parties!
Special thanks to our anchor donors, Acquia, Appnovation, Lullabot, Palantir.net, Phase2, PreviousNext, and Wunderkraut, for making this matching campaign possible. These seven organizations stepped up to the plate and made this entire campaign possible. Thank them on Twitter using the #D8Accelerate hashtag.
The D8 Accelerate project is designed to help move Drupal 8 from the initial beta to a full release. This directly relates to the Association's mission: uniting a global open source community to build and promote Drupal. This is a pilot program from the Drupal Association to put $250,000 of community funds toward accelerating the release of Drupal 8, due to the strategic impact this work has on the entire Drupal ecosystem.
Jo and I just got back from our massive holiday in Australia. We had an awesome time overall, fitting in lots of stuff in 4 weeks. Time for a quick write-up and some photos!
We flew into Sydney, then straight onto Uluru for the obligatory sunset and sunrise viewings. We didn't climb the Rock, both for sensitivity reasons and (to be more honest!) it looked way too much like hard work in 40-plus degree heat.
Coach over to Alice Springs, where we had a very quick look around before taking the Ghan train down to Adelaide. The train was fun for a day, and we got to see a lot of desert. In Adelaide, we had a look around the city (lovely colonial feel!) and got a couple of evenings in fun comedy shows at the Fringe. Great fun!
On to Tasmania, where we did a quick (3-day) run around the island by car: into Hobart, then up the east coast. We stopped in Swansea (a nice version!) for some heavenly Devonshire teas, then went on up to Grindelwald near Launceston. We visited Trowunna Wildlife Park to see (and cuddle!) lots of local animals, which was amazing - Jo's favourite day of the holiday. Then on to Queenstown, and we drove back down to Hobart past some impossibly beautiful views around Cradle Mountain. Tassie's gorgeous - like the best bits of Scotland, Wales and Cornwall, but with even fewer people and better weather.
Next, on to Sydney for Harry and Cath's wedding. We stayed up in Chatswood. Not knowing anything about the area beforehand, we were a little surprised to basically find ourselves back in Hong Kong! We spent most of the weekend catching up with friends from the wedding group, and the wedding itself was at Quarantine Station, overlooking the harbour. It couldn't have been a more perfect location / weather / view for our friends' big day! We squeezed in a couple of the open-top bus tours of Sydney on the Sunday, but got caught in the horrendous storm that hit and ended up sheltering downstairs under cover on the bus. I'm told Bondi is lovely, but it all looked grey from the bus. :-P
Down to Melbourne on the train (bit of a wasted day, in hindsight), where we wandered around the city quite a bit. Caught up with an old friend who lives there for a day, and we did a wine tour up the Yarra Valley which was fun too.
Up to Port Douglas, where we headed out to the Reef for my highlight of the holiday: a snorkelling tour with some local marine experts who showed us the local flora and fauna. We also visited a local Aboriginal cultural centre, skyrail and scenic railway around Kuranda village.
Down to Hervey Bay and a 1-day tour of Fraser Island - an amazing place in combination with quite a thrill-ride experience just being driven around on the sand tracks. Finally, down to Brisbane where we wandered around and visited both the Lone Pine Koala Sanctuary (more cuddles!) and the Gold Coast. Then the long flights home. Whew!
We're knackered now. We knew we couldn't fit everything in, but we're glad we travelled all over and got a taste of almost everything. Now we can work out where we want to spend more time on our future visit(s). We'll definitely want to head over and see Perth and some of WA next time, and definitely spend more time in Tasmania, Sydney and Adelaide.
On the Maintainer Dashboard side, the main new feature is a QA checks table that provides an overview of results from lintian, reproducible builds, piuparts, and ci.debian.net. Check the dashboard for the Ruby team for an example. Also, thanks to Daniel Pocock, the TODO items can now be exported as iCalendar tasks.
Bugs Search now has much better JSON and YAML outputs. It's probably a good starting point if you want to do some data mining on bugs. Packages can now be selected using the same form as the Maintainer Dashboard's, which makes it easy to build your own personal bug list, and will remove the need for some of the team-specific listings.
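To give an idea of the kind of data mining this enables, here is a short Ruby sketch that counts bugs per source package from a saved YAML export. The 'source' field name is an assumption about the output format, not something guaranteed by UDD; adjust it to the keys actually present in the file you download:

```ruby
require 'yaml'

# Count bugs per source package from a saved UDD bugs search (YAML output).
# NOTE: the 'source' key is an assumed field name; check your actual export.
def bugs_per_package(yaml_text)
  bugs = YAML.safe_load(yaml_text)
  bugs.group_by { |bug| bug['source'] }
      .transform_values(&:size)
end
```

From there it is a small step to sort the result and find the packages with the most open RC bugs.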
Many bugs have been fixed too. More generally, thanks to the work of Christophe Siraut, the code is much better now, with a clean separation between the data-analysis logic and the rendering side, which will make future improvements easier.
I've run OpenWRT on my home router for a long time, and these days I maintain a couple of packages for the project. In order to make most efficient use of the hardware resources on my router, I run a custom build of the OpenWRT firmware with some default features removed and others added. For example, I install bind and ipsec-tools, while I disable the web UI in order to save space.
There are quite a few packages required for the OpenWRT build process. I don't necessarily want all of these packages installed on my main machine, nor do I want to maintain a VM for the build environment. So I investigated using Docker for this.
Starting from a base jessie image, which I created using the Docker debootstrap wrapper, the first step was to construct a Dockerfile containing instructions on how to set up the build environment and create a non-root user to perform the build:

FROM jessie:latest
MAINTAINER Noah Meyerhans <email@example.com>
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y install \
    asciidoc bash bc binutils bzip2 fastjar flex git-core g++ gcc util-linux \
    gawk libgtk2.0-dev intltool jikespg zlib1g-dev make \
    genisoimage libncurses5-dev libssl-dev patch perl-modules \
    python2.7-dev rsync ruby sdcc unzip wget gettext xsltproc \
    libboost1.55-dev libxml-parser-perl libusb-dev bin86 bcc sharutils \
    subversion
RUN adduser --disabled-password --uid 1000 --gecos "Docker Builder,,," builder
Then we generate a Docker image based on this Dockerfile, per the docker build documentation. At this point, we've got a basic image that does what we want. To initialize the build environment (download package sources, etc.), I might run:
docker run -v ~/src/openwrt:/src/openwrt -u builder -t -i jessie/openwrt sh -c "cd /src/openwrt/openwrt && scripts/feeds update -a"
Or configure the system:
docker run -v ~/src/openwrt:/src/openwrt -u builder -t -i jessie/openwrt make -C /src/openwrt/openwrt menuconfig
And finally, build the OpenWRT image itself:
docker run -v ~/src/openwrt:/src/openwrt -u builder -t -i jessie/openwrt make -C /src/openwrt/openwrt -j3
The -v ~/src/openwrt:/src/openwrt flags tell docker to bind mount my ~/src/openwrt directory (which I'd previously cloned using git) to /src/openwrt inside the running container. Without this, one might be tempted to clone the git repo directly into the container at runtime, but the changes to non-bind-mount filesystems are lost when the container terminates. This could be suitable for an autobuild environment, in which the sources are cloned at the start of the build and any generated artifacts are archived externally at the end, but it isn't suitable for a dev environment where I might be making and testing small changes at a relatively high frequency.
The -u builder flags tell docker to run the given commands as the builder user inside the container. Recall that builder was created with UID 1000 in the Dockerfile. Since I'm storing the source and artifacts in a bind-mounted directory, all saved files will be created with this UID. Since UID 1000 happens to be my UID on my laptop, this is fine. Any files created by builder inside the container will be owned by me outside the container. However, this container should not have to rely on a user with a given UID running it! I'm not sure what the right way to approach this problem is within Docker. It may be that someone using my image should create their own derivative image that creates a user with the appropriate UID (creation of this derivative image is a cheap operation in Docker). Alternatively, whatever Docker init system is used could start as root, add a new user with a specific UID, and execute the build commands as that new user. Neither of these seems as clean as it could be, though.
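The second option could look something like the sketch below: a small entrypoint script (hypothetical, not part of the image described above) that runs as root, creates the build user at runtime with a UID passed in via an environment variable, and then drops privileges before running the build command:

```shell
#!/bin/sh
# entrypoint.sh -- a sketch, assuming the container is started as root with
# something like:
#   docker run -e BUILDER_UID=$(id -u) -v ~/src/openwrt:/src/openwrt ...
set -e
BUILDER_UID="${BUILDER_UID:-1000}"
# Create the build user at runtime, matching the caller's UID so that
# files written to the bind mount are owned by the invoking user.
if ! getent passwd builder >/dev/null; then
    useradd --uid "$BUILDER_UID" --create-home builder
fi
# Drop privileges and run the requested command as the build user.
exec su builder -c "$*"
```

The Dockerfile would then declare this script as its ENTRYPOINT instead of creating the user at build time. Note that `su -c "$*"` flattens argument quoting, so a more careful version would be needed for commands with complex arguments; this is only meant to illustrate the idea.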
In general, Docker seems quite useful for such a build environment. It's easy to set up, and it makes it very easy to generate and share a common collection of packages and configuration. Because images are self-contained, I can reclaim a bunch of disk space by simply executing "docker rmi".