Feed aggregator

blog.studio.gd: Overview of CMI in Drupal 8

Planet Drupal - Mon, 08/02/2016 - 11:56
Some notes about the new Configuration management system in Drupal 8
Categories: Elsewhere

Orestis Ioannou: Debian - your patches and machine readable copyright files are available on Debsources

Planet Debian - Mon, 08/02/2016 - 10:33

TL;DR All Debian licenses and patches are belong to us. Discover them here and here.

In case you hadn't already stumbled upon sources.debian.net in the past, Debsources is a simple web application that allows publishing an unpacked Debian source mirror on the Web. On the live instance you can browse the contents of Debian source packages with syntax highlighting, search for files matching a SHA-256 hash or a ctag, query its API, highlight lines, and view accurate statistics and graphs. It was initially developed at IRILL by Stefano Zacchiroli and Matthieu Caneill.

During GSOC 2015 I helped introduce two new features.

License Tracker

Since Debsources has all the debian/copyright files, and many of them have adopted the DEP-5 format (machine-readable copyright files), it was interesting to exploit them for end users. You may find the following features interesting:

  • an API that allows users to find the license of file "foo", or the licenses for a bunch of packages, using filenames or SHA-256 hashes (see the sketch below)

  • a better looking interface for debian/copyright files

Have a look at the documentation to discover more!
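
As a rough illustration, a license lookup boils down to a single HTTP request. The endpoint paths below are assumptions made for the sketch (and the hash is a placeholder), so check the API documentation for the authoritative URLs:

# Hypothetical sketch: ask Debsources for the license of a file,
# by package/version/path and by SHA-256 hash (endpoints assumed).
$ curl https://sources.debian.net/copyright/api/file/coreutils/latest/src/ls.c/
$ curl "https://sources.debian.net/copyright/api/sha256/?checksum=<sha256-of-the-file>"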

Patch tracker

The old patch tracker unfortunately died a while ago. Since Debsources stores all the patches, it was natural for it to exploit them and present them over the web. You can navigate through packages by prefix or by searching them here. Among the use cases:

  • a summary which contains all the patches of a package together with their diffs and summaries/subjects
  • links to view and download (quilt-3.0) patches.

Read more about the API!

Coming ...
  • In the future this information will be added to the database. This will allow:

    • the license tracker to provide interesting statistics and graphs about licensing trends (what do Pythonistas usually choose as a license, how many GPL-3 files are in Jessie, etc.). These are going to be quite accurate, since they will take into account each file in a given package and not just the "general" license of the package.

    • the patch tracker to produce a list of packages that contain patches - this will enable providing links from the PTS to the patch tracker.

  • Not far on the horizon there is also initial work on exporting debian/copyright files into SPDX documents. You can have a look at a beta / testing version on debsources-dev. (Example)

I hope you find these new features useful. Don't hesitate to report any bugs or suggestions you come across.

Categories: Elsewhere

Janez Urevc: janezurevc.name runs on Drupal 8!

Planet Drupal - Mon, 08/02/2016 - 07:30

Drupal 8 was officially released last November. Since then I had been planning to migrate my blog from the previous version of this great CMS. Drupal 8 comes with many improvements and I definitely wanted to leverage them on my site too.

Besides that, I have always used my personal site as an experimental sandbox where I test new Drupal modules, themes and technologies. Even though I am a very active contributor to Drupal core and contributed modules, and I've been working on an enterprise Drupal 8 project at my job, I had never actually migrated a site to Drupal 8 before. It was definitely something I wanted to try.

The previous version of janezurevc.name was running on Drupal 7. It is important to note that migration from 7 to 8 isn't officially supported yet. Drupal 7 won't reach EOL for at least a few more years, which makes this migration non-critical. Migrations from Drupal 6, however, have been fully supported since the day 8 was released; 6 will reach EOL this month, which makes migration from 6 to 8 an absolute priority.

Migration

My site is actually very basic. I am using content (2 content types), taxonomy (1 vocabulary), a few contributed modules and that is really it. It turns out that everything I needed migrates reliably.

I started the process by reading the official documentation. Besides the Migrate and Migrate Drupal modules that come with core, I needed a few contributed modules: Drupal upgrade, Migrate tools and Migrate plus.

The migration itself was extremely easy. I installed a Drupal 8 site, enabled the migrate modules, started the migration and waited for a few minutes. That's it! At least for the core stuff. There are some glitches when it comes to contributed modules, but even those were fairly easy to resolve.
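
For reference, the whole run fits in a couple of commands. This is a minimal sketch assuming the drush integration shipped by the Migrate upgrade/tools modules of that era, with made-up database credentials:

# Hypothetical sketch of a D7-to-D8 upgrade run (credentials are placeholders):
$ drush en -y migrate migrate_drupal migrate_upgrade migrate_plus migrate_tools
$ drush migrate-upgrade --legacy-db-url=mysql://user:pass@localhost/drupal7 --legacy-root=http://old.example.com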

I can only thank everyone who contributed to Migrate in Drupal core. You did an awesome job!

Theme

The Drupal 7 version of my blog used the Sky theme, which is unfortunately not ported to 8 yet. For that reason I needed to search the theme repository, and came across Bootstrap clean blog.

It looked nice and it had a Drupal 8 -dev release. Regardless of that, it works like a charm. I even contributed some minor patches and am planning to contribute a few more.

How do you like the theme?

Modules

Like almost every Drupal website out there, mine also uses a few contributed modules. Let's see how that went.

Disqus

The Disqus module was ported as part of a Google Summer of Code project which I mentored in 2014. The module itself works very well. We changed the architecture a bit: instead of having a custom database table, we used a dedicated field type. This approach comes with many benefits. By doing this we're no longer limited to nodes; Disqus can now be used on any entity type.

Even though the port was there, migration support was not. I used this opportunity to dig into this part of Drupal a bit more and wrote 7 to 8 migration support for everything Disqus needs. This includes general configuration, fields on entities, statuses and identifiers. My code is already committed and you can give it a try.

Did you try the Disqus migration? Let me know how it worked for you.

Pathauto and Redirect

D8 ports are available on their Drupal.org project pages. They work like a charm. While core migrates existing aliases, alias patterns, redirects and other configuration aren't supported yet. I had just 3 alias patterns and fewer than 10 redirects on my old site, so this wasn't hard to fix manually.

If you meet @Berdir, please buy him a beer. He did an awesome job porting these (and many other) modules.

Media

I was using Media to embed images in the WYSIWYG editor, which uses the legacy embed token in Drupal 7. This part has unfortunately not been ported yet. I was using it in fewer than 10 places, so I decided to fix this manually too. I used a simple SQL query to get the node IDs of content that used the legacy token, then changed each occurrence to the standard tag with data-entity-* attributes, which Drupal 8 uses for its own image embeds.
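
For the curious, finding the affected content is a one-liner. This is a sketch only: the field table, column and token pattern below are assumptions that depend on the site's field configuration:

# Hypothetical sketch: list nodes whose body still contains a D7 Media
# legacy embed token (table/column names depend on your fields).
$ drush sql-query "SELECT entity_id FROM field_data_body WHERE body_value LIKE '%[[{%'"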

Markdown filter

Recently I found out that I prefer Markdown when producing written content. It doesn't interfere with my writing flow nearly as much as WYSIWYG editors do. When using Markdown I focus on the content instead of the appearance (for the same reason I really liked LaTeX during my university years).

Guess what? There is a module for that! Markdown filter comes with a text filter that converts Markdown syntax to HTML on display. And it also has a Drupal 8 port. Download, install, configure, use. It was as easy as that!

How does Markdown work for you? Do you prefer it over WYSIWYG editor as much as I do?

Other modules

I use a few other modules on the site. All of them have some kind of Drupal 8 release.

All of them work without any problem. I downloaded, installed and configured them. Google analytics even comes with migration support (which meant the third step, configuring, was not needed).

Great work maintainers and contributors!

Other interesting stuff

I also used this migration to move my blog to a new hosting solution. The old blog was hosted on a VPS that I have used in the past and am slowly moving away from. Most of my sites and services are currently hosted on a dedicated server at Hetzner (they provide excellent value for the price, so I'd definitely recommend them).

Recently I started using Docker for my development environments and I wanted to try it in production too (I mentioned I am (ab)using my personal site for experimenting, right? :)). As a result, I'm hosting janezurevc.name in a Dockerized environment managed via Docker compose. Compose is super nice, as it allows you to describe your infrastructure stack in a single YAML file. That file can be committed to a VCS repository, letting you replicate the infrastructure anywhere you want. But this is already a topic for some of my future posts.
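
To give a flavour of what that single file looks like, here is a minimal hypothetical docker-compose.yml for a web-plus-database stack, in the v1 syntax of the time; the images, ports and paths are illustrative assumptions, not my actual setup:

# Hypothetical docker-compose.yml sketch (Compose v1 syntax; values made up)
web:
  image: nginx
  ports:
    - "80:80"
  volumes:
    - ./site:/var/www/html
  links:
    - db
db:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: change-me

With a file like this in version control, a single "docker-compose up -d" brings up the same stack on any Docker host.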

Did you try to migrate your Drupal 6 or 7 sites to 8? How did it go? Which contributed modules are you using?

slashrsm Mon, 08.02.2016 - 07:30 Tags Drupal


Categories: Elsewhere

Russ Allbery: Converted personal web sites to TLS

Planet Debian - Mon, 08/02/2016 - 05:44

I've been in favor of using TLS and encryption for as much as possible for a while, but I never wanted to pay money to the certificate cartel. I had been using certificates from CAcert, but they're not recognized by most browsers, so it felt rude to redirect everything to TLS with one of their certificates.

Finally, the EFF and others put together Let's Encrypt, with free, browser-recognized certificates and even a really solid automatic renewal system. That's perfect, and it also eliminated my last excuse not to do the work, so now all of my personal web sites use TLS and HTTPS by default and redirect to the encrypted version of the site. And better yet, all the certificates should just renew themselves automatically, meaning one less thing I have to keep track of and deal with periodically.

Many thanks to Wouter Verhelst for his short summary of how to get the Let's Encrypt client to work properly from the command line, without doing all the other stuff it wants to do in order to make things easier for less sophisticated users. Also useful was the SSL Labs server test to make sure I got the modern TLS configuration right. (All my sites should now be an A. I decided not to cut off support for Internet Explorer older than version 11 yet.)
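
For the curious, the stripped-down command-line flow amounts to something like the following sketch; the webroot path and domains are placeholders, and flag details may vary between client versions:

# Hypothetical sketch: obtain a certificate without letting the client
# touch the web server configuration (paths and domains are placeholders).
$ letsencrypt certonly --webroot -w /srv/www/example.org -d example.org -d www.example.org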

For my own convenience, I imported into my personal Debian repository copies of the Debian packages needed to install the Let's Encrypt package on Debian jessie that weren't already in Debian backports, but they're also there for anyone else.

Oh, that reminds me: this also affects the archives.eyrie.org APT repository (the one linked above), so if any of you were using that, you'll now need to install apt-transport-https and might want to change the URL to use HTTPS.

Categories: Elsewhere

Mike Hommey: SSH through jump hosts, revisited

Planet Debian - Mon, 08/02/2016 - 00:26

Close to 7 years ago, I wrote about SSH through jump hosts. Twice. While the method used back then still works, OpenSSH has grown a new option in version 5.3 (the -W flag, used below) that allows it to be simplified a bit, by not using nc.

So here is an updated rule, version 2016:

Host *+*
    ProxyCommand ssh -W $(echo %h | sed 's/^.*+//;s/^\([^:]*$\)/\1:22/') $(echo %h | sed 's/+[^+]*$//;s/\([^+%%]*\)%%\([^+]*\)$/\2 -l \1/;s/:\([^:+]*\)$/ -p \1/')

The syntax you can use to connect through jump hosts hasn’t changed compared to previous blog posts:

  • With one jump host:
    $ ssh login1%host1:port1+host2:port2 -l login2
  • With two jump hosts:
    $ ssh login1%host1:port1+login2%host2:port2+host3:port3 -l login3
  • With three jump hosts:
    $ ssh login1%host1:port1+login2%host2:port2+login3%host3:port3+host4:port4 -l login4
  • etc.

Logins and ports can be omitted.
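
To make the rewriting concrete, here is what the rule expands to for one hypothetical invocation (host and login names are made up):

# Connecting with:
$ ssh me%gateway:2222+server
# the ProxyCommand above expands to:
#   ssh -W server:22 gateway -p 2222 -l me
# i.e. log in to gateway as "me" on port 2222 and forward stdio to server:22.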

Update: Add missing port to -W flag when one is not given.

Categories: Elsewhere

Iain R. Learmonth: After FOSDEM 2016

Planet Debian - Sun, 07/02/2016 - 23:55

FOSDEM was fun. It was great to see all these open source projects coming together in one place and it was really good to talk to people that were just as enthusiastic about the FOSS activities they do as I am about mine.

Thanks go to Saúl Corretgé, who looked after the real-time communications dev room and made sure everything ran smoothly. I was very pleased to find that I had to stand for a couple of talks, as the room was full of people eager to learn more about the world of RTC.

I was pleased again on the Sunday when I had such a great audience for my talk in the distributions dev room. Everyone was very welcoming, and after the talk I had some really interesting corridor discussions with a few people, which have given me a few new things to explore in the near future.

A few highlights from FOSDEM:

  • ReactOS: Since I last looked at this project it has really matured and is getting to be rather stable. It may be possible to start seriously considering replacing Windows XP/Vista machines with ReactOS where the applications being run just cannot be used with later versions of Windows.
  • Haiku: I used BeOS a long, long time ago on my video/music PC. I can't say I was using it over a Linux or BSD distribution for any particular reason, but it worked well. I saw a talk that discussed how Haiku is keeping up to date with drivers, and there was also a talk, which I didn't see, about the new Haiku package management system. I think I may check out Haiku again in the near future, even if only for the sake of nostalgia.
  • Kolab: Continuing with the theme of things that have matured since I last looked at them, I visited the Kolab stand at FOSDEM and I was impressed with how far it has come. In fact, I was so impressed that I'm looking at using it for my primary email and calendaring in the near future.
  • picoTCP: When I did my Honours project at University, I was playing with Contiki. This looks a lot easier to get started with, even if it's perhaps missing parts of the stack that Contiki implements well. If I ever find time for doing some IoT hacking, this will be on the list of things to try out first.

These are just some of the highlights, and I know I'm missing a lot here. One of the main things FOSDEM has done for me is open my eyes to how wide and diverse our community is, and it has served as a reminder that there is tons of cool stuff out there if you take a moment to look around.

Also, thanks to my trip to FOSDEM, I now have four new t-shirts to add into the rotation: FOSDEM 2016, Debian, XMPP and twiki.org.

Categories: Elsewhere

Joey Hess: letsencrypt support in propellor

Planet Debian - Sun, 07/02/2016 - 23:10

I've integrated letsencrypt into propellor today.

I'm using the reference letsencrypt client. While I've seen complaints that it has a lot of dependencies and is too complicated, it seemed to only need to pull in a few packages, use only a few megabytes of disk space, and it has fewer options than ls does. So it seems fine. (Although it would be nice to have some alternatives packaged in Debian.)

I ended up implementing this:

letsEncrypt :: AgreeTOS -> Domain -> WebRoot -> Property NoInfo

This property just makes the certificate available; it does not configure the web server to use it. This avoids relying on the letsencrypt client's apache config munging, which is probably useful for many people, but not for those of us using configuration management systems. And so it avoids most of the complicated magic that the letsencrypt client has a reputation for.

Instead, any property that wants to use the certificate can just use letsEncrypt to get it, and set up the server when a change is made to the certificate:

letsEncrypt (LetsEncrypt.AgreeTOS (Just "me@my.domain")) "example.com" "/var/www" `onChange` setupthewebserver

(Took me a while to notice I could use onChange like that, and so divorce the cert generation/renewal from the server setup. onChange is awesome! This blog post has been updated accordingly.)

In practice, the http site has to be brought up first, and then letsencrypt run, and then the cert installed and the https site brought up using it. That dance is automated by this property:

Apache.httpsVirtualHost "example.com" "/var/www" (LetsEncrypt.AgreeTOS (Just "me@my.domain"))

That's about as simple a configuration as I can imagine for such a website!

The two parts of letsencrypt that are complicated are not the fault of the client really. Those are renewal and rate limiting.

I'm currently rate limited for the next week because I asked letsencrypt for several certificates for a domain while I was learning how to use it and integrating it into propellor. So I've not quite managed to fully test everything. That's annoying. I also worry that rate limiting could hit at an inopportune time once I'm relying on letsencrypt. It's especially problematic that it only allows 5 certs for subdomains of a given domain per week. What if I use a lot of subdomains?

Renewal is complicated mostly because there's no good way to test it. You set up your cron job, or whatever, and wait three months, and hopefully it worked. Just as likely, you got something wrong, and your website breaks. Maybe letsencrypt could offer certificates that will only last an hour, or a day, for use when testing renewal.

Also, what if something goes wrong with renewal? Perhaps letsencrypt.org is not available when your certificate needs to be renewed.

What I've done in propellor to handle renewal is run letsencrypt every time, with the --keep-until-expiring option. If this fails, propellor will report a failure. As long as propellor is run periodically by a cron job, this should result in multiple failure reports being sent (for 30 days, I think) before a cert expires without getting renewed. But I have not been able to test this.

Categories: Elsewhere

Iustin Pop: mt-st project new homepage

Planet Debian - Sun, 07/02/2016 - 21:34

A short public notice: the mt-st project has a new homepage at https://github.com/iustin/mt-st. Feel free to forward your distribution-specific patches for upstream integration!

Context: a while back I bought a tape unit to help me with backups. Yay, tape! All good, except that I later found out that the Debian package was orphaned, so I took over the maintenance.

All good once more, but there were a number of patches in the Debian package that were not Debian-specific but rather valid upstream. And there was no actual upstream project homepage, as this is quite an old project with no (visible) recent activity; the canonical place for the project's source code was an FTP site (ibiblio.org). I spoke with Kai Mäkisara, the original author, and he agreed to let me take over maintenance of the project (and that's what I intend to do: mostly maintenance, merging of patches, etc., but no significant work). So now there's a GitHub project for it.

There was no VCS history for the project, so I did my best to partially recreate it: I took the Debian releases from snapshots.debian.org and used the .orig.tar.gz tarballs as bulk imports; versions 0.7, 0.8, 0.9b and 1.1 have separate commits in the tree.
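
The mechanics of such a bulk import are straightforward; a rough sketch (not the exact commands I ran, and with assumed tarball names) looks like this:

# Rough sketch: one commit per historical release tarball (names assumed).
$ git init mt-st && cd mt-st
$ for v in 0.7 0.8 0.9b 1.1; do
    rm -rf ./*
    tar xf ../mt-st-$v.tar.gz --strip-components=1
    git add -A
    git commit -m "Import mt-st $v"
  done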

I also took the Debian and Fedora patches and applied them, and with a few other cleanups, I've just published the 1.2 release. I'll update the Debian packaging soon as well.

So, if you somehow read this and are the maintainer of mt-st in another distribution, feel free to send patches my way for integration; I know this might be late, as some distributions have dropped it (e.g. Arch Linux).

Categories: Elsewhere

Ben Armstrong: Bluff Trail icy dawn: Winter 2016

Planet Debian - Sun, 07/02/2016 - 15:45

Before the rest of the family was up, I took a brief excursion to explore the first kilometre of the Bluff Trail and check out conditions. I turned at the ridge, satisfied I had seen enough to give an idea of what it’s like out there, and then walked back the four kilometres home on the BLT Trail.

I saw three joggers and their three dogs just before I exited the Bluff Trail on the way back, and later, two young men on the BLT with day packs approaching. The parking lot had gained two more cars for a total of three as I headed home. Exercising appropriate caution and judgement, the first loop is beautiful and rewarding, and I’m not alone in feeling the draw of its delights this crisp morning.

Click the first photo below to start the slideshow.

[Slideshow captions: At the parking lot, some ice, but passable with caution · Trail head: a few mm of sleet · Many footprints since last snowfall · Thin ice encrusts the bog · The boardwalk offers some loose traction · Mental note: buy crampons · More thin bog ice · Bubbles captured in the bog ice · Shelves hang above receding water · First challenging boulder ascent · Rewarding view at the crest · Time to turn back here · Flowing runnels alongside BLT Trail · Home soon to fix breakfast · If it looks like a tripod, it is · Not a very adjustable tripod, however · Pretty, encrusted pool · The sun peeks out briefly · Light creeps down the rock face · Shimmering icy droplets and feathery moss · Capped with a light dusting of sleet]
Categories: Elsewhere

Steve Kemp: Redesigning my clustered website

Planet Debian - Sun, 07/02/2016 - 11:28

I'm slowly planning the redesign of the cluster which powers the Debian Administration website.

Currently the design is simple, and looks like this:

In brief there is a load-balancer that handles SSL-termination and then proxies to one of four Apache servers. These talk back and forth to a MySQL database. Nothing too shocking, or unusual.

(In truth there are two database servers, and rather than a single installation of HAProxy, it runs on each of the webservers - one is the master, which is handled via ucarp. Logically, though, traffic routes through HAProxy to a number of Apache instances. I can lose half of the servers and things still keep running.)

When I set up the site it all ran on one host; it was simpler, but less highly available. It also struggled to cope with the load.

Half the reason for writing/hosting the site in the first place was to document learning experiences, though, so when it came time to make it scale I figured why not learn something and do it neatly? Having it run on cheap and reliable virtual hosts was a good excuse to bump the server count, and the design has been stable for the past few years.

Recently though I've begun planning how it will be deployed in the future and I have a new design:

Rather than having the Apache instances talk to the database directly, I'll indirect through an API-server. The API server will handle requests like these:

  • POST /users/login
    • POST a username/password and return 200 if valid. If bogus details return 403. If the user doesn't exist return 404.
  • GET /users/Steve
    • Return a JSON hash of user-information.
    • Return 404 on invalid user.

I expect to have four API handler endpoints: /articles, /comments, /users & /weblogs. Again we'll use a floating IP and a HAProxy instance to route to multiple API-servers, each of which will use local caching to cache articles, etc.
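
From the web layer's point of view the interaction would look something like this sketch (the host name, port and payloads are made up for illustration):

# Hypothetical sketch of the middle layer calling the API-server:
$ curl -X POST -d 'username=Steve&password=s3cret' http://api.internal:8080/users/login
# -> 200 on success, 403 for bogus details, 404 for an unknown user
$ curl http://api.internal:8080/users/Steve
# -> JSON hash of user-information, or 404 for an invalid user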

This should turn the middle layer, running on Apache, into something simpler, and increase throughput. I suspect, but haven't confirmed, that making a single HTTP request to fetch a (formatted) article body will be cheaper than making N database queries.

Anyway, that's what I'm slowly pondering and working on at the moment. I wrote a proof-of-concept API-server-based CMS two years ago, and my recollection of that time is that it was fast to develop and easy to scale.

Categories: Elsewhere

ARREA-Systems: Drupal 8 Guided Tour module

Planet Drupal - Sun, 07/02/2016 - 01:38

In Drupal 8 there is a Tour module in core that is very useful for web applications. In EK management tools we target professional users at small to medium scale companies. They usually have limited resources and time to spend on back-office training. This is where the Tour module is very convenient: it introduces functionality to users, who can quickly grasp the functions available to manage their back office.

We use the Tour functionality in our pages to guide users through their daily tasks, for instance in the form for creating a new invoice or on the project page.

Categories: Elsewhere

Dimitri John Ledkov: Blogging about Let's encrypt over HTTP

Planet Debian - Sun, 07/02/2016 - 00:30
So the Let's Encrypt thing has started. It can do challenges over HTTP (serving text files) and over DNS (serving TXT records).

My "infrastructure" is fairly modest. I've seen too many of my email accounts getting swamped with spam, and or companies going bust. So I got my own domain name surgut.co.uk. However, I don't have money or time to run my own services. So I've signed up for the Google Apps account for my domain to do email, blogging, etc.

Later I got the libnih.la domain to host API docs for the mentioned library. In the world of .io startups, I thought it was an incredibly funny domain name.

But I also have a VPS to host static files on an ad-hoc basis, run a VPN, and run an IRC bouncer. My IRC bouncer is ZNC, and I used a self-signed certificate there, so I had to "ignore" SSL errors in all of my IRC clients... which kind of defeats the purpose somewhat.

I run my VPS on i386 (to save on memory usage) and on Ubuntu 14.04 LTS, managed with Landscape. My little services are just configured by hand there (not using juju).

My first attempt at getting on the Let's Encrypt bandwagon was to use the official client, by fetching debs from xenial and installing them on the LTS. But the package/script there is huge, has support for things I don't need, and wants dependencies I don't have on 14.04 LTS.

However, I found letsencrypt.sh, a minimalist implementation in shell with openssl and curl. It was trivial to get dependencies for and to configure: I specified a domains text file, and that was it. Well, I also added symlinks in my NGINX config to serve the challenges directory, a hook to deploy the certificate to znc and restart it, and a cronjob to renew the certs. Thinking about it, it's not complete, as I'm not sure whether NGINX will pick up the certificate change and/or whether it will need to be reloaded. I shall test that once my cert expires.
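
The whole flow fits in a couple of lines. A sketch, assuming letsencrypt.sh's domains.txt/--cron interface and with a made-up challenge location:

# Hypothetical sketch of the letsencrypt.sh setup:
$ echo "x4d.surgut.co.uk" > domains.txt
$ ./letsencrypt.sh --cron    # request/renew certificates for listed domains
# NGINX then only needs to serve the challenge directory, e.g.:
#   location /.well-known/acme-challenge/ { alias /path/to/challenges/; }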

Tweaking the config for NGINX was easy. And I was like, let's see how good it is. I pointed https://www.ssllabs.com/ssltest/ at my https://x4d.surgut.co.uk/ and got a "C" rating. No forward secrecy, vulnerable to downgrade attacks, BEAST, POODLE and stuff like that. I went googling for all types of NGINX configs and eventually found a website with "best known practices": https://cipherli.st/. However, even that only got me to a "B" rating, as it still had Diffie-Hellman things that ssltest caps at a "B" rating. So I disabled those too. I've ended up with this gibberish:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # no SSLv2/SSLv3
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+AESGCM:AES256+EECDH"; # ECDHE-only cipher list
ssl_ecdh_curve secp384r1; # Requires nginx >= 1.1.0
ssl_session_cache shared:SSL:10m;
#ssl_session_tickets off; # Requires nginx >= 1.5.9
ssl_stapling on; # Requires nginx >= 1.3.7
ssl_stapling_verify on; # Requires nginx >= 1.3.7
#resolver $DNS-IP-1 $DNS-IP-2 valid=300s;
#resolver_timeout 5s;
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload"; # HSTS
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;

I call it gibberish because, IMHO, I shouldn't need to specify any of the above... Anyway, I got my A+ rating.

However, security is only as strong as the weakest link. I'm still serving things over HTTP; maybe I should disable that. And I have yet to check how "good" the TLS is on my znc, or whether I need to further harden my sshd configuration.

This has filled a big gap in my infrastructure. However, a few things remain served over HTTP only.

http://blog.surgut.co.uk is hosted by Alphabet's / Google's Blogger service, which I would want to be served over HTTPS.

http://libnih.la is hosted by the GitHub Inc service, which I would want to be served over HTTPS.

I do not want to manage those services, or deal with load / spammers / DDoS attacks, etc. But I am happy to sign CSRs with Let's Encrypt and deploy the certs over to those companies, or allow them to self-obtain certificates from Let's Encrypt on my behalf. I use gandi.net as my domain name provider, which offers an RPC API to manage domains and their zone files, so I could e.g. also generate an API token for those companies to respond to a dns-01 challenge from Let's Encrypt.

One step at a time, I guess.

The postings on this site are my own and don't necessarily represent any past/present/future employers' positions, strategies, or opinions.
Categories: Elsewhere

Entity Pilot: New in beta6 - share and move content between different Drupal sites

Planet Drupal - Sat, 06/02/2016 - 23:36

Entity pilot beta-6 comes with the ability to share and move content between different Drupal 8 sites.

Up until beta-5, your sites had to share the same configuration: i.e. content types, fields, etc.

From beta 6, you can now enable the Entity Pilot Map Config sub-module and decide how to handle missing fields and content-types.

Categories: Elsewhere

Andrew Shadura: Community time at Collabora

Planet Debian - Sat, 06/02/2016 - 18:22

I haven’t yet blogged about this (as normally I don’t blog often), but I joined Collabora in June last year. Since then, I had an opportunity to work with OpenEmbedded again, write a kernel patch, learn lots of things about systemd (in particular, how to stop worrying about it taking over the world and so on), and do lots of other things.

As one would expect when working for a free software consultancy, our customers understand the value of the community and of contributing back to it, and so does the customer of the project I'm working on. In fact, our customer insists we keep the number of locally applied patches to, for example, the Linux kernel to a minimum, submitting as much as possible upstream.

However, apart from the upstreaming work which may be done for the customer, Collabora encourages us, the engineers, to spend up to two hours weekly on upstreaming on top of what customers need, and up to five days yearly as paid Community days. These community days may be spent working on code, volunteering at free software events, or even speaking at conferences.

Even though on this project I have already been paid for contributing to the free software project I previously maintained in my free time (ifupdown), paid community time is a great opportunity to contribute to the projects I'm interested in; and if those coincide with the projects I'm working with, I can effectively spend even more time on them.

A bit unfortunately for me, I didn't spend enough time last year planning my community days, so I used most of them in the last weeks of the calendar year, and I spent them (and some of my upstreaming hours) on something that benefitted both the free software community and Collabora. I'm talking about SparkleShare, a cross-platform Git-based file synchronisation solution written in C#. SparkleShare provides an easy-to-use interface for Git; actually, it makes it possible to not use any Git interface at all, as it monitors the working directory using inotify and commits changes right after they happen. It automatically handles conflicts, even for binary files, although I have to admit its handling could still be improved.

At Collabora, we use SparkleShare to store all sorts of internal documents, and it's used by people not familiar with command-line interfaces too. Unfortunately, the version we recently had in Debian had a couple of very annoying bugs that made it a great pain to use: it would not notice edits in local files, or would not notice new commits being pushed to the server, and that sometimes led to individual users' edits being lost. Not cool, especially when the document has to be sent to the customer in a couple of minutes.

The new versions, 1.4 (and the recently released 1.5), were reported to be much better and to fix some crashes, but they also use GTK+ 3 and some libraries not yet packaged for Debian. Thanh Tung Nguyen packaged these libraries (and a newer SparkleShare) for Ubuntu and published them in his PPA, but they required some work to be fit for Debian.

I had never touched Mono packages before in my life, so I had to learn a lot. Some time was spent talking to upstream about fixing their copyright statements (they had none in the code, and only one author was mentioned in configure.ac and nowhere else in the source), and a bit more time went into adjusting and updating the patches for the current source code version. Then, of course, waiting for the packages to go through NEW. Fixing parallel build issues, waiting for buildds to build all dependencies for at least one architecture… But then, finally, on the 19th of January I had the updated SparkleShare in Debian.

As you may have already guessed, this blog post has been sponsored by Collabora, the first of my employers to encourage require me to work on free software in my paid time :)

Categories: Elsewhere

Neil Williams: lava.debian.net

Planet Debian - Sat, 06/02/2016 - 15:33

With thanks to Iain Learmonth for the hardware, there is now a Debian instance of LAVA available for use and the Debian wiki page has been updated.

LAVA is a continuous integration system for deploying operating systems onto physical and virtual hardware for running tests. Tests can be simple boot testing, bootloader testing and system level testing. Extra hardware may be required for some system tests. Results are tracked over time and data can be exported for further analysis.

LAVA has a long history of supporting continuous integration of the Linux kernel on ARM devices (ARMv7 & ARMv8). So if you are testing a Linux kernel image on armhf or arm64 devices, you will find a lot of similar tests already running on the other LAVA instances. The Debian LAVA instance seeks to widen that testing in a couple of ways:

  • A wider range of tests including use of Debian artifacts as well as mainline Linux builds
  • A wider range of devices by allowing developers to offer devices for testing from their own desks.
  • Letting developers share local test results easily with the community without losing the benefits of having the board on your desk.

This instance relies on the latest changes in lava-server and lava-dispatcher. The 2016.2 release has deprecated the old, complex dispatcher, and a whole new pipeline design is available. The Debian LAVA instance is running 2015.12 at the moment; I'll upgrade to 2016.2 once the packages migrate into testing in a few days and I can do a backport to jessie.

What can LAVA do for Debian?

ARMMP kernel testing

Unreleased builds, experimental initramfs testing – this is the core of what LAVA is already doing behind the scenes of sites like http://kernelci.org/.

U-Boot ARM testing

This is something fully automated LAVA labs have not been able to deliver in the past, at least not without a usable SD Mux.

What’s next

LOTS. This post actually got published early (I was distracted by the rugby) – I'll update things more in a later post. Contact me if you want to get involved; I'll provide more information on how to use the instance and how to contribute to the testing in due course.

Categories: Elsewhere
