This month I marked 281 packages for accept and rejected 58, so I am almost back to normal processing. I also sent 19 emails to maintainers asking questions.
As mentioned in October, the number of accepted packages has reached another milestone. I accepted package 6666 on 2015-12-21; it was python-skbio_0.4.1-1. The winner of a fast-processed package, with the best guess of this date, is: *tata* Javi. OK, he was the only participant. So, who can guess the date of 7777?
This month several people had to reduce their contributions, so all in all I got a workload of 30 hours. Altogether I uploaded these DLAs:
- [DLA 392-1] roundcube security update
- [DLA 393-1] srtp security update
- [DLA 394-1] passenger security update
- [DLA 399-1] foomatic-filters security update
- [DLA 398-1] privoxy security update
- [DLA 401-1] imlib2 security update
For the first time this month, I was also involved in three embargoed uploads. Ben and I were informed about some security issues before they were published, and I prepared the DLAs. Although the real uploads for all suites were still done by the security team, it was really exciting.
I also spent some time on #796095 and prepared another patch for review. Further, I am almost done with the next upload of PHP 5.3. Just before starting dupload, another issue appeared; as I think this will be the last upload of PHP for Squeeze LTS, I also want to take care of this latecomer. The upload of krb5 is waiting in the pipeline; I am just waiting for confirmation that everything is fine.
This month I also had another term of doing frontdesk work and looked for CVEs that are important for Squeeze LTS or could be ignored.
As Wheezy LTS is about to start, I have already prepared the new build environment. So whether now or later in April, I am ready …
Due to the high LTS workload, there was no time for other stuff.
As some of you may know, I have been working on a small hardware project called the Beer'o'Meter, whose purpose is to allow us to extend Ye Olde Vic's beer board to indicate the approximate fullness of each cask. For some time now, we've been operating an electronic beer board at the Vic which you may see tweeted out from time to time. The pumpotron has become very popular with visitors to the pub, especially as it can be viewed online in a basic textual form.
Of course, as many of you who visit pubs know only too well, the fact that a beer is "on" is no indication of whether you need to get there sharpish to have a pint, or whether you can take your time and have a curry first. As a result, some of us have noticed a particular beer on, come to the pub after dinner, and then been very sad that if only we'd come 30 minutes earlier, we'd have had a chance at the very beer we were excited about.
Combine this kind of sadness with a two week break at Christmas, and I started to develop a Beer'o'Meter to extend the pumpotron with an indication of how much of a given beer had already been served. Recently my boards came back from Elecrow along with various bits and bobs, and I have spent some time today building one up for test purposes.
As always, it's important to start with some prep work to collect all the necessary components. I like to use cake cases as you may have noticed on the posting yesterday about the oscilloscope I built.
Naturally, after prep comes the various stages of assembly. You start with the lowest-height components, so here's the board after I fitted the ceramic capacitors:
And here's after I fitted the lying-down electrolytic decoupling capacitor for the 3.3 volt line:
Next I should have fitted the six transistors from the middle cake case, but I discovered that I'd used the wrong pinout for them. Even after weeks of verification by myself and others, I'd made a mistake. My good friend Vincent Sanders recently posted about how creativity is allowing yourself to make mistakes, and here I had made a doozy I hadn't spotted until I tried to assemble the board. Fortunately TO-92 transistors have nice long legs, and I have a pair of tweezers and some electrical tape. As such I soon had six transistors doing the river dance:
With that done, I noticed that the transistors now stood taller than the pins (previously I had been intending to fit the transistors before the pins) so I had to shuffle things around and fit all my 0.1" pins and sockets next:
Then I could fit my dancing transistors:
We're almost finished now, just one more capacitor to provide some input decoupling on the 9v power supply:
Of course, it wouldn't be complete without the ESP8266 Huzzah I acquired from Adafruit, though I have to say I'm unlikely to use these again; instead I might design in the surface-mount version of the module.
And since this is the very first Beer'o'Meter to be made, I had to go and put a 1 in the serial-number space on the back of the board. I then tried to sign my name in the box, made a hash of it, so scribbled in the gap.
Finally I got to fit all six of my flow meters ready for some testing. I may post again about testing the unit, but for now, here's a big spider of a flow meter for beer:
This has been quite a learning experience for me, and I hope in the future to be able to share more of my hardware projects, perhaps from an earlier stage.
I have plans for a DAC board, and perhaps some other things.
The monthly core patch (bug fix) release window is this Wednesday, February 03. Drupal 8.0.3 and 7.42 will be released with fixes for Drupal 8 and 7. There will be no Drupal 6 bugfix release this month.
To ensure a reliable release window for the patch release, there will be a Drupal 8.0.x commit freeze from 00:00 to 23:30 UTC on Wednesday, February 03. Now is a good time to update your development/staging servers to the latest 8.0.x-dev or 7.x-dev code and help us catch any regressions in advance. If you do find any regressions, please report them in the issue queue. Thanks!
Other upcoming core release windows after this week include:
- Wednesday, February 24 (security release window)
- Wednesday, March 02 (patch release window)
- Wednesday, April 20 (scheduled minor release)
Over these past few months, I've been going through my personal belongings to reduce them to the bare essentials – more-or-less following the rule "If I cannot remember that I had it, I probably don't need it" – with an explicit goal to make it easier for me to relocate in the near future.
Part of that iterative process has involved selling surplus items via Facebook groups. While I initially rejoiced at the invention of the new Selling Something feature, the more I use it, the more I end up cursing at the lack of thought that went into designing that feature:
For instance, in the feature's most recent implementation, it has become possible to post the same ad in several groups. Great idea, right? Actually, no. What this option does is post individual copies of that exact same ad in each group. Managed to sell a particular item? You'll have to sort through each and every duplicate post in each group to mark the item as sold. Need to edit the ad to lower the price or add requested information about the item? Same thing: go through each individual copy of the ad in each group. Seriously, Facebook? Who the fuck designed that senseless implementation?!
Here's a better implementation: centrally manage all items via the user's account preferences, and let the user tick checkboxes beside each item to decide which groups will see that particular item. Ditto whenever editing the ad's content or marking an item as sold; do everything via the user's account preferences. Rather than making the content group-centric, make it user-centric.
In this post I will show you how we implemented a solution that allowed users to upload Powerpoint files to a PHP based web application (Drupal in this example) where they get converted to plain images and rendered back to the user. The requirements for the solution were:
At Debconf15 I gave a talk on the topic of making backups a default service on Debian machines. In that talk, I proposed that we create infrastructure to be included in a default Debian install to manage backups.
I still think this is a good idea, but over the past several months, I've had nearly no time at all to actually do this.
I'm afraid I have to say now that I won't be able to work on this in time for the stretch release. I would be very happy for others to do that, however.
The Debian wiki page https://wiki.debian.org/Backup acts as a central point of information for this. If you're interested in working on this, you can just do it.
Here is my monthly update covering a large part of what I have been doing in the free software world (previously):
- Changed Django's project/app templates to use a py-tpl suffix to work around the 1.9 release series shipping invalid .py files that are used as templates. (#5735)
- Pushed a number of updates to my Strava Enhancement Suite Chrome extension:
- Added the ability to sort starred segments first. (#42)
- Added an option to display hidden segments by default. (#34)
- Also hide "Yearly Goals" as part of hiding upcoming data. (#41)
- Removed more "premium" badges. (#43)
- Fixed an issue where removing feed entries didn't correctly cleanup subsequently-empty date headers. (commit)
- Updated travis.debian.net, a hosted script to easily test and build Debian packages on Travis CI, to support the wheezy distribution and to improve error-handling in general. (Repo)
- Ensured that try.diffoscope.org, a hosted version of the diffoscope in-depth and content aware diff utility, periodically cleans up containers.
- Moved my personal music library to use dh-virtualenv and to deploy to its own server via Ansible. I also updated the codebase to the latest version of Django at the same time, taking advantage of a large number of new features and recommendations.
- Fixed a strange issue with empty IMAP messages in my tickle-me-email GTD email toolbox.
- It was also a month of significant bug reports and feature requests being sent to my private email.
- Had a talk proposal accepted (Reproducible Builds - fulfilling the original promise of free software) at FOSSASIA 16.
This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:
- Seven days of "frontdesk" duties, triaging CVEs, etc.
- Issued DLA 386-1 for cacti to patch an SQL injection vulnerability.
- Issued DLA 388-1 for dwarfutils fixing a NULL dereference issue.
- Issued DLA 391-1 for prosody correcting the use of a weak pseudo-random number generator.
- Issued DLA 404-1 for nginx to protect against an invalid pointer dereference.
- redis (2:3.0.7-1) — New upstream stable release, also ensuring that test processes are cleaned up and replacing an existing reproducibility patch with a SOURCE_DATE_EPOCH solution.
- python-django (1.9.1-1) — New upstream release.
- disque (1.0~rc1-4) — Make the build reproducible via SOURCE_DATE_EPOCH, ensure that test processes are cleaned up and that the nocheck flag is correctly honoured.
- gunicorn (19.4.5-1) — New upstream release.
- redis (2:3.2~rc3-1) — New upstream RC release (to experimental).
- ftp.debian.org: RM: cakephp-instaweb -- RoM; abandoned upstream
- gem2deb: missing dependency on rake
- libgif7: DGifOpen() broken because it uses uninitialized memory
- python-getdns: backward-incompatible API change in getdns 0.9.0
I also filed 100 FTBFS bugs against apache-log4j2, awscli, binutils, brian, ccbuild, coala, commons-beanutils, commons-vfs, composer, cyrus-sasl2, debiandoc-sgml-doc-pt-br, dfvfs, dillo, django-compat, dulwich, git-annex, grpc, hdf-eos5, hovercraft, ideviceinstaller, ircp-tray, isomd5sum, javamail, jhdf, jsonpickle, kivy, klog, libcloud, libcommons-jexl2-java, libdata-objectdriver-perl, libdbd-sqlite3-perl, libpam-krb5, libproc-waitstat-perl, libslf4j-java, libvmime, linuxdcpp, lsh-utils, mailutils, mdp, menulibre, mercurial, mimeo, molds, mugshot, nose, obex-data-server, obexfs, obexftp, orafce, p4vasp, pa-test, pgespresso, pgpool2, pgsql-asn1oid, php-doctrine-cache-bundle, php-net-ldap2, plv8, pngtools, postgresql-mysql-fdw, pyfftw, pylint-common, pylint-django, pylint-django, python-ase, python-axiom, python-biopython, python-dcos, python-falcon, python-instagram, python-markdown, python-pysam, python-requests-toolbelt, python-ruffus, pytsk, pyviennacl, ros-class-loader, ros-ros-comm, ros-roscpp-core, roxterm, ruby-celluloid-extras, ruby-celluloid-fsm, ruby-celluloid-supervision, ruby-eye, ruby-net-scp, ruby-net-ssh, ruby-sidekiq, ruby-sidekiq-cron, ruby-sinatra-contrib, seaview, smc, spatial4j-0.4, swift-plugin-s3, tilecache, typecatcher, ucommon, undertaker, urdfdom, ussp-push, xserver-xorg-video-intel & yt.

FTP Team
As a Debian FTP assistant I ACCEPTed 201 packages: abi-tracker, android-platform-build, android-platform-frameworks-native, android-platform-libcore, android-platform-system-core, animate.css, apitrace, argon2, autosize.js, bagel, betamax, bittorrent, bls-standalone, btfs, caja-dropbox, cegui-mk2, complexity, corebird, courier-authlib, cpopen, ctop, dh-haskell, django-python3-ldap, e2fsprogs1.41, emacs-async, epl, fast5, fastkml, flask-restful, flask-silk, gcc-6, gitlab, golang-github-kolo-xmlrpc, golang-github-kr-fs, golang-github-pkg-sftp, golang-github-prometheus-common, google-auth-library-php, h5py, haskell-aeson-compat, haskell-userid, heroes, hugo, ioprocess, iptables, ivy-debian-helper, ivyplusplus, jquery-timer.js, klaus, kpatch, lazarus, libatteanx-store-sparql-perl, libbrowserlauncher-java, libcgi-test-perl, libdata-sah-normalize-perl, libfsntfs, libjs-fuzzaldrin-plus, libjung-free-java, libmongoc, libmygpo-qt, libnet-nessus-rest-perl, liborcus, libperinci-sub-util-propertymodule-perl, libpodofo, librep, libsodium, libx11-xcb-perl, linux, linux-grsec-base, list.js, lombok, lua-mediator, luajit, maven-script-interpreter, midicsv, mimeo, miniasm, mlpack, mom, mosquitto-auth-plugin, moxie.js, msgpuck, nanopolish, neovim, netcdf, network-manager-applet, network-manager-ssh, node-esprima-fb, node-mocks-http, node-schlock, nomacs, ns3, openalpr, openimageio, openmpi, openms, orafce, pbsim, pd-iemutils, pd-nusmuk, pd-puremapping, pd-purest-json, pg-partman, pg-rage-terminator, pgfincore, pgmemcache, pgsql-asn1oid, php-defaults, php-jwt, php-mf2, php-redis, pkg-info-el, plr, pnmixer, postgresql-multicorn, postgresql-mysql-fdw, powa-archivist, previsat, pylint-flask, pyotherside, python-caldav, python-cookies, python-dcos, python-flaky, python-flickrapi, python-frozendict, python-genty, python-git, python-greenlet, python-instagram, python-ironic-inspector-client, python-manilaclient, python-neutronclient, python-openstackclient, python-openstackdocstheme, 
python-prometheus-client, python-pymzml, python-pysolr, python-reno, python-requests-toolbelt, python-scales, python-socketio-client, qdox2, qgis, r-cran-biasedurn, rebar.js, repmgr, rfcdiff, rhythmbox-plugin-alternative-toolbar, ripe-atlas-cousteau, ripe-atlas-sagan, ripe-atlas-tools, ros-image-common, ruby-acts-as-list, ruby-allocations, ruby-appraiser, ruby-appraiser-reek, ruby-appraiser-rubocop, ruby-babosa, ruby-combustion, ruby-did-you-mean, ruby-fixwhich, ruby-fog-xenserver, ruby-hamster, ruby-jeweler, ruby-mime-types-data, ruby-monkey-lib, ruby-net-telnet, ruby-omniauth-azure-oauth2, ruby-omniauth-cas3, ruby-puppet-forge, ruby-racc, ruby-reek, ruby-rubinius-debugger, ruby-rubysl, ruby-rubysl-test-unit, ruby-sidekiq-cron, ruby-threach, ruby-wavefile, ruby-websocket-driver, ruby-xmlhash, rustc, s-nail, scrm, select2.js, senlin, skytools3, slurm-llnl, sphinx-argparse, sptk, sunpy, swauth, swift, tdiary, three.js, tiny-initramfs, tlsh, ublock-origin, vagrant-cachier, xapian-core, xmltooling, & yp-tools.
I additionally REJECTed 29 packages.
So many options, so little time.
Many years ago I handled all my network connections via /etc/network/interfaces.
Then, I was in desperate need of an Internet connection and all I had was a borrowed USB cell phone modem. My friends running NetworkManager just plugged the stick in and were online. I was left with the task of figuring out how to manually configure this piece of hardware without being online. I ended up borrowing my friend's computer. Then, when I got home, I installed NetworkManager.
Once I had NetworkManager installed, I decided it was easier to find, connect to and manage passwords of wireless networks using a graphical tool rather than digging through the copious output of commands run from my terminal and trying to keep track of the passwords. So long wireless.
Then I had to help a colleague get on our Virtual Private Network. Wow. There's a NetworkManager GUI for that too. If I'm going to support my colleague with this tool... I guess I should use it as well. I also managed to write a dispatcher script in /etc/NetworkManager/dispatcher.d that calls su -c "/usr/bin/smbnetfs /media/smbnetfs" -l jamie when it receives an action of "vpn-up" and umount /media/smbnetfs 2>/dev/null on "vpn-down". Now I can mount the samba share by simply connecting to the VPN via NetworkManager.
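For reference, such a dispatcher script might look like the sketch below. The file name is illustrative; the paths, user and commands are the ones quoted above. NetworkManager passes the interface and the action as the two arguments:

```shell
#!/bin/sh
# /etc/NetworkManager/dispatcher.d/50-smbnetfs  (illustrative file name)
# NetworkManager invokes dispatcher scripts as: <script> <interface> <action>
interface="$1"   # unused here, but always passed
action="$2"

case "$action" in
    vpn-up)
        # Mount the samba share as user jamie once the VPN is up.
        su -c "/usr/bin/smbnetfs /media/smbnetfs" -l jamie
        ;;
    vpn-down)
        # Tear the mount down again; ignore errors if it wasn't mounted.
        umount /media/smbnetfs 2>/dev/null
        ;;
esac
```

The script needs to be executable and owned by root for NetworkManager to run it.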
My cable connections are almost always configured using DHCP. Almost everything else is in NetworkManager, why not move enp1s0f2 as well?
What's left? My final piece is my bridge. I still run and manage my own KVM guests and I have a bridge to handle that traffic. I first decided to move this functionality to systemd.network because systemd can not only handle the bridge, but can also handle IP Forwarding, DHCP service, and best of all, IP Masquerading. Well, almost... not IP Masquerading after all, at least not yet.
Without IP masquerading, I figured I'd go with NetworkManager. Having all networking in the same place at worst gives me an illusion of control, and at best makes it easier to debug. So I set up a crufty script in /etc/NetworkManager/dispatcher.d that configures masquerading via iptables every time either my wireless or wired network goes up or down, which I'm not crazy about. Maybe when #787480 is fixed I'll go back to systemd. I also edited /etc/sysctl.conf to enable net.ipv4.ip_forward=1. Then I changed it back and added my own file in /etc/sysctl.d to do the same thing. Then I deleted that file and added sysctl net.ipv4.ip_forward=1 and sysctl net.ipv4.ip_forward=0 to my crufty dispatcher script. I decided to do without DHCP - I can manually configure the few KVM instances that I run.
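The crufty script boils down to something like this sketch. The wireless interface name and the exact iptables rule are illustrative assumptions; enp1s0f2 is the wired interface mentioned above:

```shell
#!/bin/sh
# /etc/NetworkManager/dispatcher.d/60-masquerade  (illustrative file name)
# Toggle IP forwarding and masquerading as the uplink comes and goes.
iface="$1"
action="$2"

case "$iface:$action" in
    enp1s0f2:up|wlan0:up)          # wlan0 is a hypothetical wireless name
        sysctl net.ipv4.ip_forward=1
        iptables -t nat -A POSTROUTING -o "$iface" -j MASQUERADE
        ;;
    enp1s0f2:down|wlan0:down)
        iptables -t nat -D POSTROUTING -o "$iface" -j MASQUERADE 2>/dev/null
        sysctl net.ipv4.ip_forward=0
        ;;
esac
```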
Now /etc/network/interfaces is so lonely.
With Drupal 8 there is a very easy and practical way to add this custom view as a configuration that will be installed with the module.

1) extract the configuration data
Navigate to "/admin/config/development/configuration/single/export".
On this page, select configuration type 'view' and configuration name 'My module list' that was created earlier.
2) create configuration install file
You will obtain from the above export a list of configuration data that you can copy and paste into a file called, for instance, "views.view.mymodule-list.yml".
Simply place this file into the install folder:
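As a sketch of that step, assuming a module named "mymodule" living under modules/custom (both names hypothetical), the exported YAML just needs to land in the module's config/install directory so it is imported when the module is installed:

```shell
# Sketch: place an exported view into a module's config/install directory.
# Paths and names are illustrative; the YAML written here is merely a
# stand-in for the real file you saved from the single-export page.
EXPORT="views.view.mymodule-list.yml"
MODULE_DIR="modules/custom/mymodule"      # hypothetical module location

printf 'langcode: en\n' > "$EXPORT"       # stand-in for the real export

mkdir -p "$MODULE_DIR/config/install"
cp "$EXPORT" "$MODULE_DIR/config/install/"
```

Drupal reads everything in config/install once, at module install time, so reinstalling the module is the easiest way to pick up changes while developing.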
DrupalEasy: DrupalEasy Podcast 168 - Spooning with a Fork (Jen Lampton, Nate Haug - Backdrop Update)
Drupal fork Backdrop co-founders Jen Lampton (jenlampton) and Nate Haug (quicksketch) joined Mike, Anna, and Ted to discuss the current state of Backdrop, its (surprising) relations with the Drupal community, Drupal 8, as well as some current Drupal news and our picks of the week!
* Blue: The code from a Stack Exchange developer that saved my life and got the ball rolling for everything else.
* Green: My own comments for the purpose of this blog post.
* Orange: Additional edits made by me.
* Pink: Code that was deleted by me.
* Yellow: Code that was copied and pasted by me.
When using the Fivestar module (version 7.x-2.1), I can show the result in three ways:
Spam is a big headache for many website owners. Using the Drupal impression module, I saw the relentlessness of the spammer bots. Every day, for a single site, I got thousands of hits from URLs like "/?q=user/register" and "/?q=node/add". Someone commented on my LinkedIn update of my blog post Is there more computer bots than us? that she is "on the verge of giving up on Drupal after being unable to solve this problem". How do we address this issue and solve the problem? I know this is not an issue for just one CMS like Drupal, but it gives us a mandate to do something: build something for Drupal that is also usable by other CMSes like WordPress and Joomla.
I have a bold idea for blocking spam efficiently without taking a toll on the performance of every website: let's set up a website spam defence network, based on a global spam IP database. Each website is a node of the defence network and provides spamming-IP queries as a web service.
The idea is to have distributed but well-controlled spam IP servers. Each participating website acts as a node in the network, capturing spamming IPs and reporting them. Websites are connected, talk to each other, and form a line of defence in front of spammers. The network quarantines a spammer's IP for 45 minutes or more, depending on how active the spamming is; the IP comes off the list after the quarantine ends.
Websites that join the network will respond faster by freeing up the resources consumed by spamming activity, and we will have a cleaner internet through eliminating fake users, spam comments and spam content.
Technically, we would use open-source solutions. We can build a distributed spam IP database like a git repository, and distribute it as a Composer repository so that all PHP-based CMS websites can easily join the network.
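As a minimal sketch of the quarantine behaviour described above (all names and the file format are hypothetical): keep a flat file of "IP epoch-seconds" reports, and treat only entries younger than the 45-minute window as currently blocked, so expired IPs drop off the list automatically:

```shell
# Minimal sketch of the quarantine list; each line of the database file
# is "IP epoch-seconds-of-report". Names and format are illustrative.
DB="${DB:-spam-quarantine.db}"
WINDOW=$((45 * 60))   # 45-minute quarantine, in seconds

# Record a spamming IP reported by a participating site.
report_ip() {
    echo "$1 $(date +%s)" >> "$DB"
}

# Print IPs still inside the quarantine window; expired entries drop out.
active_ips() {
    now=$(date +%s)
    awk -v now="$now" -v window="$WINDOW" \
        'now - $2 <= window { print $1 }' "$DB" | sort -u
}
```

A site would call report_ip when it detects abuse and consult active_ips (or a web-service wrapper around it) before serving expensive pages; the time-based filter implements the "IP gets off the list after the quarantine ends" rule without any explicit cleanup job.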
I recently ordered some PCBs from Elecrow for the Vic's beer-measurement system I've been designing with Rob. While on the site, I noticed that they have a single-channel digital oscilloscope kit based on an STM32. This is a JYE Tech DSO138 which arrives as a PCB whose surface-mount stuff has been fitted, along with a whole bunch of pin-through components for you to solder up the scope yourself. There's a non-trivial number of kinds of components, so first you should prep by splitting them all up and double-checking them all.
Once you've done that, the instructions start you off fitting a whole bunch of resistors...
Then some diodes, RF chokes, and the 8MHz crystal for the STM32.
The single most-difficult bit for me to solder was the USB socket: fine-pitch leads coupled with a high-thermal-density socket.
There is a veritable mountain of ceramic capacitors to fit...
And then buttons, inductors, trimming capacitors and much more...
The switches were the next-hardest things to solder, after the USB socket...
Finally you have to solder a test loop and close some jumpers before you power-test the board.
The last bit of soldering is to solder pins to the LCD panel board...
Before you finally have a working oscilloscope.
I followed the included instructions to trim the scope using the test point and the trimming capacitors, before having a break to write this up for you all. I'd say that it was a fun day because I enjoyed getting a lot of soldering practice (before I have to solder up the beer'o'meter for the pub) and at the end of it I got a working oscilloscope. For 40 USD, I'd recommend this to anyone who fancies a go.
Over the last few days I've been at the GNOME Developer Experience hackfest in Brussels, looking into xdg-app and how best to use it in Debian and Debian derivatives.
xdg-app is basically a way to run "non-core" software on Linux distributions, analogous to apps on Android and iOS. It doesn't replace distributions like Debian or packaging systems, but it adds a layer above them. It's mostly aimed towards third-party apps obtained from somewhere that isn't your distribution vendor, aiming to address a few long-standing problems in that space:
There's no single ABI that can be called "a standard Linux system" in the same way there would be for Windows or OS X or Android or whatever, apart from LSB which is rather limited. Testing that a third-party app "works on Linux", or even "works on stable Linux releases from 2015", involves a combinatorial explosion of different distributions, desktop environments and local configurations. Steam uses the Steam Runtime, a chroot environment closely resembling Ubuntu 12.04 LTS; other vendors tend to test on a vaguely recent Ubuntu LTS and leave it at that.
There's no widely-supported mechanism for installing third-party applications as an ordinary user. gog.com used to distribute Ubuntu- and Debian-compatible .deb files, but installing a .deb involves running arbitrary vendor-supplied scripts as root, which should worry anyone who wants any sort of privilege-separation. (They have now switched to executable self-extracting installers, which involve running arbitrary vendor-supplied scripts as an ordinary user... better, but not perfect.)
Relatedly, the third-party application itself runs with the user's full privileges: a malicious or security-buggy third-party application can do more or less anything, unless you either switch to a different uid to run third-party apps, or use a carefully-written, app-specific AppArmor profile or equivalent.
To address the first point, each application uses a specified "runtime", which is available as /usr inside its sandbox. This can be used to run application bundles with multiple, potentially incompatible sets of dependencies within the same desktop environment. A runtime can be updated within its branch - for instance, if an application uses the "GNOME 3.18" runtime (consisting of a basic Linux system, the GNOME 3.18 libraries, other related libraries like Mesa, and their recursive dependencies like libjpeg), it can expect to see minor-version updates from GNOME 3.18.x (including any security updates that might be necessary for the bundled libraries), but not a jump to GNOME 3.20.
To address the second issue, the plan is for application bundles to be available as a single file, containing metadata (such as the runtime to use), the app itself, and any dependencies that are not available in the runtime (which the app vendor is responsible for updating if necessary). However, the primary way to distribute and upgrade runtimes and applications is to package them as OSTree repositories, which provide a git-like content-addressed filesystem, with efficient updates using binary deltas. The resulting files are hard-linked into place.
To address the last point, application bundles run partially isolated from the wider system, using containerization techniques such as namespaces to prevent direct access to system resources. Resources from outside the sandbox can be accessed via "portal" services, which are responsible for access control; for example, the Documents portal (the only one, so far) displays an "Open" dialog outside the sandbox, then allows the application to access only the selected file.

xdg-app for Debian
One thing I've been doing at this hackfest is improving the existing Debian/Ubuntu packaging for xdg-app (and its dependencies ostree and libgsystem), aiming to get it into a state where I can upload it to Debian experimental. Because xdg-app aims to be a general freedesktop project, I'm currently intending to make it part of the "Utopia" packaging team alongside projects like D-Bus and polkit, but I'm open to suggestions if people want to co-maintain it elsewhere.
In the process of updating xdg-app, I sent various patches to Alex, mostly fixing build and test issues, which are in the new 0.4.8 release.
I'd appreciate co-maintainers and further testing for this stuff, particularly ostree: ostree is primarily a whole-OS deployment technology, which isn't a use-case that I've tested, and in particular ostree-grub2 probably doesn't work yet.
Binaries (no trust path, so only use these if you have a test VM):
- deb https://people.debian.org/~smcv/xdg-app xdg-app main
Another thing I set out to do here was to make a runtime and an app out of Debian packages. Most of the test applications in and around GNOME use the "freedesktop" or "GNOME" runtimes, which consist of a Yocto base system and lots of RPMs, are rebuilt from first principles on-demand, and are extensive and capable enough that they make it somewhat non-obvious what's in an app or a runtime.
So, here's a step-by-step route through xdg-app, first using typical GNOME instructions, but then using the simplest GUI app I could find - xvt, a small xterm clone. I'm using a Debian testing (stretch) x86_64 virtual machine for all this. xdg-app currently requires systemd-logind to put users and apps in cgroups, either with systemd as pid 1 (systemd-sysv) or systemd-shim and cgmanager; I used the default systemd-sysv. In principle it could work with plain cgmanager, but nobody has contributed that support yet.

Demonstrating an existing xdg-app
Debian's kernel is currently patched to allow unprivileged users to create user namespaces, but to make that ability runtime-configurable, because there have been various security issues in that feature, making it a security risk for a typical machine (and particularly a server). Hopefully unprivileged user namespaces will soon be secure enough that we can enable them by default, but for now, we have to do one of three things to let xdg-app use them:
- enable unprivileged user namespaces via sysctl:

      sudo sysctl kernel.unprivileged_userns_clone=1

- make xdg-app root-privileged (it will keep CAP_SYS_ADMIN and drop the rest):

      sudo dpkg-statoverride --update --add root root 04755 /usr/bin/xdg-app-helper

- make xdg-app slightly less privileged:

      sudo setcap cap_sys_admin+ep /usr/bin/xdg-app-helper
First, we'll need a runtime. The standard xdg-app tutorial would tell you to download the "GNOME Platform" version 3.18. To do that, you'd add a remote, which is a bit like a git remote, and a bit like an apt repository:

    $ wget http://sdk.gnome.org/keys/gnome-sdk.gpg
    $ xdg-app remote-add --user --gpg-import=gnome-sdk.gpg gnome \
        http://sdk.gnome.org/repo/
(I'm ignoring considerations like trust paths and security here, for brevity; in real life, you'd want to obtain the signing key via https and/or have a trust path to it, just like you would for a secure-apt signing key.)
You can list what's available in a remote:

    $ xdg-app remote-ls --user gnome
    ...
    org.freedesktop.Platform
    ...
    org.freedesktop.Platform.Locale.cy
    ...
    org.freedesktop.Sdk
    ...
    org.gnome.Platform
    ...
The Platform runtimes are what we want here: they are collections of runtime libraries with which you can run an application. The Sdk runtimes add development tools, header files, etc. to be able to compile apps that will be compatible with the Platform.
For now, all we want is the GNOME 3.18 platform:

    $ xdg-app install --user gnome org.gnome.Platform 3.18
Next, we can install an app that uses it, from Alex Larsson's nightly builds of a subset of GNOME. The server they're on doesn't have a great deal of bandwidth, so be nice :-)

    $ wget http://188.8.131.52/keys/nightly.gpg
    $ xdg-app remote-add --user --gpg-import=nightly.gpg nightly \
        http://184.108.40.206/repo/
    $ xdg-app install --user nightly org.mypaint.MypaintDevel
We now have one app, and the runtime it needs:

    $ xdg-app list
    org.mypaint.MypaintDevel
    $ xdg-app run org.mypaint.MypaintDevel
    [you see a GUI window]

Digression: what's in a runtime?
Behind the scenes, xdg-app runtimes and apps are both OSTree trees. This means the ostree tool, from the package of the same name, can be used to inspect them.

    $ sudo apt install ostree
    $ ostree refs --repo ~/.local/share/xdg-app/repo
    gnome:runtime/org.gnome.Platform/x86_64/3.18
    nightly:app/org.mypaint.MypaintDevel/x86_64/master
A "ref" has roughly the same meaning as in git: something like a branch or a tag. ostree can list the directory tree that it represents:

    $ ostree ls --repo ~/.local/share/xdg-app/repo \
        runtime/org.gnome.Platform/x86_64/3.18
    d00755 0 0 0 /
    -00644 0 0 493 /metadata
    d00755 0 0 0 /files
    $ ostree ls --repo ~/.local/share/xdg-app/repo \
        runtime/org.gnome.Platform/x86_64/3.18 /files
    d00755 0 0 0 /files
    l00777 0 0 0 /files/local -> ../var/usrlocal
    l00777 0 0 0 /files/sbin -> bin
    d00755 0 0 0 /files/bin
    d00755 0 0 0 /files/cache
    d00755 0 0 0 /files/etc
    d00755 0 0 0 /files/games
    d00755 0 0 0 /files/include
    d00755 0 0 0 /files/lib
    d00755 0 0 0 /files/lib64
    d00755 0 0 0 /files/libexec
    d00755 0 0 0 /files/share
    d00755 0 0 0 /files/src
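The first column of the ostree ls output packs a file-type character (d for directory, - for regular file, l for symlink) together with the octal permission bits. A small helper of my own (not an ostree tool) splits the two:

```shell
# Split an `ostree ls` mode field such as "d00755" into its file-type
# character and octal permissions. Illustrative only; not part of ostree.
decode_mode() {
    printf '%s %s\n' "$(printf '%s' "$1" | cut -c1)" \
                     "$(printf '%s' "$1" | cut -c2-)"
}
decode_mode d00755    # prints: d 00755
decode_mode l00777    # prints: l 00777
```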
You can see that /files in a runtime is basically a copy of /usr. This is not coincidental: the runtime's /files gets mounted at /usr inside the xdg-app container. There is also some metadata, which is in the ini-like syntax seen in .desktop files:

    $ ostree cat --repo ~/.local/share/xdg-app/repo \
        runtime/org.gnome.Platform/x86_64/3.18 /metadata
    [Runtime]
    name=org.gnome.Platform/x86_64/3.16
    runtime=org.gnome.Platform/x86_64/3.16
    sdk=org.gnome.Sdk/x86_64/3.16

    [Extension org.freedesktop.Platform.GL]
    version=1.2
    directory=lib/GL

    [Extension org.freedesktop.Platform.Timezones]
    version=1.2
    directory=share/zoneinfo

    [Extension org.gnome.Platform.Locale]
    directory=share/runtime/locale
    subdirectories=true

    [Environment]
    GI_TYPELIB_PATH=/app/lib/girepository-1.0
    GST_PLUGIN_PATH=/app/lib/gstreamer-1.0
    LD_LIBRARY_PATH=/app/lib:/usr/lib/GL
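Because the metadata is plain key=value text, ordinary shell tools are enough to query it. A sketch against a local copy of the [Runtime] section (the file name metadata.sample is my own):

```shell
# Query an xdg-app metadata file with sed; the content is copied from
# the [Runtime] section shown above into a local sample file.
cat > metadata.sample <<'EOF'
[Runtime]
name=org.gnome.Platform/x86_64/3.16
runtime=org.gnome.Platform/x86_64/3.16
sdk=org.gnome.Sdk/x86_64/3.16
EOF
sed -n 's/^sdk=//p' metadata.sample    # prints: org.gnome.Sdk/x86_64/3.16
rm -f metadata.sample
```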
Looking at an app, the situation is fairly similar:

    $ ostree ls --repo ~/.local/share/xdg-app/repo \
        app/org.mypaint.MypaintDevel/x86_64/master
    d00755 0 0 0 /
    -00644 0 0 258 /metadata
    d00755 0 0 0 /export
    d00755 0 0 0 /files
This time, /files maps to what will become /app for the application, which was compiled with --prefix=/app:

    $ ostree ls --repo ~/.local/share/xdg-app/repo \
        app/org.mypaint.MypaintDevel/x86_64/master /files
    d00755 0 0 0 /files
    -00644 0 0 4599 /files/manifest.json
    d00755 0 0 0 /files/bin
    d00755 0 0 0 /files/lib
    d00755 0 0 0 /files/share
There is also a /export directory, which is made visible to the host system so that the contained app can appear as a "first-class citizen" in menus:

    $ ostree ls --repo ~/.local/share/xdg-app/repo \
        app/org.mypaint.MypaintDevel/x86_64/master /export
    d00755 0 0 0 /export
    d00755 0 0 0 /export/share
    $ ostree ls --repo ~/.local/share/xdg-app/repo \
        app/org.mypaint.MypaintDevel/x86_64/master /export/share
    d00755 0 0 0 /export/share
    d00755 0 0 0 /export/share/app-info
    d00755 0 0 0 /export/share/applications
    d00755 0 0 0 /export/share/icons
    $ ostree ls --repo ~/.local/share/xdg-app/repo \
        app/org.mypaint.MypaintDevel/x86_64/master /export/share/applications
    d00755 0 0 0 /export/share/applications
    -00644 0 0 715 /export/share/applications/org.mypaint.MypaintDevel.desktop
    $ ostree cat --repo ~/.local/share/xdg-app/repo \
        app/org.mypaint.MypaintDevel/x86_64/master \
        /export/share/applications/org.mypaint.MypaintDevel.desktop
    [Desktop Entry]
    Version=1.0
    Name=(Nightly) MyPaint
    TryExec=mypaint
    Exec=mypaint %f
    Comment=Painting program for digital artists
    ...
    Comment[zh_HK]=藝術家的电脑绘画
    GenericName=Raster Graphics Editor
    GenericName[fr]=Éditeur d'Image Matricielle
    MimeType=image/openraster;image/png;image/jpeg;
    Type=Application
    Icon=org.mypaint.MypaintDevel
    StartupNotify=true
    Categories=Graphics;GTK;2DGraphics;RasterGraphics;
    Terminal=false
Again, there's some metadata:

    $ ostree cat --repo ~/.local/share/xdg-app/repo \
        app/org.mypaint.MypaintDevel/x86_64/master /metadata
    [Application]
    name=org.mypaint.MypaintDevel
    runtime=org.gnome.Platform/x86_64/3.18
    sdk=org.gnome.Sdk/x86_64/3.18
    command=mypaint

    [Context]
    shared=ipc;
    sockets=x11;pulseaudio;
    filesystems=host;

    [Extension org.mypaint.MypaintDevel.Debug]
    directory=lib/debug

Building a runtime, probably the wrong way
The way in which the reference/demo runtimes and containers are generated is... involved. As far as I can tell, there's a base OS built using Yocto, and the actual GNOME bits come from RPMs. However, we don't need to go that far to get a working runtime.
In preparing this runtime I'm probably completely ignoring some best-practices and tools - but it works, so it's good enough.
First we'll need a repository:

    $ sudo install -d -o$(id -nu) /srv/xdg-apps
    $ ostree init --repo /srv/xdg-apps
I'm just keeping this local for this demonstration, but you could rsync it to a web server's exported directory or something - a lot like a git repository, it's just a collection of files. We want everything in /usr because that's what xdg-app expects, hence usrmerge:

    $ sudo mount -t tmpfs -o mode=0755 tmpfs /mnt
    $ sudo debootstrap --arch=amd64 --include=libx11-6,usrmerge \
        --variant=minbase stretch /mnt http://192.168.122.1:3142/debian
    $ sudo mkdir /mnt/runtime
    $ sudo mv /mnt/usr /mnt/runtime/files
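usrmerge converts the chroot to the merged-/usr layout, in which the top-level /bin, /sbin and /lib become symlinks into /usr; that matters here because only /usr survives as the runtime's /files. A toy sketch of the resulting shape (a scratch directory, not a real chroot):

```shell
# Sketch of the merged-/usr layout that usrmerge produces, using a
# throwaway directory tree instead of a real chroot.
mkdir -p root/usr/bin root/usr/lib
ln -s usr/bin root/bin
ln -s usr/lib root/lib
readlink root/bin    # prints: usr/bin
rm -rf root
```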
This obviously has a lot of stuff in it that we don't need - most obviously init, apt and dpkg - but it's Good Enough™.
We will also need some metadata. This is sufficient:

    $ sudo sh -c 'cat > /mnt/runtime/metadata'
    [Runtime]
    name=org.debian.Debootstrap/x86_64/8.20160130
    runtime=org.debian.Debootstrap/x86_64/8.20160130
That's a runtime. We can commit it to ostree, and generate xdg-app metadata:

    $ ostree commit --repo /srv/xdg-apps \
        --branch runtime/org.debian.Debootstrap/x86_64/8.20160130 \
        /mnt/runtime
    $ fakeroot ostree commit --repo /srv/xdg-apps \
        --branch runtime/org.debian.Debootstrap/x86_64/8.20160130 \
        /mnt/runtime
    $ fakeroot xdg-app build-update-repo /srv/xdg-apps
(I'm not sure why ostree and xdg-app report "Operation not permitted" when we aren't root or fakeroot - feedback welcome.)
build-update-repo would presumably also be the right place to GPG-sign your repository, if you were doing that.
We can add that as another xdg-app remote:

    $ xdg-app remote-add --user --no-gpg-verify local file:///srv/xdg-apps
    $ xdg-app remote-ls --user local
    org.debian.Debootstrap

Building an app, probably the wrong way
The right way to build an app is to build an "SDK" runtime - similar to the Platform runtime, but with development files and tools - and recompile the app and any missing libraries with ./configure --prefix=/app && make && make install. I'm not going to do that, because simplicity is nice, and I'm reasonably sure xvt doesn't actually hard-code /usr into the binary:

    $ install -d xvt-app/files/bin
    $ sudo apt-get --download-only install xvt
    $ dpkg-deb --fsys-tarfile /var/cache/apt/archives/xvt_2.1-20.1_amd64.deb \
        | tar -xvf - ./usr/bin/xvt
    ./usr/
    ./usr/bin/
    ./usr/bin/xvt
    ...
    $ mv usr/bin/xvt xvt-app/files/bin/
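Naming a single member on the tar command line, as in the dpkg-deb pipeline above, extracts just that file from the stream. The same pattern with a synthetic tarball, so it can be tried without downloading a .deb (the demo tree and file contents are made up):

```shell
# Extract one named member from a tar stream, as dpkg-deb --fsys-tarfile
# piped into tar does above. Everything here is a throwaway demo tree.
mkdir -p demo/usr/bin out
printf 'hello\n' > demo/usr/bin/xvt
tar -C demo -cf - . | tar -C out -xf - ./usr/bin/xvt
cat out/usr/bin/xvt    # prints: hello
rm -rf demo out
```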
Again, we'll need metadata, and it's much simpler than that of the more production-quality GNOME nightly builds:

    $ cat > xvt-app/metadata
    [Application]
    name=org.debian.packages.xvt
    runtime=org.debian.Debootstrap/x86_64/8.20160130
    command=xvt

    [Context]
    sockets=x11;

    $ fakeroot ostree commit --repo /srv/xdg-apps \
        --branch app/org.debian.packages.xvt/x86_64/2.1-20.1 xvt-app
    $ fakeroot xdg-app build-update-repo /srv/xdg-apps
    Updating appstream branch
    No appstream data for runtime/org.debian.Debootstrap/x86_64/8.20160130
    No appstream data for app/org.debian.packages.xvt/x86_64/2.1-20.1
    Updating summary
    $ xdg-app remote-ls --user local
    org.debian.Debootstrap
    org.debian.packages.xvt

The obligatory screenshot
OK, good, now we can install it:

    $ xdg-app install --user local org.debian.Debootstrap 8.20160130
    $ xdg-app install --user local org.debian.packages.xvt 2.1-20.1
    $ xdg-app run --branch=2.1-20.1 org.debian.packages.xvt
and you can play around with the shell in the xvt and see what you can and can't do in the container.
I'm sure there were better ways to do most of this, but I think there's value in having such a simplistic demo to go alongside the various GNOMEish apps.
- Betacowork Coworking Brussels and ICAB Business & Technology Incubator hosted the hackfest, with Collabora providing snacks, and the GNOME Foundation supporting the hackfest in general;
- Collabora also sponsored my travel, accommodation and time;
- my colleague Philip Withnall organised the hackfest.
Thanks to all those!
Drupal 8 was released just over two months ago. Is it time yet for you to start using it on your production sites?
You'll need to consider the state of the modules and themes you typically use to build your sites, the nature of the site, its budget, and your own skill set.
There are, without a doubt, sites that are being launched on Drupal 8 already. And, at the same time there is this:
So, there is obviously still work to be done.
I’m currently over at FOSDEM, and have been asked by a couple of people about the state of ZFS and Debian. So, I thought I’d give a quick post to explain what Debian’s current plan is (which has come together with a lot of discussion with the FTP Masters and others around what we should do).
TL;DR: it’s going in contrib, as a source-only DKMS module.
Debian has always prided itself on providing the unequivocally correct solution to our users and downstream distributions. This also includes licenses – we make sure that Debian will contain 100% free software. This means that if you install Debian, you are guaranteed the freedoms offered under the DFSG and our social contract.
Now, this is where ZFS on Linux gets tricky. ZFS is licensed under the CDDL, and the Linux kernel under the GPLv2 only. The project's view is that both of these are free software licenses, but that they are incompatible with each other. This incompatibility means there is a risk in producing a combined work of Linux and a CDDL module. (Note: there are arguments about whether a kernel module, once loaded, forms a combined work with the kernel. I'm not touching that with a barge pole, as I Am Not A Lawyer.)
Now, does this mean that Debian would get sued by distributing ZFS natively compiled into the kernel? Well, maybe, but I think it’s a bit unlikely. This doesn’t mean it’s the right choice for Debian to take as a project though! It brings us back to our promise to our users, and our commercial and non-commercial downstream distributions. If a commercial downstream distribution took the next release of stable, and used our binaries, they may well get sued if they have enough money to make it worthwhile. Additionally, Debian has always taken its commitment to upstream licenses very seriously. If there’s a doubt, it doesn’t go in official Debian.
It should be noted that ZFS is something that is important to a lot of Debian users, who all want to be able to use ZFS in a manner that makes it easier for them to install. Thus, the position that we’ve arrived at is that we can ship ZFS as a source only, DKMS module. This means it will be built on the target machines, and we’re not distributing binaries. There’s also a warning in the README.Debian file explaining that care should be taken if you do things with the resultant binary – as we can’t promise it complies with the licenses.
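For illustration, a DKMS source package ships a dkms.conf telling DKMS how to build the module on the target machine after each kernel change, so Debian only ever distributes source. A minimal sketch might look like the following (the field values here are made up, not taken from the real zfs-linux packaging):

```shell
# Hypothetical dkms.conf sketch -- not the actual zfs-linux packaging.
PACKAGE_NAME="zfs"
PACKAGE_VERSION="0.6.5"
BUILT_MODULE_NAME[0]="zfs"
DEST_MODULE_LOCATION[0]="/extra"
AUTOINSTALL="yes"
```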
Finally, I should point out that this isn't my decision in the end. The contents of the archive are a decision for the FTP Masters, as that responsibility is delegated to them. However, what I have been able to do is coordinate many conflicting views, and I hope that ZFS will be accepted into the archive soon!