Planet Debian

Planet Debian - http://planet.debian.org/

Mario Lang: deluXbreed #2 is out!

5 hours 34 min ago

The third installment of my crossbreed digital mix podcast is out!

This time, I am featuring Harder & Louder and tracks from Behind the Machine and the recently released Remixes.

  1. Apolloud - Nagazaki
  2. Apolloud - Hiroshima
  3. SA+AN - Darksiders
  4. Im Colapsed - Cleaning 8
  5. Micromakine & Switch Technique - Ascension
  6. Micromakine - Cyberman (Dither Remix)
  7. Micromakine - So Good! (Synapse Remix)
How was DarkCast born and how is it done?

I always loved 175BPM music. It is an old thing that is not going away soon :-). I recently found that there is quite an active culture going on, at least on BandCamp. But single tracks are just that; not really fun to listen to, in my opinion. This sort of music needs to be mixed to be fun. In the past, I used to have most tracks I like/love on vinyl, so I did some real-world vinyl mixing myself. But these days, most fun music is only easily available digitally. Some people still do vinyl releases, but they are actually rare.

So for my personal enjoyment, I started to digitally mix tracks I really love, such that I can listen to them without "interruption". And since I have been an iOS user for three years now, using the podcast format to get stuff onto my devices was quite a natural choice.

I use SoX and a very small shell script to create these mixes. Here is a pseudo-template:

sox --combine mix-power \
    "|sox \"|sox 1.flac -p\" \"|sox 3.flac -p speed 0.987 delay 2:28.31 2:28.31\" -p" \
    "|sox \"|sox 2.flac -p delay 2:34.1 2:34.1\" -p" \
    mix.flac
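
To unpack the nesting, here is a stripped-down two-track version of the same idea (a sketch of mine, with placeholder file names): the outer sox sums ("mix-power") the quoted sub-pipelines, "delay" takes one start offset per audio channel (hence the doubled values for stereo files), and in the template above "speed 0.987" presumably slows the third track slightly so its beat matches.

# Mix b.flac into a.flac, with b.flac starting one minute in:
sox --combine mix-power \
    "|sox a.flac -p" \
    "|sox b.flac -p delay 1:00 1:00" \
    mix.flac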

As you can imagine, it is quite a bit of fiddling to get these scripts to do what you want. But it is a non-graphical method to get things done. If you know of a better tool to get the same job done, possibly with a bit of real-time control, without having to resort to a damn GUI, let me know.

Gregor Herrmann: GDAC 2014/17

Wed, 17/12/2014 - 23:30

my list of IRC channels (& the list of people I'm following on micro-blogging platforms) has a heavy debian bias. a thing I noticed today is that I had read (or at least: seen) messages in 6 languages (English, German, Castilian, Catalan, French, Italian). – thanks guys for the free language courses :) (& the opportunity to at least catch a glimpse into other cultures)

this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

Keith Packard: MST-monitors

Wed, 17/12/2014 - 10:36
Multi-Stream Transport 4k Monitors and X

I'm sure you've seen a 4k monitor on a friend's desk running Mac OS X or Windows and are all ready to go get one so that you can use it under Linux.

Once you've managed to acquire one, I'm afraid you'll discover that when you plug it in, you're limited to 30Hz refresh rates at the full size, unless you're running a kernel that is version 3.17 or later. And then...

Good Grief! What Is My Computer Doing!

Ok, so now you're running version 3.17 and when X starts up, it's like you're using a gigantic version of Google Cardboard. Two copies of a very tall, but very narrow, screen greet you.

Welcome to MST island.

To drive these giant new panels at full speed, the display hardware doesn't have enough bandwidth to individually paint each pixel once during each frame. So, like all good hardware engineers, they invented a clever hack.

This clever hack paints the screen in parallel. I'm assuming that they've got two bits of display hardware, each one hooked up to half of the monitor. Now, each paints only half of the pixels, avoiding costly redesign of expensive silicon, at least that's my surmise.

In the olden days, if you did this, you'd end up running two monitor cables to your computer, and potentially even having two video cards. Today, thanks to the magic of Display Port Multi-Stream Transport, we don't need all of that; instead, MST allows us to pack multiple cables-worth of data into a single cable.

I doubt the inventors of MST intended it to be used to split a single LCD panel into multiple "monitors", but hardware engineers are clever folk and are more than capable of abusing standards like this when it serves to save a buck.

Turning Two Back Into One

We've got lots of APIs that expose monitor information in the system, and across which we might be able to wave our magic abstraction wand to fix this:

  1. The KMS API. This is the kernel interface which is used by all graphics stuff, including user-space applications and the frame buffer console. Solve the problem here and it works everywhere automatically.

  2. The libdrm API. This is just the KMS ioctls wrapped in a simple C library. Fixing things here wouldn't make fbcons work, but would at least get all of the window systems working.

  3. Every 2D X driver. (Yeah, we're trying to replace all of these with the one true X driver). Fixing the problem here would mean that all X desktops would work. However, that's a lot of code to hack, so we'll skip this.

  4. The X server RandR code. More plausible than fixing every driver, this also makes X desktops work.

  5. The RandR library. If not in the X server itself, how about over in user space in the RandR protocol library? Well, the problem here is that we've now got two of them (Xlib and xcb), and the xcb one is auto-generated from the protocol descriptions. Not plausible.

  6. The Xinerama code in the X server. Xinerama is how we did multi-monitor stuff before RandR existed. These days, RandR provides Xinerama emulation, but we've been telling people to switch to RandR directly.

  7. Some new API. Awesome. Ok, so if we haven't fixed this in any existing API we control (kernel/libdrm/X.org), then we effectively dump the problem into the laps of the desktop and application developers. Given how long it's taken them to adopt current RandR stuff, providing yet another complication in their lives won't make them very happy.

All Our APIs Suck

Dave Airlie merged MST support into the kernel for version 3.17 in the simplest possible fashion -- pushing the problem out to user space. I was initially vaguely tempted to go poke at it and try to fix things there, but he eventually convinced me that it just wasn't feasible.

It turns out that all of our fancy new modesetting APIs describe the hardware in more detail than any application actually cares about. In particular, we expose a huge array of hardware objects:

  • Subconnectors
  • Connectors
  • Outputs
  • Video modes
  • Crtcs
  • Encoders

Each of these objects exposes intimate details about the underlying hardware -- which of them can work together, and which cannot; what kinds of limits are there on data rates and formats; and pixel-level timing details about blanking periods and refresh rates.

To make things work, some piece of code needs to actually hook things up, and explain to the user why the configuration they want just isn't possible.

The sticking point we reached was that when an MST monitor gets plugged in, it needs two CRTCs to drive it. If one of those is already in use by some other output, there's just no way you can steal it for MST mode.

Another problem -- we expose EDID data and actual video mode timings. Our MST monitor has two EDID blocks, one for each half. They happen to describe how they're related, and how you should configure them, but if we want to hide that from the application, we'll have to pull those EDID blocks apart and construct a new one. The same goes for video modes; we'll have to construct ones for MST mode.

Every single one of our APIs exposes enough of this information to be dangerous.

Every one, except Xinerama. All it talks about is a list of rectangles, each of which represents a logical view into the desktop. Did I mention we've been encouraging people to stop using this? And that some of them listened to us? Foolishly?

Dave's Tiling Property

Dave hacked up the X server to parse the EDID strings and communicate the layout information to clients through an output property. Then he hacked up the gnome code to parse that property and build a RandR configuration that would work.

Then, he changed the RandR Xinerama code to also parse the TILE properties and fix up the data seen by applications accordingly.

This works well enough to get a desktop running correctly, assuming that desktop uses Xinerama to fetch this data. Alas, gtk has been "fixed" to use RandR if you have RandR version 1.3 or later. No biscuit for us today.

Adding RandR Monitors

RandR doesn't have enough data types yet, so I decided that what we wanted to do was create another one; maybe that would solve this problem.

Ok, so what clients mostly want to know is which bits of the screen are going to be stuck together and should be treated as a single unit. With current RandR, that's some of the information included in a CRTC. You pull the pixel size out of the associated mode, physical size out of the associated outputs and the position from the CRTC itself.

Most of that information is available through Xinerama too; it's just missing physical sizes and any kind of labeling to help the user understand which monitor you're talking about.

The other problem with Xinerama is that it cannot be configured by clients; the existing RandR implementation constructs the Xinerama data directly from the RandR CRTC settings. Dave's Tiling property changes edit that data to reflect the union of associated monitors as a single Xinerama rectangle.

Allowing the Xinerama data to be configured by clients would fix our 4k MST monitor problem as well as solving the longstanding video wall, WiDi and VNC troubles. All of those want to create logical monitor areas within the screen under client control.

What I've done is create a new RandR datatype, the "Monitor", which defines a rectangular region of the screen. Each monitor has the following data:

  • Name. This provides some way to identify the Monitor to the user. I'm using X atoms for this as it made a bunch of things easier.

  • Primary boolean. This indicates whether the monitor is to be considered the "primary" monitor, suitable for placing toolbars and menus.

  • Pixel geometry (x, y, width, height). These locate the region within the screen and define the pixel size.

  • Physical geometry (width-in-millimeters, height-in-millimeters). These let the user know how big the pixels will appear in this region.

  • List of outputs. (I think this is the clever bit)

There are three requests to define, delete and list monitors. And that's it.

Now, we want the list of monitors to completely describe the environment, and yet we don't want existing tools to break completely. So, we need some way to automatically construct monitors from the existing RandR state while still letting the user override portions of it as needed to explain virtual or tiled outputs.

So, what I did was to let the client specify a list of outputs for each monitor. All of the CRTCs which aren't associated with an output in any client-defined monitor are then added to the list of monitors reported back to clients. That means that clients need only define monitors for things they understand, and they can leave the other bits alone and the server will do something sensible.

The second tricky bit is that if you specify an empty rectangle at 0,0 for the pixel geometry, then the server will automatically compute the geometry using the list of outputs provided. That means that if any of those outputs get disabled or reconfigured, the Monitor associated with them will appear to change as well.
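
To make this concrete, here is roughly what the whole dance looks like from the command line, using the xrandr tool from the repositories listed below (a sketch: the monitor and output names are invented):

# Fuse the two MST halves into one logical monitor; "auto" makes the
# geometry track the bounding box of the listed outputs:
xrandr --setmonitor DELL-4k auto DP-1,DP-2

# List the monitors the server now reports (client-defined + automatic):
xrandr --listmonitors

# Delete the logical monitor again:
xrandr --delmonitor DELL-4k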

Current Status

Gtk+ has been switched to use RandR for RandR versions 1.3 or later. Locally, I hacked libXrandr to override the reported RandR version through an environment variable, set that to 1.2, and Gtk+ happily reverts back to Xinerama and things work fine. I suspect the plan here will be to have it use the new Monitors when present, as those provide the same info that it was pulling out of RandR's CRTCs.

KDE appears to still use Xinerama data for this, so it "just works".

Where's the code

As usual, all of the code for this is in a collection of git repositories in my home directory on fd.o:

git://people.freedesktop.org/~keithp/randrproto master
git://people.freedesktop.org/~keithp/libXrandr master
git://people.freedesktop.org/~keithp/xrandr master
git://people.freedesktop.org/~keithp/xserver randr-monitors

RandR protocol changes

Here are the new sections added to randrproto.txt:

❧❧❧❧❧❧❧❧❧❧❧

1.5. Introduction to version 1.5 of the extension

Version 1.5 adds monitors

 • A 'Monitor' is a rectangular subset of the screen which represents
   a coherent collection of pixels presented to the user.

 • Each Monitor is associated with a list of outputs (which may be
   empty).

 • When clients define monitors, the associated outputs are removed
   from existing Monitors. If removing the output causes the list for
   that monitor to become empty, that monitor will be deleted.

 • For active CRTCs that have no output associated with any
   client-defined Monitor, one server-defined monitor will
   automatically be defined for the first Output associated with them.

 • When defining a monitor, setting the geometry to all zeros will
   cause that monitor to dynamically track the bounding box of the
   active outputs associated with it.

This new object separates the physical configuration of the hardware
from the logical subsets of the screen that applications should
consider as single viewable areas.

1.5.1. Relationship between Monitors and Xinerama

Xinerama's information now comes from the Monitors instead of directly
from the CRTCs. The Monitor marked as Primary will be listed first.

❧❧❧❧❧❧❧❧❧❧❧

5.6. Protocol Types added in version 1.5 of the extension

MONITORINFO { name: ATOM
              primary: BOOL
              automatic: BOOL
              x: INT16
              y: INT16
              width: CARD16
              height: CARD16
              width-in-millimeters: CARD32
              height-in-millimeters: CARD32
              outputs: LISTofOUTPUT }

❧❧❧❧❧❧❧❧❧❧❧

7.5. Extension Requests added in version 1.5 of the extension.

┌───
    RRGetMonitors
        window : WINDOW
     ▶
        timestamp: TIMESTAMP
        monitors: LISTofMONITORINFO
└───
        Errors: Window

    Returns the list of Monitors for the screen containing 'window'.

    'timestamp' indicates the server time when the list of monitors
    last changed.

┌───
    RRSetMonitor
        window : WINDOW
        info: MONITORINFO
└───
        Errors: Window, Output, Atom, Value

    Create a new monitor. Any existing Monitor of the same name is
    deleted.

    'name' must be a valid atom or an Atom error results.

    'name' must not match the name of any Output on the screen, or a
    Value error results.

    If 'info.outputs' is non-empty, and if x, y, width, height are all
    zero, then the Monitor geometry will be dynamically defined to be
    the bounding box of the geometry of the active CRTCs associated
    with them.

    If 'name' matches an existing Monitor on the screen, the existing
    one will be deleted as if RRDeleteMonitor were called.

    Each output in 'info.outputs' is removed from all pre-existing
    Monitors. If removing the output causes the list of outputs for
    that Monitor to become empty, then that Monitor will be deleted as
    if RRDeleteMonitor were called.

    Only one monitor per screen may be primary. If 'info.primary' is
    true, then the primary value will be set to false on all other
    monitors on the screen.

    RRSetMonitor generates a ConfigureNotify event on the root window
    of the screen.

┌───
    RRDeleteMonitor
        window : WINDOW
        name: ATOM
└───
        Errors: Window, Atom, Value

    Deletes the named Monitor.

    'name' must be a valid atom or an Atom error results.

    'name' must match the name of a Monitor on the screen, or a Value
    error results.

    RRDeleteMonitor generates a ConfigureNotify event on the root
    window of the screen.

❧❧❧❧❧❧❧❧❧❧❧
Gregor Herrmann: GDAC 2014/16

Tue, 16/12/2014 - 23:46

today I met with a young friend (attending the final year of technical high school) for coffee. he's been exploring free software for a year or two, & he's been running debian jessie on his laptop for some time. it's really amazing to see how exciting this journey into the free software cosmos is for him; & it's good to see that linux & debian are not only appealing to greybeards like me :)

this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

Raphael Geissert: Editing Debian online with sources.debian.net

Tue, 16/12/2014 - 09:00
How cool would it be to fix that one bug you just found without having to download a source package? And without leaving your browser?

Inspired by GitHub's online code editing, during DebConf 14 I worked on integrating an online editor into debsources (the software behind sources.debian.net). Long story short: it is available today for users of Chromium (or anything supporting Chrome extensions).

After installing the "editor for sources.debian.net" extension, go straight to sources.debian.net and enjoy!

Go from simple debsources to debsources on steroids (the screenshots are in the original post).

All in all, it brings:
  • Online editing of all of Debian
  • In-browser patch generation, available for download
  • Downloading the modified file
  • Sending the patch to the BTS
  • Syntax highlighting for over 120 file formats!
  • More hidden gems from Ace editor that can be integrated thanks to patches from you

Clone it or fork it:
git clone https://github.com/rgeissert/ace-sourced.n.git

For example, head to apt's source code, find a typo and correct it online: open apt.cc, click on edit, make the changes, click on email patch. Yes! It can generate a mail template for sending the patch to the BTS: just add a nice message and your patch is ready to be sent.

Didn't find any typo to fix? How sad; head to codesearch and search Debian for a spelling mistake, click on any result, edit, correct, email! You will have contributed to Debian in less than 5 minutes without leaving your browser.

The editor was meant to be integrated into debsources itself, without the need of a browser extension. This is expected to be done when the requirements imposed by debsources maintainers are sorted out.

Kudos to Harlan Lieberman, who helped debug some performance issues in the early implementations of the integration and worked on the packaging of the Ace editor.
Gustavo Noronha Silva: Web Engines Hackfest 2014

Tue, 16/12/2014 - 00:20

For the 6th year in a row, Igalia has organized a hackfest focused on web engines. The 5 years before this one were actually focused on the GTK+ port of WebKit, but the number of web engines that matter to us as Free Software developers and consultancies has grown, and so has the scope of the hackfest.

It was a very productive and exciting event. It has already been covered by Manuel Rego, Philippe Normand, Sebastian Dröge and Andy Wingo! I am sure more blog posts will pop up. We had Martin Robinson telling us about the new Servo engine that Mozilla has been developing as a proof of concept for both Rust as a language for building big, complex products and for doing layout in parallel. Andy gave us a very good summary of where JS engines are in terms of performance and features. We had talks about CSS grid layouts, TyGL (a GL-powered implementation of the 2D painting backend in WebKit), the new Wayland port, announced by Zan Dobersek, and a lot more.

With help from my colleague ChangSeok OH, I presented a description of how a team at Collabora led by Marco Barisione made the combination of WebKitGTK+ and GNOME’s web browser a pretty good experience for the Raspberry Pi. It took a not so small amount of both pragmatic limitations and hacks to get to a multi-tab browser that can play youtube videos and be quite responsive, but we were very happy with how well WebKitGTK+ worked as a base for that.

One of my main goals for the hackfest was to help drive features that had been lingering in the bug tracker for WebKitGTK+. I picked up a patch that had gone through a number of iterations and rewrites: the HTML5 notifications support, and with help from Carlos Garcia, managed to finish it and land it on the last day of the hackfest! It provides new signals that can be used to authorize notifications, and to show and close them.

To make notifications work in the best case scenario, the only thing that the API user needs to do is handle the permission request, since we provide a default implementation for the show and close signals that uses libnotify if it is available when building WebKitGTK+. Originally our intention was to use GNotification for the default implementation of those signals in WebKitGTK+, but it turned out to be a pain to use for our purposes.

GNotification is tied to GApplication. This allows for some interesting features, like notifications being persistent and able to reactivate the application, but those make no sense in our current use case, although that may change once service workers become a thing. It can also be a bit problematic given we are a library and thus have no GApplication of our own. That was easily overcome by using the default GApplication of the process for notifications, though.

The show stopper for us using GNotification was the way GNOME Shell currently deals with notifications sent using this mechanism. It will look for a .desktop file named after the application ID used to initialize the GApplication instance, and reject the notification if it cannot find it. Besides making this a pain to test (our test browser would need a .desktop file to be installed), that would not work for our main API user! The application ID used for all Web instances is org.gnome.Epiphany at the moment, and that is not the same as any of the desktop files used either by the main browser or by the web apps created with it.

For the future we will probably move Epiphany towards this new era, and all users of the WebKitGTK+ API as well, but the strictness of GNOME Shell would hurt the usefulness of our default implementation right now, so we decided to stick to libnotify for the time being.

Other than that, I managed to review a bunch of patches during the hackfest, and took part in many interesting discussions regarding the next steps for GNOME Web and the GTK+ and Wayland ports of WebKit, such as the potential introduction of a threaded compositor, which is pretty exciting. We also tried to have Bastien Nocera as a guest participant for one of our sessions, but it turns out that requires more than a notebook on top of a bench hooked up to a TV to work well. We could think of something next time ;D.

I’d like to thank Igalia for organizing and sponsoring the event, Collabora for sponsoring and sending ChangSeok and myself over to Spain from far away Brazil and South Korea, and Adobe for also sponsoring the event! Hope to see you all next year!

Web Engines Hackfest 2014 sponsors: Adobe, Collabora and Igalia

Gregor Herrmann: GDAC 2014/15

Mon, 15/12/2014 - 22:58

nothing exciting today in my debian life. just yet another nice example of collaboration around an RC bug where the bug submitter, the maintainer & me investigated via the BTS, & the maintainer also got support on IRC from others. – now we just need someone to come up with an actual fix for the problem :)

this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

Holger Levsen: 20121214-not-everybody-is-equal

Mon, 15/12/2014 - 20:25
We ain't equal in Debian neither and wishful thinking won't help.

"White people think calling them white is racist." - "White people think calling them racist is racist."

(Thanks to and via 2damnfeisty and blackgirlsparadise!)

Posted here as food for thought. What else is invisible for whom? Or hardly visible or distorted or whatever shade of (in)visible... - and how can we know about things we cannot (yet) see...

Thomas Goirand: Supporting 3 init systems in OpenStack packages

Mon, 15/12/2014 - 09:15

tl;dr: Providing support for all 3 init systems (sysv-rc, Upstart and systemd) isn’t hard, and generating the init scripts / Upstart jobs / systemd unit files with a template system is a lot easier than I previously thought.

As always, when writing this kind of blog post, I do expect that others will not like what I did. But that’s the point: give me your opinion in a constructive way (please be polite even if you don’t like what you see… I have had to read harsh comments too many times), and I’ll implement your ideas if I find them nice.

History of the implementation: how we came to the idea

I had no plan to do this. I don’t believe what I wrote can be generalized to all of the Debian archive. It’s just that I started doing things, and it made sense when I did it. Let me explain how it happened.

Since it’s clear that many, and especially the most advanced one, may have an opinion about which init system they prefer, and because I also support Ubuntu (at least Trusty), I though it was a good idea to support all the “main” init system: sysv-rc, Upstart and systemd. Though I have counted (for the sake of being exact in this blog) : OpenStack in Debian contains currently 64 init scripts to run daemons in total. That’s quite a lot. A way too much to just write them, all by hand. Though that’s what I was doing for the last years… until this the end of this last summer!

So, doing it all by hand, I first started implementing Upstart. Its support was there only when building on Ubuntu (which isn’t the correct thing to do; this is now fixed, read further…). Then we thought about adding support for systemd. Gustavo Panizzo, one of the contributors to the OpenStack packages, started implementing it in Keystone (the auth server for OpenStack) for the Juno release which was released this October. He did that last summer, early enough that we didn’t expect anyone to be using the Juno branch of Keystone yet. After some experiments, we had it kind of working. What he did was invoke “/etc/init.d/keystone start-systemd”, which was still using start-stop-daemon. Yes, that’s not perfect, and it’s better to use systemd foreground process handling, but at least we had a unique place to write the startup scripts, where we check /etc/default for the logging configuration, configure the log file, and so on.

Then, around October, I took a step back to look at the whole picture with the sysv-rc scripts, and saw the mess, with all the tiny, small differences between them. It became clear that I had to do something to make sure they were all the same, with support for the same things (like which log system to use, where to store the PID, creating /var/lib/project, /var/run/project and so on…).

Last, in this month of December, I was able to fix the remaining issues for systemd support, thanks to the awesome contribution of Mikael Cluseau on the Alioth OpenStack packaging list. Now the systemd unit file still invokes the init script, but it no longer uses start-stop-daemon, no PID file is involved, and daemons run as systemd foreground processes. Finally, daemon service files are also activated on installation (they were not previously).

Implementation

So I took the simplistic approach of always using the same template for the sysv-rc switch/case and the start and stop functions, appending it at the end of all debian/*.init.in scripts. I started to try to reduce the number of variables, and I was surprised by the result: only a very small part of each init script needs to change from daemon to daemon. For example, for nova-api, here’s the entire init script (LSB header stripped out):

DESC="OpenStack Compute API"
PROJECT_NAME=nova
NAME=${PROJECT_NAME}-api

That is it: only 3 lines, defining only the name of the daemon, the name of the project it belongs to (eg: nova, cinder, etc.), and a long description. There are of course more complicated init scripts (see the one for neutron-server in the Debian archive, for example), but the vast majority only need the above.
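
For reference, a complete debian/nova-api.init.in would then just be that stanza plus the stripped LSB header, something like this sketch (the header fields, in particular the Should-Start line, are my guesses; Should-Start is what the generators shown below parse):

#!/bin/sh
### BEGIN INIT INFO
# Provides:          nova-api
# Required-Start:    $network $local_fs $remote_fs $syslog
# Required-Stop:     $remote_fs
# Should-Start:      mysql postgresql rabbitmq-server keystone
# Should-Stop:       mysql postgresql rabbitmq-server keystone
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: OpenStack Compute API
### END INIT INFO

DESC="OpenStack Compute API"
PROJECT_NAME=nova
NAME=${PROJECT_NAME}-api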

Here’s the sysv-rc init script template that I currently use:

#!/bin/sh
# The content after this line comes from openstack-pkg-tools
# and has been automatically added to a .init.in script, which
# contains only the descriptive part for the daemon. Everything
# else is standardized as a single unique script.

# Author: Thomas Goirand <zigo@debian.org>

# PATH should only include /usr/* if it runs after the mountnfs.sh script
PATH=/sbin:/usr/sbin:/bin:/usr/bin

if [ -z "${DAEMON}" ] ; then
    DAEMON=/usr/bin/${NAME}
fi
PIDFILE=/var/run/${PROJECT_NAME}/${NAME}.pid
if [ -z "${SCRIPTNAME}" ] ; then
    SCRIPTNAME=/etc/init.d/${NAME}
fi
if [ -z "${SYSTEM_USER}" ] ; then
    SYSTEM_USER=${PROJECT_NAME}
fi
if [ -z "${SYSTEM_GROUP}" ] ; then
    SYSTEM_GROUP=${PROJECT_NAME}
fi
if [ "${SYSTEM_USER}" != "root" ] ; then
    STARTDAEMON_CHUID="--chuid ${SYSTEM_USER}:${SYSTEM_GROUP}"
fi
if [ -z "${CONFIG_FILE}" ] ; then
    CONFIG_FILE=/etc/${PROJECT_NAME}/${PROJECT_NAME}.conf
fi
LOGFILE=/var/log/${PROJECT_NAME}/${NAME}.log
if [ -z "${NO_OPENSTACK_CONFIG_FILE_DAEMON_ARG}" ] ; then
    DAEMON_ARGS="${DAEMON_ARGS} --config-file=${CONFIG_FILE}"
fi

# Exit if the package is not installed
[ -x $DAEMON ] || exit 0

# If ran as root, create /var/lock/X, /var/run/X, /var/lib/X and /var/log/X as needed
if [ "x$USER" = "xroot" ] ; then
    for i in lock run log lib ; do
        mkdir -p /var/$i/${PROJECT_NAME}
        chown ${SYSTEM_USER} /var/$i/${PROJECT_NAME}
    done
fi

# This defines init_is_upstart which we use later on (+ more...)
. /lib/lsb/init-functions

# Manage log options: logfile and/or syslog, depending on user's choosing
[ -r /etc/default/openstack ] && . /etc/default/openstack
[ -r /etc/default/$NAME ] && . /etc/default/$NAME
[ "x$USE_SYSLOG" = "xyes" ] && DAEMON_ARGS="$DAEMON_ARGS --use-syslog"
[ "x$USE_LOGFILE" != "xno" ] && DAEMON_ARGS="$DAEMON_ARGS --log-file=$LOGFILE"

do_start() {
    start-stop-daemon --start --quiet --background ${STARTDAEMON_CHUID} \
        --make-pidfile --pidfile ${PIDFILE} \
        --chdir /var/lib/${PROJECT_NAME} --startas $DAEMON \
        --test > /dev/null || return 1
    start-stop-daemon --start --quiet --background ${STARTDAEMON_CHUID} \
        --make-pidfile --pidfile ${PIDFILE} \
        --chdir /var/lib/${PROJECT_NAME} --startas $DAEMON \
        -- $DAEMON_ARGS || return 2
}

do_stop() {
    start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 --pidfile $PIDFILE
    RETVAL=$?
    rm -f $PIDFILE
    return "$RETVAL"
}

do_systemd_start() {
    exec $DAEMON $DAEMON_ARGS
}

case "$1" in
    start)
        init_is_upstart > /dev/null 2>&1 && exit 1
        log_daemon_msg "Starting $DESC" "$NAME"
        do_start
        case $? in
            0|1) log_end_msg 0 ;;
            2) log_end_msg 1 ;;
        esac
        ;;
    stop)
        init_is_upstart > /dev/null 2>&1 && exit 0
        log_daemon_msg "Stopping $DESC" "$NAME"
        do_stop
        case $? in
            0|1) log_end_msg 0 ;;
            2) log_end_msg 1 ;;
        esac
        ;;
    status)
        status_of_proc "$DAEMON" "$NAME" && exit 0 || exit $?
        ;;
    systemd-start)
        do_systemd_start
        ;;
    restart|force-reload)
        init_is_upstart > /dev/null 2>&1 && exit 1
        log_daemon_msg "Restarting $DESC" "$NAME"
        do_stop
        case $? in
            0|1)
                do_start
                case $? in
                    0) log_end_msg 0 ;;
                    1) log_end_msg 1 ;; # Old process is still running
                    *) log_end_msg 1 ;; # Failed to start
                esac
                ;;
            *) log_end_msg 1 ;; # Failed to stop
        esac
        ;;
    *)
        echo "Usage: $SCRIPTNAME {start|stop|status|restart|force-reload|systemd-start}" >&2
        exit 3
        ;;
esac

exit 0

Nothing particularly fancy here… You’ll notice that it’s really OpenStack-centric (see the LOGFILE and CONFIG_FILE things…). You may also have noticed the call to “init_is_upstart”, which is needed for Upstart support. I’m not sure if it’s at the correct place in the init script. Should I put it at the top of the script? Was I right with its exit values? Please send me your comments…

Then I thought about generalizing all of this, because not only the sysv-rc scripts needed to be squared up, but also the Upstart jobs. The approach here was to source the debian/*.init.in script and then generate the Upstart job accordingly, using the above 3 variables (or more as needed). The fun part is that, instead of calculating everything at runtime as the sysv-rc script does, for Upstart jobs many things are calculated at build time. For each debian/*.init.in script that debian/rules finds, pkgos-gen-upstart-job is called. Here’s pkgos-gen-upstart-job:

#!/bin/sh

INIT_TEMPLATE=${1}
UPSTART_FILE=`echo ${INIT_TEMPLATE} | sed 's/.init.in/.upstart/'`

# Get the variables defined in the init template
. ${INIT_TEMPLATE}

## Find out what should go in After=
#SHOULD_START=`cat ${INIT_TEMPLATE} | grep "# Should-Start:" | sed 's/# Should-Start://'`
#
#if [ -n "${SHOULD_START}" ] ; then
#    AFTER="After="
#    for i in ${SHOULD_START} ; do
#        AFTER="${AFTER}${i}.service "
#    done
#fi

if [ -z "${DAEMON}" ] ; then
    DAEMON=/usr/bin/${NAME}
fi
PIDFILE=/var/run/${PROJECT_NAME}/${NAME}.pid
if [ -z "${SCRIPTNAME}" ] ; then
    SCRIPTNAME=/etc/init.d/${NAME}
fi
if [ -z "${SYSTEM_USER}" ] ; then
    SYSTEM_USER=${PROJECT_NAME}
fi
if [ -z "${SYSTEM_GROUP}" ] ; then
    SYSTEM_GROUP=${PROJECT_NAME}
fi
if [ "${SYSTEM_USER}" != "root" ] ; then
    STARTDAEMON_CHUID="--chuid ${SYSTEM_USER}:${SYSTEM_GROUP}"
fi
if [ -z "${CONFIG_FILE}" ] ; then
    CONFIG_FILE=/etc/${PROJECT_NAME}/${PROJECT_NAME}.conf
fi
LOGFILE=/var/log/${PROJECT_NAME}/${NAME}.log
DAEMON_ARGS="${DAEMON_ARGS} --config-file=${CONFIG_FILE}"

echo "description \"${DESC}\"
author \"Thomas Goirand <zigo@debian.org>\"

start on runlevel [2345]
stop on runlevel [!2345]

chdir /var/run

pre-start script
    for i in lock run log lib ; do
        mkdir -p /var/\$i/${PROJECT_NAME}
        chown ${SYSTEM_USER} /var/\$i/${PROJECT_NAME}
    done
end script

script
    [ -x \"${DAEMON}\" ] || exit 0
    DAEMON_ARGS=\"${DAEMON_ARGS}\"
    [ -r /etc/default/openstack ] && . /etc/default/openstack
    [ -r /etc/default/\$UPSTART_JOB ] && . /etc/default/\$UPSTART_JOB
    [ \"x\$USE_SYSLOG\" = \"xyes\" ] && DAEMON_ARGS=\"\$DAEMON_ARGS --use-syslog\"
    [ \"x\$USE_LOGFILE\" != \"xno\" ] && DAEMON_ARGS=\"\$DAEMON_ARGS --log-file=${LOGFILE}\"
    exec start-stop-daemon --start --chdir /var/lib/${PROJECT_NAME} \\
        ${STARTDAEMON_CHUID} --make-pidfile --pidfile ${PIDFILE} \\
        --exec ${DAEMON} -- --config-file=${CONFIG_FILE} \${DAEMON_ARGS}
end script
" >${UPSTART_FILE}

The only thing I don’t know how to do is implement Should-Start / Should-Stop in an Upstart job. Can anyone shoot me a mail and tell me the solution?

Then, I wanted to add support for systemd. Here, we cheated, since we just call the sysv-rc script from the systemd unit; however, the systemd-start target uses exec, so the process stays in the foreground. The generator is also much smaller than the Upstart one. Here, however, I could implement the “After=” directive, corresponding to the Should-Start header:

#!/bin/sh

INIT_TEMPLATE=${1}
SERVICE_FILE=`echo ${INIT_TEMPLATE} | sed 's/.init.in/.service/'`

# Get the variables defined in the init template
. ${INIT_TEMPLATE}

if [ -z "${SCRIPTNAME}" ] ; then
    SCRIPTNAME=/etc/init.d/${NAME}
fi
if [ -z "${SYSTEM_USER}" ] ; then
    SYSTEM_USER=${PROJECT_NAME}
fi
if [ -z "${SYSTEM_GROUP}" ] ; then
    SYSTEM_GROUP=${PROJECT_NAME}
fi

# Find out what should go in After=
SHOULD_START=`cat ${INIT_TEMPLATE} | grep "# Should-Start:" | sed 's/# Should-Start://'`

if [ -n "${SHOULD_START}" ] ; then
    AFTER="After="
    for i in ${SHOULD_START} ; do
        AFTER="${AFTER}${i}.service "
    done
fi

echo "[Unit]
Description=${DESC}
$AFTER

[Service]
User=${SYSTEM_USER}
Group=${SYSTEM_GROUP}
WorkingDirectory=/var/lib/${PROJECT_NAME}
PermissionsStartOnly=true
ExecStartPre=/bin/mkdir -p /var/lock/${PROJECT_NAME} /var/log/${PROJECT_NAME} /var/lib/${PROJECT_NAME}
ExecStartPre=/bin/chown ${SYSTEM_USER}:${SYSTEM_GROUP} /var/lock/${PROJECT_NAME} /var/log/${PROJECT_NAME} /var/lib/${PROJECT_NAME}
ExecStart=${SCRIPTNAME} systemd-start
Restart=on-failure

[Install]
WantedBy=multi-user.target
" >${SERVICE_FILE}
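
Run against the nova-api template above, this generator would emit roughly the following unit (a sketch with the variables substituted; the After= line is derived from the hypothetical Should-Start header sketched earlier):

[Unit]
Description=OpenStack Compute API
After=mysql.service postgresql.service rabbitmq-server.service keystone.service

[Service]
User=nova
Group=nova
WorkingDirectory=/var/lib/nova
PermissionsStartOnly=true
ExecStartPre=/bin/mkdir -p /var/lock/nova /var/log/nova /var/lib/nova
ExecStartPre=/bin/chown nova:nova /var/lock/nova /var/log/nova /var/lib/nova
ExecStart=/etc/init.d/nova-api systemd-start
Restart=on-failure

[Install]
WantedBy=multi-user.target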

As you can see, it’s calling /etc/init.d/${SCRIPTNAME} sytemd-start, which isn’t great. I’d be happy to have comments from systemd user / maintainers on how to fix it to make it better.

Integrating in debian/rules

To integrate with the Debian package build system, we only needed to write this:

override_dh_installinit:
	# Create the init scripts from the template
	for i in `ls -1 debian/*.init.in` ; do \
		MYINIT=`echo $$i | sed s/.init.in//` ; \
		cp $$i $$MYINIT.init ; \
		cat /usr/share/openstack-pkg-tools/init-script-template >>$$MYINIT.init ; \
		pkgos-gen-systemd-unit $$i ; \
	done
	# If there's an upstart.in file, use that one instead of the generated one
	for i in `ls -1 debian/*.upstart.in` ; do \
		MYPKG=`echo $$i | sed s/.upstart.in//` ; \
		cp $$MYPKG.upstart.in $$MYPKG.upstart ; \
	done
	# Generate the upstart job if there's no already existing .upstart.in
	for i in `ls debian/*.init.in` ; do \
		MYINIT=`echo $$i | sed s/.init.in/.upstart.in/` ; \
		if ! [ -e $$MYINIT ] ; then \
			pkgos-gen-upstart-job $$i ; \
		fi \
	done
	dh_installinit --error-handler=true
	# Generate the systemd unit file
	# Note: because dh_systemd_enable is called by the
	# dh sequencer *before* dh_installinit, we have
	# to process it manually.
	for i in `ls debian/*.init.in` ; do \
		pkgos-gen-systemd-unit $$i ; \
		MYSERVICE=`echo $$i | sed 's/debian\///'` ; \
		MYSERVICE=`echo $$MYSERVICE | sed 's/.init.in/.service/'` ; \
		dh_systemd_enable $$MYSERVICE ; \
	done

As you can see, it’s possible to use a debian/*.upstart.in and not use the templating system, in the more complicated case (I needed it mostly for neutron-server and neutron-plugin-openvswitch-agent).

Conclusion

I do not pretend that what I wrote in openstack-pkg-tools is the ultimate solution, but I’m convinced it answers our own needs as the OpenStack maintainers in Debian. There’s a lot of room for improvement (like implementing Should-Start in Upstart jobs, or no longer calling the sysv-rc script from the systemd units), but moving to templates and generated scripts was a very good move, as the init scripts are now way easier to maintain, in a much more unified way. Even though I’m not completely satisfied with the systemd and Upstart implementations, I’m sure the maintainability of the sysv-rc scripts has already improved hugely.

Last and again: please send your comments and help improve the above! :)

Gregor Herrmann: GDAC 2014/14

Sun, 14/12/2014 - 22:27

I just got a couple of mails from the BTS. like almost every day, several times per day. now it made me realize how much I like the BTS, & how happy I am that it works so well & even gets new features. – thanks to the BTS maintainers for their continuous work!

this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

Mario Lang: Data-binding MusicXML

Sun, 14/12/2014 - 21:30

My long-term free software project (Braille Music Compiler) just produced some offspring! xsdcxx-musicxml is now available on GitHub.

I used CodeSynthesis XSD to generate a rather complete object model for MusicXML 3.0 documents. Some of the classes needed a bit of manual adjustment, to make the client API really nice and tidy.

During the process, I have learnt (as is almost always the case when programming) quite a lot. I have to say, once you get the hang of it, CodeSynthesis XSD is really a very powerful tool. I definitely prefer having these 100k lines of code auto-generated from an XML Schema, instead of having to implement small parts of it by hand.

If you are into MusicXML for any reason, and you like C++, give this library a whirl. At least to me, it is what I was always looking for: Rather type-safe, with a quite self-explanatory API.

For added ease of integration, xsdcxx-musicxml is sub-project friendly. In other words, if your project uses CMake and Git, adding xsdcxx-musicxml as a subproject is as easy as using git submodule add and putting add_subdirectory(xsdcxx-musicxml) into your CMakeLists.txt.
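
Concretely, that amounts to something like this (a sketch; the repository URL is my guess at the project's GitHub location mentioned above):

git submodule add https://github.com/mlang/xsdcxx-musicxml.git xsdcxx-musicxml
echo 'add_subdirectory(xsdcxx-musicxml)' >> CMakeLists.txt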

Gregor Herrmann: RC bugs 2014/49-50

Sun, 14/12/2014 - 17:01

it's getting harder to find "nice" RC bugs, due to the efforts of various bug hunters & the awesome auto-removal-from-testing feature. – anyway, here's the list of bugs I worked on in the last 2 weeks:

  • #766740 – gamera: "gamera FTBFS on arm64, testsuite failure."
    sponsor maintainer upload
  • #766773 – irssi-plugin-xmpp: "irssi-plugin-xmpp: /query <JID> fails with "Irssi: critical query_init: assertion 'query->name != NULL' failed""
    add some speculation to the bug report, request binNMU after submitter's confirmation, close this bug afterwards
  • #768127 – dhelp: "Fails to build the index when invalid UTF-8 is met"
    apply patch from Daniel Getz, upload to DELAYED/5
  • #770672 – src:gnome-packagekit: "gnome-packagekit: FTBFS without docbook: reference to entity "REFENTRY" for which no system identifier could be generated"
    provide information, ask for clarification, severity lowered by maintainer
  • #771496 – dpkg-cross: "overwrites user changes to configuration file /etc/dpkg-cross/cross-compile on upgrade (violates 10.7.3)"
    tag confirmed and add information, later downgraded by maintainer, then set back to RC by submitter …
  • #771500 – darcsweb: "darcsweb: postinst uses /usr/share/doc content (Policy 12.3): /usr/share/doc/darcsweb/examples/darcsweb.conf"
    install config sample into /usr/share/<package>, upload to DELAYED/5
  • #771501 – pygopherd: "pygopherd: postinst uses /usr/share/doc content (Policy 12.3): /usr/share/doc/pygopherd/examples/gophermap"
    sponsor NMU from Cameron Norman, upload to DELAYED/5
  • #771727 – fex: "fex: postinst uses /usr/share/doc content (Policy 12.3)"
    propose patch, installing config templates under /usr/share/<package>, upload to DELAYED/5 later
  • #772005 – libdevice-cdio-perl: "libdevice-cdio-perl: Debian patch causes Perl crashes in Device::Cdio::ISO9660::IFS's readdir: "Error in `/usr/bin/perl': realloc(): invalid next size: 0x0000000001f05850""
    reproduce the bug (pkg-perl)
  • #772159 – ruby-moneta: "ruby-moneta: leaves mysqld running after build"
    apply patch from Colin Watson, upload to DELAYED/2
Enrico Zini: html5-sse

Sun, 14/12/2014 - 16:32
HTML5 Server-sent events

I have a Django view that runs a slow script server-side, and streams the script output to Javascript. This is the bit of code that runs the script and turns the output into a stream of events:

import fcntl
import os
import select

def stream_output(proc):
    '''
    Take a subprocess.Popen object and generate its output, line by line,
    annotated with "stdout" or "stderr". At process termination it
    generates one last element: ("result", return_code) with the return
    code of the process.
    '''
    fds = [proc.stdout, proc.stderr]
    bufs = [b"", b""]
    types = ["stdout", "stderr"]
    # Set both pipes as non-blocking
    for fd in fds:
        fcntl.fcntl(fd, fcntl.F_SETFL, os.O_NONBLOCK)
    # Multiplex stdout and stderr with different prefixes
    while len(fds) > 0:
        s = select.select(fds, (), ())
        for fd in s[0]:
            idx = fds.index(fd)
            buf = fd.read()
            if len(buf) == 0:
                # Pipe closed: flush any partial last line, then stop
                # watching it (popping all three lists keeps them in sync)
                if len(bufs[idx]) != 0:
                    yield types[idx], bufs[idx]
                fds.pop(idx)
                bufs.pop(idx)
                types.pop(idx)
            else:
                bufs[idx] += buf
                lines = bufs[idx].split(b"\n")
                bufs[idx] = lines.pop()
                for l in lines:
                    yield types[idx], l
    res = proc.wait()
    yield "result", res

I used to just serialize its output and stream it to JavaScript, then monitor onreadystatechange on the XMLHttpRequest object browser-side, but then it started failing on Chrome, which won't trigger onreadystatechange until something like a kilobyte of data has been received.

I didn't want to stream a kilobyte of padding just to work around this, so it was time to try out Server-sent events. See also this.

This is the Django view that sends the events:

class HookRun(View):
    def get(self, request):
        proc = run_script(request)
        def make_events():
            for evtype, data in utils.stream_output(proc):
                if evtype == "result":
                    yield "event: {}\ndata: {}\n\n".format(evtype, data)
                else:
                    yield "event: {}\ndata: {}\n\n".format(evtype, data.decode("utf-8", "replace"))
        return http.StreamingHttpResponse(make_events(), content_type='text/event-stream')

    @method_decorator(never_cache)
    def dispatch(self, *args, **kwargs):
        return super().dispatch(*args, **kwargs)
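
The wire format this produces can be checked with curl before touching any JavaScript (the URL and the output shown here are illustrative; -N disables curl's buffering so events appear as they arrive):

$ curl -N http://localhost:8000/hookrun/example
event: stdout
data: first line of script output

event: result
data: 0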

And this is the template that renders it:

{% extends "base.html" %}
{% load i18n %}

{% block head_resources %}
{{block.super}}
<style type="text/css">
.out {
    font-family: monospace;
    padding: 0;
    margin: 0;
}
.stdout {}
.stderr { color: red; }
.result {}
.ok { color: green; }
.ko { color: red; }
</style>
{# Polyfill for IE, typical... https://github.com/remy/polyfills/blob/master/EventSource.js #}
<script src="{{ STATIC_URL }}js/EventSource.js"></script>
<script type="text/javascript">
$(function() {
    // Manage spinners and other ajax-related feedback
    $(document).nav();
    $(document).nav("ajax_start");

    var out = $("#output");
    var event_source = new EventSource("{% url 'session_hookrun' name=name %}");
    event_source.addEventListener("open", function(e) {
        //console.log("EventSource open:", arguments);
    });
    event_source.addEventListener("stdout", function(e) {
        out.append($("<p>").attr("class", "out stdout").text(e.data));
    });
    event_source.addEventListener("stderr", function(e) {
        out.append($("<p>").attr("class", "out stderr").text(e.data));
    });
    event_source.addEventListener("result", function(e) {
        if (+e.data == 0)
            out.append($("<p>").attr("class", "result ok").text("{% trans 'Success' %}"));
        else
            out.append($("<p>").attr("class", "result ko").text("{% trans 'Script failed with code' %} " + e.data));
        event_source.close();
        $(document).nav("ajax_end");
    });
    event_source.addEventListener("error", function(e) {
        // There is an annoyance here: e does not contain any kind of error
        // message.
        out.append($("<p>").attr("class", "result ko").text("{% trans 'Error receiving script output from the server' %}"));
        console.error("EventSource error:", arguments);
        event_source.close();
        $(document).nav("ajax_end");
    });
});
</script>
{% endblock %}

{% block content %}
<h1>{% trans "Processing..." %}</h1>
<div id="output">
</div>
{% endblock %}

It's simple enough, it seems reasonably well supported (besides needing a polyfill for IE) and, astonishingly, it even works!

Dirk Eddelbuettel: rfoaas 0.0.4.20141212

Sun, 14/12/2014 - 01:20

A new version of rfoaas is now on CRAN. The rfoaas package provides an interface for R to the most excellent FOAAS service -- which provides a modern, scalable and RESTful web service for the frequent need to tell someone to eff off.

The FOAAS backend gets updated in spurts, and yesterday a few pull requests were integrated, including one from yours truly. So with that, it was time for an update to rfoaas. As the version number upstream did not change (bad, bad practice), I appended the date to the version number.

CRANberries also provides a diff to the previous release. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Gregor Herrmann: GDAC 2014/13

Sat, 13/12/2014 - 21:48

not sure if it's me or debian, but today was a quiet day. time to look back & see what has happened this year … & this brings up memories of this year's & earlier debconfs, with their pkg-perl BOFs & their outdoor hacklabs. – looking through these photos of past events makes me grateful, both to the tireless organizers of debconf, & to the people who can share a bench with me for hours :)

this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

Holger Levsen: 20141213-on-having-fun-in-debian

Sat, 13/12/2014 - 17:11
On having fun in Debian

(Thanks to cardboard-crack.com for this awesome comic!)

Keith Packard: present-compositor

Sat, 13/12/2014 - 09:28
Present and Compositors

The current Present extension is pretty unfriendly to compositing managers, causing an extra frame of latency between the application's operation and the scanout buffer. Here's how I'm fixing that.

An extra frame of lag

When an application uses PresentPixmap, that operation is generally delayed until the next vblank interval. When using X without compositing, this ensures that the operation will get started in the vblank interval, and, if the rendering operation is quick enough, you'll get the frame presented without any tearing.

When using a compositing manager, the operation is still delayed until the vblank interval. That means that the CopyArea and subsequent Damage event generation don't occur until the display has already started the next frame. The compositing manager receives the damage event and constructs a new frame, but it also wants to avoid tearing, so that frame won't get displayed immediately; instead, it'll get delayed until the next frame, introducing the lag.

Copy now, complete later

While away from the keyboard this morning, I had a sudden idea -- what if we performed the CopyArea and generated Damage right when the PresentPixmap request was executed, but delayed the PresentComplete event until vblank happened?

With the contents updated and damage delivered, the compositing manager can immediately start constructing a new scene for the upcoming frame. When that is complete, it can also use PresentPixmap (either directly or through OpenGL) to queue the screen update.

If it's fast enough, that will all happen before vblank and the application contents will actually appear at the desired time.

Now, at the appointed vblank time, the PresentComplete event will get delivered to the client, telling it that the operation has finished and that its contents are now on the screen. If the compositing manager was quick, this event won't even be a lie.

We'll be lying less often

Right now, the CopyArea, Damage and PresentComplete operations all happen after the vblank has passed. As the compositing manager delays the screen update until the next vblank, then every single PresentComplete event will have the wrong UST/MSC values in it.

With the CopyArea happening immediately, we've a pretty good chance that the compositing manager will get the application contents up on the screen at the target time. When this happens, the PresentComplete event will have the correct values in it.

How can we do better?

The only way to do better is to have the PresentComplete event generated when the compositing manager displays the frame. I've talked about how that should work, but it's a bit twisty, and will require changes in the compositing manager to report the association between their PresentPixmap request and the applications' PresentPixmap requests.

Where's the code

I've got a set of three patches, two of which restructure the existing code without changing any behavior and a final patch which adds this improvement. Comments and review are encouraged, as always!

git://people.freedesktop.org/~keithp/xserver.git present-compositor
Thorsten Glaser: WTF is Jessie; PA4 paper size

Sat, 13/12/2014 - 00:56

My personal APT repository now has a jessie suite – currently just a clone of the sid suite, but this way, people can get onto the correct “upgrade channel” already.

Besides that, the usual small updates to my metapackages, bugfixes, etc. – You might have noticed that the repository is now at a (hopefully permanent) location. I’ve put a donated eee-pc from my father to good use and am now running a Debian system at home. (Fun, as I’m officially emeritus now, and haven’t had one during my time as an active uploading DD.) I’ve created a couple of cowbuilder chroots (a pbuilderrc to achieve that is included in the repo) and can build packages, though for i386 only (amd64 is still done on the x32 desktop at work); more importantly, I can build, sign and publish the repo, so it may grow. (The popcon data is interesting: more than double the number of machines I have installed that stuff on.)

Installing gimp and inkscape, I’m asked by libpaper1 for a default paper size. PA4 is still not an option; I wonder why. I also haven’t managed to get MirPorts GNU groff and Artifex Ghostscript to use that paper size, so the various PDF manpages I produce still use DIN ISO A4, rendering e.g. Mexicans unable to print them. Help welcome.
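
For Ghostscript, I would expect something along these lines to force the page size, although this is an untested sketch; PA4 is 210 mm × 280 mm, i.e. roughly 595 pt × 794 pt:

# Re-page an existing PDF onto PA4-sized media:
gs -sDEVICE=pdfwrite -o manpage-pa4.pdf \
   -dDEVICEWIDTHPOINTS=595 -dDEVICEHEIGHTPOINTS=794 -dFIXEDMEDIA \
   manpage.pdf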

Daniel Kahn Gillmor: a10n for l10n

Sat, 13/12/2014 - 00:00
The abbreviated title above means "Appreciation for Localization" :)

I wanted to say a word of thanks for the awesome work done by debian localization teams. I speak English, and my other language skills are weak. I'm lucky: most software I use is written by default in a language that I can already understand.

The debian localization teams do great work in making sure that packages in debian get translated into many other languages, so that many more people around the world can take advantage of free software.

I was reminded of this work recently (again) with the great patches submitted to GnuPG and related packages. The changes were made by many different people, and coordinated with the debian GnuPG packaging team by David Prévot.

This work doesn't just help debian and its users. These localizations make their way back upstream to the original projects, which in turn are available to many other people.

If you use debian, and you speak a language other than english, and you want to give back to the community, please consider joining one of the localization teams. They are a great way to help out our project's top priorities: our users and free software.

Thank you to all the localizers!

(this post was inspired by gregoa's debian advent calendar. i won't be posting public words of thanks as frequently or as diligently as he does, any more than i'll be fixing the number of RC bugs that he fixes. These are just two of the ways that gregoa consistently leads the community by example. He's an inspiration, even if living up to his example is a daunting challenge.)
