Feed aggregator

Joachim Breitner: Sim Serim as a browser game

Planet Debian - Sun, 29/12/2013 - 22:08

Recently, I gave the very elegant board game “Sim Serim” away as a present without actually playing it, leaving me curious about the game. One can easily play it on paper (you just need paper and 4 tokens, such as coins, for each player; BoardGameGeek has the English rules of the game), but I took this as an opportunity to learn more about HTML5 canvas and nodejs, and so I created the browser game “Sum Serum”, implementing the game mechanics of “Sim Serim”. You can play it locally with two people, or over the network. Contributions are welcome! Especially slicker graphics...

In other news, given that in recent years I have become one, I'm on BoardGameGeek myself now; see my profile there.

Categories: Elsewhere

Cheppers blog: Global Sprint Weekend January 25 and 26 2014

Planet Drupal - Sun, 29/12/2013 - 18:40

Global Sprint Weekend is a worldwide event you can participate in. Small local sprints in lots of locations, over the same time period: Saturday and Sunday January 25 and 26, 2014. These sprints will usually be 2-15 people in one location, together, working to make Drupal better.

You can make your own locations if no location is near you! Currently people have announced locations in Sevilla Spain; Berlin, Mannheim and Schwerin Germany; Ghent Belgium; Budapest Hungary (hosted at Cheppers and led by Gábor Hojtsy); Manchester UK; Vancouver, London (Ontario) and Montréal Canada; Oak Park, Chicago, Milwaukee, Boston, Minneapolis and Austin USA.

Categories: Elsewhere

Hideki Yamane: creating Ubuntu chroot becomes easier - ubuntu-archive-keyring package in Debian

Planet Debian - Sun, 29/12/2013 - 17:24
Hi, I've just uploaded the ubuntu-keyring source package to Debian (its RFP was posted on 26th Dec 2007, wow, 6 years ago) and it was accepted quickly (thanks to the ftpmasters :-)

If you're using Debian as your primary environment and need to rebuild a package for Ubuntu, you'd create an Ubuntu pbuilder/cowbuilder/chroot environment on your box. But you'll get an error like the one below.
henrich@hp:~ $ sudo cowbuilder --create --distribution saucy --mirror http://archive.ubuntu.com/ubuntu --basepath /var/cache/pbuilder/saucy
 -> Invoking pbuilder
  forking: pbuilder create --buildplace /var/cache/pbuilder/saucy --mirror http://archive.ubuntu.com/ubuntu --distribution saucy --no-targz --extrapackages cowdancer
I: Running in no-targz mode
I: Distribution is saucy.
I: Current time: Mon Dec 30 01:00:05 JST 2013
I: pbuilder-time-stamp: 1388332805
I: Building the build environment
I: running debootstrap
/usr/sbin/debootstrap
I: Retrieving Release
I: Retrieving Release.gpg
I: Checking Release signature
E: Release signed by unknown key (key id 3B4FE6ACC0B21F32)
E: debootstrap failed
W: Aborting with an error
pbuilder create failed
  forking: rm -rf /var/cache/pbuilder/saucy 
You get "E: Release signed by unknown key (key id 3B4FE6ACC0B21F32)" because your system doesn't know the Ubuntu release key. Okay, then run "$ sudo apt-get install ubuntu-archive-keyring" and add the --debootstrapopts "--keyring=/usr/share/keyrings/ubuntu-archive-keyring.gpg" option to cowbuilder.
henrich@hp:~ $ sudo cowbuilder --create --distribution saucy --mirror http://archive.ubuntu.com/ubuntu --basepath /var/cache/pbuilder/saucy --debootstrapopts "--keyring=/usr/share/keyrings/ubuntu-archive-keyring.gpg"
 -> Invoking pbuilder
  forking: pbuilder create --debootstrapopts --keyring=/usr/share/keyrings/ubuntu-archive-keyring.gpg --buildplace /var/cache/pbuilder/saucy --mirror http://archive.ubuntu.com/ubuntu --distribution saucy --no-targz --extrapackages cowdancer
I: Running in no-targz mode
I: Distribution is saucy.
I: Current time: Mon Dec 30 01:06:30 JST 2013
I: pbuilder-time-stamp: 1388333190
I: Building the build environment
I: running debootstrap
/usr/sbin/debootstrap
I: Retrieving Release
I: Retrieving Release.gpg
I: Checking Release signature
I: Valid Release signature (key id 790BC7277767219C42C86F933B4FE6ACC0B21F32)
I: Retrieving Packages
I: Validating Packages
I: Resolving dependencies of required packages...
I: Resolving dependencies of base packages...
I: Checking component main on http://archive.ubuntu.com/ubuntu...
I: Retrieving apt 0.9.9.1~ubuntu3
I: Validating apt 0.9.9.1~ubuntu3
I: Retrieving base-files 6.12ubuntu4
(snip)

Good :)

Note: just adding the "--keyring" option (--keyring /usr/share/keyrings/ubuntu-archive-keyring.gpg) doesn't work as expected. A bug?

Differences from the ubuntu-keyring package in Ubuntu

  • The binary package name is "ubuntu-archive-keyring", not "ubuntu-keyring". It's better to use "foobar-archive-keyring" for such packages, the same as debian-archive-keyring.
  • No udebs. We probably don't need them.


Categories: Elsewhere

Steve Kemp: A good week?

Planet Debian - Sun, 29/12/2013 - 15:59

This week my small collection of sysadmin tools received a lot of attention; I've no idea what triggered it, but it ended up on the front page of GitHub as a "trending repository".

Otherwise I've recently spent some time "playing about" with some security stuff. My first recent report wasn't deemed worthy of a security update, but it was still a fun one. In its package description, rush is described as:

GNU Rush is a restricted shell designed for sites providing only limited access to resources for remote users. The main binary executable is configurable as a user login shell, intended for users that only are allowed remote login to the system at hand.

As the description says, this is primarily intended for use by remote users, but if it is installed locally you can use it to read any file on the local system.

How? Well, the program is setuid root and allows you to specify an arbitrary configuration file as input. The very first thing I tried to do with this program was feed it an invalid configuration file that was unreadable to me.

Helpfully there is a debugging option, --lint, to help you set up the software. Using it is as simple as:

shelob ~ $ rush --lint /etc/shadow
rush: Info: /etc/shadow:1: unknown statement: root:$6$zwJQWKVo$ofoV2xwfsff...Mxo/:15884:0:99999:7:::
rush: Info: /etc/shadow:2: unknown statement: daemon:*:15884:0:99999:7:::
rush: Info: /etc/shadow:3: unknown statement: bin:*:15884:0:99999:7:::
rush: Info: /etc/shadow:4: unknown statement: sys:*:15884:0:99999:7:::
..

How nice?

The only mitigating factor here is that only the first token on each line is reported. In this case we've exposed /etc/shadow, which doesn't contain whitespace on the lines for the interesting users, so it's enough to start cracking those password hashes.
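The general defence against this class of bug is to check the user-supplied path against the invoking user's privileges before opening it. Here is a minimal sketch in Python (the function name is made up; rush itself is C, and a real fix would drop privileges rather than rely on access()):

```python
import os

def open_user_config(path):
    # Hypothetical sketch, NOT rush's actual code: a setuid-root program
    # should refuse user-supplied paths that the *real* invoking user
    # cannot read. os.access() checks against the real UID/GID rather
    # than the effective ones, which is exactly what is needed here.
    if not os.access(path, os.R_OK):
        raise PermissionError("%s: not readable by the real uid" % path)
    # A production implementation must also consider the access()/open()
    # race (TOCTOU), e.g. by temporarily dropping privileges instead.
    return open(path)
```

The same check is why a setuid binary should never blindly trust paths, even ones fed to "harmless" debugging options like --lint.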

If you maintain a setuid binary you must be trying things like this.

If you maintain a setuid binary you must be confident in the codebase.

People will be happy to stress-test, audit, examine, and help you - just ask.

Simple security issues like this are frankly embarrassing.

Anyway that's enough: #733505 / CVE-2013-6889.

Categories: Elsewhere

Hideki Yamane: kernel.org stops using bz2

Planet Debian - Sun, 29/12/2013 - 15:01
Many upstreams provide their source in tar.xz format, and kernel.org is also throwing bz2 away: "Happy new year and good-bye bzip2".

Debian packages (dpkg-deb) have used xz by default since 1.17.0, as I mentioned. If your package uses bz2, now is the time to switch. And maybe it's good to ask upstream to switch from bzip2 to xz, too.
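Both formats are available in Python's standard library, which makes a toy round-trip comparison easy (an illustration only, not a benchmark of real source tarballs):

```python
import bz2
import lzma

# Deliberately redundant sample data; real tarballs compress less well.
data = b"the quick brown fox jumps over the lazy dog\n" * 1000

xz_blob = lzma.compress(data)   # .xz container, LZMA2 filter
bz2_blob = bz2.compress(data)

# Both round-trip losslessly; xz usually wins on size and on
# decompression speed for large inputs, which is the reason for
# the kernel.org and dpkg switches.
assert lzma.decompress(xz_blob) == data
assert bz2.decompress(bz2_blob) == data
print(len(xz_blob), len(bz2_blob))
```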
Categories: Elsewhere

Russ Allbery: Review: The Incrementalists

Planet Debian - Sun, 29/12/2013 - 07:47

Review: The Incrementalists, by Steven Brust & Skyler White

Publisher: Tor
Copyright: September 2013
ISBN: 0-7653-3422-4
Format: Hardcover
Pages: 304

The Incrementalists are a secret society that has been in continuous existence for forty thousand years. The members are immortal... sort of (more on that in a moment). They are extremely good at determining what signals, triggers, and actions have the most impact on people, and adept at using those triggers to influence people's behavior. And they're making the world better. Not in any huge, noticeable way, just a little bit, here and there. Incrementally.

Brust and White completely had me with this concept. There are several SF novels about immortals, but (apart from the vampire subgenre) they mostly stay underground and find ways to survive without trying to change the surrounding human society. I love the idea of small, incremental changes, and I was looking forward to reading a book about how that would work. How do they choose what to do? How do they find the right place to push? What long-term goals would such an organization pursue, and what governance structure would they use?

I'm still wondering about most of those things, sadly, since that isn't what this novel is about at all.

I should be fair: a few of those questions hover in the background. There are some political arguments (that parallel arguments on Brust's blog), and a tiny bit about governance. But mostly this is a sort of romance, a fight over identity, and an extensive exploration of the mental "garden" that the Incrementalists use and build to hold their memories and share information with each other.

The story revolves around two Incrementalists: Phil, who is one of the leaders and has one of the longest continuous identities of any of the group, and Renee, newly recruited by Phil, who picks up the memories and capabilities (sort of) of the recently-deceased Incrementalist Celeste. Phil is the long-term insider, although a fairly laid-back one. Renee is the outsider, the one to whom the story is happening at the beginning, who is taught how the Incrementalists' system works. The Incrementalists is told in alternating first-person viewpoint sections by Phil and Renee (making me wonder if the characters were each written by one of the authors).

Once I got past my disappointment over this book not being quite what I wanted, the identity puzzles that Brust and White play with here caught my attention. The underlying "magic" of the Incrementalists is based on the "memory palace" concept of storing memories via visualizations, but takes it into something akin to personal alternate worlds. Their immortality isn't through physical immortality, but rather the ability to store their memories and personality in their garden, which is their term for the memory palace, and then have it imposed on someone else: a combination of reincarnation, mental takeover, and mental merging. This makes for complex interpersonal relationships and complex interactions between memory, belief, and identity, not to mention some major ethical issues, which come to a head in Phil and Renee's relationship and Celeste's lingering meddling.

I think Brust and White are digging into some interesting questions of character here. There's a lot of emphasis in The Incrementalists on the ease with which people can be manipulated, and the Incrementalists themselves are far from immune. Choice and personal definition are both questionable concepts given how much influence other people have, how many vulnerabilities everyone carries with them, and how much one's opinions are governed by one's life history. That makes identity complex, and raises the question of whether one can truly define oneself.

But, while all these ideas are raised, I think The Incrementalists dances around them more than engages them directly. It's similar to the background hook: yes, these people are slowly improving the world, and we see a little bit of that (and a little bit of them manipulating people for their own convenience), but the story doesn't fully engage with the idea. There's a romance, a few arguments, some tension, some world-building, but I don't feel like this book ever fully committed to any of them. One of the occasional failure modes of Brust for me is insufficient explanation and insufficient clarity, and I hit that here.

My final impression of The Incrementalists is interesting, enjoyable, but vaguely disappointing. It's still a good story with some interesting characters and nice use of the memory palace concept, and I liked Renee throughout (although I think the development of the love story is disturbingly easy and a little weird). But I can't strongly recommend it, and I'm not sure if it's worth seeking out.

Rating: 7 out of 10

Categories: Elsewhere

Gergely Nagy: Introducing the syslog-ng Incubator

Planet Debian - Sun, 29/12/2013 - 04:16

When syslog-ng 3.5.1 was released, the existence of the syslog-ng Incubator project was already announced, but I did not go into detail as to why it exists, what is in there, how to build it, or how to use it. Documentation is almost non-existent, and what does exist is usually in the form of a commit message and some example files. There is nothing on how the things in the Incubator can be used, and what problems they solve.

With this post, I will try to ease the situation a little, and provide some insight on how I use some of the things in there, like the Riemann destination.

About the Incubator

The idea for the Incubator had existed for a long time; the first incarnation of it was called the syslog-ng module collection, and had a different scope: it was a playing field not only for new modules, but for updates to existing ones too. The idea was to try out new, possibly experimental features separately from syslog-ng. That didn't work well: it was a nightmare to maintain, and even harder to use. So it never got anywhere, as the developments made there were quickly merged into syslog-ng itself anyway.

Then, two years later, I had a need for a new destination, but there was no open development branch to base it on, and forking and maintaining a whole syslog-ng branch for the purpose of a single module sounded like overkill. So the Incubator was born, from the ashes of the failed module collection.

The purpose of the Incubator is to be a place for experimental modules, and an easier way to enter the world of syslog-ng development. It's something with far fewer requirements than syslog-ng (I do not care all that much about portability when it comes to the Incubator, especially not to horrible abominations like HP-UX or AIX), fewer rules, and more freedom. It's meant to be more developer-friendly than syslog-ng proper.

It also serves as an example of developing modules completely external to syslog-ng, and parts of it will be used in future posts of mine, to demonstrate the development of new modules.

Compiling & Installation

The first thing we need to do is compile whatever we need from the Incubator, unless we happen to have pre-built binaries available (which are going to appear in your usual third-party syslog-ng repositories over the next few weeks). For this, at a minimum, one will need syslog-ng 3.5+ and ivykis installed, along with autotools (autoconf, automake and libtool), pkg-config, bison, and, depending on which modules one needs, riemann-c-client and libmongo-client too. Most of these are packaged in better distributions, but only in recent versions, so one may need to compile those too. They follow a very similar scheme, though.

If everything is installed at the standard locations, getting the Incubator to compile is as simple as this:

$ git clone https://github.com/balabit/syslog-ng-incubator.git
$ cd syslog-ng-incubator
$ autoreconf -i
$ ./configure && make && sudo make install

If anything happens to be installed in a non-standard location, one will need to adjust PKG_CONFIG_PATH to help the configure script locate the needed libraries.

Once installed (the configure script will figure out where syslog-ng modules are, and the modules will be put there), syslog-ng will automatically recognise the new modules. One can make sure this is the case by running the following command after installation:

$ syslog-ng --module-registry

The Plugins

Now that we're over the hard part of compiling and installing the Incubator, let's see what is inside! I will start with the easier things, and move on to the more complicated features as we progress. That is, we'll start with some template functions, have a glance at the trigger source, then explore the rss destination, and finish off with the riemann destination.

Template functions

The Incubator gives us three new template functions, some less useful than others, and one that's a huge, ugly hack for a problem that I ended up solving in a very different way - without the hack.

The first of these functions is $(or), which takes any number of arguments and returns the first one that is not empty. The main use case is normalization: if you have, say, similarly named fields, but some messages carry one and others another, $(or) is one way to pick whichever is present:

$(or ${HOST} ${HOSTNAME} ${HOST_NAME} "<unknown>")

Another function is $(//), which does the same thing as the built-in $(/) template function: divide its arguments. Except this one works with floating-point numbers only, while the built-in one is for integers exclusively. Using it is simple, too:

$(// ${some_number} 3.4)

The last template function provided by the Incubator is $(state), which can be used to maintain global state that does not depend on log messages. You can set values in here, like counters, from within a template function. It is possible to count the total amount of downloaded data when processing an HTTP server log, for example. But it's slow, and there are better ways to do the same thing; syslog-ng really isn't the best tool for this kind of job. If anyone happens to find a use-case for it, please let me know. As for using it, it has two modes: set (with two arguments) and get (with one):

$(state some-variable ${VALUE})
$(state some-variable)

Trigger source

The trigger source has much in common with the built-in mark feature: at given intervals, it sends a message. This is mostly a debugging aid, for when you want to generate messages without an external tool. It only has two options: trigger-freq() and trigger-message(), which default to 10 and "Trigger source is trigger happy.", respectively. It also accepts a number of common source options such as program-override(), host-override() and tags().

To use it, one just needs to set it up like any other source, and bind it to a destination with a log statement:

source s_trigger {
  trigger(
    program-override("trigger")
    tags("trigger-happy")
    trigger-freq(5)
    trigger-message("Beep.")
  );
};

Without a program-override() option, messages will be attributed to syslog-ng, which is likely not what you want, even while debugging. Internal messages are usually routed somewhere else.

RSS destination

The RSS destination is an interesting beast. It offers an Atom feed of the last hundred messages routed to the destination. I could very well imagine this being useful in a situation where one already has monitoring set up to listen to various RSS sources; this would just be another one. It also works well with most RSS feed readers. The length of the feed is not configurable at this time, and the number of options is limited to port(), title(), entry-title() and entry-description().

The first one specifies which port the destination should listen on (it serves one client at a time!); title() can be used to set the title of the feed itself, while entry-title() and entry-description() can be used with templates to fill in the per-message Atom entries.
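The fixed hundred-entry feed behaves like a ring buffer over incoming messages. A small Python illustration of that behaviour (this is not the module's implementation, which is C; the dictionary layout is mine):

```python
from collections import deque

FEED_LENGTH = 100  # the destination's fixed, non-configurable length

feed = deque(maxlen=FEED_LENGTH)  # oldest entries fall off the end

# Route 250 "messages" to the feed; only the newest 100 survive.
for n in range(250):
    feed.appendleft({"title": "message %d" % n})

assert len(feed) == 100
assert feed[0]["title"] == "message 249"   # newest first
assert feed[-1]["title"] == "message 150"  # oldest retained entry
```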

Once we have a suitable path we want to route to the RSS destination (such as critical error messages only), we can set it up like this:

destination d_rss {
  rss(
    port(8192)
    feed-title("Critical errors in the system")
    entry-title("Error from ${PROGRAM} @ ${HOST_FROM} at ${ISODATE}")
    entry-description("${MESSAGE}")
  );
};

Riemann destination

Being the original motivation for the Incubator, I left this for last. This module is the interface between your logs and the Riemann monitoring system. With it, you can take all the legacy applications that are hard to monitor but provide log files, use syslog-ng's extraordinary log processing power, and send clear and concise events over to Riemann.

One can use it to monitor logins, downloads, uploads, exceptions - pretty much anything. Just extract some metric or state, send it over to Riemann, and it will do the heavy lifting. What exactly can be done will be worth a separate blog post, so for now I will give just a very tiny example:

destination d_riemann {
  riemann(
    ttl("120")
    description("syslog-ng internal errors")
    metric(int("${SEQNUM}"))
  );
};

Hook it up to a path that collects syslog-ng's internal messages, keeps only the error messages, and routes them toward this destination:

log {
  source { internal(); };
  filter { level(err..emerg); };
  destination(d_riemann);
};

The destination itself has the following options:

  • server(): The server to connect to, defaults to localhost.
  • port(): The port the Riemann server is listening on, defaults to 5555.
  • type(): The type of connection: UDP or TCP. Defaults to TCP.
  • host(): The host field of the Riemann event, defaults to ${HOST}.
  • service(): The service field of the Riemann event, defaults to ${PROGRAM}.
  • state(): The state field of the Riemann event, without a default.
  • description(): The description of the event, with no default.
  • ttl(): The time-to-live of the event, no default.
  • metric(): The metric to send to Riemann. This needs to be either an integer or a floating point number. Using type-hinting is advised here. Without one, the destination will try to parse the value of this option as a float.
  • tags(): As the name implies, this adds tags to the Riemann event. By default, all the tags that are set on the message, will be forwarded.
  • attributes(): With this option, one can set custom attributes on the Riemann event. The syntax is the same as for value-pairs(), with a few enhancements. What the differences are is left as an exercise for the reader: the example config that comes with the Incubator has a hint.
Closing thoughts

The Incubator contains one more tool, one which can be used to visualise logs in strange ways. But that's not strictly related to syslog-ng, and isn't a module either, so I will not describe it here right now. The above was quite a lot already, I believe.

As a closing thought, I would just like to say that while the Incubator is a home for experimental modules, some of them are used in production. Don't be afraid to use them, especially when packages start to arrive in your favourite GNU/Linux distribution!

Categories: Elsewhere

Wouter Verhelst: On schroot

Planet Debian - Sun, 29/12/2013 - 02:30
I: 00check: Untarring chroot environment. This might take a minute or two.

liar.

Categories: Elsewhere

Russ Allbery: lbcd 3.5.0

Planet Debian - Sun, 29/12/2013 - 01:41

This turned out to be a more substantial release than I was originally planning.

lbcd is a small daemon that listens to UDP queries and returns system load and usage information and, optionally, service availability. It's designed to work with a software DNS-based load-balancing system that we use at Stanford (lbnamed), but it can also be used in other situations where you want to easily and quickly query the load of systems remotely.

Debian is currently debating which new init system to adopt, and lbcd was my test package for adding upstart and systemd support. The goal was to add proper, full support as described by upstream, making use of all relevant features, so that I could get some experience with a complete integration. That is complete in this version: lbcd now supports both systemd's and upstart's readiness synchronization mechanisms (using a -Z flag to raise SIGSTOP for upstart) and supports socket activation with systemd. (upstart's socket activation support is missing numerous features, including IPv6 support and, fatally, SOCK_DGRAM support.) It also installs systemd unit files directly when built on a system with systemd support, which should make life easier on later Red Hat versions regardless of what happens with Debian.
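The systemd side of that socket handover is a small protocol: listening sockets arrive as file descriptors starting at 3, announced via the $LISTEN_FDS and $LISTEN_PID environment variables. A rough Python sketch of the receiving end (lbcd itself is C and uses the sd-daemon interfaces; the function name here is mine):

```python
import os
import socket

SD_LISTEN_FDS_START = 3  # first fd passed by the service manager

def inherited_sockets():
    """Return sockets handed over by systemd-style socket activation,
    or an empty list when the daemon was started normally."""
    # LISTEN_PID guards against accidentally inheriting the variables
    # from a parent process that was itself socket-activated.
    if os.environ.get("LISTEN_PID") != str(os.getpid()):
        return []
    nfds = int(os.environ.get("LISTEN_FDS", "0"))
    # socket.socket(fileno=...) autodetects family/type (Python 3.4+),
    # so both IPv4 and IPv6 listeners come through unchanged.
    return [socket.socket(fileno=fd)
            for fd in range(SD_LISTEN_FDS_START,
                            SD_LISTEN_FDS_START + nfds)]
```

A daemon using this simply serves whatever sockets it inherits and falls back to binding its own when the list is empty.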

Of course, while doing that, I kept running across other things that needed to be fixed. For example, systemd's socket activation provides proper support for IPv6, so lbcd should as well. That's now present; that was something I'd been planning on doing for some time. It also supports a -f option to run in the foreground but still log to syslog, something needed by both upstart and systemd to avoid having to use PID files.

Since PID files are no longer necessary, lbcd no longer writes one by default (an idiosyncratic choice made by the previous maintainer), and also drops the -s and -r options to stop and restart itself. Adding these to each daemon was an interesting approach, but I think it's better to leave this to the init system.

While working on the code, I discovered that lbcd allowed the client to request that any of the built-in service probes be run, which meant that a client could cause TCP connections to arbitrary local services. While this probably couldn't do any harm other than a DoS attack, it still seemed like a bad idea, and it was a "feature" I didn't realize was there. Now, only services specified with the -w or -a options may be queried by a client.

I also finally implemented the -l option, which logs each client query, and improved lbcd's recognition of whether someone is on console to allow for modern display manager sessions.

The simple client included in the package, lbcdclient, has been completely rewritten in modern Perl. It supports long options, setting the timeout, setting the port, and returns an error on timeout. It no longer supports querying multiple servers from one command line, since the output just gets confusing and I don't think anyone used this feature. It also now supports IPv6 if IO::Socket::INET6 is available.

Finally, a typo that prevented compilation on Mac OS X has been fixed.

You can get the latest version from the lbcd distribution page.

Categories: Elsewhere

Keith Packard: Present-redirect-lifetimes

Planet Debian - Sat, 28/12/2013 - 22:38
Object Lifetimes under Present Redirection

Present extension redirection passes responsibility for showing application contents from the X server to the compositing manager. This eliminates an extra copy to the composite extension window buffer and also allows the application contents to be shown in the right frame.

(Currently, the copy from application buffer to window buffer is synchronized to the application-specified frame, and then a Damage event is delivered to the compositing manager which constructs the screen image using the window buffer and presents that to the screen at least one frame later, which generally adds another frame of delay for the application.)

The redirection operation itself is simple — just wrap the PresentPixmap request up into an event and send it to the compositing manager. However, the pixmap to be presented is allocated by the application, and hence may disappear at any time. We need to carefully control the lifetime of the pixmap ID and the specific frame contents of the pixmap so that the compositing manager can reliably construct complete frames.

We’ll separately discuss the lifetime of the specific frame contents from that of the pixmap itself. By the “frame contents”, I mean the image, as a set of pixel values in the pixmap, to be presented for a specific frame.

Present Pixmap contents lifetime

After the application is finished constructing a particular frame in a pixmap, it passes the pixmap to the X server with PresentPixmap. With non-redirected Present, the X server is responsible for generating a PresentIdleNotify event once the server is finished using the contents. There are three different cases that the server handles, matching the three different PresentCompleteModes:

  1. Copy. The pixmap contents are not needed after the copy operation has been performed. Hence, the PresentIdleNotify event is generated when the target vblank time has been reached, right after the X server calls CopyArea.

  2. Flip. The pixmap is being used for scanout, and so the X server won’t be done using it until some other scanout buffer is selected. This can happen as a result of window reconfiguration which makes the displayed window no longer full-screen, but the usual case is when the application presents a subsequent frame for display, and the new frame replaces the old. Thus, the PresentIdleNotify event generally occurs when the target vblank time for the subsequent frame has been reached, right after the subsequent frame’s pixmap has been selected for scanout.

  3. Skip. The pixmap contents will never be used, and the X server figures this out when a subsequent frame is delivered with a matching target vblank time. This happens when the subsequent Present operation is queued by the X server.

In the Redirect case, the X server cannot tell when the compositing manager is finished with the pixmap. The same three cases as above apply here, but the results are slightly different:

  1. Composite. The pixmap is being used as a part of the screen image and must be composited with other window pixmaps. In this case, the compositing manager will need to hold onto the pixmap until a subsequent pixmap is provided by the application. Thus, the pixmap will remain needed by the compositing manager until it receives a subsequent PresentRedirectNotify for the same window.

  2. Flip. The compositing manager is free to take the application pixmap and use it directly in a subsequent PresentPixmap operation and let the X server ‘flip’ to it; this provides a simple way of avoiding an extra copy while not needing to fuss around with ‘unredirecting’ windows. In this case, the X server will need the pixmap contents until a new scanout pixmap is provided, and the compositing manager will also need the pixmap in case the contents are needed to help construct a subsequent frame.

  3. Skip. In this case, the compositing manager notices that the window’s pixmap has been replaced before it was ever used.

In case 2, the X server and the compositing manager will need to agree on when the PresentIdleNotify event should be delivered. In the other two cases, the compositing manager itself will be in charge of that.

To let the compositing manager control when the event is delivered, the X server will count the number of PresentPixmap redirection events sent, and the compositing manager will deliver matching PresentIdle requests.
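That event/request pairing amounts to a per-pixmap counter on the server side. A toy Python model of the bookkeeping described above (the class and method names are mine, not part of the protocol):

```python
from collections import Counter

class RedirectTracker:
    """Toy model: the X server counts PresentRedirectNotify events sent
    per pixmap, and the compositing manager must answer each with a
    PresentIdle request before the originating client can receive its
    PresentIdleNotify event."""

    def __init__(self):
        self.outstanding = Counter()

    def redirect_notify(self, pixmap):
        # Server sends PresentRedirectNotify to the compositing manager.
        self.outstanding[pixmap] += 1

    def present_idle(self, pixmap):
        # Compositing manager reports it is done with one use of the
        # pixmap; an unmatched request would be a Match error.
        if self.outstanding[pixmap] == 0:
            raise ValueError("Match error: no outstanding redirect")
        self.outstanding[pixmap] -= 1
        # True once every redirect has been matched, i.e. the client
        # may now be sent PresentIdleNotify.
        return self.outstanding[pixmap] == 0
```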

PresentIdle

┌───
    PresentIdle
      pixmap: PIXMAP
└───
      Errors: Pixmap, Match

Informs the server that the Pixmap passed in a PresentRedirectNotify event is no longer needed by the client. Each PresentRedirectNotify event must be matched by a PresentIdle request for the originating client to receive a PresentIdleNotify event.

PresentRedirect pixmap ID lifetimes

A compositing manager will want to know the lifetime of the pixmaps delivered in PresentRedirectNotify events to clean up whatever local data it has associated with it. For instance, GL compositing managers will construct textures for each pixmap that need to be destroyed when the pixmap disappears.

Some kind of PixmapDestroyNotify event is necessary for this; the alternative is for the compositing manager to constantly query the X server to see if the pixmap IDs it is using are still valid, and even that isn’t reliable as the application may re-use pixmap IDs for a new pixmap.

It seems like this PixmapDestroyNotify event belongs in the XFixes extension—it’s a general mechanism that doesn’t directly relate to Present. XFixes doesn’t currently have any Generic Events associated, but adding that should be fairly straightforward. And then, have the Present extension automatically select for PixmapDestroyNotify events when it delivers the pixmap in a PresentRedirectNotify event so that the client is ensured of receiving the associated PixmapDestroyNotify event.

One remaining question I have is whether this is sufficient, or if the compositing manager needs a stable pixmap ID which lives beyond the life of the application pixmap ID. If so, the solution would be to have the X server allocate an internal ID for the pixmap and pass that to the client somehow; presumably in addition to the original pixmap ID.

XFixesPixmapSelectInput

XFIXESEVENTID { XID }

Defines a unique event delivery target for Present events. Multiple event IDs can be allocated to provide multiple distinct event delivery contexts.

PIXMAPEVENTS { XFixesPixmapDestroyNotify }

┌───
    XFixesPixmapSelectInput
        eventid: XFIXESEVENTID
        pixmap: PIXMAP
        events: SETofPIXMAPEVENTS
└───
    Errors: Pixmap, Value

Changes the set of events to be delivered for the target pixmap. A Value error is sent if ‘events’ contains invalid event selections.

XFixesPixmapDestroyNotify

┌───
    XFixesPixmapDestroyNotify
        type: CARD8                XGE event type (35)
        extension: CARD8           XFixes extension request number
        sequence-number: CARD16
        length: CARD32             0
        evtype: CARD16             XFixes_PixmapDestroyNotify
        eventID: XFIXESEVENTID
        pixmap: PIXMAP
└───

This event is delivered to all clients selecting for it on ‘pixmap’ when the pixmap ID is destroyed by a client. Note that the pixmap may still be referenced by other objects within the X server, as a window background, GC tile or stipple or Picture drawable (among others), but this event serves to notify the selecting client that the ID is no longer associated with the underlying pixmap.
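Tying the lifetime event back to the compositing manager's cleanup problem, the handler could be as simple as the following pseudocode. The texture bookkeeping and helper names are hypothetical; only the event itself comes from this proposal:

```
/* Pseudocode: cleaning up client-side state when a pixmap ID dies.
 * The server may still hold references to the underlying pixmap, but
 * this client must drop its own resources tied to the ID. */
on XFixesPixmapDestroyNotify(event):
    destroy_texture(textures[event.pixmap])  /* hypothetical */
    remove textures[event.pixmap]
```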

Categories: Elsewhere

BryceAdamFisher.com: Considerations for Multisite Drupal

Planet Drupal - Sat, 28/12/2013 - 17:40

At my day job, we've been using the Domain Access module with Drupal 6 for 5 years. Recently, we've decided it's time to rethink our approach to Drupal multisite. In this article, I'll share some of the ideal use cases and pitfalls for the Domain module and some alternatives for you to consider.

Categories: Elsewhere

David Moreno: FOSDEM 2014

Planet Debian - Sat, 28/12/2013 - 14:17

I just booked my train tickets for FOSDEM 2014. Accommodation will be sorted out later.

This year I planned it well in advance to get decent prices for Thalys, unlike last year, when we ended up riding the slow trains that stop in every single town in the Netherlands and Belgium, taking forever.

See you there!

Categories: Elsewhere

Russ Allbery: rra-c-util 5.0

Planet Debian - Sat, 28/12/2013 - 05:26

This package started as my collection of shared C utility code and has turned into the collection of all sorts of glue that I use across multiple packages.

The primary changes in this release are to the network utility layer. In order to add proper IPv6 support, I adapted the network_bind_* functions to take the socket type, which changed all the APIs (hence the major version bump). While I was at it, I also cleaned up some of the error reporting in the various functions to make them easier to use. There's a new network_wait_any function that does the select part of network_accept_any, making it useful for UDP servers.
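The "wait for any readable socket" piece that network_wait_any factors out is plain POSIX underneath. The sketch below shows only that underlying pattern; the function name wait_any and its signature are my own, not the actual rra-c-util API:

```c
#include <stddef.h>
#include <sys/select.h>

/*
 * Block until any of the given sockets is readable and return its file
 * descriptor, or -1 on error.  A minimal sketch of the select-based
 * pattern; the real network_wait_any has different error handling.
 */
int
wait_any(int fds[], size_t count)
{
    fd_set readfds;
    int maxfd = -1;
    size_t i;

    FD_ZERO(&readfds);
    for (i = 0; i < count; i++) {
        FD_SET(fds[i], &readfds);
        if (fds[i] > maxfd)
            maxfd = fds[i];
    }
    if (select(maxfd + 1, &readfds, NULL, NULL, NULL) < 0)
        return -1;
    for (i = 0; i < count; i++)
        if (FD_ISSET(fds[i], &readfds))
            return fds[i];
    return -1;
}
```

The same loop works for a UDP server: pass the bound datagram sockets and call recvfrom on whichever descriptor comes back ready.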

This release also contains a new, generalized TAP add-on for spawning background processes used by test cases. Their standard output and standard error are now captured and merged into the test output stream using the new C TAP Harness support for files containing diagnostic messages. The remctl TAP add-on has been rewritten to use this framework.

Julien ÉLIE also adapted the Autoconf probes for Berkeley DB and zlib used in INN to the framework used for most probes in rra-c-util, and those probes are included in this release.

Finally, the vector utility library's free functions now accept (and do nothing with) NULL, which makes it easier to write cleanup functions.
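The NULL-accepting free is the usual C idiom that lets cleanup paths free unconditionally. A minimal sketch of the idiom follows; the struct fields and function body are simplified stand-ins, not the actual rra-c-util definitions:

```c
#include <stdlib.h>

/* Simplified stand-in for the vector type. */
struct vector {
    size_t count;
    char **strings;
};

/*
 * Free a vector and all of its strings.  Accepts (and does nothing
 * with) NULL, so callers can free without checking first.
 */
void
vector_free(struct vector *v)
{
    size_t i;

    if (v == NULL)
        return;
    for (i = 0; i < v->count; i++)
        free(v->strings[i]);
    free(v->strings);
    free(v);
}
```

With this guard, error-path cleanup can call vector_free on every vector pointer regardless of whether it was ever allocated.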

You can get the latest release from the rra-c-util distribution page.

Categories: Elsewhere

Richard Hartmann: Random rant

Planet Debian - Fri, 27/12/2013 - 23:08

Q: What user space program can reliably lock up a Lenovo X1 Carbon with Intel Core i7-3667U, 8 GB RAM, and an Intel SSDSCMMW240A3L so badly that you can't ssh into it any more and ctrl-alt-f1 does not work? So badly that, after half an hour of waiting, the only thing left is to shut it off hard.

A: Google Chrome with 50+ Flickr tabs open.

Q: Why?

A: I honestly don't know.

Categories: Elsewhere

Richard Hartmann: Release Critical Bug report for Week 52

Planet Debian - Fri, 27/12/2013 - 21:41

I had been pondering an end-of-year bug stat post. Niels Thykier forced my hand, so here goes :)

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1353
    • Affecting Jessie: 476
      That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 410
        Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 52 bugs are tagged 'patch'. Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 28 bugs are marked as done, but still affect unstable. This can happen due to missing builds on some architectures, for example. Help investigate!
        • 330 bugs are neither tagged patch, nor marked done. Help make a first step towards resolution!
      • Affecting Jessie only: 66
        Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 0 bugs are in packages that are unblocked by the release team.
        • 66 bugs are in packages that are not unblocked.

Graphical overview of bug stats thanks to azhag.

Categories: Elsewhere

Tyler Frankenstein: Drupal - Get a View's Export Code Programmatically from CTools

Planet Drupal - Fri, 27/12/2013 - 18:17

With Drupal and Views, we are able to configure a View's settings and import/export them across Drupal sites. This allows us to create backup copies of our View's settings and/or move a View between Drupal sites with relative ease.

Typically this is a manual process by using the Views UI to copy the "export" code string from one site:

http://dev.example.com/admin/structure/views/view/frontpage/export

...and then paste the string into the "import" form on another site.

Categories: Elsewhere

Drupalize.Me: Careful With That Debug Syntax

Planet Drupal - Fri, 27/12/2013 - 14:53

A funny thing happened last week. On Wednesday, we performed our weekly code deployment and released a handful of new features/bug fixes to the site. And then, about an hour later, someone on the team found this:

Notice the extra "asdf fdsa" in there? It's okay if you didn't, because neither did we. How did this happen? Don't you guys have a review process? I would have never let this happen on my project.

Related Topics: debugging, Development
Categories: Elsewhere
