Planet Debian


Dirk Eddelbuettel: RcppArmadillo 0.4.450.1.0

Tue, 23/09/2014 - 05:00

Continuing with his standard pace of approximately one new version per month, Conrad released a new minor release of Armadillo a few days ago. As before, I had created a GitHub-only pre-release which was tested against all eighty-seven (!!) CRAN dependents of our RcppArmadillo package and then uploaded RcppArmadillo 0.4.450.0 to CRAN.

The CRAN maintainers pointed out that under the R development release, a NOTE is now issued concerning the C library's rand() call. This is a pretty new NOTE, but it means using the (sometimes poor quality) rand() generator is now a no-no. Armadillo, being as robustly engineered as it is, offers a new random number generator based on C++11 as well as a fallback generator for those unfortunate enough to live with an older C++98 compiler. (I would like to note here that I find Conrad's continued support for both C++11, offering very useful modern language idioms, and the fallback code for continued deployment and usage by those constrained in their choice of compilers rather exemplary, because contrary to what some people may claim, it is not a matter of one or the other. C++ always was, and continues to be, a multi-paradigm language which can easily be supported across several standards. But I digress...)

In any event, one cannot argue with CRAN about their requirement that packages build with a C++98 compiler. So Conrad and I discussed this over email and came up with a scheme where a user package (such as RcppArmadillo) can provide an alternate generator which Armadillo then deploys. I implemented a first solution, which Conrad then refined in a revised version 4.450.1 of Armadillo. I packaged, and have now uploaded, that version as RcppArmadillo 0.4.450.1.0 to both CRAN and Debian.

Besides the RNG change already discussed, this release brings a few smaller changes from the Armadillo side; these are detailed below in the extract from the NEWS file. On the RcppArmadillo side, we now have support for pkgKitten, which is both very exciting and likely the topic of an upcoming blog post with an example of creating an RcppArmadillo package that purrs. In the process, I overhauled and polished how new packages are created by RcppArmadillo.package.skeleton().

Changes in RcppArmadillo version 0.4.450.1.0 (2014-09-21)
  • Upgraded to Armadillo release Version 4.450.1 (Spring Hill Fort)

    • faster handling of matrix transposes within compound expressions

    • expanded symmatu()/symmatl() to optionally disable taking the complex conjugate of elements

    • expanded sort_index() to handle complex vectors

    • expanded the gmm_diag class with functions to generate random samples

  • A new random-number implementation for Armadillo uses the RNG from R as a fallback (when C++11 is not selected and the C++11-based RNG is therefore unavailable), which avoids using the older C++98-based std::rand

  • The RcppArmadillo.package.skeleton() function was updated to only set an "Imports:" for Rcpp, but not RcppArmadillo which (as a template library) needs only LinkingTo:

  • The RcppArmadillo.package.skeleton() function will now prefer pkgKitten::kitten() over package.skeleton() in order to create a working package which passes R CMD check.

  • The pkgKitten package is now a Suggests:

  • A manual page was added to provide documentation for the functions provided by the skeleton package.

  • A small update was made to the package manual page.

Courtesy of CRANberries, there is also a diffstat report for the most recent release. As always, more detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Gunnar Wolf: One month later: How is the set of Debian keyrings faring?

Mon, 22/09/2014 - 20:13

OK, it's almost one month since we (the keyring-maintainers) gave our talk at DebConf14; how are we faring regarding key transitions since then? You can compare the numbers (the graphs, really) to those in our DC14 presentation.

Since the presentation, we have had two keyring pushes:

First of all, the Non-uploading keyring is all fine: As it was quite recently created, and as it is much smaller than our other keyrings, it has no weak (1024 bit) keys. It briefly had one in 2010-2011, but it's long been replaced.

Second, the Maintainers keyring: In late July we had 222 maintainers (170 with >=2048 bit keys, 52 with weak keys). By the end of August we had 221: 172 and 49 respectively, and by September 18 we had 221: 175 and 46.

As for the Uploading developers, in late July we had 1002 uploading developers (481 with >=2048 bit keys, 521 with weak keys). By the end of August we had 1002: 512 and 490 respectively, and by September 18 we had 999: 531 and 468.

Please note that these numbers do not say directly that six DMs or that 50 uploading DDs moved to stronger keys, as you'd have to factor in new people being added, keys migrating between different keyrings (mostly DM⇒DD), and people retiring from the project; you can get the detailed information looking at the public copy of our Git repository, particularly of its changelog.

And where does that put us?

Of course, I'm very happy to see that the lines in our largest keyring have already crossed: we now have more people with >=2048 bit keys than with weak ones. And a lot of work went into getting this far! But that still means that, in order not to lock a large proportion of Debian Developers and Maintainers out of the project, we still have a great deal of work to do. We would like to keep the replacement slope high (because, remember, on January 1st we will remove all small keys from the keyring).

And yes, we are willing to do the work. But we need you to push us for it: we need you to get a new key created, to gather enough (two!) DD signatures on it, and to request a key replacement via RT.

So, by all means: Do keep us busy!

Attachments: Debian Developers (uploading), 266.66 KB; Debian Developers (non-uploading), 204.17 KB; Debian Maintainers, 296.73 KB

Konstantinos Margaritis: EfikaMX updated wheezy and jessie images available

Mon, 22/09/2014 - 19:38

A while ago, I promised some people in the powerdeveloper.org forum that I would provide bootable armhf images for wheezy but, most importantly, for jessie with an updated kernel. After a delay (I did have the images ready and working, but had to clean them up a bit) I decided to publish them here first.

So, here are the images:

http://freevec.org/files/efikamx-wheezy-armhf-20140921.img.xz (559MB)
http://freevec.org/files/efikamx-jessie-armhf-20140921.img.xz (635MB)


Joachim Breitner: Using my Kobo eBook reader as an external eInk monitor

Sun, 21/09/2014 - 20:15

I have an office with a nice large window, but more often than not I have to close the shades to be able to see something on my screen. Even worse: there were so many nice and sunny days when I would have loved to take my laptop outside and work there, but it (a Thinkpad T430s) is simply not usable in bright sun. I have seen those nice eInk-based eBook readers, which get clearer the brighter the light is. That's what I want for my laptop, and I am willing to sacrifice color, and a bit of usability due to latency, to be able to work in bright daylight!

So while I was in Portland for DebConf14 (where I guess I felt a bit more like tinkering than otherwise) I bought a Kobo Aura HD. I chose this device because it has a resolution similar to my laptop (1440×1080) and I have seen reports from people running their own software on it, including completely separate systems such as Debian or Android.

This week, I was able to play around with it. It was indeed simple to tinker with: You can simply copy a tarball to it which is then extracted over the root file system. There are plenty of instructions online, but I found it easier to take them as inspiration and do it my way – with basic Linux knowledge that’s possible. This way, I extended the system boot script with a hook to a file on the internal SD card, and this file then runs the telnetd daemon that comes with the device’s busybox installation. Then I just have to make the device go online and telnet onto it. From there it is a pretty normal Linux system, albeit without an X server, using the framebuffer directly.

I even found an existing project providing a VNC client implementation for this and other devices, and pretty soon I could see my laptop screen on the Kobo. Black and white worked fine, but colors and greyscales, including all anti-aliased fonts, were quite broken. After some analysis I concluded that it was confusing the bit pattern of the pixels. Luckily kvncclient shares that code with koreader, which worked fine on my device, so I could copy some files and settings from there, et voilà: I now have an eInk monitor for my laptop. As a matter of fact, I am writing this text with my Kobo sitting on top of the folded-back laptop screen!

I did some minor adjustments to my laptop:

  • I changed the screen size to match the Kobo’s resolution. Using xrandr’s --panning option this is possible even though my real screen is only 900 pixels high.
  • I disabled the cursor-blink where possible. In general, screen updates should be avoided, so I hide my taffybar (which has a CPU usage monitor) and text is best written at the very end of the line (and not before a, say, </p>).
  • My terminal windows are now black-on-white.
  • I had to increase my font-size a bit (the kobo has quite a high DPI), and color is not helpful (so :set syntax=off in vim).

All this is still very manual (going online with the Kobo, finding its IP address, logging in via telnet, killing the Kobo's normal main program, starting x11vnc, finding my IP address, starting the VNC client, doing the adjustments mentioned above), so I need to automate it a bit; a sketch of the laptop side follows below. Unfortunately, there is no canonical way to extend the Kobo with your own application: the Kobo developers made their device quite open, but stopped short of actually encouraging extensions, so people have created many weird ways to start programs on the Kobo – dedicated start menus, background programs observing when the regular Kobo app opens a specific file, complete replacements for the system. I am considering simply running an SSH server on the device and driving the whole process from the laptop. I'll keep you up-to-date.
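
For illustration, the laptop side of that automation could look roughly like the following Python sketch. It is purely hypothetical: the device address, the passwordless root login and the name of the Kobo main program ("nickel") are assumptions, not tested facts.

import subprocess
import telnetlib

KOBO_HOST = "192.168.1.50"   # assumed address of the Kobo on the local network

def prepare_kobo():
    # Log in via the telnetd running on the device and stop the regular
    # Kobo application so it no longer repaints the framebuffer.
    tn = telnetlib.Telnet(KOBO_HOST)
    tn.read_until("login: ")
    tn.write("root\n")              # assumes passwordless root login
    tn.write("killall nickel\n")    # 'nickel' is reportedly the stock Kobo UI
    tn.write("exit\n")
    tn.close()

def start_vnc_server():
    # Export the laptop's X display; requires x11vnc installed locally.
    subprocess.Popen(["x11vnc", "-display", ":0", "-forever"])

if __name__ == "__main__":
    prepare_kobo()
    start_vnc_server()
    # Remaining manual step: start the VNC client on the Kobo itself.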

A dream for the future would be to turn the kobo into a USB monitor and simply connect it to any computer, where it then shows up as a new external monitor. I wonder if there is a standard for USB monitors, and if it is simple enough (but I doubt it).

A word about the kobo development scene: It seems to be quite active and healthy, and a number of interesting applications are provided for it. But unfortunately it all happens on a web forum, and they use it not only for discussion, but also as a wiki, a release page, a bug tracker, a feature request list and as a support line – often on one single thread with dozens of posts. This makes it quite hard to find relevant information and decide whether it is still up-to-date. Unfortunately, you cannot really do without it. The PDF viewer that comes with the kobo is barely okish (e.g. no crop functionality), so installing, say, koreader is a must if you read more PDFs than actual ebooks. And then you have to deal with the how-to-start-it problem.

That reminds me: I need to find a decent RSS reader for the kobo, or possibly a good RSS-to-epub converter that I can run automatically. Any suggestions?

PS and related to this project: Thanks to Kathey!


Dariusz Dwornikowski: statistics of RFS bugs and sponsoring process

Sun, 21/09/2014 - 16:21

For some days I have been working on statistics of the sponsoring process in Debian. I find this to be one of the most important things Debian has for attracting and enabling new contributions. It is important to know how this process works, whether we need more sponsors, how effective the sponsoring is, and what the timings connected to it are.

How I did this?

I used the Debbugs SOAP interface to get all bugs that are filed against the sponsorship-requests pseudo-package. SOAP adds a little overhead because it needs to download the complete list of bugs for the sponsorship-requests package and then process them according to the given date ranges. The same information can easily be extracted from the UDD database in the future; that will be faster, because SQL is obviously better at working with date ranges than Python.

The most problematic part was getting the "real done date" of a particular bug, and frankly I spent most of my time writing a rather dirty and complicated script. The script gets the log for a particular bug number and returns its "real done date". I have published a proof of concept in a previous post.

What I measured?

The RFS list is a queue, and for every queue one is interested in the mean time to get processed. In this case I call the metric global MTTGS (mean time to get sponsored). This metric gives overall insight into the performance of the RFS queue. The time to get sponsored (TTGS) for a bug is the number of days that passed between filing an RFS bug and closing it (i.e. the bug was sponsored). The mean time to get sponsored is the sum of the TTGSs of all bugs divided by the number of bugs (in a given period of time). Global MTTGS is the MTTGS calculated over the period from 2012-01-01 until today; a toy version of the calculation is sketched below.
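
In code, the calculation boils down to a few lines; here is a minimal Python sketch with made-up dates (not the actual script):

from datetime import date

def mttgs(bugs):
    # Each bug is an (opened, closed) pair of dates; TTGS is the number
    # of days between filing the RFS bug and closing it.
    ttgs = [(closed - opened).days for opened, closed in bugs]
    return sum(ttgs) / float(len(ttgs))

# Toy data, purely illustrative:
bugs = [(date(2013, 1, 10), date(2013, 2, 20)),
        (date(2013, 3, 1), date(2013, 3, 15)),
        (date(2013, 5, 5), date(2013, 8, 1))]
print mttgs(bugs)   # mean time to get sponsored, in days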

Besides MTTGS I have also measured typical bug related metrics:

  • number of bugs closed in a given day,
  • number of bugs opened in a given day,
  • number of bugs with status open in a given day,
  • number of bugs with status closed in a given day.
Plots and graphs

Below is a plot of global MTTGS vs. time (click for a larger image).

As you can see, the trend is roughly exponential, and the MTTGS tends to settle around 60 days at the end of 2013. This does not mean that your package will nowadays wait 60 days on average to get sponsored. Remember that this is a global MTTGS, so even if the MTTGS of the last month were very low, the global MTTGS would decrease only slightly. It does, however, give a good glance at the performance of the process. Even though more packages are filed for sponsoring now (see the next graphs) than at the beginning of the period, the sponsoring rate is high enough to flatten the global MTTGS, and maybe decrease it over time.

The image below (click for a larger one) shows how many bugs reside in the queue with status open or closed (calculated for each day). For closed we have an almost linear function, so each day roughly the same number of bugs is closed, steadily increasing the pool of bugs with status closed. For bugs with status open, the interesting part begins around May 2012, after the system got saturated or became popular. It can be interpreted as a plot of how many bugs reside in the queue; the important part is that it is stable and does not show a clear increasing trend.

The last plot shows the arrival and departure rates of bugs in the RFS queue, i.e. how many bugs are opened and closed each day. The interesting parts here are the maxima. Let's look at them.

The maximal number of opened bugs (21) occurred on 2012-05-06. As it turns out, it was a batch upload of RFSs for tryton-modules-*:

706953 RFS: tryton-modules-account-stock-anglo-saxon/2.8.0-1
706954 RFS: tryton-modules-purchase-shipment-cost/2.8.0-1
706948 RFS: tryton-modules-production/2.8.0-1
706969 RFS: tryton-modules-account-fr/2.8.0-1
706946 RFS: tryton-modules-project-invoice/2.8.0-1
706950 RFS: tryton-modules-stock-supply-production/2.8.0-1
706942 RFS: tryton-modules-product-attribute/2.8.0-1
706957 RFS: tryton-modules-stock-lot/2.8.0-1
706958 RFS: tryton-modules-carrier-weight/2.8.0-1
706941 RFS: tryton-modules-stock-supply-forecast/2.8.0-1
706955 RFS: tryton-modules-product-measurements/2.8.0-1
706952 RFS: tryton-modules-carrier-percentage/2.8.0-1
706949 RFS: tryton-modules-account-asset/2.8.0-1
706904 RFS: chinese-checkers/0.4-1
706944 RFS: tryton-modules-stock-split/2.8.0-1
706981 RFS: distcc/3.1-6
706945 RFS: tryton-modules-sale-supply/2.8.0-1
706959 RFS: tryton-modules-carrier/2.8.0-1
706951 RFS: tryton-modules-sale-shipment-cost/2.8.0-1
706943 RFS: tryton-modules-account-stock-continental/2.8.0-1
706956 RFS: tryton-modules-sale-supply-drop-shipment/2.8.0-1

The maximum number of closed bugs (18) occurred on 2013-09-24, and as you probably guessed, tryton modules had an impact on that too.

706953 RFS: tryton-modules-account-stock-anglo-saxon/2.8.0-1
706954 RFS: tryton-modules-purchase-shipment-cost/2.8.0-1
706948 RFS: tryton-modules-production/2.8.0-1
706969 RFS: tryton-modules-account-fr/2.8.0-1
706946 RFS: tryton-modules-project-invoice/2.8.0-1
706950 RFS: tryton-modules-stock-supply-production/2.8.0-1
706942 RFS: tryton-modules-product-attribute/2.8.0-1
706958 RFS: tryton-modules-carrier-weight/2.8.0-1
706941 RFS: tryton-modules-stock-supply-forecast/2.8.0-1
706955 RFS: tryton-modules-product-measurements/2.8.0-1
706952 RFS: tryton-modules-carrier-percentage/2.8.0-1
706949 RFS: tryton-modules-account-asset/2.8.0-1
706944 RFS: tryton-modules-stock-split/2.8.0-1
706959 RFS: tryton-modules-carrier/2.8.0-1
723991 RFS: mapserver/6.4.0-2
706951 RFS: tryton-modules-sale-shipment-cost/2.8.0-1
706943 RFS: tryton-modules-account-stock-continental/2.8.0-1
706956 RFS: tryton-modules-sale-supply-drop-shipment/2.8.0-1

The software

Most of the software was written in Python; the graphs were generated in R. After a code cleanup I will publish the complete solution on my GitHub account, free for everybody to use. If you would like to see other statistics, please let me know; I can create them if the data provides sufficient information.


Konstantinos Margaritis: VSX port added to Eigen!

Sun, 21/09/2014 - 15:03

Being the SIMD fanatic that I am, a few years ago I did the PowerPC Altivec and ARM NEON ports for the Eigen linear algebra library, one of the best, most popular, and most ported libraries of its kind.

Recently I thought it would be a good idea to extend both ports to 64-bit, which would also help me with the SIMD book: using VSX in the one case and ARMv8 NEON (or Advanced SIMD, as ARM likes to call it) in the other. ARMv8 hardware is a bit scarce at the moment, so I thought I'd start with VSX. Being in Debian, I have access to a number of porterboxes for several architectures, and luckily one of those was a Power7 (with VSX) running ppc64. So I started the porting, or rather extending the code, to use VSX in the 64-bit doubles case. Unluckily, I could not test anything, because Debian kernels do not have VSX enabled in wheezy, which is what the porterbox is running, and enabling it is a non-option (#758620). So running VSX code would turn out to be quite hard.


Laura Arjona: Happy Software Freedom Day!

Sat, 20/09/2014 - 11:58

Today we celebrate the day of free software (each year, a Saturday around mid-September). More info at softwarefreedomday.org.

There are no public events in Madrid, but I’m going to try to hack and write a bit more this weekend, as my personal celebration.

In this blog post you can find some of my very very recent activities on free software, and my plans for this weekend of celebration!

Debian Children distros aka Derivatives

I had had the translation/update of the page www.debian.org/misc/children-distros pending for a long time. It's a long page, and I was not sure which was better: picking up the badly outdated last translation and reviewing it carefully in order to update it, or starting from scratch. I decided to reuse the last translation (thanks Luis Uribe!), and after dedicating some days of commuting time to it, I finally finished it yesterday evening at home. Now it's in the review queue, and I hope it will be uploaded in 10 days or so.

In the meantime, I have learned a bit about the Debian Derivatives subproject and census, watched the Derivatives Panel at DebConf13, and had a look at bug #723069 about keeping the children-distros page up to date.

So now that I’m liberated about this translation, I’m going to put some time in keeping up to date the original English page (I’m part of the www and publicity team, so I think it makes sense). My goal is to review at least one Debian derivative each two days, and when I finish the list, start again. I can update the wiki myself, and for the www, I’ll send patches against #723069, unless I’m told to do it other way.

BTW, wouldn’t be nice to mark web/wiki pages as “RFH” the same as packages?, so other people can easily decide to put some time on them, and make http://www.debian.org even more awesome! Or make them appear in the how-can-i-help reminders :)  Mmm maybe it’s just a matter of filing a bug and tagging it as “gift”? I think no, because nobody has the package “www.debian.org” installed in their system… I’ll talk with the maintainer about this.

New Member process

I promised myself to try to work a bit more in Debian during the summer and September, and if everything goes well, try to apply to the new member process in October.

I wanted to read all the documentation first, and one challenge is to review/update the translations of the www.debian.org/devel/join pages. This way, both I and the Spanish-speaking community benefit from the effort. Yesterday I translated one of those pending pages, and I hope I can translate/update the rest during the weekend. When I finish that, I'll keep reading the other documentation.

DebConf15

This summer I was invited to join the DebConf15 organization team and pick up tasks in the publicity area. I was very happy to join. I'm not at all sure that I can go to DebConf15 in Heidelberg (Germany); in fact I'm quite sure I will not go, since mid-August is the only opportunity to visit family who live far away. But anyway, there are things that can be done before DebConf15, and I can contribute.

For now, I attended the IRC meeting last Monday, and I'm finishing a short blog post about the DebConf14 talk presenting DebConf15, which will be published on the DebConf15 blog.

Android, F-Droid

I keep trying to spread the word about F-Droid and the free software available for Android. Last week some of my friends updated Kontalk to the 3.0.b1 version (I had updated at the beginning of September), and they liked that images are now sent encrypted, just like the text messages :)

Some friends also liked the 2048 game, since it can be played offline, without ads, and so on.

I decided to spend some time this weekend contributing translations to the Android apps that I use.

A long-pending issue is to try to put some workforce into the F-Droid project itself so that app descriptions are internationalized (the program is fully translatable, but the categories of apps and the descriptions themselves are not). This is a complicated issue: it requires some design decisions and later, of course, the implementation. I cannot do it alone, and I cannot do it in a short time. But today I filed a bug report (#35), so maybe I'll find other people able to help.

Jabber/XMPP and the “RedesLibres” chatroom

For several months now I've been using my Jabber/XMPP account more often to join the chatroom redeslibres@salas.mijabber.es.

There I meet some people that I follow on Pump.io (for example, the people who write the Comunícate Libremente and Lignux blogs) and we talk about pump.io, free software, free services, and other things. I feel very comfortable there; it's nice to have a Spanish-speaking group inside the free software community, and I'm also learning a bit about XMPP (I've tried a lot of desktop and Android clients, just for fun!), free networks, and so on.

So today I want to publicly thank everybody in that chatroom, who welcomed me so well :)

Thank you, free software friends

And, by extension, I want to thank all the people who work and have fun in the free software communities, in the projects where I contribute and in others. They (we) hack to make the world better, and to allow others to join this beautiful challenge that is making machines do what their (final) users want.

Comments?

You can comment on this post in this Pump.io thread.



Francesca Ciceri: Four Ways to Forgiveness

Sat, 20/09/2014 - 10:20

"I have seen a picture," Havzhiva went on.
The Chosen was impassive; he might or might not know the word. "Lines and colors made with earth on earth may hold knowledge in them. All knowledge is local, all truth is partial," Havzhiva said with an easy, colloquial dignity that he knew was an imitation of his mother, the Heir of the Sun, talking to foreign merchants. "No truth can make another truth untrue. All knowledge is a part of the whole knowledge. A true line, a true color. Once you have seen the larger pattern, you cannot go back to seeing the part as the whole."

I've just finished reading "Four Ways to Forgiveness" by U. K. Le Guin.
It deeply resonated with me; it's still there doing its magic in my brain, lingering in the corners of my mind, tickling my view of reality, humming with the beauty of ideas you didn't know were inside you till you see them written on paper.
And then you know they were there all along, you just didn't know how to put them into words.
Le Guin knows how to do it, wonderfully.

I loved the whole book, but the last two stories were eye-openers.
Thanks Enrico for suggesting this one to me, and thanks dkg for having introduced me to Le Guin's books (with another fantastic one: The Left Hand of Darkness).


Dariusz Dwornikowski: getting real "done date" of a bug from Debian BTS

Fri, 19/09/2014 - 09:17

As I wrote in my last post, currently neither the SOAP interface nor the Ultimate Debian Database provides the date when a given bug was closed (its done date). It is quite hard to calculate statistics on a bug tracker when you do not know when a bug was closed (!!).

The done date of a bug can be found in its log. The log itself can be downloaded with the SOAP method get_bug_log, but processing it is quite complicated. The same goes for web scraping of the BTS's web interface. Fortunately, the web interface offers the possibility of downloading a log in mbox format.

Below is a script that extracts the done date of a bug from its log in mbox format. It uses requests to download the mbox and caches the result in ~/.cache/rfs_bugs, which you need to create. It performs several checks:

  1. Check for the existence of a header, e.g. Received: (at 657783-done) by bugs.debian.org; 29 Jan 2012 13:27:42 +0000
  2. Check for header CC: NUMBER-close|done
  3. Check for header TO: NUMBER-close|done
  4. Check for Close: NUMBER in body.

The code is below:

import requests
from datetime import datetime
import mailbox
import re
import os
import tempfile


def get_done_date(bug_num):
    # Return the real done date of a bug, using a local cache in
    # ~/.cache/rfs_bugs/ to avoid re-downloading the mbox.
    CACHE_DIR = os.path.expanduser("~") + "/.cache/rfs_bugs/"

    def get_from_cache():
        if os.path.exists("{}{}".format(CACHE_DIR, bug_num)):
            with open("{}{}".format(CACHE_DIR, bug_num)) as f:
                return datetime.strptime(f.readlines()[0].rstrip(),
                                         "%Y-%m-%d").date()
        else:
            return None

    done_date = get_from_cache()
    if done_date is not None:
        return done_date
    else:
        r = requests.get("https://bugs.debian.org/cgi-bin/bugreport.cgi"
                         "?mbox=yes;bug={};mboxstatus=yes".format(bug_num))
        d = try_header(r.text)
        if d is None:
            d = try_cc(r.text)
        if d is None:
            d = try_body(r.text)
        if d is not None:
            with open("{}{}".format(CACHE_DIR, bug_num), "w") as f:
                f.write("{}".format(d.date()))
        else:
            return None
        return d.date()


def try_body(text):
    # Look for a close/done marker in the message bodies and take the
    # date from the corresponding Received: header.
    reg = "\(at\s.+\)\s+by\sbugs\.debian\.org;\s(\d{1,2}\s\w\w\w\s\d\d\d\d)"
    handle, name = tempfile.mkstemp()
    with open(name, "w") as f:
        f.write(text.encode('latin-1'))
    mbox = mailbox.mbox(name)
    for i in mbox.items():
        if i[1].is_multipart():
            for m in i[1].get_payload():
                if "close" in str(m) or "done" in str(m):
                    try:
                        result = re.search(reg, i[1]['Received'])
                        return datetime.strptime(result.group(1), "%d %b %Y")
                    except:
                        return None
        else:
            if "close" in i[1].get_payload() or "done" in i[1].get_payload():
                try:
                    result = re.search(reg, i[1]['Received'])
                    return datetime.strptime(result.group(1), "%d %b %Y")
                except:
                    return None
    return None


def try_header(text):
    # Look for a "Received: (at NNNNNN-close/done) by ..." header.
    reg = "Received:\s\(at\s\d\d\d\d\d\d-(close|done)\)\s+by.+"
    try:
        result = re.search(reg, text)
        line = result.group(0)
        reg2 = "\d{1,2}\s\w\w\w\s\d\d\d\d"
        result = re.search(reg2, line)
        d = datetime.strptime(result.group(0), "%d %b %Y")
        return d
    except:
        return None


def try_cc(text):
    # Look for a CC: or To: header addressed to NNNNNN-done.
    reg = "\(at\s.+\)\s+by\sbugs\.debian\.org;\s(\d{1,2}\s\w\w\w\s\d\d\d\d)"
    handle, name = tempfile.mkstemp()
    with open(name, "w") as f:
        f.write(text.encode('latin-1'))
    mbox = mailbox.mbox(name)
    for i in mbox.items():
        if ('CC' in i[1] and "done" in i[1]['CC']) or \
           ('To' in i[1] and "done" in i[1]['To']):
            try:
                result = re.search(reg, i[1]['Received'])
                return datetime.strptime(result.group(1), "%d %b %Y")
            except:
                return None


if __name__ == "__main__":
    print get_done_date(752210)

PS: I hope the script will not be needed in the near future, as Don Armstrong plans a new BTS database; a DebConf14 video is here.


Daniel Pocock: reSIProcate migration from SVN to Git completed

Fri, 19/09/2014 - 08:47

This week, the reSIProcate project completed the move from SVN to Git.

With many people using the SIP stack in both open source and commercial projects, the migration was carefully planned and tested over an extended period of time. Hopefully some of the experience from this migration can help other projects too.

Previous SVN committers were tracked down using my script for matching emails to Github accounts. This also allowed us to see their recent commits on other projects and see how they want their name and email address represented when their previous commits in SVN were mapped to Git commits.

For about a year, the sync2git script had been run hourly from cron to maintain an official mirror of the project in Github. This allowed people to test it and it also allowed us to start using some Github features like travis-CI.org before officially moving to Git.

At the cut-over, the SVN directories were made read-only, sync2git was run one last time and then people were advised they could commit in Git.

Documentation has also been created to help people get started quickly sharing patches as Github pull requests if they haven't used this facility before.


Paul Tagliamonte: Docker PostgreSQL Foreign Data Wrapper

Fri, 19/09/2014 - 03:49

For the tl;dr: Docker FDW is a thing. Star it, hack it, try it out. File bugs, be happy. If you want to see what it's like to read, there's some example SQL down below.

The question is first, what the heck is a PostgreSQL Foreign Data Wrapper? PostgreSQL Foreign Data Wrappers are plugins that allow C libraries to provide an adaptor for PostgreSQL to talk to an external database.

Some folks have used this to wrap stuff like MongoDB, which I always found to be hilarious (and an epic hack).

Enter Multicorn

During my time at PyGotham, I saw a talk from Wes Chow about something called Multicorn. He was showing off some really neat plugins, such as the git revision history of CPython, and parsed logfiles from some stuff over at Chartbeat. This basically blew my mind.

If you're interested in some of these, there are a bunch in the Multicorn VCS repo, such as the gitfdw example.

All throughout the talk I was coming up with all sorts of things that I wanted to do -- this whole library is basically exactly what I've been dreaming about for years. I've always wanted to provide a SQL-like interface for querying API data and joining data across APIs using common crosswalks, such as using Capitol Words to query for Legislators, then using the bioguide ids to JOIN against the congress api to get their Twitter account names.

My first shot was to Multicorn the new Open Civic Data API I was working on; I chuckled and put it aside as a really awesome hack.

Enter Docker

It wasn't until tianon connected the dots for me and suggested a Docker FDW that I got really excited. Cue a few hours of hacking, and I'm proud to say -- here's Docker FDW.

Currently it only implements reading from the API, but extending this to allow for SQL DELETE operations isn't out of the question, and is likely to be implemented soon. This lets us ask all sorts of really interesting questions of the API, and might even help folks writing webapps avoid adding too much Docker-aware logic.
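
For a flavour of what such a wrapper involves, here is a simplified, purely illustrative sketch of a read-only Multicorn FDW (this is not the actual Docker FDW source; the docker-py calls and the column handling are deliberately minimal assumptions):

from multicorn import ForeignDataWrapper
from docker import Client  # docker-py


class ContainerFdw(ForeignDataWrapper):
    def __init__(self, options, columns):
        super(ContainerFdw, self).__init__(options, columns)
        # 'host' comes from the options(...) clause of the foreign table.
        self.client = Client(base_url=options.get("host"))
        self.columns = columns

    def execute(self, quals, columns):
        # Called once per query; yield one dict per row. Applying the
        # quals here is only an optimization - PostgreSQL re-checks them.
        for container in self.client.containers(all=True):
            yield {
                "id": container["Id"],
                "image": container["Image"],
                "names": container["Names"],
                "command": container["Command"].split(),
            }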

Setting it up

The only stumbling block you might find (at least on Debian and Ubuntu) is that you'll need a Multicorn `.deb`. It's currently undergoing an official Debianization from the Postgres team, but in the meantime I put the source and binary up on my people.debian.org. Feel free to use that while the Debian PostgreSQL team prepares the upload to unstable.

I'm going to assume you have a working Multicorn, PostgreSQL and Docker setup (including adding the postgres user to the docker group).

So, now let's pop open a psql session. Create a database (I called mine dockerfdw, but it can be anything), and let's create some tables.

Before we create the tables, we need to let PostgreSQL know where our objects are. This takes a name for the server, and the Python importable path to our FDW.

CREATE SERVER docker_containers FOREIGN DATA WRAPPER multicorn options (
    wrapper 'dockerfdw.wrappers.containers.ContainerFdw');

CREATE SERVER docker_image FOREIGN DATA WRAPPER multicorn options (
    wrapper 'dockerfdw.wrappers.images.ImageFdw');

Now that we have the server in place, we can tell PostgreSQL to create a table backed by the FDW by creating a foreign table. I won't go too much into the syntax here, but you might also note that we pass in some options - these are passed to the constructor of the FDW, letting us set stuff like the Docker host.

CREATE foreign table docker_containers (
    "id"         TEXT,
    "image"      TEXT,
    "name"       TEXT,
    "names"      TEXT[],
    "privileged" BOOLEAN,
    "ip"         TEXT,
    "bridge"     TEXT,
    "running"    BOOLEAN,
    "pid"        INT,
    "exit_code"  INT,
    "command"    TEXT[]
) server docker_containers options (
    host 'unix:///run/docker.sock'
);

CREATE foreign table docker_images (
    "id"           TEXT,
    "architecture" TEXT,
    "author"       TEXT,
    "comment"      TEXT,
    "parent"       TEXT,
    "tags"         TEXT[]
) server docker_image options (
    host 'unix:///run/docker.sock'
);

And, now that we have tables in place, we can try to learn something about the Docker containers. Let's start with something fun - a join from containers to images, showing all image tag names, the container names and the ip of the container (if it has one!).

SELECT docker_containers.ip, docker_containers.names, docker_images.tags
  FROM docker_containers
  RIGHT JOIN docker_images
  ON docker_containers.image=docker_images.id;

     ip      |            names            |                  tags
-------------+-----------------------------+-----------------------------------------
             |                             | {ruby:latest}
             |                             | {paultag/vcs-mirror:latest}
             | {/de-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/ny-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/ar-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
 172.17.0.47 | {/ms-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
 172.17.0.46 | {/nc-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/ia-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/az-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/oh-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/va-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
 172.17.0.41 | {/wa-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/jovial_poincare}          | {<none>:<none>}
             | {/jolly_goldstine}          | {<none>:<none>}
             | {/cranky_torvalds}          | {<none>:<none>}
             | {/backstabbing_wilson}      | {<none>:<none>}
             | {/desperate_hoover}         | {<none>:<none>}
             | {/backstabbing_ardinghelli} | {<none>:<none>}
             | {/cocky_feynman}            | {<none>:<none>}
             |                             | {paultag/postgres:latest}
             |                             | {debian:testing}
             |                             | {paultag/crank:latest}
             |                             | {<none>:<none>}
             |                             | {<none>:<none>}
             | {/stupefied_fermat}         | {hackerschool/doorbot:latest}
             | {/focused_euclid}           | {debian:unstable}
             | {/focused_babbage}          | {debian:unstable}
             | {/clever_torvalds}          | {debian:unstable}
             | {/stoic_tesla}              | {debian:unstable}
             | {/evil_torvalds}            | {debian:unstable}
             | {/foo}                      | {debian:unstable}
(31 rows)

Success! This is just a taste of what's to come, so please feel free to hack on Docker FDW, tweet me @paultag, file bugs / feature requests. It's currently a bit of a hack, and it's something that I think has long-term potential after some work goes into making sure that this is a rock solid interface to the Docker API.


Jaldhar Vyas: Scotland: Vote A DINNAE KEN

Fri, 19/09/2014 - 01:39

From the crack journalists at CNN.

Interesting fact: anyone who wore a kilt at debconf is allowed to vote in the referendum.


Jonathan McDowell: Automatic inline signing for mutt with RT

Thu, 18/09/2014 - 12:00

I spend a surprising amount of my time as part of keyring-maint telling people their requests are badly formed and asking them to fix them up so I can actually process them. The one that's hardest to fault anyone on is that we require requests to be inline PGP signed (i.e. the same sort of output as you get with "gpg --clearsign"). That's because RT does various pieces of unpacking[0] of MIME messages that mean PGP/MIME signatures that have passed through it are no longer verifiable. Daniel has pointed out that inline PGP is a bad idea and got as far as filing a request that RT handle PGP/MIME correctly (you need a login for that, but there's a generic read-only one that's easy to figure out), but until that happens the requirement stands when dealing with Debian's RT instance. So today I finally added the following lines to my .muttrc rather than having to remember to switch Mutt to inline signing for this one special case:

send-hook . "unset pgp_autoinline; unset pgp_autosign"
send-hook rt.debian.org "set pgp_autosign; set pgp_autoinline"

i.e. by default turn off auto inlined PGP signatures, but when emailing anything at rt.debian.org turn them on.

(Most of the other things I tell people to fix are covered by the replacing keys page; I advise anyone requesting a key replacement to read that page. There's even a helpful example request template at the bottom.)

[0] RT sticks a header on the plain text portion of the mail, rather than adding a new plain text part for the header if there are multiple parts (this is something Mailman handles better). It will also re-encode received mail into UTF-8 which I can understand, but Mutt will by default try to find an 8 bit encoding that can handle the mail, because that's more efficient, which tends to mean it picks latin1.


Dariusz Dwornikowski: RFS health in Debian

Thu, 18/09/2014 - 10:50

I am working on a small project to create WNPP-like statistics for open RFS bugs. I think this could improve the effectiveness of sponsoring new packages a little by giving insight into bugs that are on their way to being starved (i.e. never sponsored, or rotting in the queue).

The script attached to this post is written in Python and uses the Debbugs SOAP interface to get the currently open RFS bugs and calculate their dust and age.

The dust factor is calculated as the absolute value of the difference, in days, between the bug's last activity (log_modified) and today, i.e. how long the bug has been lying untouched.

Later I would like to create full-blown stats for the RFS queue, taking into account its whole history (i.e. 2012-01-01 until now), to check its health and calculate the MTTGS (mean time to get sponsored).

The list looks more or less like this:

Age  Dust Number  Title
37   0    757966  RFS: lutris/0.3.5-1 [ITP]
1    0    762015  RFS: s3fs-fuse/1.78-1 [ITP #601789] -- FUSE-based file system backed by Amazon S3
81   0    753110  RFS: mrrescue/1.02c-1 [ITP]
456  0    712787  RFS: distkeys/1.0-1 [ITP] -- distribute SSH keys
120  1    748878  RFS: mwc/1.7.2-1 [ITP] -- Powerful website-tracking tool
1    1    762012  RFS: fadecut/0.1.4-1
3    1    761687  RFS: abraca/0.8.0+dfsg-1 -- Simple and powerful graphical client for XMMS2
35   2    758163  RFS: kcm-ufw/0.4.3-1 ITP
3    2    761636  RFS: raceintospace/1.1+dfsg1-1 [ITP]
....
....

The script rfs_health.py can be found below; it uses SOAPpy (which is Python 2 only, unfortunately).

#!/usr/bin/python
import SOAPpy
import time
from datetime import date, timedelta, datetime

url = 'http://bugs.debian.org/cgi-bin/soap.cgi'
namespace = 'Debbugs/SOAP'
server = SOAPpy.SOAPProxy(url, namespace)


class RFS(object):
    def __init__(self, obj):
        self._obj = obj
        self._last_modified = date.fromtimestamp(obj.log_modified)
        self._date = date.fromtimestamp(obj.date)
        if self._obj.pending != 'done':
            self._pending = "pending"
            self._dust = abs(date.today() - self._last_modified).days
        else:
            self._pending = "done"
            self._dust = abs(self._date - self._last_modified).days
        today = date.today()
        self._age = abs(today - self._date).days

    @property
    def status(self):
        return self._pending

    @property
    def date(self):
        return self._date

    @property
    def last_modified(self):
        return self._last_modified

    @property
    def subject(self):
        return self._obj.subject

    @property
    def bug_number(self):
        return self._obj.bug_num

    @property
    def age(self):
        return self._age

    @property
    def dust(self):
        return self._dust

    def __str__(self):
        return "{} subject: {} age:{} dust:{}".format(
            self._obj.bug_num, self._obj.subject, self._age, self._dust)


if __name__ == "__main__":
    bugi = server.get_bugs("package", "sponsorship-requests",
                           "status", "open")
    buglist = [RFS(b.value) for b in server.get_status(bugi).item]
    buglist_sorted_by_dust = sorted(buglist, key=lambda x: x.dust,
                                    reverse=False)
    print("Age Dust Number Title")
    for i in buglist_sorted_by_dust:
        print("{:<4} {:<4} {:<7} {}".format(i.age, i.dust,
                                            i.bug_number, i.subject))

Jaldhar Vyas: Scotland: Vote NO

Thu, 18/09/2014 - 06:21
[ASCII-art map of Scotland, with an arrow pointing at Perth]

If you don't, the UK will have to rename itself the K. And that's just silly.

Also vote yes on whether Alex Trebek should keep his mustache.


Steve Kemp: If this goes well I have a new blog engine

Wed, 17/09/2014 - 19:23

Assuming this post shows up, I'll have successfully migrated from Chronicle to a temporary replacement.

Chronicle is awesome, and despite a lack of activity recently it is not dead. (No activity because it continued to do everything I needed for my blog.)

Unfortunately, though, there is a problem with chronicle: it suffers from a bit of a performance problem, which has gradually become more and more vexing as the number of entries I have has grown.

When chronicle runs:

  • It reads each post into a complex data-structure.
  • Then it walks this multiple times.
  • Finally it outputs a whole bunch of posts.

In the general case you rebuild a blog because you've made an entry or received a new comment. There is some code which tries to use memcached for caching, but in general chronicle just isn't fast, and it is certainly memory-bound if you have a couple of thousand entries.

Currently my test data-set contains 2000 entries and to rebuild that from a clean start takes around 4 minutes, which is pretty horrific.

So what is the alternative? What if you could parse each post once, add it to an SQLite database, and then use that for writing your output pages? Instead of the complex data-structure in RAM and the need to parse a zillion files, you'd have a standard/simple SQL structure you could use to build a tag-cloud, an archive, etc. If you store the contents of the parsed blog along with the mtime of the source file, you can update it if the entry is changed in the future, as I sometimes make typos which I only spot once I've run make steve on my blog sources. (A sketch of the idea follows below.)
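
To make the idea concrete, here is a rough sketch in Python (purely illustrative, not the actual code; the schema and the parse() helper are made up):

import os
import sqlite3

db = sqlite3.connect("blog.db")
db.execute("""CREATE TABLE IF NOT EXISTS posts (
                 path  TEXT PRIMARY KEY,
                 mtime INTEGER,
                 title TEXT,
                 body  TEXT)""")

def update_post(path, parse):
    # Re-parse the source file only if it changed since the last build.
    mtime = int(os.path.getmtime(path))
    row = db.execute("SELECT mtime FROM posts WHERE path = ?",
                     (path,)).fetchone()
    if row is not None and row[0] == mtime:
        return  # unchanged; reuse the cached copy
    title, body = parse(path)   # parse() stands in for the real parser
    db.execute("INSERT OR REPLACE INTO posts VALUES (?, ?, ?, ?)",
               (path, mtime, title, body))
    db.commit()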

Not surprisingly the newer code is significantly faster if you have 2000+ posts. If you've imported the posts into SQLite the most recent entries are updated in 3 seconds. If you're starting cold, parsing each entry, inserting it into SQLite, and then generating the blog from scratch the build time is still less than 10 seconds.

The downside is that I've removed features, obviously nothing that I use myself. Most notably the calendar view is gone, as is the ability to use date-based URLs. Less seriously there is only a single theme, which is what is used upon this site.

In conclusion, last night I wrote something which is a stepping stone between the current chronicle and chronicle2, which will appear in due course.

PS. This entry was written in markdown, just because I wanted to be sure it worked.


NOKUBI Takatsugu: Met with a debian developer from Germany

Wed, 17/09/2014 - 10:22

Last weekend, I (knok), Hideki (henrich) and Yutaka (gniibe) met with John Paul Adrian Glaubitz (glaubitz).

In the past, I had met another German developer, Jens Schmalzing (jensen), in Japan. He was a good guy, but unfortunately he passed away in 2005.

I have an old OpenPGP key with his signature on it. It is a record of his activity, but the key is weak nowadays (1024D), so I have stopped using it, though I haven't issued a revocation.

Anyway, glaubitz is also a good guy, and he loves old videogame consoles. gniibe gave him five DreamCast consoles, and I brought him to SUPER POTATO, an old videogame shop. He bought some software for the Virtual Boy.

DebConf 2015 will be held in Germany; I want to go if I can.


Matthew Garrett: ACPI, kernels and contracts with firmware

Wed, 17/09/2014 - 00:51
ACPI is a complicated specification - the latest version is 980 pages long. But that's because it's trying to define something complicated: an entire interface for abstracting away hardware details and making it easier for an unmodified OS to boot diverse platforms.

Inevitably, though, it can't define the full behaviour of an ACPI system. It doesn't explicitly state what should happen if you violate the spec, for instance. Obviously, in a just and fair world, no systems would violate the spec. But in the grim meathook future that we actually inhabit, systems do. We lack the technology to go back in time and retroactively prevent this, and so we're forced to deal with making these systems work.

This ends up being a pain in the neck in the x86 world, but it could be much worse. Way back in 2008 I wrote something about why the Linux kernel reports itself to firmware as "Windows" but refuses to identify itself as Linux. The short version is that "Linux" doesn't actually identify the behaviour of the kernel in a meaningful way. "Linux" doesn't tell you whether the kernel can deal with buffers being passed when the spec says it should be a package. "Linux" doesn't tell you whether the OS knows how to deal with an HPET. "Linux" doesn't tell you whether the OS can reinitialise graphics hardware.

Back then I was writing from the perspective of the firmware changing its behaviour in response to the OS, but it turns out that it's also relevant from the perspective of the OS changing its behaviour in response to the firmware. Windows 8 handles backlights differently to older versions. Firmware that's intended to support Windows 8 may expect this behaviour. If the OS tells the firmware that it's compatible with Windows 8, the OS has to behave compatibly with Windows 8.

In essence, if the firmware asks for Windows 8 support and the OS says yes, the OS is forming a contract with the firmware that it will behave in a specific way. If Windows 8 allows certain spec violations, the OS must permit those violations. If Windows 8 makes certain ACPI calls in a certain order, the OS must make those calls in the same order. Any firmware bug that is triggered by the OS not behaving identically to Windows 8 must be dealt with by modifying the OS to behave like Windows 8.

This sounds horrifying, but it's actually important. The existence of well-defined[1] OS behaviours means that the industry has something to target. Vendors test their hardware against Windows, and because Windows has consistent behaviour within a version[2] the vendors know that their machines won't suddenly stop working after an update. Linux benefits from this because we know that we can make hardware work as long as we're compatible with the Windows behaviour.

That's fine for x86. But remember when I said it could be worse? What if there were a platform that Microsoft weren't targeting? A platform where Linux was the dominant OS? A platform where vendors all test their hardware against Linux and expect it to have a consistent ACPI implementation?

Our even grimmer meathook future welcomes ARM to the ACPI world.

Software development is hard, and firmware development is software development with worse compilers. Firmware is inevitably going to rely on undefined behaviour. It's going to make assumptions about ordering. It's going to mishandle some cases. And it's the operating system's job to handle that. On x86 we know that systems are tested against Windows, and so we simply implement that behaviour. On ARM, we don't have that convenient reference. We are the reference. And that means that systems will end up accidentally depending on Linux-specific behaviour. Which means that if we ever change that behaviour, those systems will break.

So far we've resisted calls for Linux to provide a contract to the firmware in the way that Windows does, simply because there's been no need to - we can just implement the same contract as Windows. How are we going to manage this on ARM? The worst case scenario is that a system is tested against, say, Linux 3.19 and works fine. We make a change in 3.21 that breaks this system, but nobody notices at the time. Another system is tested against 3.21 and works fine. A few months later somebody finally notices that 3.21 broke their system and the change gets reverted, but oh no! Reverting it breaks the other system. What do we do now? The systems aren't telling us which behaviour they expect, so we're left with the prospect of adding machine-specific quirks. This isn't scalable.

Supporting ACPI on ARM means developing a sense of discipline around ACPI development that we simply haven't had so far. If we want to avoid breaking systems we have two options:

1) Commit to never modifying the ACPI behaviour of Linux.
2) Expose an interface that indicates which well-defined ACPI behaviour a specific kernel implements, and bump it whenever an incompatible change is made. Backward compatibility paths will be required if firmware only supports an older interface.

(1) is unlikely to be practical, but (2) isn't a great deal easier. Somebody is going to need to take responsibility for tracking ACPI behaviour and incrementing the exported interface whenever it changes, and we need to know who that's going to be before any of these systems start shipping. The alternative is a sea of ARM devices that only run specific kernel versions, which is exactly the scenario that ACPI was supposed to be fixing.

[1] Defined by implementation, not defined by specification
[2] Windows may change behaviour between versions, but always adds a new _OSI string when it does so. It can then modify its behaviour depending on whether the firmware knows about later versions of Windows.


Steinar H. Gunderson: The virtues of std::unique_ptr

Wed, 17/09/2014 - 00:30

Among all the changes in C++11, there's one that I don't feel has received enough attention: std::unique_ptr (or just unique_ptr; I'll drop the std:: from here on). The motivation is simple; assume a function like this:

Foo *func() {
    Foo *foo = new Foo;
    if (something_complicated()) {
        // Oops, something wrong happened
        return NULL;
    }
    foo->baz();
    return foo;
}

The memory leak is obvious; if something_complicated() returns true, we leak foo. The classical fix is:

Foo *func() {
    Foo *foo = new Foo;
    if (something_complicated()) {
        delete foo;
        return NULL;
    }
    foo->baz();
    return foo;
}

But this is cumbersome and easy to get wrong. Tools like valgrind have made this a lot easier to detect, but that's a poor substitute; what we want is a coding style where it's deliberately hard to make mistakes. Enter unique_ptr:

Foo *func() {
    unique_ptr<Foo> foo(new Foo);
    if (something_complicated()) {
        // unique_ptr<Foo> destructor deletes foo for us!
        return NULL;
    }
    foo->baz();
    return foo.release();
}

So we have introduced a notion of ownership; the function (or, more precisely, scope) now owns the Foo object. The only way we can leave the function and not have it destroyed is through an explicit call to release() (which returns the raw pointer and clears the unique_ptr). We have smart pointer semantics, so we can use -> just as if we had a regular pointer. In any case, the runtime overhead over a regular pointer is exactly zero.

Ownership does, of course, extend just fine to classes:

class Bar {
public:
    Bar() : foo(new Foo) {}

private:
    unique_ptr<Foo> foo;
};

In this case, the Bar object owns the Foo object, and will destroy it when it goes out of scope without having to do a manual delete in the destructor, operator= and so on; not to mention that it will make your object non-copy-constructible, so you won't get that wrong by mistake. (In this case, you could do the same just by “Foo foo;” instead of using unique_ptr, of course, modulo the copy constructor behavior and heap behavior.)

So far, we could do all of this in C++03. But C++11 includes a very helpful extra piece of the puzzle, namely move semantics. These allow us to transfer the ownership safely:

class Bar {
public:
    Bar(unique_ptr<Foo> arg_foo) : foo(move(arg_foo)) {}

private:
    unique_ptr<Foo> foo;
};

void func() {
    unique_ptr<Foo> foo(new Foo);

    // Do something with foo.

    Bar bar(move(foo));

    // ...
}

Below the Bar constructor line, foo is empty, and bar owns the Foo object! And at no point was the object without an owner; if there's no more code in the function, bar will immediately be destroyed, and the Foo object with it (since it has ownership). It also deals just fine with exception safety.

If you program with unique_ptr, it is genuinely very hard to get memory leaks. And it's much better than Java-style garbage collection; you don't get the RAM overhead GC needs, your objects are destroyed at predictable times, and destructors are run, so you can get reliable behavior for things like file handles, sockets and the like, without having to resort to manual cleanup in a finally block. (In a sense, it's like a refcount that can only ever be 0 or 1.)

It sounds so innocuous on paper, but all great ideas are simple. So, go forth and unique_ptr!


Steve Kemp: Applications updating & phoning home

Tue, 16/09/2014 - 21:42

Personally, I believe that any application packaged for Debian should not phone home, attempt to download plugins over HTTP at run-time, or update itself.

On that basis I've filed #761828.

As a project we have guidelines for what constitutes a "serious" bug, which generally boil down to a package containing a security issue, causing data loss, or being unusable.

I'd like to propose that these kinds of tracking "things" are equally bad. If consensus could be reached, that would be a good thing for the freedom of our users.

(Ooops I slipped into "us", "our user", I'm just an outsider looking in. Mostly.)
