Planet Debian

Planet Debian - http://planet.debian.org/

Ian Wienand: Finding out if you're a Rackspace instance

Mon, 11/08/2014 - 01:30

Different hosting providers do things slightly differently, so it's sometimes handy to be able to figure out where you are. Rackspace is based on Xen, and their provided images should have the xenstore-ls command available. Running xenstore-ls vm-data will give you handy provider and even region fields to let you know where you are.

function is_rackspace {
    if [ ! -f /usr/bin/xenstore-ls ]; then
        return 1
    fi
    /usr/bin/xenstore-ls vm-data | grep -q "Rackspace"
}

if is_rackspace; then
    echo "I am on Rackspace"
fi
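If you care about the region rather than just detecting the provider, a similar check works; this is only a sketch based on the fields mentioned above, and the exact key names may vary between images:

/usr/bin/xenstore-ls vm-data | grep -i region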

Other reading about how this works:

Categories: Elsewhere

Simon Josefsson: Wifi on S3 with Replicant

Sun, 10/08/2014 - 20:02

I’m using Replicant on my main phone. As I’ve written before, I didn’t get Wifi to work. The other day leth in #replicant pointed me towards a CyanogenMod discussion about a similar issue. The fix does indeed work, and allowed me to connect to wifi networks and to set up my phone for Internet sharing. Digging deeper, I found a CM Jira issue about it, and ultimately a code commit. It seems the issue is that more recent S3s come with a Murata Wifi chipset that uses MAC addresses not known back in the Android 4.2 (CM-10.1.3 and Replicant-4.2) days. Pulling in the latest fixes for macloader.cpp solves this problem for me, although I still need to load the non-free firmware images that I get from CM-10.1.3. I’ve created a pull request fixing macloader.cpp for Replicant 4.2 if someone else is curious about the details. You have to rebuild your OS with the patch for things to work (if you don’t want to, the workaround using /data/.cid.info works fine), and install some firmware blobs as below.

adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_apsta.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_apsta.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_mfg.bin_b0 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_mfg.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_mfg.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_p2p.bin_b0 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_p2p.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_p2p.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_sta.bin_b0 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_sta.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_sta.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt_murata /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt_murata_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt_semcosh /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt_murata /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt_murata_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt_semcosh /system/vendor/firmware/

Categories: Elsewhere

Cyril Brulebois: Why is my package blocked?

Sun, 10/08/2014 - 19:45

A bit of history: A while ago udeb-producing packages were getting frozen on a regular basis, when a d-i release was about to be cut. While I wasn’t looking at the time, I can easily understand the reasons behind that: d-i is built upon many components, it takes some time to make sure it’s basically in shape for a release, and it’s very annoying when a regression sneaks in right before the installation images get built.

I took over d-i release maintenance in May 2012 and only a few uploads happened before the wheezy freeze. I was only discovering the job at the time, and I basically released whatever was in testing then. The freeze began right after that (end of June), so I started double checking things affecting d-i (in addition to or instead of the review performed by other release team members), and unblocking packages when changes seemed safe, or once they were tested.

A few uploads happened after the wheezy release and there’s already been a Jessie Alpha 1 release. I was about to release Jessie Beta 1 after a fair bit of testing and a debian-installer upload; the only remaining bits were building installation images (hello Steve) and, of course, communication (mail announcement and website update).

Unfortunately a new upstream release reached testing in the meanwhile, breaking the installer in several ways. I’ll give details below, of course not because I want to point fingers at the maintainer, but to illustrate the ramifications that a single package migrating to testing can induce.

  • parted 3.2-1 was uploaded on 2014-07-30 and migrated on 2014-08-05.

  • parted 3.2-2 fixed a regression reported in Ubuntu only (LP#1352252) which I also hit with images built locally after that migration.

  • I then built some images locally using fixed parted packages, but discovered that auto-lvm was still broken, which I reported in #757417.

  • After some investigation Colin confirmed some behavioral changes in this new parted release, which imply the need for an update of several other partman-* components: #757661, #757662, #757663, #757664, #757665, #757666.

  • Thankfully fixes have been added for all of those (except partman-zfs at the time of writing), but more testing is needed before possibly urgenting those packages so that they get into testing as soon as possible.

Since I’d like to avoid such an experience in the future, I’ll probably reintroduce the old method and freeze all udeb-producing packages during the next d-i releases.

So now you know why it might happen. Your next question might be: “What should you do when your package gets caught in that net?” In that case, please get in touch with both debian-release@ and debian-boot@, asking for an unblock. I’ll then review your particular package, and either let it migrate to testing, or delay it until after the release.

Categories: Elsewhere

Russ Allbery: Orphaning some software packages

Sun, 10/08/2014 - 06:23

As mentioned in my previous message, I'm orphaning various software packages that I was maintaining as part of my job at Stanford, or that for some other reason I'm no longer using directly. My goal is to free up some time and space for new work projects at my new employer, for other hobbies, and to take better care of other projects that I'm not orphaning.

The following software packages are now orphaned, and marked as such on their web pages:

I'm also stepping down from Debian package maintenance for the OpenAFS and Shibboleth packages, and have already notified the relevant communities. For the Debian packages, and for the above software packages, I will continue to provide security support until someone else can take them over.

WebAuth is going to be in a state of transition as noted on its page:

My successor at Stanford will be continuing maintenance and development, but that person hasn't been hired yet, and it will take some time for them to ramp up and start making new releases (although there may be at least one interim release with work that I'm finishing now). It's therefore not strictly orphaned, but it's noted that way on my software pages until someone else at Stanford picks it up.

Development of the other packages that I maintain should continue as normal, with a small handful of exceptions. The following packages are currently in limbo, since I'm not sure if I'll have continued use for them:

I'm not very happy with the current design of either kadmin-remctl or wallet, so if I do continue to maintain them (and have time to work on them), I am likely to redesign them considerably.

For all of my packages, I've been adding clones of the repository to GitHub as an additional option besides my personal Git repository server. I'm of two minds about using (and locking myself into) more of the GitHub infrastructure, but repository copies on GitHub felt like they might be useful for anyone who wanted to fork or take over maintenance. I will be adding links to the GitHub repositories to the software pages for things that are in Git.

If you want to take over any of the orphaned software packages, feel free. When you're ready for the current software pages to redirect to their new home, let me know.

Categories: Elsewhere

Ian Donnelly: How-To: kdb merge

Sun, 10/08/2014 - 04:30

Hi Everybody,

As you may know, part of my Google Summer of Code project involved the creation of merge tools for Elektra. The one I am going to focus on today is kdb merge. The kdb tool allows users to access and perform functions on the Elektra Key Database from the command line. We added a new command to this very useful tool, the merge command. This command allows a user to perform a three-way merge of KeySets from the kdb tool.
The command to use this tool is:
kdb merge [options] ourpath theirpath basepath resultpath

The standard naming scheme for a three-way merge consists of ours, theirs, and base. Ours refers to the local copy of a file, theirs refers to a remote copy, and base refers to their common ancestor. This works very similarly for KeySets, especially ones that consist of mounted conffiles. For mounted conffiles, ours should be the user’s copy, theirs would be the maintainer’s copy, and base would be the conffile as it was during the last package upgrade or during the package install. If you are just trying to merge any two KeySets that derive from the same base, ours and theirs can be interchanged. In kdb merge, ourpath, theirpath, and basepath work just like ours, theirs, and base except each one represents the root of a KeySet. Resultpath is pretty self-explanatory: it is just where you want the result of the merge to be saved.

As for the options, there are a few basic ones and one option, strategy, that is very important. The basic options are:
-H --help which prints the help text
-i --interactive which attempts the merge in an interactive way
-t --test which tests the proposed merge and informs you about possible conflicts
-v --verbose which runs the merge in verbose mode
-V --version which prints info about the version

The other option, strategy, is:
-s --strategy which is used to specify a strategy to use in case of a conflict

The current list of strategies is:
preserve the merge will fail if a conflict is detected
ours the merge will use our version during a conflict
theirs the merge will use their version during a conflict
base the merge will use the base version during a conflict

If no strategy is specified, the merge will default to the preserve strategy so as not to risk making the wrong decision. If any of the other strategies are specified, when a conflict is detected, merge will use the Key specified by the strategy (ours, theirs, or base) for the resulting Key.

An example of using the kdb merge command:
kdb merge -s ours system/hosts/ours system/hosts/theirs system/hosts/base system/hosts/result
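If you would rather check for conflicts before actually merging, the -t option listed above takes the same arguments; a quick sketch reusing the example paths:

kdb merge -t system/hosts/ours system/hosts/theirs system/hosts/base system/hosts/result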

-Ian S. Donnelly

Categories: Elsewhere

Russell Coker: Being Obviously Wrong About Autism

Sat, 09/08/2014 - 18:01

I’m watching a Louis Theroux documentary about Autism (here’s the link to the BBC web site [1]). The main thing that strikes me so far (after watching 7.5 minutes of it) is the bad design of the DLC-Warren school for Autistic kids in New Jersey [2].

A significant portion of people on the Autism Spectrum have problems with noisy environments; whether most Autistic people have problems with noise depends on what degree of discomfort is considered a problem. But I think it’s reasonable to assume that the majority of kids on the Autism Spectrum will behave better in a quiet environment. So any environment that is noisy will cause more difficult behavior in most Autistic kids, and the kids who don’t have problems with the noise will have problems with the way the other kids act. Any environment that is more prone to noise pollution than is strictly necessary is hostile to most people on the Autism Spectrum and all groups of Autistic people.

The school that is featured in the start of the documentary is obviously wrong in this regard. For starters I haven’t seen any carpet anywhere. Carpeted floors are slightly more expensive than lino but the cost isn’t significant in terms of the cost of running a special school (such schools are expensive by private-school standards). But carpet makes a significant difference to ambient noise.

Most of the footage from that school included obvious echoes even though they had an opportunity to film when there was the least disruption – presumably noise pollution would be a lot worse when a class finished.

It’s not difficult to install carpet in all indoor areas in a school. It’s also not difficult to install rubber floors in all outdoor areas in a school (it seems that most schools are doing this already in play areas for safety reasons). For a small amount of money spent on installing and maintaining noise absorbing floor surfaces the school could achieve better educational results. The next step would be to install noise absorbing ceiling tiles and wallpaper, that might be a little more expensive to install but it would be cheap to maintain.

I think that the hallways in a school for Autistic kids should be as quiet as the lobby of a 5 star hotel. I don’t believe that there is any technical difficulty in achieving that goal, making a school look as good as an expensive hotel would be expensive but giving it the same acoustic properties wouldn’t be difficult or expensive.

How do people even manage to be so wrong about such things? Do they never seek any advice from any adult on the Autism Spectrum about how to run their school? Do they avoid doing any of the most basic Google searches for how to create a good environment for Autistic people? Do they just not care at all and create an environment that looks good to NTs? If they are just trying to impress NTs then why don’t they have enough pride to care that people like me will know how bad they are? These aren’t just rhetorical questions, I’d like to know what’s wrong with those people that makes them do their jobs in such an amazingly bad way.

Related posts:

  1. Autism, Food, etc James Purser wrote “Stop Using Autism to Push Your Own...
  2. Autism and a Child Beauty Contest Fenella Wagener wrote an article for the Herald Sun about...
  3. Autism Awareness and the Free Software Community It’s Autism Awareness Month April is Autism Awareness month, there...
Categories: Elsewhere

Steve Kemp: Rebooting the CMS

Sat, 09/08/2014 - 10:59

I run a cluster for the Debian Administration website, and the code is starting to show its age. Unfortunately the code is not so modern, and has accumulated a lot of baggage.

Given the relatively clean separation between the logical components I'm interested in trying something new. In brief the current codebase allows:

  • Posting of articles, blog-entries, and polls.
  • The manipulation of the same.
  • User-account management.

It crossed my mind the other night that it might make sense to break this code down into a number of mini-servers - a server to handle all article-related things, a server to handle all poll-related things, etc.

If we have a JSON endpoint that will allow:

  • GET /article/32
  • POST /article/ [create]
  • GET /articles/offset/number [get the most recent]

Then we could have a very thin shim/server on top of that which would present the public API. Of course the internal HTTP overhead might make this unworkable, but it is an interesting approach to the problem, and would allow the backend storage to be migrated in the future without too much difficulty.
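To make the idea a bit more concrete, this is roughly how the shim might talk to one of the mini-servers; the host and port here are purely illustrative, not something from the actual code:

curl -s http://localhost:8081/article/32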

At the moment I've coded up two trivial servers, one for getting user-data (to allow login requests to succeed), and one for getting article data.

There is a tiny presentation server written to use those back-end servers, and it seems like an approach that might work. Of course deployment might be a pain...

It is still an experiment rather than a plan, but it could work out: http://github.com/skx/snooze/.

Categories: Elsewhere

Russ Allbery: A new challenge

Sat, 09/08/2014 - 06:01

Those reading this journal may have noticed that my rate of posting has dropped a bit in the past few years, and quite a lot in the past year. One of the major reasons for this was work, which had been getting more bureaucratic, more stressful, less trusting, and more fearful. After this got drastically worse in the past six months, I finally decided enough was enough and took advantage of a good opportunity to do something different.

I will be joining Dropbox's site reliability engineering team in a week and a half (which means that I'll be working on their servers, not on the product itself). It will take a few months to settle in, but hopefully this will mean a significant improvement to my stress levels and a lot of interesting projects to work on.

I'm taking advantage of this change to inventory the various things I'm currently committed to and let go of some projects to make more space in my life. There are also a variety of software projects that I was maintaining as part of my job at Stanford, and I will be orphaning many of those packages. I'll make another journal post about that a bit later.

For Debian folks, I am going to be at Debconf, and hope to meet many of you there. (It's going to sort of be my break between jobs.) In the long run, I'm hoping this move will let me increase my Debian involvement.

In the long run, I expect most of my free software work, my reviews, and the various services I run to continue as before, or even improve as my stress drops. But I've been at Stanford for a very long time, so this is quite the leap into the unknown, and it's going to take a while before I'm sure what new pattern my life will fall into.

Categories: Elsewhere

Clint Adams: The politically-correct term is a juvenile cricket

Fri, 08/08/2014 - 22:16

Normally I'm disgusted by fangirling of jwz, but it seems that he finally wrote something I like.

Categories: Elsewhere

Daniel Pocock: Help needed reviewing Ganglia GSoC changes

Fri, 08/08/2014 - 22:14

The Ganglia project has been delighted to have Google's support for 5 students in Google Summer of Code 2014. The program officially finishes in ten more days, on 18 August.

If you are a user of Ganglia, Nagios, RRDtool or R or just an enthusiastic C or Python developer, you may be able to use and provide feedback for the students while benefitting from the cool new features they have been working on.

  • Chandrika Parimoo (Python, Nagios and some Syslog): Chandrika generalized some of my ganglia-nagios-bridge code into the PyNag library. I then used it as the basis for syslog-nagios-bridge. Chandrika has also done some work on improving the ganglia-nagios-bridge configuration file format.
  • Oliver Hamm (C): Oliver has been working on metrics about Ganglia infrastructure. If you have a large and dynamic Ganglia cloud, this is for you.
  • Plamen Dimitrov (R, RRDtool): Plamen has been building an R plugin for inspecting RRD files from Ganglia or any other type of RRD.
  • Rana (NVIDIA, C): Rana has been working on improvements to Ganglia monitoring of NVIDIA GPUs, especially in HPC clusters.
  • Zhi An (Java, JMX): Zhi An has been extending the JMXetric and gmetric4j projects to provide more convenient monitoring of Java server processes.

If you have any feedback or questions, please feel free to discuss on the Ganglia-general mailing list and CC the student and their mentor.

Categories: Elsewhere

Jan Wagner: Monitoring Plugins Debian packages

Fri, 08/08/2014 - 22:03

You may wonder why the good old nagios-plugins are not up to date in Debian unstable and testing.

Since the people behind and maintaining the plugins (<= 1.5) were forced to rename the software project to Monitoring Plugins, some work behind the scenes and much QA work was necessary to release the software in a proper state. This happened 4 weeks ago with the release of version 2.0 of the Monitoring Plugins.

With one day of delay the package was uploaded into unstable, but hit the Debian NEW queue due to the changed package name(s). Now we (and maybe you) are waiting to get them reviewed by ftp-master. This will hopefully happen before the jessie freeze.

Until this happens, you may grab packages for wheezy from the 'wheezy-backports' suite at ftp.cyconet.org/debian/ or the 'debmon-wheezy' suite at debmon.org. Feedback is much appreciated.

Categories: Elsewhere

Richard Hartmann: RFC 7194

Fri, 08/08/2014 - 20:42

On a positive note, RFC 7194 has been published.

Categories: Elsewhere

Tiago Bortoletto Vaz: New gadget

Fri, 08/08/2014 - 20:16

Solid, energy-efficient, nice UI, wireless, multiple output formats and hmm... can you smell it? :)

Categories: Elsewhere

Ian Donnelly: The Line Plug-In

Fri, 08/08/2014 - 19:19

Hi Everybody,

As you may have noticed, I wrote a new plug-in for Elektra called “line“. I used it for a lot of examples in my tutorial, How-To: Write a Plug-In. The line plug-in is a very simple storage plug-in for Elektra. This plug-in stores files into the Elektra Key Database, creating a new Key for each line and setting the string value of each Key to the content of the corresponding line of the file. So if we have a file called “hello.txt”:

Hello
World!

And we mount it to kdb like so: kdb mount ~/hello.txt system/hello_line line. The output of kdb ls system/hello_line would then be:

system/hello_line/#1
system/hello_line/#2

The getString values of #1 and #2 are Hello and World! respectively. If this seems like a very simple plug-in, that’s because it is. Obviously, this plug-in isn’t a great showcase for the robustness of Elektra; any data structure could store a file line by line relatively easily, so why did we add a line plug-in at all?
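Before answering that, a quick check that the mount behaves as described (just a sketch, assuming the mount above succeeded):

kdb get system/hello_line/#1    # prints: Hello
kdb get system/hello_line/#2    # prints: World!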

The answer is that we included a line plug-in to allow any line-based file to use functions of Elektra, particularly the new Merge function. My Google Summer of Code project is to allow for automatic three-way merges of Debian conffiles during package upgrades, as opposed to the current prompt and manual merging a user must do if a conffile has been edited. Using Elektra and the new merge code we can mount a conffile with the best plug-in for it (the ini plug-in for Samba’s smb.conf, for instance), and that allows for a very powerful merging ability with a lot more success than a simple diff merge. However, there are a lot of conffiles that don’t use any particular standard (such as ini, xml, or JSON) to store data. That is where the line plug-in comes in. We can still mount these files using the line plug-in and attempt a merge. Of course it is much more likely to have conflicts, and this type of merge is still susceptible to many of the same flaws as regular file merges (such as not being able to detect when a line has been moved), but in simple cases the merge may succeed, which would reduce the overall number of times a user would be prompted during an upgrade.

Basically, I wrote a line plug-in for Elektra as a fallback for conffile merges when we can’t mount the conffile in any more meaningful way. While merges using KeySets that were mounted using line are more likely to fail than merges using more specialized plug-ins, there are cases where these merges will succeed and the user will not have to deal with a confusing prompt. The whole point of my Google Summer of Code project is to make upgrading packages and dealing with conffiles much smoother and easier than it is now by including a three-way merge, and this line plug-in will help with that goal.

Sincerely,
Ian S. Donnelly

Categories: Elsewhere

Richard Hartmann: Microsoft Linux: Debian

Fri, 08/08/2014 - 13:32

Huh...

Source

(Yes, I am on Debian's trademark team and no, I have no idea what that means. Yet.)

Categories: Elsewhere


Sune Vuorela: Fun and joy with .bat files

Fri, 08/08/2014 - 09:16

Occasionally, one gets in touch with somewhat ‘foreign’ technologies and needs to get stuff working anyway.

Recently, I had to do various hacking with and around .bat files. Bat files are a kind of script file for Microsoft Windows.

Calling external commands

Imagine you need to call some other command, let’s say git diff. So from a cmd prompt, you would write

git diff

similar to writing shell scripts on unixes. But there is a catch. If the thing you want to call is another bat-script, just calling it ensures it ‘replaces’ the current script and never returns. So you need

call git diff

if the command you want to run is a bat file and you want to return to your script.

Calling an external helper next to your script
If you for some reason need to call some external helper placed next to your script, there is a helpful way to do that as well. Imagine your helper is called helper.bat

call %~dp0helper.bat

is the very self-explanatory way of doing that.

Stopping execution of your script

If you somehow encounter some condition in your script that requires you to stop your script, there is a handy command: ‘exit’. It even takes an argument for the error code to return.

exit 2

stops your script with return code 2. But it also has the nice added feature that if you do it in a script you run by hand in a terminal, it also exits the terminal.

Luckily there is also a fix for that:

exit /b 2

This doesn’t exit your interactive terminal, and it sets the %ERRORLEVEL% variable to the exit code.

Fortunately, the fun doesn’t stop here.

If the script is run non-interactively, exit /b doesn’t set the exit code seen by, for example, perl’s system() call. You need to use exit without /b for that. So now you need two scripts: one for “interactive” use that calls exit /b, and a similar one using plain exit for use by other apps/scripts.

Or, we can combine some of our knowledge and add an extra layer of indirection.

  • write your script for interactive use (with exit /b) and let’s call it script.bat
  • create a simple wrapper script
    call %~dp0script.bat
    exit %ERRORLEVEL%

  • call the wrapper for non-interactive use

and then success.

Oh, and on an unrelated note: Windows can’t schedule tasks for users that aren’t logged in and don’t have a password set. The response “Access Denied” is the only clue given.

Categories: Elsewhere

Ian Wienand: Bash arithmetic evaluation and errexit trap

Fri, 08/08/2014 - 07:30

In the "traps for new players" category:

count=0
things="0 1 0 0 1"
for i in $things; do
    if [ $i == "1" ]; then
        (( count++ ))
    fi
done
echo "Count is ${count}"

Looks fine? I've probably written this many times. There's a small gotcha:

((expression))
The expression is evaluated according to the rules described below under ARITHMETIC EVALUATION. If the value of the expression is non-zero, the return status is 0; otherwise the return status is 1. This is exactly equivalent to let "expression".

When you run this script with -e or enable errexit -- probably because the script has become too big to be reliable without it -- the first (( count++ )) evaluates to 0 (post-increment returns the old value), so per the above its return status is 1 and the script stops. A definite trap to watch out for!
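A couple of ways to avoid the trap, as a sketch; both keep the increment's exit status at zero so errexit doesn't fire:

count=$((count + 1))    # plain assignment with arithmetic expansion; the assignment itself returns 0
(( count++ )) || true   # or explicitly swallow the non-zero status of the arithmetic command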

Categories: Elsewhere

Ian Donnelly: How-To: Write a Plug-In (Part 3, Coding)

Fri, 08/08/2014 - 06:13

Hi Everybody!

Hope you have been enjoying my tutorial on writing plug-ins so far. In Part 1 we covered the basic overview of a plug-in. Part 2 covered a plug-in’s contract and the best way to write one. Now, for Part 3 we are going to cover the meat of a plug-in, the actual coding. As you should know from reading Part 1, there are five main functions used for plug-ins: elektraPluginOpen, elektraPluginGet, elektraPluginSet, elektraPluginClose, and ELEKTRA_PLUGIN_EXPORT(Plugin), where Plugin should be replaced with the name of your plug-in. We are going to start this tutorial by focusing on elektraPluginGet because it is usually the most critical function.

As we discussed before, elektraPluginGet is the function responsible for turning information from a file into a usable KeySet. This function usually differs pretty greatly between plug-ins. The function is of type int: it returns 0 on success and another number on an error. It takes in a Key, usually called parentKey, whose string value contains the path to the file that is mounted. For instance, if you run the command kdb mount /etc/linetest system/linetest line, then keyString(parentKey) should be equal to “/etc/linetest”. At this point, you generally want to open the file so you can begin saving it into keys. Here is the trickier part to explain: basically, at this point you will want to iterate through the file and create keys and store string values inside of them according to what your plug-in is supposed to do. I will give a few examples of different plug-ins to better explain.

My line plug-in was written to read files into a KeySet line by line, using the newline character as a delimiter and naming the keys by their line number, such as #1, #2, … #_22 for a file with 22 lines. So once I open the file given by parentKey, every time I read a line I create a new key, let’s call it new_key, using keyDup(parentKey). Then I set new_key’s name to #N (where N is the line number) using keyAddBaseName and store the string value of the line into the key using keySetString. Once the key is initialized, I append it to the KeySet that was passed into the elektraPluginGet function, let’s call it returned for now, using ksAppendKey(returned, new_key). Now the KeySet will contain new_key, properly named and saved where it should be according to the kdb mount command (in this case, system/linetest/#N), with a string value equal to the contents of that line in the file. My plug-in repeats these steps as long as it hasn’t reached end of file, thus saving the whole file into a KeySet line by line.

The simpleini plug-in works similarly, but it parses ini files instead of just going line by line. At their most basic level, ini files are in the format name=value, with each pair taking one line. So for this plug-in, it makes a lot of sense to name each Key in the KeySet after the string to the left of the “=” sign and store the value into each key as a string. For instance, the name of the key would be “name” and keyGetString(name) would return “value”.
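As a quick illustration from the command line (just a sketch; the file path and key name here are made up, not from the original post):

kdb mount /etc/sample.ini system/sample simpleini
kdb get system/sample/name    # would print: value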

As you may have noticed, the simpleini and line plug-ins work very similarly; they just parse the files differently. The simpleini plug-in parses the file in a way that is more natural to ini files (setting the key’s name to the left side of the equals sign and the value to the right side of the equals sign). The elektraPluginGet function is the heart of a storage plug-in: it’s what allows Elektra to store configurations in its database. This function isn’t just run when a file is first mounted; whenever the file gets updated, it is run again so that the Elektra Key Database matches.

We also gave a brief overview of the elektraPluginSet function. This function is basically the opposite of elektraPluginGet. Where elektraPluginGet reads information from a file into the Elektra Key Database, elektraPluginSet writes information from the database back into the mounted file.

First have a look at the signature of elektraLineSet:

int elektraLineSet(Plugin *handle ELEKTRA_UNUSED, KeySet *toWrite, Key *parentKey)

Let’s start with the most important parameters, the KeySet and the parentKey. The KeySet supplied is the KeySet that is going to be persisted in the file. In our case it would contain the Keys representing the lines. The parentKey is the topmost Key of the KeySet and serves several purposes. First, it contains the filename of the destination file as its value. Second, errors and warnings can be emitted via the parentKey. We will discuss error handling in more detail later. The Plugin handle can be used to persist state information in a threadsafe way with elektraPluginSetData. As our plugin is not stateful and therefore does not use the handle, it is marked as unused in order to suppress compiler warnings.

Basically the implementation of elektraLineSet can be described with the following pseudocode:

open the file
if (error)
{
    ELEKTRA_SET_ERROR(74, parentKey, keyString(parentKey));
}
for each key
{
    write the key value together with a newline
}
close the file

The full-blown code can be found at https://github.com/ElektraInitiative/libelektra/blob/master/src/plugins/line/line.c

As you can see, all elektraLineSet does is open a file, take each Key from the KeySet (remember they are named #1, #2 … #_22) in order, and write each Key as its own line in the file. Since we don’t care about the name of the Key in this case (other than for order), we just write the value of keyString for each Key as a new line in the file. That’s it. Now, each time the mounted KeySet is modified, elektraPluginSet will be called and the mounted file will be updated.

We haven’t discussed ELEKTRA_SET_ERROR yet. Because Elektra is a library, printing errors to stderr wouldn’t be a good idea. Instead, errors and warnings can be appended to a key in the form of metadata. This is what ELEKTRA_SET_ERROR does. Because the parentKey always exists, even if a critical error occurs, we append the error to the parentKey. The first parameter is an id specifying the general error that occurred. A listing of existing errors together with a short description and a categorization can be found at https://github.com/ElektraInitiative/libelektra/blob/master/src/liberror/specification. The third parameter can be used to provide additional information about the error. In our case we simply supply the filename of the file that caused the error. The kdb tools will interpret this error and print it in a pretty way. Notice that this can be used in any plugin function where the parentKey is available.

The elektraPluginOpen and elektraPluginClose functions are not commonly used for storage plug-ins, but they can be useful and are worth reviewing. The elektraPluginOpen function runs before elektraPluginGet and is useful for doing any initialization the plug-in needs. On the other hand, elektraPluginClose runs after the other functions of the plug-in and can be useful for freeing up resources.

The last function, one that is always needed in a plug-in, is ELEKTRA_PLUGIN_EXPORT. This function is responsible for letting Elektra know that the plug-in exists and which methods it implements. The code from my line plug-in is a good example and pretty self-explanatory:

Plugin *ELEKTRA_PLUGIN_EXPORT(line)
{
    return elektraPluginExport("line",
        ELEKTRA_PLUGIN_GET, &elektraLineGet,
        ELEKTRA_PLUGIN_SET, &elektraLineSet,
        ELEKTRA_PLUGIN_END);
}

There you have it! This is the last part of my tutorial on writing a storage plug-in for Elektra. Hopefully you now have a good understanding of how Elektra plug-ins work and you’ll be able to add some great functionality into Elektra through development of new plug-ins. I hope you enjoyed this tutorial and if you have any questions just leave a comment!

Happy coding!
Ian S. Donnelly

Categories: Elsewhere

Jordi Mallach: A pile of reasons why GNOME should be Debian jessie’s default desktop environment

Thu, 07/08/2014 - 23:58

GNOME has, for some reason or another, always been the default desktop environment in Debian, ever since the installer became able to install a full desktop environment by default. Release after release, Debian has shipped different versions of GNOME, first based on the venerable 1.2/1.4 series, then moving to the time-based GNOME 2.x series, and finally to the newly designed 3.4 series for the last stable release, Debian 7 ‘wheezy’.

During the final stages of wheezy’s development, it was pointed out that the first install CD image would no longer hold all of the required packages to install a full GNOME desktop environment. There was lots of discussion surrounding this bug or fact, and there were two major reactions to it. The Debian GNOME team rebuilt some key packages so they would be compressed using xz instead of gzip, saving the few megabytes that were needed to squeeze everything into the first CD. In parallel, the tasksel maintainer decided switching to Xfce as default desktop was another obvious fix. This change, unannounced and made two days before the freeze, was very contested and spurred the usual massive debian-devel threads. In the end, and after a few default desktop flip-flops, it was agreed that GNOME would remain the default for the already frozen wheezy release, and this issue would be revisited later on during jessie’s development.

And indeed, some months ago, Xfce was again reinstated as Debian’s default desktop for jessie as announced:

Change default desktop to xfce.

This will be re-evaluated before jessie is frozen. The evaluation will start around the point of DebConf (August 2014). If at that point gnome looks like a better choice, it’ll go back as the default. Some criteria for that choice will include:

* Popcon numbers for gnome on jessie. If gnome installations continue to rise fast enough despite xfce being the default (compared with, say kde installations), then we’ll know that users prefer gnome. Currently we have no data about how many users would choose gnome when it’s not the default. Part of the reason for switching to xfce now is to get such data.

* The state of accessability support, particularly for the blind.

* How well the UI works for both new and existing users. Gnome 3 seems to be adding back many gnome 2 features that existing users expect, as well as making some available via addons. If it feels comfortable to gnome 2 (and xfce) users, that would go a long way toward switching back to it as the default. Meanwhile, Gnome 3 is also breaking new ground in its interface; if the interface seems more welcoming to new users, or works better on mobile devices, etc, that would again point toward switching back.

* Whatever size constraints exist for CD or other images at the time.

-- Hello to all the tech journalists out there. This is pretty boring. Why don’t you write a story about monads instead?

― Joey Hess in dfca406eb694e0ac00ea04b12fc912237e01c9b5.

Suffice it to say that the Debian GNOME team participants have never been thrilled about how the whole issue has been handled, and we’ve been wondering if we should be doing anything about it, or just move along and enjoy the smaller number of bug reports against GNOME packages that this change would bring us if it made it through to the final release. During our real-life meet-ups at FOSDEM and the systemd+GNOME sprint in Antwerp, most members of the team did feel Debian would not be delivering a graphical environment with the polish we think our users deserve, and decided we should at least try to convince the rest of the Debian project and our users that Debian will be best served by shipping GNOME 3.12 by default. Power users, of course, can and know how to get around this default and install KDE, Xfce, Cinnamon, MATE or whatever other choice they have. For the average user, though, we think we should be shipping GNOME by default, and tasksel should revert the above commit again. Some of our reasons are:

  • Accessibility: GNOME continues to be the only free desktop environment that provides full accessibility coverage, right from the login screen. While it’s true GNOME 3.0 was lacking in many areas, and GNOME 3.4 (which we shipped in wheezy) was just barely acceptable thanks to some last minute GDM fixes, GNOME 3.12 should have ironed out all of those issues, and our non-expert understanding is that a11y support is now on par with what GNOME 2.30 from squeeze offered.
  • Downstream health: The number of active members in the team taking care of GNOME in Debian is around 5-10 persons, while it is 1-2 in the case of Xfce. Being the default desktop draws a lot of attention (and bug reports) that only a bigger team might have the resources to handle.
  • Upstream health: While GNOME is still committed to its time-based release schedule and ships new versions every 6 months, Xfce upstream is, unfortunately, struggling a bit more to keep up with new plumbing technology. Only very recently has it regained support for suspend/hibernate via logind, or support for BlueZ 5.x, for example.
  • Community: GNOME is one of the biggest free software projects, and is lucky to have created an ecosystem of developers, documenters, translators and users that interact regularly in a live social community. Users and developers gather in hackfests and big, annual conferences like GUADEC, the Boston Summit, or GNOME.Asia. Only KDE has a comparable community, the rest of the free desktop projects don’t have the userbase or manpower to sustain communities like this.
  • Localization: Localization is more extensive and complete in GNOME. Xfce has 18 languages above 95% of coverage, and 2 at 100% (excluding English). GNOME has 28 languages above 95%, 9 of them being complete (excluding English).
  • Documentation: Documentation coverage is extensive in GNOME, with most of the core applications providing localized, up to date and complete manuals, available in an accessible format via the Help reader.
  • Integration: The level of integration between components is very high in GNOME. For example, instant messaging, agenda and accessibility components are an integral part of the desktop. GNOME is closely integrated to NetworkManager, PulseAudio, udisks and upower so that the user has access to all the plumbing in a single place. GNOME also integrates easily with online accounts and services (ownCloud, Google, MS Exchange…).
  • Hardware: GNOME 3.12 will be one of the few desktop environments to support HiDPI displays, now very common on some laptop models. Lack of support for HiDPI means non-technical users will get an unreadable desktop by default, and no hints on how to fix that.
  • Security: GNOME is more secure. There are no processes launched with root permissions on the user’s session. All everyday operations (package management, disk partitioning and formatting, date/time configuration…) are accomplished through PolicyKit wrappers.
  • Privacy: One of the latest focuses of GNOME development is improving privacy, and work is being done to make it easy to run GNOME applications in isolated containers, integrate Tor seamlessly in the desktop experience, better disk encryption support and other features that should make GNOME a more secure desktop environment for end users.
  • Popularity: One of the metrics discussed by the tasksel change proponents mentioned popcon numbers. 8 months after the desktop change, Xfce does not seem to have made a dent in install numbers. The Debian GNOME team doesn’t feel popcon’s data is any better than a random online poll though, as it’s an opt-in service which the vast majority of users don’t enable.
  • systemd embracing: One of the reasons to switch to Xfce was that it didn’t depend on systemd. But now that systemd is the default, that shouldn’t be a problem. Also, given that ConsoleKit is deprecated and dead upstream, KDE and Xfce are switching or planning to switch to systemd/logind.
  • Adaptation: Debian forced a big desktop change with the wheezy release (switching from the traditional GNOME 2.x to the new GNOME Shell environment). Switching again would mean more adaptation for users when they’ve had two years to experience GNOME 3.4. Furthermore, GNOME 3.12 means two years of improvements and polishing on top of GNOME 3.4, which should help with some of the rough edges found in the GNOME release shipped with wheezy.
  • Administration: GNOME is easy to administrate. All the default settings can be defined by administrators, and mandatory settings can be forced to users, which is required in some companies and administrations; Xfce cannot do that. The close integration with freedesktop components (systemd, NM, PulseAudio…) also gives access to specific and useful administration tools.

In short, we think defaulting to GNOME is the best option for the Debian release; in contrast, shipping Xfce as the default desktop could mean delivering a desktop experience that has some incomplete or rough edges and is not on par with Debian quality standards for a stable release. We believe tasksel should again revert the change and be uploaded as soon as possible, in order to get people testing images with GNOME sooner rather than later, with the freeze only two months away.

We would also like changes of this nature to be, in the future, not just announced in a git commit log, but widely discussed in debian-project and the other usual development/decision channels, as happened recently with the change of init system. Whichever the final decision is, we will continue to package GNOME with great care to ensure our users get the best possible desktop experience Debian can offer.

Categories: Elsewhere
