Feed aggregator

Gregor Herrmann: GDAC 2014/8

Planet Debian - Mon, 08/12/2014 - 21:49

today, a pkg-perl member who hadn't been very active for the last 2 or so years "re-appeared", & together we prepared & uploaded a new package. – always good to see people coming back!

this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

Categories: Elsewhere

Visitors Voice: Milestone reached regarding Search API for Drupal 8

Planet Drupal - Mon, 08/12/2014 - 20:55
For all of us who care about site search for Drupal, the maintainer Thomas Seidl has written a report about the current status of Search API for Drupal 8. The search crew’s vision is not only to port Search API to Drupal 8, but also to remove all known limitations, making site search for Drupal […]
Categories: Elsewhere

Bernhard R. Link: The Colon in the Shell.

Planet Debian - Mon, 08/12/2014 - 20:35

I was recently asked about some construct in a shell script starting with a colon (:), leading me into a long monologue about it. Afterwards I realized I had forgotten to mention half of the nice things. So here, for your amusement, are some uses of the colon in the shell:

To find the meaning of ":" in the bash manpage[1], you have to look at the start of the SHELL BUILTIN COMMANDS section. There you find:

: [arguments] No effect; the command does nothing beyond expanding arguments and performing any specified redirections. A zero exit code is returned.

If you wonder what the difference from true is: I don't know of any difference (except that there is no /bin/:).

So what is the colon useful for? You can use it if you need a command that does nothing but is still a command.

  • For example, if you want to avoid using a negation (for fear of history expansion still being on by default in an interactive bash, or because you want to support ancient shells), you cannot simply write

        if condition ; then
            # this will be an error
        else
            echo condition is false
        fi

    but need some command there, for which the colon can be used:

        if condition ; then
            : # nothing to do in this case
        else
            echo condition is false
        fi

    To confuse your reader, you can use the fact that the colon ignores its arguments, so that you only have normal words there:

        if condition ; then
            : nothing to do in this case # <- this works but is not good style
        else
            echo condition is false
        fi

    though I strongly recommend against it (exercise: why did I use a # there for my remark?).
  • This of course also works in other cases:

        while processnext ; do
            :
        done
  • The ability to ignore the actual arguments (while still processing them, as with every command that ignores its arguments) can also be used, as in:

        : ${VARNAME:=default}

    which sets VARNAME to a default if it is unset or empty. (One could also use that expansion the first time the variable is used, or ${VARNAME:-default} everywhere, but this can be more readable.)
  • In other cases you do not strictly need a command, but using the colon can clear things up, like creating or truncating a file using a redirection (a small combined example follows this list):

        : > /path/to/file
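
To tie these together, here is a minimal, self-contained sketch combining the idioms above (the variable and file names are made up for illustration):

    #!/bin/sh
    # set a default without invoking any external command
    : ${GREETING:=hello}

    # create or truncate a scratch file before we start
    : > /tmp/colon-demo.log

    if [ "$GREETING" = hello ] ; then
        : # nothing to do, the default is fine
    else
        echo "unexpected greeting: $GREETING" >> /tmp/colon-demo.log
    fi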

Then there are more things you can do with the colon, most of which I'd put under "abuse":

  • Misusing it for comments:

        : ====== here

    While it has the advantage of also showing up in -x output, the confusion it causes readers and the danger of any shell-active character in the "comment" make this generally a bad idea.
  • As it is practically the same as true, it can be used as a shorter form of true. Given that true is more readable, that is a bad idea. (At least it isn't as evil as using the empty string to denote true.)

        # bad style!
        if condition ; then doit= ; doit2=: ; else doit=false ; doit2=false ; fi
        if $doit ; then echo condition true ; fi
        if $doit2 && true ; then echo condition true ; fi
  • Another way to scare people:

        ignoreornot=
        $ignoreornot echo This you can see.
        ignoreornot=:
        $ignoreornot echo This you cannot see.

    While it works, I recommend against it: it is easily confusing, and any > or $(...) in there will likely rain havoc over you.
  • Last and least, one can shadow the built-in colon with a different one. Only useful for obfuscation, and thus likely always evil. :(){:;:};: anyone?

This is of course not a complete list. But unless I missed something else, those are the most common cases I run into.

[1] <rant>If you never looked at it, better don't start: the bash manpage is legendary for being quite useless, hiding all information among other information in a quite absurd order. Unless you are looking for documentation on how to write a shell script parser; in that case the bash manpage is really what you want to read.</rant>

Categories: Elsewhere

Appnovation Technologies: How to properly use PHP on Drupal views fields

Planet Drupal - Mon, 08/12/2014 - 18:58

Every once in a while, as a Drupal site builder you will come across this problem.

Categories: Elsewhere

Nuvole: Atrium Folders for Open Atrium 2

Planet Drupal - Mon, 08/12/2014 - 18:19
Subtitle: Nuvole's files and documents management feature is now available for the latest version of Open Atrium

We received many requests to make an updated version of our Atrium Folders feature available for the latest version of Open Atrium, the excellent Drupal-based solution for Intranets developed by Phase2.

OECD sponsored the development of the new version as an open source project, in order to add file management functionality to the Innovation Policy Platform site that it manages together with the World Bank. Atrium Folders for Open Atrium 2 is thus now available to everybody.

The usual features, a new way

Open Atrium changed completely and so did Atrium Folders. There are many differences under the hood, with a complete code rewrite, but the familiar user experience is still there.

Uploading and downloading files

Creating folders and adding files to the folders is as easy as creating any other content in Open Atrium 2: it is enough to create "Files sections". When you are viewing a folder, specific buttons allow you to create subfolders, upload files and download any file directly.

Access management

Access management works as it does for other nodes in Open Atrium. Access to a folder can be restricted separately for viewing and editing, and it can be set at the folder level.

Notifications

The notification system of Open Atrium can also be used for folders. Users can be informed when new files are added, with the same interface used by other Open Atrium features.

And much more

Media module support

The files are attached with the Media widget, and it is thus possible to manage not only files but everything Media supports, such as YouTube videos or files attached to other content.

Multiple uploads, with drag and drop support

The multi-upload feature of Open Atrium 2 can also be used with Folders to upload several files at the same time. Drag and drop uploads are supported too.

Download folder as ZIP file

The download button available for files also exists for folders; it lets you download a folder, with its subfolders and all included files, as a ZIP archive. This functionality is available as a submodule, bundled with Atrium Folders.

File and folder revisions

Atrium Folders supports history and revisions both for folders and files. You can view previous versions of a file and optionally restore an older version. This functionality is available as a submodule, bundled with Atrium Folders.

Download, installation and support

The Open Atrium Folders feature can be downloaded and installed like any other module.

It is available on drupal.org at https://www.drupal.org/project/oa_folders/

Please report any issues in the module's issue queue at drupal.org

Tags: Drupal Planet, Open Atrium
Categories: Elsewhere

Andrew Pollock: [tech] A geek Dad goes to Kindergarten with a box full of Open Source and some vegetables

Planet Debian - Mon, 08/12/2014 - 18:04

Zoe's Kindergarten encourages parents to come in and spend some time with the kids. I've heard reports of other parents coming in and doing baking with the kids or other activities at various times throughout the year.

Zoe and I had both wanted me to come in for something, but it had taken me until the last few weeks of the year to get my act together and do something.

I'd thought about coming in and doing some baking, but that seemed rather done to death already, and it's not like baking is really my thing, so I thought I'd do something technological. I just wracked my brains for something low effort and Kindergarten-age friendly.

The Kindergarten has a couple of eduss touch screens. They're just some sort of large-screen with a bunch of inputs and outputs on them. I think the Kindergarten mostly uses them for showing DVDs and hooking up a laptop and possibly doing something interactive on them.

As they had HDMI input, and my Raspberry Pi had HDMI output, it seemed like a no-brainer to do something using the Raspberry Pi. I also thought hooking up the MaKey MaKey to it would make for a more fun experience. I just needed to actually have it all do something, and that's where I hit a bit of a creative brick wall.

I thought I'd just hack something together where, based on different inputs on the MaKey MaKey, a picture would get displayed and a sound played. Nothing fancy at all. I really struggled to get a picture displayed full screen in a time-efficient manner. My Pi was running Raspbian, so it was relatively simple to configure LightDM to auto-login and auto-start something. I used triggerhappy to invoke a shell script, which took care of playing a sound and displaying an image.
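
For the curious, here is a rough sketch of that kind of setup. The event names, paths and script are illustrative assumptions rather than the actual configuration used, and as the next paragraph explains, spawning an image viewer per event turned out to be too slow in practice:

    #!/bin/sh
    # /usr/local/bin/show-item.sh -- invoked by triggerhappy (thd).
    # A trigger file such as /etc/triggerhappy/triggers.d/makey.conf maps
    # input events to this script, one "<event> <value> <command>" per line, e.g.
    #   KEY_LEFT   1   /usr/local/bin/show-item.sh left
    #   KEY_RIGHT  1   /usr/local/bin/show-item.sh right
    ITEM="$1"
    # play the matching sound in the background
    aplay "/home/pi/sounds/${ITEM}.wav" &
    # and display the matching picture full screen
    # (fbi writes straight to the framebuffer; an X-based viewer would also work)
    fbi -T 1 -noverbose -a "/home/pi/images/${ITEM}.png"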

Playing a sound was easy. Displaying an image less so, especially if I wanted the image loaded fast. I really wanted to avoid having to execute an image viewer every time an input fired, because that would be just way too slow. I thought I'd found a suitable application in Geeqie, because it supported being managed out of band, but its problem was that it also responded to the inputs from the MaKey MaKey, so it became impossible to predictably display the right image with the right input.

So the night before I was supposed to go to Kindergarten, I was up beating my head against it, and decided to scrap it and go back to the drawing board. I was looking around for a Kindergarten-friendly game that used just the arrow keys, and I remembered the trusty old Frozen Bubble.

This ended up being absolutely perfect. It had enough flags to control automatic startup, so I could kick it straight into a dumbed-down full screen 1 player game (--fullscreen --solo --no-time-limit)

The kids absolutely loved it. They were cycled through in groups of four and all took turns having a little play. I brought a couple of heads of broccoli, a zucchini and a potato with me. I started out using the two broccoli as left and right and the zucchini to fire, but as it turns out, not all the kids were as good with the "left" and "right" as Zoe, so I swapped one of the broccoli for a potato and that made things a bit less ambiguous.

The responses from the kids were varied. Quite a few clearly had their minds blown and wanted to know how the broccoli was controlling something on the screen. Not all of them got the hang of the game play, but a lot did. Some picked it up after having a play and then watching other kids play and then came back for a more successful second attempt. Some weren't even sure what a zucchini was.

Overall, it was a very successful activity, and I'm glad I switched to Frozen Bubble, because what I'd originally had wouldn't have held up to the way the kids were using it. There was a lot of long holding/touching of the vegetables, which would have fired hundreds of repeat events and just totally overwhelmed triggerhappy. Quite a few kids wanted to pick up and hold the vegetables instead of just touching them to send an event. The Pi struggled to run Frozen Bubble smoothly as it was.

The other lesson I learned pretty quickly was that an aluminium BBQ tray worked a lot better as the grounding point for the MaKey MaKey than having to tether an anti-static strap around each kid's ankle as they sat down in front of the screen. Once I switched to the tray, I could rotate kids through the activity much faster.

I just wish I was a bit more creative, or there were more Kindergarten-friendly arrow-key driven Linux applications out there, but I was happy with what I managed to hack together with a fairly minimal amount of effort.

Categories: Elsewhere

Dries Buytaert: Announcing the Drupal 8 Accelerate Fund

Planet Drupal - Mon, 08/12/2014 - 18:01
Topic: Drupal, Drupal Association

Today the Drupal Association announced a new program: the Drupal 8 Accelerate Fund. Drupal 8 Accelerate Fund is a $125,000 USD fund to help solve critical issues and accelerate the release of Drupal 8.

The Drupal Association is guaranteeing the funds and will try to raise more from individual members and organizations within the Drupal community. It is the Drupal 8 branch maintainers — Nathaniel Catchpole, Alex Pott, Angie Byron, and myself — who will decide on how the money is spent. The fund provides for both "top-down" (directed by the Drupal 8 branch maintainers) and "bottom-up" (requested by other community members) style grants. The money will be used on things that positively impact the Drupal 8 release date, such as hiring contributors to fix critical bugs, sponsoring code sprints to fix specific issues, and other community proposals.

Since the restructuring of the Drupal Association, I have encouraged the Drupal Association staff and Board of Directors to grow into our ambitious mission: to unite a global open source community to build and promote Drupal. I've also written and talked about the fact that scaling Open Source communities is really hard. The Drupal 8 Accelerate Fund is an experiment with crowdsourcing as a means to help scale our community; it is unique compared to other efforts because it is backed by the official non-profit organization that fosters and supports Drupal.

I feel that the establishment of this fund is an important step towards more sustainable core development. My hope is that if this round of funding is successful, it can grow over time to levels that make an even more meaningful impact on core, particularly if we complement this with other approaches and steps, such as organization credit on Drupal.org.

This is also an opportunity for Drupal companies to give back to Drupal 8 development. The Drupal Association board is challenging itself to raise $62,500 USD (half of the total amount) to support this program. If you are an organization who can help support this challenge, please let us know. If you're a community member with a great idea on how we might be able to spend this money to help accelerate Drupal 8, you can apply for a grant today.

Categories: Elsewhere

Code Enigma: Meaningful commit messages

Planet Drupal - Mon, 08/12/2014 - 13:51
At Code Enigma, most of our Jenkins builds post a git log into one of our IRC channels on completion. This helps the ops team to keep an eye on what's going on and to quickly spot any build failures. It also gives us a chance to see the commit messages that people are posting.
Categories: Elsewhere

Annertech: Twitter Cards - Up Close and Drupally

Planet Drupal - Mon, 08/12/2014 - 10:12
Twitter Cards - Up Close and Drupally

We've all heard of Twitter, how it can help you boost your business and engage with your supporters. But have you ever noticed that some tweets now come with a nice little description, or maybe an image, or even an embedded video?

That's the magic of Twitter cards, and you can do it with Drupal.

Categories: Elsewhere

Niels Thykier: Jessie has half the number of RC bugs compared to Wheezy

Planet Debian - Mon, 08/12/2014 - 08:08

In the last 24 hours, the number of RC bugs currently affecting Jessie was reduced to just under half of the same number for Wheezy.

 

There are still a lot of bugs to be fixed, but please keep up the good work. :)

 


Categories: Elsewhere

Clint Adams: I don't care about the sunshine, yeah

Planet Debian - Mon, 08/12/2014 - 03:32

Rhoda is guarded. She is secretly in love with her brother. When he gets a girlfriend, she finds a boyfriend. She tells no one what she's really thinking.

Rhoda likes to seize the day. In the midst of evening conversation, she will excuse herself to “use the bathroom” or “come right back”, then, within literally two minutes, she will go home with an acquaintance or stranger. Often these encounters or the aftermaths thereof do not go according to her liking, and she will make veiled remarks of hostility, should she see those people again.

Rhoda was unhappy so she changed “everything” in her life. Though multiple things remained constant, she concluded that she was, in fact, the problem.

Rhoda does not make enough money on which to live. She works part-time, and turns down all other job offers. Other people make up for her financial shortcomings so that she is not homeless and starving.

Rhoda is certain that it is more difficult to be female than male.

Categories: Elsewhere

Ana Beatriz Guerrero Lopez: Keysigning, it’s never too late

Planet Debian - Mon, 08/12/2014 - 00:30

I managed to sign all the keys from DebConf14 within a month after the conference ended, but I still had a pile of keys to sign going back to DebConf11, from several minidebconfs and other small events. Today it was frakking cold and I finally managed to sign something close to 100 keys. I didn't sign any key with less than 2048 bits, or any that was expired or revoked.

If you got your key signed by me (or get it in the next hours), sorry it took so long!

Categories: Elsewhere

Junichi Uekawa: Got nasne.

Planet Debian - Sun, 07/12/2014 - 23:10
Got nasne. Got it working with Android, PlayStation VITA, and a Sony TV. Each platform seems to require some form of payment in addition to buying the Nasne itself. Something about the ecosystem and lock-in that I don't like.

Categories: Elsewhere

Sven Hoexter: Failing with F5: Implementing HTTP session persistence

Planet Debian - Sun, 07/12/2014 - 22:15

In the old days people deployed the Apache HTTPD together with mod_jk and mod_balancer in front of Apache Tomcat instances to achieve scalability and resilience. Sooner or later that turns out not to provide the features you'd nowadays like to have to run a service 24/7. So it happened that I had to jump into the sometimes cold sea of managing services fronted by F5 BigIP devices, a somewhat common load balancing solution.

Soon after you start to introduce load balancing, because you grow beyond the "one fat webserver" state, you'll have the requirement to implement some way of session persistence. In the old days you might have added a unique jvmRoute value to your mod_balancer pool and to your Tomcat configuration to match the connection to the right pool member. If you're clever, you've set it on the Tomcat side as a variable that automatically builds up the jvmRoute based on the hostname, so you can add new instances without having to touch the Tomcat configuration. In case you were not that clever (like we were when we started) you might have added placeholders in the server.xml to set the jvmRoute value from your deployment script.

I think in a somewhat better world your first step usually is to introduce a shared session store, be it the PHP session handler backed by memcached, or Hazelcast for the Java aficionados, to avoid the persistence requirement on your balancer. But if you grow even bigger the requirement will soon reappear, because you might have to add some form of sharding and now have to send users to the same cluster serving a specific shard.

In the end all of that is not cool and your developers have to be aware of those issues. For all cases where fault tolerance within the session is not a priority we aimed for something that we can enable or disable on the balancer without application code changes.

The "let the box do its job" solution

The nicest solution from our point of view is letting the F5 inject a cookie that, on subsequent requests, points the device to the right pool node. If you like, you can define the cookie name and a timeout, force the injection on replies, and tweak some other knobs. Take a look at the help for ltm persistence cookie for the details.
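
As a rough illustration only (property names and syntax can differ between TMOS versions, so treat this as an assumption-laden sketch rather than a recipe), creating such a cookie persistence profile and attaching it to a virtual server from the shell might look roughly like this:

    # create a cookie-insert persistence profile (profile and cookie names are made up)
    tmsh create ltm persistence cookie app_cookie_persist method insert cookie-name APPSRV
    # attach it to the virtual server fronting the pool
    tmsh modify ltm virtual vs_app_https persist replace-all-with { app_cookie_persist }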

The "let's do it by hand but on the box" solution

Most Java frameworks provide a session cookie called JSESSIONID. Why not use that one and add a persistence table entry on the BigIP with the cookie as the lookup key? As always if there's no solution provided out of the box you can implement an iRule for it. Kind of an F5 mantra.

There are many examples out there; here is what we ended up with:

when HTTP_REQUEST {
    if { [HTTP::cookie "JSESSIONID"] ne "" } {
        persist uie [string tolower [HTTP::cookie "JSESSIONID"]] 1800
    }
}

when HTTP_RESPONSE {
    if { [HTTP::cookie "JSESSIONID"] ne "" } {
        persist add uie [string tolower [HTTP::cookie "JSESSIONID"]] 1800
    }
}

And here is where I failed

Now imagine you're running an application that so far handles only machine-to-machine traffic without session persistence. A few months later you're approached to enable session persistence, because someone introduced a management console for the application as a web user interface. Without further ado you jump on board and provide, showcasing the flexibility of the new balancer, session persistence based on cookies injected by the BigIP. You've done it before: you enable it for the test environment, everything looks fine, you enable it for the production setup. Everything is fine (at least for now ...), the WUI works fine, sessions stick to the node the user hit first.

A month or two later someone from the development team approaches you again, this time asking if the balancing setup has some issues, because over 90% of the live traffic hit one of the two machines in the pool until it went down due to a crash or a deployment. Then about 90% hit the other node, even after the crashed one was resurrected. But every time the developer tested the balancing with a "while true" loop sending several hundred requests via curl, the balancing was perfectly equal.

After being puzzled by the behaviour myself, we started to look at the live traffic with tcpdump, and suddenly it was all clear when I saw the BigIP cookie being passed around. Actually everything behaved as we had configured it. The application used a proper HTTP library, and that one implemented cookies: if you receive one, you're polite and pass it back to the sender.

The original sender of course also implements HTTP with cookie support, and now, due to the nature of the traffic, we have long-lived connections between two applications passing around the same cookie. On every single HTTP request sent through this connection. And every time, the balancer will look at the cookie and happily base the routing decision on the pool member noted in this cookie.

When we tested with curl we did not have the use of cookies in mind, because our false assumption was that the applications would not create cookies for machine-to-machine traffic. Nobody, including myself, actively remembered the decision to base the session persistence on cookies injected by the balancer.

We went on and switched to the iRule mentioned above. Luckily we already had that one implemented and tested for a different use case with the requirement to not inject additional cookies. The incident left us with a few questions:

  1. Can we maybe improve the change process for the balancer to handle a mixed traffic scenario?
  2. Do we need monitoring to monitor the balancing of requests?
  3. Is it a wise idea to mix machine2machine and user2machine traffic on the same port?
  4. Should we maybe, if we separate properly, place management applications on a different system?
  5. Is our documentation sufficient?

Combine 3. and 4. and ask yourself how resilient you are if someone tries to exhaust the memory of the balancer with a lot of connection attempts that have random JSESSIONID cookies. Or is it even possible to exhaust resources with faked BigIP cookies you inject? Maybe it's even a vector for something like http://events.ccc.de/congress/2011/Fahrplan/attachments/2007_28C3_Effective_DoS_on_web_application_platforms.pdf?
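
A quick way to get a feeling for that memory-exhaustion question on a test environment (hostname and loop size are made up; only run this against systems you own) is a shell loop that sends requests with random JSESSIONID values and then checks how the persistence table on the balancer grows:

    # send requests with random JSESSIONID cookies to a test virtual server
    for i in $(seq 1 1000); do
        curl -s -o /dev/null \
             -H "Cookie: JSESSIONID=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')" \
             https://test-vs.example.org/
    done
    # then inspect the persistence entries on the BigIP, e.g. via
    #   tmsh show ltm persistence persist-records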

Quite some questions left to think about and draft answers for.

Categories: Elsewhere

Miriam Ruiz: Falling Trees, by Robert Fulghum

Planet Debian - Sun, 07/12/2014 - 21:35

In the Solomon Islands in the south Pacific some villagers practice a unique form of logging. If a tree is too large to be felled with an ax, the natives cut it down by yelling at it. (Can’t lay my hands on the article, but I swear I read it.) Woodsmen with special powers creep up on a tree just at dawn and suddenly scream at it at the top of their lungs. They continue this for thirty days. The tree dies and falls over. The theory is that the hollering kills the spirit of the tree. According to the villagers, it always works.

Ah, those poor naïve innocents. Such quaintly charming habits of the jungle. Screaming at trees, indeed. How primitive. Too bad they don’t have the advantages of modern technology and the scientific mind.

Me? I yell at my wife. And yell at the telephone and the lawn mower. And yell at the TV and the newspaper and my children. I’ve been known to shake my fist and yell at the sky at times.

Man next door yells at his car a lot. And this summer I heard him yell at a stepladder for most of an afternoon. We modern, urban, educated folks yell at traffic and umpires and bills and banks and machines–especially machines. Machines and relatives get most of the yelling.

Don’t know what good it does. Machines and things just sit there. Even kicking doesn’t always help. As for people, well, the Solomon Islanders may have a point. Yelling at living things does tend to kill the spirit in them. Sticks and stones may break our bones, but words will break our hearts….

by Robert Fulghum (All I Really Need To Know I Learned In Kindergarten)

Categories: Elsewhere

Gregor Herrmann: GDAC 2014/7

Planet Debian - Sun, 07/12/2014 - 17:48

creating a free operating system in general & fixing bugs in particular is a collaborative effort. today's example is bug report #766773 (& friends) which involved communication in the BTS & on IRC between the bug submitter, the maintainer of a related package, the release team, & me as a bug triager. & 2.5 hours later, another block for the jessie release is gone. – thanks everybody for yet another pleasant collaboration experience!

this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

Categories: Elsewhere

Steve Kemp: I eventually installed Debian on a new desktop.

Planet Debian - Sun, 07/12/2014 - 09:12

Recently I built a new desktop system. The highlights of the hardware are a pair of 512GB SSDs, which were to be configured in software RAID for additional speed and reliability (I'm paranoid that they'd suddenly stop working one day). From power-on to the (GNOME) login prompt takes approximately 10 seconds.

I had to fight with the Debian installer to get the beast working though, as only the Jessie Beta 2 installer would recognize the SSDs, which are Crucial MX100 devices. Both the daily testing installer deployed by my local PXE setup and the wheezy installer failed to recognize the drives at all.

The biggest pain was installing grub on the devices. I think this was mostly due to UEFI things I didn't understand. I created spare partitions for it, and messed around with grub-efi, but ultimately disabled as much of the "fancy modern stuff" as I could in the BIOS, leaving me with AHCI for the SATA SSDs, and then things worked pretty well. After working through the installer about seven times I also simplified things by partitioning and installing on only a single drive, and only configured the RAID once I had a bootable and working system.

(If you've never done that, it's pretty fun. Install on one drive. Ignore the other. Then configure the second drive as part of a RAID array, but mark the other half as missing/failed/dead. Once you've done that you can create filesystems on the various /dev/mdX devices, rsync the data across, and once you boot the system with root=/dev/md2 you can add the first drive as the missing half. Do it patiently and carefully and it'll just work :)
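
For reference, a hedged sketch of that procedure with mdadm (device names and the single /dev/md2 array are illustrative; a real migration also needs the bootloader reinstalled on both drives):

    # 1. create a degraded RAID1 array using only the second, empty drive
    mdadm --create /dev/md2 --level=1 --raid-devices=2 missing /dev/sdb2
    mkfs.ext4 /dev/md2

    # 2. copy the running system across
    mount /dev/md2 /mnt
    rsync -aHAXx / /mnt/

    # 3. after rebooting with root=/dev/md2, add the first drive as the missing half
    mdadm --add /dev/md2 /dev/sda2
    cat /proc/mdstat   # watch the mirror rebuild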

There were some niggles though:

  • Jessie didn't give me the option of the gnome desktop I know/love. So I had to install gnome-session-fallback. I also had to mess around with ~/.config/autostart because the gnome-session-properties command (which should let you tweak the auto-starting applications) doesn't exist anymore.

  • Setting up custom keyboard-shortcuts doesn't seem to work.

  • I had to use gnome-tweak-tool to get icons, etc, on my desktop.

Because I assume the SSDs will just die at some point, and probably both on the same day, I installed and configured obnam to run backups. There is more to it (testing and the like), but this is the core of my backup script:

#!/bin/sh
# backup "/" - minus some exceptions.
obnam backup -r /media/backups/storage \
    --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/media /

# keep files for various periods
obnam forget --keep="30d,8w,8m" --repository /media/backups/storage
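
For completeness, restoring from such a repository should work along these lines (not part of the original script; the target directory is a placeholder, and --generation can select an older snapshot instead of the most recent one):

    # list the generations (snapshots) available in the repository
    obnam generations --repository /media/backups/storage
    # restore everything into a scratch directory
    obnam restore --repository /media/backups/storage --to /mnt/restore /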
Categories: Elsewhere

Gregor Herrmann: GDAC 2014/6

Planet Debian - Sat, 06/12/2014 - 23:07

positive feedback is motivating (probably not only) for me. a recent example showing that our work on creating a great operating system is appreciated was sent to the debian-project list yesterday. – thanks!

this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

Categories: Elsewhere
