Elsewhere

Kristian Polso: My road to DrupalCon Amsterdam

Planet Drupal - Fri, 26/09/2014 - 12:50
My first trip to a DrupalCon is just about to start! The yearly DrupalCon Europe is held in Amsterdam from the 29th of September to the 3rd of October 2014. There are a lot of sessions to see, a wide array of sprints, social events, and much more.
Categories: Elsewhere

KYbest: Using PHP_CodeSniffer in PhpStorm

Planet Drupal - Fri, 26/09/2014 - 12:46

PHP_CodeSniffer is a PHP5 script that validates PHP, JavaScript, and CSS source code against different coding standards. In other words, you can easily check your source code's compliance with a script instead of knowing every detail of the coding standards by heart. You can use PHP_CodeSniffer in different ways; for example, you can simply run it from the terminal, but thanks to PhpStorm's built-in support it becomes a much more effective tool.
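For the terminal route, a typical invocation looks like the sketch below. It assumes phpcs and the Drupal coding standard (from the drupal/coder package) are installed, and `my_module/` is a placeholder path:

```shell
# Run PHP_CodeSniffer with the Drupal standard against a module directory.
# Guarded so the snippet degrades gracefully when phpcs is not installed
# or the placeholder directory does not exist.
if command -v phpcs >/dev/null 2>&1 && [ -d my_module ]; then
    phpcs --standard=Drupal --extensions=php,module,inc,install my_module/
else
    echo "phpcs is not installed (or my_module/ does not exist)"
fi
```

PhpStorm essentially runs the same tool for you as you type, so fixing the command-line setup first makes the IDE integration straightforward.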

Categories: Elsewhere

Holger Levsen: 20140925-reproducible-builds

Planet Debian - Fri, 26/09/2014 - 12:34
Reproducible builds? I never did any - manually

I've never done a reproducible build attempt of any package, manually, ever. But what I have done now is set up reproducible builds on jenkins.debian.net, which will build hundreds or thousands of packages, hopefully reproducibly, regularly in the future. Thanks to Lunar's and many other people's work, this was actually rather easy. If you want to do this manually, it should take you just a few minutes to set up a suitable build environment.
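For the curious, the manual version boils down to building the same source package twice in a clean environment and comparing the results. A rough sketch, assuming pbuilder is already set up (the wiki documents the details) and with `mypkg_1.0.dsc` as a placeholder:

```shell
# Manual reproducibility check: build twice in a clean chroot, then
# compare checksums of the resulting .debs. Identical sums mean the
# package built reproducibly. Guarded so the sketch is a no-op when
# pbuilder or the placeholder .dsc is absent.
if command -v pbuilder >/dev/null 2>&1 && [ -f mypkg_1.0.dsc ]; then
    sudo pbuilder build --buildresult build1 mypkg_1.0.dsc
    sudo pbuilder build --buildresult build2 mypkg_1.0.dsc
    sha256sum build1/*.deb build2/*.deb
else
    echo "pbuilder not set up; see the ReproducibleBuilds page on the Debian wiki"
fi
```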

So three days ago, when I wasn't exactly bored, I decided that it was a good moment to implement some reproducible build jobs on jenkins.d.n, so I gave it a try, and two hours later the basic implementation was working. Then it was an evening and a morning of fine-tuning until I was mostly satisfied. Since then there has been some polishing, but the basic setup is done and has been working since.

What's the result? One job, reproducible_setup, just creates a suitable environment for pbuilding reproducible packages, as documented so well on the Debian wiki. And as that job takes only 3.5 minutes (to debootstrap from scratch), it's run daily.

And then there are currently 16 other jobs, which test reproducible builds in different areas: d-i, core, some six major desktops and some selected desktop applications, some security- and privacy-related packages, some build chains we have in Debian, LibreOffice and X.org. Most of these jobs run for several hours, but luckily not days. And they discover packages which still fail to build reproducibly, which has already caused some bugs to be filed, e.g. #762732 "libdebian-installer: please do not write timestamps in Doxygen generated documentation".

So this is the output from testing the reproducibility of all debian-installer packages: 72 packages were successfully built reproducibly, while 6 packages failed to do so. I was quite impressed by these numbers, as AFAIK no one had tried to build d-i reproducibly before.

72 packages successfully built reproducibly: userdevfs user-setup usb-discover udpkg tzsetup rootskel rootskel-gtk rescue preseed pkgsel partman-xfs partman-target partman-partitioning partman-nbd partman-multipath partman-md partman-lvm partman-jfs partman-iscsi partman-ext3 partman-efi partman-crypto partman-btrfs partman-basicmethods partman-basicfilesystems partman-base partman-auto partman-auto-raid partman-auto-lvm partman-auto-crypto partconf os-prober oldsys-preseed nobootloader network-console netcfg net-retriever mountmedia mklibs media-retriever mdcfg main-menu lvmcfg lowmem localechooser live-installer lilo-installer kickseed kernel-wedge kbd-chooser iso-scan installation-report installation-locale hw-detect grub-installer finish-install efi-reader dh-di debian-installer-utils debian-installer-netboot-images debian-installer-launcher clock-setup choose-mirror cdrom-retriever cdrom-detect cdrom-checker cdebconf-terminal cdebconf-entropy bterm-unifont base-installer apt-setup anna. 6 packages failed to build reproducibly: win32-loader libdebian-installer debootstrap console-setup cdebconf busybox

What's also impressive: all packages for the newly introduced Cinnamon Desktop build reproducibly from the start!

The jenkins setup is configured via just three small files:

That's it, and that's enough to keep several cores busy for days. But as each job only takes a few hours, each is scheduled twice a month, and more jobs and packages shall be added in the future (with some heuristics to schedule known-good packages less often...).

I guess it's an appropriate opportunity to say "many thanks to Profitbricks", who have been donating the powerful virtual machine that jenkins.debian.net has been running on since October 2012. I also want to say "many many thanks to Helmut" (Grohne), who has recently joined me in maintaining this jenkins setup. And then I'd like to thank "the KGB trio" (Gregor, Tincho and Dam!) for providing those KGB bots on IRC, which are very helpful for providing notifications on IRC channels. And last but not least, thanks to everybody who contributed so that reproducible builds got this far! Keep up the jolly good work!

And if you happen to know of failing packages not included in job-cfg/reproducible.yaml, I'd like to hear about them, so they'll get regularly tested and stay on the radar until bugs are filed, fixed, and migrated to stable. So one day all binary packages in Debian stable will be built reproducibly. An important step on this road is probably to have this defined as a release goal for jessie+1. And then for jessie+1, hopefully the first 10k packages will build reproducibly? Or a whopping 23k maybe? And maybe release jessie+2 with 100%?!? We will see! Even jessie already has quite a few packages (someone needs to count them...) which build reproducibly with just modified dpkg(-dev) and debhelper packages alone...

So let's fix all the bugs! That said, an easier start for most of you is probably the list of useful things you (yes, you!) can do!

Oh, and last but surely not least in my book: many thanks too to the nice people hosting me so friendly in the last days! Keep on rockin'!

Categories: Elsewhere

Petter Reinholdtsen: How to test Debian Edu Jessie despite some fatal problems with the installer

Planet Debian - Fri, 26/09/2014 - 12:20

The Debian Edu / Skolelinux project provides a Linux solution for schools, including a powerful desktop with education software, a central server providing web pages, a user database, user home directories, central login, and PXE booting of both diskless clients and the installer for installing Debian Edu on machines with a disk (and a few other services perhaps too small to mention here). We in the Debian Edu team are currently working on the Jessie-based version, trying to get everything in shape before the freeze, to avoid having to maintain our own package repository in the future. The current status can be seen on the Debian wiki, and there is still heaps of work left. Some fatal problems break the installer and block testing, but it is possible to work around them and get a working installation anyway. Here is a recipe on how to get the installation limping along.

First, download the test ISO via ftp, http or rsync (use ftp.skolelinux.org::cd-edu-testing-nolocal-netinst/debian-edu-amd64-i386-NETINST-1.iso). The ISO build broke on Tuesday, so we do not get a new ISO every 12 hours or so, but thankfully the ISO we already have can be installed with some tweaking.

When you get to the Debian Edu profile question, go to tty2 (use Alt-Ctrl-F2), run

nano /usr/bin/edu-eatmydata-install

and add 'exit 0' as the second line, disabling the eatmydata optimization. Return to the installation, select the profile you want and continue. Without this change, exim4-config will fail to install due to a known bug in eatmydata.
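If you prefer a one-liner over editing in nano, the same change can be made with GNU sed. The snippet below demonstrates the effect on a throwaway copy; on the installer tty you would run it against /usr/bin/edu-eatmydata-install instead:

```shell
# Insert 'exit 0' as the second line of a script, turning it into a no-op.
# Demonstrated on a temporary file; on the installer the command would be:
#   sed -i '1a exit 0' /usr/bin/edu-eatmydata-install
f=$(mktemp)
printf '#!/bin/sh\necho doing real work\n' > "$f"
sed -i '1a exit 0' "$f"
cat "$f"
```

The shebang stays on line one and everything after the new `exit 0` is never reached.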

When you get the grub question at the end, answer /dev/sda (or, if this does not work, figure out what your correct value would be; all my test machines need /dev/sda, so I have no advice if that does not fit your setup).

If you installed a profile including a graphical desktop, log in as root after the initial boot from hard drive, and install the education-desktop-XXX metapackage. XXX can be kde, gnome, lxde, xfce or mate. If you want several desktop options, install more than one metapackage. Once this is done, reboot and you should have a working graphical login screen. This workaround should no longer be needed once version 1.801 of the education-tasks package enters testing in two days.

I believe the ISO build will start working again in two days, when the new tasksel package enters testing and Steve McIntyre gets a chance to update the debian-cd git repository. The eatmydata, grub and desktop issues are already fixed in unstable and testing, and should show up on the ISO as soon as the ISO build starts working again. Well, the eatmydata optimization is really just disabled; the proper fix requires an upload by the eatmydata maintainer applying the patch provided in bug #702711. The rest have proper fixes in unstable.

I hope this gets you going with the installation testing, as we are quickly running out of time trying to get our Jessie-based installation ready before the distribution freeze in a month.

Categories: Elsewhere

DrupalCon Amsterdam: DrupalCon Amsterdam is almost here

Planet Drupal - Fri, 26/09/2014 - 12:09

It’s the Friday before DrupalCon Amsterdam, and we couldn’t be more excited for what’s in store. As we prepare to dive headlong into a week filled with fun, friends, and Drupal, there are a few things that all DrupalCon attendees (and would-be attendees) need to be aware of.

Late pricing ends today

At 23:59 tonight, ticket pricing will move to onsite prices. You’ll still be able to register for tickets online, but in order to get the late pricing you must register to attend DrupalCon Amsterdam before midnight tonight Amsterdam time.

DrupalCon Amsterdam is our biggest European DrupalCon yet

We’re thrilled to announce that over 2,000 people have registered to attend DrupalCon Amsterdam, which makes it our biggest European DrupalCon to date. With so many people signed up to attend, there will be tons of opportunities to network, learn, and make new friends.

The fun starts before Tuesday

Excited to get the DrupalCon party started? Many of our amazing community members are starting the celebration early: the Tour de Drupal is cruising across Europe, and a student training with over 200 attendees is happening right now, thanks to the local Dutch community.

We wish you safe travels on your way to Amsterdam. Whether you’re taking a plane, a train, a bike, a boat, or some other method of transportation, we hope that you have a fun and safe trip, and we can’t wait to celebrate the best community in Open Source with you.

See you in Amsterdam!

Categories: Elsewhere

Acquia: Ultimate Guide to Drupal 8: Episode 8 - Answers to Your Burning Questions

Planet Drupal - Fri, 26/09/2014 - 11:08

Welcome to the 8th and FINAL installment of an 8-part blog series we're calling "The Ultimate Guide to Drupal 8." Whether you're a site builder, module or theme developer, or simply an end-user of a Drupal website, Drupal 8 has tons in store for you! This blog series will attempt to enumerate the major changes in Drupal 8.

Categories: Elsewhere

Dirk Eddelbuettel: R and Docker

Planet Debian - Fri, 26/09/2014 - 04:57

Earlier this evening I gave a short talk about R and Docker at the September Meetup of the Docker Chicago group.

Thanks to Karl Grzeszczak for setting the meeting, and for providing a pretty thorough intro talk regarding CoreOS and Docker.

My slides are now up on my presentations page.

Categories: Elsewhere

Mediacurrent: Exploring the Picture Element Module (Part 1)

Planet Drupal - Fri, 26/09/2014 - 04:01

Responsive Web Design, or RWD, has come a long way since it was first introduced in 2010, and you would think that by now, given the popularity of the subject, all things have been sorted out and all questions have been answered. Not quite. RWD is a moving target that continues to evolve, but for the most part the techniques used to accomplish the goal of an adaptable website are settled, with one exception: images.

Categories: Elsewhere

CMS Quick Start: Drupal 7 Login Methods and Module Roundup: Part 1

Planet Drupal - Thu, 25/09/2014 - 23:45
If your site relies on user engagement, chances are you are using Drupal's powerful built-in user modules. Sometimes, however, it can be difficult to understand what tweaks you can make to the default login process to make it a better experience all around. Should you present the login form on every page? Where should you put it? What methods can you use to display the login form in unobtrusive ways? What action does your site take after someone logs in? We're going to be presenting an array of options to hopefully point you in the right direction.

read more

Categories: Elsewhere

Steve Kemp: Today I mostly removed python

Planet Debian - Thu, 25/09/2014 - 21:11

Much has already been written about the recent bash security problem, allocated the CVE identifier CVE-2014-6271, so I'm not even going to touch it.

It did remind me to double-check my systems to make sure that I didn't have any packages installed that I didn't need though, because obviously having fewer packages installed and fewer services running reduces the potential attack surface.

I had noticed in the past that I had Python installed and just thought "Oh, yeah, I must have Python utilities running". It turns out, though, that on 16 out of 19 servers I control, I had Python installed solely for the lsb_release script!

So I hacked up a horrible replacement for lsb_release in pure shell, and then became cruel:

~ # dpkg --purge python python-minimal python2.7 python2.7-minimal lsb-release

That horrible replacement is horrible because it defers detection of all the names/numbers to /etc/os-release, which wasn't present in earlier versions of Debian. Happily all my Debian GNU/Linux hosts run Wheezy or later, so it all works out.
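Steve's actual script isn't reproduced in the post, but a minimal sketch of the idea, reading everything from /etc/os-release, could look like this (the field names follow the os-release(5) format; the fallback strings are my own):

```shell
#!/bin/sh
# Minimal pure-shell stand-in for lsb_release, reading /etc/os-release.
# Only works on systems that ship that file (Debian wheezy and later).
if [ -r /etc/os-release ]; then
    . /etc/os-release
fi
echo "Distributor ID: ${NAME:-unknown}"
echo "Description:    ${PRETTY_NAME:-unknown}"
echo "Release:        ${VERSION_ID:-unstable}"
```

No interpreter beyond /bin/sh is needed, which is the whole point of the exercise.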

So that left three hosts that had a legitimate use for Python:

  • My mail-host runs offlineimap
    • So I purged it.
    • I replaced it with isync.
  • My host-machine runs KVM guests, via qemu-kvm.
    • qemu-kvm depends on Python solely for the script /usr/bin/kvm_stat.
    • I'm not pleased about that but will tolerate it for now.
  • The final host was my ex-mercurial host.
    • Since I've switched to git I just removed the package.

So now 1/19 hosts has Python installed. I'm not averse to the language, but given that I don't personally develop in it very often (read "once or twice in the past year") and that by accident I had no Python scripts installed, I see no reason to keep it on the off-chance.

My biggest surprise of the day was that even now that we can use dash as our default shell, we still can't purge bash, since it is marked as Essential. Perhaps in the future.

Categories: Elsewhere

Aigars Mahinovs: Distributing third party applications via Docker?

Planet Debian - Thu, 25/09/2014 - 20:54

Recently the discussion around how to distribute third-party applications for "Linux" has become the topic of the hour, and for a good reason - Linux is becoming mainstream outside of the free software world. While having each distribution ship a perfectly packaged, version-controlled and natively compiled version of each application, installable from a per-distribution repository in a simple and fully secured manner, is a great solution for popular free software applications, this model is slightly less ideal for less popular apps and for non-free software applications. In these scenarios the developers of the software would want to do the packaging into some form, distribute that to end users (either directly or through some other channels, such as app stores) and have just one version that would work on any Linux distribution and keep working for a long while.

For me the topic really hit at Debconf 14 where Linus voiced his frustrations with app distribution problems and also some of that was touched by Valve. Looking back we can see passionate discussions and interesting ideas on the subject from systemd developers (another) and Gnome developers (part2 and part3).

After reading/watching all that I came away with the impression that I love many of the ideas expressed, but I am not as thrilled about the proposed solutions. The systemd managed zoo of btrfs volumes is something that I actually had a nightmare about.

There are far simpler solutions with existing code that you can start working on right now. I would prefer basing Linux applications on Docker. Docker is a convenience layer on top of Linux cgroups and namespaces. Docker stores its images in a datastore that can be based on AUFS or btrfs or devicemapper or even plain files. It already has a semantic for defining images, creating them, running them, explicitly linking resources and controlling processes.

Let's play out a simple scenario of how third-party applications could work on Linux.

Third party application developer writes a new game for Linux. As his target he chooses one of the "application runtime" Docker images on Docker Hub. Let's say he chooses the latest Debian stable release. In that case he writes a simple Dockerfile that installs his build-dependencies and compiles his game in "debian-app-dev:wheezy" container. The output of that is a new folder containing all the compiled game resources and another Dockerfile - this one describes the runtime dependencies of the game. Now when a docker image is built from this compiled folder, it is based on "debian-app:wheezy" container that no longer has any development tools and is optimized for speed and size. After this build is complete the developer exports the Docker image into a file. This file can contain either the full system needed to run the new game or (after #8214 is implemented) just the filesystem layers with the actual game files and enough meta-data to reconstruct the full environment from public Docker repos. The developer can then distribute this file to the end user in the way that is comfortable for them.
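A minimal sketch of the build-stage Dockerfile in this scenario; note that "debian-app-dev:wheezy" is the author's proposed naming convention rather than a real image on Docker Hub, and the package and path names are made up for illustration:

```dockerfile
# Hypothetical build-stage Dockerfile for the game scenario above.
# The base image and all names below are illustrative, not real artifacts.
FROM debian-app-dev:wheezy
RUN apt-get update && apt-get install -y build-essential libsdl1.2-dev
COPY . /src
WORKDIR /src
# Compile and collect the game into /out; the developer then uses /out as
# the context for a second, runtime-only Dockerfile (FROM debian-app:wheezy)
# that contains no development tools.
RUN make && make install DESTDIR=/out
```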

The end user would download the game file (either through an app store app, an app store website, or in any other way) and import it into the local Docker instance. For user convenience we would need to come up with a file extension and create some GUIs to launch on double-click, similar to GDebi. Here the user would be able to review what permissions the app needs to run (like GL access, PulseAudio, webcam, storage for save files, ...). Enough metainfo and cooperation would have to exist to allow the desktop menu to detect installed "apps" in Docker and show shortcuts to launch them. When the user does so, a new Docker container is launched running the command provided by the developer inside the container. Other metadata would determine other docker run options, such as whether to link over a socket for talking to PulseAudio or whether to mount a folder into the container where the game would be able to keep its save files. Or even whether the application would be able to access X (or Wayland) at all.

Behind the scenes the application is running from the contained and stable libraries, but talking to a limited and restricted set of system level services. Those would need to be kept backwards compatible once we start this process.

On the sandboxing side, not only is our third-party application running in a very limited environment, but we can also enhance our system services to recognize requests from such applications via cgroups. This can, for example, allow a window manager to mark all windows spawned by an application even if they are from a bunch of different processes. Also the window manager can now track all processes of a logical application from any of its windows.

For updates the developer can simply create a new image and distribute the same size file as before, or, if the purchase is going via some kind of app-store application, the layers that actually changed can be rsynced over individually thus creating a much faster update experience. Images with the same base can share data, this would encourage creation of higher level base images, such as "debian-app-gamegl:wheezy" that all GL game developers could use thus getting a smaller installation package.

After a while the question of updating abandonware will come up. Say there is this cool game built on top of "debian-app-gamegl:wheezy", but now there is a security bug or some other issue that requires the base image to be updated, though that would not require a recompile or a change to the game itself. If this Docker proposal is realized, then either the end user or a redistributor can easily re-base the old Docker image of the game on a new base. Using this mechanism it would also be possible to handle incompatible changes to system services - ten years down the line AwesomeAudio replaces PulseAudio, so we create a new "debian-app-gamegl:wheezy.14" version that contains a replacement libpulse that actually talks to the AwesomeAudio system service instead.

There is no need to re-invent everything or push everything and now package management too into systemd or push non-distribution application management into distribution tools. Separating things into logical blocks does not hurt their interoperability, but it allows to recombine them in a different way for a different purpose or to replace some part to create a system with a radically different functionality.

Or am I crazy and we should just go and sacrifice Docker, apt, dpkg, FHS and non-btrfs filesystems on the altar of systemd?

P.S. You might get the impression that I dislike systemd. I love it! As an init system. And I love the ideas and talent of the systemd developers. But I think that systemd should have nothing to do with application distribution or processes started by users. I sometimes get an uncomfortable feeling that systemd is morphing towards replacing the whole of System V, jumping all the way to System D and rewriting, obsoleting or absorbing everything between the kernel and Gnome. In my opinion it would be far healthier for the community if all of these projects were developed and usable separately from systemd, so that other solutions could compete on a level playing field. Or, maybe, we could just confess that what systemd is doing is creating a new Linux meta-distribution.

Categories: Elsewhere

Jan Wagner: Redis HA with Redis Sentinel and VIP

Planet Debian - Thu, 25/09/2014 - 19:56

For a current project we decided to use Redis for several reasons. As availability is a critical part, we discovered that Redis Sentinel can monitor Redis and handle an automatic master failover to an available slave.

Setting up the Redis replication was straightforward, and so was setting up Sentinel. Please keep in mind that if you configure Redis to require an authentication password, you also need to provide it for the replication process (masterauth) and for the Sentinel connection (auth-pass).
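In concrete terms, the relevant directives look like this (the password and the master name are placeholders):

```
# redis.conf on every node: require a password, and use the same
# password when replicating from the master
requirepass mysecret
masterauth  mysecret

# sentinel.conf: give Sentinel the password for the monitored master
sentinel auth-pass mymaster mysecret
```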

The more interesting part is how to migrate the clients over to the new master in case of a failover. While Redis Sentinel could also be used as a configuration provider, we decided not to use this feature, as the application would need to request the actual master node from Redis Sentinel quite often, which might have a performance impact.
The first idea was to use some kind of VRRP, implemented in keepalived or something similar. The problem with such a solution is that you need to notify the VRRP process when a Redis failover is in progress.
Well, Redis Sentinel has a configuration option called 'sentinel client-reconfig-script':

# When the master changed because of a failover a script can be called in
# order to perform application-specific tasks to notify the clients that the
# configuration has changed and the master is at a different address.
#
# The following arguments are passed to the script:
#
# <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
#
# <state> is currently always "failover"
# <role> is either "leader" or "observer"
#
# The arguments from-ip, from-port, to-ip, to-port are used to communicate
# the old address of the master and the new address of the elected slave
# (now a master).
#
# This script should be resistant to multiple invocations.

This looks pretty good, and as a <role> is provided, I thought it would be a good idea to just call a script that evaluates this value and, based on it, adds the VIP to the local network interface when we get 'leader' and removes it when we get 'observer'. It turned out that this was not working, as <role> did not reliably return 'leader' when the local Redis instance became master and 'observer' when it became slave. This was pretty annoying and I was close to giving up.
Fortunately I stumbled upon a (maybe) Chinese post about Redis Sentinel where the same thing was done. On second look I recognized that the decision there was made on ${6}, which is <to-ip>, nothing more than the new IP of the Redis master instance. So I rewrote my tiny shell script, and after some other pitfalls this strategy worked out well.
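A sketch of such a client-reconfig-script deciding on ${6}, as described above. The VIP, interface, and addresses are made-up examples, and the actual ip(8) calls are left commented out so the logic can be read on its own:

```shell
#!/bin/sh
# Called by Sentinel as:
#   <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
VIP="192.0.2.100/24"   # example virtual IP
DEV="eth0"             # example interface
MY_IP="192.0.2.11"     # this node's address (normally discovered at runtime)

reconfig() {
    to_ip="$6"         # address of the newly promoted master
    if [ "$to_ip" = "$MY_IP" ]; then
        echo "new master is local: adding VIP"
        # ip addr add "$VIP" dev "$DEV"
    else
        echo "new master is $to_ip: removing VIP"
        # ip addr del "$VIP" dev "$DEV"
    fi
}

reconfig "$@"
```

As the config comment above warns, the script must tolerate being invoked multiple times; adding or deleting an address that is already in the desired state is harmless, which makes this approach naturally idempotent.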

Some notes about convergence: currently it takes about 6-7 seconds for the VIP to migrate over to the new node after Redis Sentinel notices a broken master. This is not the best performance, but as we do not expect this to happen often, we need to design the application using our Redis setup to handle this (hopefully) rare scenario.

Categories: Elsewhere

Drupal Watchdog: Drupal 7 Form Building

Planet Drupal - Thu, 25/09/2014 - 19:45
Article

Static websites, comprising web pages that do not respond to any user input, may be adequate for listing information, but little else. Dynamic pages are what make the Web more than just interlinked digital billboards. The primary mechanism that enables this is the humble web form – whether a modest single button or a multi-page form with various controls that allow the user to input text, make choices, upload files, etc.

Anyone developing a new Drupal-based website will usually need to create forms for gathering data from users. There are at least two possible approaches to solving this problem: One approach mostly relies on Drupal core, and the other uses a contributed module dedicated to forms.

The Node Knows

If you understand how to create content types and attach fields to them, then that could be a straightforward way to make a form – not for creating nodes to be used later for other purposes, but solely for gathering user input. To add different types of form fields to a content type, you will need to install and enable the corresponding modules, all of which are available from Drupal.org's modules section.

Some of these modules are part of Drupal's core: File (for uploading files), List (for selection lists), Number (for integers, decimals, and floats), Options (for selection controls, checkboxes, and radio buttons), Taxonomy (for tagging content with terms), and Text (for single- and multi-line entry fields).

Other field-related modules have been contributed by the Drupal community, including:

Categories: Elsewhere

Gunnar Wolf: #bananapi → On how compressed files should be used

Planet Debian - Thu, 25/09/2014 - 18:37

I am among the lucky people who got back home from DebConf with a brand new computer: a Banana Pi. Despite the name similarity, it is not affiliated with the very well known Raspberry Pi, although it is a comparable (if much better) machine: a dual-core ARM A7 system with 1GB RAM, several more on-board connectors, and the same form factor.

I have not yet been able to get it to boot, even from the images distributed on their site (although I cannot complain, as I have not devoted more than an hour or so to the process!), but I do have a gripe about how the images are distributed.

I downloaded some images to play with: Bananian, Raspbian, a Scratch distribution, and Lubuntu. I know I have a long way to learn in order to contribute to Debian's ARM port, but if I can learn by doing... ☻

So, what is my gripe? That the images are downloaded as archive files:

  0 gwolf@mosca『9』~/Download/banana$ ls -hl bananian-latest.zip \
  > Lubuntu_For_BananaPi_v3.1.1.tgz Raspbian_For_BananaPi_v3.1.tgz \
  > Scratch_For_BananaPi_v1.0.tgz
  -rw-r--r-- 1 gwolf gwolf 222M Sep 25 09:52 bananian-latest.zip
  -rw-r--r-- 1 gwolf gwolf 823M Sep 25 10:02 Lubuntu_For_BananaPi_v3.1.1.tgz
  -rw-r--r-- 1 gwolf gwolf 1.3G Sep 25 10:01 Raspbian_For_BananaPi_v3.1.tgz
  -rw-r--r-- 1 gwolf gwolf 1.2G Sep 25 10:05 Scratch_For_BananaPi_v1.0.tgz

Now... that is quite an odd way to distribute image files! Specially when looking at their contents:

  0 gwolf@mosca『14』~/Download/banana$ unzip -l bananian-latest.zip
  Archive:  bananian-latest.zip
      Length      Date    Time    Name
  ----------  ---------- -----    ----
  2032664576  2014-09-17 15:29    bananian-1409.img
  ----------                      -------
  2032664576                      1 file
  0 gwolf@mosca『15』~/Download/banana$ for i in Lubuntu_For_BananaPi_v3.1.1.tgz \
  > Raspbian_For_BananaPi_v3.1.tgz Scratch_For_BananaPi_v1.0.tgz
  > do tar tzvf $i; done
  -rw-rw-r-- bananapi/bananapi 3670016000 2014-08-06 03:45 Lubuntu_1404_For_BananaPi_v3_1_1.img
  -rwxrwxr-x bananapi/bananapi 3670016000 2014-08-08 04:30 Raspbian_For_BananaPi_v3_1.img
  -rw------- bananapi/bananapi 3980394496 2014-05-27 01:54 Scratch_For_BananaPi_v1_0.img

And what is bad about them? That they force me to either have heaps of disk space available (2GB or 4GB for each image) or to spend valuable time extracting before recording the image each time.

Why not just compress the image file, without archiving it? That is:

  0 gwolf@mosca『7』~/Download/banana$ tar xzf Lubuntu_For_BananaPi_v3.1.1.tgz
  0 gwolf@mosca『8』~/Download/banana$ xz Lubuntu_1404_For_BananaPi_v3_1_1.img
  0 gwolf@mosca『9』~/Download/banana$ ls -hl Lubun*
  -rw-r--r-- 1 gwolf gwolf 606M Aug 6 03:45 Lubuntu_1404_For_BananaPi_v3_1_1.img.xz
  -rw-r--r-- 1 gwolf gwolf 823M Sep 25 10:02 Lubuntu_For_BananaPi_v3.1.1.tgz

Now, wouldn't we need to decompress said files as well? Yes, but thanks to the magic of shell redirections, we can just do it on the fly. That is, instead of having 3×4GB+1×2GB files sitting on my hard drive, I just need to have several files ranging between 145M and I guess ~1GB. Then, it's as easy as doing:

  0 gwolf@mosca『8』~/Download/banana$ dd if=<(xzcat bananian-1409.img.xz) of=/dev/sdd

And the result should be the same: A fresh new card with Bananian ready to fly. Right, right, people using these files need to have xz installed on their systems, but... As it stands now, I can suppose current prospective users of a Banana Pi won't fret about facing a standard Unix tool!

(Yes, I'll forward this rant to the Banana people, it's not just bashing on my blog :-P )

Categories: Elsewhere

Marco d'Itri: CVE-2014-6271 fix for Debian sarge and etch

Planet Debian - Thu, 25/09/2014 - 17:01

Very old Debian releases like sarge (3.1) and etch (4.0) are not supported anymore by the Debian Security Team and do not get security updates. Since some of our customers still have servers running these versions, I have built bash packages with the fix for CVE-2014-6271 (the "shellshock" bug):

http://ftp.linux.it/pub/People/md/bash/

This work has been sponsored by my employer Seeweb, a hosting, cloud infrastructure and colocation provider.

Categories: Elsewhere

Julian Andres Klode: hardlink 0.3.0 released; xattr support

Planet Debian - Thu, 25/09/2014 - 14:42

Today I not only submitted my bachelor thesis to the printing company, I also released a new version of hardlink, my file deduplication tool.

hardlink 0.3 now features xattr support, contributed by Tom Keel at Intel. If this does not work correctly, please blame him.

I also added support for a --minimum-size option.
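For the curious, the core idea behind a tool like hardlink can be sketched in a few lines of shell. This is a naive illustration only, not how hardlink is actually implemented: the real program also compares metadata (and now, optionally, xattrs) before linking.

```shell
# Naive content-based deduplication: hash every regular file, sort so that
# identical hashes are adjacent, then replace each duplicate with a hard
# link to the first file seen with that hash. Illustration only -- no
# metadata or xattr comparison, and no minimum-size filter.
dedup() {
    find "$1" -type f -exec md5sum {} + | sort |
    while read -r sum path; do
        if [ "$sum" = "$prev_sum" ]; then
            ln -f "$prev_path" "$path"   # duplicate: link it to the first copy
        else
            prev_sum=$sum
            prev_path=$path
        fi
    done
}
```

After running `dedup somedir`, identical files share an inode, which `ls -i` or `stat` will confirm.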

Most of the other code has been tested since the upload of RC1 to experimental in September 2012.

The next major version will split up the code into multiple files and clean it up a bit. It’s getting a bit long now in a single file.


Filed under: Uncategorized
Categories: Elsewhere

Petter Reinholdtsen: Suddenly I am the new upstream of the lsdvd command line tool

Planet Debian - Thu, 25/09/2014 - 11:20

I use the lsdvd tool to handle my fairly large DVD collection. It is a nice command line tool to get details about a DVD, like title, tracks, track length, etc., in XML, Perl or human readable format. But lsdvd has not seen any new development since 2006 and has a few irritating bugs affecting its use with some DVDs. Upstream seemed to be dead, and in January I sent a small probe asking for a version control repository for the project, without any reply. But I use it regularly and would like to get an updated version into Debian. So two weeks ago I tried harder to get in touch with the project admin, and after getting a reply from him explaining that he was no longer interested in the project, I asked if I could take over. And yesterday, I became project admin.

I've been in touch with a Gentoo developer and the Debian maintainer interested in joining forces to maintain the upstream project, and I hope we can get a new release out fairly quickly, collecting the patches spread around the internet into one place. I've added the relevant Debian patches to the freshly created git repository, and expect the Gentoo patches to make it in too. If you have a DVD collection and care about command line tools, check out the git source and join the project mailing list. :)

Categories: Elsewhere

MariqueCalcus: Prius is in Alpha 15

Planet Drupal - Thu, 25/09/2014 - 10:51

Today, we are very excited to announce the latest release of our Drupal 8 theme Prius. Alongside full support for Drupal 8 Alpha 15, we have included a number of new features. This release is particularly exciting as we are one step closer to an official launch of Drupal 8. Indeed, Alpha 15 is the first candidate for a Beta release, meaning that if no new beta-blocker bugs are found within the next few days, we could see the first Beta version of our favourite "CMS" very soon.

Read More...
Categories: Elsewhere

Mike Hommey: So, hum, bash…

Planet Debian - Thu, 25/09/2014 - 09:43

So, I guess you heard about the latest bash hole.

What baffles me is that the following still is allowed:

env echo='() { xterm;}' bash -c "echo this is a test"

Interesting replacements for “echo“, “xterm” and “echo this is a test” are left as an exercise to the reader.

Categories: Elsewhere

Commerce Guys: Commerce Guys is pleased to sponsor Symfony Live

Planet Drupal - Thu, 25/09/2014 - 08:00

The Symfony Live events of this Fall (London, Berlin, NYC, Madrid) are around the corner, and for the first year, Commerce Guys is going to attend these events as a sponsor. Some people are wondering why, and I’d like to explain why Commerce Guys is very excited to engage with the Symfony community and its open source software vendor, SensioLabs.

In fact, there are 3 main reasons for Commerce Guys’ interest in Symfony and working tightly with SensioLabs:

Drupal 8 and Drupal Commerce 2.0

It’s no secret that Drupal 8 will rely on Symfony components. This architecture decision is a good one, and it paved the way for similar thoughts on Drupal Commerce 2.0. It also ties the destinies of both open source communities, we think for the better. The work on Drupal Commerce for Drupal 8, known as Drupal Commerce 2.x, started in June 2014. During a community sprint that included members of SensioLabs and other partners like Smile, Publicis Modem, Osinet, i-KOS, Adyax, and Ekino, we validated the idea that some of the core eCommerce components of Drupal Commerce 2.x should rely on Symfony and other general PHP libraries directly. The goal is to offer an even more generic and flexible solution that spreads the impact of our code beyond the walls of the Drupal community.

This effort is well in progress already. Bojan Zivanovic, Drupal Commerce 2.x co-maintainer, provides a great example of this in a recent blog post about our new Internationalization library. He explains how much improvement this component will bring to the software for managing and formatting currencies internationally via a generic PHP library called commerceguys/intl. Expanding the reach of our work to the broader PHP community will help us get more feedback, more users, and more open source contributors, ultimately leading to better software. Ryan Szrama, Commerce Guys co-founder and Drupal Commerce CTO, will be presenting this approach at Symfony Live in New York City in October. We strongly believe this vision will bring us closer to our goal of building the most popular open source eCommerce software.

Platform.sh now refined for Symfony projects

In a context where Symfony will be central to mastering Drupal 8 projects, we’ve pursued the goal of enabling our development & production Platform as a Service (PaaS) for Symfony projects in general. We’re convinced that this will give Platform.sh an edge, and we wanted to be a driving force in providing tools that will fit both open source communities.

Since Spring 2014, Commerce Guys engineers have been collaborating with SensioLabs engineers to understand Symfony better. Few companies in the world have the expertise in enterprise PHP that SensioLabs has, and the Platform.sh Symfony experience is the outcome of lots of intense discussions with the SensioLabs team.

Our objective was to enable teams to develop and deploy Symfony projects faster and more productively on Platform.sh. That work is now done and we’re very happy to announce today that, with just a few clicks, Symfony developers can create a full Symfony development environment (starting from an existing Symfony distribution), in order to build and deploy highly scalable websites and custom applications. This will lead to a much improved development process, lots of time saved for developers and a reduced time to market from development to production. Sponsoring Symfony Live is a way for Commerce Guys to share the hard work we’ve done to build a unique, cloud-based development experience for Symfony developers. We’re excited to share our work and get feedback from the Symfony community about this product.

A shared focus on the developers

The time we’ve spent with SensioLabs’ management team highlighted our common passion and interest: help developers be more efficient and successful and, as much as it depends on us, to enjoy their jobs even more. SensioLabs and Commerce Guys were both founded to design and develop open source frameworks, gather large and global developer communities, and enable developers to create great web experiences. Both companies aim at making developers happier and more successful by providing them the right tools. It’s on these values and fundamental principles that this partnership was built. It’s all very solid and here to stay!

Categories: Elsewhere
