Feed aggregator

Jay L.ee: How To Customize The Fivestar Module

Planet Drupal - Sat, 30/01/2016 - 22:37

* Blue: The code from a Stack Exchange developer that saved my life and got the ball rolling for everything else.
* Green: My own comments for the purpose of this blog post.
* Orange: Additional edits made by me.
* Pink: Code that was deleted by me.
* Yellow: Code that was copied and pasted by me.

When using the Fivestar module (version 7.x-2.1), I can show the result in three ways:

Tags: Drupal 7, Drupal Planet
Categories: Elsewhere

Wuinfo: Spam Defense Network

Planet Drupal - Sat, 30/01/2016 - 22:22


Spam is a big headache for many website owners. Using the Drupal impression module, I saw the relentlessness of the spammer bots: every day, a single site got thousands of hits on URLs like "/?q=user/register" and "/?q=node/add". Someone commented on my LinkedIn update of my blog post "Is there more computer bots than us?" that she is "on the verge of giving up on Drupal after being unable to solve this problem". How do we address this issue and solve the problem? I know this is not an issue for just one CMS like Drupal, but that gives us all the more reason to do something: build something for Drupal that is also usable by other CMSes like WordPress and Joomla.

I have a bold idea for blocking spam efficiently without taking a toll on the performance of every website. Let's set up a website spam defence network: a network based on a global spam IP database, where each website is a node of the defence network and provides a spamming-IP query as a web service.

The idea is to have distributed but well-controlled spam IP servers. Each participating website acts as a node in the network, capturing spamming IPs and reporting them. Websites are connected, talk to each other, and form a defence line in front of spammers. The network will quarantine a spammer's IP for 45 minutes or more, depending on how active the spamming activity is; the IP will come off the list after the quarantine period ends.

Websites that join the network will respond faster by freeing up the resources consumed by spamming activity, and we will have a cleaner internet by eliminating fake users, spam comments and spam content.

Technically, we would use open-source solutions. We could build a distributed spam IP database much like a git repository and publish it as a Composer repository, so that all PHP-based CMS websites can easily join the network.
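As a very rough illustration of what the per-request check on a participating site could look like, here is a minimal shell sketch. The peer hostname, the /spam-ip endpoint and the JSON response format are purely hypothetical; no such service exists yet.

#!/bin/sh
# Hypothetical query against a peer node of the defence network.
# Assumed response format: {"listed": true, "quarantine_seconds": 2700}
IP="203.0.113.7"

if curl -s "https://peer.example.net/spam-ip?ip=${IP}" | grep -q '"listed": *true'; then
    # The address is on the shared list: quarantine it locally for 45 minutes,
    # for example by adding a temporary firewall rule.
    echo "${IP} is reported as a spammer - blocking for 45 minutes"
fi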

Categories: Elsewhere

Daniel Silverstone: Building an Oscilloscope

Planet Debian - Sat, 30/01/2016 - 19:24

I recently ordered some PCBs from Elecrow for the Vic's beer-measurement system I've been designing with Rob. While on the site, I noticed that they have a single-channel digital oscilloscope kit based on an STM32. This is a JYE Tech DSO138 which arrives as a PCB whose surface-mount stuff has been fitted, along with a whole bunch of pin-through components for you to solder up the scope yourself. There's a non-trivial number of kinds of components, so first you should prep by splitting them all up and double-checking them all.

Once you've done that, the instructions start you off fitting a whole bunch of resistors...

Then some diodes, RF chokes, and the 8MHz crystal for the STM32.

The single most difficult bit for me to solder was the USB socket: fine-pitch leads coupled with a high-thermal-density socket.

There is a veritable mountain of ceramic capacitors to fit...

And then buttons, inductors, trimming capacitors and much more...

The switches were the next hardest things to solder, after the USB socket...

Finally you have to solder a test loop and close some jumpers before you power-test the board.

The last bit of soldering is to solder pins to the LCD panel board...

Before you finally have a working oscilloscope.

I followed the included instructions to trim the scope using the test point and the trimming capacitors, before having a break to write this up for you all. I'd say that it was a fun day because I enjoyed getting a lot of soldering practice (before I have to solder up the beer'o'meter for the pub) and at the end of it I got a working oscilloscope. For 40 USD, I'd recommend this to anyone who fancies a go.

Categories: Elsewhere

Simon McVittie: GNOME Developer Experience hackfest: xdg-app + Debian

Planet Debian - Sat, 30/01/2016 - 19:07

Over the last few days I've been at the GNOME Developer Experience hackfest in Brussels, looking into xdg-app and how best to use it in Debian and Debian derivatives.

xdg-app is basically a way to run "non-core" software on Linux distributions, analogous to apps on Android and iOS. It doesn't replace distributions like Debian or packaging systems, but it adds a layer above them. It's mostly aimed towards third-party apps obtained from somewhere that isn't your distribution vendor, aiming to address a few long-standing problems in that space:

  • There's no single ABI that can be called "a standard Linux system" in the same way there would be for Windows or OS X or Android or whatever, apart from LSB which is rather limited. Testing that a third-party app "works on Linux", or even "works on stable Linux releases from 2015", involves a combinatorial explosion of different distributions, desktop environments and local configurations. Steam uses the Steam Runtime, a chroot environment closely resembling Ubuntu 12.04 LTS; other vendors tend to test on a vaguely recent Ubuntu LTS and leave it at that.

  • There's no widely-supported mechanism for installing third-party applications as an ordinary user. gog.com used to distribute Ubuntu- and Debian-compatible .deb files, but installing a .deb involves running arbitrary vendor-supplied scripts as root, which should worry anyone who wants any sort of privilege-separation. (They have now switched to executable self-extracting installers, which involve running arbitrary vendor-supplied scripts as an ordinary user... better, but not perfect.)

  • Relatedly, the third-party application itself runs with the user's full privileges: a malicious or security-buggy third-party application can do more or less anything, unless you either switch to a different uid to run third-party apps, or use a carefully-written, app-specific AppArmor profile or equivalent.

To address the first point, each application uses a specified "runtime", which is available as /usr inside its sandbox. This can be used to run application bundles with multiple, potentially incompatible sets of dependencies within the same desktop environment. A runtime can be updated within its branch - for instance, if an application uses the "GNOME 3.18" runtime (consisting of a basic Linux system, the GNOME 3.18 libraries, other related libraries like Mesa, and their recursive dependencies like libjpeg), it can expect to see minor-version updates from GNOME 3.18.x (including any security updates that might be necessary for the bundled libraries), but not a jump to GNOME 3.20.

To address the second issue, the plan is for application bundles to be available as a single file, containing metadata (such as the runtime to use), the app itself, and any dependencies that are not available in the runtime (which the app vendor is responsible for updating if necessary). However, the primary way to distribute and upgrade runtimes and applications is to package them as OSTree repositories, which provide a git-like content-addressed filesystem, with efficient updates using binary deltas. The resulting files are hard-linked into place.

To address the last point, application bundles run partially isolated from the wider system, using containerization techniques such as namespaces to prevent direct access to system resources. Resources from outside the sandbox can be accessed via "portal" services, which are responsible for access control; for example, the Documents portal (the only one, so far) displays an "Open" dialog outside the sandbox, then allows the application to access only the selected file.

xdg-app for Debian

One thing I've been doing at this hackfest is improving the existing Debian/Ubuntu packaging for xdg-app (and its dependencies ostree and libgsystem), aiming to get it into a state where I can upload it to Debian experimental. Because xdg-app aims to be a general freedesktop project, I'm currently intending to make it part of the "Utopia" packaging team alongside projects like D-Bus and polkit, but I'm open to suggestions if people want to co-maintain it elsewhere.

In the process of updating xdg-app, I sent various patches to Alex, mostly fixing build and test issues, which are in the new 0.4.8 release.

I'd appreciate co-maintainers and further testing for this stuff, particularly ostree: ostree is primarily a whole-OS deployment technology, which isn't a use-case that I've tested, and in particular ostree-grub2 probably doesn't work yet.

Source code:

Binaries (no trust path, so only use these if you have a test VM):

  • deb https://people.debian.org/~smcv/xdg-app xdg-app main

The "Hello, World" of xdg-apps

Another thing I set out to do here was to make a runtime and an app out of Debian packages. Most of the test applications in and around GNOME use the "freedesktop" or "GNOME" runtimes, which consist of a Yocto base system and lots of RPMs, are rebuilt from first principles on-demand, and are extensive and capable enough that they make it somewhat non-obvious what's in an app or a runtime.

So, here's a step-by-step route through xdg-app, first using typical GNOME instructions, but then using the simplest GUI app I could find - xvt, a small xterm clone. I'm using a Debian testing (stretch) x86_64 virtual machine for all this. xdg-app currently requires systemd-logind to put users and apps in cgroups, either with systemd as pid 1 (systemd-sysv) or systemd-shim and cgmanager; I used the default systemd-sysv. In principle it could work with plain cgmanager, but nobody has contributed that support yet.

Demonstrating an existing xdg-app

Debian's kernel is currently patched to allow unprivileged users to create user namespaces, but makes this runtime-configurable, because there have been various security issues in that feature, making it a security risk for a typical machine (and particularly a server). Hopefully unprivileged user namespaces will soon be secure enough that we can enable them by default, but for now, we have to do one of three things to let xdg-app use them:

  • enable unprivileged user namespaces via sysctl:

    sudo sysctl kernel.unprivileged_userns_clone=1
  • make xdg-app root-privileged (it will keep CAP_SYS_ADMIN and drop the rest):

    sudo dpkg-statoverride --update --add root root 04755 /usr/bin/xdg-app-helper
  • make xdg-app slightly less privileged:

    sudo setcap cap_sys_admin+ep /usr/bin/xdg-app-helper

First, we'll need a runtime. The standard xdg-app tutorial would tell you to download the "GNOME Platform" version 3.18. To do that, you'd add a remote, which is a bit like a git remote, and a bit like an apt repository:

$ wget http://sdk.gnome.org/keys/gnome-sdk.gpg
$ xdg-app remote-add --user --gpg-import=gnome-sdk.gpg gnome \
    http://sdk.gnome.org/repo/

(I'm ignoring considerations like trust paths and security here, for brevity; in real life, you'd want to obtain the signing key via https and/or have a trust path to it, just like you would for a secure-apt signing key.)

You can list what's available in a remote:

$ xdg-app remote-ls --user gnome
...
org.freedesktop.Platform
...
org.freedesktop.Platform.Locale.cy
...
org.freedesktop.Sdk
...
org.gnome.Platform
...

The Platform runtimes are what we want here: they are collections of runtime libraries with which you can run an application. The Sdk runtimes add development tools, header files, etc. to be able to compile apps that will be compatible with the Platform.

For now, all we want is the GNOME 3.18 platform:

$ xdg-app install --user gnome org.gnome.Platform 3.18

Next, we can install an app that uses it, from Alex Larsson's nightly builds of a subset of GNOME. The server they're on doesn't have a great deal of bandwidth, so be nice :-)

$ wget http://209.132.179.2/keys/nightly.gpg
$ xdg-app remote-add --user --gpg-import=nightly.gpg nightly \
    http://209.132.179.2/repo/
$ xdg-app install --user nightly org.mypaint.MypaintDevel

We now have one app, and the runtime it needs:

$ xdg-app list
org.mypaint.MypaintDevel
$ xdg-app run org.mypaint.MypaintDevel
[you see a GUI window]

Digression: what's in a runtime?

Behind the scenes, xdg-app runtimes and apps are both OSTree trees. This means the ostree tool, from the package of the same name, can be used to inspect them.

$ sudo apt install ostree
$ ostree refs --repo ~/.local/share/xdg-app/repo
gnome:runtime/org.gnome.Platform/x86_64/3.18
nightly:app/org.mypaint.MypaintDevel/x86_64/master

A "ref" has roughly the same meaning as in git: something like a branch or a tag. ostree can list the directory tree that it represents:

$ ostree ls --repo ~/.local/share/xdg-app/repo \
    runtime/org.gnome.Platform/x86_64/3.18
d00755 0 0 0 /
-00644 0 0 493 /metadata
d00755 0 0 0 /files
$ ostree ls --repo ~/.local/share/xdg-app/repo \
    runtime/org.gnome.Platform/x86_64/3.18 /files
d00755 0 0 0 /files
l00777 0 0 0 /files/local -> ../var/usrlocal
l00777 0 0 0 /files/sbin -> bin
d00755 0 0 0 /files/bin
d00755 0 0 0 /files/cache
d00755 0 0 0 /files/etc
d00755 0 0 0 /files/games
d00755 0 0 0 /files/include
d00755 0 0 0 /files/lib
d00755 0 0 0 /files/lib64
d00755 0 0 0 /files/libexec
d00755 0 0 0 /files/share
d00755 0 0 0 /files/src

You can see that /files in a runtime is basically a copy of /usr. This is not coincidental: the runtime's /files gets mounted at /usr inside the xdg-app container. There is also some metadata, which is in the ini-like syntax seen in .desktop files:

$ ostree cat --repo ~/.local/share/xdg-app/repo \
    runtime/org.gnome.Platform/x86_64/3.18 /metadata
[Runtime]
name=org.gnome.Platform/x86_64/3.16
runtime=org.gnome.Platform/x86_64/3.16
sdk=org.gnome.Sdk/x86_64/3.16

[Extension org.freedesktop.Platform.GL]
version=1.2
directory=lib/GL

[Extension org.freedesktop.Platform.Timezones]
version=1.2
directory=share/zoneinfo

[Extension org.gnome.Platform.Locale]
directory=share/runtime/locale
subdirectories=true

[Environment]
GI_TYPELIB_PATH=/app/lib/girepository-1.0
GST_PLUGIN_PATH=/app/lib/gstreamer-1.0
LD_LIBRARY_PATH=/app/lib:/usr/lib/GL

Looking at an app, the situation is fairly similar:

$ ostree ls --repo ~/.local/share/xdg-app/repo \
    app/org.mypaint.MypaintDevel/x86_64/master
d00755 0 0 0 /
-00644 0 0 258 /metadata
d00755 0 0 0 /export
d00755 0 0 0 /files

This time, /files maps to what will become /app for the application, which was compiled with --prefix=/app:

$ ostree ls --repo ~/.local/share/xdg-app/repo \
    app/org.mypaint.MypaintDevel/x86_64/master /files
d00755 0 0 0 /files
-00644 0 0 4599 /files/manifest.json
d00755 0 0 0 /files/bin
d00755 0 0 0 /files/lib
d00755 0 0 0 /files/share

There is also a /export directory, which is made visible to the host system so that the contained app can appear as a "first-class citizen" in menus:

$ ostree ls --repo ~/.local/share/xdg-app/repo \
    app/org.mypaint.MypaintDevel/x86_64/master /export
d00755 0 0 0 /export
d00755 0 0 0 /export/share
user@debian:~$ ostree ls --repo ~/.local/share/xdg-app/repo \
    app/org.mypaint.MypaintDevel/x86_64/master /export/share
d00755 0 0 0 /export/share
d00755 0 0 0 /export/share/app-info
d00755 0 0 0 /export/share/applications
d00755 0 0 0 /export/share/icons
user@debian:~$ ostree ls --repo ~/.local/share/xdg-app/repo \
    app/org.mypaint.MypaintDevel/x86_64/master /export/share/applications
d00755 0 0 0 /export/share/applications
-00644 0 0 715 /export/share/applications/org.mypaint.MypaintDevel.desktop
$ ostree cat --repo ~/.local/share/xdg-app/repo \
    app/org.mypaint.MypaintDevel/x86_64/master \
    /export/share/applications/org.mypaint.MypaintDevel.desktop
[Desktop Entry]
Version=1.0
Name=(Nightly) MyPaint
TryExec=mypaint
Exec=mypaint %f
Comment=Painting program for digital artists
...
Comment[zh_HK]=藝術家的电脑绘画
GenericName=Raster Graphics Editor
GenericName[fr]=Éditeur d'Image Matricielle
MimeType=image/openraster;image/png;image/jpeg;
Type=Application
Icon=org.mypaint.MypaintDevel
StartupNotify=true
Categories=Graphics;GTK;2DGraphics;RasterGraphics;
Terminal=false

Again, there's some metadata:

$ ostree cat --repo ~/.local/share/xdg-app/repo \
    app/org.mypaint.MypaintDevel/x86_64/master /metadata
[Application]
name=org.mypaint.MypaintDevel
runtime=org.gnome.Platform/x86_64/3.18
sdk=org.gnome.Sdk/x86_64/3.18
command=mypaint

[Context]
shared=ipc;
sockets=x11;pulseaudio;
filesystems=host;

[Extension org.mypaint.MypaintDevel.Debug]
directory=lib/debug

Building a runtime, probably the wrong way

The way in which the reference/demo runtimes and containers are generated is... involved. As far as I can tell, there's a base OS built using Yocto, and the actual GNOME bits come from RPMs. However, we don't need to go that far to get a working runtime.

In preparing this runtime I'm probably completely ignoring some best-practices and tools - but it works, so it's good enough.

First we'll need a repository:

$ sudo install -d -o$(id -nu) /srv/xdg-apps
$ ostree init --repo /srv/xdg-apps

I'm just keeping this local for this demonstration, but you could rsync it to a web server's exported directory or something - a lot like a git repository, it's just a collection of files. We want everything in /usr because that's what xdg-app expects, hence usrmerge:

$ sudo mount -t tmpfs -o mode=0755 tmpfs /mnt
$ sudo debootstrap --arch=amd64 --include=libx11-6,usrmerge \
    --variant=minbase stretch /mnt http://192.168.122.1:3142/debian
$ sudo mkdir /mnt/runtime
$ sudo mv /mnt/usr /mnt/runtime/files

This obviously has a lot of stuff in it that we don't need - most obviously init, apt and dpkg - but it's Good Enough™.

We will also need some metadata. This is sufficient:

$ sudo sh -c 'cat > /mnt/runtime/metadata'
[Runtime]
name=org.debian.Debootstrap/x86_64/8.20160130
runtime=org.debian.Debootstrap/x86_64/8.20160130

That's a runtime. We can commit it to ostree, and generate xdg-app metadata:

$ ostree commit --repo /srv/xdg-apps \
    --branch runtime/org.debian.Debootstrap/x86_64/8.20160130 \
    /mnt/runtime
$ fakeroot ostree commit --repo /srv/xdg-apps \
    --branch runtime/org.debian.Debootstrap/x86_64/8.20160130
$ fakeroot xdg-app build-update-repo /srv/xdg-apps

(I'm not sure why ostree and xdg-app report "Operation not permitted" when we aren't root or fakeroot - feedback welcome.)

build-update-repo would presumably also be the right place to GPG-sign your repository, if you were doing that.

We can add that as another xdg-app remote:

$ xdg-app remote-add --user --no-gpg-verify local file:///srv/xdg-apps
$ xdg-app remote-ls --user local
org.debian.Debootstrap

Building an app, probably the wrong way

The right way to build an app is to build a "SDK" runtime - similar to that platform runtime, but with development files and tools - and recompile the app and any missing libraries with ./configure --prefix=/app && make && make install. I'm not going to do that, because simplicity is nice, and I'm reasonably sure xvt doesn't actually hard-code /usr into the binary:

$ install -d xvt-app/files/bin
$ sudo apt-get --download-only install xvt
$ dpkg-deb --fsys-tarfile /var/cache/apt/archives/xvt_2.1-20.1_amd64.deb \
    | tar -xvf - ./usr/bin/xvt
./usr/
./usr/bin/
./usr/bin/xvt
...
$ mv usr xvt-app/files

Again, we'll need metadata, and it's much simpler than the more production-quality GNOME nightly builds:

$ cat > xvt-app/metadata
[Application]
name=org.debian.packages.xvt
runtime=org.debian.Debootstrap/x86_64/8.20160130
command=xvt

[Context]
sockets=x11;

$ fakeroot ostree commit --repo /srv/xdg-apps \
    --branch app/org.debian.packages.xvt/x86_64/2.1-20.1 xvt-app
$ fakeroot xdg-app build-update-repo /srv/xdg-apps
Updating appstream branch
No appstream data for runtime/org.debian.Debootstrap/x86_64/8.20160130
No appstream data for app/org.debian.packages.xvt/x86_64/2.1-20.1
Updating summary
$ xdg-app remote-ls --user local
org.debian.Debootstrap
org.debian.packages.xvt

The obligatory screenshot

OK, good, now we can install it:

$ xdg-app install --user local org.debian.Debootstrap 8.20160130
$ xdg-app install --user local org.debian.packages.xvt 2.1-20.1
$ xdg-app run --branch=2.1-20.1 org.debian.packages.xvt

and you can play around with the shell in the xvt and see what you can and can't do in the container.

I'm sure there were better ways to do most of this, but I think there's value in having such a simplistic demo to go alongside the various GNOMEish apps.

Acknowledgements:

Thanks to all those!

Categories: Elsewhere

Another Drop in the Drupal Sea: Drupal Chat: What About Drupal 8?

Planet Drupal - Sat, 30/01/2016 - 18:17

Drupal 8 was released just over two months ago. Is it time yet for you to start using it on your production sites?

You'll need to consider the state of the modules and themes you typically use to build your sites, the nature of the site, the budget for the site, and your own skill set.

There are, without a doubt, sites that are being launched on Drupal 8 already. And, at the same time there is this:

https://www.acquia.com/blog/accelerating-drupal-8-adoption/27/01/2016/32...

So, there is obviously still work to be done.


Categories: Elsewhere

Neil McGovern: On ZFS in Debian

Planet Debian - Sat, 30/01/2016 - 17:35

I’m currently over at FOSDEM, and have been asked by a couple of people about the state of ZFS and Debian. So, I thought I’d write a quick post to explain Debian’s current plan, which has come together after a lot of discussion with the FTP Masters and others about what we should do.

TLDR: It’s going in contrib, as a source only dkms module.

Longer version:

Debian has always prided itself on providing the unequivocally correct solution to our users and downstream distributions. This also includes licenses: we make sure that Debian contains 100% free software. This means that if you install Debian, you are guaranteed the freedoms offered under the DFSG and our social contract.

Now, this is where ZFS on Linux gets tricky. ZFS is licensed under the CDDL, and the Linux kernel under the GPLv2 only. The project’s view is that both of these are free software licenses, but they are incompatible with each other. This incompatibility means that there is a risk in producing a combined work from Linux and a CDDL module. (Note: there are arguments about whether a kernel module, once loaded, is a combined work with the kernel. I’m not touching that with a barge pole, as I Am Not A Lawyer.)

Now, does this mean that Debian would get sued for distributing ZFS natively compiled into the kernel? Well, maybe, but I think it’s a bit unlikely. That doesn’t mean it’s the right choice for Debian to take as a project, though! It brings us back to our promise to our users and to our commercial and non-commercial downstream distributions. If a commercial downstream distribution took the next stable release and used our binaries, they may well get sued if they have enough money to make it worthwhile. Additionally, Debian has always taken its commitment to upstream licenses very seriously. If there’s a doubt, it doesn’t go into official Debian.

It should be noted that ZFS is important to a lot of Debian users, who want to be able to use it in a way that makes it easier to install. Thus, the position we’ve arrived at is that we can ship ZFS as a source-only DKMS module. This means it will be built on the target machines, and we’re not distributing binaries. There’s also a warning in the README.Debian file explaining that care should be taken if you do things with the resultant binary, as we can’t promise it complies with the licenses.
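From a user’s point of view, the source-only route would look roughly like the sketch below. The package name (zfs-dkms) is assumed here; nothing has been uploaded yet, so treat this as an illustration of the DKMS workflow rather than actual instructions.

# Enable contrib, then let dkms build the module locally at install time.
sudo sed -i 's/ main$/ main contrib/' /etc/apt/sources.list
sudo apt-get update
sudo apt-get install zfs-dkms   # ships sources only; the binary module is built on this machine
dkms status                     # shows the locally built module
sudo modprobe zfs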

Finally, I should point out that this isn’t my decision in the end. The contents of the archive are a decision for the FTP Masters, as that responsibility is delegated to them. However, what I have been able to do is coordinate many conflicting views, and I hope that ZFS will be accepted into the archive soon!

Categories: Elsewhere

Bálint Réczey: Progress report on hardened1-linux-amd64, a potential Debian port with PIE, ASAN, UBSAN and more

Planet Debian - Sat, 30/01/2016 - 12:28

It was more than one and a half years ago that I proposed creating a new security & QA focused port for Debian, and now I’m happy to share the first bits of it.

Last year I started the bootstrapping during the holidays, and I now have a prototype in the form of cross-built packages which can be installed next to amd64 packages using multiarch.

The aim of creating the port is still the same: letting people mix fast (amd64) and reasonably hardened (hardened1-linux-amd64) packages on the same system.

You can already try the cross-built packages in an amd64 unstable chroot, but be warned that the packages are not stable yet.

In the following session I tested curl, which seems to be working OK, and groff, which seems to be too buggy even for debugging:

debootstrap --arch=amd64 unstable test-hardened1
# mount /proc for ASAN
mount --bind /proc test-hardened1/proc
chroot test-hardened1/
apt-get install debian-keyring
# this is my key, I'll create one dedicated release key later
gpg --keyring /usr/share/keyrings/debian-keyring.gpg -a --export 0x21E764DF | apt-key add -
echo "deb http://hardened1-debian.s3.amazonaws.com/debian-cross-built hardened1-unstable main" >> \
    /etc/apt/sources.list
apt-get update
# update apt and dpkg to versions handling the new port
apt-get upgrade
apt-get update
dpkg --add-architecture hardened1-linux-amd64
apt-get update
apt-get install curl:hardened1-linux-amd64
curl -s https://www.debian.org | tail -n2
</body>
</html>
apt-get install -t hardened1-unstable groff:hardened1-linux-amd64
groff
ASAN:SIGSEGV
=================================================================
==20642==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x7f79cd84698a bp 0x619000006980 sp 0x7ffe89b3a930 T0)
ASAN:SIGSEGV
==20642==AddressSanitizer: while reporting a bug found another one. Ignoring.

The next steps are finalizing the changes to apt, dpkg, GCC, glibc and other packages, rebuilding all packages in hardened1-linux-amd64 sbuild chroots, and building the rest of the archive.
Some of the patches are not submitted yet, but they are available in a temporary fork of rebootstrap.
I hope I’ll be back soon with the recompiled and finalized packages, but until then feel free to try the cross-compiled ones! Patches fixing crashes are always welcome!

update 1: Some packages, like dpkg-dev, are not installable yet; I’m working on them.
update 2: There is one similar project I know of, which aims at creating an address-sanitized Gentoo variant; Hanno Böck will give a presentation about it at FOSDEM.

Categories: Elsewhere

Lars Wirzenius: Obnam 1.19.1 released (backup software)

Planet Debian - Sat, 30/01/2016 - 11:35

I have just released version 1.19.1 of Obnam, the backup program. See the website at http://obnam.org for details on what the program does. The new version is available from git (see http://git.liw.fi) and as Debian packages from http://code.liw.fi/debian; it has also been uploaded to Debian and will soon be in unstable.

The NEWS file extract below gives the highlights of what's new in this version. Basically, it fixes a bug.

NOTE: Obnam has an EXPERIMENTAL repository format under development, called green-albatross. It is NOT meant for real use. It is likely to change in incompatible ways without warning. Do not use it unless you're willing to lose your backup.

Version 1.19.1, released 2016-01-30

Bug fix:

  • The check for paramiko version turned out not to work with versions 1.7.8 through 1.10.4, due to the paramiko.__version_info__ variable being missing. It's there in earlier and later versions. Lars Wirzenius added code to make the check work if the paramiko.__version__ variable is there. Jan Niggemann provided research and testing.
Categories: Elsewhere

Mike Hommey: Enabling TLS on this blog

Planet Debian - Sat, 30/01/2016 - 07:22

Long overdue, I finally enabled TLS on this blog. It went almost like a breeze.

I used simp_le to get the certificate from Let’s Encrypt, along with Mozilla’s Web Server Configuration generator. SSL Labs now reports a rating of A+.

I just had a few issues:

  • I had some hard-coded http:// links in my wordpress theme, that needed changes,
  • Since my wordpress instance is reverse-proxied and the real server is not behind HTTPS, I had to adjust the wordpress configuration so that it doesn’t end up in an infinite redirect loop,
  • Nginx’s config for multiple virtualhosts needs the SSL configuration to be repeated. Fortunately, one can use include statements (see the sketch after this list),
  • Contrary to the suggested configuration, setting ssl_session_tickets off; makes browsers unhappy (at least, it made my Firefox unhappy, with a SSL_ERROR_RX_UNEXPECTED_NEW_SESSION_TICKET error message).
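For what it’s worth, the include trick looks roughly like this (a sketch only; the snippet name and certificate paths are placeholder assumptions, not a copy of any real setup):

# /etc/nginx/snippets/tls.conf - shared TLS settings, included by every virtualhost
ssl_certificate     /etc/letsencrypt/live/example.org/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem;
ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
# ssl_session_tickets off;   # left enabled, since turning it off upset Firefox (see above)

# and in each server block:
#     listen 443 ssl;
#     include snippets/tls.conf;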

I’m glad that there are tools helping to get a proper SSL configuration. It is sad, though, that the defaults are not better and that we still need to tweak things at all. Pointing the server at the certificate and the private key files should, in 2016, be the only thing needed to have a secure web server.

Categories: Elsewhere

ThinkDrop Consulting: OpenDevShop at DrupalCamp NJ 2016 and the Global Sprint Weekend!

Planet Drupal - Sat, 30/01/2016 - 04:26

In a few hours, I'll be heading to Princeton University for my fourth DrupalCamp New Jersey.

This camp holds a special place in my heart because it was the first place that I spoke about OpenDevShop in public.

DrupalCamp NJ brought me out of my lonely freelancer shell and into the larger Drupal community. 

2013: Beginnings of a Community

For DrupalCamp NJ 2013, we sponsored, and I held two BoFs on Aegir & DevShop. A very small number of people attended, both of whom are now good friends! Back then, I called it "DevShop: the Drupal Environment Manager". It took a long time to figure out how to even describe the thing.

Looking back, it was a critical moment. Getting out there and bragging about my own project was not something I was used to. It was the small beginning of a long tradition of listening to what people thought and keeping track of the good and, more importantly, the bad things about the product. By listening, we've been able to focus on what matters most for our users.

2014: "Drupal Infrastructure in a Box"

The next year, for DrupalCamp NJ 2014, the organizers were nice enough to accept my session proposal. The room was huge, and I really didn't expect such a large turnout. The feedback this year was incredible.

2015: Turnkey Testing

For DrupalCamp 2015, the organizers were gracious enough to select my session again. This time I focused on the new Testing features I added to OpenDevShop. 

It was at this camp that we unveiled the ability of OpenDevShop to run Behat and other tests automatically, every time you push code.  

I was able to demo how a new Pull Request on GitHub can automatically spin up a new environment and test it.

It was such a great session and we got such great feedback that we devoted all of our efforts to polishing and enhancing the testing. It became really clear at this DrupalCamp NJ that everyone wants to get better at testing.

Our goal with OpenDevShop became to make writing and running tests as easy as possible.

2016?

This year, we won't be holding a session, or a BoF. This year, we will be hunkered down in the Coding Lounge, working hard on upgrading OpenDevShop to be able to host and test Drupal 8. We plan on releasing an Alpha by Sunday.

Mentoring & Collaboration

On Sunday, we are staying all day for the Mentoring & Collaboration day at the FFW offices.

I've been working hard to make OpenDevShop easier to contribute to, and helping others join the community. We are going to be online and working together to improve our own tests, development environments and documentation all weekend long.

Please join us in our chat room at https://gitter.im/opendevshop/devshop and in the issue queue at https://github.com/opendevshop/devshop.

Global Sprint Weekend

This year, the Camp lines up with the Global Sprint Weekend organized by the Drupal Association. People all over the world are coming together in person and online to work together on Drupal and Open Source.

We are doing the same with OpenDevShop. Come online or in person at your local Global Sprint Weekend meetup and we will help get you setup to contribute to development!

This should be an amazing weekend. Drupal and OpenDevShop are hitting some serious momentum with Drupal 8. We're really looking forward to working with our new contributors, and pushing out DevShop 1.0!

 

For more information about OpenDevShop, visit getdevshop.com.

Tags: devshop, DrupalCampNJ, Planet Drupal
Categories: Elsewhere

OSTraining: Backup and Restore a Drupal 8 Site

Planet Drupal - Sat, 30/01/2016 - 03:03

Drupal 8 is here and ready to use right now.

However, not all of the contributed modules are available yet. That includes Backup and Migrate, which was the most popular way to back up and restore Drupal 7 sites.

These four videos, sponsored by the excellent team at InMotion Hosting, offer a backup and restore solution that you can use with Drupal 8 today.
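Until Backup and Migrate is ready, one common manual approach is a database dump plus an archive of the files directory; this is just a generic sketch (the paths are placeholders) and not necessarily what the videos cover:

$ drush sql-dump --result-file=/backups/site.sql
$ tar -czf /backups/files.tar.gz sites/default/files
# to restore: load the dump back in, then unpack the files
$ drush sql-drop -y && drush sql-cli < /backups/site.sql
$ tar -xzf /backups/files.tar.gz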

Categories: Elsewhere

Dimitri John Ledkov: Four gunmen outside

Planet Debian - Sat, 30/01/2016 - 02:39

There are four gunmen outside of my hotel. They are armed with automatic rifles and pistols. I am scared for my life having sneaked past them inside. Everyone else is acting as if everything is normal. Nobody is scared or running for cover. Nobody called the police. I've asked the reception to talk to the gunmen and ask them to leave. They looked at me as if I am mad. Maybe I am. Is this what schizophrenia feels like?! Can you see them in the picture?! Please help. There are four gunmen outside of my hotel. I am not in central Beirut, I am in central Brussels.

Categories: Elsewhere

Promet Source: How to Integrate Association Management Systems with Drupal

Planet Drupal - Fri, 29/01/2016 - 22:17

Association websites should be built to handle everything from membership drives to billing activities. Having a website by itself isn't enough; associations also need robust member management databases running behind their websites. There are many vendors who specialize in products that meet this need. These products are commonly referred to as an Association Management System (AMS) and it's rare to find a large organization that doesn't use one.

Categories: Elsewhere

Drupal core announcements: Help finalizing Migrate at the Global Sprint Weekend

Planet Drupal - Fri, 29/01/2016 - 22:12

The Migrate team would like to finish up the Migrate subsystem for next month's release of Drupal 8.1.0. They've collected a number of issues for the upcoming global sprint weekend:

Global Sprint Weekend Migrate Issues

There are issues available for all levels of expertise. If you need help, the Migrate maintainers are in #drupal-migrate on irc.freenode.net as usual and are happy to answer questions.

Categories: Elsewhere

Jan Wagner: Oxidized - silly attempt at (Really Awesome New Cisco confIg Differ)

Planet Debian - Fri, 29/01/2016 - 21:46

For ages I have wanted to replace this freaking backup solution for our network equipment, based on some hacky shell scripts and expect, which uploads the configs to a TFTP server.

Years ago I stumbled upon RANCID (Really Awesome New Cisco confIg Differ) but had no time to implement it. Now I have returned to my idea to get rid of all our old crap.
I don't know exactly where, I think it was at DENOG2, but I saw RANCID coupled with a VCS: the NOC was notified about configuration (and inventory) changes by mailing the configuration diff, and the history was kept in the VCS.
The good old RANCID does not seem to support writing into a VCS out of the box. But to the rescue comes rancid-git, a fork that promises git extensions and support for colorized emails. So far so good.

While I was searching for a VCS-capable RANCID, somewhere under a stone I found Oxidized, a 'silly attempt at rancid'. Looking at it, it seems more sophisticated, so I thought this might be the right approach. Unfortunately there is no Debian package available, but I found an ITP created by Jonas.

Anyway, just for looking into it, I thought the Docker path might be a good idea for a testbed, as no Debian package is available (yet).

Oxidized only needs a config file for its configuration, and a RANCID-compatible router.db file can be used as the nodes source (besides the SQLite and HTTP backends). A migration into a production environment seems pretty easy. So I gave it a go.

I assume Docker is installed already. There is a Docker image on Docker Hub that looks official, but it does not seem to be maintained at the moment. An issue is open for automated building of the image.

Creating Oxidized container image

The official documentation describes the procedure. I used a slightly different approach.

docking-station:~# mkdir -p /srv/docker/oxidized/
docking-station:~# git clone https://github.com/ytti/oxidized \
    /srv/docker/oxidized/oxidized.git
docking-station:~# docker build -q -t oxidized/oxidized:latest \
    /srv/docker/oxidized/oxidized.git

I thought it might be a good idea to also tag the image with the actual version of the gem.

docking-station:~# docker tag oxidized/oxidized:latest \
    oxidized/oxidized:0.11.0
docking-station:~# docker images | grep oxidized
oxidized/oxidized   latest   35a325792078   15 seconds ago   496.1 MB
oxidized/oxidized   0.11.0   35a325792078   15 seconds ago   496.1 MB

Create the initial default configuration as described in the documentation.

docking-station:~# mkdir -p /srv/docker/oxidized/.config/
docking-station:~# docker run -e CONFIG_RELOAD_INTERVAL=600 \
    -v /srv/docker/oxidized/.config/:/root/.config/oxidized \
    -p 8888:8888/tcp -t oxidized/oxidized:latest oxidized

Adjusting configuration

After this I adjusted the default configuration to write a log, store the nodes config in a bare git repository, keep the node secrets in router.db, and add some hooks for debugging.

Creating node configuration

docking-station:~# echo "7204vxr.lab.cyconet.org:cisco:admin:password:enable" >> \
    /srv/docker/oxidized/.config/router.db
docking-station:~# echo "ccr1036.lab.cyconet.org:routeros:admin:password" >> \
    /srv/docker/oxidized/.config/router.db

Starting the oxidized beast

docking-station:~# docker run -e CONFIG_RELOAD_INTERVAL=600 \
    -v /srv/docker/oxidized/.config/:/root/.config/oxidized \
    -p 8888:8888/tcp -t oxidized/oxidized:latest oxidized
Puma 2.16.0 starting...
* Min threads: 0, max threads: 16
* Environment: development
* Listening on tcp://127.0.0.1:8888

If you want the container to be started automatically with the Docker daemon, you can start it with --restart always and Docker will take care of it. If I wanted to make it permanent, I would use a systemd unit file.
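Such a unit file could look roughly like the sketch below; the unit name, container name and paths are assumptions based on the commands above, not a tested configuration.

# /etc/systemd/system/oxidized.service (hypothetical)
[Unit]
Description=Oxidized network configuration backup (Docker container)
Requires=docker.service
After=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f oxidized
ExecStart=/usr/bin/docker run --name oxidized \
    -e CONFIG_RELOAD_INTERVAL=600 \
    -v /srv/docker/oxidized/.config/:/root/.config/oxidized \
    -p 8888:8888/tcp oxidized/oxidized:latest oxidized
ExecStop=/usr/bin/docker stop oxidized
Restart=always

[Install]
WantedBy=multi-user.target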

Reload configuration immediately

If you don't want to wait for the configuration to be reloaded automatically, you can trigger it manually.

docking-station:~# curl -s http://localhost:8888/reload?format=json \
    -o /dev/null
docking-station:~# tail -2 /srv/docker/oxidized/.config/log/oxidized.log
I, [2016-01-29T16:50:46.971904 #1]  INFO -- : Oxidized starting, running as pid 1
I, [2016-01-29T16:50:47.073307 #1]  INFO -- : Loaded 2 nodes

Writing nodes configuration

docking-station:/srv/docker/oxidized/.config/oxidized.git# git shortlog
Oxidizied (2):
    update 7204vxr.lab.cyconet.org
    update ccr1036.lab.cyconet.org

Writing the nodes' configurations into a local bare git repository is neat but far from perfect. It would be cool to have all the stuff in a central VCS, so I'm pushing it every 5 minutes into one with a cron job.

docking-station:~# cat /etc/cron.d/doxidized
# m h dom mon dow user  command
*/5 * *   *   *   root  $(/srv/docker/oxidized/bin/oxidized_cron_git_push.sh)
docking-station:~# cat /srv/docker/oxidized/bin/oxidized_cron_git_push.sh
#!/bin/bash

DOCKER_OXIDIZED_BASE="/srv/docker/oxidized/"
OXIDIZED_GIT_DIR=".config/oxidized.git"

cd ${DOCKER_OXIDIZED_BASE}/${OXIDIZED_GIT_DIR}
git push origin master --quiet

Now that all the nodes' configurations are in a source code hosting system, we can browse the configurations, changes and history, and even set up notifications for changes. Mission accomplished!

Now I can test the coverage of our equipment. The last thing that would make me super happy: an oxidized Debian package!

Categories: Elsewhere

Acquia Developer Center Blog: Drupal Global Sprint 2016, New England-Style

Planet Drupal - Fri, 29/01/2016 - 21:43
DC Denison

Tom Kraft and Renato Francia were conferring in the kitchen, laptops open, “trying to make the Feeds module work better out of the box.”

In a nearby conference room, Dan Feidt was juggling a bunch of open windows on his laptop screen, tackling “a little puzzle around virtualization and Vagrant.”

Kay VanValkenburgh, who was in charge of beginners and onboarding, was roaming, talking to attendees, “removing barriers.”

Tags: acquia drupal planet
Categories: Elsewhere

DrupalCon News: Staying for the Community: Stories From Our Organizers

Planet Drupal - Fri, 29/01/2016 - 20:13

Some of our very own DrupalCon Asia organizers are members of the Drupal Association. We spoke to them about why membership is so important to them, and their answers were so great we had to share. As DrupalCon Asia gets closer, we invite you to read why they support the community and the Drupal project with us:

Categories: Elsewhere

ImageX Media: Can These "Under the Radar" Keyword Tools Help You Optimize Your Content Marketing Strategy? We’ll Help You Find Out.

Planet Drupal - Fri, 29/01/2016 - 20:02

 

Thanks to new innovations in the search engine optimization space, there are more tools than ever before. This article explores 5 powerful keyword research tools that might not even be on your competitors’ radar yet.
 

Categories: Elsewhere

InternetDevels: Reasons to Upgrade Your Site From Drupal 6 to Drupal 7

Planet Drupal - Fri, 29/01/2016 - 16:05

We discussed in one of our previous blog posts that Drupal 6 is "dead". However, if you're a site owner and still not sure whether you need Drupal 7, we've prepared this article specifically for you.

Categories: Elsewhere

Acquia Developer Center Blog: Maintainer's Toolbox: git blame

Planet Drupal - Fri, 29/01/2016 - 15:04
Jess (xjm)

This blog post is part of a series on everyday tools and strategies for code review, drawn from Drupal contribution experiences. xjm is a Drupal 8 core maintainer and release manager.

If you have spent much time developing software with others, you've probably asked yourself some of these questions at one time or another:
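As a generic illustration of the kind of question git blame answers (the file names, line ranges and commit placeholder below are not taken from the post):

$ git blame -L 120,140 src/module.c           # who last touched these lines, and in which commit
$ git log -1 --format='%h %an %s' <commit>    # read that commit's message for the "why"
$ git log -S 'some_function' -- src/          # or search history for when a symbol appeared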

Tags: acquia drupal planet
Categories: Elsewhere
