Feed aggregator

Pantheon Blog: D8 and Drupal’s Destiny on the Open Web

Planet Drupal - Mon, 09/05/2016 - 17:40
I believe that the web is Earth’s most powerful communication tool. In particular I believe in the Open Web, a place of small pieces loosely joined, true to the original architecture of the internet, the philosophy of UNIX, and of HTTP as a protocol. This technology is rewriting the way humanity operates, and I believe it is one of the most positive things to emerge from the 20th century.
Categories: Elsewhere

Third & Grove: Nat Catchpole on Three Months at TAG as a Drupal 8 Core Committer

Planet Drupal - Mon, 09/05/2016 - 16:00
Nat Catchpole on Three Months at TAG as a Drupal 8 Core Committer catch Mon, 05/09/2016 - 10:00
Categories: Elsewhere

Third & Grove: Retrospective on Hiring a Drupal 8 Core Maintainer

Planet Drupal - Mon, 09/05/2016 - 16:00
Retrospective on Hiring a Drupal 8 Core Maintainer justin Mon, 05/09/2016 - 10:00
Categories: Elsewhere

Mike Gabriel: Recent progress in NXv3 development

Planet Debian - Mon, 09/05/2016 - 14:42

This is to give a comprehensive overview on the recent progress made in NXv3 (aka nx-libs) development.

The upstream sources of nx-libs can be found at / viewed on / cloned from GitHub: https://github.com/ArcticaProject/nx-libs

A great portion of the current work is sponsored by the Qindel Group [1] in Spain (QINDEL FORMACIÓN Y SERVICIOS, S.L.). Thanks for making this possible.

Planned release date: 2nd July, 2016

We aim to release a completely tidied-up nx-libs code tree, versioned 3.6.0, on July 2nd, 2016. There is still a whole bunch of work to do for this, but I am positive that we can make this release date.

Goals of our Efforts

There are basically two major goals for spending a considerable amount of time, money and energy on NXv3 hacking:

  • make this beast long-term maintainable
  • make it work with latest X11 desktop environments and applications

The efforts undertaken always have the various existing use cases in mind (esp. the framework of the coming-up Arctica Project, TheQVD and X2Go).

Overview on Recent Development Progress

General Code Cleanups

Making this beast maintainable means, first of all, identifying code redundancies, unused code passages, etc., and removing them.

This is where we came from (NoMachine's NX 3.5.x, including nxcomp, nxcompext, nxcompshad, nx-X11 and nxagent): 1,757,743 lines of C/C++ code.

[mike@minobo nx-libs.35 (3.5.0.x)]$ cloc --match-f '.*\.(c|cpp|h)$' .
    5624 text files.
    5614 unique files.
    2701 files ignored.

http://cloc.sourceforge.net v 1.60  T=18.59 s (302.0 files/s, 132847.4 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
C                             3134         231180         252893        1326393
C/C++ Header                  2274          78062         116132         349743
C++                            206          20037          13312          81607
-------------------------------------------------------------------------------
SUM:                          5614         329279         382337        1757743
-------------------------------------------------------------------------------

On the current 3.6.x branch of nx-libs (at commit 6c6b6b9), this is where we are now: 662,635 lines of C/C++ code; the amount of code has been reduced to just over a third of the original line count.

[mike@minobo nx-libs (3.6.x)]$ cloc --match-f '.*\.(c|cpp|h)' .
    2012 text files.
    2011 unique files.
    1898 files ignored.

http://cloc.sourceforge.net v 1.60  T=5.63 s (341.5 files/s, 161351.5 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
C                             1015          74605          81625         463244
C/C++ Header                   785          26992          34354         138063
C++                            122          16984          10804          61328
-------------------------------------------------------------------------------
SUM:                          1922         118581         126783         662635
-------------------------------------------------------------------------------

The latest development branch currently has these statistics: 619,353 lines of C/C++ code; another ~43,000 lines could be dropped.

[mike@minobo nx-libs (pr/libnx-xext-drop-unused-extensions)]$ cloc --match-f '.*\.(c|cpp|h)' .
    1932 text files.
    1931 unique files.
    1898 files ignored.

http://cloc.sourceforge.net v 1.60  T=5.66 s (325.4 files/s, 150598.1 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
C                              983          69474          77186         426564
C/C++ Header                   738          25616          33048         131599
C++                            121          16984          10802          61190
-------------------------------------------------------------------------------
SUM:                          1842         112074         121036         619353
-------------------------------------------------------------------------------

Dropping various libNX_X* shared libraries (and using X.org shared client libraries instead)

First, various libraries bundled with the nx-X11 code tree could be dropped. Second, several of the bundled X.org libraries could be dropped, because we managed to build against the versions of those libraries provided system-wide.

Then, and this undertaking was much trickier, we dropped nearly all Xlib extension libraries used by nxagent in its role as an X11 client.

We could successfully drop these Xlib extension libraries from nx-X11 because we managed to build nxagent against the matching X.org libraries: libNX_Xdmcp, libNX_Xfixes, libNX_XComposite, libNX_Xdamage, libNX_Xtst, libNX_Xinerama, libNX_Xfont, libNX_xkbui, and various others. All of these removals happened without any loss of functionality.

However, some shared X client libraries are not easy to remove without losing functionality, or cannot be removed at all.

Dropping libNX_Xrender

We recently dropped libNX_Xrender [2] and now build nxagent against X.org's libXrender. However, this cost us a compression feature in NX: the libNX_Xrender code has passages that zero-pad the unused memory portions of non-32-bit-depth glyphs (the NX_RENDER_CLEANUP feature). We hope to reintroduce that feature later, but current efforts [3] still fail at run-time.

Dropping libNX_Xext is not possible...

...the libNX_Xext / Xserver Xext code has been cleaned up instead.

A considerable amount of research and testing went into removing the libNX_Xext library from the build workflow of nxagent. However, it seems that building against X.org's libXext will require us to update the Xext part of the nxagent Xserver at the same time. While taking this deep look into the Xext code, we dropped various Xext extensions from the nx-X11 Xserver code. The extensions that got dropped [5] are all extensions that have already been dropped from X.org's Xserver code as well.

Further investigation, however, showed that the only client-side extension code from libNX_Xext actually in use is the XShape extension. Thus, all other client-side extensions got dropped in a very recent pull request [4].

Dropping libNX_X11 is not easy; a research summary is given here

For the sake of dropping the Xlib library bundled with nx-libs, we have attempted to write a shared library called libNX_Xproxy. This library is supposed to contain the subset of NXtrans-related Xlib code that NoMachine patched into X.org's libX11 (and libxtrans).

Results of this undertaking [6] so far:

  • We managed to build nxagent against Xlib from X.org
  • Local nxagent sessions (using the normal X11 transport) worked and seemed to perform better than sessions running under the original Xlib library (libNX_X11) bundled with nx-libs.
  • NXtrans support in libNX_Xproxy is half implemented and far from finished yet. The referenced branch [6] is a work-in-progress branch: don't expect it to work, and please expect force-pushes on that branch, too.

Over the weekend, I thought this all through once more and I am now pretty sure that we can make libNX_Xproxy work and drop all of libNX_X11 from nx-libs soon. Although we have to port (i.e. copy) various functions related to the NX transport from libNX_X11 into libNX_Xproxy, this change will allow us to drop all Xlib drawing routines and use those provided by X.org's Xlib shared library directly.

Composite extension backport in nxagent to version 0.4

Mihai Moldovan looked into what it takes to make the Composite extension functional in nxagent. Unfortunately, the exact answer cannot be given yet. As a start, Mihai backported the latest Composite extension (v0.4) code from X.org's Xserver into the nxagent Xserver [7]. The Composite extension version currently shipped in nxagent is v0.2.

Work on the NX Compression shared library (aka nxcomp v3)

Fernando Carvajal and Salvador Fandiño from the Qindel Group [1] recently filed three pull requests against the nxcomp part of nx-libs; two of them were code cleanups (done by Fernando), the third is a feature enhancement regarding channels in nxcomp (provided by Salva).

Protocol clean-up: drop pre-3.5 support

With the release of nx-libs 3.6.x, we will drop support for nxcomp versions earlier than 3.5. Thus, if you are still on nxcomp 3.4, be prepared to upgrade to at least version 3.5.x.

The code removal had been issued as pull request #111 ("Remove compatibility code for nxcomp before 3.5.0") [8]. The PR has already been reviewed and merged.

Fernando filed another code cleanup PR (#119 [9]) against nx-libs that also already got merged into the 3.6.x branch.

UNIX Socket Support for Channels

The nxcomp library (and thus, nxproxy) provides a feature called "channels". Channels in nxcomp can be used for forwarding traffic between the NX client side and the NX server side (alongside the graphical X11 transport that nxcomp is designed for). Until version 3.5.x, nxcomp was only able to use local TCP sockets for providing / connecting to channel endpoints. We consider local TCP sockets insecure and aim to add UNIX socket support to nxcomp wherever a connection is established.

Salva provided a patch against nxcomp that adds UNIX socket support to channel endpoints. The initial use case for this patch: connect to the client-side PulseAudio socket file and avoid enabling the TCP listening socket in PulseAudio. The traffic is then channeled to the server side, so that pulse clients can connect to a UNIX socket file rather than to a local TCP port.

The channel support patch has already been reviewed and merged into the 3.6.x branch of nx-libs.

Rebasing nxagent against latest X.org

Ulrich Sibiller spent a considerable amount of time and energy on providing a build chain that allows building nxagent against a modularized X.org 7.0 (rather than against the monolithic build tree of X.org 6.9, like we still do in nx-libs 3.6.x). We plan to adapt and minimize this build workflow for nx-libs 3.7.x (scheduled for summer 2017).

A short howto showing how to build nxagent with that new workflow will be posted on this blog within the next few days, so stay tuned.

Further work accomplished

Quite a lot of code cleanup PRs have been filed by myself against nx-libs. Most of them target the removal of unnecessary code from the nx-X11 Xserver code base and the nxagent DDX:

  • Amend all issues generating compiler warnings in programs/Xserver/hw/nxagent/. (PR #102 [10], not yet merged).
  • nx-X11's Xserver: Drop outdated Xserver extensions. (PR #106 [11], not yet merged).
  • nxagent: drop duplicate Xserver code and include original Xserver code in nxagent code files. (PR #120 [12], not yet merged).
  • libNX_Xext: Drop unused extensions. (PR #121 [4], not yet merged).

The third one (PR #120) in the list requires some detailed explanation:

We discovered that nxagent ships overrides for some symbols from the original Xserver code base. These overrides are induced by copies of files from various Xserver sub-directories placed into the hw/nxagent/ DDX path; all those files' names match the pattern NX*.c. These code copies are done in a way that keeps the C compiler from throwing its 'symbol "" redefined: first defined in ""; redefined in ""' errors.

This approach, however, requires keeping tens of thousands of lines of redundant code in hw/nxagent/NX*.c that also get shipped in some Xserver sub-directory (mostly dix/ and render/).

With pull request #120, we have identified all code passages in hw/nxagent/NX*.c that must be considered as NX'ish. We differentiated the NX'ish functions from functions that never got changed by NoMachine when developing nxagent.

I then came up with four different approaches ([13,14,15,16]) to dropping redundant code from those hw/nxagent/NX*.c files. We (Mihai Moldovan and myself) discussed the various approaches and favoured the disable-Xserver-code-and-include-into-NX*.c variant [14] over the others for the following reasons:

  • It requires the least invasive change in Xserver code files.
  • It pulls in Xserver code (MIT/X11) into nxagent code (GPL-2) at build time (vs. [13]).
  • It does not require weakening symbols; static symbols can stay static [15].
  • We don't have to use the hacky interception approach as shown in [16].

In the long run, the Xserver portion of the patches provided via pull request #120 will need to be upstreamed into X.org's Xserver. The discussion around this will start once we fully dive into rebasing nxagent's Xserver code base against the latest X.org Xserver.

Tasks ahead before the 3.6.x Release

We face various tasks before 3.6.x can be released. Here is a probably incomplete list:

  • Drop libNX_X11 from nx-libs (2nd attempt)
  • Drop libNX_Xau from nx-libs
  • Fully fix the Composite extension in nxagent (hopefully)
  • Several QA pull requests to close several of the open issues
  • Upgrade XRandR extension in the nxagent Xserver to 1.4
  • Attempt to fix KDE5 start-up inside nxagent and recent GNOME application failures (due to the missing Xinput2 extension in nxagent)
  • More UNIX file socket support in nxcomp
  • Fix reparenting in nxagent
  • Make nxagent run smoothly in x11vnc when suspended

Tasks ahead after the 3.6.x Release (i.e., for 3.7.x)

Here is an even rougher and probably highly incomplete list for tasks after the 3.6.x release:

  • Rename nx-libs to a new name that is less reminiscent of the original authoring company (NoMachine).
  • Generalize the channel support in nxcomp (make it possible to fire up an arbitrary number of channels with TCP and/or UNIX socket endpoints).
  • Proceed with the X.org Rebasing Effort.

Some people must be named here who give their heart and love to this project. Thank you, guys, for supporting the development efforts around nx-libs and the Arctica Project:

Thanks to Nico Arenas Alonso from and on behalf of the Qindel Group for coordinating the current funding project around nx-libs.

Thanks to Ulrich Sibiller for giving a great amount of spare time to working on the nxagent-rebase-against-X.org effort.

Thanks to Mihai Moldovan for doing endless code reviews and being available for contracted work via BAUR-ITCS UG [17] on NXv3, as well.

Thanks to Mario Becroft for providing a patch that allows us to hook into nxagent X11 sessions with VNC and have the session fully available over VNC while the NX transport is in suspended state. Mario is also pouring some fancy UDP ideas into the re-invention of remote desktop computing performed in the Arctica Project. He has been an NX supporter for years; I am glad to still have him around after all this time (although he came close to abandoning NX usage at least once).

Thanks to Fernando Carvajal from Qindel (until April 2016) for cleaning up nxcomp code.

Thanks to Orion Poplawski from the Fedora Project for working on the first bundled libraries removal patches and being a resource on RPM packaging.

Thanks to my friend Lee for working behind the scenes on the Arctica Core code and constantly pouring various of his ideas into my head. Thanks for regularly reminding me to benchmark things.

Folks, thanks to all of you for all your various efforts on this huge beast of software. You are a great resource of expertise and it's a pleasure and honour working with you all.

New Faces

Last but not least, I'd like to let everyone know that the Qindel Group is sponsoring another developer joining NXv3 development: Vadim Troshchinskiy (aka vatral on Github). Vadim has worked on NXv3 before and we are looking forward to having him and his skills on the team soon (probably by the end of May 2016).

Welcome on board, Vadim.

[1] http://www.qindel.com
[2] https://github.com/ArcticaProject/nx-libs/pull/93
[3] https://github.com/sunweaver/nx-libs/commit/be41bde7efc46582b442706dfb85...
[4] https://github.com/ArcticaProject/nx-libs/pull/121
[5] https://github.com/ArcticaProject/nx-libs/pull/106
[6] https://github.com/sunweaver/nx-libs/tree/wip/libnx-x11-full-removal
[7] https://github.com/sunweaver/nx-libs/tree/pr/composite-0_4
[8] https://github.com/ArcticaProject/nx-libs/pull/111
[9] https://github.com/ArcticaProject/nx-libs/pull/119
[10] https://github.com/ArcticaProject/nx-libs/pull/102
[11] https://github.com/ArcticaProject/nx-libs/pull/106
[12] https://github.com/ArcticaProject/nx-libs/pull/120
[13] https://github.com/sunweaver/nx-libs/commit/9692e6a7045b3ab5cb0daaed187e... (include NX'ish code into Xserver)
[14] https://github.com/sunweaver/nx-libs/commit/3d359bfc2b6d021c1ae9c6e19e96... (include Xserver code into NX*.c)
[15] https://github.com/sunweaver/nx-libs/commit/af72ee5624a15d21c610528e37b6... (use weak symbols and non-static symbols)
[16] https://github.com/sunweaver/nx-libs/commit/7205bb8848c49ee3e78a82fde906... (override symbols with interceptions)
[17] http://www.baur-itcs.de/20-x2go/20-x2gosupport/

Categories: Elsewhere

Riku Voipio: Booting ubuntu 16.04 cloud images on Arm64

Planet Debian - Mon, 09/05/2016 - 14:32
For testing kvm/qemu, prebaked cloud images are nice. However, there are a few steps to get started. First we need a recent Qemu (2.5 is good enough), EFI firmware, and cloud-utils for customizing our VM.
sudo apt install -y qemu qemu-utils cloud-utils
wget https://releases.linaro.org/components/kernel/uefi-linaro/15.12/release/qemu64/QEMU_EFI.fd
wget https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-arm64-uefi1.img
Cloud images are plain: there is no user setup and no default user/password combo, so to log in to the image we need to customize it on first boot. The de facto tool for this is cloud-init. The simplest method for using cloud-init is passing a block device with a settings file; for a real cloud deployment, you would of course use one of the fancy network-based initialization protocols cloud-init supports. Enter the following into a file, say cloud.txt:

#cloud-config
users:
  - name: you
    ssh-authorized-keys:
      - ssh-rsa AAAAB3Nz....
    groups: sudo
    shell: /bin/bash
This minimal config just sets up a user with an ssh key. A more complex setup can install packages, write files and run arbitrary commands on first boot. In professional setups, you would most likely end up using cloud-init only to bootstrap Ansible or another configuration management tool.
cloud-localds cloud.img cloud.txt
qemu-system-aarch64 -smp 2 -m 1024 -M virt -bios QEMU_EFI.fd -nographic \
-device virtio-blk-device,drive=image \
-drive if=none,id=image,file=xenial-server-cloudimg-arm64-uefi1.img \
-device virtio-blk-device,drive=cloud \
-drive if=none,id=cloud,file=cloud.img \
-netdev user,id=user0 -device virtio-net-device,netdev=user0 -redir tcp:2222::22 \
-enable-kvm -cpu host
If you are on an X86 host and want to use qemu to run an aarch64 image, replace the last line with "-cpu cortex-a57". Now, since the example uses user networking with tcp port redirect, you can ssh into the VM:
ssh -p 2222 you@localhost
Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-22-generic aarch64)
Categories: Elsewhere

Drop Guard: Michael Schmid presents their amazee-ing Drupal Security

Planet Drupal - Mon, 09/05/2016 - 13:45
Michael Schmid presents their amazee-ing Drupal Security Johanna Anthes Mon, 09.05.2016 - 13:45

Michael Schmid, Group CTO at Amazee, leads his team with creativity and an amount of know-how you wouldn't expect from someone his age! Amazee Labs, a web-hosting, web-consulting and development company, started their Drupal security of 2016 with Drop Guard. And amazee.io, the just-launched Drupal hosting platform built for developers, has a full integration with Drop Guard.

Drupal Drupal Planet Security Interview
Categories: Elsewhere

Valuebound: Create Apache2 Virtual Host using Shell Script

Planet Drupal - Mon, 09/05/2016 - 13:05

The ability to create and use tools has made the human race dominant in the world. Tools make our work easier and also save time. The tool I am going to share is a bash shell script that creates an Apache2 virtual host.

Why Virtual Host?

Using virtual hosts, we can run more than one web site (such as dev.drupal-cms.com, stage.drupal-cms.com and www.drupal-cms.com) on a single machine. Virtual hosts can be "IP-based" or "name-based". With IP-based virtual hosting, each web site has its own IP address. With name-based virtual hosting, multiple names run on a single IP address.

Shell script code

Script expects 3…
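The script itself is cut off in this feed excerpt. As a rough sketch of the idea (the domain, paths and SITES_DIR variable here are my assumptions, not the post's actual script; SITES_DIR defaults to a local directory so the sketch can run unprivileged, whereas a real Debian/Ubuntu system would use /etc/apache2/sites-available):

```shell
#!/bin/sh
# Sketch only: generate a minimal name-based Apache2 virtual host config.
DOMAIN="${1:-dev.drupal-cms.com}"
SITES_DIR="${SITES_DIR:-./sites-available}"   # /etc/apache2/sites-available on a real system
DOCROOT="/var/www/$DOMAIN"

mkdir -p "$SITES_DIR"
cat > "$SITES_DIR/$DOMAIN.conf" <<EOF
<VirtualHost *:80>
    ServerName $DOMAIN
    DocumentRoot $DOCROOT
    ErrorLog \${APACHE_LOG_DIR}/$DOMAIN-error.log
    CustomLog \${APACHE_LOG_DIR}/$DOMAIN-access.log combined
</VirtualHost>
EOF
echo "Wrote $SITES_DIR/$DOMAIN.conf"
# On a real system, follow up with: a2ensite "$DOMAIN.conf" && systemctl reload apache2
```

On a real server you would also create the DocumentRoot and enable the site as noted in the final comment.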

Categories: Elsewhere

Clemens Tolboom: My sync2dev script on Mac

Planet Drupal - Mon, 09/05/2016 - 12:58

# Sync database
# Fix before receiving files
# Get the files
# Fix after receiving
# Drupal 7 variant of cache-rebuild
# Login to site
# Make sure drush finds the features drush extension
# List features with changes

Categories: Elsewhere

Joachim Breitner: Doctoral Thesis Published

Planet Debian - Mon, 09/05/2016 - 10:48

I have officially published my doctoral thesis “Lazy Evaluation: From natural semantics to a machine-checked compiler transformation” (DOI: 10.5445/IR/1000054251). The abstract of the 226-page document that earned me a “summa cum laude” reads

In order to solve a long-standing problem with list fusion, a new compiler transformation, 'Call Arity' is developed and implemented in the Haskell compiler GHC. It is formally proven to not degrade program performance; the proof is machine-checked using the interactive theorem prover Isabelle. To that end, a formalization of Launchbury's Natural Semantics for Lazy Evaluation is modelled in Isabelle, including a correctness and adequacy proof.

and I assembled all relevant artefacts (the thesis itself, its LaTeX-sources, the Isabelle theories, various patches against GHC, raw benchmark results, errata etc.) at http://www.joachim-breitner.de/thesis/.

Other, less retrospective news: My paper on the Incredible Proof Machine got accepted at ITP in Nancy, and I was invited to give a keynote demo about the proof machine at LFMTP in Porto. Exciting!

Categories: Elsewhere

Russell Coker: BIND Configuration Files

Planet Debian - Mon, 09/05/2016 - 10:32

I’ve recently been setting up more monitoring etc to increase the reliability of servers I run. One ongoing issue with computer reliability is any case where a person enters the same data in multiple locations, often people make mistakes and enter slightly different data which can give bad results.

For DNS you need to have at least 2 authoritative servers for each zone. I’ve written the below Makefile to extract the zone names from the primary server and generate a config file suitable for use on a secondary server. The next step is to automate this further by having the Makefile copy the config file to secondary servers and run “rndc reload”. Note that in a typical Debian configuration any user in group “bind” can write to BIND config files and reload the server configuration so this can be done without granting the script on the primary server root access on the secondary servers.

My blog replaces the TAB character with 8 spaces; you need to fix this up if you want to run the Makefile on your own system, and also fill in the IP address of your primary server in the empty masters { }; clause.

all: other/secondary.conf

other/secondary.conf: named.conf.local Makefile
        for n in $$(grep ^zone named.conf.local | cut -f2 -d\"|sort) ; do echo "zone \"$$n\" {\n  type slave;\n  file \"$$n\";\n  masters {; };\n};\n" ; done > other/secondary.conf
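To see what the extraction loop in the Makefile produces, here is a standalone shell version run against a small made-up named.conf.local (the zone names are invented, and 192.0.2.1 is a placeholder for the primary server address that the post leaves for you to fill in):

```shell
# Demonstrate the zone-name extraction used in the Makefile above,
# against a made-up named.conf.local.
cat > named.conf.local <<'EOF'
zone "example.com" { type master; file "/etc/bind/example.com"; };
zone "example.org" { type master; file "/etc/bind/example.org"; };
EOF

# Same pipeline as the Makefile: grab the quoted zone names, sort them,
# and emit a slave-zone stanza for each.
for n in $(grep ^zone named.conf.local | cut -f2 -d\" | sort) ; do
    printf 'zone "%s" {\n  type slave;\n  file "%s";\n  masters { 192.0.2.1; };\n};\n\n' "$n" "$n"
done > secondary.conf

cat secondary.conf
```

The generated secondary.conf then contains one `type slave` stanza per zone, ready to be copied to the secondary server.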

Related posts:

  1. BIND Stats In Debian the BIND server will by default append statistics...
  2. DNSSEC reason=”verification failed; insecure key” I’ve recently noticed OpenDKIM on systems...
  3. A Basic IPVS Configuration I have just configured IPVS on a Xen server for...
Categories: Elsewhere

Jay L.ee: Clickable Background Takeover Ads

Planet Drupal - Mon, 09/05/2016 - 07:48

For the past few days, I wrote three blog posts on how to configure three modules: Drupal Background Images Module Configuration Manual, Drupal Background Images Formatter Module Configuration Manual & Drupal BackgroundField Module Configuration Manual

Today I'll finally reveal how to create clickable background takeover ads. I'll use Drupal as an example, but the concept itself should apply to just about any type of website. Are you ready? Here we go:

Tags: Drupal 7Drupal Planet
Categories: Elsewhere

Norbert Preining: ハリー・ポッターとアズカバンの囚人 (Harry Potter and the Prisoner of Azkaban)

Planet Debian - Mon, 09/05/2016 - 05:04

Yeah, another book finished in Japanese: the third Harry Potter book, Prisoner of Azkaban. It took me quite some time, but thanks to a good Kobo ebook reader and my own enriched dictionary, reading is getting more and more fun, even if the book is in Japanese.

I guess I don’t have to tell the story of the book, as it is sufficiently well known. Just a good recommendation to students of Japanese (at a somewhat higher level): get some books you like and start reading, the sooner the better. And read on an ebook reader with a good dictionary. This way your Japanese will improve quickly.

My next book will be よしもとばなな、デッドエンドの思い出 (Banana Yoshimoto, “Dead-End Memories”), as I think I actually prefer the original (English) Harry Potter and want to read Japanese authors in Japanese. Now if only Murakami would allow Japanese ebooks of his works …

Categories: Elsewhere

Russ Allbery: BookRiot's SF/F by Female Authors

Planet Debian - Mon, 09/05/2016 - 03:48

A list by Nikki Steele of the 100 best SF and fantasy novels by female authors (in her subjective take), published on BookRiot, has been making the rounds, with people noting which of those they've read. These crop up from time to time, and I've always been tempted to do the work to track the list over time the way that I track award winners. I had some free time this afternoon, so went ahead and set that up (although I badly need to refactor or rewrite a lot of my review posting code).

An extra advantage is that I can publish the list as a separate web page so that I don't spam RSS readers with a huge list.

The list, annotated with ratings and reviews where I've read the books, is under the reviews section of my web site. As I have time, I may add more lists. I'm also (slowly) working on the project of adding all the nominees for major awards and annotating those, since often the short lists contain a lot of interesting material too.

One big caveat for this list: Steele only lists the first book of series, and in many cases the first book isn't very good and is far from the best of the series. So you'll see some anomalous low ratings here for first books that improve later on.

Categories: Elsewhere

Junichi Uekawa: I'm using 256 color configuration in my terminal now but I need more.

Planet Debian - Mon, 09/05/2016 - 03:03
I'm using 256 color configuration in my terminal now but I need more. Colors look different between my emacs in X and inside terminal.

Categories: Elsewhere

Dirk Eddelbuettel: Rblpapi 0.3.4

Planet Debian - Sun, 08/05/2016 - 23:57

A new release of Rblpapi is now on CRAN. It provides a direct interface between R and the Bloomberg Terminal via the C++ API provided by Bloomberg Labs (but note that a valid Bloomberg license and installation is required).

This marks the fifth release since the package first appeared on CRAN last year. Continued thanks to all contributors for code, suggestions or bug reports. This release contains a lot of internal fixes by Whit, John and myself and should prove to be more resilient to 'odd' representations of data coming back. The NEWS.Rd extract has more details:

Changes in Rblpapi version 0.3.4 (2016-05-08)
  • On startup, the API versions of both the headers and the runtime are displayed (PR #161 and #165).

  • Documentation about extended futures roll notation was added to the bdh manual page.

  • Additional examples for overrides were added to bdh (PR #158).

  • Internal code changes make retrieval of data in ‘unusual’ variable types more robust (PRs #157 and #153)

  • General improvements and fixes to documentation (PR #156)

  • The bdp function now also supports an option verbose (PR #149).

  • The internal header Rblpapi_types.h was renamed from a lower-cased variant to conform with Rcpp Attributes best practices (PR #145).

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the Rblpapi page. Questions, comments etc. should go to the issue tickets system at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: Elsewhere

Pronovix: Brightcove Video Connect for Drupal 8 - Part 1: Architecture

Planet Drupal - Sun, 08/05/2016 - 22:19

The Drupal 8 version of the Brightcove Video Connect module was written from scratch in order to take advantage of the architectural changes in Drupal 8, especially the new Entity Data Model. Designed around the new entity system in Drupal 8, the new Brightcove module seamlessly integrates video publishing into the Drupal editorial workflow and interface. This alleviates the fragmented editorial experience typically associated with 3rd party video hosting services.

Categories: Elsewhere

DrupalCon News: Come by! Registration is open.

Planet Drupal - Sun, 08/05/2016 - 20:54

We are just kicking off registration - totes, and badges and t-shirts, oh my!  We will be open until 6:00pm today and will open up bright and early at 7:00am tomorrow.

We are located in the Hall G lobby of the New Orleans Convention Center.  Please note: when you enter, it is quite far down (about 1 mile/1.6km) from the main entrance, but just follow the purple signs; we are here waiting for you.


Categories: Elsewhere

Niels Thykier: Another Britney patchset

Planet Debian - Sun, 08/05/2016 - 20:52

I just submitted another patch series to improve Britney for review.  If accepted, they will probably be merged into master within 2 weeks. The changes this time are probably most exciting for people that run/maintain Britney.  Key highlights include:

  • Britney will be able to use a regular mirror (without partial suites) as data source
    • Previously you would have to decompress and merge the Packages/Sources for each component.
    • Partial suite support is still not added, but I hope to add it eventually.  I know it is a feature used by at least Ubuntu.
    • This change implies renaming some input files around (Dates, Urgency and BugsV files) as Britney expected these next to the Packages files.
  • More machine parsable facts added to “excuses.yaml”.  It will cover almost all excuses currently in use.
  • Britney will support two use cases for “faux packages” natively.
    • I hope to use this to eliminate our need to “inject” fake packages into Britney’s data source.

I would like to dwell a moment on the “faux packages”.  We have had a helper script that generates and injects fake packages (called “faux packages”) into the list of packages. They generally serve two purposes, which Britney will support:

  1. Whitelist of fake packages to satisfy dependencies of other packages.
    • These are generally stand-ins for non-free machine configuration packages, where the end-user system would also fetch packages from the vendor’s repository.
    • Packages relying on “faux packages” are generally not in “main” as Debian’s main component is required to be self-contained.
    • These are (still) called “faux packages” in/after the patch series.
  2. Ensuring that certain packages are present and installable in testing.
    • We have a lot of d-i related packages here to avoid accidental breakage of d-i.
    • These are now referred to as a “constraint” (assuming there is no bike-shedding over the name).

Since Britney will now distinguish between these two use cases, I also make Britney enforce the second use case slightly better.  Mind you, it can still be overruled by force-hints and BREAK_ARCHES, so there is still enough rope to hang yourself.


The other exciting part of this patch set (for me, at least) is that Britney will hopefully become simpler to deploy. No doubt there are still some missing features and paper cuts left, but I suspect we are not far from a workflow like:

  1. Fill out a template config file pointing Britney to your mirror
  2. Run britney -c britney.conf
  3. Make your archive kit update your target suite based on Britney’s output.
  4. Put step 2+3 in crontab/jenkins/task scheduler of choice
  5. Profit
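The hoped-for minimal deployment above can be sketched as a shell session. Note that the configuration keys below are illustrative placeholders, not necessarily Britney's real option names; the template config shipped with Britney is the authoritative reference.

```shell
# Sketch of the hoped-for minimal deployment; the config keys below are
# illustrative assumptions, not necessarily Britney's actual option names.
mkdir -p /tmp/britney-demo/output

# Step 1: fill out a config file pointing Britney at a regular mirror.
cat > /tmp/britney-demo/britney.conf <<'EOF'
MIRROR       = http://deb.debian.org/debian
SOURCE_SUITE = unstable
TARGET_SUITE = testing
OUTPUT_DIR   = /tmp/britney-demo/output
EOF

# Step 2 (on a real deployment, with a Britney checkout available):
#   britney -c /tmp/britney-demo/britney.conf
# Steps 3-4: have your archive kit consume Britney's output and put the
# run in cron/jenkins/your task scheduler of choice.
test -s /tmp/britney-demo/britney.conf && echo "config written"  # prints "config written"
```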

There will certainly be some features that require extra steps.  An example is the “anti rc-bugs regression” feature, which requires you to feed Britney the list of RC bugs for your source and target suites. But even without it, Britney would still protect your target suite from most installability issues.

Filed under: Debian, Release-Team
Categories: Elsewhere

Gunnar Wolf: Pyra, PocketC.H.I.P. — Not quite the same, but...

Planet Debian - Sun, 08/05/2016 - 20:49

Petter and Elena both talk enthusiastically about the Pyra. I am currently waiting for the shipment of my C.H.I.P. kit — I pre-ordered it while it was still in its Kickstarter phase, and got the PocketC.H.I.P. level.

It is clearly not the same as, nor equivalent to, the Pyra — the PocketC.H.I.P. is a convenient packaging for what is chiefly a system-on-a-chip; the C.H.I.P. is a small system by today's standards (single-core ARM, 512MB RAM, not meant to be expanded), but it still looks quite usable as a very portable Unix system. Oh, and of course — it's also Debian by default.

I got quite interested in what the Pyra was like. However, the pricing does not make much sense to me. OK, the Pyra is quite a bigger machine, but... While the PocketC.H.I.P. costs officially US$70 (and before June, according to the site, US$50), the Pyra starts at €500 (plus taxes)... It is just too much!

Anyway, I hope to have mine in time to go to DebConf. I also hope Petter and/or Elena can make it this year. And I hope we can compare the systems. I guess the Pyra will sit closer to a regular laptop... But anyway, my last two laptops have been at the bottom of the price scale (both from the Acer Aspire One line). I bought both for around US$300, used the first one as my main laptop for over five years, and have been three years with the current one, completely happy.

Categories: Elsewhere

Arturo Borrero González: Continuous integration for the Debian HA stack

Planet Debian - Sun, 08/05/2016 - 20:24

Good news. The Debian Continuous Integration system is just awesome.

If the developer of a package prepares and declares tests for it, this CI system will trigger those tests from time to time.
These tests are intended to check packages 'as installed', i.e., to test what the end user is going to use in a final system.

The CI system is being improved, and it now supports two architectures: amd64 and arm64. It also now uses LXC as a backend, so the level of isolation available for running tests is very good, allowing us developers to write even more elaborate tests.
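For context, packages declare these 'as installed' tests via autopkgtest (DEP-8) metadata under debian/tests. Below is a minimal sketch of what such a declaration might look like; the test name, script body and service are illustrative assumptions, not taken from any particular package.

```shell
# Illustrative DEP-8 test declaration for a hypothetical package.
# The field names (Tests, Depends, Restrictions) are real DEP-8 fields;
# the test itself is an assumption for demonstration.
mkdir -p /tmp/demo-pkg/debian/tests

cat > /tmp/demo-pkg/debian/tests/control <<'EOF'
Tests: smoke
Depends: @
Restrictions: needs-root, isolation-container
EOF

cat > /tmp/demo-pkg/debian/tests/smoke <<'EOF'
#!/bin/sh
# Check the package 'as installed': start the daemon with the
# default Debian configuration and verify it is running.
set -e
service corosync start
service corosync status
EOF
chmod +x /tmp/demo-pkg/debian/tests/smoke

grep '^Tests:' /tmp/demo-pkg/debian/tests/control  # prints "Tests: smoke"
```

Locally, such a test would be run against a built package with the autopkgtest tool, using an LXC container as the testbed.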

This is the case for the HA stack (pacemaker, corosync, crmsh...), and the good news is that Debian is now continuously testing these packages.

At the time of this blog post, these are the tests:

  • corosync: start the system service (the daemon) with the default Debian configuration.
  • pacemaker: start the system services (corosync and pacemaker daemons) with the default Debian configuration.
  • crmsh: start the system services (corosync, pacemaker), then add a basic resource (a virtual IP address) and run some tests on it (start the resource, stop, delete...).
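As a sketch of the kind of crmsh commands such a basic-resource test runs through (the resource name and address are illustrative, and on a real cluster node each command would be executed against the live corosync/pacemaker daemons, so here the script only assembles and prints them):

```shell
# Build the crmsh commands a basic virtual-IP resource test would run.
# The resource name and address are illustrative; on a cluster node you
# would execute each command instead of printing it.
VIP_NAME=test-vip
VIP_ADDR=192.0.2.10   # documentation-range address

run() {
    # On a real cluster node, replace 'echo' with direct execution.
    echo "crm $*"
}

run configure primitive "$VIP_NAME" ocf:heartbeat:IPaddr2 \
    params ip="$VIP_ADDR" cidr_netmask=32
run resource stop "$VIP_NAME"
run resource start "$VIP_NAME"
run configure delete "$VIP_NAME"
```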
Check the CI pages for corosync, pacemaker and crmsh for more details.

The basic tests for crmsh were contributed by myself; you can check the two [1][2] commits in the git packaging repo.

This means that we should be able to detect and fix issues in these software packages very early in the development cycle (very cheap and easy compared to fixing things once packages migrate to testing or even stable).

For all users of HA clusters on Debian, this is definitely good news. The HA stack is now in a fairly different state than in previous years :-) A big step forward.

Most of my Netfilter packages also implement these tests, but that subject belongs in another blog post.
Categories: Elsewhere


Subscribe to jfhovinne aggregator