Michal Čihař: New free software projects on Hosted Weblate

Planet Debian - Fri, 14/10/2016 - 18:00

Hosted Weblate also provides free hosting for free software projects. I'm quite slow in processing hosting requests, but when I get to them, I process them in a batch and add several projects at once.

This time, the newly hosted projects include:

Filed under: Debian English SUSE Weblate


Mike Gabriel: [Arctica Project] Release of nx-libs (version

Planet Debian - Fri, 14/10/2016 - 17:47

NX is a software suite which implements very efficient compression of the X11 protocol. This increases performance when using X applications over a network, especially a slow one.

NX (v3) was originally developed by NoMachine and has been Free Software ever since. Since NoMachine obsoleted NX (v3) some time back in 2013/2014, maintenance has been continued by a versatile group of developers. The work on NX (v3) continues under the project name "nx-libs".

Release Announcement

On Thursday, Oct 13th, a new version of nx-libs was released [1].

This release brings a major backport of libNX_X11 to the status of libX11 1.3.4 (as provided by X.org). On top of that, all CVE fixes provided for libX11 by the Debian X11 Strike Force and the Debian LTS team got cherry-picked to libNX_X11, too. This big chunk of work has been performed by Ulrich Sibiller and there is more to come. We currently have a pull request pending review that backports more commits from libX11 (bumping the status of libNX_X11 to the state of libX11 1.6.4, which is the current HEAD on the X.org Git site).

Another big clean-up performed by Ulrich is the split-up of the XKB code that was symlinked between libNX_X11 and nx-X11/programs/Xserver. This introduces some code duplication, but allows maintaining the nxagent Xserver code and the libNX_X11 code separately.

In the upstream ChangeLog you will find some more items around code clean-ups and .deb packaging; see the diff [2] on the ChangeLog file for details.

So for this release, a very special and massive thanks goes to Ulrich Sibiller!!! Well done!!!

Change Log

A list of recent changes since the previous release can be obtained from here.

Known Issues

This version of nx-libs is known to segfault when LDFLAGS / CFLAGS have the -pie / -fPIE hardening flags set. This issue is currently under investigation.

Binary Builds

You can obtain binary builds of nx-libs for Debian (jessie, stretch, unstable) and Ubuntu (trusty, xenial) via these apt-URLs:

Our package server's archive key is: 0x98DE3101 (fingerprint: 7A49 CD37 EBAE 2501 B9B4 F7EA A868 0F55 98DE 3101). Use this command to make APT trust our package server:

wget -qO - http://packages.arctica-project.org/archive.key | sudo apt-key add -
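
To then enable the repository itself, an APT source entry pointing at the package server is needed. The real apt-URLs are the ones mentioned above; the path and suite in this sketch are hypothetical placeholders:

# Hypothetical sources.list entry -- substitute the actual apt-URL and
# suite for your distribution from the links above:
deb http://packages.arctica-project.org/debian jessie main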

The nx-libs software project brings to you the binary packages nxproxy (client-side component) and nxagent (nx-X11 server, server-side component).

Ubuntu developers, please note: we have added nightly builds for the latest Ubuntu release to our build server. So far this has been Ubuntu 16.10, but we will soon drop 16.10 support in nightly builds and add 17.04 support.


Antoine Beaupré: Managing good bug reports

Planet Debian - Fri, 14/10/2016 - 17:11

Bug reporting is an art form that is too often neglected in software projects. Bug reports allow contributors to participate without deep technical knowledge and at the same time provide a crucial space for developers to be made aware of issues with their software that they could not have foreseen or found themselves, for lack of resources, variety or imagination.

Prior art

Unfortunately, there are rarely good guidelines for submitting bug reports. Historically, people have pointed towards How to report bugs effectively or How to ask questions the smart way. While those guides can be useful for motivated people and may seem attractive references for project managers, they suffer from serious issues:

  • they are written by technical people, for non-technical people
  • as a result, they have a deeply condescending attitude such as calling people "stupid" or various animal names like "mongoose"
  • they are also very technical themselves: one starts with a copyright notice and a changelog, the other uses magic words like "Core dumps" and $Id$
  • they are too long: sgtatham's is about 3600 words long, esr's is even longer at about 11800 words. Those texts will take an average reader about 20 to 60 minutes to read, according to research

Individual projects have their own guides as well. Linux has the REPORTING-BUGS file, a much shorter 1200 words that can be read in under 5 minutes, provided that you can understand the topic at hand. Interestingly, that guide refers to both esr's and sgtatham's guidelines, which means, in the degenerate case where the user hasn't had the "privilege" of reading esr's prose already, they will have an extra hour and a half of reading to do to have honestly followed the guidelines before reporting the bug.

I often find good documentation in the Tails project. Their bug reporting guidelines are easily accessible and quick to read, although they still might be too technical. It could be argued that you need to get technical at some point to get that information out, of course.

In the Monkeysign project, I have started a bug reporting guide that doesn't yet address all those issues. I am considering writing a new guide, but I figured I would look at other people's work and get feedback before writing my own standard.

What's the point?

Why have those documents been written? Are people really expected to read them before seeking help? It seems to me unlikely that someone would:

  1. be motivated enough to do something about a broken part of their computer
  2. figure out they can do something about it
  3. read a fifteen-thousand-word novel about how to report a bug...
  4. just to finally write a 20-line bug report that has no warranty of support attached to it

And if I were a paying customer, I wouldn't want to be forced to waste my time reading that prose either: it's your job to help me fix your broken things, not the reverse. As someone doing consulting these days, I totally understand: it's not you, the user; it's us, the developers, who have a problem. We have been socialized through computers, and it makes us weird and obtuse, but that's no excuse, and we need to clean up our act.

Furthermore, it's surprising how often we get (and make!) bug reports that are difficult to use. The Monkeysign project is very "technical" and I had expected that the bug reports I got would be well written, with ways to reproduce and so on, but I received bug reports that were all over the place, had no way of reproducing, or were simply incomplete. Those three bug reports were filed by people that I know to be very technically capable: one is a fellow Debian developer, the second had filed a good bug report 5 days before, and the third one is a contributor that has sent good patches before.

In all three cases, they knew what they were doing. Those three people probably read the guidelines mentioned in the past. They may have even read the Monkeysign bug reporting guidelines as well. I can only explain those bug reports by the lack of time: people thought the issue was obvious, that it would get fixed rapidly because, obviously, something is broken.

We need a better way.

The takeaway

What are those guides trying to tell us?

  1. ask questions in the right place
  2. search for similar questions and issues before reporting the bug
  3. try to make the developers reproduce the issues
  4. failing that, try to describe the issue as well as you can
  5. write clearly, be specific and verbose yet concise

There are obviously contradictions in there, like sgtatham telling us to be verbose while esr tells us, basically, not to be. There is definitely a tension there, and there are many, many more details about how great bug reports can be if done properly.

I tend towards the side of terseness in our descriptions: people that know how to be concise will be, and people that don't will most likely not learn by reading a 12,000-word novel that, itself, didn't manage to be parsimonious.

But I am willing to allow for verbosity in bug reports: I prefer too many details instead of missing a key bit of information.

Issue trackers

Step 1 is our job: we should send people in the right place, and give them the right tools. Monkeysign used to manage bugs with bugs-everywhere and this turned out to be a terrible idea: you had to understand git and bugs-everywhere to file any bug reports. As a result, there were exactly zero bug reports filed by non-developers during the whole time BE was used, although some bugs were filed in the Debian Bugtracker.

So have a good bug tracker. A mailing list or email address is not a good bug tracker: you lose track of old issues, and it's hard for newcomers to search the archives. It does have the advantage of having a unified interface for the support forum and bug tracking, however.

Redmine, Gitlab, Github and others are all decent-enough bug trackers. The key point is that the issue tracker should be publicly available, and users should be able to register easily to file new issues. You should also be able to mass-edit tickets and users should be able to discover the tracker's features easily. I am sorry to say that the Debian BTS somewhat falls short on those two features.

Step 2 is a shared responsibility: there should be an easy way to search for issues, and we should help the user looking for similar issues. Stackexchange sites do an excellent job at this, by automatically searching for similar questions while you write your question, suggesting similar ones in an attempt to weed out duplicates. Duplicates still happen, but they can then clearly be marked and linked with a distinct mechanism. Most bug trackers do not offer such high level functionality, but should, so I feel the fault lies more on "our" end than at the user's end.

Reproducing the environment

Step 3 and 4 are more or less the user's responsibility. We can detail in our documentation how to clearly share the environment where we reproduced the bug, for example, but in the end, the user decides if they want to share that information or not.

In Monkeysign, I have finally implemented joeyh's suggestion of shipping the test suite with the program. I can now tell people to run the test suite in their environment to see if this is a regression that is specific to their environment - so a known bug, in a way - or a novel bug for which I can look at writing a new unit test. I also include way more information about the environment in the --version output, an idea I brought forward in the borg project to ease debugging. That way, people can just send the output of monkeysign --test and monkeysign --version, and I have a very good overview of what is happening on their end. Of course, Monkeysign also supports the usual --verbose and --debug flags that users should enable when reproducing issues.
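
As a rough illustration of the --version idea, here is a minimal sketch of the pattern only; Monkeysign itself is written in Python, and the tool name and fields below are hypothetical:

<?php
// Sketch: enrich --version output with environment details so bug
// reporters can paste a single command's output. Hypothetical CLI,
// not Monkeysign's actual implementation.
if (in_array('--version', $argv)) {
    echo "mytool 1.0\n";
    echo "PHP: " . PHP_VERSION . "\n";
    echo "OS: " . php_uname() . "\n";
    echo "Locale: " . (getenv('LANG') ?: 'unknown') . "\n";
    exit(0);
}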

Another idea is to report bugs directly from the application. We have all seen Firefox or other software have automatic bug reporting tools, but somehow those seem unsatisfactory for a user: we have no feedback of where the report goes, or whether it's followed up on. It is useful for larger projects to get statistical data, but not so useful for users in the short term.

Monkeysign tries to handle exceptions in the code in a graceful way, but could do better. We use a small library to handle exceptions, but that library has since been improved to file bugs directly against the Github project. This assumes the user is logged into Github, but it is nice to pre-populate bug reports with the relevant information up front.

Issue templates

In the meantime, to make sure people provide enough information, I have now moved a lot of the bug reporting guidelines to a separate issue template. That issue template is available through the issue creation form now, although it is not enabled by default, a weird limitation of Gitlab. Issue templates are available in Gitlab and Github.

Issue templates somewhat force users into a straitjacket: there is already something to structure their bug report. Those could be distinct form elements that have to be filled in, but I like the flexibility of the template, and the possibility for users to escape the formalism and just plead for help in their own way.

Issue guidelines

In the end, I opted for a short few paragraphs in the style of the Tails documentation, including a reference to sgtatham, as an optional future reference:

  • Before you report a new bug, review the existing issues in the online issue tracker and the Debian BTS for Monkeysign to make sure the bug has not already been reported elsewhere.

  • The first aim of a bug report is to tell the developers exactly how to reproduce the failure, so try to reproduce the issue yourself and describe how you did that.

  • If that is not possible, try to describe what went wrong in detail. Write down the error messages, especially if they have numbers.

  • Take the necessary time to write clearly and precisely. Say what you mean, and make sure it cannot be misinterpreted.

  • Include the output of monkeysign --test, monkeysign --version and monkeysign --debug in your bug reports. See the issue template for more details about what to include in bug reports.

If you wish to read more about issues regarding communication in bug reports, you can read How to Report Bugs Effectively, which takes around 20 to 30 minutes.

Unfortunately, short of rewriting sgtatham's guide, I do not feel there is much more we can do as a general guide. I find esr's guide to be too verbose and commanding, so sgtatham it will be for now.

The prose and literacy

In the end, there is a fundamental issue with reporting bugs: it assumes our users are literate and capable of writing amazing prose that we will enjoy reading as much as the latest J.K. Rowling novel (if you're into that kind of thing). That is just an unreasonable expectation: some of your users don't even speak the same language as you, let alone read or write it. This makes for challenging collaboration, to say the least. This is where automated reporting makes sense: it doesn't require the user's intervention, and the communication is mediated by machines, without humans and their pesky culture.

But we should, as maintainers, "be liberal in what we accept and conservative in what we send". Be tolerant, and help your users in fixing their issues. It's what you are there for, after all.

And in the end, we all fail the same way. In an attempt to improve the situation on bug reporting guides, I seem to have written a 2000-word short story myself, one that will have taken up a hopefully pleasant 10 minutes of your time at minimum. Hopefully I will have succeeded at being clear, specific, verbose and concise all at once, and I look forward to your feedback on how to improve our bug reporting culture.


Acquia Developer Center Blog: The White House is Open Sourcing a Drupal Chat Module for Facebook Messenger

Planet Drupal - Fri, 14/10/2016 - 17:02

Today, Jason Goldman, Chief Digital Officer of the White House, announced that the White House is open-sourcing a Drupal module that will enable Drupal 8 developers to quickly launch a Facebook Messenger bot.

The goal, Goldman said, is to make it easier for citizens to message the President via Facebook.

This is the first-ever government bot on Facebook Messenger.

Tags: acquia drupal planet

ThinkShout: Content Modeling in Drupal 8

Planet Drupal - Fri, 14/10/2016 - 17:00

In many modern frameworks, data modeling is done by building out database tables. In Drupal, we use a web-based interface to build our models. This interface makes building the database accessible for people with no database experience. However, this easy access can lead to overly complex content models because it’s so easy to build out advanced structures with a few hours of clicking. It’s surprising how often Drupal developers are expected to be content modeling experts. Rachel Lovinger wrote this great overview of content modeling for the rest of us who aren’t experts yet.

Data Modeling Goal

Our goal when modeling content in Drupal is to build out the structure that will become our editor interface and HTML output. We also need to create a model that supports the functionality needed in the website. While accomplishing this, we want to reduce the complexity of our models as much as possible.

Getting Started

One of the first things to do when building a Drupal site is to build content types. So, before you start a site build, start with either a content model or a detail page wireframe. This spreadsheet from Palantir will help you. The home page design may look amazing, but it's unhelpful for building out content types. Get the detail pages before you start building.

Why Reduce Complexity?

The more content types you create, the more effort it will take to produce a site. Furthermore, the more types you have, the more time it will take to maintain the site in the future. If you have 15 content types and need to make a site-wide change, you need to edit 15 different pages.

The more pages you need to edit, the more mistakes you will make in choosing labels, settings, and formatters. Lastly, content can’t easily be copied from one type to another, which makes moving content around your site harder when there are many content types. So, the first thing you’ll want to do with your content model is collapse your types into as few types as feasible. How many is that?

5 Content Types is Enough

Drupal has many built-in entities like files, taxonomy, users, nodes, comments, and config. So, the vast majority of sites don't need more than 5 content types. Instead of adding a new content type for every design, look for ways to reuse existing types by adding fields and applying layouts to those fields.

Break Up the Edit Form

Drupal 8 allows you to have different form displays for a single content type. With either Form Mode Control or Form Mode Manager, you can create different edit experiences for the same content type without overloading the admin interface.
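
For a feel of what sits underneath those modules, here is a minimal sketch using Drupal 8's entity display API; the form display ID and field name are hypothetical:

<?php
// Sketch: remove a heavy field from a secondary form display, so that
// form mode presents a simpler edit form. IDs below are hypothetical.
$display = \Drupal::entityTypeManager()
  ->getStorage('entity_form_display')
  ->load('node.article.simple_edit');
if ($display) {
  $display->removeComponent('field_complex_metadata')->save();
}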

By reducing the complexity of the content model, we decrease maintenance cost, improve the consistency of the website, and simplify the editing experience. Now that you’ve got some content modeling basics, look for opportunities to reduce and reuse content types in your Drupal projects. Content editors will thank you.


Mediacurrent: Friday 5: 5 Advantages of Component-Driven Theming

Planet Drupal - Fri, 14/10/2016 - 16:44

Happy Friday and thanks for tuning into Episode 17, it's good to be back!


Jonathan Dowland: Hi-Fi Furniture

Planet Debian - Fri, 14/10/2016 - 16:23

sadly obsolete

For the last four years or so, I've had my Hi-Fi and the vast majority of my vinyl collection stored in a self-contained, mildly-customized Ikea unit. Since moving house this has been in my dining room—which we have always referred to as the "play room", since we have a second dining room in which we actually dine.

The intention for the play room was for it to be the room within which all our future children would have their toys kept, in an attempt to keep the living room from being overrun with plastic. The time has thus come for my Hi-Fi to come out of there, so we've moved it to our living room. Unfortunately, there's not enough room in the living room for the Ikea unit: I need something narrower for the space available.

via IkeaHackers.net

In the spirit of my original hack, I started looking at what others might have achieved with Ikea components. There are some great examples of open-style units built out of the (extremely cheap) Lack coffee tables, such as this ikeahackers article, but I'd prefer something more enclosed. One problem I've had with the Expedit unit was my cat trying to scratch the records, so I ended up putting framed records at the front to cover the spines of the records within. If I were keeping the unit, I'd look at fitting hinges (another ikeahackers article).

Aside from hacked Ikea stuff, there are a few companies offering traditional enclosed Hi-Fi cabinets. I'm going to struggle to fit both the equipment and a subset of records into these, so I might have to look at storing them separately. In some ways that makes life easier: the records could go into a 1x4 Ikea KALLAX unit, leaving the amp and deck to be housed somewhere else. Perhaps I could look at a bigger unit for under the TV.

My parents have a nice Hi-Fi unit that pretends to be a chest of drawers. I'm fairly sure my Dad custom-built it, as it has a hinged top to provide access to the turntable and I haven't seen anything like that on the market.

That brings me on to thinking about other AV things I'd like to achieve in the living room. I've always been interested in exploring surround sound, but my initial attempt in my prior flat did not go well, either because the room was not terribly suited acoustically, or because the Pioneer unit I bought was rubbish, or both. It seems that there aren't really AV receivers designed to satisfy people wanting to use them in both a Hi-Fi and a home cinema setting. I could stick to stereo and run the TV into my existing (or a new) amplifier, subject to some logistics around wiring. A previous house owner ran some phono cables under the hardwood flooring from the TV alcove to the opposite side of the fireplace, which might give me some options.

There's also the world of wireless audio, Sonos etcetera. Realistically, the majority of my music is digital nowadays, and it would be good to be able to listen to it conveniently in the house. I've heard good reports on the entry-level Sonos stuff, but they seem to be mono, and even the more high-end ones with lots of drivers have very little separation. I did buy a Chromecast Audio on a whim recently, but I haven't looked at it much yet: perhaps that's part of the solution.

So, lots of stuff in the melting pot to figure out here!


Drupal @ Penn State: Increasing accessibility empathy through simulation

Planet Drupal - Fri, 14/10/2016 - 16:16

A few months ago I did a post showing ELMS:LN's initial work on the A11y module, which provides blocks for improving as well as simulating accessibility conditions. After lots of testing, I'm marking a full release of the project, which we've been using in ELMS:LN for months (as dev).

This latest release includes many bug fixes and, most notably, these new features:


Drupal @ Penn State: Building conversational interfaces in Drupal

Planet Drupal - Fri, 14/10/2016 - 16:16

I know Dries caused a lot of smiles when he used Amazon Echo and Drupal 8 to be notified about "Awesome Sauce" going on sale. The future is bright and requires increasingly more ways of engaging with technology. But what if you wanted to start having a conversation without an Echo? What if we wanted to progressively enhance the browser experience to include lessons learned from conversational input technologies?


Drupal @ Penn State: How to export fields from one content type to another

Planet Drupal - Fri, 14/10/2016 - 16:16

The title kind of explains it all. Check out the screencast for a quick demo on how to do it.


Drupal @ Penn State: #Drupal GovDay: #Purespeed talk

Planet Drupal - Fri, 14/10/2016 - 16:16

This is a recording of a presentation I gave at Drupal Gov Day called Purespeed. There are many posts on this site tagged with purespeed if you want to dig into deeper detail about anything mentioned in this talk. This talk consolidates a lot of lessons learned in optimizing Drupal to power the ELMS Learning Network. I talk through Apache, PHP, MySQL, front end, back end, module selection, cache bin management, and other optimizations in this comprehensive full-stack tuning talk.


Drupal @ Penn State: Web service requests via snakes and spiders

Planet Drupal - Fri, 14/10/2016 - 16:16

First off, hope you are all enjoying Drupalcon, super jealous.

It's been almost three months since I wrote about Creating secure, low-level bootstraps in D7, the gist of which is skipping index.php when you make webservice calls so that Drupal doesn't have to bootstrap as high. Now that I've been playing with this, we've started to work on a series of simplified calls that can propagate data across the network in different ways.


Daniel Silverstone: Gitano - Approaching Release - Changes

Planet Debian - Fri, 14/10/2016 - 15:30

Continuing on from the previous article, here is a (probably incomplete) list of the critical changes to Gitano which have been, or will be, worked on during the run toward a 1.0 release. Each of these will have a blog posting to discuss what the changes mean for current and future users. Sometimes I'll aggregate postings, sometimes I won't.

The following are some highlights from the past little while of development which has been undertaken by Richard and myself. Each item is, I feel, important enough to warrant commentary, even for those who already use Gitano.

  • Lace now supports a sub-define syntax: [foo bar] which makes for simpler rulesets.
  • Gitano no longer creates auto_user_XXX and auto_group_XXX Lace predicates
  • Gitano no longer supports "basic" simple matches of the form user foo but instead requires a match kind such as group prefix bar-.
  • Gitano is gaining i18n/l10n support; though it will not be complete for version 1.0, the basics will be in place.
  • Gitano is gaining a much larger integration test suite using yarn.
  • Deprecated commands have now been removed from Gitano. (e.g. no more set-owner)
  • Gitano has gained PGP/GPG signature verification for commits and tags.

Any number of smaller things have been done which fall below some arbitrary barrier for telling you about. If you're aware of any of them and feel they are worthwhile telling the world about, then please prod me and I'll add an article to the series.

Finally it's worth noting that the effort to get all this into Debian Stretch proceeds apace. Of the eight packages needed, at the time of posting: one was already in and has been updated (luxio), three have been accepted into Debian already (supple, clod, lua-scrypt), two are in NEW (gall and lace), and that leaves the newest library (tongue) and then Gitano itself still to go. The Debian FTP team have been awesome in helping me with all this, so thanks go to them.


OSTraining: Use the Drupal Hacked! module for peace of mind

Planet Drupal - Fri, 14/10/2016 - 13:21

Hacked! is an extremely powerful Drupal module, available for both Drupal 7 and 8, that allows you to check Drupal's modules and core against the versions stored on Drupal.org to make sure they have not been tampered with. This is great for non-coders who want to ensure their modules are safe. It will not check any subthemes or custom modules that do not exist on Drupal.org.


Unimity Solutions Drupal Blog: My DrupalCon Dublin Board Retreat!

Planet Drupal - Fri, 14/10/2016 - 07:10

DrupalCon is always special! I came back from DrupalCon feeling happy and proud to be part of a caring Board, a passionate DA team and, last but not least, the wonderful Drupal community.


Michal Čihař: motranslator 2.0

Planet Debian - Fri, 14/10/2016 - 06:00

Yesterday, motranslator 2.0 was released. As the version change suggests, there are some important changes under the hood.

Full list of changes:

  • Consistently use camelCase in API
  • No longer relies on eval()
  • Depends on symfony/expression-language for calculations

As you can see, the SimpleMath library announced yesterday is not used in the end; I've moved to an existing library instead. Somehow I had misunderstood the library's description and thought that it behaves like PHP, which would be a problem for us (or would bring the need to add parentheses around the ternary operator, as we did with eval()). But this is not the case: the ternary operator behaves sanely in ExpressionLanguage, so we're good to use it.
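
For illustration, evaluating a gettext plural formula through ExpressionLanguage looks roughly like this (a sketch, not motranslator's actual internals; the Czech formula is a standard gettext example):

<?php
require 'vendor/autoload.php';

use Symfony\Component\ExpressionLanguage\ExpressionLanguage;

// Czech has three plural forms; nested ternaries pick one based on n.
$formula = '(n == 1) ? 0 : ((n >= 2 && n <= 4) ? 1 : 2)';
$language = new ExpressionLanguage();
foreach (array(1, 3, 5) as $n) {
    // Prints: 1 => 0, 3 => 1, 5 => 2
    echo $n, ' => ', $language->evaluate($formula, array('n' => $n)), "\n";
}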

Anyway, if you were using MoTranslator, it might be a good idea to upgrade and check whether the API changes affect you.

Filed under: Debian English phpMyAdmin


Wouter Verhelst: Webserver certificate authentication with intermediate CAs

Planet Debian - Thu, 13/10/2016 - 14:35

Authenticating HTTPS clients using TLS certificates has long been very easy to do. The hardest part, in fact, is to create a PKI infrastructure robust enough so that compromised (or otherwise no longer desirable) certificates cannot be used anymore. While setting that up can be fairly involved, it's all pretty standard practice these days.

But authentication using a private CA is laughably easy. With apache, you just do something like:

SSLCACertificateFile /path/to/ca-certificate.pem
SSLVerifyClient require

...and you're done. With the above two lines, apache will send a CertificateRequest message to the client, prompting the client to search its local certificate store for certificates signed by the CA(s) in the ca-certificate.pem file, and using the certificate thus found to authenticate against the server.

This works perfectly fine for setting up authentication for a handful of users. It will even work fine for a few thousand users. But what if you're trying to set up website certificate authentication for millions of users? Well, in that case, storing everything in a single CA just doesn't work, and you'll need intermediate CAs to keep things from falling flat on their face.

Unfortunately, the standard does not state what should happen when a client certificate is signed by an intermediate certificate, and the distinguished name(s) in the CertificateRequest message are those of the certificate(s) at the top of the chain rather than the intermediates. Previously, browsers would not just send out the client certificate, but also send along the certificate that it knew to have signed that client certificate, and so on until it found a self-signed certificate. With that, the server would see a chain of certificates that it could verify against the root certificates in its local trust store, and certificate verification would succeed.

It would appear that browsers are currently changing this behaviour, however. With the switch to BoringSSL, Google Chrome on the GNU/Linux platform stopped sending the signing certificates, and instead only sends the leaf certificate that it wants to use for certificate authentication. In the bug report, Ryan Sleevi explains that while the change was premature, the long-term plans are for this to be the behaviour not just on GNU/Linux, but on Windows and macOS too. Meanwhile, the issue has been resolved for Chrome 54 (due to be released), but there's no saying for how long. As if that was not enough, the new version of Safari as shipped with macOS 10.12 has also stopped sending intermediate certificates, and expects the web server to be happy with just receiving the leaf certificates.

So, I guess it's safe to say that when you want to authenticate a client in a multi-level hierarchical CA environment, you cannot just hand the webserver a list of root certificates and expect things to work anymore; you'll have to hand it the intermediate certificates as well. To do so, you need to modify the SSLCACertificateFile parameter in the apache configuration so that it contains not only the root certificate, but all the intermediate certificates as well. If your list of intermediate certificates is rather long, it may improve performance to use SSLCACertificatePath and the c_rehash tool to create a hashed directory with certificates rather than a PEM file, but in essence, that should work.
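
For example, the hashed-directory variant could look like this (paths are placeholders; c_rehash is OpenSSL's standard tool for creating the hash symlinks):

# Run once, and again whenever the certificates change:
#   c_rehash /etc/apache2/client-ca/
SSLCACertificatePath /etc/apache2/client-ca/
SSLVerifyClient require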

While this works, the problem with doing so is that now the DNs of all the intermediate certificates are sent along with the CertificateRequest message to the client, which (again, if your list of intermediate certificates is rather long) may result in performance issues. The fix for that is fairly easy: add a line like

SSLCADNRequestFile /path/to/root-certificates.pem

where the file root-certificates.pem contains the root certificates only. This file will tell the webserver which certificates should be announced in the CertificateRequest message, but the SSLCACertificateFile (or SSLCACertificatePath) configuration item will still be used for actual verification of the certificates in question. Note, though, that the root certificates apparently also need to be available in the SSLCACertificatePath or SSLCACertificateFile configuration; if they are not, authentication seems to fail, although I haven't yet found out why.
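
Putting the pieces together, a sketch of the resulting configuration (paths are placeholders):

# Announce only the root DNs in the CertificateRequest message...
SSLCADNRequestFile /path/to/root-certificates.pem
# ...but verify received chains against roots plus all intermediates.
SSLCACertificateFile /path/to/all-certificates.pem
SSLVerifyClient require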

I've set up a test page for all of those "millions" of certificates, and it seems to work for me. If you're trying to use one of those millions of certificates against your own webserver, or have a similar situation with a different set of certificates, you might want to make the above changes, too.


Aurelien Navarre: How to find PHP code in Drupal nodes

Planet Drupal - Thu, 13/10/2016 - 11:49

Before Drupal 8 was released, the PHP Filter module was part of Drupal core and allowed site builders and developers to create a PHP filter text format. While very convenient for developers and themers, this was also very risky as you could easily introduce security or even performance issues on your site, as a result of executing arbitrary PHP code.

What's the use case for injecting PHP code in content anyway?

There never is a truly good reason to do so, except when you're developing the site and want to quickly test something. Most of the time, using PHP in content is the result of laziness, lack of time (it's easier to add raw PHP directly than to build a custom module) or lack of Drupal API knowledge. PHP Filter is most often used to inject logic into nodes or blocks. As horrible as it sounds, there are very interesting (and smart!) use cases people have come up with, and you have to respect the effort. But this is just not acceptable: you should always advise a clear separation of concerns and use the Drupal API in every instance.
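
As a concrete illustration of that separation of concerns, display logic like "show an ad after the body" (the first item in the list below) belongs in a small module rather than in a node body. A sketch for Drupal 7, with a hypothetical module name:

<?php
// mymodule.module -- hypothetical sketch of doing display logic
// through the API instead of pasting PHP into a node body.
function mymodule_node_view($node, $view_mode, $langcode) {
  if ($node->type == 'article' && $view_mode == 'full') {
    $node->content['mymodule_ad'] = array(
      '#markup' => '<div class="ad">' . t('Advertisement') . '</div>',
      '#weight' => 100,
    );
  }
}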

In the past 5 years I've seen things such as:

  • Creating logic for displaying ads after the body
  • Injecting theming elements on the page
  • Redirecting users via drupal_goto() which was breaking cron and search indexing
  • Using variable_set() to store data on node_view()
  • Including raw PHP files
  • ...

The list goes on and on and on.

After heated discussions, and because it was far too easy for users to shoot themselves in the foot, it was finally decided to remove the module from core in Drupal 8. But as the usage statistics page for Drupal core shows, we still have more than 1 million Drupal 6 and 7 sites out there that are potentially using it.

If you're still building Drupal 7 sites or if you're taking over maintaining a Drupal 6 or 7 site, it's thus your responsibility to ensure no PHP code is being executed in nodes, blocks, comments, views, etc.

Determine if the PHP text format is in use

So, before you start wondering if you have an issue to fix, let's find out if the PHP module is enabled.

mysql> SELECT name FROM system WHERE name = 'php';
+------+
| name |
+------+
| php  |
+------+
1 row in set (0.00 sec)

Now, we need to confirm there is indeed a PHP filter text format on your site. You can use the Security Review module, navigate through the Drupal UI, or query MySQL, which is preferred here and later on because it gives us the granularity we need.

mysql> SELECT format,name,status FROM filter_format WHERE format="php_code";
+----------+----------+--------+
| format   | name     | status |
+----------+----------+--------+
| php_code | PHP code |      1 |
+----------+----------+--------+
1 row in set (0.00 sec)

When you do have the php_code text format in use on a site, then you need to start your investigation. In this post we'll focus only on nodes. But the same logic applies for all entities.

Audit all nodes with the php_code text format

In the example below we only have 4 nodes, which means php_code was used only where it was required. But it might very well be that all nodes on a site use the PHP text filter by default. Tracking down issues would then become more challenging. Worse, removing the text filter entirely would be a very time-consuming auditing task, as you might not know what is or isn't going to break when you make the change.

mysql> SELECT nid,title,bundle,entity_type FROM field_data_body LEFT JOIN node ON node.nid=field_data_body.entity_id WHERE body_format='php_code';
+------+---------------+---------+-------------+
| nid  | title         | bundle  | entity_type |
+------+---------------+---------+-------------+
| 7571 | Test nid 7571 | article | node        |
|  538 | Test nid 538  | page    | node        |
| 5432 | Test nid 5432 | article | node        |
| 1209 | Test nid 1209 | article | node        |
+------+---------------+---------+-------------+

Find PHP code in nodes

Now that we know which nodes have the php_code text filter set, it's easy to find out if there's indeed PHP code in them, and if it's breaking the site in any way, causing performance troubles, or introducing a security hole.

mysql> SELECT body_value FROM field_data_body WHERE entity_id=7571;
+--------------------------------------------------------------------------------------------------------------------------+
| body_value                                                                                                                |
+--------------------------------------------------------------------------------------------------------------------------+
| Thank you for participating! Your results can be found below. <?php include path_to_theme()."/calculator-results.php"; ?> |
+--------------------------------------------------------------------------------------------------------------------------+

What about Drupal 8?

As we said in the introduction, the PHP Filter module now lives in contrib instead of Drupal core. And that's very good, because it'll prevent the vast majority of Drupal users from installing it. Because, you know, if they can, they will.

If it does exist in production, though, you're in for the same investigation. Fortunately, with Drupal 8 it's even easier to determine when a node is using the php_code text format, as you only need one MySQL query and no JOIN.

mysql> SELECT entity_id,bundle,body_value,body_format FROM node__body WHERE body_format = 'php_code';
+-----------+---------+----------------------------+-------------+
| entity_id | bundle  | body_value                 | body_format |
+-----------+---------+----------------------------+-------------+
|         1 | article | <?php echo 'hi there!'; ?> | php_code    |
+-----------+---------+----------------------------+-------------+
1 row in set (0.00 sec)

Now that you know how to find PHP code in nodes, it's your job to review the code and fix it if necessary, then find ways to remove it completely (custom / contrib module? Theming?). You'll feel a sense of joy when you can switch back to Basic HTML, Markdown, or any other controlled and secure text format.


Michal Čihař: Announcing SimpleMath

Planet Debian - Thu, 13/10/2016 - 06:00

For quite some time we've been relying on the eval() function in phpMyAdmin in two places. One of them is the gettext library, where we have to evaluate plural forms; the second is the MySQL configuration advisor, which makes its suggestions based on a text file (the original idea was to make this file shared with other tools, but that never really worked out).

Using eval() in PHP is something better avoided, but we were using it on data we ship, so it was considered safe. On the other hand, there are hosting providers that deny using eval() altogether (as many exploits use this function), so it's better to avoid it. I've been looking for options for replacing eval() in motranslator (the library we use for handling Gettext MO files) for quite some time, but never found a library that would support all the operators needed in Gettext plural formulas.

Yesterday I finally came to the conclusion that writing our own library is the best approach. This way it can be extended in the future to work with the Advisor as well. We can also keep it pretty lightweight, without additional dependencies (which was a problem with some existing libraries I found).

To make the story short, this is how SimpleMath was born. As of now, it has grown to version 0.2 (you can use Packagist to install it). For now it's really simple and it can probably be confused by various strange inputs, but it seems to work pretty well for our case. Currently supported features (a real-world formula exercising most of them follows the list):

  • Supports basic arithmetic operations +, -, *, /, %
  • Supports parenthesis
  • Supports right associative ternary operator
  • Supports comparison operators ==, !=, >, <, >=, <=
  • Supports basic logical operations &&, ||
  • Supports variables (either PHP style $a or simple n)
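
As a real-world example, the standard Russian plural formula from gettext exercises most of these features at once (modulo, comparisons, logical operators and nested right-associative ternaries over the variable n):

Plural-Forms: nplurals=3; plural=n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2;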

Maybe it will be usable for somebody else as well, but even if not, it's a way for us to get rid of eval() in our codebase.


It seems that the Symfony ExpressionLanguage component does pretty much the same, but is more flexible and faster, so SimpleMath will probably be dead soon and we will switch to using the Symfony component.

Filed under: Debian English phpMyAdmin


Norbert Preining: John Oliver and news in Japan

Planet Debian - Thu, 13/10/2016 - 03:03

Yesterday evening I was enjoying several features by John Oliver, mostly about the upcoming election in the US (Scandals), but also one of the best features I have heard from him on Guantánamo. It sadly reminded me of the completely different landscape in Japan.

Not only since the Ministry of Internal Affairs and Communications issued its unprecedented warning that it could close down "biased" broadcasters, but ever since Abe began building up his more and more totalitarian control over the country, freedom of the press has been on shaky ground.

Even worse, newspapers and TV outlets restrict themselves to "safe" topics, which means: stupid talk shows, food, and above all praise of Japan and how good, how lovely, how great it is (Make Japan great again!). All this despite the fact that there would be a lot to grumble about: covering up the truth around Fukushima, mountains of scandals around the 2020 Olympics, police brutality in Okinawa; the list is long.

Even thinking about having something remotely similar to John Oliver on TV in Japan is as unthinkable as Trump donating all his money to a charity for immigrants. Sure enough, John Oliver is one great example out of tons of rubbish in the US, but this one example is missing in Japan.

What remains are Japanese media stations that crawl into the *** of the government. What a sad state.

(Photo credit partially due to Über Arschkriecher)


