Elsewhere

Lucy Wayland: Diversity and Inclusion, Debian Redux

Planet Debian - Sat, 12/11/2016 - 20:28
So, today at Cambridge MiniDebConf, I was scheduled to do a Birds of a Feather (BoF) session about Diversity and Inclusion within Debian. I was expecting a handful of people in the breakout room. Instead it was a full-blown workshop in the lecture theatre with me nominally facilitating. It went far, far better than I hoped (although a couple of other people and I had to wrench us back on topic a few times). There were lots of good ideas and productive, friendly debate (although we were pretty much all coming from the same ball park). There are three points I have taken away from it (others may have different views):
  1. We are damned good at Inclusion, but have a long way to go on Diversity (which is a problem for the entire tech sector).
  2. Debian is a social project as well as a technical one – our immediately accessible documentation does not reflect this.
  3. We are currently too reactive and passive when it comes to social issues and getting people involved. It is essential that we become more proactive.

Combined with the recent Diversity drive from DebConf 2016, I really believe we can do this. Thank you to all of you who attended, contributed, and approached me afterwards.

Edit: Video here – Debian Diversity and Inclusion Workshop

Edit Edit: video link fixed.


Categories: Elsewhere

Wouter Verhelst: New Toy: Nikon D7200

Planet Debian - Sat, 12/11/2016 - 09:48

Last month, I was abroad with my trusty old camera, but without its SD cards. Since the old camera has an SD-only slot, which does not accept SDHC (let alone SDXC) cards, I cannot use it with cards larger than 2GiB. Today, such cards are no longer manufactured. So, I found myself with a few options:

  1. Forget about the camera, just don't take any photos. Given the nature of the trip, I did not fancy this option.
  2. Go on eBay or some such, and find a second-hand 2GiB card.
  3. Find a local shop, and buy a new camera body.

While option 2 would have worked, the lack of certain features on my old camera meant that I'd been wanting to buy a new camera body for a while; it just hadn't happened yet. So I decided to go with option 3.

The Nikon D7200 is the latest model in the Nikon D7xxx series of cameras, a DX-format ("APS-C") camera that is still fairly advanced. Slightly cheaper than the D610, the cheapest full-frame Nikon camera (which I considered for a moment until I realized that two of my three lenses are DX-only lenses), it is packed with a similar set of features. It can shoot photos at shutter speeds of 1/8000th of a second (twice as fast as my old camera), and its sensor can be set to ISO speeds of up to 102400 (64 times that of the old one) -- although for the two modes beyond 25600, the sensor is switched to black-and-white only, since very little color information is available in such lighting conditions anyway.

A camera which is not only ten years newer than the old one, but also targeted at a more advanced user, took some getting used to at first. For instance, it took a few days until I had tamed the camera's autofocus system, which is much more advanced than the old one, so that it would focus on the things I wanted it to focus on, rather than just whatever object happened to be closest.

The camera shoots photos at up to twice the resolution in both dimensions (which works out to four times as many megapixels as the old body), which is not something I'm unhappy about. It also turns out that a DX camera with a 24-megapixel sensor takes photos with a digital resolution much higher than the optical resolution of my lenses, so I don't think more than 24 megapixels is going to be all that useful.

The built-in WiFi and NFC communication options are a nice touch, allowing me to use Nikon's app to take photos remotely, and to see what's going through the lens while doing so. Additionally, the time-lapse functionality is something I've used already, and which I'm sure I'll be using again in the future.

The new camera is definitely a huge step forward from the old one, and while the price over there was a few hundred euros higher than it would have been here, I don't regret buying the new camera.

The result is nice, too:

All in all, I'm definitely happy with it.

Categories: Elsewhere

Drupal Blog: Drupal 8 will no longer include dev dependencies in release packages

Planet Drupal - Sat, 12/11/2016 - 02:19

As a best practice, development tools should not be deployed on production sites. Accordingly, packaged Drupal 8 stable releases will no longer contain development PHP libraries, because development code is not guaranteed to be secure or stable for production.

This only applies to a few optional libraries that are provided with Drupal 8 for development purposes. The many stable required libraries for Drupal 8, like Symfony and Twig, will still be included automatically in packaged releases. Drupal 7 is not affected.

Updating your site

To adopt this best practice for your site, do one of the following (depending on how you install Drupal):

  • If you install Drupal using the stable release packages provided by Drupal.org (for example, with an archive like drupal-8.2.2.tar.gz or via Drush), update to the next release (8.2.3) as soon as it is available. (Read about core release windows.) Be sure to follow the core update instructions, including removing old vendor files. Once updated, your site will no longer include development libraries and no further action will be needed.
  • If you use a development snapshot on your production site (like 8.2.x-dev), you should either update to a stable release (preferred) or manually remove the dependencies. Remember that development snapshots are not supported for production sites.
  • If you install your site via Composer, you should update your workflows to ensure you specify --no-dev for your production sites.
Development and continuous integration workflows

If you have a continuous integration workflow or development site that uses these development dependencies, your workflow might be impacted by this change. If you installed from a stable Drupal.org package and need the development dependencies, you have three options:

  1. Install Composer and run composer install --dev,
  2. Use a development snapshot (for example, 8.2.x-dev) instead of a tagged release for your development site, or
  3. Install the development dependencies you need manually into Drupal's vendor directory or elsewhere.

However, remember that these development libraries should not be installed on production sites.
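
To make the distinction concrete, the difference is a single Composer flag; a minimal sketch (adapt the commands to your own deployment and CI scripts):

  # Production deployment: leave out the development PHP libraries.
  composer install --no-dev

  # Development or CI checkout that needs the development libraries.
  composer install --dev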

For background on this change, see Use "composer install --no-dev" to create tagged core packages. For more information on Composer workflows for Drupal, see Using Composer to manage Drupal site dependencies.

Categories: Elsewhere

Drupal.org blog: What’s new on Drupal.org? - October 2016

Planet Drupal - Fri, 11/11/2016 - 21:43

Read our Roadmap to understand how this work falls into priorities set by the Drupal Association with direction and collaboration from the Board and community.

The Drupal Association team has been getting back to work after returning from DrupalCon Dublin in September. For the engineering team, October has been focused on back-end services and infrastructure that support the Drupal project, while we continue to move forward on some longer-term, front-facing initiatives.

Drupal.org updates Promoting Drupal by Industry

Last month we talked about the new homepage we released for Drupal.org, and about using those editorial tools to build a membership campaign. We hinted that additional changes would be coming soon. While we're not ready to launch this new content, we can talk about it in some greater detail.

Dries Buytaert, the project founder, has called Drupal the platform for ambitious digital experiences. That phrase expresses the incredible power and flexibility of Drupal, but it also encapsulates an aspect of Drupal that can be difficult for newcomers: it can be very hard to understand how to take a base install of Drupal core and extend it to achieve that ambitious vision.

We want to help close that gap in understanding—to help evaluators see how Drupal achieves these ambitions. To do this, we'll be creating a series of landing pages that focus granularly on how Drupal creates success stories in particular industries. Look for more on this topic in coming months.

DrupalCon Vienna Site Launched

As is tradition, during the closing session of DrupalCon Dublin we announced that the next DrupalCon in Europe will be held in Vienna! We launched the splash page announcing the event at vienna2017.drupal.org and we have information about sponsorship and hotel reservations already available.

DrupalCon Vienna will happen from the 25th to 29th of September 2017, and we'll hope to see you there!

More flexible project testing

We've made a significant update to how tests are configured on the Automated Testing tab of any project hosted on Drupal.org. Automated testing, using the DrupalCI infrastructure, allows developers to ensure their code will be compatible with core, and with a variety of PHP versions and database environments. In October, we updated the configuration options for module maintainers.

Maintainers can now select a specific branch of core, a specific environment, and select whether to run the test once, daily, on commit, or for issues. Issues are limited to a single test configuration, to ensure that the code works in a single environment before being regression tested against multiple environments on on-commit or daily tests.

Better database replication and reliability

Behind the scenes, we've made some updates to our database cluster - part of our infrastructure standardization on Debian 8 environments managed in Puppet 4. We've made some improvements to replication and reliability - and while these changes are very much behind the scenes they should help maintain a reliable and performant Drupal.org.

Response to Critical Security Vulnerabilities

When it rains, it pours—a maxim we take to heart in Portland, Oregon—and that was especially true in the realm of security in October. The most widely known vulnerability disclosed was the 'DirtyCow' vulnerability in the Linux kernel. A flaw in the copy-on-write system of the Linux kernel made it possible, in principle, for an unprivileged user to elevate their own privileges.

Naturally, responding to this vulnerability was a high priority in October, but DirtyCow was not the only vulnerability disclosed, as security releases were also made for PHP, mariadb, tar, libxslt, and curl. We mitigated each of these vulnerabilities in short order.

Community Initiatives

Community initiatives are a collaboration, with dedicated community volunteers building improvements to Drupal.org under the architectural guidance and oversight of the Drupal Association engineering team.

Drupal 8 User Guide

The Drupal 8 User Guide is getting very close to being available on Drupal.org. We are working closely with contributor jhodgdon to resolve some perplexing inconsistencies between what we're seeing in our development environment and in our initial production deployment.

Dreditor

markcarver, who is currently leading the charge to port Dreditor features to Drupal.org, has invited anyone interested in contributing to join him in #dreditor on freenode IRC or the Dreditor GitHub.

Documentation Maintainership

Finally, we want to continue to encourage the community to become maintainers of Drupal documentation. If you are a developer interested in contributing code to the new documentation system, please contact tvn.

———

As always, we’d like to say thanks to all the volunteers who work with us, and to the Drupal Association Supporters, who made it possible for us to work on these projects.

If you would like to support our work as an individual or an organization, consider becoming a member of the Drupal Association.

Follow us on Twitter for regular updates: @drupal_org, @drupal_infra

Categories: Elsewhere

MTech, LLC: Categorizing Migrations According to Their Type

Planet Drupal - Fri, 11/11/2016 - 19:05

Very often in a migration from Drupal 6/7 to Drupal 8, you need to run and test the migration many times. This applies especially to sites that are very active, where site editors are updating or adding new content.

Keep in mind that the Migration API in Drupal 8 offers a few paths to run migrations. The first is a user interface for running migrations at the path /upgrade. With this method, the migration runs only once and you cannot make any customizations.

The other method is to use Drush to run the upgrade, which is done with these steps:

Categories: Elsewhere

Jonathan Dowland: Vinyl is killing Vinyl (but that's ok)

Planet Debian - Fri, 11/11/2016 - 18:28

I started buying vinyl records about 16 years ago, but recently I've become a bit uncomfortable being identified as a "vinyl lover". The market is ascendant, with vinyl album sales growing for 8 consecutive years, at least in the UK. So why am I uncomfortable about it?

A quick word about audio fidelity/quality here. I don't subscribe to the school of thought that audio on vinyl is inherently better than digital audio, far from it. I'm aware of its limitations. For recordings that I love, I try to seek out the best quality version available, which is almost always digital. Some believe that vinyl is immune to the "loudness war" brickwall mastering plaguing some modern releases, but for some of the worst offenders (Depeche Mode's Playing The Angel; Red Hot Chili Peppers' Californication) I haven't found the vinyl masterings to sound any different.

16 years ago

Let's go back to why I started buying vinyl. Back when I started, the world was a very different place to what it is today. You could not buy most music in a digital form: it was 3 more years before the iTunes Store was opened, and it was Mac-only at first, and the music it sold was DRM-crippled for the first 5 or so years afterwards. The iPod had not been invented yet and there was no real market for personal music players. Minidiscs were still around, but Net-MD (the only sanctioned way to get digital music onto them from a computer) was terrible.

old-ish LPs

Buying vinyl 16 years ago was a way to access music that was otherwise much harder to reach. There were still plenty of albums, originally recorded and released before CDs, which either had not been re-issued digitally at all, or had been done so early, and badly. Since vinyl was not fashionable, the second hand market was pretty cheap. I bought quite a lot of stuff for pennies at markets and car boot sales.

Some music—such as b-sides and 12" mixes and other mixes prepared especially for the format—remains unavailable and uncollected on CD. (I'm a big fan of the B-side culture that existed prior to CDs. I might write more about that one day.)

10 years ago

modern-ish 7 inches

Fast forward to around 10 years ago. Ephemeral digital music is now much more common, the iPod and PMPs are well established. High-street music stores start to close down, including large chains like MOS, Our Price, and Virgin. Streaming hasn't particularly taken off yet, attempts to set up digital radio stations are fought by the large copyright owners. Vinyl is still not particularly fashionable, but it is still being produced, in particular for singles for up-and-coming bands in 7" format. You can buy a 7" single for between £1 and £4, getting the b-side with it. The b-side is often exclusive to the 7" release as an incentive to collectors. I was very prepared to punt £1-2 on a single from a group I was not particularly familiar with just to see what they were like. I discovered quite a lot of artists this way. One of the songs we played at our wedding was such an exclusive: a recording of the Zutons' covering Jackie Wilson's "Higher and Higher", originally broadcast once on Colin Murray's Evening Session radio show.

Now

An indulgence

So, where are we now?

Vinyl album sales are a huge growth market. They are very fashionable. Many purchasers are younger people who are new to the format; it's believed many don't have the means to play the music on the discs. Many (most?) albums are now issued as 12" vinyl in parallel with digital releases. These are usually exactly the same product (track listing, mixes, etc.) and usually priced at exactly twice that of the CD (with digital prices normally a fraction under that).

The second hand market for 12" albums has inflated enormously. Gone are the bargains that could be had; a typical second hand LP is now priced quite close to the digital price for a popular/common album in most places.

The popularity of vinyl has caused a huge inflation in the price of most 7" singles, which average somewhere between £8 and £10 each, often without any b-side whatsoever. The good news is—from my observations—the 2nd hand market for 7" singles hasn't been affected quite as much. I guess they are not as desirable to buyers.

The less said about Record Store Day, the better.

So, that's all quite frustrating. But most of the reasons I used to buy vinyl have gone away anyway. Many of the rushed-to-market CD masterings have been reworked and reissued, correcting the earlier problems. B-side compilations are much more common so there are far fewer obscure tracks or mixes, and when the transfer has been done right, you're getting those previously-obscure tracks in a much higher quality. Several businesses exist to sell 2nd hand CDs for rock bottom prices, so it's still possible to get popular music very cheaply.

The next thing to worry about is probably streaming services.

Categories: Elsewhere

Chris Lamb: Awarded Core Infrastructure Initiative grant for Reproducible Builds

Planet Debian - Fri, 11/11/2016 - 18:04

I'm delighted to announce that I have been awarded a grant from the Core Infrastructure Initiative (CII) to fund my previously-voluntary work on Reproducible Builds.

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users. The motivation behind the Reproducible Builds effort is to permit verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

I'd like to sincerely thank the CII, not only for their material support but also for their recognition of my existing contributions. I am looking forward to working with my co-grantees towards fulfilling our shared goal.

You can read the CII's press release here.

Categories: Elsewhere

NEWMEDIA: Using the Haversine Formula in Drupal 7

Planet Drupal - Fri, 11/11/2016 - 17:05
The Haversine formula is one of the easiest-to-use pieces of complicated math I've had the pleasure to work with. If you're not familiar with it, it's pretty simple in theory - it's an extension of the Pythagorean formula from a grid to the surface of a sphere, which basically means that you can use it to measure the distance between two points on a sphere (1).
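
Before wiring it into Drupal, it can help to see the formula on its own.  Here is a minimal plain-PHP sketch (the function name is just for illustration) that returns the great-circle distance in miles between two latitude/longitude pairs given in decimal degrees:

/**
 * Haversine distance in miles between two points in decimal degrees.
 */
function example_haversine_miles($lat1, $long1, $lat2, $long2) {
  $earth_radius = 3959.0; // Mean radius of the Earth, in miles.
  $d_lat = deg2rad($lat2 - $lat1);
  $d_long = deg2rad($long2 - $long1);
  $a = pow(sin($d_lat / 2), 2)
    + cos(deg2rad($lat1)) * cos(deg2rad($lat2)) * pow(sin($d_long / 2), 2);
  return 2 * $earth_radius * asin(sqrt($a));
}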

"But what," I hear you ask through the secret microphone hidden in your keyboard, "does this have to do with Drupal?"  And the answer is - search.  I've worked on a number of projects over the years where users wanted to search for things which were some distance from a central point - a zip code, a city, or the like.  This can be done with apache solr these days, but sometimes apache solr isn't what you need. Either you’re not doing a keyword search and just filtering your nodes (show me all mexican restaurants near 12th and Vine) or else you don't think you need the extra complexity of adding an Apache Solr instance to the project.  An index of restaurants isn't a bad idea for an example, so let’s build one.  In the tradition of Drupal demos, we'll say we're creating a restaurant search engine, which we will name 'Shout'.  So, we spin up a copy of Drupal 7, set up the usual database, add a 'Restaurant' node type, download Fivestar* to do ratings, set up a few quick taxonomies (cuisine, price range [low, medium, high], and maybe style [sit-down, food court, fast food]) which we add to the restaurant node type.

Step 1

Create a node type with location information.  To store the address there are two good options: the Location module, which grew out of CCK, and the Address Field module, which comes from the Commerce module.

Step 2

Add the latitude and longitude to the nodes - if you’re using the location module you can enable storing those values in the same field, but if you’re starting with the address field you need to add a field which stores that information.  I recommend the Geofield module.

Step 3

Finally, you will need to set up geocoding - the process of assigning latitude and longitude based on an address.  There are plenty of services which will do this for you, and if you're using the Location module, then you can enable it there.  Alternately, you can use the Geocoder module to store these values.

Example

Following along with our Shout example, let's add the addressfield, geofield, and geocoder modules, which will also in turn require the geoPHP and ctools modules.  Add the Address field and tell it to store a postal address, set up a geofield on the restaurant node as well and set the widget to 'geocode from another field', and take a look at the configuration of Geocoder in admin/config/content/geocoder.  You can use the Google API in batch for free, as long as you don't get too crazy with the number of requests per day.  This being an example site, I think we'll be safe, but when doing a commercial site it's always best to read the Google terms of service, sign up for an API key, and close cover before striking.
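
If you prefer the command line, the module setup can be sketched with Drush (the machine names below are the usual drupal.org project names, so double-check them against the projects you actually download):

  drush dl addressfield geofield geocoder geophp ctools -y
  drush en addressfield geofield geocoder geophp ctools -y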

 

 

I've named the Address field field_address, and in a fit of originality I've named the geofield field_map_location.  Once I had everything set up, I entered a few local restaurants and ran cron to make sure that I was getting data in the field_data_field_map_location table - I suggest you do the same.  (Well, to be honest, at first I wasn't getting data, but that's why we test our examples when writing blog posts.)

Step 4

Once you've got locations set up, the next step is your search engine.  For this task I suggest the Search API module, which allows you to define your own indexes, and to switch search engines in the future as the need arrives.  You’ll also need the Search API DB and Search API Pages modules.

Step 5

We'll start by setting up a service - in this case, just give it an obvious name and select the Database engine.

 

Step 6

Then we'll create an index - although Search API creates a Default node index when it's enabled, we want one just for restaurant nodes.  So we'll click on 'Add Index', give it a name of 'Restaurant Index', select that we want to index Restaurant nodes, put in a quick description to remind us of what it is, and select the server we just created.

 

Step 7

After that, go into the index fields for the index and select at least the title and the 'main body text' for indexing - I suggest including the address as well.  It's also important to add the Location field that you're using, and include the latitude and longitude values in the index.  If you can't find a field, expand 'Add Related Fields' at the bottom and look for it there, and make sure you save your changes before leaving the page.

 

 

Finally, on the filter tab I suggest excluding unpublished nodes, as well as ignoring case and adding the HTML filter.

With all that set up, use the search_api_pages module to set up a search page for the index you've constructed.

 

With data and index set up, it's time to add the location filtering.  Let's add a quick block to filter with:

/**
 * Implements hook_block_info().
 */
function example_block_info() {
  return [
    'location' => [
      'info' => t('Search Location Filter'),
      'cache' => DRUPAL_CACHE_GLOBAL,
    ],
  ];
}

/**
 * Implements hook_block_view().
 */
function example_block_view($delta) {
  if ($delta == 'location') {
    $block['subject'] = t('Filter by location:');
    $block['content'] = drupal_get_form('example_location_filter_form');
  }
  return $block;
}

/**
 * This form allows the user to restrict the search by where they are.
 */
function example_location_filter_form($form, &$form_state) {
  $form['center'] = array(
    '#type' => 'textfield',
    '#title' => t('From'),
    '#description' => t('Enter a zip code or city, state (ie, "Denver, CO")'),
    '#maxlength' => 64,
    '#size' => 20,
    '#default_value' => isset($_GET['center']) ? $_GET['center'] : '',
  );
  $distances = [5, 10, 20, 50];
  foreach ($distances as $distance) {
    $options[$distance] = t('Within @num miles', array('@num' => $distance));
  }
  $form['radius'] = [
    '#type' => 'radios',
    '#title' => t('Distance:'),
    '#options' => $options,
    '#default_value' => isset($_GET['radius']) ? $_GET['radius'] : 5,
  ];
  $form['submit'] = [
    '#type' => 'submit',
    '#value' => t('Filter'),
  ];
  $parameters = drupal_get_query_parameters(NULL, ['q', 'radius', 'center']);
  $form['clear'] = [
    '#type' => 'markup',
    '#markup' => l(t('Clear'), current_path(), array('query' => $parameters)),
  ];
  return $form;
}

/**
 * Validation handler for location filter.
 */
function example_location_filter_form_validate(&$form, &$form_state) {
  if (!empty($form_state['values']['center'])) {
    $location = trim($form_state['values']['center']);
    // Is this a postal code?
    $point = example_location_lookup($location);
    if (empty($point)) {
      form_set_error('center', t('%location is not a valid location - please enter either a postal code or a city, state (like "Denver, CO")', ['%location' => $location]));
    }
  }
}

/**
 * Form submit handler for location filter form.
 */
function example_location_filter_form_submit(&$form, &$form_state) {
  $parameters = drupal_get_query_parameters(NULL, ['q', 'radius', 'center']);
  if (!empty($form_state['values']['center'])) {
    $parameters['radius'] = $form_state['values']['radius'];
    $parameters['center'] = $form_state['values']['center'];
  }
  $form_state['redirect'] = [current_path(), ['query' => $parameters]];
}

In this case, example_location_lookup() looks up a latitude/longitude pair for a given location entered by the user, which I'm leaving as an exercise for the reader in hopes of keeping this post short.  It should return an array with the keys 'lat' and 'long', at least.  For testing, you can have it return a fixed point until you've got that set up, like:

function example_location_lookup($location) {
  return array('lat' => 39.7392, 'long' => -104.9903);
}
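
When you're ready to replace that stub with real geocoding, one option is to call a geocoding web service.  Here is a rough, hypothetical sketch using the Google Geocoding API via drupal_http_request() - API key handling, caching and error handling are deliberately left out and would need to be added:

function example_location_lookup($location) {
  $url = url('https://maps.googleapis.com/maps/api/geocode/json', array(
    'external' => TRUE,
    'query' => array('address' => $location),
  ));
  $response = drupal_http_request($url);
  if ($response->code == 200) {
    $data = drupal_json_decode($response->data);
    if (!empty($data['results'][0]['geometry']['location'])) {
      // Return the keys that the filter form and the query alter expect.
      return array(
        'lat' => $data['results'][0]['geometry']['location']['lat'],
        'long' => $data['results'][0]['geometry']['location']['lng'],
      );
    }
  }
  return array();
}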

So, now we can return to the Haversine formula.  Once you've got the position entered and passed along, it's time to match it against our restaurants.  Doing complex math is expensive, so after a few moments of thought we realize that anything more than the radius in miles north or south, or east or west, of the center point will be too far away to bother including, so we'll first filter on a bounding box of latitude and longitude around the center (one degree of latitude is roughly 69 miles, and a degree of longitude is roughly 69 miles times the cosine of the latitude), and then filter by the Haversine formula to knock out everything outside of the circle.  For implementing the Haversine formula in SQL, I'm indebted to Ollie Jones of Plum Island Media, who does a great job of demystifying the formula here.

/**
 * Implements hook_search_api_db_query_alter().
 */
function example_search_api_db_query_alter(SelectQueryInterface &$db_query, SearchApiQueryInterface $query) {
  $field_name = variable_get('example_location_field_name', 'field_location');
  // Do we have a location?
  if (isset($_GET['center']) && isset($db_query->alterMetaData['search_api_db_fields'][$field_name . ':lat'])) {
    $location = $_GET['center'];
    $radius = isset($_GET['radius']) && is_numeric($_GET['radius']) ? $_GET['radius'] * 1 : 5;
    $point = example_location_lookup($location);
    if (!empty($point)) {
      // Basically, we make a subquery that generates the distance for each restaurant,
      // and then restrict the results from that to a bounding box. Then, once that
      // subquery is done, we check each item that survives the bounding box to make
      // sure its distance field is less than our radius.
      $latitude_field = $db_query->alterMetaData['search_api_db_fields'][$field_name . ':lat']['column'];
      $longitude_field = $db_query->alterMetaData['search_api_db_fields'][$field_name . ':lon']['column'];
      $table = $db_query->alterMetaData['search_api_db_fields'][$field_name . ':lat']['table'];
      $sub_query = db_select($table, 'haversine');
      $sub_query->fields('haversine', ['item_id', $latitude_field, $longitude_field]);
      // Calculate a distance column for the query that we'll filter on later.
      $sub_query->addExpression("69.0 * DEGREES(ACOS(COS(RADIANS(:p_lat)) * COS(RADIANS($latitude_field)) * COS(RADIANS(:p_long - $longitude_field)) + SIN(RADIANS(:p_lat)) * SIN(RADIANS($latitude_field))))", 'distance', [':p_lat' => $point['lat'], ':p_long' => $point['long']]);
      // Filter out anything outside of the bounding box. A degree of latitude is
      // roughly 69 miles; a degree of longitude shrinks by the cosine of the latitude,
      // so use the same scaled delta on both sides of the center point.
      $long_delta = $radius / (69.0 * cos(deg2rad($point['lat'])));
      $sub_query->condition($latitude_field, [$point['lat'] - ($radius / 69.0), $point['lat'] + ($radius / 69.0)], 'BETWEEN');
      $sub_query->condition($longitude_field, [$point['long'] - $long_delta, $point['long'] + $long_delta], 'BETWEEN');
      $db_query->join($sub_query, 'search_distance', 't.item_id = search_distance.item_id');
      $db_query->condition('search_distance.distance', $radius, '<');
    }
  }
}

And there you go.  In my example, I set up the page as search, and tested with the URL search/diner?radius=500&center=denver, and got back the Denver Diner, but not the New York Diner.

* Depending on which version of Fivestar you get, you might need to download the Entity API module as well - just in case you're following along at home.

(1) We're just going to ignore the fact that the Earth isn't a perfect sphere for the purposes of this article - there's a degree of error that may creep in, but honestly if you're trying to find locations within 300 miles of a city, there's already enough error creeping in on the 'center' of a city that the close approximation of the Haversine formula is a relief.

Categories: Elsewhere

CiviCRM Blog: The quest for performance improvements

Planet Drupal - Fri, 11/11/2016 - 16:02

After the Socialist Party upgraded CiviCRM to version 4.6 a month ago, they have been experiencing performance issues. In this blog I will round up our quest for performance improvements. But first, some facts about their installation.

  • +/- 350,000 contacts
  • +/- 300 users who can access CiviCRM and see members in their local chapter
  • +/- 2,700 groups
  • Several campaign websites are linked to CiviCRM, and one of their successful campaigns brings 7,500 new contacts per week into the system
  • Running on a VPS with 4 CPU cores and 8GB of RAM
  • Around 40 active extensions

Yesterday we added New Relic as a monitoring tool. With New Relic we can monitor the system and look back at its history, and we can see all the details of each request, so we can analyze the performance.

Above is a screenshot of the monitoring over the last 24 hours.  The red/rose squares indicate when the overall performance is poor (more than 2 seconds per request). We also see that MySQL plays a big part in the largest peaks.

The screenshots above show the slowest queries and the slowest MySQL operations. One observation is that it looks like the MySQL DELETE statements are slowing the system down.

It is not clear what exactly those DELETE statements are or what is causing them to be slow. That is one of the questions to look into next.

Another thing we want to look into is the tuning of the MySQL database configuration and we also want to get familiar with New Relic.
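
One concrete step that should help answer the DELETE question is MySQL's slow query log. A minimal my.cnf sketch (the threshold and file path are just illustrative assumptions):

  slow_query_log = 1
  slow_query_log_file = /var/log/mysql/mysql-slow.log
  long_query_time = 2
  log_queries_not_using_indexes = 1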

Do you have any thoughts? Or have you made any performance improvements yourself? We can use any help, and we will keep you posted about our quest for performance.

 

Drupal
Categories: Elsewhere

Ixis.co.uk - Thoughts: Using Paragraphs in Drupal 8

Planet Drupal - Fri, 11/11/2016 - 15:25

When we received the new designs for the Ixis site it was evident that they contained separate design elements which were shared across several pages, from the homepage to departmental landing pages to the “About us” page. We thought this was a perfect use case for the Paragraphs module, which allows site editors to "choose on-the-fly between predefined Paragraph Types… instead of putting all of their content in one WYSIWYG body field."

Most content types on the new Ixis site contain a Paragraphs field. An editor can create multiple Paragraphs of any defined type and sort them to specify the elements and layout of the node's content.

Paragraph types can be anything from a simple text block or image to a complex and configurable slideshow. Paragraph types are essentially fieldable entities and the Paragraphs module allows the creation of these types. Each defined type can have its own set of relevant fields, all added via the Drupal UI and exported to config.

So, to support the elements outlined in our page designs we added Paragraph types for:

  • Call to action - areas of bold background colour and large text;
  • Download - a downloadable asset or file;
  • Gallery - a gallery list of images;
  • Image - a single, responsive image;
  • Testimonial - a quote or testimonial;
  • Text - basic, filtered HTML edited with CKEditor;
  • Text with Callout - regular body text coupled with a styled "callout";
  • Twitter - an embedded Twitter widget;
  • Video - an embedded video from a 3rd-party site such as YouTube.

All these Paragraph types give editors some flexibility and choice when authoring a page designed with several of these elements.

Styling

The rendered output of Paragraphs entities can be altered using a paragraph.html.twig file in the site’s theme. For example:

{%
  set classes = [
    'paragraph',
    'paragraph--type--' ~ paragraph.bundle|clean_class,
    view_mode ? 'paragraph--view-mode--' ~ view_mode|clean_class,
    cta_style ? 'cta-style--' ~ cta_style|clean_class,
  ]
%}
{% block paragraph_content %}
  {{ content }}
{% endblock paragraph_content %}

The rendered output of each individual Paragraph type can also be affected using a suggested Twig template; for example, we have paragraph--testimonial.html.twig for appropriately rendering a testimonial quote and cited author.

In some places we’ve used a field combined with a preprocess to provide multiple variations of the same paragraph. You can see this in action above with the cta_style variable which gives us a standard or inverted dark style for Call to action paragraphs.
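
For reference, the cta_style variable used in the template above can be populated from a field in a theme preprocess function. A minimal sketch - the theme name, paragraph bundle and field name are assumptions:

/**
 * Implements hook_preprocess_paragraph() for a hypothetical "mytheme" theme.
 */
function mytheme_preprocess_paragraph(&$variables) {
  /** @var \Drupal\paragraphs\Entity\Paragraph $paragraph */
  $paragraph = $variables['paragraph'];
  // Expose an assumed "CTA style" field value to paragraph.html.twig.
  if ($paragraph->bundle() == 'call_to_action'
    && $paragraph->hasField('field_cta_style')
    && !$paragraph->get('field_cta_style')->isEmpty()) {
    $variables['cta_style'] = $paragraph->get('field_cta_style')->value;
  }
}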

Content Migration

During the initial content migration, we migrated directly into a Text Paragraph in the new Paragraphs field for some content types such as blog posts. To do this, we needed a new process plugin:

/**
 * Saves D6 Page Body field to D8 Page Paragraph (Textarea) field.
 *
 * @MigrateProcessPlugin(
 *   id = "node_paragraph_textarea"
 * )
 */
class NodeParagraphTextarea extends ProcessPluginBase {
  ...
}
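
The full implementation is in the article referenced below; roughly speaking, its transform() method wraps the incoming body value in a new text Paragraph and returns the IDs that the entity reference revisions field expects. A hedged sketch (the paragraph bundle, field name and text format are assumptions):

  // Inside NodeParagraphTextarea (plus use statements for
  // MigrateExecutableInterface, Row and Paragraph).
  public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property) {
    $paragraph = Paragraph::create([
      'type' => 'text',
      'field_text' => [
        'value' => $value,
        'format' => 'basic_html',
      ],
    ]);
    $paragraph->save();
    // Entity reference revision fields want both the entity and revision IDs.
    return [
      'target_id' => $paragraph->id(),
      'target_revision_id' => $paragraph->getRevisionId(),
    ];
  }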

We used a slightly modified version of the example plugin in this article by Amit Goyal. Then in our migration.d6.node__blog.yml we removed:

...
body/format:
  plugin: migration
  migration: d6_filter_format
  source: format
body/value: body
body/summary: teaser
...

and replaced it with the new process plugin, to instead migrate the source body into the Paragraphs field:

...
field_paragraphs:
  plugin: node_paragraph_textarea
  source: body
...

In summary, Paragraphs is a great alternative to a single WYSIWYG editor for site editors who want to be able to lay out complex pages combining text, images, video, audio, quotes or any other advanced component. Here's some further reading:

Categories: Elsewhere

Valuebound: How Drupal can be used to develop data driven web application

Planet Drupal - Fri, 11/11/2016 - 13:44

Building a website or a web application is not just coding and hosting an app. It includes thorough ecosystem research and co-designing a scalable product to cooperate and compete within the networked Internet of Things. Drupal is the perfect platform to build large systems like a CRM or an ERP, which are complicated as well as data oriented.

The data acquired is first organized appropriately and then analyzed to make essential business decisions. Various underlying platforms serve critical aspects of the complete system and work really well with applications developed with Drupal. Management, governance and security are always at the top of the list when it comes to media and entertainment companies.

Building a complex website with Drupal…

Categories: Elsewhere

Sooper Drupal Themes: New Logistics Design with Glazed 2.5.4 and Carbide Builder 1.0.15

Planet Drupal - Fri, 11/11/2016 - 04:49

First of all, sorry for the late blog post. This post refers to the Drupal themes update released on November 1st. I'm currently travelling in China, and while I rushed to get the release out before my departure, the blog had to wait a little longer.

TL;DR

Glazed Logistics Design And Reaching Product-Market Fit

With the release of the Logistics design we added a unique and beautiful theme to our collection, while at the same time the core products only needed minor adjustments. This is an indication that the products have achieved a state of stability and product-market fit. SooperThemes now pivots to focus more on creating new designs and features based on the current core products. Of course this doesn't mean we won't add new features at all; there will always be a need for change in a turbulent environment like Drupal front-end development.

Creating Great Design For The Drupal Community

In the past year we have laid the necessary groundwork needed to provide the Drupal community with much-desired, high-quality designs. The site-building workflow with the Glazed theme and Carbide Builder is incredibly fast and efficient, and produces precisely designed responsive Drupal websites. This doesn't just improve the productivity of our customers and their content creators, but also our own. At this point we completely design our Glazed demos using just the theme and drag-and-drop builder - no Photoshop or coding. This big gain in productivity really allows us to focus more on art direction, photography and design. For our Logistics demo we created a set of unique 3D isometric line icons, and we curated a collection of beautiful stock photography to really create the right atmosphere for our niche design. We can afford to produce such detailed niche designs thanks to the productivity gains we made with the Glazed theme and Carbide Builder. Our goal of closing the gap with the top-tier multi-purpose WordPress themes is now appearing on the horizon.

Glazed Magazine Component and Drupal 8

This stability also means we will start planning our Drupal 8 upgrade and migration paths. Over the next couple of months we will focus on ramping up the design release cycle and on adding a new magazine component to our Glazed CMS distribution. We want to avoid spending months on a Drupal 8 migration while the Drupal 7 product offering is only at an 80% market fit. We are aiming to offer more than just great theme settings and drag-and-drop functionality: expect a multitude of niche designs with fully featured demo content in a turn-key installation profile. We strive to become Drupal's first "Mega Theme".

Value As A Service

As a subscription Drupal shop, we really focus on building long-term relationships with our customers and with the Drupal community at large. We make decisions based on what we think provides the most value to the most people. An important part of making those decisions is listening to the community. If you can spare a minute, please write a comment on the blog and describe what you would value the most in a Drupal theme. One feature I'm thinking about adding is ready-made translation configuration as an optional component in the Glazed CMS distribution; let me know in the comments if that is something you would value.

Categories: Elsewhere

Drupal core announcements: Drupal core security release window on Wednesday, November 16, 2016

Planet Drupal - Thu, 10/11/2016 - 23:44
Start: 2016-11-15 12:00 - 2016-11-17 12:00 UTC
Organizers: cilefen, xjm, stefan.r, catch, David_Rothstein
Event type: Online meeting (e.g. IRC meeting)

The monthly security release window for Drupal 8 and 7 core will take place on Wednesday, November 16.

This does not mean that a Drupal core security release will necessarily take place on that date for any of the Drupal 8 or 7 branches, only that you should watch for one (and be ready to update your Drupal sites in the event that the Drupal security team decides to make a release).

There will be no bug fix or feature release on this date. The next window for a Drupal core patch (bug fix) release for all branches is Wednesday, December 07. The next scheduled minor (feature) release for Drupal 8 will be on Wednesday, April 5.

Drupal 6 is end-of-life and will not receive further security releases.

For more information on Drupal core release windows, see the documentation on release timing and security releases, and the discussion that led to this policy being implemented.

Categories: Elsewhere

Matthew Garrett: Tor, TPMs and service integrity attestation

Planet Debian - Thu, 10/11/2016 - 21:48
One of the most powerful (and most scary) features of TPM-based measured boot is the ability for remote systems to request that clients attest to their boot state, allowing the remote system to determine whether the client has booted in the correct state. This involves each component in the boot process writing a hash of the next component into the TPM and logging it. When attestation is requested, the remote site gives the client a nonce and asks for an attestation; the client OS passes the nonce to the TPM, asks it to provide a signed copy of the hashes and the nonce, and sends them (and the log) to the remote site. The remote site then replays the log to ensure it matches the signed hash values, and can examine the log to determine whether the system is trustworthy (whatever trustworthy means in this context).
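
To make "replays the log" concrete: for a SHA-1 PCR, each logged measurement is folded into the register by hashing the previous register value concatenated with the event digest, and the final value must match what the TPM signed. A rough sketch of just that replay step (PHP purely for illustration; a real verifier would also check the quote signature and the nonce):

function replay_pcr(array $event_digests) {
  $pcr = str_repeat("\x00", 20); // A SHA-1 PCR starts out as all zeroes.
  foreach ($event_digests as $digest) {
    $pcr = sha1($pcr . $digest, TRUE); // Extend: new = SHA1(old || digest).
  }
  return bin2hex($pcr); // Compare against the quoted PCR value.
}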

When this was first proposed people were (justifiably!) scared that remote services would start refusing to work for users who weren't running (for instance) an approved version of Windows with a verifiable DRM stack. Various practical matters made this impossible. The first was that, until fairly recently, there was no way to demonstrate that the key used to sign the hashes actually came from a TPM[1], so anyone could simply generate a set of valid hashes, sign them with a random key and provide that. The second is that even if you have a signature from a TPM, you have no way of proving that it's from the TPM that the client booted with (you can MITM the request and either pass it to a client that did boot the appropriate OS or to an external TPM that you've plugged into your system after boot and then programmed appropriately). The third is that, well, systems and configurations vary so much that outside very controlled circumstances it's impossible to know what a "legitimate" set of hashes even is.

As a result, so far remote attestation has tended to be restricted to internal deployments. Some enterprises use it as part of their VPN login process, and we've been working on it at CoreOS to enable Kubernetes clusters to verify that workers are in a trustworthy state before running jobs on them. While useful, this isn't terribly exciting for most people. Can we do better?

Remote attestation has generally been thought of in terms of remote systems requiring that clients attest. But there's nothing that requires things to be done in that direction. There's nothing stopping clients from being able to request that a server attest to its state, allowing clients to make informed decisions about whether they should provide confidential data. But the problems that apply to clients apply equally well to servers. Let's work through them in reverse order.

We have no idea what expected "good" values are

Yes, and this is a problem. CoreOS ships with an expected set of good values, and we had general agreement at the Linux Plumbers Conference that other distributions would start looking at what it would take to do the same. But how do we know that those values are themselves trustworthy? In an ideal world this would involve reproducible builds, allowing anybody to grab the source code for the OS, build it locally and verify that they have the same hashes.

Ok. So we're able to verify that the booted OS was good. But how about the services? The rkt container runtime supports measuring each container into the TPM, which means we can verify which container images were started. If container images are also built in such a way that they're reproducible, users can grab the source code, rebuild the container locally and again verify that it has the same hashes. Users can then be sure that the remote site is running the code they're looking at.

Or can they? Not really - a general purpose OS has all kinds of ways to inject code into containers, so an admin could simply replace the binaries inside the container after it's been measured, or ptrace() the server, or modify rkt so it generates correct measurements regardless of the image or, well, there's lots they could do. So a general purpose OS is probably a bad idea here. Instead, let's imagine an immutable OS that does nothing other than bring up networking and then reads a config file that tells it which container images to download and run. This reduces the amount of code that needs to support reproducible builds, making it easier for a client to verify that the source corresponds to the code the remote system is actually running.

Is this sufficient? Eh sadly no. Even if we know the valid values for the entire OS and every container, we don't know the legitimate values for the system firmware. Any modified firmware could tamper with the rest of the trust chain, making it possible for you to get valid OS values even if the OS has been subverted. This isn't a solved problem yet, and really requires hardware vendor support. Let's handwave this for now, or assert that we'll have some sidechannel for distributing valid firmware values.

Avoiding TPM MITMing

This one's more interesting. If I ask the server to attest to its state, it can simply pass that through to a TPM running on another system that's running a trusted stack and happily serve me content from a compromised stack. Suboptimal. We need some way to tie the TPM identity and the service identity to each other.

Thankfully, we have one. Tor supports running services in the .onion TLD. The key used to identify the service to the Tor network is also used to create the "hostname" of the system. I wrote a pretty hacky implementation that generates that key on the TPM, tying the service identity to the TPM. You can ask the TPM to prove that it generated a key, and that allows you to tie both the key used to run the Tor service and the key used to sign the attestation hashes to the same TPM. You now know that the attestation values came from the same system that's running the service, and that means you know the TPM hasn't been MITMed.

How do you know it's a TPM at all?

This is much easier. See [1].
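Going back to the hostname derivation mentioned above: for the v2 onion addresses Tor used at the time, the hostname is purely a function of the service's public key - base32 of the first 80 bits of the SHA-1 of the DER-encoded RSA public key - which is exactly what makes binding that key to the TPM useful. A minimal sketch, assuming you've exported the hidden service's public key in DER form:

    import base64
    import hashlib

    def onion_hostname(der_public_key):
        # v2 onion address: first 80 bits (10 bytes) of SHA-1 over the
        # DER-encoded RSA public key, base32-encoded and lowercased
        digest = hashlib.sha1(der_public_key).digest()
        return base64.b32encode(digest[:10]).decode("ascii").lower() + ".onion"

    with open("hidden_service_public_key.der", "rb") as f:
        print(onion_hostname(f.read()))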


There are still various problems around this, including the fact that we don't have this immutable minimal container OS, that we don't have the infrastructure to ensure that container builds are reproducible, that we don't have any known good firmware values and that we don't have a mechanism for allowing a user to perform any of this validation. But these are all solvable, and it seems like an interesting project.

"Interesting" isn't necessarily the right metric, though. "Useful" is. And I think this is very useful. If I'm about to upload documents to a SecureDrop instance, it seems pretty important that I be able to verify that it is a SecureDrop instance rather than something pretending to be one. This gives us a mechanism.

The next few years seem likely to raise interest in ensuring that people have secure mechanisms to communicate. I'm not emotionally invested in this one, but if people have better ideas about how to solve this problem then this seems like a good time to talk about them.

[1] More modern TPMs have a certificate that chains from the TPM's root key back to the TPM manufacturer, so as long as you trust the manufacturer to have kept control of that key, you can prove that the signature came from a real TPM.

Categories: Elsewhere

Acquia Developer Center Blog: A walk through New Orleans with Shyamala Rajaram

Planet Drupal - Thu, 10/11/2016 - 21:46

A conversation with our newest community representative to the Drupal Association Board, Director at Large, Shyamala Rajaram, recorded while walking to the New Orleans Convention Center during DrupalCon NOLA 2016.

Yes, I can see I need to get a gimbal or a steady cam, but the video is still fun (for me anyway) :-)

Our conversation

Mentioned in the conversation
More Shyamala on the web
Correction

Rakesh James has trained more than 600 new Drupalists - I said 200 in the recording.

Transcription

A full transcription of the conversation will be added shortly!

Skill Level: Beginner, Intermediate, Advanced
Categories: Elsewhere
