Feed aggregator

Michal Čihař: Weekly phpMyAdmin contributions 2016-W13

Planet Debian - Tue, 05/04/2016 - 12:00

Last week was mostly vacation for me, so I'm publishing this report more to avoid missing one than to provide a real report.

I spent only a little time last Tuesday reviewing issues, and there was no coding involved.

Handled issues:

Filed under: English, phpMyAdmin

Categories: Elsewhere

Matthew Garrett: There's more than one way to exploit the commons

Planet Debian - Tue, 05/04/2016 - 09:18
There's a piece of software called XScreenSaver. It attempts to fill two somewhat disparate roles:
  • Provide a functioning screen lock on systems using the X11 windowing system, a job made incredibly difficult due to a variety of design misfeatures in said windowing system[1]
  • Provide cute graphical output while the screen is locked
XScreenSaver does an excellent job of the second of these[2] and is pretty good at the first, which is to say that it only suffers from a disastrous security flaw once every few years, and as such is certainly not appreciably worse than any other piece of software.

Debian ships an operating system that prides itself on stability. The Debian definition of stability is a very specific one - rather than referring to how often the software crashes or misbehaves, it refers to how often the software changes behaviour. Debian is very reluctant to upgrade software that is part of a stable release, to the extent that developers will attempt to backport individual security fixes to the version they shipped rather than upgrading to a release that contains all those security fixes but also adds a new feature. The argument here is that the new release may also introduce new bugs, and Debian's users desire stability (in the "things don't change" sense) more than new features. Backporting security fixes keeps them safe without compromising the reason they're running Debian in the first place.

This all makes plenty of sense at a theoretical level, but reality is sometimes less convenient. The first problem is that security bugs are typically also, well, bugs. They may make your software crash or misbehave in annoying but apparently harmless ways. And when you fix that bug you've also fixed a security bug, but the ability to determine whether a bug is a security bug or not is one that involves deep magic and a fanatical devotion to the cause, so given the choice between maybe asking for a CVE and dealing with embargoes and all that crap when perhaps you've actually only fixed a bug that makes the letter "E" appear in places it shouldn't and not one that allows the complete destruction of your intergalactic invasion fleet, people will tend to err on the side of "Eh fuckit" and go drinking instead. So new versions of software will often fix security vulnerabilities without there being any indication that they do so[3], and running old versions probably means you have a bunch of security issues that nobody will ever do anything about.

But that's broadly a technical problem and one we can apply various metrics to, and if somebody wanted to spend enough time performing careful analysis of software we could have actual numbers to figure out whether the better security approach is to upgrade or to backport fixes. Conversations become boring once we introduce too many numbers, so let's ignore that problem and go on to the second, which is far more handwavy and social and so significantly more interesting.

The second problem is that upstream developers remain associated with the software shipped by Debian. Even though Debian includes a tool for reporting bugs against packages included in Debian, some users will ignore that and go straight to the upstream developers. Those upstream developers then have to spend at least 15 or so seconds telling the user that the bug they're seeing has been fixed for some time, and then figure out how to explain that no sorry they can't make Debian include a fixed version because that's not how things work. Worst case, the stable release of Debian ends up including a bug that makes software just basically not work at all and everybody who uses it assumes that the upstream author is brutally incompetent, and they end up quitting the software industry and I don't know running a nightclub or something.

From the Debian side of things, the straightforward solution is to make it more obvious that users should file bugs with Debian and not bother the upstream authors. This doesn't solve the problem of damaged reputation, and nor does it entirely solve the problem of users contacting upstream developers. If a bug is filed with Debian and doesn't get fixed in a timely manner, it's hardly surprising that users will end up going upstream. The Debian bugs list for XScreenSaver does not make terribly attractive reading.

So, coming back to the title for this entry. The most obvious failure of the commons is where a basically malicious actor consumes while giving nothing back, but if an actor with good intentions ends up consuming more than they contribute that may still be a problem. An upstream author releases a piece of software under a free license. Debian distributes this to users. Debian's policies result in the upstream author having to do more work. What does the upstream author get out of this exchange? In an ideal world, plenty. The author's software is made available to more people. A larger set of developers is willing to work on making improvements to the software. In a less ideal world, rather less. The author has to deal with bug mail about already fixed bugs. The author's reputation may be harmed by user exposure to said fixed bugs. The author may get less in the way of useful bug fixes or features because people are running old versions rather than fixing new ones. If the balance tips towards the latter, the author's decision to release their software under a free license has made their life more difficult.

Most discussions about Debian's policies entirely ignore the latter scenario, focusing more on the fact that the author chose to release their software under a free license to begin with. If the author is unwilling to handle the consequences of that, goes the argument, why did they do it in the first place? The unfortunate logical conclusion to that argument is that the author realises that they made a huge mistake and never does so again, and woo uh oops.

The irony here is that one of Debian's foundational documents, the Debian Free Software Guidelines, makes allowances for this. Section 4 allows for distribution of software in Debian even if the author insists that modified versions[4] are renamed. This allows for an author to make a choice - allow themselves to be associated with the Debian version of their work and increase (a) their userbase and (b) their support load, or try to distinguish what Debian ships from their identity. But that document was ratified in 1997 and people haven't really spent much time since then thinking about why it says what it does, and so this tradeoff is rarely considered.

Free software doesn't benefit from distributions antagonising their upstreams, even if said upstream is a cranky nightclub owner. Debian's users are Debian's highest priority, but those users are going to suffer if developers decide that not using free licenses improves their quality of life. Kneejerk reactions around specific instances aren't helpful, but now is probably a good time to start thinking about what value Debian brings to its upstream authors and how that can be increased. Failing to do so doesn't serve users, Debian itself or the free software community as a whole.

[1] The X server has no fundamental concept of a screen lock. This is implemented by an application asking that the X server send all keyboard and mouse input to it rather than to any other application, and then that application creating a window that fills the screen. Due to some hilarious design decisions, opening a pop-up menu in an application prevents any other application from being able to grab input and so it is impossible for the screensaver to activate if you open a menu and then walk away from your computer. This is merely the most obvious problem - there are others that are more subtle and more infuriating. The only fix in this case is to nuke the site from orbit.

[2] There are screenshots here. My favourites are the one that emulates the electrical characteristics of an old CRT in order to present a more realistic depiction of the output of an Apple 2, and the one that includes a complete 6502 emulator.

[3] And obviously new versions of software will often also introduce new security vulnerabilities without there being any indication that they do so, because who would ever put that in their changelog. But the less ethically challenged members of the security community are more likely to be looking at new versions of software than ones released three years ago, so you're probably still tending towards winning overall.

[4] There's a perfectly reasonable argument that all packages distributed by Debian are modified in some way

Categories: Elsewhere

KnackForge: Chennai Drupal hosting a Drupal 8 Training on Global Drupal Training Day - 9th April 2016 at IIT

Planet Drupal - Tue, 05/04/2016 - 09:04

Drupal Global Training Days is an effort by the Drupal Association to bring Drupal, the digital experience platform, to more users. The Chennai Drupal Community is hosting a full-day Drupal 8 hands-on training on 9th April at IIT Chennai. See the list of proposed sessions below.

Session Details

sivaji Tue, 04/05/2016 - 12:34
Categories: Elsewhere

Ian Wienand: Image building in OpenStack CI

Planet Debian - Tue, 05/04/2016 - 06:30

Also titled minimal images - maximal effort!

A large part of the OpenStack Infrastructure team's recent efforts has been focused on moving towards more stable and maintainable CI environments for testing.

OpenStack CI Overview

Before getting into details, it's a good idea to get a basic big-picture conceptual model of how OpenStack CI testing works. If you look at the following diagram and follow the numbers with the explanation below, hopefully you'll have all the context you need.

  1. The developer uploads their code to gerrit via the git-review tool. They wait.

  2. Gerrit provides a JSON-encoded "firehose" output of everything happening to it. New reviews, votes, updates and more all get sent out over this pipe. Zuul is the overall scheduler that subscribes itself to this information and is responsible for managing the CI jobs appropriate for each change.

  3. Zuul has a configuration that tells it what jobs to run for what projects. Zuul can do lots of interesting things, but for the purposes of this discussion we just consider that it puts the jobs it wants run into gearman for a Jenkins master to consume. gearman is a job-server; as they explain it "[gearman] provides a generic application framework to farm out work to other machines or processes that are better suited to do the work". Zuul puts into gearman basically a tuple (job-name, node-type) for each job it wants run, specifying the unique job name to run and what type of node it should be run on.

  4. A group of Jenkins masters are subscribed to gearman as workers. It is these Jenkins masters that will consume the job requests from the queue and actually get the tests running. However, Jenkins needs two things to be able to run a job — a job definition (what to actually do) and a slave node (somewhere to do it).

    The first part — what to do — is provided by job-definitions stored in external YAML files and processed by Jenkins Job Builder (jjb) into job configurations for Jenkins. Each Jenkins master gets these definitions pushed to it constantly by Puppet, thus each Jenkins master instance knows about all the jobs it can run automatically. Zuul also knows about these job definitions; this is the job-name part of the tuple we said it put into gearman.

    The second part — somewhere to run the test — takes some more explaining. To the next point...

  5. Several cloud companies donate capacity in their clouds for OpenStack to run CI tests. Overall, this capacity is managed by a customised orchestration tool called nodepool. Nodepool watches the gearman queue and sees what requests are coming out of Zuul. It looks at the node-type of jobs in the queue and decides what type of nodes need to start in what clouds to satisfy demand. Nodepool will monitor the start-up of the virtual-machines and register the new nodes to the Jenkins master instances.

  6. At this point, the Jenkins master has what it needs to actually get jobs started. When nodepool registers a host to a Jenkins master as a slave, the Jenkins master can now advertise its ability to consume jobs. For example, if an ubuntu-trusty node is provided to the Jenkins master instance by nodepool, Jenkins can now consume from gearman any job it knows about that is intended to run on an ubuntu-trusty slave. Jenkins will run the job as defined in the job-definition on that host — ssh-ing in, running scripts, copying the logs and waiting for the result. (It is a gross oversimplification, but for the purposes of OpenStack CI, Jenkins is pretty much used as a glorified ssh/scp wrapper. Zuul Version 3, under development, is working to remove the need for Jenkins to be involved at all).

  7. Eventually, the test will finish. The Jenkins master will put the result back into gearman, which Zuul will consume. The slave will be released back to nodepool, which destroys it and starts all over again (slaves are not reused and also have no sensitive details on them, as they are essentially publicly accessible). Zuul will wait for the results of all jobs for the change and post the result back to Gerrit; it either gives a positive vote or the dreaded negative vote if required jobs failed (it also handles merges to git, but we'll ignore that bit for now).

In a nutshell, that is the CI work-flow that happens thousands-upon-thousands of times a day keeping OpenStack humming along.
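To make the hand-off between Zuul, gearman and nodepool in steps 3 to 5 a little more concrete, here is a small, purely illustrative Python sketch of the (job-name, node-type) tuple flow. It is not the real Zuul or nodepool code, and the job and node-type names are made up for the example:

```python
# Toy model of the (job-name, node-type) hand-off: Zuul enqueues job
# requests, nodepool works out which node types it needs to boot to
# satisfy them. Illustrative only; not the actual OpenStack Infra code.
from collections import Counter, namedtuple

JobRequest = namedtuple("JobRequest", ["job_name", "node_type"])

def enqueue_jobs(queue, jobs):
    """Zuul's side: one (job-name, node-type) tuple per job to run."""
    for job_name, node_type in jobs:
        queue.append(JobRequest(job_name, node_type))

def node_demand(queue):
    """Nodepool's side: count how many nodes of each type are needed."""
    return Counter(request.node_type for request in queue)

queue = []
enqueue_jobs(queue, [
    ("gate-tempest-dsvm-full", "devstack-trusty"),  # example job names only
    ("gate-nova-python27", "bare-trusty"),
])
print(node_demand(queue))  # Counter({'devstack-trusty': 1, 'bare-trusty': 1})
```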

Image builds

So far we have glossed over how nodepool actually creates the images that it hands out for testing. Image creation, illustrated in step 8 above, contains a lot of important details.

Firstly, what are these images and why build them at all? These images are where the "rubber hits the road" — they are instantiated into the virtual-machines that will run DevStack, unit-testing or whatever else someone might want to test.

The main goal is to provide a stable and consistent environment in which to run a wide range of tests. A full OpenStack deployment results in hundreds of libraries and millions of lines of code all being exercised at once. The testing images are right at the bottom of all this, so any instability or inconsistency affects everyone, leading to constant fire-fighting and major inconvenience as all forward progress stops when CI fails. We want to support a wide number of platforms interesting to developers such as Ubuntu, Debian, CentOS and Fedora, and we also want to make it easy to handle new releases and add other platforms. We want to ensure this can be maintained without too much day-to-day hands-on work.

Caching is a big part of the role of these images. With thousands of jobs going on every day, an occasional network blip is not a minor annoyance, but creates constant and difficult to debug failures. We want jobs to rely on as few external resources as possible so tests are consistent and stable. This means caching things like the git trees tests might use (OpenStack just broke the 1000 repository mark), VM images, packages and other common bits and pieces. Obviously a cache is only as useful as the data in it, so we build these images up every day to keep them fresh.

Snapshot images

If you log into almost any cloud-provider's interface, they almost certainly have a range of pre-canned images of common distributions for you to use. At first, the base images for OpenStack CI testing came from what the cloud-providers had as their public image types. However, over time, a number of issues emerged:

  1. No two images, even for the same distribution or platform, are the same. Every provider seems to do something "helpful" to the images which requires some sort of workaround.
  2. Providers rarely leave these images alone. One day you would boot the image to find a bunch of Python libraries pip-installed, or a mount-point moved, or base packages removed (all happened).
  3. Even if the changes are helpful, it does not make for consistent and reproducible testing if every time you run, you're on a slightly different base system.
  4. Providers don't have some images you want (like a latest Fedora), or have different versions, or different point releases. All update asynchronously whenever they get around to it.

So the original incarnations of OpenStack CI images were based on these public images. Nodepool would start one of these provider images and then run a series of scripts on it — these scripts would firstly try to work-around any quirks to make the images look as similar as possible across providers, and then do the caching, setup things like authorized keys and finish other configuration tasks. Nodepool would then snapshot this prepared image and start instantiating VM's based on these images into the pool for testing. If you hear someone talking about a "snapshot image" in OpenStack CI context, that's likely what they are referring to.

Apart from the stability of the underlying images, the other issue you hit with this approach is that the number of images being built starts to explode when you take into account multiple providers and multiple regions. Even with just Rackspace and the (now defunct) HP Cloud we would end up creating snapshot images for 4 or 5 platforms across a total of about 8 regions — meaning anywhere up to 40 separate image builds happening daily (you can see how ridiculous it was getting in the logging configuration used at the time). It was almost a fait accompli that some of these would fail every day — nodepool can deal with this by reusing old snapshots — but this leads to an inconsistent and heterogeneous testing environment.

Naturally there was a desire for something more consistent — a single image that could run across multiple providers in a much more tightly controlled manner.

Upstream-based builds

Upstream distributions do provide "cloud-images", which are usually pre-canned .qcow2 format files suitable for uploading to your average cloud. So the diskimage-builder tool was put into use creating images for nodepool, based on these upstream-provided images. In essence, diskimage-builder uses a series of elements (each, as the name suggests, designed to do one thing) that allow you to build a completely customised image. It handles all the messy bits of laying out the image file, tries to be smart about caching large downloads and final things like conversion to qcow2 or vhd.

nodepool has used diskimage-builder to create customised images based upon the upstream releases for some time. These are better, but still have some issues for the CI environment:

  1. You still really have no control over what does or does not go into the upstream base images. You don't notice a change until you deploy a new image based on an updated version and things break.
  2. The images still start with a fair amount of "stuff" on them. For example cloud-init is a rather large Python program and has a fair few dependencies. These dependencies can both conflict with parts of OpenStack or end up tacitly hiding real test requirements (the test doesn't specify it, but the package is there as part of another base dependency. Things then break when the base dependencies change). The whole idea of the CI is that (as much as possible) you're not making any assumptions about what is required to run your tests — you want everything explicitly included.
  3. An image that "works everywhere" across multiple cloud-providers is quite a chore. cloud-init hasn't always had support for config-drive and Rackspace's DHCP-less environment, for example. Providers all have their various different networking schemes or configuration methods which need to be handled consistently.

If you were starting this whole thing again, things like LXC/Docker to keep "systems within systems" might come into play and help alleviate some of the packaging conflicts. Indeed they may play a role in the future. But don't forget that DevStack, the major CI deployment mechanism, was started before Docker existed. And there's tricky stuff with networking and Neutron going on. And things like iSCSI kernel drivers that containers don't support well. And you need to support Ubuntu, Debian, CentOS and Fedora. And you have hundreds of developers already relying on what's there. So change happens incrementally, and in the mean time, there is a clear need for a stable, consistent environment.

Minimal builds

To this end, diskimage-builder now has a series of "minimal" builds that are really that — systems with essentially nothing on them. For Debian and Ubuntu this is achieved via debootstrap; for Fedora and CentOS we replicate this with manual installs of base packages into a clean chroot environment. We add on a range of important elements that make the image useful; for example, for networking, we have simple-init which brings up the network consistently across all our providers but has no dependencies to mess with the base system. If you check the elements provided by project-config you can see a range of specific elements that OpenStack Infra runs at each image build (these are actually specified in arguments to nodepool; see the config file, particularly the diskimages section). These custom elements do things like caching, using puppet to install the right authorized_keys files and setting up a few needed things to connect to the host. In general, you can see the logs of an image build provided by nodepool for each daily build.
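As a rough idea of what such a build looks like from the outside, here is a hedged sketch of driving diskimage-builder from Python. The element list, environment variables and paths are examples only; OpenStack Infra's real settings live in project-config and nodepool's configuration:

```python
# Sketch of a minimal diskimage-builder invocation; element names and
# paths here are illustrative rather than Infra's exact configuration.
import os
import subprocess

env = dict(os.environ,
           DIB_RELEASE="trusty",  # which release the *-minimal element bootstraps
           ELEMENTS_PATH="project-config/nodepool/elements")  # extra custom elements

subprocess.run([
    "disk-image-create",
    "-o", "ubuntu-trusty",  # output image name
    "ubuntu-minimal",       # debootstrap-based minimal Ubuntu
    "simple-init",          # provider-agnostic network bring-up (mentioned above)
    "vm",                   # partitioning/bootloader bits so the image boots as a VM
], env=env, check=True)
```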

So now, each day at 14:14 UTC nodepool builds the daily images that will be used for CI testing. We have one image of each type that (theoretically) works across all our providers. After it finishes building, nodepool uploads the image to all providers (p.s. the process of doing this is so insanely terrible it spawned shade; this deserves many posts of its own) at which point it will start being used for CI jobs. If you wish to replicate this entire process, the build-image.sh script, run on an Ubuntu Trusty host in a virtualenv with diskimage-builder installed, will get you pretty close (let us know of any issues!).

DevStack and bare nodes

There are two major ways OpenStack projects test their changes:

  1. Running with DevStack, which brings up a small, but fully-functional, OpenStack cloud with the change-under-test applied. Generally tempest is then used to ensure the big-picture things like creating VM's, networks and storage are all working.
  2. Unit-testing within the project; i.e. what you do when you type tox -e py27 in basically any OpenStack project.

To support this testing, OpenStack CI ended up with the concept of bare nodes and devstack nodes.

  • A bare node was made for unit-testing. While tox has plenty of information about installing required Python packages into the virtualenv for testing, it doesn't know anything about the system packages required to build those Python packages. This means things like gcc and library -devel packages which many Python packages use to build bindings. Thus the bare nodes had an ever-growing and not well-defined list of packages that were pre-installed during the image-build to support unit-testing. Worse still, projects didn't really know their dependencies but just relied on their testing working with this global list that was pre-installed on the image.
  • In contrast to this, DevStack has always been able to bootstrap itself from a blank system to a working OpenStack deployment by ensuring it has the right dependencies installed. We don't want any packages pre-installed here because it hides actual dependencies that we want explicitly defined within DevStack — otherwise, when a user goes to deploy DevStack for their development work, things break because their environment differs slightly to the CI one. If you look at all the job definitions in OpenStack, by convention any job running DevStack has a dsvm in the job name — this referred to running on a "DevStack Virtual Machine" or a devstack node. As the CI environment has grown, we have more and more testing that isn't DevStack based (puppet apply tests, for example) that rather confusingly want to run on a devstack node because they do not want dependencies installed. While it's just a name, it can be difficult to explain!

Thus we ended up maintaining two node-types, where the difference between them is what was pre-installed on the host — and yes, the bare node had more installed than a devstack node, so it wasn't that bare at all!

Specifying Dependencies

Clearly it is useful to unify these node types, but we still need to provide a way for the unit-test environments to have their dependencies installed. This is where a tool called bindep comes in. This tool gives project authors a way to specify their system requirements in a similar manner to the way their Python requirements are kept. For example, OpenStack has the concept of global requirements — those Python dependencies that are common across all projects so version skew becomes somewhat manageable. This project now has some extra information in the other-requirements.txt file, which lists the system packages required to build the Python packages in the global-requirements list.

bindep knows how to look at these lists provided by projects and get the right packages for the platform it is running on. As part of the image-build, we have a cache-bindep element that can go through every project and build a list of the packages it requires. We can thus pre-cache all of these packages onto the images, knowing that they are required by jobs. This both reduces the dependency on external mirrors and improves job performance (as the packages are locally cached) but doesn't pollute the system by having everything pre-installed.
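As an illustration of the idea only (this is a toy re-implementation, not bindep itself, and the real syntax supports more, such as negations and default profiles), selecting packages for a platform from a bindep-style list looks roughly like this:

```python
# Toy version of bindep's selection logic: each line names a package plus
# optional qualifiers, and we keep the entries matching our platform/profiles.
import re

BINDEP_STYLE_LIST = """
gcc
libffi-dev [platform:dpkg]
libffi-devel [platform:rpm]
mysql-client [platform:dpkg test]
"""

def packages_for(platform, profiles=("test",)):
    wanted = []
    for line in BINDEP_STYLE_LIST.strip().splitlines():
        match = re.match(r"(\S+)(?:\s+\[(.*)\])?", line)
        name = match.group(1)
        qualifiers = (match.group(2) or "").split()
        platforms = [q.split(":", 1)[1] for q in qualifiers if q.startswith("platform:")]
        profile_tags = [q for q in qualifiers if not q.startswith("platform:")]
        if platforms and platform not in platforms:
            continue  # entry is for a different package manager/platform
        if profile_tags and not set(profile_tags) & set(profiles):
            continue  # entry only applies under profiles we are not using
        wanted.append(name)
    return wanted

print(packages_for("dpkg"))  # ['gcc', 'libffi-dev', 'mysql-client']
```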

Package installation can now happen via the way we really should be doing it — as part of the CI job. There is a job-macro called install-distro-packages which a test can use to call bindep to install the packages specified by the project before the run. You might notice the script has a "fallback" list of packages if the project does not specify its own dependencies — this essentially replicates the environment of a bare node as we transition to projects more strictly specifying their system requirements.

We can now start with a blank image, and all the dependencies to run the job can be expressed by and within the project — leading to a consistent and reproducible environment without any hidden dependencies. Several things have broken as part of removing bare nodes — this is actually a good thing because it means we have revealed areas where we were making assumptions in jobs about what the underlying platform provides. There are a few other job-macros that can do things like provide MySQL/Postgres instances for testing or set up other common job requirements. By splitting these types of things out from the base images we also improve the performance of jobs, which no longer waste time doing things like setting up databases when they don't need them.

As of this writing, the bindep work is new and still a work-in-progress. But the end result is that we have no more need for a separate bare node type to run unit-tests. This essentially halves the number of image-builds required and brings us to the goal of a single image for each platform running all CI.

Conclusion

While dealing with multiple providers, image-types and dependency chains has been a great effort for the infra team, to everyone's credit I don't think the project has really noticed much going on underneath.

OpenStack CI has transitioned to a situation where there is a single image type for each platform we test that deploys unmodified across all our providers and runs all testing environments equally. We have better insight into our dependencies and better tools to manage them. This leads to greatly decreased maintenance burden, better consistency and better performance; all great things to bring to OpenStack CI!

Categories: Elsewhere

Matthew Garrett: TPMs, event logs, fine-grained measurements and avoiding fragility in remote-attestation

Planet Debian - Mon, 04/04/2016 - 23:59
Trusted Platform Modules are fairly unintelligent devices. They can do some crypto, but they don't have any ability to directly monitor the state of the system they're attached to. This is worked around by having each stage of the boot process "measure" state into registers (Platform Configuration Registers, or PCRs) in the TPM by taking the SHA1 of the next boot component and performing an extend operation. Extend works like this:

New PCR value = SHA1(current value||new hash)

ie, the TPM takes the current contents of the PCR (a 20-byte register), concatenates the new SHA1 to the end of that in order to obtain a 40-byte value, takes the SHA1 of this 40-byte value to obtain a 20-byte hash and sets the PCR value to this. This has a couple of interesting properties:
  • You can't directly modify the contents of the PCR. In order to obtain a specific value, you need to perform the same set of writes in the same order. If you replace the trusted bootloader with an untrusted one that runs arbitrary code, you can't rewrite the PCR to cover up that fact
  • The PCR value is predictable and can be reconstructed by replaying the same series of operations
But how do we know what those operations were? We control the bootloader and the kernel and we know what extend operations they performed, so that much is easy. But the firmware itself will have performed some number of operations (the firmware itself is measured, as is the firmware configuration, and certain aspects of the boot process that aren't in our control may also be measured) and we may not be able to reconstruct those from scratch.

Thankfully we have more than just the final PCR data. The firmware provides an interface to log each extend operation, and you can read the event log in /sys/kernel/security/tpm0/binary_bios_measurements. You can pull information out of that log and use it to reconstruct the writes the firmware made. Merge those with the writes you performed and you should be able to reconstruct the final TPM state. Hurrah!

The problem is that a lot of what you want to measure into the TPM may vary between machines or change in response to configuration changes or system updates. If you measure every module that grub loads, and if grub changes the order that it loads modules in, you also need to update your calculations of the end result. Thankfully there's a way around this - rather than making policy decisions based on the final TPM value, just use the final TPM value to ensure that the log is valid. If you extract each hash value from the log and simulate an extend operation, you should end up with the same value as is present in the TPM. If so, you know that the log is valid. At that point you can examine individual log entries without having to care about the order that they occurred in, which makes writing your policy significantly easier.
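A minimal sketch of that replay-and-verify step, using SHA1 and 20-byte PCRs as in TPM 1.2 (the event data here is invented for the example):

```python
# Minimal sketch of "replay the event log and check it against the final PCR
# value", using SHA1 and 20-byte PCRs as in TPM 1.2. Event data is invented.
import hashlib

def extend(pcr_value: bytes, event_hash: bytes) -> bytes:
    """New PCR value = SHA1(current value || new hash)."""
    return hashlib.sha1(pcr_value + event_hash).digest()

def replay(event_log):
    """Recompute a PCR from the per-event hashes recorded in the log."""
    pcr = b"\x00" * 20  # the PCRs of interest start out as 20 zero bytes
    for event_hash in event_log:
        pcr = extend(pcr, event_hash)
    return pcr

# The event log as read from binary_bios_measurements (hashes only, simplified).
event_log = [hashlib.sha1(data).digest()
             for data in (b"firmware volume", b"bootloader", b"kernel")]

# Stand-in for the value a real TPM quote would report for this PCR.
pcr_from_tpm = b"\x00" * 20
for event_hash in event_log:
    pcr_from_tpm = extend(pcr_from_tpm, event_hash)

# If the replayed value matches the quoted value, the log is valid and the
# individual entries can be examined without caring about their order.
assert replay(event_log) == pcr_from_tpm
```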

But there's another source of fragility. Imagine that you're measuring every command executed by grub (as is the case in the CoreOS grub). You want to ensure that no inappropriate commands have been run (such as ones that would allow you to modify the loaded kernel after it's been measured), but you also want to permit certain variations - for instance, you might have a primary root filesystem and a fallback root filesystem, and you're ok with either being passed as a kernel argument. One approach would be to write two lines of policy, but there's an even more flexible approach. If the bootloader logs the entire command into the event log, when replaying the log we can verify that the event description hashes to the value that was passed to the TPM. If it does, rather than testing against an explicit hash value, we can examine the string itself. If the event description matches a regular expression provided by the policy then we're good.
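The same idea in miniature: verify that the logged description really is what was extended into the TPM, then apply the policy as a regular expression rather than as a fixed hash. The command strings and patterns below are invented examples, not CoreOS's actual policy format:

```python
# Sketch of "log the whole command, verify its hash, then match a regex":
# the hash ties the string to what was actually extended into the PCR,
# and the regex expresses the policy. Patterns here are invented examples.
import hashlib
import re

ALLOWED = [
    r"^grub_cmd: linux /vmlinuz root=/dev/sda[12] ro$",  # primary or fallback root
    r"^grub_cmd: initrd /initrd\.img$",
]

def event_ok(description: str, logged_hash: bytes) -> bool:
    # 1. The description must hash to the value that was extended into the PCR...
    if hashlib.sha1(description.encode()).digest() != logged_hash:
        return False
    # 2. ...and must match at least one policy pattern.
    return any(re.match(pattern, description) for pattern in ALLOWED)

desc = "grub_cmd: linux /vmlinuz root=/dev/sda1 ro"
print(event_ok(desc, hashlib.sha1(desc.encode()).digest()))  # True
```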

This approach makes it possible to write TPM policies that are resistant to changes in ordering and permit fine-grained definition of acceptable values, and which can cleanly separate out local policy, generated policy values and values that are provided by the firmware. The split between machine-specific policy and OS policy allows for the static machine-specific policy to be merged with OS-provided policy, making remote attestation viable even over automated system upgrades.

We've integrated an implementation of this kind of policy into the TPM support code we'd like to integrate into Kubernetes, and CoreOS will soon be generating known-good hashes at image build time. The combination of these means that people using Distributed Trusted Computing under Tectonic will be able to validate the state of their systems with nothing more than a minimal machine-specific policy description.

The support code for all of this should also start making it into other distributions in the near future (the grub code is already in Fedora 24), so with luck we can define a cross-distribution policy format and make it straightforward to handle this in a consistent way even in heterogeneous operating system environments. Remote attestation is a powerful tool for ensuring that your systems are in a valid state, but the difficulty of policy management has been a significant factor in making it difficult for people to deploy in their data centres. Making it easier for people to shield themselves against low-level boot attacks is a big step forward in improving the security of distributed workloads and makes bare-metal hosting a much more viable proposition.

Categories: Elsewhere

Zyxware Technologies: Standardized curriculum for Drupal Developers - Phase 1 - completed publishing content

Planet Drupal - Mon, 04/04/2016 - 22:07

We are happy to announce that we have completed the first phase of the project on creating a standardized Drupal curriculum for Drupal companies, as planned. We have compiled and published our Drupal training curriculum on the groups.drupal.org wiki under the Curriculum and Training group. We are now looking to get feedback from other Drupal companies and Drupal developers on the curriculum and the training materials. We also would like to invite developers from the community to contribute towards making this curriculum better.

Tags: Drupal, Drupalgive, Drupal Planet, Drupal Training, Teaching Drupal, News, Announcements
Categories: Elsewhere

Mediacurrent: AMP’lify Your Website: Google AMP 101

Planet Drupal - Mon, 04/04/2016 - 20:41

Google just recently released a new open-source initiative called the Google Accelerated Mobile Pages (AMP) Project. The AMP project aims to implement a performance baseline to provide a better mobile web experience for all users. Its goal is to prioritize speed and increase page loading performance - which is essential for any organization interested in retaining an audience in today’s dynamic, fast-paced culture.

Categories: Elsewhere

Dries Buytaert: Improving Drupal's content workflow

Planet Drupal - Mon, 04/04/2016 - 20:38

At DrupalCon Mumbai I sat down for several hours with the Drupal team at Pfizer to understand the work they have been doing on improving Drupal content management features. They built a set of foundational modules that help advance Drupal's content workflow capabilities; from content staging, to multi-site content staging, to better auditability, offline support, and several key user experience improvements like full-site preview, and more. In this post, I want to point a spotlight on some of Pfizer's modules, and kick-off an initial discussion around the usefulness of including some of these features in core.

Use cases

Before jumping to the technical details, let's talk a bit more about the problems these modules are solving.

  1. Cross-site content staging — In this case you want to synchronize content from one site to another. The first site may be a staging site where content editors make changes. The second site may be the live production site. Changes are previewed on the stage site and then pushed to the production site. More complex workflows could involve multiple staging environments like multiple sites publishing into a single master site.
  2. Content branching — For a new product launch you might want to prepare a version of your site with a new section on the site featuring the new product. The new section would introduce several new pages, updates to existing pages, and new menu items. You want to be able to build out the updated version in a self-contained 'branch' and merge all the changes as a whole when the product is ready to launch. In an election case scenario, you might want to prepare multiple sections; one for each candidate that could win.
  3. Preview your site — When you're building out a new section on your site for launch, you want to preview your entire site, as it will look on the day it goes live. This is effectively content staging on a single site.
  4. Offline browse and publish — Here is a use-case that Pfizer is trying to solve. A sales rep goes to a hospital and needs access to information when there is no wi-fi or a slow connection. The site should be fully functional in offline mode and any changes or forms submitted, should automatically sync and resolve conflicts when the connection is restored.
  5. Content recovery — Even with confirmation dialogs, people delete things they didn’t want to delete. This case is about giving users the ability to “undelete” or recover content that has been deleted from their database.
  6. Audit logs — For compliance reasons, some organizations need all content revisions to be logged, with the ability to review content that has been deleted and connect each action to a specific user so that employees are accountable for their actions in the CMS.
Technical details

All these use cases share a few key traits:

  1. Content needs to be synchronized from one place to another, e.g. from workspace to workspace, from site to site or from frontend to backend
  2. Full revision history needs to be kept
  3. Content revision conflicts need to be tracked

Much of this started as a single module: Deploy. The Deploy module was first created by Greg Dunlap for Drupal 6 in 2008. In 2012, Greg handed over maintainership to Dick Olsson who created the first Drupal 7 version (7.x-2.x) with many big improvements. Later, Dave Hall created a second Drupal 7 version (7.x-3.x) with more significant improvements based on feedback from different users. Today, both Dick and Dave work for Pfizer and have continued to include lessons learned in the Drupal 8 version of the module. After years of experience working on the Deploy module and various redesigns, the team has extracted the functionality into a set of modules:

Multiversion

This module does three things: (1) it adds revision support for all content entities in Drupal, not just nodes and block content as provided by core; (2) it introduces the concept of parent revisions so you can create different branches of your content or site; and (3) it keeps track of conflicts in the revision tree (e.g. when two revisions share the same parent). Many of these features complement the ongoing improvements to Drupal's Entity API.

Replication

Built on top of the Multiversion module, this lightweight module reads revision information stored by Multiversion, uses that to determine what revisions are missing from a given location, and lets you replicate content between a source and a target location. The next two modules, Workspace and RELAXed Web Services, depend on the Replication module.

Workspace

This module enables single-site content staging and full-site previews. The UI lets you create workspaces and switch between them. With the Replication module, different workspaces on the same site can behave like different editorial environments.

RELAXed Web Services

This module facilitates cross-site content staging. It provides a more extensive REST API for Drupal with support for UUIDs, translations, file attachments and parent revisions — all important to solve unique challenges with content staging (e.g. UUID and revision information is needed to resolve merge conflicts). The RELAXed Web Services module extends the Replication module and makes it possible to replicate content from local workspaces to workspaces on remote sites using this API.

In short, Multiversion provides the "storage plumbing", whereas Replication, Workspace, and RELAXed Web Services, provide the "transport plumbing".

Deploy

Historically Deploy module has taken care of everything from bottom to top related to content staging. But for Drupal 8 Deploy has been rewritten to just provide a UI on-top of the Workspace and Replication modules. This UI lets you manage content deployments between workspaces on a single site, or between workspaces across sites (if used together with RELAXed Web Services module). The maintainers of the Deploy module have put together a marketing site with more details on what it does: http://www.drupaldeploy.org.

Trash

To handle use case #5 (content recovery) the Trash module was implemented to restore entities marked as deleted. Much like a desktop trash or recycle bin, the module displays all entities from all supported content types where the default revision is flagged as deleted. Restoring creates a newer revision, which is not flagged as deleted.

Synchronizing sites with a battle-tested API

When a Drupal site installs and enables RELAXed Web Services it will look and behave like the REST API from CouchDB. This is a pretty clever trick because it enables us to leverage the CouchDB ecosystem of tools. For example, you can use PouchDB, a JavaScript implementation of CouchDB, to provide a fully-decoupled offline database in the web browser or on a mobile device. Using the same API design as CouchDB also gives you "streaming replication" with instant updates between the backend and frontend. This is how Pfizer implements off-line browse and publish.
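Because the API is CouchDB-compatible, a generic replication loop can move content between workspaces without knowing anything Drupal-specific. Here is a hedged sketch using the standard CouchDB _changes feed; the URLs are hypothetical, and a real replicator would also carry revision history so conflicts can be detected and resolved:

```python
# Hedged sketch of pull-and-push replication against a CouchDB-style API,
# such as the one RELAXed Web Services exposes. Endpoints are hypothetical.
import requests

SOURCE = "https://stage.example.com/relaxed/workspace-stage"
TARGET = "https://live.example.com/relaxed/workspace-live"

def replicate(source, target, since=0):
    # Ask the source which documents changed since the last sequence we saw.
    changes = requests.get(f"{source}/_changes", params={"since": since}).json()
    for change in changes.get("results", []):
        doc = requests.get(f"{source}/{change['id']}").json()
        # Push the document to the target; a real replicator would also carry
        # revision history so merge conflicts can be detected and resolved.
        requests.put(f"{target}/{change['id']}", json=doc)
    return changes.get("last_seq", since)

last_seq = replicate(SOURCE, TARGET)
```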

This animated gif shows how a content creator can switch between multiple workspaces and get a full-site preview on a single site. In this example the Live workspace is empty, while a lot of content has been added in the Stage workspace.

This animated gif shows how a workspace is deployed from 'Stage' to 'Live' on a single site. In this example the Live workspace is initially empty.

Conclusion

Drupal 8.0 core packed many great improvements, but we didn't focus much on advancing Drupal's content workflow capabilities. As we think about Drupal 8.x and beyond, it might be good to move some of our focus to features like content staging, better audit-ability, off-line support, full-site preview, and more. If you are a content manager, I'd love to hear what you think about better supporting some or all of these use cases. And if you are a developer, I encourage you to take a look at these modules, try them out and let us know what you think.

Thanks to Tim Millwood, Dick Olsson and Nathaniel Catchpole for their feedback on the blog post. Special thanks to Pfizer for contributing to Drupal.

Categories: Elsewhere

Deeson: How our teams deliver projects

Planet Drupal - Mon, 04/04/2016 - 19:22

One of the things we believe in at Deeson is giving our team the freedom and responsibility to make decisions. At many agencies there’s often a centralisation of control and influence, but we believe that by giving that control back to the people who actually do the work we get better results for our clients.

Our vision is of an empowered team with the freedom to do what they love in better ways, and the responsibility to make decisions that will deliver innovative, data- and insight-driven products that bring genuine value to clients and their end users.

Over the last year we’ve expanded, not just in size but geographically too. We now have people spread around the UK and Europe, so how do we retain that freedom while maintaining our ability to communicate well, to develop our skills individually and as a team, and to deliver the highest quality digital products to our clients?

This was the question occupying us in 2015 when we started to do some research into models for self-organising teams.

At this point the expansion of the team was already adding a level of complexity to delivering projects for our clients. Under those circumstances growing agencies will often hire more managers to handle that complexity.

We didn't want to do that because it's counter-intuitive if you believe, as we do, that people do best when they manage their own schedules and projects, rather than having someone else tell them what to do and when. Instead of adding layers of management hierarchy we wanted to enable everyone to stay focused and close to the work, and that’s what led us to start working in multi-disciplinary teams called pods.

A pod typically consists of 7-9 people and is made up of an account manager, a solutions architect, a user experience consultant and a cross-section of designers and developers. Each pod has a set number of projects and clients that they work with at any one time, meaning that everyone in the pod has a detailed knowledge of each project - and, crucially, everyone can work on every project within that pod.

Having a small, dedicated and permanent team is core to the way we work, and even more valuable is the accompanying change in how we view responsibilities.

Whereas before the developers were responsible for development work, the designers for design and the project manager for planning and scheduling, now the whole pod is jointly responsible for bringing purpose, judgement and their own specialist skills to the solving of clients’ problems.

We don’t have a gatekeeper doling out tasks to individuals, or prioritising the work, because the pod members are invested and equipped to make those decisions more effectively than anyone else.

So, what does pod working look like in practice?

We begin each day with the morning stand-up, where we share updates on a per-project basis (with distributed team members in attendance thanks to Google Hangouts). This ensures everyone in the pod is aware of the status of each project, whether they’re working directly on it or not. It also removes the need for separate daily project meetings, because any issues are identified right away and can be dealt with by the appropriate people.

We do most of our written client communication via Basecamp, which means we all have a record of what’s being discussed and agreed. This is reflected in the quality of the work we give back to the client, who gets the same level of communication as before - but with a wider pool of people with a deep understanding of the project who can step in and be involved if necessary.

And that flexibility and scalability is key: depending on our schedule and team size we can add new pods as and when we need to, in order to manage our growing portfolio of clients and projects.

Sharing the knowledge and responsibility across the pod means we don’t find ourselves stuck if someone is ill or goes on holiday, and reducing the number of meetings we have means everyone has more time and a better ability to focus on the day’s work, as well as a quicker feedback loop for solving problems.

Close, collaborative pod working gives us all a chance to share our knowledge and skills, to spot where we can help someone out by pointing them in the direction of a solution to a tricky problem, or by spending half an hour working on something that will unblock their progress. It also gives us the opportunity to develop pod members’ individual careers, by putting people with different backgrounds and levels of experience together and letting them find the best ways of working together and sharing their expertise.

Cross-discipline collaboration and shared learning is essential, but we also recognise that we want to grow strong specialisms as well as strong delivery units. We facilitate this by running separate teams we call chapters: groups built around specialisms like engineering, UX, and design.

Each chapter has a lead - an experienced practitioner who in addition to their role within their pod is responsible for promoting best practice and shared professional development across the chapter. In this way we ensure that our core skills are constantly being developed even as our day-to-day focus is on delivering work to our clients.

So what positive change have we seen since we’ve been working this way?

For one, it’s led to a stronger team spirit (as well as some friendly rivalry between pods).

And while we are proud of our flat management structure, the pod model gives everyone the immediate day-to-day support they need from their peers, so that decisions can be made, issues resolved and advice received right away.

Unlike with other agencies, our clients don’t have to spend a lot of time dealing with account managers. Instead they get direct contact with skilled, dedicated, knowledgeable and trusted people who are focused on doing the work that delivers what they and their businesses need.

Categories: Elsewhere

Promet Source: A Guide to Awesome User Testing Tools, 2016 Edition

Planet Drupal - Mon, 04/04/2016 - 18:00

When creating or redesigning any digital assets, your decisions should revolve around the users. Ask questions like:

► What are the end user's needs?

► How will this design serve those users?

► What are the business goals for your brand and how will they impact the end user?

► How will users interact with this new design or feature of your website?

Categories: Elsewhere

Appnovation Technologies: Introduction to Drupal 8 Module Development

Planet Drupal - Mon, 04/04/2016 - 16:55
Drupal 8's philosophy and architecture are different from Drupal 7's. From both backend and frontend standpoints there are new concepts.
Categories: Elsewhere

Evolving Web: Choosing Modules and Themes for Drupal 8

Planet Drupal - Mon, 04/04/2016 - 16:53

If you're a Drupal developer who's on the fence about trying Drupal 8, we hope this post will push you to go for it... or inform you that it's better to wait, if your project depends on a module that's not there yet.

Categories: Elsewhere

Acquia Developer Center Blog: Try Acquia Cloud’s New (Faster and More Responsive) User Interface

Planet Drupal - Mon, 04/04/2016 - 15:16

Today we’re pleased to announce the public beta release of our updated user interface (UI) for Acquia Cloud.

Our goal is to create a UI that offers a level of automation that developers need to be efficient. We’re doing this by reducing and eliminating the times you’ll have to jump between pages, whether you're switching environments, moving between code, databases, and files, or monitoring your website's health. Now it's all on one page.

Tags: acquia drupal planet
Categories: Elsewhere

Raphaël Hertzog: My Free Software Activities in February and March 2016

Planet Debian - Mon, 04/04/2016 - 10:56

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donators (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

I skipped my monthly report last time so this one will cover two months. I will try to list only the most important things to not make it too long.

The Debian Handbook

I worked with Ryuunosuke Ayanokouzi to prepare a paperback version of the Japanese translation of my book. Thanks to the efforts of everybody, it's now available. Unfortunately, Lulu declined to take it into its “distribution” program, so it won't be available in traditional bookstores (like Amazon, etc.). The reason is that they do not support non-latin character sets in the meta-data.

I tried to cheat a little bit by inputting the description in English (still explaining that the book was in Japanese) but they rejected it nevertheless because the English title could mislead people. So the paperback is only available on lulu.com. Fortunately, the shipping costs are reasonable if you pick the most economic offer.

Following this I invited the Italian, Spanish and Brazilian Portuguese translators to complete the work (they were close, with all the strings already translated, mainly missing translated screenshots and some back-cover content) so that we can also release paperback versions in those languages. It's getting close to completion for them. Hopefully we will have those available by next month.

Distro Tracker

In early February, I tweaked the configuration to send (by email) exceptions generated by incoming mails and by routine tasks. Before this they were logged but I did not take the time to look into them. This quickly brought a few issues to light and I fixed them as they appeared: for instance the bounce handling code was getting confused when the character case was not respected, and it appears that some emails come back to us after having been lowercased. Also the code was broken when the “References” field used more than one line on incoming control emails.

This brought into light a whole class of problems with the database storing twice the same email with only differing case. So I did further work to merge all those duplicate entries behind a single email entry.

Later, the experimental Sources files changed and I had to tweak the code to work with the removal of the Files field (relying instead on Checksums-* to find out the various files part of the entry).

At some point, I also fixed the login form to not generate an exception when the user submits an empty form.

I also decided that I no longer wanted to support Django 1.7 in distro tracker as Django 1.8 is the current LTS version. I asked the Debian system administrators to update the package on tracker.debian.org with the version in jessie-backports. This allowed me to fix a few deprecation warnings that I kept triggering because I wanted the code to work with Django 1.7.

One of those warnings was generated by django-jsonfield though and I could not fix it immediately. Instead I prepared a pull request that I submitted to the upstream author.

Oh, and a last thing, I tweaked the CSS to densify the layout on the package page. This was one of the most requested changes from the people who were still preferring packages.qa.debian.org over tracker.debian.org.

Kali and new pkg-security team

As part of my Kali work, I have been fixing RC bugs in Debian packages that we use in Kali. But in many cases, I stumbled upon packages whose maintainers were really missing in action (MIA). Up to now, we had only been doing non-maintainer uploads (NMUs), but I want to be able to maintain those packages more effectively, so we created a new pkg-security team (we're only two right now and we have no documentation yet, but if you want to join, you're welcome, in particular if you maintain a package which is useful in the security field).

arm64 work. The first 3 packages that we took over (ssldump, sucrack, xprobe) are actually packages that were missing arm64 builds. We just started our arm64 port on Kali and we fixed them for that architecture. Since they were no longer properly maintained, in most cases it was just a matter of using dh_autoreconf to get up-to-date config.{sub,guess} files.

We still miss a few packages on arm64: vboot-utils (that we will likely take over soon since it's offered for adoption), ruby-libv8 and ruby-therubyracer, and ntopng (we have to wait for a new luajit, which is only in experimental right now). We also noticed that dh-make-golang was not available on arm64; after some discussion on #debian-buildd, I filed two bugs for this: #819472 on dh-make-golang and #819473 on dh-golang.

RC bug fixing. hdparm was affected by multiple RC bugs and the release managers were trying to get rid of it from testing. This removed multiple packages that were used by Kali and its users. So I investigated the situation of that package, convinced the current maintainers to orphan it, asked for new maintainers on debian-devel, reviewed multiple updates prepared by the new volunteers and sponsored their work. Now hdparm is again RC-bug free and has the latest upstream version. We also updated jsonpickle to 0.9.3-1 to fix RC bug #812114 (that I forwarded upstream first).

Systemd presets support in init-system-helpers. I tried to find someone (to hire) to implement the systemd preset feature I requested in #772555 but I failed. Still, Andreas Henriksson was kind enough to give it a try and sent a first patch. I tried it and found some issues so I continued to improve it and simplify it… I submitted an updated patch and pinged Martin Pitt. He pointed me to the DEP-8 test failures that my patch was creating. I quickly fixed those afterwards. This patch is in use in Kali and lets us disable network services by default. I would like to see it merged in Debian so that everybody can set up systemd preset files and have their preferences respected at installation time.

Misc bug reports. I filed #813801 to request a new upstream release of kismet. Same for masscan in #816644 and for wkhtmltopdf in #816714. We packaged (before Debian) a new upstream release of ruby-msgpack and found out that it was not building on armel/armhf so we filed two upstream tickets (with a suggested fix). In #814805, we asked the pyscard maintainer to reinstate python-pyscard that was dropped (keeping only the Python3 version) as we use the Python 2 version in Kali.

And there’s more: I filed #816553 (segfault) and #816554 against cdebootstrap. I asked for dh-python to have a better behaviour after having being bitten by the fact that “dh –with python3” was not doing what I expected it to do (see #818175). And I reported #818907 against live-build since it is failing to handle a package whose name contains an upper case character (it’s not policy compliant but dpkg supports them).

Misc packaging

I uploaded Django 1.9.2 to unstable and 1.8.9 to jessie-backports. I provided the supplementary information that Julien Cristau asked me in #807654 but despite this, this jessie update has been ignored for the second point release in a row. It is now outdated until I update it to include the security fixes that have been released in the mean time but I’m not yet sure that I will do it… the lack of cooperation of the release team for that kind of request is discouraging.

I sponsored multiple uploads of dolibarr (notably a security update) and tcpdf (to fix one RC bug).

Thanks

See you next month for a new summary of my activities.


Categories: Elsewhere
