Feed aggregator

jfhovinne opened issue nuvoleweb/integration#11

Devel - Mon, 11/04/2016 - 14:30
Apr 11, 2016 - FileSystemBackend create method can fail since mkdir is not recursive
Categories: Networks

Peter Eisentraut: Some git log tweaks

Planet Debian - Mon, 11/04/2016 - 14:00

Here are some tweaks to git log that I have found useful. How applicable they are may depend on the individual project's workflow.

Git stores separate author and committer information for each commit. How these are generated and updated is sometimes mysterious but generally makes sense. For example, if you cherry-pick a commit to a different branch, the author information stays the same but the committer information is updated. git log defaults to showing the author information. But I generally care less about that than the committer information, because I’m usually interested in when the commit arrived in my or the public repository, not when it was initially thought about. So let’s try to change the default git log format to show the committer information instead. Again, depending on the project and the workflow, there can be other preferences.

To create a different default format for git log, first create a new format by setting the Git configuration item pretty.somename. I chose pretty.cmedium because it’s almost the same as the default medium but with the author information replaced by the committer information.

    [pretty]
        cmedium = "format:%C(auto,yellow)commit %H%C(auto,reset)%nCommit: %cn <%ce>%nCommitDate: %cd%n%n%w(0,4,4)%s%n%+b"

Unfortunately, the default git log formats are not defined in terms of these placeholders but are hardcoded in the source, so this is my best reconstruction using the available means.

You can use this as git log --pretty=cmedium, and you can set this as the default using

    [format]
        pretty = cmedium

If you find this useful and you’re the sort of person who is more interested in their own timeline than the author’s history, you might also like two more tweaks.

First, add %cr for relative date, so it looks like

    [pretty]
        cmedium = "format:%C(auto,yellow)commit %H%C(auto,reset)%nCommit: %cn <%ce>%nCommitDate: %cd (%cr)%n%n%w(0,4,4)%s%n%+b"

This adds a relative designation like “2 days ago” to the commit date.

Second, set

    [log]
        date = local

to have all timestamps converted to your local time.
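
For reference, the combined result in ~/.gitconfig would then look something like this:

    [pretty]
        cmedium = "format:%C(auto,yellow)commit %H%C(auto,reset)%nCommit: %cn <%ce>%nCommitDate: %cd (%cr)%n%n%w(0,4,4)%s%n%+b"
    [format]
        pretty = cmedium
    [log]
        date = local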

When you put all this together, you turn this

    commit e2c117a28f767c9756d2d620929b37651dbe43d1
    Author: Paul Eggert <eggert@cs.ucla.edu>
    Date: Tue Apr 5 08:16:01 2016 -0700

into this

    commit e2c117a28f767c9756d2d620929b37651dbe43d1
    Commit: Paul Eggert <eggert@cs.ucla.edu>
    CommitDate: Tue Apr 5 11:16:01 2016 (3 days ago)

PS: If this is lame, there is always this: http://fredkschott.com/post/2014/02/git-log-is-so-2005/

Categories: Elsewhere

Chapter Three: Javascript testing comes to Drupal 8

Planet Drupal - Mon, 11/04/2016 - 13:47

With the arrival of Drupal 8.1.0, you can finally test JavaScript interactions on Drupal.org. This is the culmination of years of work by many developers to improve the testing API and infrastructure. Without the improvements delivered by Drupal 8 it would be hard to leverage Mink, PhantomJS and PHPUnit to run our tests, and without the new DrupalCI infrastructure we would have nowhere to run them.



Categories: Elsewhere

Drop Guard: Big update: the new onboarding process, improved patching workflow and more

Planet Drupal - Mon, 11/04/2016 - 12:14
Igor Kandyba - Mon, 11/04/2016 - 12:14

Today, we're excited to introduce you to a number of new features, improvements and fixes for Drop Guard - the first update in the series of releases planned for 2016. It includes many enhancements designed to improve the user experience when creating projects in Drop Guard, support for "Unsupported updates", and an even smarter automated patching workflow. Read below to learn about the major improvements, and don't forget to check your Drop Guard account to try them out yourself. Let's dive right in!

Drop Guard features Drupal Planet
Categories: Elsewhere

Kristof De Jaeger: Taking a (Drupal 8) website offline using AppCache

Planet Drupal - Mon, 11/04/2016 - 11:41
Written on April 11, 2016 - 11:41

A native mobile application which caches data locally is one way to make content available offline. However, not everyone has the time and/or money to create a dedicated app, and frankly, it's not always an additional asset. What if the browser could work without a network connection and still serve content? Application Cache and/or Service Workers to the rescue!

For Frontend United 2016, Mathieu and I experimented to see how far we could take AppCache and make the sessions, speakers and some additional content available offline, using data from within the Drupal site. There are a couple of pitfalls when implementing this, some of which are nasty (see the A List Apart link at the bottom for more information). Enter Drupal, which adds another layer of complexity with its dynamic content and themes. JavaScript and CSS aggregation is also extremely tricky to get right. So after trial and error and a lot of reading, we came up with the following concept:

  1. Only add the manifest attribute to "offline" pages, which are completely separate from "online" pages, even though they might serve the same content. In other words, you create a sandboxed version of some of your site's content which can live on its own. Another technique is a hidden iframe which loads a page containing the html tag with the manifest attribute. You can embed this iframe on any page you like. This gives you the option to create a page that users can visit as an opt-in to take the site offline. Both techniques give us full control and no side effects, so that when the network is available the site works normally.
  2. You define the pages which you want to store in the cache. They are served by Drupal, but on a different route than the original (e.g. node/1 becomes offline/node/1) and use different templates. These are Twig templates, so you can override the defaults to your own needs. Other assets like stylesheets and JavaScript files can be configured to be included as well.
  3. The manifest thus contains everything that we need to create the offline version when your device has no network connection. In our case, it contains the list of speakers and sessions, content pages and some assets like JavaScript, stylesheets, the logo and images (a rough sketch of such a manifest follows this list).
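
For illustration only, such a manifest might look roughly like this (the paths below are made up and not taken from the actual Frontend United site):

    CACHE MANIFEST
    # hypothetical example

    CACHE:
    offline/node/1
    offline/sessions
    offline/speakers
    themes/custom/css/offline.css
    themes/custom/js/offline.js
    themes/custom/images/logo.png

    NETWORK:
    *
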
Offline in the browser or on the homescreen

Go to the Offline homepage of Frontend United and wait until the 'The content is now available offline!' message appears, which means you just downloaded 672 KB of data - really, really small, surprising no? Now switch off your network connection and reload the browser: still there! Click around and you'll be able to check the offline version at any time. If you're on a mobile device, the experience can be even sweeter: you can add this page to your home screen, making it available as an 'app'. On iOS, you need to open the app once while still connected to the network; we really do hope Safari/iOS fixes this behavior, since it is not necessary on Android. After that, turn off your network and launch the app again. Oh, and it works on a watch too, if you have a browser on it. If that isn't cool, we don't know what is! We have a little video to show you what it looks like. Watch (pun intended) and enjoy! Oh, and in case we make changes to the pages, you will see a different notification telling you that the content has been updated - if your device has a network connection, of course.

Drupal integration

We've created a new project on Drupal.org, called Offline App, available for Drupal 8. The project contains the necessary code and routes for generating the appcache, iframe, pages (nodes and views) and settings to manipulate the manifest content. Three new regions are exposed in which you can place the content for offline use. Those regions are used in offline-app-page.html.twig - but any region is available if you want to customize. Two additional view modes are created for content types, and the read more link can be made available in the 'Offline teaser' mode. Formatters are available for long texts, to strip internal links and certain tags (e.g. embed and iframe), and for images, to make sure that 'Link to content' points to the 'Offline path'. Last, but not least, an 'Offline' Views display is available for creating lists. We're still in the process of making everything even more flexible and less error-prone when configuring the application. However, the code that is currently available is used as-is on the Frontend United website right now.

This module does not pretend to be the ultimate solution for offline content; see it as an example of quickly exposing a manifest containing URLs from an existing Drupal installation for an offline version of your website. Other Drupal projects are available that try to integrate with AppCache or Service Workers; however, apart from https://www.drupal.org/project/pwa, some are unsupported or in a very early state. Note that I've been in contact with Théodore already, and we'll see how we can combine our efforts to come up with one single solution instead of having multiple ones.

What about service workers?

Not all browsers support the API yet. Even though AppCache is marked deprecated, we wanted to make sure everyone could have the same offline experience. However, we'll start adding support for service workers soon using the same concept.

We're also planning to start experimenting with delivering personal content as well, since that's also possible, yet a little trickier.

Links
Categories: Elsewhere

Matthew Garrett: Making it easier to deploy TPMTOTP on non-EFI systems

Planet Debian - Mon, 11/04/2016 - 07:59
I've been working on TPMTOTP a little this weekend. I merged a pull request that adds command-line argument handling, which includes the ability to choose the set of PCRs you want to seal to without rebuilding the tools, and also lets you print the base32 encoding of the secret rather than the QR code, so you can import it into a wider range of devices. More importantly, it also adds support for setting the expected PCR values on the command line rather than reading them out of the TPM, so you can now re-seal the secret against new values before rebooting.
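
The codes themselves are standard RFC 6238 TOTP, so any device that imports the base32 secret derives them the same way. As a minimal illustration (this is not TPMTOTP code, and the secret below is made up), a Python sketch of the derivation:

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32, digits=6, step=30):
        """Derive an RFC 6238 TOTP code from a base32-encoded secret."""
        key = base64.b32decode(secret_b32.upper())
        counter = struct.pack(">Q", int(time.time()) // step)
        digest = hmac.new(key, counter, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # made-up secret, prints a six-digit code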

I also wrote some new code myself. TPMTOTP is designed to be usable in the initramfs, allowing you to validate system state before typing in your passphrase. Unfortunately, the initramfs itself is one of the things that's measured. So, you end up with something of a chicken-and-egg problem - TPMTOTP needs access to the secret, and the obvious thing to do is to put the secret in the initramfs. But the secret is sealed against the hash of the initramfs, and so you can't generate the secret until after the initramfs has been built. Modify the initramfs to insert the secret and you change the hash, so the secret is no longer released. Boo.

On EFI systems you can handle this by sticking the secret in an EFI variable (there's some special-casing in the code to deal with the additional metadata on the front of things you read out of efivarfs). But that's not terribly useful if you're not on an EFI system. Thankfully, there's a way around this. TPMs have a small quantity of NVRAM built into them, so we can stick the secret there. If you pass the -n argument to sealdata, that'll happen. The unseal apps will attempt to pull the secret out of NVRAM before falling back to looking for a file, so things should just magically work.

I think it's pretty feature-complete now, other than TPM2 support? That's on my list.

Categories: Elsewhere

Evolving Web: Bringing files along for the ride to D8

Planet Drupal - Mon, 11/04/2016 - 03:24

We just upgraded our site to Drupal 8, and a big part of that was migrating content. Most content was in JSON files or SQL dumps, which are supported by Drupal's migrate module. But what about images and other files? How could we bring those along?

We'll show how to write a custom migrate process plugin!

Categories: Elsewhere

Petter Reinholdtsen: Let's make a Norwegian Bokmål edition of The Debian Administrator's Handbook

Planet Debian - Sun, 10/04/2016 - 23:20

During this weekend's bug squashing party and developer gathering, we decided to do our part to make sure there are good books about Debian available in Norwegian Bokmål, and got in touch with the people behind the Debian Administrator's Handbook project to get started. If you want to help out, please start contributing via the hosted Weblate project page, and get in touch using the translators mailing list. Please also check out the instructions for contributors.

The book is already available on paper in English, French and Japanese, and our goal is to get it available on paper in Norwegian Bokmål too. In addition to the paper edition, there are also EPUB and Mobi versions available. And there are incomplete translations available for many more languages.

Categories: Elsewhere

Vincent Bernat: Testing network software with pytest and Linux namespaces

Planet Debian - Sun, 10/04/2016 - 16:30

Started in 2008, lldpd is an implementation of IEEE 802.1AB-2005 (aka LLDP) written in C. While it contains some unit tests, like much other network-related software of the time, their coverage is pretty poor: they are hard to write because the code is written in an imperative style and tightly coupled with the system. It would require extensive mocking [1]. While a rewrite (complete or iterative) would help make the code more test-friendly, it would be quite an effort and would likely introduce operational bugs along the way.

To get better test coverage, the major features of lldpd are now verified through integration tests. Those tests leverage Linux network namespaces to set up a lightweight and isolated environment for each test. They run through pytest, a powerful testing tool.

pytest in a nutshell

pytest is a Python testing tool whose primary use is to write tests for Python applications, but which is versatile enough for other creative uses. It is bundled with three killer features:

  • you can directly use the assert keyword,
  • you can inject fixtures in any test function, and
  • you can parametrize tests.
Assertions

With unittest, the unit testing framework included with Python, and many similar frameworks, unit tests have to be encapsulated into a class and use the provided assertion methods. For example:

    class testArithmetics(unittest.TestCase):
        def test_addition(self):
            self.assertEqual(1 + 3, 4)

The equivalent with pytest is simpler and more readable:

    def test_addition():
        assert 1 + 3 == 4

pytest will analyze the AST and display useful error messages in case of failure. For further information, see Benjamin Peterson’s article.

Fixtures

A fixture is the set of actions performed in order to prepare the system to run some tests. With classic frameworks, you can only define one fixture for a set of tests:

    class testInVM(unittest.TestCase):

        def setUp(self):
            self.vm = VM('Test-VM')
            self.vm.start()
            self.ssh = SSHClient()
            self.ssh.connect(self.vm.public_ip)

        def tearDown(self):
            self.ssh.close()
            self.vm.destroy()

        def test_hello(self):
            stdin, stdout, stderr = self.ssh.exec_command("echo hello")
            stdin.close()
            self.assertEqual(stderr.read(), b"")
            self.assertEqual(stdout.read(), b"hello\n")

In the example above, we want to test various commands on a remote VM. The fixture launches a new VM and configures an SSH connection. However, if the SSH connection cannot be established, the fixture will fail and the tearDown() method won't be invoked. The VM will be left running.

Instead, with pytest, we could do this:

    @pytest.yield_fixture
    def vm():
        r = VM('Test-VM')
        r.start()
        yield r
        r.destroy()

    @pytest.yield_fixture
    def ssh(vm):
        ssh = SSHClient()
        ssh.connect(vm.public_ip)
        yield ssh
        ssh.close()

    def test_hello(ssh):
        stdin, stdout, stderr = ssh.exec_command("echo hello")
        stdin.close()
        assert stderr.read() == b""
        assert stdout.read() == b"hello\n"

The first fixture will provide a freshly booted VM. The second one will set up an SSH connection to the VM provided as an argument. Fixtures are used through dependency injection: just give their names in the signature of the test functions and fixtures that need them. Each fixture only handles the lifetime of one entity. Whether a dependent test function or fixture succeeds or fails, the VM will always be destroyed in the end.

Parameters

If you want to run the same test several times with a varying parameter, you can dynamically create test functions or use one test function with a loop. With pytest, you can parametrize test functions and fixtures:

    @pytest.mark.parametrize("n1, n2, expected", [
        (1, 3, 4),
        (8, 20, 28),
        (-4, 0, -4)])
    def test_addition(n1, n2, expected):
        assert n1 + n2 == expected
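
Fixtures can be parametrized in the same way; a small hypothetical example (the fixture name and values are made up):

    import pytest

    @pytest.fixture(params=[1500, 9000])
    def mtu(request):
        # Each test that requests this fixture runs once per parameter value.
        return request.param

    def test_mtu_is_sane(mtu):
        assert 68 <= mtu <= 9216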

Testing lldpd

The general plan to test a feature in lldpd is the following:

  1. Setup two namespaces.
  2. Create a virtual link between them.
  3. Spawn a lldpd process in each namespace.
  4. Test the feature in one namespace.
  5. Check with lldpcli that we get the expected result in the other one.

Here is a typical test using the most interesting features of pytest:

    @pytest.mark.skipif('LLDP-MED' not in pytest.config.lldpd.features,
                        reason="LLDP-MED not supported")
    @pytest.mark.parametrize("classe, expected", [
        (1, "Generic Endpoint (Class I)"),
        (2, "Media Endpoint (Class II)"),
        (3, "Communication Device Endpoint (Class III)"),
        (4, "Network Connectivity Device")])
    def test_med_devicetype(lldpd, lldpcli, namespaces, links,
                            classe, expected):
        links(namespaces(1), namespaces(2))
        with namespaces(1):
            lldpd("-r")
        with namespaces(2):
            lldpd("-M", str(classe))
        with namespaces(1):
            out = lldpcli("-f", "keyvalue", "show", "neighbors", "details")
            assert out['lldp.eth0.lldp-med.device-type'] == expected

First, the test will be executed only if lldpd was compiled with LLDP-MED support. Second, the test is parametrized. We will execute four distinct tests, one for each role that lldpd should be able to take as an LLDP-MED-enabled endpoint.

The signature of the test has four parameters that are not covered by the parametrize() decorator: lldpd, lldpcli, namespaces and links. They are fixtures. A lot of magic happens in them to keep the actual tests short:

  • lldpd is a factory to spawn an instance of lldpd. When called, it will set up the current namespace (setting up the chroot, creating the user and group for privilege separation, replacing some files to be distribution-agnostic, …), then call lldpd with the additional parameters provided. The output is recorded and added to the test report in case of failure. The module also contains the creation of the pytest.config.lldpd object that is used to record the features supported by lldpd and skip non-matching tests. You can read fixtures/programs.py for more details.

  • lldpcli is also a factory, but it spawns instances of lldpcli, the client to query lldpd. Moreover, it will parse the output into a dictionary to reduce boilerplate.

  • namespaces is one of the most interesting pieces. It is a factory for Linux namespaces. It will spawn a new namespace or refer to an existing one. It is possible to switch from one namespace to another (using with) as they are context managers. Behind the scenes, the factory maintains the appropriate file descriptors for each namespace and switches to them with setns(). Once a test is done, everything is wiped out as the file descriptors are garbage collected. You can read fixtures/namespaces.py for more details; a rough sketch of the core idea follows this list. It is quite reusable in other projects [2].

  • links contains helpers to handle network interfaces: creation of virtual Ethernet links between namespaces, creation of bridges, bonds and VLANs, etc. It relies on the pyroute2 module. You can read fixtures/network.py for more details.
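
To give a rough idea of what happens behind the namespaces fixture, here is a minimal, standalone Python sketch of holding a namespace file descriptor and entering it with setns(). It is not the actual fixtures/namespaces.py; it assumes Linux with glibc and needs to run as root:

    import ctypes
    import os

    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    CLONE_NEWNET = 0x40000000  # from <sched.h>

    class NetworkNamespace:
        """Keep a file descriptor on a network namespace and enter it with setns()."""

        def __init__(self):
            # Remember the current namespace, create a new one, keep a handle
            # on it, then switch back.
            self.previous = os.open("/proc/self/ns/net", os.O_RDONLY)
            if libc.unshare(CLONE_NEWNET) != 0:
                raise OSError(ctypes.get_errno(), "unshare() failed (are you root?)")
            self.fd = os.open("/proc/self/ns/net", os.O_RDONLY)
            libc.setns(self.previous, CLONE_NEWNET)

        def __enter__(self):
            libc.setns(self.fd, CLONE_NEWNET)
            return self

        def __exit__(self, *exc):
            libc.setns(self.previous, CLONE_NEWNET)

Each with block then runs in the chosen namespace; the real fixture does quite a bit more (other namespace types, error handling and cleanup).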

You can see an example of a test run on the Travis build for 0.9.2. Since each test is correctly isolated, it’s possible to run parallel tests with pytest -n 10 --boxed. To catch even more bugs, both the address sanitizer (ASAN) and the undefined behavior sanitizer (UBSAN) are enabled. In case of a problem, notably a memory leak, the faulty program will exit with a non-zero exit code and the associated test will fail.

  1. A project like cwrap would definitely help. However, it lacks support for Netlink and raw sockets, which are essential to lldpd's operation.

  2. There are three main limitations in the use of namespaces with this fixture. First, when creating a user namespace, only root is mapped to the current user. With lldpd, we have two users (root and _lldpd). Therefore, the tests have to run as root. The second limitation is with the PID namespace. It's not possible for a process to switch from one PID namespace to another. When you call setns() on a PID namespace, only children of the current process will be in the new PID namespace. The PID namespace is convenient to ensure everyone gets killed once the tests are terminated, but you must keep in mind that /proc must be mounted in children only. The third limitation is that, for some namespaces (PID and user), all threads of a process must be part of the same namespace. Therefore, don't use threads in tests; use the multiprocessing module instead.

Categories: Elsewhere

Lullabot: Lullabot DrupalCon Sessions 2016

Planet Drupal - Sun, 10/04/2016 - 10:29

This year we have a stellar lineup of sessions from the Lullabot and Drupalize.Me teams that were accepted for DrupalCon North America, being held in New Orleans. Take a look at who is presenting and read a short synopsis of what they'll be talking about.

Coding and Development

Altering, Extending, and Enhancing Drupal 8 - Joe Shindelar
A large part of Drupal's appeal lies in its flexibility: the fact that a developer can alter, extend, and enhance almost any aspect of Drupal without having to hack core. Historically this versatility has been made possible through the existence of hooks - specially named PHP functions that are executed at critical points during the fulfillment of a request - and they've served the framework well for years. But times are changing, and Drupal 8 offers a variety of new patterns that all module developers will be required to learn and understand.

Configuration Management for Developers in Drupal 8 - Matthew Tift
Is the configuration system your favorite feature of Drupal 8? Are you interested in doing continuous integration? Do you want to easily export all of your Drupal configuration to code? Interested in building a best-practice continuous integration and deployment solution? This session, hosted by co-maintainers of the configuration system, will focus on how Drupal 8's configuration management system works, how to integrate it with a continuous integration system, and what developers can do to extend its power through contributed modules and custom code. Come with your questions and learn more about this magical part of Drupal 8.

Core Conversations

Drupal (admin) as an application: More JavaScript in core? - Marc Drummond
In recent months, much debate has revolved around the compelling user experiences increasingly accompanying the runaway growth of JavaScript frameworks. Some argue that Drupal already has many moving parts and should evolve toward more seamless user experiences with existing tools and better processes. Some argue that Drupal should address this trend with additional capabilities for JavaScript in the form of a JavaScript framework. Some argue we should look at using modern PHP and JavaScript technologies that don't require a JavaScript framework. Others have positions that fall both inside and outside this spectrum!

Learning to Let Go (Contrib Burnout) and Module Giveaway - Dave Reid
How can someone deeply involved in the Drupal contributed module ecosystem start to step away? How do we handle burnout not just in Drupal core development, but in contrib? I'd like to start a conversation based on the challenges I have encountered and currently face personally and emotionally on my journey from being one of the top contributors to Drupal 7 and a prolific writer of modules, to someone starting a family and needing to rebalance their personal, work, and Drupal life. With so much focus on getting people involved in Drupal.org, are there technical solutions we can explore to help make active contributors happier?

Drupal.org Documentation Is Getting An Overhaul - Joe Shindelar
Having high-quality documentation available on Drupal.org is key to gaining wider adoption, growing the community, and the overall success of the Drupal project. I want to share the work related to documentation going on in the community, as well as some of our plans for continued improvement in the future.

Front End

Debugging, Profiling, & Rocking Out with Browser-Based Developer Tools! - Mike Herchel
Browser-based developer tools have become an indispensable tool for modern front-end web development. New features and changes are being added at a rapid pace, and keeping up with all of the changes is difficult, but well worth it! In this session, Mike will walk attendees through modern debugging techniques, tips and tricks, front-end profiling, and more!

Sizing up responsive images: Make a plan before you Drupal - Marc Drummond
Drupal 8 has built-in responsive images support based off of Drupal 7's contributed Picture and Breakpoint modules. Understanding how to use those modules without first making a plan could easily lead to a cat-tastrophe!

Horizons

AMPing up Drupal - Karen Stevenson, Matthew Tift, and Marc Drummond
In many cases, the mobile web is a slow and frustrating experience. The Accelerated Mobile Pages (AMP) Project, which involves Google, is an open source initiative that embodies the vision that publishers can create mobile-optimized content once and have it load instantly everywhere. When AMP was first introduced in October 2015, many commentators immediately compared it to Facebook's Instant Articles and Apple's News app. One of the biggest differentiators between AMP and other solutions is the fact that AMP is open source.

Beyond the Blink: Add Drupal to Your IoT Playground - Amber Himes Matz
What does making a light blink have to do with Drupal? Come to this session to find out how you can add Drupal to your Internet of Things data playground. (THERE WILL BE BLINKING LIGHTS.)

Site Building

Recoupling: Bridging Design and Structured Content - Jeff Eaton
For years we've talked about separating content and presentation. Structure, reuse, and standardization are the name of the game in a future-friendly, multi-channel world; aesthetics are someone else's concern... right?

UX

Web Accessibility 101: Principles, Concepts, and Financial Viability - Helena Zubkow
If your website wouldn't work for anyone living in the state of New York, would that be a launch-blocker? Of course! So why are we ignoring the even larger population of people with disabilities?

Photo by Jeff Turner, used under a Creative Commons license.

Categories: Elsewhere
