Elsewhere

Talha Paracha: GSoC'16 – Pubkey Encrypt – Week 11 Report

Planet Drupal - Wed, 10/08/2016 - 02:00

As you might’ve already guessed, Pubkey Encrypt is a Google Summer of Code 2016 sponsored project. As part of the GSoC program, I’ve spent the last 2.5 months building this module for Drupal 8. The journey so far has been amazing, and now we’re approaching the end of the program, so I spent this week finalizing the module. For those who don’t know, Pubkey Encrypt is a security-related module which provides a way to encrypt data with users’ login credentials. But the way the module is designed, it delegates the task of actual data encryption/decryption to some other module. Previously we were using Encrypt Test for this purpose, which was just a test sub-module within the Encrypt module, while waiting for the Real AES module to reach a stable state. A few weeks ago I posted a patch to fix that module, but its maintainers haven’t responded yet and its HEAD is still broken. We therefore decided to use the PHPSecLib Encryption module, which in turn uses the external PHP Secure Communications Library and is hence expected to be pretty secure. The task seemed quite simple, but the relevant changes broke all the tests for Pubkey Encrypt.

Categories: Elsewhere

John Goerzen: Easily Improving Linux Security with Two-Factor Authentication

Planet Debian - Wed, 10/08/2016 - 00:23

2-Factor Authentication (2FA) is a simple way to help improve the security of your systems. It restricts the scope of damage if a machine is compromised. If, for instance, you have a security token or authenticator app on your phone that is required for ssh to a remote machine, then even if every laptop you use to connect to the remote is totally owned, an attacker cannot establish a new ssh session on their own.

There are a lot of tutorials out there on the Internet that get you about halfway there, so here is some more detail.

Background

In this article, I will be focusing on authentication in the style of Google Authenticator, which is a special case of OATH HOTP or TOTP. You can use the Google Authenticator app, FreeOTP, or a hardware token like Yubikey to generate tokens with this. They are all 100% compatible with Google Authenticator and libpam-google-authenticator.

The basic idea is that there is a pre-shared secret key. At each login, a different and unique token is required, which is generated based on the pre-shared secret key and some other information. With TOTP, the “other information” is the current time, implying that both machines must be reasonably well in sync time-wise. With HOTP, the “other information” is a count of the number of times the pre-shared key has been used. Both typically have a “window” on the server side that accepts tokens generated within a certain number of seconds (TOTP) or a certain number of counter steps (HOTP).

The beauty of this system is that after the initial setup, no Internet access is required on either end to validate the key (though TOTP requires both ends to be reasonably in sync time-wise).
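As an illustration, the oathtool utility (mentioned again at the end of this post) can compute the same tokens an authenticator app does; the base32 secret below is just a placeholder:

oathtool --totp -b JBSWY3DPEHPK3PXP        # TOTP: derived from the secret and the current time
oathtool --hotp -c 5 -b JBSWY3DPEHPK3PXP   # HOTP: derived from the secret and a usage counter

Run twice within the same 30-second window, the first command prints the same token both times; the second changes only when the counter does.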

The basics: user account setup and ssh authentication

You can start with the basics by reading one of these articles: one, two, three. Debian/Ubuntu users will find both the pam module and the user account setup binary in libpam-google-authenticator.

For many, you can stop there. You’re done. But if you want to kick it up a notch, read on:

Enhancement 1: Requiring 2FA even when ssh public key auth is used

Let’s consider a scenario in which your system is completely compromised. Unless your ssh keys are also stored in something like a Yubikey Neo, they could wind up being compromised as well – if someone can read your files and sniff your keyboard, your ssh private keys are at risk.

So we can configure ssh and PAM so that an OTP token is required even for this scenario.

First off, in /etc/ssh/sshd_config, we want to change or add these lines:

UsePAM yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive

This forces all authentication to pass two verification methods in ssh: publickey and keyboard-interactive. All users will have to supply a public key and then also pass keyboard-interactive auth. Normally keyboard-interactive auth prompts for a password, but we can change that in /etc/pam.d/sshd. I added this line at the very top of /etc/pam.d/sshd:

auth [success=done new_authtok_reqd=done ignore=ignore default=bad] pam_google_authenticator.so

This basically makes Google Authenticator both necessary and sufficient for keyboard-interactive in ssh. That is, whenever the system wants to use keyboard-interactive, rather than prompt for a password, it instead prompts for a token. Note that any user that has not set up google-authenticator already will be completely unable to ssh into their account.

Enhancement 1, variant 2: Allowing automated processes to root

On many of my systems, I have ~root/.ssh/authorized_keys set up to permit certain systems to run locked-down commands for things like backups. These are automated commands, and the above configuration will break them because I’m not going to be typing in codes at 3AM.

If you are very restrictive about what you put in root’s authorized_keys, you can exempt the root user from the 2FA requirement in ssh by adding this to sshd_config:

Match User root
    AuthenticationMethods publickey

This says that the only way to access the root account via ssh is to use the authorized_keys file, and no 2FA will be required in this scenario.

Enhancement 1, variant 3: Allowing non-pubkey auth

On some multiuser systems, some users may still want to use password auth rather than publickey auth. There are a few ways we can support that:

  1. Users without public keys will have to supply an OTP and a password, while users with public keys will have to supply a public key, an OTP, and a password
  2. Users without public keys will have to supply an OTP or a password, while users with public keys will have to supply a public key, and an OTP or a password
  3. Users without public keys will have to supply an OTP and a password, while users with public keys only need to supply the public key

The third option is covered in any number of third-party tutorials. To enable options 1 or 2, you’ll need to put this in sshd_config:

AuthenticationMethods publickey,keyboard-interactive keyboard-interactive

This means that to authenticate, you need to pass either publickey and then keyboard-interactive auth, or just keyboard-interactive auth.

Then in /etc/pam.d/sshd, you put this:

auth required pam_google_authenticator.so

As a sub-variant of option 1, you can add nullok here to permit auth from people that do not have a Google Authenticator configuration.

Or for option 2, change “required” to “sufficient”. You should not add nullok in combination with sufficient, because that could let people without a Google Authenticator config authenticate completely without a password at all.
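To make the two variants concrete, the corresponding line in /etc/pam.d/sshd would look roughly like one of these (pick one, not both):

# Option 1: token and password both required; nullok exempts users with no token configured
auth required pam_google_authenticator.so nullok
# Option 2: token or password is enough; never combine sufficient with nullok
auth sufficient pam_google_authenticator.so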

Enhancement 2: Configuring su

A lot of other tutorials stop with ssh (and maybe gdm) but forget about the other ways we authenticate or change users on a system. su and sudo are the two most important ones. If your root password is compromised, you don’t want anybody to be able to su to that account without having to supply a token. So you can set up google-authenticator for root.

Then, edit /etc/pam.d/su and insert this line after the pam_rootok.so line:

auth required pam_google_authenticator.so nullok

The reason you put this after pam_rootok.so is that you want to be able to su from root to any account without having to input a token. We add nullok to the end, because you may want to su to accounts that don’t have tokens. Just make sure to configure tokens for the root account first.

Enhancement 3: Configuring sudo

This one is similar to su, but a little different. This lets you, say, secure the root password for sudo.

Normally, you might sudo from your user account to root (if so configured). You might have sudo configured to require you to enter in your own password (rather than root’s), or to just permit you to do whatever you want as root without a password.

Our first step, as always, is to configure PAM. What we do here depends on your desired behavior: do you want to require someone to supply both a password and a token, a password or a token, or just a token? If you want to require only a token, put this at the top of /etc/pam.d/sudo:

auth [success=done new_authtok_reqd=done ignore=ignore default=bad] pam_google_authenticator.so

If you want to require a token and a password, change the bracketed string to “required”, and if you want a token or a password, change it to “sufficient”. As before, if you want to permit people without a configured token to proceed, add “nullok”, but do not use that with “sufficient” or the bracketed example here.

Now here comes the fun part. By default, if a user is required to supply a password to sudo, they are required to supply their own password. That does not help us here, because a user logged in to the system can read the ~/.google_authenticator file and then easily supply tokens for themselves. What you want to do is require them to supply root’s password. Here’s how I set that up in sudoers:

Defaults:jgoerzen rootpw
jgoerzen ALL=(ALL) ALL

So now, with the combination of this and the PAM configuration above, I can sudo to the root user without knowing its password — but only if I can supply root’s token. Pretty slick, eh?

Further reading

In addition to the basic tutorials referenced above, consider:

Edit: additional comments

Here are a few other things to try:

First, the libpam-google-authenticator module supports putting the Google Authenticator files in different locations and having them owned by a certain user. You could use this to, for instance, lock down all secret keys to be readable only by the root user. This would prevent users from adding, changing, or removing their own auth tokens, but would also let you do things such as reusing your personal token for the root account without a problem.

Also, the pam-oath module does much the same thing as the libpam-google-authenticator module, but without some of the help for setup. It uses a single monolithic root-owned password file for all accounts.

There is an oathtool that can be used to generate authentication codes from the command line.

Categories: Elsewhere

Cocomore: Starting small: Why websites should get designed for small screens first

Planet Drupal - Wed, 10/08/2016 - 00:00

Search engines like Google prefer mobile-optimized websites in their rankings. No surprise: after all, websites are mainly accessed from devices like smartphones nowadays. If you can’t offer a responsive design, you will lose traffic and therefore customers. For this reason, mobile-optimized websites are crucial, and “mobile first” is the best way to get there.

Categories: Elsewhere

Reproducible builds folks: Finishing the final variations for reprotest

Planet Debian - Tue, 09/08/2016 - 22:17

Author: ceridwen

I've been working on getting the last of the variations working. With no responses on the mailing list from anyone outside Debian and limited time remaining, Lunar and I have decided to deemphasize portability.

  1. Build path is done.

  2. Host and domain name use domainname and hostname. This site is old, but it indicates that domainname was available on most OSes and hostname was available everywhere as of 2004. Prebuilder uses a Linux-specific utility (unshare --uts) to run this variation in a chroot, but I'm not doing this for reprotest: if you want this variation, use qemu. (A sketch of the namespace approach appears after this list.)

  3. User/group will not be portable, because they'll rely on useradd/groupadd and su. useradd and groupadd work on many but not all OSes, notably not including FreeBSD or MacOS X. su was universal in 2004.

  4. Time is not done but will probably be portable to some systems, because it will rely on date -s. Unfortunately, I haven't been able to find any information on how common date -s is across Unix-like OSes, as the -s option is not part of the POSIX standard.

  5. At the moment, I have no idea how to implement changes for /bin/sh and the login shell that will even work across different distributions of Linux, much less different OSes. There are a couple of key problems, starting with the need to find two different shells to use, because there's no way to find out what shells are installed. This blog post explains why /etc/shells doesn't work well for finding what shells are available: not everything in /etc/shells is necessarily a shell (Ubuntu has /usr/bin/screen) and not all available shells are in /etc/shells. Also, there's no good way to find out what shell is the system default, because /bin/sh can be an arbitrary binary rather than a symlink, and there's no good way to identify what it is in that case. I can hard-code shell choices, but which shells? bash is obvious for Linux, but what's the best second choice?
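For reference, here is a minimal sketch of the Linux-only namespace approach mentioned in item 2; the build command and the names are placeholders, and the names are changed only inside the new UTS namespace, so the real system is untouched:

# Linux-only: vary host and domain name inside a private UTS namespace
sudo unshare --uts sh -c 'hostname i-capture-the-hostname; domainname i-capture-the-domainname; exec ./build.sh'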

On other topics:

  1. reprotest fails to build properly on jessie: Lunar said, and I agree, that fixing this is not a priority. I need someone with more knowledge of Debian Python packaging to help me. If I'm going to support old versions, I also need some kind of CI server, because I don't have the time or ability to maintain old versions of Debian and Python myself.

  2. libc-bin: ldconfig segfaults when run using "setarch uname26": I don't have a good solution for this, but I don't want to hard-code and maintain an architecture-specific disable. Would changing the argument to setarch work around the segfault? Is there a way to test for the presence of the bug that won't cause a segfault or similar crash?

  3. Please put adt-virt-* binaries back onto PATH: reprotest is not affected by this change because I forked the autopkgtest code rather than depending on it, so that reprotest can be installed through PyPI. At the moment, reprotest doesn't make its versions of the programs in virt/ available on $PATH. This is primarily because of the problems with distributing command-line scripts with setuptools. The approach I'm currently using, including the virt/ programs as non-code data with include reprotest/virt/* in MANIFEST.in, doesn't install them in a way that makes them available to other programs. Using one of the other approaches potentially could, but it's difficult to make the imports work with those approaches. (I haven't found a way to do it.) I think the best solution to this problem is to split autopkgtest into the Debian-specific components and the general-purpose virtualization components, but I don't have the time to do this myself or to negotiate with Martin Pitt, if he'd even be receptive. I'm also unsure at this point whether it wouldn't be better for reprotest to switch from autopkgtest to something like Ansible to run the virtualization, because Ansible has solved some of the portability problems already and is not tied to Debian.

My goal is to finish the variations (finally), though as this has always proved more difficult than I expected in the past, I don't make any guarantees. Beyond that, I want to start working on finishing docstrings in the reprotest-specific (i.e., not inherited from autopkgtest) code, improving the documentation in general, and improving the tests.

Categories: Elsewhere

FFW Agency: Mastering the Basics: The Content Plan

Planet Drupal - Tue, 09/08/2016 - 21:26
Ray Saltini, Tue, 08/09/2016 - 19:26

It is amazing how often the content equation is underestimated or misunderstood, whether building new properties or renewing existing sites.

The essence of any web project is the content or message to be conveyed. Understandably organizations will engage terrific creative agencies that will put tremendous effort into strategy and design. Unfortunately this often has the unintended effect of de-emphasizing existing content that may, or may not, need to be migrated to your new project. A thorough content audit early on in your planning process will help streamline your project and your budget.

Rarely is content brought over to a project wholesale without some important changes. This can be obvious, like making PDF content more search-engine friendly, or less obvious, like adding or changing metadata and reforming its underlying data structure. An audit will help determine if content migration should be included in your web development scope or handled as a separate component.

But unlike planning for strategy and design, resources are often scarce when it comes to content planning and conducting a content audit. It is likely the content planning process will be reduced to task level milestones and allocated to support staff or subject matter experts with little accommodation for the importance, difficulty or breadth of the requirements. This can lead to unforeseen and unwanted surprises during the development and user acceptance cycles of your project. 

Here are a few quick resources to get you started understanding the content planning process in Drupal.

A topic central strategy from a blog post that has only gotten more important: Understanding Content Planning: Why Taxonomy is Your New Best Friend

A great book from an accomplished Drupalist that will answer a lot of basic questions around content planning: The Drupal User's Guide by Emma Jane Hogbin Westby.

And a new series of articles about distributed content management by our very own Hank VanZile, Director of Pre-Sales Consulting, starting with 'What is Distributed Content Management'.

Categories: Elsewhere

FFW Agency: Breaking New Ground - Tapping Free Open Source Drupal to Achieve Big Gains in the Energy Sector

Planet Drupal - Tue, 09/08/2016 - 21:10
Ray Saltini, Tue, 08/09/2016 - 19:10

Upheavals in the energy sector in recent years are driving a new Texas-sized need for efficiencies in all facets of business operations and are leading companies to begin mining new areas for streamlining and savings.

Houston, we have a solution.

The good news is open source technologies are opening up areas for exploration previously overlooked by many in the energy sector. In this multi-part series we’ll begin to look at how energy companies use the web and how open source internet technologies can drastically reduce acquisition costs, enable rapid prototyping, and create a potential for windfall profits.

Here’s a preview of our upcoming series on Drupal for the Energy Sector. Use the form below to sign up for our newsletter to get notified when we post new articles or sign up for our free training session on September 16 in Houston.

Acquisition Cost

Ever test drive a Lamborghini? Me neither. How about a proprietary web experience platform from one of the vendors in Gartner’s Magic Quadrant? Not happening. You wouldn’t buy a car you couldn’t test drive; why consider purchasing a software platform you couldn’t pilot? Unless it’s open source, you’re going to have to pay dearly for the privilege of any kind of test drive. Among the platforms listed in Gartner’s Magic Quadrant, Drupal is the only system you can freely pilot around your business needs.

Rapid Prototyping

Just because it’s free doesn’t mean you know what to do with it. Many open source systems have a huge transparent install base. You can install a simple plugin in your web browser that will tell you what content management system, technology or platform a site is using. Drupal takes it even further with free custom-developed distributions for different functional needs like ERP, localization and translation, publishing, ecommerce, event organizing, and networking. Agencies can build in various front end technologies and connect them to a Drupal backend with minimal customization.

Windfall Profits

How large is your IT department? Chances are the answer is something like, ‘Not big enough to do the job we have ahead of us.’ The nature of an open source project means you can tap more personnel than you could ever achieve a return on with a proprietary system. Drupal alone had over 3,000 developers contribute to its latest version, Drupal 8. All of them are writing code that is highly secure and available for you to try and use for free.

Stay tuned for more on each of these and other topics in the coming days.
 

Categories: Elsewhere

jmolivas.com: The DrupalConsole RC-1 release is close with a lot of changes.

Planet Drupal - Tue, 09/08/2016 - 20:27

To make the DrupalConsole project more modular and easier to maintain, we are decoupling it into separate projects.

jmolivas, Tue, 08/09/2016 - 18:27
Categories: Elsewhere

Gábor Hojtsy: There will be a Drupal 9, and here is why

Planet Drupal - Tue, 09/08/2016 - 20:20

Earlier this week Steve Burge posted the intriguingly titled There Will Never be a Drupal 9. While that title sure makes you click on the article, it is not quite true.

Drupal 8.0.0 made several big changes but among the biggest is the adoption of semantic versioning with scheduled releases.

The community decided that scheduled releases would happen roughly twice a year. And indeed, Drupal 8.1.0 was released on time, Drupal 8.2.0 is in beta, and Drupal 8.3.x is already open for development and has had some changes committed that Drupal 8.2.x will never have. So this works pretty well so far.

As for semantic versioning, that is not a Drupalism either; see http://semver.org/. It basically means that we now have three levels of version numbers with clearly defined roles. We increment the last number when we make backwards compatible bug fixes. We increment the middle number when we add new functionality in a backwards compatible way; we did that with 8.1.0 and are about to do it with 8.2.0 later this year. And we would increment the first number (go from 8.x.x to 9.0.0) when we make backwards incompatible changes.

So long as you are on some version of Drupal 8, things need to be backwards compatible, so we can just add new things. This still allows us to modernize APIs by extending an old one in a backwards compatible way or introducing a new modern API alongside an old one and deprecate (but not remove!) the old one. This means that after a while there may be multiple parallel APIs to send emails, create routes, migrate content, expose web services and so on, and it will be an increasingly bigger mess.

There must be a balance between increasing that mess in the interest of backwards compatibility and cleaning it up to make developers' lives easier, the software faster, and tests easier to write and faster to run. Given that the new APIs deprecate the old ones, developers are informed about upcoming changes ahead of time, and should have plenty of time to adapt their modules, themes and distributions. There may even be changes that are not possible in Drupal 8 with parallel APIs, but we don't yet have an example of that.

After that, Drupal 9 could just be about removing the bad old ways and keeping the good new ways of doing things, and the first Drupal 9 release could be the same as the last Drupal 8 release with the cruft removed. What would make you move to Drupal 9 then? Well, new Drupal 8 improvements would stop happening, and Drupal 9.1 would have new features again.

While this is not a policy set in stone, Dries Buytaert had this to say about the topic right after his DrupalCon Barcelona keynote in the Q&A almost a year ago:

Read more about and discuss when Drupal 9 may be open at https://www.drupal.org/node/2608062

Categories: Elsewhere

php[architect]: Testing Your Drupal Site with Behat

Planet Drupal - Tue, 09/08/2016 - 20:05

If automated testing is not already part of your development workflow, then it’s time to get started. Testing helps reduce uncertainty by ensuring that new features you add to your application do not break older features. Having confidence that you’re not breaking existing functionality reduces time spent hunting bugs or fielding reports from clients, by catching problems earlier.

Unfortunately, testing still does not get the time and attention it needs when you’re under pressure to make a deadline or release a feature your clients have been asking for. But—like using a version control system and having proper development, staging, and production environments—it should be a routine part of how you do your work. We are professionals, after all. After reading all the theory, I only recently took the plunge myself. In this post, I’ll show you how to use Behat to test that your Drupal site is working properly.

Before we dive in, the Behat documentation describes the project as:

[…] an open source Behavior Driven Development framework for PHP 5.3+. What’s behavior driven development, you ask? It’s a way to develop software through a constant communication with stakeholders in form of examples; examples of how this software should help them, and you, to achieve your goals.

Basically, it helps developers, clients, and others communicate and document how an application should behave. We’ll see shortly how Behat tests are very easy to read and how you can extend them for your own needs.

Mink is an extension that allows testing a web site by simulating interaction with it through a browser: filling out form fields, clicking on links, and so forth. Mink lets you test via Goutte, which makes requests and parses the contents but can’t execute JavaScript. It can also use Selenium, which controls a real browser and can thus test JS and Ajax interactions, but Selenium requires more configuration.

Requirements

To get started, you’ll need to have Composer on your machine. If you don’t already, head over to the Composer Website. Once installed, you can add Behat, Mink, and Mink drivers to your project by running the following in your project root:

composer require behat/behat
composer require behat/mink
composer require behat/mink-selenium2-driver
composer require behat/mink-extension

Once everything runs, you’ll have a composer.json file with:

"require": { "behat/behat": "^3.1", "behat/mink": "^1.7", "behat/mink-selenium2-driver": "^1.3", "behat/mink-extension": "^2.2" },

This will download Behat and its dependencies into your vendor/ folder. To check that it works, do:

vendor/bin/behat -V

There are other ways to install Behat, outlined in the quick introduction.

The Drupal community has a contrib project, Behat Drupal Extension, that is an integration of Behat, Mink, and Drupal. You can install it with the require command below. I had to specify the ~3.0 version, otherwise composer couldn’t satisfy dependencies.

composer require drupal/drupal-extension:~3.0

And you’ll have the following in your composer.json:

"drupal/drupal-extension": "~3.0", Configuring Behat

When you run Behat, it’ll look for a file named behat.yml. Like Drupal 8, Behat uses YAML for configuration. The file tells Behat what contexts to use; contexts provide the tests that you can run to validate behavior. The file also configures the web drivers for Mink. You can additionally configure a region_map, which the Drupal extension uses to map identifiers (left of the :) to CSS selectors that identify theme regions. These come in very handy when testing Drupal theme output.

The one I use looks like:

default:
  suites:
    default:
      contexts:
        - Drupal\DrupalExtension\Context\DrupalContext
        - Drupal\DrupalExtension\Context\MarkupContext
        - Drupal\DrupalExtension\Context\MessageContext
        - FeatureContext
  extensions:
    Behat\MinkExtension:
      goutte: ~
      javascript_session: selenium2
      selenium2:
        wd_host: http://local.dev:4444/wd/hub
        capabilities: {"browser": "firefox", "version": "44"}
      base_url: http://local.dev
    Drupal\DrupalExtension:
      blackbox: ~
      region_map:
        breadcrumb: '#breadcrumb'
        branding: '#region-branding'
        branding_second: '#region-branding-second'
        content: '#region-content'
        content_zone: '#zone-content'
        footer_first: '#region-footer-first'
        footer_second: '#region-footer-second'
        footer_fourth: '#region-footer-fourth'
        menu: '#region-menu'
        page_bottom: '#region-page-bottom'
        page_top: '#region-page-top'
        sidebar_first: '#region-sidebar-first'
        sidebar_second: '#region-sidebar-second'

Writing a Simple Feature

Now comes the fun part. Let’s look at writing a feature and how to test that what we expect is on the page. The first time we run it, we need to initialize Behat to generate a FeatureContext class. Do so with:

vendor/bin/behat --init

That should also create a features/ directory, where we will save the features that we write. To Behat, a feature is a test suite. Each test in a feature evaluates specific functionality on your site. A feature is a text file that ends in .feature. You can have more than one: for example, you might have a blog.feature, members.feature, and resources.feature if your site has those areas available.

Of course, don’t confuse what Behat calls a feature—a set of tests—with the Features module that bundles and exports related functionality into a Drupal module.

For my current project, I created a global.feature file that checks if the blocks I expect to have in my header and footer are present. The contents of that file are:

Feature: Global Elements
  Scenario: Homepage Contact Us Link
    Given I am on the homepage
    Then I should see the link "Contact Us" in the "branding_second" region
    Then I should see the "Search" button in the "branding_second" region
    Then I should see the "div#block-system-main-menu" element in the "menu" region

As you can see, the tests are very readable even though they aren’t purely natural language. Indents help organize Scenarios (a group of tests) and the conditions needed for each scenario to pass.

You can set up some conditions for the test, starting with “Given”. In this case, given that we’re on the homepage. The Drupal Extension adds ways to specify that you are a specific user, or have a specific role, and more.

Next, we list what we expect to see on the webpage. You can also tell Behat to interact with the page by specifying a link to click, a form field to fill out, or a button to press. Again here, the Drupal extension (by extending the MinkExtension) provides ways to test if a link or button is in one of our configured regions. The third test above uses a CSS selector, like in jQuery, to check that the main menu block is in the menu region.
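With the feature file saved, you can run it with the behat binary we installed earlier; the second form below runs just one file (the path assumes the global.feature name used above):

vendor/bin/behat
vendor/bin/behat features/global.feature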

Testing user authentication

If you’re testing a site that is not local, you can use the drush API driver to test user authentication, node creation, and more. First, set up a drush alias for your site (in this example, I’m using local.dev). Then add the following in your behat.yml:

api_driver: 'drush'
drush:
  alias: "local.dev"

You can then create a scenario to test that user logins work, without having to specify a test username or password, by tagging it with @api:

@api
Scenario: Admin login
  Given I am on the homepage
  Given I am logged in as a user with the "admin" role
  Then I should see the heading "Welcome" in the "content" region

If you’ve customized the username text for login, your test will fail. Don’t worry! Just add the following to your behat.yml file so that the test knows what text to look for. In this case, the username field label is just E-mail.

text:
  username_field: "E-mail"

Custom Testing by Extending Contexts

When you initialized Behat, it created a features/bootstrap/FeatureContext.php file. This can be a handy class for writing custom tests for unique features on your site. You can add custom tests by using the Drupal Extension’s own sub-contexts. I changed my FeatureContext to extend the MinkContext like this:

class FeatureContext extends MinkContext implements SnippetAcceptingContext {

Note that if you do that, you’ll need to remove MinkContext from the explicit list of default contexts in behat.yml.

No matter how you organize them, you can then write custom tests as methods. For example, the following will test that a link appears in the breadcrumb trail of a page. You can use CSS selectors to find items on the page, such as the ‘div#breadcrumb’ element in a theme. You can also re-use other helpers, like findLink.

/**
 * @Then I should see the breadcrumb link :arg1
 */
public function iShouldSeeTheBreadcrumbLink($arg1) {
  // Get the breadcrumb region.
  /** @var \Behat\Mink\Element\NodeElement $breadcrumb */
  $breadcrumb = $this->getSession()->getPage()->find('css', 'div#breadcrumb');
  // findLink() matches link text; this does not work for URLs.
  $link = $breadcrumb->findLink($arg1);
  if ($link) {
    return;
  }
  // Filter by URL instead.
  $link = $breadcrumb->findAll('css', "a[href=\"{$arg1}\"]");
  if ($link) {
    return;
  }
  throw new \Exception(
    sprintf("Expected link %s not found in breadcrumb on page %s", $arg1, $this->getSession()->getCurrentUrl())
  );
}

If your context implements SnippetAcceptingContext, Behat will generate the docblock and method signature when it encounters an unknown test. If your feature has the following:

Then I should see "foo-logo.png" as the header logo.

When you run your tests, Behat will output the snippet below, which you can copy and paste into your context. Anything in quotes becomes a parameter. The docblock contains the annotation Behat uses to find your test when it’s used in a scenario.

/**
 * @Then I should see :arg1 as the header logo.
 */
public function iShouldSeeAsTheHeaderLogo($arg1) {
  throw new PendingException();
}

Selenium

Follow the Behat docs to install Selenium: http://mink.behat.org/en/latest/drivers/selenium2.html. When you’re testing, you’ll need to have it running via:

java -jar /path/to/selenium-server-standalone-2.53.0.jar

To tell Behat how to use Selenium, your behat.yml file should have:

selenium2:
  wd_host: http://local.dev:4444/wd/hub
  capabilities: {"browser": "firefox"}

You’ll also need to have Firefox installed. Of course, at the time of this writing, Firefox is asking people to transition from WebDriver to Marionette for automating browser usage. I have Firefox 47 and it’s still working with WebDriver as far as I can tell. I have not found clear, concise instructions for using Marionette with Selenium. Another option is to use PhantomJS instead of Selenium for any features that need a JavaScript-capable browser.

Once everything is working—you’ll know it locally because a Firefox instance will pop up—you can create a scenario like the following one. Use the @javascript tag to tell Behat to use Selenium to test it.

@javascript
Scenario: Modal Popup on click
  Given I am at "/some/page"
  When I click "View More Details"
  Then I wait for AJAX to finish
  Then I should see an "#modal-content" element
  Then I should see text matching "This is a Modal"

Conclusion

If you don’t have tests for your site, I urge you to push for adding them as part of your ongoing work. I’ve slowly added them to my main Drupal client project over the last few months and it’s really started to pay off. For one, I’ve captured many requirements and expectations about how pages on the site work that were previously only in my head or the project manager’s, if not lost in a closed ticket somewhere. Second, whenever I merge new work in, and before any deploy, I can run the tests. If they are all green, I can be confident that new code and bug fixes haven’t caused a regression. At the same time, I now have a way to test the site that makes it less risky to refactor or reorganize code. I didn’t spend a lot of time building tests, but as I work on a new feature or fix a bug, writing a test is now just part of confirming that everything works as expected. For complicated features, it’s also become a time saver to have a test that automates a complicated interaction—like a 3-page web form—since Behat can run that scenario much faster than I can manually.

The benefits from investing in automated testing outweigh any initial cost in time and effort to set them up. What are you waiting for?

Categories: Elsewhere

Arpit Jalan: GSOC 2016- Moving supporting functions to services and abstract parent classes- Week 11

Planet Drupal - Tue, 09/08/2016 - 18:11
TL;DR Last week I worked on modifying the tests for the “Fill Alt Text”, “Emotion Detection” and “Image Properties” features of the Google Vision API module. The only tasks left are moving the supporting functions to a separate service and creating an abstract parent class for the tests, moving the shared functions there.

The issues Alt Text field gets properly filled using various detection features, Emotion Detection (Face Detection) feature and Implementation of Image Properties feature of the Google Vision API module are still under review by my mentors. Meanwhile, my mentors asked me to move the supporting functions of the “Fill Alt Text” issue to a separate service and use them from there. In addition, they suggested that I create an abstract parent class for the Google Vision simple tests and move the supporting functions to that parent class. Thus, this week I worked on implementing these suggestions.

There are a few supporting functions, namely google_vision_set_alt_text() and google_vision_edit_alt_text(), which fill the Alt Text in accordance with the feature requested from the Vision API and manipulate the value if needed. I moved these functions to a separate service, namely FillAltText, and altered the code to use the functions from there instead of accessing them directly.

In addition, a number of supporting functions used in the simple web tests of the module, to create users, contents and fields, were placed in the test files themselves, which is a kind of redundancy. Hence, I moved all these supporting functions to an abstract parent class named GoogleVisionTestBase, and altered the test classes to extend this parent class instead of WebTestBase. This removed the redundant code and gave a proper structure and orientation to the web tests.
These minor changes will be committed directly once the major issues are reviewed by my mentors and committed to the module.
Categories: Elsewhere

Galaxy: GSoC’ 16: Port Search Configuration module; coding week #11

Planet Drupal - Tue, 09/08/2016 - 18:05

I have been involved with the Google Summer of Code '16 project to port the Search Configuration module to Drupal 8. It is entering the final week of coding. I have really enjoyed the project process; GSoC gives university students a unique opportunity to code for real-world projects that are used by various open source organisations.
This past week I dealt with testing certain components of my ported module. I worked on implementing simpletest to test the various functionalities. I would like to take this opportunity to share the features of simpletest and how to implement it to test the proper working of the various sections of a Drupal module.

The test files are generally kept in src/Tests/ at the root of the module directory. The name of the test file should be suffixed with Test.php. Add a class that extends WebTestBase. Also, you need to add a namespace for easy location of the class.

Add the name of the module to get loaded (note the property is $modules, plural), i.e.,
public static $modules = ['search_config'];

Now let's start writing the test cases. The functions to be executed as tests should be prefixed with 'test'. For instance, to navigate to a particular link and check whether it returns the correct page:
public function testLink() {
  $this->drupalGet('mypages/search');
  $this->assertResponse(200);
}

You can run this test by going to Configuration -> Testing and selecting the test.
Also note that in Drupal 8, test classes placed under src/Tests/ are discovered automatically through the PSR-4 namespace, so they do not need to be registered in the .info.yml file.

There are various types of test cases, with a range of assertions available.
So, here is the complete structure of a sample test file to check the accessibility of a page in my search configuration module:
<?php

namespace Drupal\search_config\Tests;

use Drupal\simpletest\WebTestBase;

/**
 * Tests the search configuration form.
 *
 * @group search_config
 */
class SearchConfigFormTest extends WebTestBase {

  public static $modules = ['search_config'];

  /**
   * Tests the accessibility of the search configuration module's admin page.
   */
  public function testSearchConfigURL() {
    $this->drupalGet('admin/config/search/pages');
    $this->assertResponse(200);
  }

}

We need to ensure that we declare the group of the test via the @group tag—in this case, search_config. The tests won't appear in the Testing page without this tag. Also, ensure that each test function has a good doc comment, for better understanding by people reading the code.
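Besides the UI, Drupal core also ships a command-line test runner; a minimal sketch, assuming your site is served at http://localhost (the final argument is the @group name):

php core/scripts/run-tests.sh --url http://localhost "search_config"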
You could also try out the various other assertions available to explore the functionalities implemented. The development version of the module is currently available on Drupal.org.

Stay tuned for further updates on this porting process.

Tags: drupal-planet
Categories: Elsewhere

Acquia Developer Center Blog: A Gentle Introduction to Data Science

Planet Drupal - Tue, 09/08/2016 - 16:23

The words "Data Science" are not themselves sources of dread in most people. At just four and seven letters, respectively, they're almost too cute to be really off-putting like some of the other terms you come across when you begin digging into the field; terms like "k-nearest neighbors" or "tessellation."

And if you can hear the phrase "Euclidean minimum spanning tree" without feeling as though you've encountered something both bizarrely fascinating and deeply disturbing, you are a stronger intellectual force than I.

Tags: acquia drupal planet
Categories: Elsewhere

Cheeky Monkey Media: Bootstrap Carousel + Drupal Paragraphs = Magic

Planet Drupal - Tue, 09/08/2016 - 16:17
calvin, Tue, 08/09/2016 - 14:17

NOTE: This tutorial assumes you have some knowledge of Drupal site building, field and field display management. It will include some theming code examples.

If you've never heard of the Paragraphs module, it allows you to create "bundles" of fields and then call in those bundles via a paragraphs field on another entity. You can then add any number of these bundles into the entity, in any combination, to build out a page or some other super snazzy component.

Build bundles for a masthead, a pull quote, a banner, a simple text area... etc., and an editor now has a collection of tools to build dynamic pages, beyond the oldschool Title/Body/WYSIWYG situation. There are even modules that will help you get all parallaxy and animated with paragraphs.

The really beautiful thing is, you can nest paragraphs items. Yes, paragraphs items, within paragraphs items within... (The word Inception popped up a lot as we worked out various models).

Ok, so here's a practical example I built recently.

Categories: Elsewhere

Anexus: How to add ReactJS in Drupal 8 with Composer

Planet Drupal - Tue, 09/08/2016 - 15:37

Inside the Drupal-verse, the usage of Composer in our workflow is getting more and more popular.

To be honest, the adoption of Composer in the Drupal community hasn't been easy, but changes never are.

At the beginning of 2015, a new project saw the light: Drupal Composer, with the idea of changing the way we do our development process in Drupal.

The Drupal Composer project covers installing Drupal core, modules, themes, dependencies, patches, and much more.

One of the tools that Drupal Composer is going to replace is the famous *Drush Make*.

As I said, the changes are not easy, and almost everybody in our community is a little lost (including me) when it comes to becoming familiar with this new way of doing our daily tasks.

For that reason, I want to share with you how to install ReactJS as a library to be used in our Drupal 8 projects.

Maybe you are thinking: well, what's the fuss? We could install ReactJS easily using Bower.

But the problem with using Bower is that Bower doesn't know anything about Drupal; it has no idea where libraries in Drupal must be placed, and for that reason it isn't the proper tool for our needs.

1. Drupal Installation.

In this example I will assume that you installed Drupal using the *Drupal Composer* project, with an instruction similar to the following:

$ composer create-project drupal-composer/drupal-project:8.x-dev some-dir --stability dev --no-interaction

2. Adding repository

The ReactJS library is not available on Packagist, but even so it is possible to define a custom repository pointing at the ReactJS GitHub repository, by adding a new composer repository to your composer.json file:

"repositories": [ { "type": "package", "package": { "version": "15.3.0", "name": "drupal-libraries/reactjs", "type": "drupal-library", "source": { "url": "https://github.com/facebook/react.git", "type": "git", "reference": "15.3.0" }, "dist": { "url": "https://github.com/facebook/react/releases/download/v15.3.0/react-15.3.0.zip", "type": "zip" } } } ]

As you can see, we provide the ReactJS information in two formats: the first one is the GitHub repo and the second is a zip file of a specific release. Depending on the composer options, ReactJS will be cloned or just downloaded and extracted.
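Whether composer uses the source (git) or dist (zip) entry can also be forced explicitly; a minimal sketch using the package name we defined above:

composer require --prefer-source drupal-libraries/reactjs 15.3.0   # clone the GitHub repo
composer require --prefer-dist drupal-libraries/reactjs 15.3.0     # download and extract the zip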

3. Adding ReactJS to your project

With your new ReactJS repository correctly added to your composer.json file, the only remaining task is to download the code using the next instruction:

$ composer require drupal-libraries/reactjs 15.3.0

When this command finishes, we will get the required files inside /web/libraries/reactjs:

libraries
`-- reactjs
    |-- build
    |   |-- react-dom.js
    |   |-- react-dom.min.js
    |   |-- react-dom-server.js
    |   |-- react-dom-server.min.js
    |   |-- react.js
    |   |-- react.min.js
    |   |-- react-with-addons.js
    |   `-- react-with-addons.min.js
    |-- examples
    |   |-- basic
    |   |-- basic-click-counter
    |   |-- basic-commonjs
    |   |-- basic-jsx
    |   |-- basic-jsx-external
    |   |-- basic-jsx-harmony
    |   |-- basic-jsx-precompile
    |   |-- fiber
    |   |-- jquery-bootstrap
    |   |-- jquery-mobile
    |   |-- quadratic
    |   |-- README.md
    |   |-- shared
    |   |-- transitions
    |   `-- webcomponents
    `-- README.md

I recommend checking the slides Improving your Drupal 8 development workflow http://weknowinc.com/talks/2016/drupalgov-workflow for more references on how to use Composer in Drupal 8 projects.
 

If you are interested in more tricks and tips related to the new workflow in Drupal 8, stay tuned, because Jesus Olivas and I will propose a BoF at DrupalCon Dublin to talk about the subject.

enzo, Tue, 08/09/2016 - 07:37
Categories: Elsewhere

Reproducible builds folks: Reproducible builds: week 67 in Stretch cycle

Planet Debian - Tue, 09/08/2016 - 14:56

What happened in the Reproducible Builds effort between Sunday July 31 and Saturday August 6 2016:

Toolchain development and fixes
  • dpkg/1.18.10 by Guillem Jover.
    • Generate reproducible source tarballs by using the new GNU tar --clamp-mtime option (see the sketch after this list)
    • Enable fixdebugpath build flag feature by default, original patch by Mattia Rizzolo.
  • cython/0.24.1-1 by Yaroslav Halchenko.
  • Chris Lamb and Thomas Schmidt worked on some patches to make reproducible ISO images.
  • Johannes Schauer continued the discussion on #763822 regarding dak and buildinfo files.
  • Johannes Schauer continued the discussion on #774415 regarding srebuild and debrebuild.
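As a rough sketch of what the tar option does (the timestamp and paths here are placeholders): any file mtime newer than the given date is clamped to it, so repeated builds of the same tree produce byte-identical tarballs:

tar --sort=name --mtime='2016-07-31 00:00Z' --clamp-mtime -cf example_1.0.orig.tar src/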
Packages fixed and bugs filed

The following 24 packages have become reproducible - in our current test setup - due to changes in their build-dependencies: alglib aspcud boomaga fcl flute haskell-hopenpgp indigo italc kst ktexteditor libgroove libjson-rpc-cpp libqes luminance-hdr openscenegraph palabos petri-foo pgagent sisl srm-ifce vera++ visp x42-plugins zbackup

The following packages have become reproducible after being fixed:

The following newly-uploaded packages appear to be reproducible now, for reasons we were not able to figure out. (Relevant changelogs did not mention reproducible builds.)

  • libitext-java/2.1.7-1 by Emmanuel Bourg.
  • lice/1:4.2.5i-2 by Kurt Roeckx.
  • pgbackrest/1.04-1 by Adrian Vondendriesch.
  • pxlib/0.6.7-1 by Uwe Steinmann.
  • runit/2.1.2-5 by Dmitry Bogatov.
  • ssvnc/1.0.29-3 by Magnus Holmgren.
  • syncthing/0.14.3+dfsg1-3 by Alexandre Viau.
  • tachyon/0.99~b6+dsx-5 by Jerome Benoit.
  • tor/0.2.8.6-2 by Peter Palfrader.

Some uploads have addressed some reproducibility issues, but not all of them:

Patches submitted that have not made their way to the archive yet:

Package reviews and QA

These are reviews of reproducibility issues of Debian packages.

276 package reviews have been added, 172 have been updated and 44 have been removed this week.

7 FTBFS bugs have been reported by Chris Lamb.

Reproducibility tools
  • diffoscope/56~bpo8+1 uploaded to jessie-backports by Mattia Rizzolo
  • strip-nondeterminism/0.022-1~bpo8+1 uploaded to jessie-backports by Mattia Rizzolo
Test infrastructure

For testing the impact of allowing variations of the build path (which up until now we required to be identical for reproducible rebuilds), Reiner Herrmann contributed a patch which enabled build path variations on testing/i386. This is possible now since dpkg 1.18.10 enables the fixdebugpath build flag feature by default, which should result in reproducible builds (for C code) even with varying paths. So far we haven't had many results due to disturbances in our build network in the last days, but it seems this would mean roughly 5-15% additional unreproducible packages compared to what we see now. We'll keep you updated on the numbers (and problems with compilers and common frameworks) as we find them.

lynxis continued work to test LEDE and OpenWrt on two different hosts, to include date variation in the tests.

Mattia and Holger worked on the (mass) deployment scripts, so that the only jenkins.debian.net git clone (kept single for space reasons) resides in ~jenkins-adm/ and no longer in Holger's home directory, so that soon Mattia (and possibly others!) will be able to fully maintain this setup while Holger takes a siesta.

Miscellaneous

Chris, dkg, h01ger and Ximin attended a Core Infrastructure Initiative summit meeting in New York City, to discuss and promote this Reproducible Builds project. The CII was set up in the wake of the Heartbleed SSL vulnerability to support software projects that are critical to the functioning of the internet.

This week's edition was written by Ximin Luo and Holger Levsen and reviewed by a bunch of Reproducible Builds folks on IRC.

Categories: Elsewhere

InternetDevels: Drupal 8: overview of the Commerce 2.x module for online stores

Planet Drupal - Tue, 09/08/2016 - 14:28

E-commerce in Drupal 8 is a topic of interest for many developers. If you are among them, you’ve come to the right place — our Drupal developer has written a blog post on the Commerce 2.x module.

Read more
Categories: Elsewhere

ComputerMinds.co.uk: Drupal 8 Config Management - how should I add config to a D8 site?

Planet Drupal - Tue, 09/08/2016 - 14:00

For me this is the biggest unanswered question hanging over my development of Drupal 8 websites: How should I add config to a Drupal 8 site?

This article will provide plenty of options, but unfortunately no definitive answer.

Categories: Elsewhere

Thorsten Alteholz: My Debian Activities in July 2016

Planet Debian - Tue, 09/08/2016 - 11:16

FTP assistant

This month I marked 248 packages for accept and rejected 60. I also sent 13 emails to maintainers asking questions. Again, this was a rather quiet month without much trouble.

Debian LTS

This was my twenty-fifth month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

As the number of participants increases, my overall workload this month was only 14.70h. Strangely enough, most of the time I chose packages where, in the end, the vulnerable code of the corresponding CVE was not present in the Wheezy version. So I could mark several CVEs for bind, libgd2 and mupdf as not-affected, without doing an upload.

Nevertheless I also did two uploads to fix another two CVEs:

  • [DLA 563-1] libgd2 security update
  • [DLA 569-1] xmlrpc-epi security update

As some new CVEs for PHP5 arrived, I didn’t do an upload this month. But don’t purge your testing environments; a new version is coming soon :-).

This month I also had another term of frontdesk work.

Other stuff

For the Alljoyn framework I took care of RC-bug #829148.

I also uploaded a new version of rplay to fix #805959.

In the JavaScript world I could close #831006.

Categories: Elsewhere

Shirish Agarwal: Doha and the Supreme Court of DFSG Free

Planet Debian - Tue, 09/08/2016 - 11:16

Hi,

I am in two minds about what to write about Doha. My job has been vastly simplified by a friend when he shared https://www.youtube.com/watch?v=LdrAd-44LW0 with me. That video is more relevant and closer to the truth than whatever I can share. As can be seen, it is funny but also sad how Qataris are trying to figure out how things will be, and it seems to be heading towards a ‘real estate bubble’. They would have to let go of Sharia if they are thinking of wealthy westerners coming to stay put. I am just sad to know that many of my countrymen are stuck there, and although I hope the best for them, I dread it may turn out the way it has for many Indians, especially those from Kerala, in Saudi Arabia. I will probably touch on the Kerala situation in another blog post, as this one is exclusively about the legal aspects discussed at DebConf.

A bit of background here: part of my family are lawyers, which means I have some notion of law as practiced in our land. As probably everybody knows, India was ruled by the British for around 150-odd years. One of the things they gave while leaving was/is the IPC (Indian Penal Code), which is practiced under the common law concept. The concept means that precedent from any judgement goes quite some way in framing rulings and the law of the land as time goes on, besides the lobbying and politics which happen in any democracy.

Free software would not have been here without the GPL, the General Public License. The license is as much a legal document as it is something that developers can work with without becoming deranged, as it is one of the simpler licenses to work with.

My own understanding of the legal, ethical and moral issues around me was framed by two or three TV shows and books (fiction and non-fiction alike), apart from what little news I heard in the family. One was ‘M*A*S*H’ (with Alan Alda and his frailness, anarchism, humanism and civil rights); then ‘The Practice’ and ‘Boston Legal’, which lay bare the many grey areas that lawyers have to deal with (‘The Practice’ also shaped a lot of my understanding of civil rights and the First Amendment, but as it is a TV show, how much of it reflects actual legal practice and the moral dilemmas lawyers face can only be guessed at). In books it is authors like John Grisham and Michael Connelly, as well as Perry Mason and Agatha Christie. In non-fiction, look at the treasures under the bombayhighcourt e-books corner and the series of Hamlyn Lectures. I would have to warn that all of the above are major time-sinks, but rewarding in their own way. I also haven’t read all of them, as time and interests are constrained, but I do know they are good for understanding a bit of our history. I do crave a meetup kind of scenario where non-lawyers can read and discuss facets of law.

All that understanding was vastly amplified by Groklaw.net, which at the very least made it possible for non-lawyers to decipher and understand what is going on in the free software world. After PJ (Pamela Jones) closed it in 2013 due to total surveillance by the Free World (i.e. the United States of America, NSA), we have been thirsty. We do occasionally get mildly interesting articles on lwn.net or arstechnica.com, but nowhere near the sheer brilliance of Groklaw.

So, it was a sheer stroke of luck that I met Mr. Bradley M. Kuhn, who works with Karen Sandler at the Software Freedom Conservancy. While I wanted to be there for his presentation, it was just one of those days that don't go as planned. However, as we met socially and over e-mail, there were two basic questions I asked him, which also underline why we need to fight for software freedom in the court of law. Below is a re-wording of what he shared.

Q1. Why do people think that the GPL still needs to be tested in the court of law, when GPL violations have been more or less successfully fought in the court of law?

Bradley Kuhn – A GPL violation case is basically about a violation of one or more clauses of the GPL license, not about the GPL license as a whole, and my effort during my lifetime would be to establish precedents such that the GPL is held to be a valid license in the court of law.

Q2. Let’s say the GPL is held to be valid in the court of law. Would the FSF benefit monetarily? At least to my mind it might, as more people and companies could be convinced to use strong copyleft licenses such as the GPLv3 or AGPLv3.

Bradley Kuhn – It may or may not. It is possible that even after winning, people and especially companies may go for weak copyleft licenses if that suits them. The only benefit would probably be to those people who are already using the GPLv3, as the law could then be used to protect them as well. Although we would want and welcome companies who would use a strong copyleft license such as the GPL, the future is ‘in the future’ and hence uncertain. Both possibilities co-exist.

While Bradley didn’t say it, I would add that it would probably also mean moving from a more offensive mode (which GPL-violations work is based upon: a violation occurs, and somebody, either from the victim’s side or a bystander, notices the violation and brings it to the notice of the victim and the GPL-violations team) to perhaps having the license defended by the DMCA people themselves, once the GPL is held to be a valid license in the eyes of the law. Whether you should use the DMCA or not is, of course, a matter of choice, personal belief system, and your legal recourse.

I have to share that the FSF and the GPL-violations team are probably very discerning when they take up a fight, as most of the work done by them is pro bono (i.e. they don’t make a single penny/paisa from the work done therein). Hence, in view of scarce resources, it makes sense to go only for the biggest violators, in the hope that you can either make them agree to compensate and comply with the license terms of the software/hardware combination in question, or sue them and take a share of the compensation awarded by the court — which, along with donations from people like you and me, helps make sure that Conservancy and the GPL-violations team are still around to help the next time something similar happens.

Bradley Kuhn presenting at #Debconf 16

Now, as far as his presentation is concerned, whose video can be seen at http://meetings-archive.debian.net/pub/debian-meetings/2016/debconf16/The_Supreme_Court_of_DFSGFree.webm, I thought it was tame. While he talked about ‘gaming the system’ in some sense, he was sharing that the debian-legal system works (most of the time). The list actually works because many people far more brilliant than me take the time to understand the intricacies of various licenses, how they should be interpreted through the excellently written Debian Free Software Guidelines, and whether the license under discussion contravenes the DFSG or conforms to it. I do agree with his point, though, that the ftp-master(s) and their team may not be the right people to judge a license’s adherence to the DFSG, nor should a package be rejected from the archive without a reason being given.

I actually asked the same question on debian-legal, and as I had guessed, it seems there is enough review of the licenses per se, as the answer from Paul Wise shows. Charles Plessy also shared an idea he has documented, which probably didn’t get much traction as it involves more work for DDs without any benefit to show for it. All in all, I hope this sheds some light on why there is a need to be more aware of law in software freedom. Two organizations which work on software freedom from a legal standpoint are SFLC (Delhi), headed by the charming Mr. Eben Moglen, and ALF (Bangalore). I do hope more people, especially developers, take a bit more interest in some of the resources mentioned above.


Filed under: Miscellenous Tagged: #Alternative Law Forum, #bombayhighcourt e-library, #Common Law, #Debconf16, #Fiction, #Hewlyn lectures, #India, #Jurispudence, #legal fiction, #real estate bubble, #SFLC.in, #Software Freedom, #timesink, Doha, Law
Categories: Elsewhere
