Elsewhere

Michal Čihař: Porting python-gammu to Python 3

Planet Debian - Fri, 27/03/2015 - 18:00

Over time I started to get more and more requests to have python-gammu working with Python 3. Of course the request makes sense, but I somehow failed to find time for it.

Also, for quite some time python-gammu has been distributed together with the Gammu sources. This was another struggle to overcome when supporting Python 3: in many cases users will want to build the module for both Python 2 and 3 (at least most distributions will want to do so), and with the current CMake-based build system this did not seem easy to achieve.

So I've decided it's time to split the Python module out of the library. The reasons for keeping them together are no longer valid (libGammu has had quite a stable API for a while now), and having a standard module which can be installed by pip is a nice thing.

Once the code had been put into a separate git repository, I slowly progressed with porting to Python 3. Most of the problems were on the C side of the code, where Python really does not make it easy to support both Python 2 and 3. So the code ended up with many #ifdefs, but I see no other way. While making these changes, many points in the API were also fixed to accept unicode strings in Python 2.

Anyway, today we have the first successful build of python-gammu working on both Python 2 and 3. I'm afraid there is still some bug leading to occasional segfaults on Travis, which are not reproducible locally. But hopefully this will be fixed in the upcoming weeks and we can release a separate python-gammu module again.

Filed under: English, Gammu, python-gammu, Wammu

Categories: Elsewhere

LevelTen Interactive: Part 2: More Videos That Will Help You Get Started with Drupal

Planet Drupal - Fri, 27/03/2015 - 16:41

If you are new to Drupal, take a look at our previous blog post, New To Drupal? These Videos Will Help You Get Started. If you have just gotten started with Drupal, how about we provide you with these short but thorough tutorial videos on Working with Content.... Read more

Categories: Elsewhere

Annertech: COPE - Create Once, Publish Everywhere

Planet Drupal - Fri, 27/03/2015 - 16:41
COPE - Create Once, Publish Everywhere

When developing websites, we always aim to take a “COPE” approach to web content management. COPE – Create Once, Publish Everywhere – was popularised by NPR (National Public Radio) in the United States. It is a content management philosophy that seeks to allow content creators to add content in one place and then use it in various forms in other places.

Categories: Elsewhere

Chromatic: Integrating Recurly and Drupal

Planet Drupal - Fri, 27/03/2015 - 16:21

If you’re working on a site that needs subscriptions, take a look at Recurly. Recurly’s biggest strength is its simple handling of subscriptions, billing, invoices, and all that goes along with it. But how do you get that integrated into your Drupal site? Let’s walk through it.

There are a handful of pieces that work to connect your Recurly account and your Drupal site.
  1. The Recurly PHP library.
  2. The recurly.js library (optional, but recommended).
  3. The Recurly module for Drupal.

The first thing you need to do is bookmark the Recurly API documentation.
Note: The Drupal Recurly module is still using v2 of the API. A re-write of the module to support v3 is in the works, but we have few active maintainers right now (few meaning one, and you’re looking at her). If you find this module of use or potential interest, pop into the issue queue and lend a hand writing or reviewing patches!

Okay, now that I’ve gotten that pitch out of the way, let’s get started.

I’ll be using a new Recurly account and a fresh install of Drupal 7.35 on a local MAMP environment. I’ll also be using drush as I go along (Not using drush?! Stop reading this and get it set up, then come back. Your life will be easier and you’ll thank us.)

  1. The first step is to sign up at https://recurly.com/ and get your account set up with your subscription plan(s). Your account will start out in a sandbox mode, and once you have everything set up with Recurly (it’s a paid service), you can switch to production mode. For our production site, we have a separate account that’s entirely in sandbox mode just for dev and QA, which is nice for testing, knowing we can’t break anything.
  2. The Recurly module depends on the Libraries module, so make sure you've got that installed (the 7.x-2.x version): drush dl libraries && drush en libraries
  3. You’ll need the Recurly Client PHP library, which you’ll need to put into sites/all/libraries/recurly. This is also an open-source, community-supported library, using v2 of the Recurly API. If you’re using composer, you can set this as a dependency. You will probably have to make the libraries directory. From the root of your installation, run mkdir sites/all/libraries.
  4. You need the Recurly module, which comes with two sub-modules: Recurly Hosted Pages and Recurly.js. drush dl recurly && drush en recurly
  5. If you are using Recurly.js, you will need that library, v2 of which can be found here. This will need to be placed into sites/all/libraries/recurly-js.
    Your /libraries/ directory should look something like this now:
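The screenshot that originally appeared here is missing, but based on the paths given in steps 3 and 5 above, the layout would look roughly like this (a sketch, not verbatim output):

```
sites/all/libraries/
├── recurly/       (the Recurly client PHP library)
└── recurly-js/    (the Recurly.js v2 library)
```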
Which integration option is best for my site?

There are three different ways to use Recurly with Drupal.

You can just use the library and the module, which include some built-in pages and basic functionality. If you need a great deal of customization and your own functionality, this might be the option for you.

Recurly offers hosted pages, for which there is also a Drupal sub-module. This is the least amount of integration with Drupal; your site won’t be handling any of the account management. If you are low on dev hours or availability, this may be a good option.

Thirdly, and this is the option we are using for one of our clients and demonstrating in this tutorial, you can use the recurly.js library (there is a sub-module to integrate this). Recurly.js is a client-side credit-card authorization service which keeps credit card data from ever touching your server. Users can then make payments directly from your site, but with much less responsibility on your end. You can still do a great deal of customization around the forms – this is what we do, as well as customized versions of the built-in pages.

Please note: Whichever of these options you choose, your site will still need a level of PCI-DSS Compliance (Payment Card Industry Data Security Standard). You can read more about PCI Compliance here. This is not prohibitively complex or difficult, and just requires a self-assessment questionnaire.

Settings

You should now have everything in the right place. Let’s get set up.

  1. Go to yoursite.dev/admin/config (just click Configuration at the top) and you’ll see Recurly under Web Services.
  2. You’ll now see a form with a handful of settings. Here’s where to find the values in your Recurly account. Once you set up a subscription plan in Recurly, you’ll find yourself on this page. On the right hand side, go to API Credentials. You may have to scroll down or collapse some menus in order to see it.
  3. Your Private API Key is the first key found on this page (I’ve blocked mine out):
  4. Next, you’ll need to go to Manage Transparent Post Keys on the right. You will not need the public key, as it’s not used in Recurly.js v2.
  5. Click to Enable Transparent Post and Recurly.js v2 API.
  6. Now you’ll see your key. This is the value you’ll enter into the Transparent Post Private Key field.
  7. The last basic setup step is to enter your subdomain. The help text for this field is currently incorrect as of 3/26/2015 and will be corrected in the next release. It is correct in the README file, and on the project page. There is no longer a -test suffix for sandbox mode. Copy your subdomain either from the address bar or from the Site Settings. You don’t need the entire url, so in my case, the subdomain is alanna-demo.
  8. With these settings, you can accept the rest of the default values and be ready to go. The rest of the configuration is specific to how you’d like to set up your account, how your subscription is configured, what fields you want to record in Recurly, how much custom configuration you want to do, and what functionality you need. The next step, if you are using Recurly’s built-in pages, is to enable your subscription plans. In Drupal, head over to the Subscription Plans tab and enable the plans you want to use on your site. Here I’ve just created one test plan in Recurly. Check the boxes next to the plan(s) you want enabled, and click Update Plans.
Getting Ready for Customers

So you have Recurly integrated, but how are people going to use it on your Drupal site? Good question. For this tutorial, we’ll use Recurly.js. Make sure you enable the submodule if you haven’t already: drush en recurlyjs. Now you’ll see some new options on the Recurly admin setting page.

I’m going to keep the defaults for this example. Now when you go to a user account page, you’ll see a Subscription tab with the option to sign up for a plan.

Clicking Sign up will bring you to the signup page provided by Recurly.js.

After filling out the fields and clicking Purchase, you’ll see a handful of brand new tabs. I set this subscription plan to have a trial period, which is reflected here.

Keep in mind, this is the default Drupal theme with no styling applied at all. If you head over to your Recurly account, you’ll see this new subscription.

There are a lot of configuration options, but your site is now integrated with Recurly. You can sign up, change, view, and cancel accounts. If you choose to use coupons, you can do that as well, and we’ve done all of this without any custom code.

If you have any questions, please read the documentation, or head over to the Recurly project page on Drupal.org and see if it’s answered in the issue queue. If not, make sure to submit your issue so that we can address it!

Categories: Elsewhere

Deeson: Drush Registry Rebuild

Planet Drupal - Fri, 27/03/2015 - 15:30

Keep Calm and Clear Cache!

This is an often-used phrase in Drupal land. Clearing the cache fixes many issues that can occur in Drupal, usually when a change has been made but isn't being reflected on the site.

But sometimes clearing the cache isn't enough, and a registry rebuild is in order.

The Drupal 7 registry contains an inventory of all classes and interfaces for all enabled modules and Drupal's core files. The registry stores the path to the file in which a given class or interface is defined, and loads the file when necessary. On occasion a class may be moved or renamed, and then Drupal doesn't know where to find it, and what appear to be unrecoverable problems occur.

One such example is moving the location of a module. This can happen if you have taken over a site where all the contrib and custom modules are stored in the sites/all/modules folder and you want to separate them out into sites/all/modules/contrib and sites/all/modules/custom. After moving the modules into your neat subfolders, things stop working and clearing caches doesn't seem to help.

Enter registry rebuild. This isn't a module, it's a drush command. After downloading it from drupal.org, place the registry_rebuild folder into the directory sites/all/drush.

You should then clear the drush cache so drush knows about the new command:

drush cc drush

Then you are ready to rebuild the registry:

drush rr

Registry rebuild is a standard tool we now use on all projects, and it forms part of our deployment scripts when new code is deployed to an environment.

So the next time you feel yourself about to tear your hair out and you've run clear cache ten times, keep calm and give registry rebuild a try.

Categories: Elsewhere

Joachim's blog: A script for making patches

Planet Drupal - Fri, 27/03/2015 - 14:46

I have a standard format for patch names: 1234-99.project.brief-description.patch, where 1234 is the issue number and 99 is the (expected) comment number. However, creating one involves two copy-pastes: one for the issue number, taken from my browser, and one for the project name, taken from my command line prompt.

Some automation of this is clearly possible, especially as I usually name my git branches 1234-brief-description. More automation is less typing, and so in true XKCD condiment-passing style, I've now written that script, which you can find on github as dorgpatch. (The hardest part was thinking of a good name, and as you can see, in the end I gave up.)

Out of the components of the patch name, the issue number and description can be deduced from the current git branch, and the project from the current folder. For the comment number, a bit more work is needed: but drupal.org now has a public API, so a simple REST request gives us data about the issue node, including the comment count.
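As a sketch of how those pieces fit together (with a hypothetical helper name; the real dorgpatch script may differ), the filename assembly looks something like this:

```python
def patch_name(branch, project, comment_count):
    """Build a patch name like 1234-99.project.brief-description.patch.

    branch is assumed to follow the 1234-brief-description convention,
    project is the current folder name, and comment_count is the issue's
    current comment count from the drupal.org API; the next comment is
    expected to be comment_count + 1.
    """
    issue, description = branch.split('-', 1)
    return '%s-%d.%s.%s.patch' % (issue, comment_count + 1, project, description)

print(patch_name('1234-brief-description', 'myproject', 98))
# 1234-99.myproject.brief-description.patch
```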

So far, so good: we can generate the filename for a new patch. But really, the script should take care of doing the diff too. That's actually the trickiest part: figuring out which branch to diff against. It requires a bit of git branch wizardry to look at the branches that the current branch forks off from, and some regular expression matching to find one that looks like a Drupal development branch (i.e., 8.x-4.x, or 8.0.x). It's probably not perfect; I don't know if I accounted for a possibility such as 8.x-4.x branching off a 7.x-3.x which then has no further commits and so is also reachable from the feature branch.
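The branch matching described above can be sketched with a regular expression (an illustration, not necessarily the script's actual pattern): it should accept contrib-style development branches like 8.x-4.x and core-style ones like 8.0.x, while rejecting feature branches.

```python
import re

# Matches Drupal development branch names: "8.x-4.x" (contrib) or "8.0.x" (core).
DEV_BRANCH = re.compile(r'^\d+\.(?:x-\d+\.x|\d+\.x)$')

for name in ('8.x-4.x', '7.x-3.x', '8.0.x', '1234-brief-description'):
    print(name, bool(DEV_BRANCH.match(name)))
```

The first three names match; the feature branch does not.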

The other thing this script can do is create a tests-only patch. These are useful, and generally advisable on drupal.org issues, to demonstrate that the test not only checks for the correct behaviour, but also fails for the problem that's being fixed. The script assumes that you have two branches: the one you're on, 1234-brief-description, and also one called 1234-tests, which contains only commits that change tests.

The git workflow to get to that point would be:

  1. Create the branch 1234-brief-description
  2. Make commits to fix the bug
  3. Create a branch 1234-tests
  4. Make commits to tests (I assume most people are like me, and write the tests after the fix)
  5. Move the string of commits that are only tests so they fork off at the same point as the feature branch: git rebase --onto 8.x-4.x 1234-brief-description 1234-tests
  6. Go back to 1234-brief-description and do: git merge 1234-tests, so the feature branch includes the tests.
  7. If you need to do further work on the tests, you can repeat with a temporary branch that you rebase onto the tip of 1234-tests. (Or you can cherry-pick the commits. Or do cherry-pick with git rev-list, which is a trick I discovered today.)

Next step will be having the script make an interdiff file, which is a task I find particularly fiddly.

Tags: git, patching, drupal.org, workflow
Categories: Elsewhere

agoradesign: Introducing the Outdated Browser module

Planet Drupal - Fri, 27/03/2015 - 14:25
We proudly present our first official drupal.org project, which we released about two months ago: the Outdated Browser module. It detects outdated browsers and advises users to upgrade to a new version - in a very pretty-looking way.
Categories: Elsewhere

Drupal Watchdog: Entity Storage, the Drupal 8 Way

Planet Drupal - Fri, 27/03/2015 - 13:18

In Drupal 7 the Field API introduced the concept of swappable field storage. This means that field data can live in any kind of storage, for instance a NoSQL database like MongoDB, provided that a corresponding backend is enabled in the system. This feature allows support for some nice use cases, like remotely-stored entity data, or exploiting storage backends that perform better in specific scenarios. However, it also introduces some problems with entity querying, because a query involving conditions on two fields might end up needing to query two different storage backends, which may become impractical or simply unfeasible.

That's the main reason why in Drupal 8 we switched from field-based storage to entity-based storage, which means that all fields attached to an entity type share the same storage backend. This nicely resolves the querying issue without imposing any practical limitation, because to obtain a truly working system you were basically forced to configure all fields attached to the same entity type to share the same storage engine anyway. The main feature that was dropped in the process was the ability to share a field between different entity types, which was another design choice that introduced quite a few troubles of its own and had no compelling reason to exist.

With this change each entity type has a dedicated storage handler, which for fieldable entity types is responsible for loading, storing, and deleting field data. The storage handler is defined in the handlers section of the entity type definition, through the storage key (surprise!), and can be swapped by modules implementing hook_entity_type_alter().

Querying Entity Data

Since we now support pluggable storage backends, we need to write storage-agnostic contrib code. This means we cannot assume entities of any type will be stored in a SQL database, hence we need to rely more than ever on the Entity Query API, the successor of the Entity Field Query system available in Drupal 7. This API allows you to write complex queries involving relationships between entity types (implemented via entity reference fields) and aggregation, without making any assumption about the underlying storage. Each storage backend requires a corresponding entity query backend that translates the generic query into a storage-specific one. For instance, the default SQL query backend translates entity relationships to JOINs between entity data tables.

Entity identifiers can be obtained via an entity query or any other viable means, but existing entity (field) data should always be obtained from the storage handler via a load operation. Contrib module authors should be aware that retrieving partial entity data via direct DB queries is a deprecated approach and is strongly discouraged. In fact, by doing this you are completely bypassing many layers of the Entity API, including the entity cache system, which is likely to make your code less performant than the recommended approach. Aside from that, your code will break as soon as the storage backend is changed, and may not work as intended with modules that correctly exploit the API. The only legitimate use of backend-specific queries is when they cannot be expressed through the Entity Query API. However, even in this case only entity identifiers should be retrieved, and then used to perform a regular (multiple) load operation.

Storage Schema

Probably one of the biggest changes introduced with the Entity Storage API is that the storage backend is now responsible for managing its own schema, if it uses any. Entity type and field definitions are used to derive the information required to generate the storage schema. For instance, the core SQL storage creates (and deletes) all the tables required to store data for the entity types it manages. An entity type can define a storage schema handler via the aptly-named storage_schema key in the handlers section of the entity type definition. However, it does not need to define one if it has no use for it.

Updates are also supported, and they are managed via the regular DB updates UI, which means that the schema will be adapted when entity type and field definitions change, or when they are added or removed. The definition update manager also triggers some events for entity type and field definitions, which can be useful for reacting to the related changes. It is important to note that not all kinds of changes are allowed: if a change implies a data migration, Drupal will refuse to apply it, and a migration (or a manual adjustment) will be required to proceed.

This means that if a module requires an additional field on a particular entity type to implement its business logic, it just needs to provide a field definition and apply changes (there is also an API available to do this), and the system will do the rest. The schema will be created, if needed, and field data will be natively loaded and stored. This is definitely a good reason to define every piece of data attached to an entity type as a field. However, if for any reason the system-provided storage is not a good fit, a field definition can specify that it has custom storage, which means the field provider will handle storage on its own. A typical example is computed fields, which may need no storage at all.

Core SQL Storage

The default storage backend provided by core is obviously SQL-based. It distinguishes between shared field tables and dedicated field tables: the former are used to store data for all single-value base fields, that is, fields attached to every bundle, like the node title, while the latter are used to store data for multiple-value base fields and for bundle fields, which are attached only to certain bundles. As the name suggests, dedicated tables store data for just one field.

The default storage supports four different shared table layouts depending on whether the entity type is translatable and/or revisionable:

  • Simple entity types use only a single table, the base table, to store all base field data.

    base table: | entity_id | uuid | bundle_name | label | … |

  • Translatable entity types use two shared tables: the base table stores entity keys and metadata only, while the data table stores base field data per language.

    base table: | entity_id | uuid | bundle_name | langcode |
    data table: | entity_id | bundle_name | langcode | default_langcode | label | … |

  • Revisionable entity types also use two shared tables: the base table stores all base field data, while the revision table stores revision data for revisionable base fields and revision metadata.

    base table:     | entity_id | revision_id | uuid | bundle_name | label | … |
    revision table: | entity_id | revision_id | label | revision_timestamp | revision_uid | revision_log | … |

  • Translatable and revisionable entity types use four shared tables, combining the types described above: the base table stores entity keys and metadata only, the data table stores base field data per language, the revision table stores basic entity key revisions and revision metadata, and finally the revision data table stores base field revision data per language for revisionable fields.

    base table:          | entity_id | revision_id | uuid | bundle_name | langcode |
    data table:          | entity_id | revision_id | bundle_name | langcode | default_langcode | label | … |
    revision table:      | entity_id | revision_id | langcode | revision_timestamp | revision_uid | revision_log |
    revision data table: | entity_id | revision_id | langcode | default_langcode | label | … |

The SQL storage schema handler supports switching between these different table layouts, if the entity type definition changes and no data is stored yet.

Core SQL storage aims to support any table layout, hence modules explicitly targeting a SQL storage backend, like for instance Views, should rely on the Table Mapping API to build their queries. This API allows retrieval of information about where field data is stored, and thus is helpful for building queries without hard-coding assumptions about a particular table layout. At least this is the theory; however, core currently does not fully support this use case, as some required changes have not been implemented yet (more on this below). Core SQL implementations currently rely on the specialized DefaultTableMapping class, which assumes one of the four table layouts described above.

A Real Life Example

We will now have a look at a simple module exemplifying a typical use case: we want to display a list of active users having created at least one published node, along with the total number of nodes created by each user and the title of the most recent node. Basically a simple tracker.

Displaying such data with a single query can be complex and will usually lead to very poor performance, unless the number of users on the site is quite small. A typical solution in these cases is to rely on denormalized data that is calculated and stored in a way that makes it easy to query efficiently. In our case we will add two fields to the User entity type to track the last node and the total number of nodes created by each user:

use Drupal\Core\Entity\EntityTypeInterface;
use Drupal\Core\Field\BaseFieldDefinition;

/**
 * Implements hook_entity_base_field_info().
 */
function active_users_entity_base_field_info(EntityTypeInterface $entity_type) {
  $fields = [];
  if ($entity_type->id() == 'user') {
    $fields['last_created_node'] = BaseFieldDefinition::create('entity_reference')
      ->setLabel('Last created node')
      ->setRevisionable(TRUE)
      ->setSetting('target_type', 'node')
      ->setSetting('handler', 'default');
    $fields['node_count'] = BaseFieldDefinition::create('integer')
      ->setLabel('Number of created nodes')
      ->setRevisionable(TRUE)
      ->setDefaultValue(0);
  }
  return $fields;
}

Note that the fields above are marked as revisionable, so that if the User entity type itself is marked as revisionable, our fields will also be revisioned. The revisionable flag is ignored on non-revisionable entity types.

After enabling the module, the status report will warn us that there are DB updates to be applied. Once complete, we will have two new columns in our user_field_data table ready to store our data. We will now create a new ActiveUsersManager service responsible for encapsulating all our business logic. Let's add an ActiveUsersManager::onNodeCreated() method that will be called from a hook_node_insert implementation:

public function onNodeCreated(NodeInterface $node) {
  $user = $node->getOwner();
  $user->last_created_node = $node;
  $user->node_count = $this->getNodeCount($user);
  $user->save();
}

protected function getNodeCount(UserInterface $user) {
  $result = $this->nodeStorage->getAggregateQuery()
    ->aggregate('nid', 'COUNT')
    ->condition('uid', $user->id())
    ->execute();
  return $result[0]['nid_count'];
}

As you can see this will track exactly the data we need, using an aggregated entity query to compute the number of created nodes.

Since we need to also act on node deletion (hook_node_delete), we need to add a few more methods:

public function onNodeDeleted(NodeInterface $node) {
  $user = $node->getOwner();
  if ($user->last_created_node->target_id == $node->id()) {
    $user->last_created_node = $this->getLastCreatedNode($user);
  }
  $user->node_count = $this->getNodeCount($user);
  $user->save();
}

protected function getLastCreatedNode(UserInterface $user) {
  $result = $this->nodeStorage->getQuery()
    ->condition('uid', $user->id())
    ->sort('created', 'DESC')
    ->range(0, 1)
    ->execute();
  return reset($result);
}

In the case where the user's last created node is the one being deleted, we use a regular entity query to retrieve an updated identifier for the user's last created node.

Nice, but we still need to display our list. To accomplish this we add one last method to our manager service to retrieve the list of active users:

public function getActiveUsers() {
  $ids = $this->userStorage->getQuery()
    ->condition('status', 1)
    ->condition('node_count', 0, '>')
    ->condition('last_created_node.entity.status', 1)
    ->sort('login', 'DESC')
    ->execute();
  return User::loadMultiple($ids);
}

As you can see, in the entity query above we effectively expressed a relationship between the User entity and the Node entity, imposing a condition using the entity syntax, that is implemented through a JOIN by the SQL entity query backend.

Finally we can invoke this method in a separate controller class responsible for building the list markup:

public function view() {
  $rows = [];
  foreach ($this->manager->getActiveUsers() as $user) {
    $rows[]['data'] = [
      String::checkPlain($user->label()),
      intval($user->node_count->value),
      String::checkPlain($user->last_created_node->entity->label()),
    ];
  }
  return [
    '#theme' => 'table',
    '#header' => [$this->t('User'), $this->t('Node count'), $this->t('Last created node')],
    '#rows' => $rows,
  ];
}

This approach is way more performant when numbers get big, as we are running a very fast query involving only a single JOIN on indexed columns. We could even skip it by adding more denormalized fields to our User entity, but I wanted to outline the power of the entity syntax. A possible further optimization would be collecting all the identifiers of the nodes whose titles are going to be displayed and preload them in a single multiple load operation preceding the loop.

Aside from the performance considerations, you should note that this code is fully portable: as long as the alternative backend complies with the Entity Storage and Query APIs, the result you will get will be the same. Pretty neat, huh?

What's Left?

What I have shown above is working code; you can use it right now in Drupal 8. However, there are still quite a few open issues to resolve before we can consider the Entity Storage API polished enough:

  • Switching between table layouts is supported by the API, but storage handlers for core entity types still assume the default table layouts, so they need to be adapted to rely on table mappings before we can actually change translatability or revisionability for their entity types. See https://www.drupal.org/node/2274017 and follow-ups.
  • In the example above we might have needed to add indexes to make our query more performant, for example, if we wanted to sort on the total number of nodes created. This is not supported yet, but of course «there's an issue for that!» See https://www.drupal.org/node/2258347.
  • There are cases when you need to provide an initial value for new fields when entity data already exists. Think, for instance, of the File entity module, which needs to add a bundle column to the core File entity. Work is also in progress on this: https://www.drupal.org/node/2346019.
  • Last but not least, most of the time we don't want our users to go and run updates after enabling a module, that's bad UX! Instead a friendlier approach would be automatically applying updates under the hood. Guess what? You can join us at https://www.drupal.org/node/2346013.

Your help is welcome :)

So What?

We have seen the recommended ways to store and retrieve entity field data in Drupal 8, along with (just a few of) the advantages of relying on field definitions to write simple, powerful and portable code. Now, Drupal people, go and have fun!

Categories: Elsewhere

Annertech: Some recent fun we've had Mapping with Drupal

Planet Drupal - Fri, 27/03/2015 - 12:31
Some recent fun we've had Mapping with Drupal

People love maps. People love being able to visually understand how locations relate to each other. And since the advent of Google Maps, people love to pan and zoom, to click and swipe. But what people hate is a shoddy mapping experience. Mapping can be hard, but fortunately Drupal takes a lot of the pain away.

Why map?

You might want a map if you:

Categories: Elsewhere

Olivier Berger: New short paper : “Designing a virtual laboratory for a relational database MOOC” with Vagrant, Debian, etc.

Planet Debian - Fri, 27/03/2015 - 12:07

Here’s a short preview of our latest accepted paper (to appear at CSEDU 2015), about the construction of VMs for the Relational Database MOOC using Vagrant, Debian, PostgreSQL (previous post), etc.:

Designing a virtual laboratory for a relational database MOOC

Olivier Berger, J Paul Gibson, Claire Lecocq and Christian Bac

Keywords: Remote Learning, Virtualization, Open Education Resources, MOOC, Vagrant

Abstract: Technical advances in machine and system virtualization are creating opportunities for remote learning to provide significantly better support for active education approaches. Students now, in general, have personal computers that are powerful enough to support virtualization of operating systems and networks. As a consequence, it is now possible to provide remote learners with a common, standard, virtual laboratory and learning environment, independent of the different types of physical machines on which they work. This greatly enhances the opportunity for producing re-usable teaching materials that are actually re-used. However, configuring and installing such virtual laboratories is technically challenging for teachers and students. We report on our experience of building a virtual machine (VM) laboratory for a MOOC on relational databases. The architecture of our virtual machine is described in detail, and we evaluate the benefits of using the Vagrant tool for building and delivering the VM.

TOC:

  • Introduction
    • A brief history of distance learning
    • Virtualization : the challenges
    • The design problem
  • The virtualization requirements
    • Scenario-based requirements
    • Related work on requirements
    • Scalability of existing approaches
  • The MOOC laboratory
    • Exercises and lab tools
    • From requirements to design
  • Making the VM as a Vagrant box
    • Portability issues
    • Delivery through Internet
    • Security
    • Availability of the box sources
  • Validation
    • Reliability Issues with VirtualBox
    • Student feedback and evaluation
  • Future work
    • Laboratory monitoring
    • More modular VMs
  • Conclusions

Bibliography

  • Alario-Hoyos et al., 2014
    Alario-Hoyos, C., Pérez-Sanagustín, M., Kloos, C. D., and Muñoz Merino, P. J. (2014).
    Recommendations for the design and deployment of MOOCs: Insights about the MOOC digital education of the future deployed in MiríadaX.
    In Proceedings of the Second International Conference on Technological Ecosystems for Enhancing Multiculturality, TEEM ’14, pages 403-408, New York, NY, USA. ACM.
  • Armbrust et al., 2010
    Armbrust, M., Fox, A., Griffith, R., Joseph, A. D., Katz, R., Konwinski, A., Lee, G., Patterson, D., Rabkin, A., Stoica, I., and Zaharia, M. (2010).
    A view of cloud computing.
    Commun. ACM, 53:50-58.
  • Billingsley and Steel, 2014
    Billingsley, W. and Steel, J. R. (2014).
    Towards a supercollaborative software engineering MOOC.
    In Companion Proceedings of the 36th International Conference on Software Engineering, pages 283-286. ACM.
  • Brown and Duguid, 1996
    Brown, J. S. and Duguid, P. (1996).
    Universities in the digital age.
    Change: The Magazine of Higher Learning, 28(4):11-19.
  • Bullers et al., 2006
    Bullers, Jr., W. I., Burd, S., and Seazzu, A. F. (2006).
    Virtual machines – an idea whose time has returned: Application to network, security, and database courses.
    SIGCSE Bull., 38(1):102-106.
  • Chen and Noble, 2001
    Chen, P. M. and Noble, B. D. (2001).
    When virtual is better than real [operating system relocation to virtual machines].
    In Hot Topics in Operating Systems, 2001. Proceedings of the Eighth Workshop on, pages 133-138. IEEE.
  • Cooper, 2005
    Cooper, M. (2005).
    Remote laboratories in teaching and learning-issues impinging on widespread adoption in science and engineering education.
    International Journal of Online Engineering (iJOE), 1(1).
  • Cormier, 2014
    Cormier, D. (2014).
    Rhizo14-the MOOC that community built.
    INNOQUAL-International Journal for Innovation and Quality in Learning, 2(3).
  • Dougiamas and Taylor, 2003
    Dougiamas, M. and Taylor, P. (2003).
    Moodle: Using learning communities to create an open source course management system.
    In World conference on educational multimedia, hypermedia and telecommunications, pages 171-178.
  • Gomes and Bogosyan, 2009
    Gomes, L. and Bogosyan, S. (2009).
    Current trends in remote laboratories.
    Industrial Electronics, IEEE Transactions on, 56(12):4744-4756.
  • Hashimoto, 2013
    Hashimoto, M. (2013).
    Vagrant: Up and Running.
    O’Reilly Media, Inc.
  • Jones and Winne, 2012
    Jones, M. and Winne, P. H. (2012).
    Adaptive Learning Environments: Foundations and Frontiers.
    Springer Publishing Company, Incorporated, 1st edition.
  • Lowe, 2014
    Lowe, D. (2014).
    MOOLs: Massive open online laboratories: An analysis of scale and feasibility.
    In Remote Engineering and Virtual Instrumentation (REV), 2014 11th International Conference on, pages 1-6. IEEE.
  • Ma and Nickerson, 2006
    Ma, J. and Nickerson, J. V. (2006).
    Hands-on, simulated, and remote laboratories: A comparative literature review.
    ACM Computing Surveys (CSUR), 38(3):7.
  • Pearson, 2013
    Pearson, S. (2013).
    Privacy, security and trust in cloud computing.
    In Privacy and Security for Cloud Computing, pages 3-42. Springer.
  • Prince, 2004
    Prince, M. (2004).
    Does active learning work? A review of the research.
    Journal of engineering education, 93(3):223-231.
  • Romero-Zaldivar et al., 2012
    Romero-Zaldivar, V.-A., Pardo, A., Burgos, D., and Delgado Kloos, C. (2012).
    Monitoring student progress using virtual appliances: A case study.
    Computers & Education, 58(4):1058-1067.
  • Sumner, 2000
    Sumner, J. (2000).
    Serving the system: A critical history of distance education.
    Open learning, 15(3):267-285.
  • Watson, 2008
    Watson, J. (2008).
    Virtualbox: Bits and bytes masquerading as machines.
    Linux J., 2008(166).
  • Winckles et al., 2011
    Winckles, A., Spasova, K., and Rowsell, T. (2011).
    Remote laboratories and reusable learning objects in a distance learning context.
    Networks, 14:43-55.
  • Yeung et al., 2010
    Yeung, H., Lowe, D. B., and Murray, S. (2010).
    Interoperability of remote laboratories systems.
    iJOE, 6(S1):71-80.
Catégories: Elsewhere

Michal Čihař: Spring is here

Planet Debian - ven, 27/03/2015 - 06:00

Finally winter seems to be over and it's time to take out the camera and make some pictures. Out of the many areas where you can see spring snowflakes, we've chosen the Čtvrtě area near Mcely, a village which is less famous, but still very nice.

Filed under: English Photography Travelling | 0 comments | Flattr this!

Catégories: Elsewhere

Chen Hui Jing: 542 days as a Drupal developer

Planet Drupal - ven, 27/03/2015 - 01:00

I’ve just listened to the latest episode of the Modules Unraveled podcast by Bryan Lewis, which talked about the current job market in Drupal. And it made me think about my own journey as a Drupal developer, from zero to reasonably competent (I hope). The thing about this industry is that everything seems to move faster and faster. There’s a new "best" tool or framework released every other day. Developers are creating cool things all the time. And I feel like I’m constantly playing catch-up. But looking back to Day 1, I realised that I have made quite a bit of progress since then.

Learning on the job

I’ve been gainfully employed as a Drupal architect...

Catégories: Elsewhere

Isovera Ideas & Insights: 6 Ways to Increase ROI on Your Drupal Project

Planet Drupal - jeu, 26/03/2015 - 23:28
Drupal is a great choice for your enterprise-level web application project, and maybe even your website. Like every other framework under the sun, it’s also a terrible choice if you’re cavalier about the management of its implementation. Things go awry, scope creeps, budgets get drained, and hellfire rains down from the sky… You get it. The good news is that there are steps you can take to prevent this from happening.
Catégories: Elsewhere

Palantir: Increasing Velocity and Drupal 8

Planet Drupal - jeu, 26/03/2015 - 23:12

Palantir CEO Tiffany Farriss recently keynoted MidCamp here in Chicago where she spoke about the economics of Drupal contribution. In it, she explored some of the challenges of open-source contribution, showing how similar projects like Linux have managed growth and releases, and what the Drupal Association might do to help push things along toward a Drupal 8 release. You can give her presentation a watch here.

With this post, we want to highlight one of the important takeaways from the keynote: the Drupal 8 Accelerate Fund.

Some of you are clients and use Drupal for your sites, others build those sites for clients around the world, and still others provide technology that enhances Drupal greatly. We also know that Drupal 8 is going to be a game changer for us and our clients for a lot of reasons.

While we use a number of tools and technologies to drive success for our clients, Drupal is in our DNA. In addition to being a premium supporting partner of the Drupal Association, we also count amongst our team members prominent Drupal core and contributed module maintainers, initiative leads, and Drupal Association Board, Advisory Board, and Working Group members.

We've all done our part, but despite years of support and contributions from countless companies and individuals, we need to take a new approach to incentivize contributors to get Drupal 8 done. That’s where the Drupal 8 Accelerate Fund comes in.

Palantir is one of seven anchor donors who are raising funds alongside the Drupal Association to support Drupal 8 development. These efforts relate directly to the Drupal Association's mission of uniting a global open source community to build and promote Drupal, and will (and already have) support contributors directly through additional dollars for grants.

The fund breaks down like this:

  • The Drupal Association has contributed $62,500
  • The Drupal Association Board has raised another $62,500 from Anchor Donors
  • Now, the Drupal Association’s goal is to raise contributions from the Drupal community. This is the chance for everyone from end users to independents to Drupal shops to show your support for Drupal 8. Every dollar donated by the community has already been matched, doubling your impact. That means the total pool could be as much as $250,000 with your help.
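As a quick sanity check, the matching arithmetic above works out like this (a sketch; the figures are taken directly from the list):

```php
<?php
// Figures from the breakdown above.
$da_contribution  = 62500;  // contributed by the Drupal Association
$anchor_donations = 62500;  // raised from Anchor Donors
$matching_pool    = $da_contribution + $anchor_donations; // 125000

// Every community dollar is matched from that pool, so a fully
// funded community drive doubles the pool.
$community_goal = $matching_pool;
$total_pool     = $matching_pool + $community_goal;

echo $total_pool . "\n"; // prints 250000
```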

 
Drupal 8 Accelerate is first and foremost about getting Drupal 8 to release. However, it’s also a pilot program for the Drupal Association to obtain and provide financial support for the project. This is a recognition that, as a community, Drupal must find (and fund) a sustainable model for core development.

This is huge on a lot of levels, and those in the community have already seen the benefits with awards for sprints and other specific progress in D8. Now it’s our turn to rally. Give today and spread the word so we can all help move Drupal 8 a little closer to release.


Catégories: Elsewhere

Daniel Pocock: WebRTC: DruCall in Google Summer of Code 2015?

Planet Debian - jeu, 26/03/2015 - 22:58

I've offered to help mentor a Google Summer of Code student to work on DruCall. Here is a link to the project details.

The original DruCall was based on SIPml5 and released in 2013 as a proof-of-concept.

It was later adapted to use JSCommunicator as the webphone implementation. JSCommunicator itself was updated by another GSoC student, Juliana Louback, in 2014.

It would be great to take DruCall further in 2015, here are some of the possibilities that are achievable in GSoC:

  • Updating it for Drupal 8
  • Support for logged-in users (currently it just makes anonymous calls, like a phone box)
  • Support for relaying shopping cart or other session cookie details to the call center operative who accepts the call
Help needed: could you be a co-mentor?

My background is in real-time and server-side infrastructure and I'm providing all the WebRTC SIP infrastructure that the student may need. However, for the project to have the most impact, it would also be helpful to have some input from a second mentor who knows about UI design, the Drupal way of doing things and maybe some Drupal 8 experience. Please contact me ASAP if you would be keen to participate either as a mentor or as a student. The deadline for student applications is just hours away but there is still more time for potential co-mentors to join in.

WebRTC at mini-DebConf Lyon in April

The next mini-DebConf takes place in Lyon, France on April 11 and 12. On the Saturday morning, there will be a brief WebRTC demo and there will be other opportunities to demo or test it and ask questions throughout the day. If you are interested in trying to get WebRTC into your web site, with or without Drupal, please see the RTC Quick Start guide.

Catégories: Elsewhere


Acquia: 2014 greatest hits - 30 Awesome Drupal 8 API Functions you Should Already Know - Fredric Mitchell

Planet Drupal - jeu, 26/03/2015 - 22:02

Looking back on 2014, it was a great year of events and conversations with people in and around Acquia, open source, government, and business. I think I could happily repost at least 75% of the podcasts I published in 2014 as "greatest hits," but then we'd never get on to all the cool stuff I have been up to so far in 2015!

Nonetheless, here's one of my favorite recordings from 2014: a terrific session that will help you wrap your head around developing for Drupal 8, and a great conversation with Fredric Mitchell that covered the use of Drupal and open source in government, government decision-making versus corporate decision-making, designing Drupal 7 sites with Drupal 8 in mind, designing sites for end users and where the maximum business value comes from in your organization, and more!

Catégories: Elsewhere

Chris Hall on Drupal 8: D8 theming first impression

Planet Drupal - jeu, 26/03/2015 - 21:11
D8 theming first impression
Thu, 03/26/2015 - 20:11 by chrishu

Introduction

After upgrading this site to a nice shiny Beta, I was itching to try theming on Drupal 8. I had left off until now because a few simple experiments showed me that even a simple sub-theme broke quickly under the pace of Drupal change; now, though, I should be able to upgrade any efforts and improvements without too much difficulty.

I theme Drupal every now and again and spend more time doing back-end and server related work, but I usually need a good understanding of the mechanics of theming even when not actively doing it.

Often in the past I have been at odds with the theming philosophy of teams I am working with (and have had to capitulate when outnumbered ;)), as I would rather strip out most of the guff that Drupal inserts and break away from the 'rails' that make many Drupal sites turn out kind of samey (apparently the 33% camp).

Also, when working with talented front-end developers who don't necessarily deal mostly with Drupal, it seems such a shame to clip their wings; I would rather start with a theme like Mothership.

The challenge

The assumption I had was that Drupal 8 would be much easier to customise and "go your own way" with than Drupal has ever been before. The mini-challenge I set myself was to re-implement the look of another site, chris-david-hall.info, which runs on ExpressionEngine, and use the same CSS stylesheet verbatim (in the end I changed one line).

The theme is pretty basic, based on Bootstrap 3, but even so it has a few elements of structure that are not very Drupally, which made it an interesting experiment.

More than enough for my first attempt.

The result

Well, this site no longer looks like purple Bartik, and it does bear more than a passing resemblance to the site I ripped the CSS from.

It was pretty easy to restructure things, and Twig theming in Drupal is a massive improvement. I am now convinced that Drupal 8 wins hands down over Drupal 7 for themeability.

There is still a lot more stuff I could strip out; this was a first pass, and I am going to take a breather and come back to it. I have a couple of stylesheets left from Drupal to keep the in-line editing and admin stuff (mostly) working; I would prefer to target those bits more selectively.

The theme is on GitHub, just for interest and comparison at the moment, but depending on later experiments it might turn into something more generically useful.

Still a few glitches

It is a bit difficult to work out whether I have done something wrong or whether I am encountering bugs in the Beta; I will take the time to find out whether issues have been raised when I get the chance. There are problems: for example, for an anonymous user the home link is always active, and some blocks seem to leave a trace even when turned off for a page (which messes with detecting whether a sidebar is active, for example). Both of these problems also show up in Bartik, though.

I plucked the theme from my site at chris-david-hall.info and it needs a lot of work anyway; I am hoping to improve both sites in tandem now.

Catégories: Elsewhere

Wunderkraut blog: How to combine two facet items in Facet API

Planet Drupal - jeu, 26/03/2015 - 20:31

How to change the Solr search query for one facet using these modules: Search API, Search API Solr, Facet API, Facet API Bonus, and some custom code.

I have configured these modules to provide a search page showing the content you search for and a list of facet items that you can filter the search result on. In my case the facet items were a representation of node types that you could filter the search result with. There are tons of blog posts on how to do that; see the Search API Solr documentation.

The facet item list can look like this (list of node types to filter search result on):

- Foo (22)
- Bar (18)
- Elit (10)
- Ipsum (9)
- Ultricies (5)
- Mattis (2)
- Quam (1)

What I wanted to achieve was to combine two facet items into one, so the list would look like this:

- Foo and Bar (40)
- Elit (10)
- Ipsum (9)
- Ultricies (5)
- Mattis (2)
- Quam (1)

The solution was to use the Search API hook hook_search_api_solr_query_alter(). I needed to change the query only for the facet item (node type) "Foo" and also include (node type) "Bar" in the search query. I fetched the facet item name by digging deep into the "$query" argument.

<?php
function YOUR_CUSTOM_MODULE_search_api_solr_query_alter(array &$call_args, SearchApiQueryInterface $query) {

  // Fetching the facet name to change solr query on.
  $facet_item = $query->getFilter()->getFilters();
  if (!empty($facet_item)) {
    $facet_item = $facet_item[0]->getFilters();

    if (!empty($facet_item[0])) {
      if (!empty($facet_item[0][1])) {
        $facet_item = $facet_item[0][1];
        // This is the facet item I want to change the solr query for ("Foo"), also adding node type "Bar" to the filter.
        if ($facet_item === 'foo') {
          $call_args['params']['fq'][0] = $call_args['params']['fq'][0] . ' OR ss_type:"bar"';
        }
      }
    }
  }
}
?>

We have now altered the Solr query, but the list looks the same; the only difference is that if you click on the "Foo" facet you will get both "Foo" and "Bar" (node type) nodes in the search result.
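To illustrate what the hook above does, the filter query sent to Solr ends up looking roughly like this (the ss_type field name comes from the code; the exact quoting may differ between Search API Solr versions):

```text
# Before, when the "Foo" facet is clicked:
fq = ss_type:"foo"

# After hook_search_api_solr_query_alter() runs:
fq = ss_type:"foo" OR ss_type:"bar"
```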

To change the facet item list, I used the Drupal hook hook_facet_items_alter() provided by the contrib module Facet API Bonus.

<?php
function YOUR_CUSTOM_MODULE_facet_items_alter(&$build, &$settings) {

  if ($settings->facet == "type") {

    // Save this number to add to the combined facet item.
    $number_of_bar = isset($build['bar']['#count']) ? $build['bar']['#count'] : 0;

    foreach($build as $key => $item) {
      switch ($key) {
        case 'foo':
          // Change the title of facet item to represent two facet items
          // (Foo & Bar).
          $build['foo']["#markup"] = t('Foo and Bar');

          // Smash the count of hits together.
          if ($build['foo']['#count'] > 0) {
            $build['foo']['#count'] = $build['foo']['#count'] + $number_of_bar;
          }
          break;

        // Remove this facet item now that the Foo item includes this node type in the search result.
        case 'bar':
          unset($build['bar']);
          break;
      }
    }
  }
}
?>

After this, the list should look the way we want:

- Foo and Bar (40)
- Elit (10)
- Ipsum (9)
- Ultricies (5)
- Mattis (2)
- Quam (1)

I also have text printed out by the Facet API submodule Current Search. This module lets you add blocks with text and tokens. In my case I added text to inform the user what he had just searched for and/or filtered on. This can be done by adding existing tokens on the Current Search module configuration page "admin/config/search/current_search". The problem for me was that the token provided contained the facet item that Facet API created, not the one I changed. So I needed to change the token text "Foo" to "Foo and Bar". This can be accomplished with hook_tokens_alter().

<?php
function YOUR_CUSTOM_MODULE_tokens_alter(array &$replacements, array $context) {

  if (isset($replacements['[facetapi_active:active-value]'])) {
    switch ($replacements['[facetapi_active:active-value]']) {

      case 'Foo':
        $replacements['[facetapi_active:active-value]'] = 'Foo and Bar';
        break;
    }
  }
}
?>

And that's it.
Link to all code

Catégories: Elsewhere

Angie Byron: How and why D8 Accelerate is spending your hard-earned cash

Planet Drupal - jeu, 26/03/2015 - 20:13

Hopefully you've heard the news about the Drupal Association's new D8 Accelerate grants program, and the fundraising drive we have currently going on. If not, the gist is that the Drupal Association has created a central fund, managed by the Drupal core committers, to fund both "bottom-up" community grants for things like targeted sprints or "bug bounties," as well as "top-down" spending driven from the core committers on larger strategic initiatives that help accelerate Drupal 8's release. All D8 Accelerate grants that are provided are tracked centrally at https://assoc.drupal.org/d8accelerate/awarded, including what the money was used for, how much was spent, to whom it went, and a report from the grant recipient(s) that outlines the work that was accomplished.

However, it can be a little hard to parse from that format the larger meaning/context of this work, especially if you don't spend upwards of 30 hours per week in the core queue like most of these folks. :) As Chief Wearer of Many Hats™, I sit on both the Drupal Association board and the committee that manages these funds. This puts me in a good position to provide a bit of "behind the scenes" info on how the funding process works, as well as some of the larger story of how these funds are benefitting not only Drupal 8 core, but the larger Drupal ecosystem.

Drupal 8 Acceleration

Performance


Source: https://www.drupal.org/node/2370667

As noted in my post-DrupalCon Bogotá critical issue run-down, performance improvements are a large chunk of the work remaining in Drupal 8. We had deliberately postponed most of this work until post-beta to avoid premature optimization and to allow all the major architectural chunks to be in place. However, we definitely can't release Drupal 8 while it is much slower than Drupal 7.

The D8 Accelerate fund has been instrumental in not only helping to address critical performance regressions, but also in accelerating the development of Drupal 8's next-generation cache system.

Or, to sum it up in a cheesy catch-phrase:

For less than $10,000, we're making Drupal 8 twice as fast! :D

Win!

Unblocking the beta-to-beta upgrade path

The second large focus of D8 Accelerate grants has been around D8 upgrade path blockers. These are highly strategic, because they unblock a beta-to-beta upgrade path between Drupal 8 releases, which is extremely important for early Drupal 8 adopters.

For example, one large chunk of work D8 Accelerate has funded is in making sure Entity Field API and Views work well together. This is critical for features such as Multilingual, so content shows up in the right languages when expected, and it's necessary to complete this work prior to providing an upgrade path since it would be horrendous to write hook_update_N() functions for some of the necessary changes.

We're also exploring other alternatives to provide a beta-to-beta upgrade path to early adopters sooner, which is viable now that the hardest data-model-changing issues are done.

Security


Source: http://www.codepositive.com/code-positive/about

Obviously, we do not want to release Drupal 8 with known security vulnerabilities, but neither do we want to release an "upgrade path beta" that we encourage early adopters to use with known security vulnerabilities. Hence, we are trying to get any critical security issues taken care of sooner than later.

For example, one chunk of work that D8 Accelerate is funding in this area is tightening security around Drupal 8 entity API. Numerous form validation functions in core contain entity-level validation, to the wild dismay of anyone who's ever tried to implement web services on top of Drupal. The reason that's bad is because if you attempt to save something using just the Entity API (as you will in a REST API scenario where there are no forms), you will end up skipping validation routines and could end up with invalid and/or insecure data entry.
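As a sketch of what entity-level validation looks like when invoked outside of forms (a hypothetical example, assuming a loaded Drupal 8 content entity in $node; not the exact core work under discussion):

```php
<?php
// Validate the entity directly, independent of any form submission,
// as a REST or other programmatic code path would need to.
$violations = $node->validate();
if (count($violations) > 0) {
  foreach ($violations as $violation) {
    // Surface each constraint violation instead of saving invalid data.
    drupal_set_message($violation->getMessage(), 'error');
  }
}
else {
  $node->save();
}
```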

Targeted Sprints to Crush Criticals


Source: https://groups.drupal.org/node/456353

As demonstrated at the Ghent critical issue sprint last year, nothing is better for D8's velocity than getting a bunch of awesome contributors in the same place to pound on the critical queue together. D8 Accelerate funded a fantastic Menu link critical sprint at DrupalCamp New Jersey which, in addition to turning this area of criticals from "OMGWTFBBQ" to an actionable plan, resulted directly or indirectly in all of the related critical issues in this area being closed, within a couple of weeks after the sprint.

We aim to do more of these same types of targeted sprints throughout the year, the next one being the DrupalCI sprint in a couple of weeks. More on that in a sec.

Drupal Community Acceleration

Testbot modernization


Source: https://www.previousnext.com.au/blog/architecting-drupalci-drupalcon-ams...

Our beloved Drupal.org testbot has been showing its age. https://qa.drupal.org/ is still on Drupal 6, and largely on life-support these days. Testbot doesn't support testing on multiple PHP versions and databases, both of which are Drupal.org (websites/infra) blockers to a Drupal 8 release.

The DrupalCI: Modernizing Testbot Initiative is being driven by numerous devops-inclined community members from around the world. It aims to rebuild testbot from a collection of Drupal modules to a more standard CI stack (using big fancy words like Jenkins and Docker and Puppet and Travis and Silex) that your average PHP/devops folk can both understand and help maintain.

There's been a lot of work on DrupalCI already, and the upcoming D8 Accelerate sprint on DrupalCI: Modernizing Testbot Initiative will bring all the various contributors together to form DrupalCI Voltron to get an MVP of all the various pieces working together. Actual deployment will happen some time later, and both the new and old testbot will run alongside each other for a good while so any kinks can be worked out while D8 development stays stable.

This particular improvement not only allows Drupal 8 to ship, it also will provide great new functionality to all projects on Drupal.org! The architecture allows for ample room for later expansion as well, so we could start doing things like automated code reviews, performance testing, front-end testing, etc.

People power!

Here are some of the awesome Drupal contributors who've benefited from these funds:


Daniel Wehner (dawehner)
Daniel is a major driving force in Drupal core, as well as the person with the most commit mentions in Drupal 8. His work spans not only the Views in Drupal Core initiative, but in all other areas upon which he sets his sights. No issue is safe! :)
Lee Rowlands (larowlan)
Lee is another powerhouse core generalist who tries to tackle a #CriticalADay. You can learn more about Lee in this Community Spotlight.
Andrei Mateescu (amateescu)
Andrei is a co-maintainer of Drupal 8 core's Entity reference field, Field API/UI, and the transliteration system. He has also contributed to dozens of contributed projects.
Francesco Placella (plach)
Francesco has been a significant contributor to internationalization functionality since the Drupal 7 days. In Drupal 8 he's a key fixture in the D8 Multilingual Initiative.
Wolfgang Ziegler (fago)
Wolfgang has worked with Drupal since 2005. In addition to his major contributions to Drupal 8's Entity Field API, he also maintains various widely-used contributed modules such as Rules and Profile2.
Klaus Purer (klausi)
In addition to his work driving Drupal 8's REST functionality, Klaus is also a member of Drupal's Security Team and a driver of automated tools for coding standards checking.
Fabian Franz (Fabianx)
In addition to being a mad performance scientist extraordinaire, Fabian has also been a prominent contributor to the Twig in core initiative.
Jelle Sebreghts (Jelle_S)
One of the drivers of mobile improvements in Drupal 8, Jelle also maintains dozens of contributed modules through his work at Attiks.

Not just code!

D8 Accelerate funds are being used not only to fund development work, but also to fund patch reviews as well as more "project management"-y tasks like "triaging" a set of issues to find the truly critical ones, research on different approaches, etc. Wherever possible, the core committers explicitly look for opportunities to fund two people, usually a developer and a patch reviewer, in order to maintain the sanctity of Drupal core's peer review process.

I think this is great, because it highlights that it's not just raw PHP that's going to get Drupal 8 out the door; it's a joint effort of many complementary skills coming together.

Show me the money!


Source: https://blackincense.files.wordpress.com/2008/10/skeptical-cat-is-fraugh...

Why $250k to accelerate D8?

This is a very reasonable question to ask, particularly in light of the widely-cited statistic of 2,750+ contributors to Drupal 8, and various Drupal companies employing major contributors to Drupal core. Here are a few points:

  1. Drupal 8 will be a truly revolutionary release, not only by providing tons more useful functionality out of the box for site builders and content authors (WYSIWYG, mobile support, Views, configuration management, etc.), but by modernizing the underlying code base to address years of technical debt, and help "future-proof" Drupal for the next 10+ years. Unsurprisingly, this means that the total amount of work that has already gone into D8, and that remains needed to move D8 from a late beta to a release, is larger than it was for earlier versions of Drupal. Most technology maturations follow that pattern.

    Drupal 8's release also unlocks the move to a new release cycle that introduces backwards-compatible feature releases every 6 months. This allows us to "release early, release often," as opposed to "release every 4+ years, coupled with lots and lots of API breaks." ;)

  2. For these reasons, as well as many others, there is significant community benefit to releasing 8.0.0 as soon as possible, both so that sites can be built on it and so that the 8.1.x branch can be opened for development by everyone with a feature-idea itch to scratch. Additionally, many organizations and individuals who would benefit from D8 being released sooner rather than later don't have the expertise or time to solve the remaining critical issues. These organizations and people might be willing to contribute money, but don't know who best to send it to, or don't want to deal with the administration of contracting directly with individual core contributors. This fund is an opportunity for them to make a difference without dealing with that administration.

    Make no mistake: Drupal 8 will get done, with or without this money. The goal of the fund is not about saying that our current awesome core contributor base is incapable of completing the work; it's only a recognition that funding work can make it happen faster.

  3. Why is this so? It's a common misconception that most core developers are paid for their work, either by Drupal companies who employ them, or by their customers. In reality, those directly financially compensated for their contributions to core (and especially to Drupal 8, which is not yet commercially viable for the masses) are a tiny fraction of the overall number of contributors.

    While there are numerous contributors who have already spent literally years contributing to core during their nights and weekends, and as a result have developed the kind of expertise needed to finish some of the remaining hard critical issues, relying on their ongoing availability of free time is not sustainable. These include contributors who work as freelance developers for clients, and it's certainly unfair to expect these people to turn down paid client work in order to have free time to work on core, or to quit being freelancers and become employees of forward-thinking Drupal companies who provide company time for core contribution. One of my favorite aspects of D8 Accelerate is that it is helping to "level the playing field" by making it possible for these people to have time to work on core regardless of their current employment situation.

  4. It's also important to emphasize here that injecting funding into the "bug fix slog" phase of major Drupal releases, when all the fun stuff that tends to motivate volunteers is long exhausted, is nothing new. That should come as no surprise, given that there have always been companies with financial interests in having a given version of Drupal ready sooner. For example, in Drupal 6, Acquia funded release manager Gábor Hojtsy full-time to help get that release done. In Drupal 7, in addition to employing core contributors full-time, Examiner.com paid numerous "bug bounties" out to folks to help slay specific critical issues. The difference here is that the DA as a non-profit organization needs to be extremely transparent in anything it's doing with the community's money, so there is greater visibility on things this time around.

If you don't want to donate, that's totally okay. You'll still be able to use Drupal 8 all you want, for free, when it's ready. Donating to this fund is only an opportunity to help make that happen sooner, if that's sufficiently valuable to you.

For a lot more "deep thinking" around these topics, see:

Many thanks to effulgentsia for his extensive help on this part!

How do you decide on how money gets spent?

The core committers have a well-documented process that explains how we decide what to fund. The TL;DR version is that we look at criteria like:

  • Is a proposal genuinely a release blocker to Drupal 8, or something that will otherwise directly lead to an accelerated Drupal 8 release? (That's a biggie.)
  • Is a proposal resolving a blocker to other work, especially other release blockers?
  • Is a proposal resolving an "ecosystem" blocker? (For example, "D8 upgrade path" issues that block early D8 adoption, or blockers to porting a major portion of contributed modules/themes.)
  • Is this a place where we can inject funding to take an issue the "last 20%" and get it across the finish line quickly?
  • Is momentum in this area slow, making it unlikely to be fixed "organically" by D8 contributors?
  • Are the people working in this area not directly funded (by an employer or client) to fix it already?
  • Do we have some confidence that funding will lead to a successful outcome?

Proposals that answer "yes" to more of these questions than not are more likely to get funded. The D8 Accelerate team is also constantly on the lookout for work that meets these criteria, proactively reaching out to contributors to help get things started.

In short, we take our responsibility with the community's money very seriously, and have turned down multiple community proposals that were fantastic ideas but did not fit these criteria. (Where appropriate, we refer folks over to the Community Cultivation Grants instead.)

Also, please note that the previous restriction on people requesting funding for their own time was lifted a month or so ago (thanks, DA!). So if you are a contributor who knows a lot about critical issue #12345, you can request a stipend (initially capped at $500 for five hours) to help push it forward.

If that sounds like you, or you have other creative ideas on how we can get Drupal 8 out faster, apply for a grant today!

Thank you for your support!

I wanted to take the opportunity to give a huge shout-out to the "anchor donors" of the D8 Accelerate campaign:

Thanks to their efforts, every dollar you contribute is already matched by the Drupal Association and these anchor donors, doubling your impact. If you'd like, you can make a donation to my fundraising drive (I've set a very ambitious goal of $20,000 since that's 8% of $250,000 — get it? ;)):


...or, find your favourite Drupal person at https://www.crowdrise.com/d8accelerate/fundraiser and donate to theirs instead, or create your own! :)

Thanks as well to the folks who somehow stumbled across it and donated to my fundraiser already—Andreas Radloff, Douglas Reith, and Ian Dunn. I thought Ian's note was particularly awesome! :D

And finally, thank YOU for any and all support you can provide that will help us make Drupal 8 the most successful release of Drupal yet! :D If you have any other questions, please feel free to ask!

Tags: drupal 8, drupal