Feed aggregator

KnackForge: Vimeo Advanced API for Drupal

Planet Drupal - Wed, 17/06/2015 - 09:33
The Vimeo Advanced API allows users to access, edit, and upload videos (if approved). With the Advanced API you can also create albums and add videos to albums, channels, and groups. Let's take a look at what you need to do to make the Vimeo Advanced API work in Drupal:
  1. Install the Vimeo API library on your Drupal site.
  2. Register your site with Vimeo and get the consumer keys.
  3. Authorize your site and get the OAuth tokens.
  4. Start using the Advanced API for Vimeo.
Installing the Vimeo API
  • Download the "Advanced API PHP Library" from here
  • Extract the contents of the zip file and move them to a folder called "vimeo" at the root of your Drupal website.
Registering the Drupal site with Vimeo
  • Log in to your Vimeo account and register your website as a new "app" at https://developer.vimeo.com/apps/new
  • Enter values for the required fields.
  • Use the URL of your website as the 'Application URL'.
  • Enter the "Application Callback URL", which is very important as the authentication process will start from here.
  • When you register an app for the first time, you may have to wait for a human to authorize the app request.
  • Get your 'Consumer Key' and 'Consumer Secret' and save them before proceeding.
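With the consumer keys and OAuth tokens in hand, using the library from code comes down to instantiating it and issuing calls. A minimal sketch, assuming the library's phpVimeo class as shipped in vimeo.php (check the downloaded file for exact signatures; all credentials below are placeholders):

<?php
// Load the Advanced API PHP Library extracted into the "vimeo" folder.
require_once 'vimeo/vimeo.php';

// Consumer key/secret from the app registration page, plus the
// OAuth token/secret obtained when authorizing the site.
$vimeo = new phpVimeo(
  'YOUR_CONSUMER_KEY',
  'YOUR_CONSUMER_SECRET',
  'YOUR_OAUTH_TOKEN',
  'YOUR_OAUTH_TOKEN_SECRET'
);

// Example Advanced API call: list the authenticated user's videos.
$videos = $vimeo->call('vimeo.videos.getUploaded');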
Categories: Elsewhere

Modules Unraveled: 139 DrupalCon New Orleans Details with Eric Schmidt, Sabrina Schmidt, Jason Want and Jeff Diecks - Modules Unraveled Podcast

Planet Drupal - Wed, 17/06/2015 - 07:00
Published: Wed, 06/17/15
Download this episode
DrupalCon New Orleans
  • Where is the DrupalCon going to be?
    • New Orleans Convention Center
  • When is it?
    • May 9-13, 2016
  • Why New Orleans?
    • We are seeing an incredible rebirth of a Great American City. Hurricane Katrina was such an unbelievable disaster; 80% of the city was flooded. Surrounding parishes fared even worse (we have parishes instead of counties): St. Bernard Parish, just downriver, was 99% flooded. In the last 10 years we have overcome seemingly insurmountable rebuilding challenges, and have plenty more in the works. DrupalCon coming to New Orleans is a great affirmation of the progress we have made. It has a vibe like no other city; you can feel the life.
  • Why were you so driven to bring DrupalCon to your town?
    • It’s such a great place to be! Growing up 5 miles from Bourbon Street, we tend to take our City for granted. We do things that are rarely seen in the world! Food, Festivals, Family activities, Music, and of course you can drink in public! The general attitude across the whole city is very inviting and laid back. Really, a perfect place for a crowd like the Drupal Community!
  • What does the tech community look like there?
    • Growing in leaps and bounds. The entrepreneur landscape is one of the top in the country – we lead the nation in startups per capita by 64%, and we have a growing network of capital, which is important for startups. Gameloft, GE Capital, and High Voltage Software have all chosen New Orleans because of our deep incentives, unique culture, and low cost of living. And our tech community is coalescing with the formation of TechNO, a coalition of local tech companies who meet regularly to promote the presence of the industry, New Orleans Entrepreneur Week hosted by the Idea Village, and NOLA Tech Week, which attracts national speakers and provides a great opportunity to showcase the industry. Finally, many local community colleges and universities are developing curricula to meet new digital workforce demands. There is no better opportunity in the country for tech companies than New Orleans. (summarized from GNO, Inc.)
  • What does the Drupal community look like?
    • We just had our Second camp! :-) Small but dedicated! We have had Meetups Monthly since 2010.
  • How important is the local community with regards to putting on a DrupalCon?
    • I think now that the Drupal Association has taken the reins of the Cons, the local community plays a part, but not like, say, 2010, when we were in San Francisco. The local community had to shoulder the brunt of the work. And frankly, it was a lot, plus we probably had a limited number of cities with a local community of that size. That’s one of the great things about the Association: organizing DrupalCons!
  • What’s the Drupal adoption look like in New Orleans?
    • Growing, like everything else down here! The larger universities have adopted Drupal: Tulane, Loyola, and LSU up in Baton Rouge; plus the WWII Museum, WWOZ (a great radio station, you should listen online), Cafe Du Monde, the Chef John Besh Restaurant Group, Audubon Nature Institute, Dr. Tichenor’s, and maybe more…
  • Who’s going to be the “boots on the ground” in New Orleans playing “host”?
    • Hopefully, us! We are both born and raised in the New Orleans area. I am involved in the local civic and business community, and the entire tech community is excited to host Drupalers!
  • How is it organized compared to years past? (Level of community and association involvement)
    • Again, the Drupal Association has done a great job of spearheading the Cons. We worked closely with them to develop the logo and overall branding of the Con. In the coming months, we will work with them to look at venues and locations for events. Sponsors have reached out to us for help organizing their specific needs for parties and meetings.
  • How will you be choosing who selects sessions?

    • Each Con we put together the Track Team which is comprised of global track chairs (people who have evaluated and selected sessions for a Con at least once before) and then we work to assemble the Local Track Chairs who work in conjunction with the globals. We get these people from recommendations from within the community, people reaching out to volunteer and people expressing interest to Global Track Chairs. They go through an interview process with the Drupal Association and then the team is assembled and starts working to get out the call for content. It’s quite a ways away planning-wise but the Drupal Association will start putting together the New Orleans Track Team in the late fall, so if you’re interested or know someone who would be a great addition please reach out to amanda@association.drupal.org.
      You can read all about the session selection process here: https://events.drupal.org/barcelona2015/session-selection-process
  • For those that want to have a future Con in their community, do you have any advice?

    • We heard interest from the Drupal Association in having a New Orleans Con about five years ago, but we didn’t have a local community to support it. We started up a small meetup in Baton Rouge in 2010, then it slowly migrated to New Orleans. We didn’t lobby anyone to win the conference for the city. We just tried to establish a community and show consistent interest over the years, and trusted that New Orleans is a destination that the community would want to visit. Eric: you were at that first meetup and have helped to coordinate the growth of the group, what are your thoughts?
    • Have a consistent Meetup! We decided at our first Meetup we would meet on the First Thursday of the Month, even if it was only two people. And barring that occasional conflict with a carnival parade we have done that. Then organize a Camp, start small and be consistent!
  • Before we started recording, you mentioned that you wanted to talk about possible afterparty locations. Do you want to do that now?
    • Everywhere!
    • Crawfish season
Questions from Twitter
  • Ryan Gibson
    What kind of festivities can we expect during DrupalCon NO? #MUP139
  • Carie Fisher
    #MUP139 best place for drupalcon parties? any places we should try and visit in NO?
  • Robyn Green
    Question for Jeff: What amount of LSU attire will I be required to have for DrupalCon, and where can I get an 'I <3 Hallman' hat? #MUP139
  • Ryan Gibson
    What is the must-have NO food that I should plan on bringing Tums to be able to enjoy? #MUP139
  • markie
    Thanks @jasonawant for letting me crash at your place during JazzFest. #MUP139
  • Ryan Gibson
    And for letting us take a spin on the boat.
Episode Links: DrupalCon New Orleans Website, DrupalCon NA on Twitter, Sabrina on Twitter, Sabrina on Drupal.org, Eric on Twitter, Eric on Drupal.org, Jason on Twitter, Jason on Drupal.org, Jeff on Twitter, Jeff on Drupal.org, New Orleans Announcement Video, Mediacurrent Website, evanSchmidt design Website, Louisiana Drupal Group, Louisiana Drupal on Meetup.com, Drupal Camp NOLA, Louisiana Drupal on Twitter
Tags: DrupalCon, New Orleans, planet-drupal
Categories: Elsewhere

Realityloop: Wysiwyg Fields

Planet Drupal - Wed, 17/06/2015 - 06:40
17 Jun Stuart Clark

Wysiwyg Fields has been one of my more ambitious ideas in the world of Drupal. It is something that I feel Drupal has needed for a very long time, and something I could not resist taking on, but at times I felt that I had bitten off more than I could chew.

As difficult as it has been to write, it has been equally difficult to write about, but here I go: my name is Stuart Clark, and this is Wysiwyg Fields.

 

What is Wysiwyg Fields?

Wysiwyg Fields is an inline field management system: a module that bridges the gap between Drupal fields and CKEditor, giving you the power of Drupal’s field system via the simple usability of a CKEditor dialog.

What that means is that Wysiwyg Fields allows for any Drupal field to be embedded directly into CKEditor and behave as a native CKEditor plugin, removing unnecessary clutter from your Drupal entity forms.

 

So… what is Wysiwyg Fields?

Let’s look at a standard use case for Drupal: inline image management.

The below screenshot is of the Article content type provided by a default Drupal install. It includes three fields: Tags, Body and Image.

In my experience, clients and content editors alike would take an instant disliking to this form, as it’s missing a Wysiwyg and a simple method of inserting images into the body content.

Adding a Wysiwyg is easy enough, and while most (if not all) Wysiwygs will provide an image solution, the images live outside of the Drupal realm; they cannot take advantage of Drupal’s field formatter system, be easily re-used in content or Views, or generally be utilised by any other Drupal module.

 

Below is the same Article content type with Wysiwyg Fields enabled and configured for the Image field.

Apart from the addition of the Wysiwyg, the other most obvious change here is that the Image field is no longer present, leaving us with a much more compact form.

But the Image field is still there; it’s just now embedded in the Wysiwyg instead of being part of the page. Simply click the Image field button on the Wysiwyg and you will be presented with something like the following:

This is a standard Image field widget embedded into a CKEditor dialog with one minor difference: Formatter and Formatter settings.

Upon uploading the image, the user can choose the formatter (if multiple formatters are set up for the field) and the formatter’s settings (if the formatter has settings) before clicking the OK button to insert the image.

Once inserted, the field, formatter and formatter settings can all be adjusted by selecting the inserted field and re-opening the Image field dialog, or simply by double clicking on the inserted field.

The field is rendered in the Wysiwyg as per the formatter and formatter settings, just as it would be when viewing the article after it has been created. However, in the source code view (or via a non-Wysiwyg based Text format) it is a simple token, as seen below.

The benefit of this approach is that if the field or formatter ever changes, the content will automatically update to reflect those changes, whereas injected markup would not have the same flexibility.

 

Is Wysiwyg Fields an Image/Media solution?

No, Wysiwyg Fields can be used as an Image or Media solution, but it is not limited to any specific type of field. It can be used with any Drupal field, be it an Image field or a Text field, an Entity reference field or a View field.

Wysiwyg Fields doesn’t focus on being the best Image or Media solution. Instead, I would generally use Wysiwyg Fields in conjunction with existing Image or Media solutions; if they provide a field, Wysiwyg Fields can work with it.

However, Wysiwyg Fields is the successor to a module that was intended to be an inline image solution, Wysiwyg ImageField, and it is not out of the realm of possibility that Wysiwyg ImageField may see a future where it becomes a layer on top of Wysiwyg Fields to provide a simple inline solution.

 

Ok, I’m convinced, how do I set it up?

Setup is (hopefully) relatively easy; there are really only a few steps to get it running on a fresh Drupal installation:

  1. Install the module and dependencies as per standard Drupal instructions.

  2. Create or update a field so that it uses the Wysiwyg field widget.

  3. As per instructions provided on screen, add your Wysiwyg field button to a CKEditor profile.

That’s all it takes.

As Wysiwyg Fields is made up of many components, some of these components also require relevant setup, but Wysiwyg Fields manages this all behind the scenes as simply as possible. Primarily, Wysiwyg Fields ensures that the Replace tokens filter is enabled on the Text formats utilised by CKEditor profiles with a Wysiwyg field button assigned.

 

Additional configuration

If you created the field rather than changing the widget of an existing field, you will have seen some additional settings for the field, as shown below.

  • Sub widget type
    Wysiwyg Fields defines its own field widget, but some field types have multiple other widgets that change the way a field acts. This setting allows you to choose the sub widget that will be used within the Wysiwyg Fields wrapper widget.
     
  • Sub widget settings
    These are settings specific to the field’s sub widget.
     
  • Label
    By default, the button and dialog on the Wysiwyg will use the field label. This field allows that value to be overridden.
     
  • Icon
    Allows the customisation of the icon which will be displayed in the Wysiwyg.
    Icons are provided by the Icon API module and any module that defines icon providers.
     
  • Formatters
    These are the field formatters that you wish to make available to the end user for the rendering of the field output.

 

Sounds good, can I use it now?

Yes! As of today (the 17th of June 2015) the first release for Drupal 7 is available: 7.x-2.0-beta1.

You can head over to the Wysiwyg Fields project page and grab it right now.

 

Can I simply test it?

Why yes, you can. Thanks to SimplyTest.me, you can spin up a test Drupal 7 site with Wysiwyg Fields already installed just by going to: http://ply.st/wysiwyg_fields

You will still need to run through the setup steps above, but as I said, it’s easy.

 

This is a beta?

Yes, this is a beta, and as such there may still be some outstanding issues, as well as functionality still on the todo list.

When using the module, keep the beta status in mind, and if you do experience any troubling behaviour, or just have suggestions for the module, let us know over at the Wysiwyg Fields issue queue.

 

Get it now!

Download Wysiwyg Fields 7.x-2.0-beta1 now!

drupal, drupal planet, module
Categories: Elsewhere

Freelock: The case for git as a deployment tool

Planet Drupal - Wed, 17/06/2015 - 06:10

More and more I keep running into assertions that Git is a version control tool, and that if you use it for deployment, you're doing it wrong.

Why?

At Freelock we find it to be a very effective deployment tool, and I'm not seeing a solution that meets our needs any better.
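To make that concrete, the core pattern is tiny. Here is a minimal sketch of a bare-repo-plus-hook deployment (a generic illustration, not necessarily Freelock's own pipeline; the paths, host, and branch are placeholders):

# On the server: a bare repository that receives pushes.
git init --bare /srv/site.git

# /srv/site.git/hooks/post-receive (shell script, made executable):
#   #!/bin/sh
#   GIT_WORK_TREE=/var/www/site git checkout -f master

# On the developer machine: deploying is just a push.
git remote add production user@example.com:/srv/site.git
git push production master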

Two presentations that caught my attention recently mentioned this:

DevOps, Deployment, Drupal Planet, Drupal, git, Salt, Jenkins, Docker
Categories: Elsewhere

C.J. Adams-Collier: Trip Report: UW signing-party

Planet Debian - Wed, 17/06/2015 - 01:28

Dear Debian Users,

I met last night with a friend from many years ago and a number of students of cryptography. I was disappointed to see the prevalence of black hat, anti-government hackers at the event. I was hoping that civilized humanity had come to agree that using cryptography for deception, harm to others and plausible deniability is bad, m’kay? When one speaks of the government as “they,” nobody’s going to get anywhere really quick. Let’s take responsibility for the upkeep of the environment in which we find ourselves, please.

Despite what I perceived as a negative focus of the presentation, it was good to meet with peers in the Seattle area. I was very pleasantly surprised to find that better than half of the attendees were not male, that many of the socioeconomic classes of the city were represented, as were those of various ethnic backgrounds. I am really quite proud of the progress of our State University, even if I’m not always in agreement with the content that they’re polluting our kids’ brains with. I guess I should roll up my sleeves and get busy, eh?

V/R,

C.J.

Categories: Elsewhere

Norbert Preining: ptex2pdf release 0.8

Planet Debian - Wed, 17/06/2015 - 00:38

I have uploaded ptex2pdf to CTAN. Changes are mainly about how files are searched and extensions are treated. ptex2pdf now follows the TeX standard, that is:

  • first check if the passed in filename can be found with libkpathsea
  • if not, try the filename with .tex appended
  • if that also does not succeed, try the filename with .ltx appended.

This also made finding the .dvi file more robust.

The files should be in today’s TeX Live, and can be downloaded directly from here: ptex2pdf-0.8.zip, or only the Lua script: ptex2pdf.lua.

For more details, see the main description page: /software-projects/ptex2pdf (in Japanese) or see the github project.

Categories: Elsewhere

Drupal Easy: DrupalEasy Podcast 157 - Rabbit Food (Anna Kalata - Getting Started in Core Development)

Planet Drupal - Tue, 16/06/2015 - 20:40
Download Podcast 157

Anna Kalata (akalata), a freelance Drupal Developer (AnnaKalata.com) from the Chicago area joins Ryan Price and Mike Anello to talk about getting started in core development, the New Jersey Shore Sprint, workflow, the Druplicon, Drupal major version stats, and our picks of the week.

read more

Categories: Elsewhere

Blink Reaction: Building Native Apps - Part 1

Planet Drupal - Tue, 16/06/2015 - 19:39
Building native mobile apps with Ionic Framework and Drupal back-end: Setup Development Environment

Today, a large amount of web traffic comes from mobile device users. For this reason, responsive websites – those with great adaptation for smartphones, tablets and even watches and TVs – are a must-have for any company or startup. However, browser web-applications have a lot of limitations and some performance issues on devices. If you want a prime user experience for customers who use gadgets to interact with your services, a native mobile app is the best choice. It will give you an opportunity to use all native APIs, which use device hardware resources to communicate with users. In most cases web-apps don’t have that capability.

Technology introduction

There are currently two popular platforms for mobile devices: Android and iOS. Each relies on a specific language for applications – Android uses Java as its main programming language, and iOS uses Swift in its latest version. But it is possible to make apps for these and other platforms without needing to learn each language (or having a different developer for each). This is done through hybrid applications: you can write simple HTML + CSS + JavaScript web-apps, and then convert them to native apps. There are a few pros and cons to using hybrid apps instead of native ones:

Pros:
  • No need for developers to learn new programming languages

  • One codebase for all mobile platforms

  • Can use almost all native APIs

  • Can use the same content as your web-app

Cons:
  • A little bit slower than native, but not critical

  • Output files are larger

Why Ionic?

In this tutorial series, I will use Ionic Framework to build our native app. Ionic Framework is an open-source project that works on top of the Apache Cordova platform and has AngularJS under the hood. The Cordova platform functions as a bridge between our HTML5 application and native device APIs; it provides the base build functionality and many more available plugins. In addition, it works with a dozen mobile operating systems, such as Android, iOS, Windows Phone, Firefox OS, and more. Ionic gives us the opportunity to write an app with all the capabilities of AngularJS and its modules, along with a design starting point and UI elements. It has a NodeJS CLI to control the build process and a couple of boilerplates to make starting a project immediate and simple.

Environment Setup

At this point we have a choice of how to build our development process. The first option is to use a preconfigured Vagrant install called Ionic Box - if you aren’t familiar with Vagrant you can read the following tutorial series. Ionic Box is a Windows virtual machine with all the software you’ll need to develop Android native apps. In this post, I will build an Android application because I’m working on Windows, but I will show how to add iOS support to your project if you are on Mac.

The second option is to install all programs manually on your system. For this, I made a step-by-step plan to set up your Windows environment:

  1. Install Java:

    1. Go to Oracle Java downloads page and click on the JDK download link:

    2. On the JDK download page, accept the license agreement, then choose and download the installer for your operating system

    3. Run installer with default options

    4. Configure Environment Variables in system

        1. Go to Control Panel -> System and Security -> System

        2. Click on Advanced system settings

        3. Click Environment Variables button

        4. Click New button in System variables fieldset and add JAVA_HOME variable with value being the path to your jdk folder



  2. Install Apache Ant

    1. Download the distribution from here

    2. Run installer

    3. Configure Environment Variables in system

        1. Go to Control Panel -> System and Security -> System

        2. Click on Advanced system settings

        3. Click Environment Variables button

        4. Click New button in System variables fieldset and add ANT_HOME variable with value being the path to your ant/bin folder

  3. Install Android SDK:

    1. Go to the Android SDK download page

    2. Click on “Other Download Options” link

    3. Download the distribution for your system

    4. Run installer with default options

    5. In a command prompt, run the command: android
      This will start the Android SDK Manager tool, which allows you to download all the packages that you need

    6. Check all Tools, Extras, and APIs 14 and higher, then click Install Packages button

    7. Configure Environment Variables in system

        1. Go to Control Panel -> System and Security -> System

        2. Click on Advanced system settings

        3. Click Environment Variables button

        4. Click New button in System variables fieldset and add ANDROID_HOME variable with value being the path to your sdk folder

  4. Install NodeJS:

    1. Go to the NodeJS web-site.

    2. Click on the “Install” link, and it will automatically start downloading the installer for your operating system

    3. Run installer with default options

  5. Install Cordova and Ionic:

    1. Open NodeJS command prompt

    2. Run the following command: npm install -g cordova ionic
      This will install Cordova and Ionic globally in your system, so you can access their commands from command prompt.

 

Congratulations - you have set up your environment to build hybrid mobile applications for Android with Ionic Framework! Check back in tomorrow and we’ll continue this process. Or contact us if you have questions about this process or anything else; we'd like to help.
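In the meantime, you can verify the toolchain with a quick throwaway project (a sketch; "myApp" and the "tabs" starter template are arbitrary choices):

# Create a new Ionic project from the "tabs" starter template.
ionic start myApp tabs
cd myApp

# Add the Android platform and build the app.
ionic platform add android
ionic build android

# Run it in an emulator (or on a connected device: ionic run android).
ionic emulate android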

Best Practices, Drupal Planet, Learning Series, Technology Tools. Post tags: Apps, Ionic, Drupal
Categories: Elsewhere

Drupal Watchdog: Write a Migrate Process Plugin, Learn Drupal 8

Planet Drupal - Tue, 16/06/2015 - 19:24

A few of us were coaching Campbell Vertesi on porting the CSV source to Drupal 8 and he asked as an aside about mapping US states he had in a taxonomy vocabulary to taxonomy IDs during a migration. Glad you asked! The answer gives us an example for quite a few concepts in Drupal 8, so let’s dig in! We will go over the code line by line.

Plugins

This particular class is a plugin. Plugins are normal objects in a predefined directory with a little metadata. For example, field widgets and formatters are plugins: they get a field and they return a form or a render array. We can change the formatter freely; only the type and meaning of the inputs and the output is fixed. Another good example is image effects. Migrate uses plugins for everything: sources, processing, destinations. See more.

Namespaces, PSR-4

Line 8 contains a namespace declaration: the first part is Drupal, then the module name migrate_plus, then the rest. Typically a plugin namespace continues with a Plugin part, then the name of the defining module (migrate), and finally the type of the plugin (process) if the defining module has several. Not every plugin type requires such a long namespace; for example, entities simply use Entity after the module name: Drupal\taxonomy\Entity. Drupal 8 will look for classes of the migrate_plus module under modules/migrate_plus/src (and all the other usual places for modules), and the rest of the path is the same as the namespace -- this is specified by the PSR-4 standard, so this class lives in the directory modules/migrate_plus/src/Plugin/migrate/process (sneak preview: a few lines later we will find the class name is TermReference, and so the filename is TermReference.php).

Use Statements

Lines 10-16 contain use statements. use some\namespace\class allows us to just write class in later code, and the Drupal coding standards require this. It really is just syntactic sugar; you can even use non-existing classes. As an aside, many of us have found the PhpStorm IDE very convenient for Drupal 8 development: for example, it takes care of the file placement and naming from the previous section and adds these use statements automatically for you.

Annotations

Lines 21-23 contain an annotation. Annotations are a very useful feature in sane languages (like Python), so much so that the PHP community has implemented them in user space… several times. As such, Drupal 8 uses the annotation syntax of Doctrine on classes and PHPUnit annotations on tests. The Doctrine annotations are pretty close to a PHP array, except {} is used instead of array(). We can see a very simple example here: this is using the MigrateProcessPlugin annotation, and the plugin definition is array('id' => 'term_reference'). Every plugin must have an id at least. In previous versions of Drupal you would've used a hook_migrate_process_info returning an array keyed by the same id and some data. Although the info hooks are gone, the alter hooks are still here: for example, migrate_process_info_alter is a valid hook (although at this moment undocumented, as its utility is severely limited). Other similar hooks, however, are much more useful, for example hook_entity_info_alter.

MigrateProcessPlugin itself is a class in the Drupal\migrate\Annotation namespace and it’s useful to know this because this class is the nexus of information about process plugins.

Classes, Base Classes and Interfaces

Line 25 contains the class name, a base class and an interface. One of the fundamental building blocks of Drupal 8 is interfaces. Interfaces provide a contract by which classes that implement them agree to provide certain functionality, so that they can be used the same way as any other class implementing the interface. In other words, every class will have certain methods which take a certain kind of input and provide a certain kind of output. They are absolutely fundamental to plugins, since any code interacting with a plugin will only know about the methods the interface requires and nothing about the plugin's internal details. Because of this, plugin types can require their plugins to implement a specific interface, and Drupal will throw an exception if they don't.

Base classes are not a language feature, but they are typical of Drupal 8: these classes contain some useful common logic for implementing an interface. Extending them instead of implementing an interface directly is very strongly recommended (although not mandatory at all). Some interfaces do not have a base class, for example ContainerFactoryPluginInterface.

Services, Injection

We will skip the constructor for now and talk about the create method starting on line 40, which is required for implementing ContainerFactoryPluginInterface; then we will cover the constructor briefly.

Previous versions of Drupal were often strongly coupled: hardwired function calls were the norm. In Drupal 8 a lot of functionality is provided by so-called services. There is a service for all sorts of things: working with entities, logging information, installing modules, etc. The container itself is an object, and its most used method (by far) is get, as visible on line 46. You can find the services provided by core here. Because the container provides so many things, it is not good practice to pass and store the container in an object. By doing so, a class becomes harder to understand (and to test), as it can basically depend on anything. Instead, only the static create method gets the container; it passes the necessary services to the constructor, and the class itself now has clean dependencies.

By far the most commonly used service is the entity manager: the getDefinition method gives us the entity type object, the equivalent of entity_get_info in Drupal 7. The getStorage method gives us the storage object, which in turn can query and load entities of a particular type. (Then the entity objects can save themselves.) If we are not coding a nice little plugin, the entity manager can also be accessed via \Drupal::entityManager(). The Drupal class has methods for the most common functionality. Most of these methods just wrap a $container->get() call, so this list is also useful as a list of services. See more on services.

So the create method grabs the taxonomy term storage object and passes it to the constructor. The constructor in turn will call the base class constructor, which initializes the common plugin properties; our constructor then initializes our own properties: most importantly, the term storage is now available to every method in the class.

Entity Query

We have a getTermId helper method, not required by any interface -- it cannot be, as interfaces have public methods only. This method queries the term storage for the terms in the specified vocabulary. This perhaps looks familiar -- almost like a database query in Drupal 7. This, however, is for entities only, and the condition method is extremely powerful; for example, to find nodes posted by users who joined in the last hour: condition('uid.entity.created', REQUEST_TIME - 3600, '>'). Also, using SQL queries directly was already discouraged in Drupal 7, but in Drupal 8 it's safe to assume that accessing the database directly is just doing it wrong.

The entity query returns a list of entity ids, and then we load those terms. The next interesting tidbit is $term->name->value; this is one of the ways to access a field value in D8, but it's mostly just for demo purposes: using the proper method, $term->label(), is strongly preferred. This $entity->fieldname->propertyname chain can continue: we can write $node->uid->entity->created->value to get the created time for the node author.

The entity query condition closely mirrors this syntax: change the arrows to dots, optionally drop the main property (in this case value), and you will get the previously mentioned condition('uid.entity.created', ...) to query the same. The Entity API is a really powerful feature of Drupal 8.
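As a side-by-side sketch of that mirroring ($node and $query stand for a loaded node and an entity query, respectively):

// Chained field access on a loaded entity:
$created = $node->uid->entity->created->value;

// The mirrored entity query condition: arrows become dots,
// and the main property ("value") can be dropped.
$query->condition('uid.entity.created', REQUEST_TIME - 3600, '>');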

Process Plugins

Finally we arrive at the transform method, which is the only method required of a process plugin. Migrate works by reading a row from a source plugin, running each property through a pipeline of process plugins, and then handing the resulting row to a destination plugin. Each process plugin gets the current value and returns a value. Core provides quite a number of these; a list can be found here. Most process plugins are really small: the average among the core process plugins is a mere 58 LoC (lines of code), and there is only one above 100 LoC: the migration process plugin, which is used to look up previously migrated identifiers, and even that is only 196 LoC.

In our case the actual functionality is just one line of code after all this setup. Of course this doesn’t include error handling etc.

So there you have it: in order to be able to run this single line of code, we needed to put a file in the right directory, containing the right namespace and classname, implement the right interfaces, get a service from the container and run an entity query.
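To tie the walkthrough together, here is a condensed sketch of how such a plugin might look (reconstructed from the description above, not the exact migrate_plus source; the 'vocabulary' configuration key is an assumption, and error handling is omitted):

namespace Drupal\migrate_plus\Plugin\migrate\process;

use Drupal\Core\Plugin\ContainerFactoryPluginInterface;
use Drupal\migrate\MigrateExecutableInterface;
use Drupal\migrate\ProcessPluginBase;
use Drupal\migrate\Row;
use Symfony\Component\DependencyInjection\ContainerInterface;

/**
 * @MigrateProcessPlugin(
 *   id = "term_reference"
 * )
 */
class TermReference extends ProcessPluginBase implements ContainerFactoryPluginInterface {

  /**
   * The taxonomy term storage.
   */
  protected $termStorage;

  public function __construct(array $configuration, $plugin_id, $plugin_definition, $term_storage) {
    parent::__construct($configuration, $plugin_id, $plugin_definition);
    $this->termStorage = $term_storage;
  }

  public static function create(ContainerInterface $container, array $configuration, $plugin_id, $plugin_definition) {
    // Only create() sees the container; the constructor gets clean dependencies.
    return new static(
      $configuration,
      $plugin_id,
      $plugin_definition,
      $container->get('entity.manager')->getStorage('taxonomy_term')
    );
  }

  /**
   * Finds the term ID for a term name in the configured vocabulary.
   */
  protected function getTermId($value) {
    // 'vocabulary' is assumed to come from the plugin configuration.
    $tids = $this->termStorage->getQuery()
      ->condition('vid', $this->configuration['vocabulary'])
      ->execute();
    foreach ($this->termStorage->loadMultiple($tids) as $term) {
      if ($term->label() === $value) {
        return $term->id();
      }
    }
  }

  public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property) {
    // The single line of actual functionality.
    return $this->getTermId($value);
  }

}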

Categories: Elsewhere

Julien Danjou: Timezones and Python

Planet Debian - Tue, 16/06/2015 - 18:15

Recently, I’ve been fighting with the never ending issue of timezones. I never thought I would have plunged into this rabbit hole, but hacking on OpenStack and Gnocchi, I fell into that trap easily, thanks to Python.

“Why you really, really, should never ever deal with timezones”

To get a glimpse of the complexity of timezones, I recommend that you watch Tom Scott's video on the subject. It's fun and it summarizes remarkably well the nightmare that timezones are and why you should stop thinking that you're smart.

The importance of timezones in applications

Once you've heard what Tom says, I think it gets pretty clear that a timestamp without any timezone attached does not give any useful information. It should be considered irrelevant and useless. Without the necessary context given by the timezone, you cannot infer what point in time your application is really referring to.

That means your application should never handle timestamps with no timezone information. It should try to guess, or raise an error, if no timezone is provided in any input.

Of course, you can infer that having no timezone information means UTC. This sounds very handy, but can also be dangerous in certain applications or languages – such as Python, as we'll see.

Indeed, in certain applications, converting timestamps to UTC and losing the timezone information is a terrible idea. Imagine that a user creates a recurring event every Wednesday at 10:00 in their local timezone, say CET. If you convert that to UTC, the event will end up being stored as every Wednesday at 09:00.

Now imagine that the CET timezone switches from UTC+01:00 to UTC+02:00: your application will compute that the event starts at 11:00 CET every Wednesday. Which is wrong, because as the user told you, the event starts at 10:00 CET, whatever the definition of CET is. Not at 11:00 CET. So CET means CET, not necessarily UTC+1.
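The same mechanics show up with daylight saving time: pytz demonstrates that the same local wall-clock time maps to different UTC offsets depending on the date (an illustrative sketch; the dates are arbitrary):

>>> import datetime
>>> import pytz
>>> paris = pytz.timezone('Europe/Paris')
>>> paris.localize(datetime.datetime(2015, 1, 7, 10, 0)).utcoffset()
datetime.timedelta(0, 3600)
>>> paris.localize(datetime.datetime(2015, 7, 1, 10, 0)).utcoffset()
datetime.timedelta(0, 7200)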

As for endpoints like REST API, a thing I daily deal with, all timestamps should include a timezone information. It's nearly impossible to know what timezone the timestamps are in otherwise: UTC? Server local? User local? No way to know.

Python design & defect

Python comes with a timestamp object named datetime.datetime. It can store date and time precise to the microsecond, and is qualified as timezone "aware" or "unaware", depending on whether it embeds timezone information or not.

To build such an object based on the current time, one can use datetime.datetime.utcnow() to retrieve the date and time for the UTC timezone, and datetime.datetime.now() to retrieve the date and time for the current timezone, whatever it is.

>>> import datetime
>>> datetime.datetime.utcnow()
datetime.datetime(2015, 6, 15, 13, 24, 48, 27631)
>>> datetime.datetime.now()
datetime.datetime(2015, 6, 15, 15, 24, 52, 276161)


As you can notice, none of these results contains timezone information. Indeed, the Python datetime API always returns unaware datetime objects, which is very unfortunate. As soon as you get one of these objects, there is no way to know what the timezone is; therefore these objects are pretty "useless" on their own.

Armin Ronacher proposes that an application always treat the unaware datetime objects from Python as UTC. As we just saw, that statement cannot be considered true for objects returned by datetime.datetime.now(), so I would not advise doing so. datetime objects with no timezone should be considered a "bug" in the application.

Recommendations

My recommendation list comes down to:

  1. Always use aware datetime object, i.e. with timezone information. That makes sure you can compare them directly (aware and unaware datetime objects are not comparable) and will return them correctly to users. Leverage pytz to have timezone objects.
  2. Use ISO 8601 as input and output string format. Use datetime.datetime.isoformat() to return timestamps as string formatted using that format, which includes the timezone information.

In Python, that's equivalent to having:

>>> import datetime
>>> import pytz
>>> def utcnow():
...     return datetime.datetime.now(tz=pytz.utc)
>>> utcnow()
datetime.datetime(2015, 6, 15, 14, 45, 19, 182703, tzinfo=<UTC>)
>>> utcnow().isoformat()
'2015-06-15T14:45:21.982600+00:00'


If you need to parse strings containing ISO 8601 formatted timestamps, you can rely on the iso8601 module, which returns timestamps with correct timezone information. This makes timestamps directly comparable:

>>> import iso8601
>>> iso8601.parse_date(utcnow().isoformat())
datetime.datetime(2015, 6, 15, 14, 46, 43, 945813, tzinfo=<FixedOffset '+00:00' datetime.timedelta(0)>)
>>> iso8601.parse_date(utcnow().isoformat()) < utcnow()
True


If you need to store those timestamps, the same rule should apply. If you rely on MongoDB, it assumes that all the timestamps are in UTC, so be careful when storing them – you will have to normalize the timestamps to UTC.

For MySQL, nothing is assumed, it's up to the application to insert them in a timezone that makes sense to it. Obviously, if you have multiple applications accessing the same database with different data sources, this can end up being a nightmare.

PostgreSQL has a recommended special data type called timestamp with time zone, which can store the associated timezone and do all the computation for you. That's obviously the recommended way to store them. That does not mean you should not use UTC in most cases; it just means you can be sure that the timestamps are stored in UTC, since that's written in the database, and you can check whether any other application inserted timestamps with a different timezone.

OpenStack status

As a side note, I’ve improved the OpenStack situation recently by changing the oslo.utils.timeutils module to deprecate some useless and dangerous functions. I’ve also added support for returning timezone-aware objects when using the oslo_utils.timeutils.utcnow() function. Unfortunately it’s not possible to make that the default, for backward compatibility reasons, but it’s there nevertheless, and it’s advised to use it. Thanks to my colleague Victor for the help!

Have a nice day, whatever your timezone is!

Categories: Elsewhere

DrupalCon News: Planning for Friends and Family at DrupalCon

Planet Drupal - Tue, 16/06/2015 - 17:59

As part of our extended Drupal family, many Drupalistas bring their spouse, significant other, friend or children along to DrupalCon. As we know, the Con is always jam-packed with sessions, BoFs and sprints that keep us busy; Barcelona will be no different. After the Drupalers have drained our brains at the convention center, we jaunt off to group dinners, sponsor parties or the coder lounge to continue getting our Drupal on.

Categories: Elsewhere

Simon Josefsson: SSH Host Certificates with YubiKey NEO

Planet Debian - Tue, 16/06/2015 - 14:05

If you manage a bunch of server machines, you will undoubtedly have run into the following OpenSSH question:

The authenticity of host 'host.example.org (1.2.3.4)' can't be established.
RSA key fingerprint is 1b:9b:b8:5e:74:b1:31:19:35:48:48:ba:7d:d0:01:f5.
Are you sure you want to continue connecting (yes/no)?

If the server is a single-user machine, where you are the only person expected to login on it, answering “yes” once and then using the ~/.ssh/known_hosts file to record the key fingerprint will (sort-of) work and protect you against future man-in-the-middle attacks. I say sort-of, since if you want to access the server from multiple machines, you will need to sync the known_hosts file somehow. And once your organization grows larger, and you aren’t the only person that needs to login, having a policy that everyone just answers “yes” on first connection on all their machines is bad. The risk that someone is able to successfully MITM attack you grows every time someone types “yes” to these prompts.

Setting up one (or more) SSH Certificate Authority (CA) to create SSH Host Certificates, and have your users trust this CA, will allow you and your users to automatically trust the fingerprint of the host through the indirection of the SSH Host CA. I was surprised (but probably shouldn’t have been) to find that deploying this is straightforward. Even setting this up with hardware-backed keys, stored on a YubiKey NEO, is easy. Below I will explain how to set this up for a hypothetical organization where two persons (sysadmins) are responsible for installing and configuring machines.

I’m going to assume that you already have a couple of hosts up and running and that they run the OpenSSH daemon, so they have a /etc/ssh/ssh_host_rsa_key* public/private keypair, and that you have one YubiKey NEO with the PIV applet and that the NEO is in CCID mode. I don’t believe it matters, but I’m running a combination of Debian and Ubuntu machines. The Yubico PIV tool is used to configure the YubiKey NEO, and I will be using OpenSC‘s PKCS#11 library to connect OpenSSH with the YubiKey NEO. Let’s install some tools:

apt-get install yubikey-personalization yubico-piv-tool opensc-pkcs11 pcscd

Every person responsible for signing SSH Host Certificates in your organization needs a YubiKey NEO. For my example, there will only be two persons, but the number could be larger. Each one of them will have to go through the following process.

The first step is to prepare the NEO. First mode switch it to CCID using some device configuration tool, like yubikey-personalization.

ykpersonalize -m1

Then prepare the PIV applet in the YubiKey NEO. This is covered by the YubiKey NEO PIV Introduction but I’ll reproduce the commands below. Do this on a disconnected machine, saving all files generated on one or more secure media and store that in a safe.

user=simon
key=`dd if=/dev/random bs=1 count=24 2>/dev/null | hexdump -v -e '/1 "%02X"'`
echo $key > ssh-$user-key.txt
pin=`dd if=/dev/random bs=1 count=6 2>/dev/null | hexdump -v -e '/1 "%u"'|cut -c1-6`
echo $pin > ssh-$user-pin.txt
puk=`dd if=/dev/random bs=1 count=6 2>/dev/null | hexdump -v -e '/1 "%u"'|cut -c1-8`
echo $puk > ssh-$user-puk.txt
yubico-piv-tool -a set-mgm-key -n $key
yubico-piv-tool -k $key -a change-pin -P 123456 -N $pin
yubico-piv-tool -k $key -a change-puk -P 12345678 -N $puk

Then generate a RSA private key for the SSH Host CA, and generate a dummy X.509 certificate for that key. The only use for the X.509 certificate is to make PIV/PKCS#11 happy — they want to be able to extract the public-key from the smartcard, and do that through the X.509 certificate.

openssl genrsa -out ssh-$user-ca-key.pem 2048
openssl req -new -x509 -batch -key ssh-$user-ca-key.pem -out ssh-$user-ca-crt.pem

You import the key and certificate to the PIV applet as follows:

yubico-piv-tool -k $key -a import-key -s 9c < ssh-$user-ca-key.pem
yubico-piv-tool -k $key -a import-certificate -s 9c < ssh-$user-ca-crt.pem

You now have a SSH Host CA ready to go! The first thing you want to do is to extract the public-key for the CA, and you use OpenSSH's ssh-keygen for this, specifying OpenSC's PKCS#11 module.

ssh-keygen -D /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so -e > ssh-$user-ca-key.pub

If you happen to use YubiKey NEO with OpenPGP using gpg-agent/scdaemon, you may get the following error message:

no slots
cannot read public key from pkcs11

The reason is that scdaemon exclusively locks the smartcard, so no other application can access it. You need to kill scdaemon, which can be done as follows:

gpg-connect-agent "SCD KILLSCD" "SCD BYE" /bye

The output from ssh-keygen may look like this:

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCp+gbwBHova/OnWMj99A6HbeMAGE7eP3S9lKm4/fk86Qd9bzzNNz2TKHM7V1IMEj0GxeiagDC9FMVIcbg5OaSDkuT0wGzLAJWgY2Fn3AksgA6cjA3fYQCKw0Kq4/ySFX+Zb+A8zhJgCkMWT0ZB0ZEWi4zFbG4D/q6IvCAZBtdRKkj8nJtT5l3D3TGPXCWa2A2pptGVDgs+0FYbHX0ynD0KfB4PmtR4fVQyGJjJ0MbF7fXFzQVcWiBtui8WR/Np9tvYLUJHkAXY/FjLOZf9ye0jLgP1yE10+ihe7BCxkM79GU9BsyRgRt3oArawUuU6tLgkaMN8kZPKAdq0wxNauFtH

Now all your users in your organization needs to add a line to their ~/.ssh/known_hosts as follows:

@cert-authority *.example.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCp+gbwBHova/OnWMj99A6HbeMAGE7eP3S9lKm4/fk86Qd9bzzNNz2TKHM7V1IMEj0GxeiagDC9FMVIcbg5OaSDkuT0wGzLAJWgY2Fn3AksgA6cjA3fYQCKw0Kq4/ySFX+Zb+A8zhJgCkMWT0ZB0ZEWi4zFbG4D/q6IvCAZBtdRKkj8nJtT5l3D3TGPXCWa2A2pptGVDgs+0FYbHX0ynD0KfB4PmtR4fVQyGJjJ0MbF7fXFzQVcWiBtui8WR/Np9tvYLUJHkAXY/FjLOZf9ye0jLgP1yE10+ihe7BCxkM79GU9BsyRgRt3oArawUuU6tLgkaMN8kZPKAdq0wxNauFtH

Each sysadmin needs to go through this process, and each user needs to add one line for each sysadmin. While you could put the same key/certificate on multiple YubiKey NEOs, to allow users to only have to put one line into their file, dealing with revocation becomes a bit more complicated if you do that. If you have multiple CA keys in use at the same time, you can roll over to new CA keys without disturbing production. Users may also have different policies for different machines, so that not all sysadmins have the power to create host keys for all machines in your organization.

The CA setup is now complete; however, it isn't doing anything on its own. We need to sign some host keys using the CA, and to configure the hosts' sshd to use them. What you could do is something like this, for every host host.example.com that you want to create keys for:

h=host.example.com
scp root@$h:/etc/ssh/ssh_host_rsa_key.pub .
gpg-connect-agent "SCD KILLSCD" "SCD BYE" /bye
ssh-keygen -D /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so -s ssh-$user-ca-key.pub -I $h -h -n $h -V +52w ssh_host_rsa_key.pub
scp ssh_host_rsa_key-cert.pub root@$h:/etc/ssh/

The ssh-keygen command will use OpenSC's PKCS#11 library to talk to the PIV applet on the NEO, and it will prompt you for the PIN. Enter the PIN that you set above. The output of the command would be something like this:

Enter PIN for 'PIV_II (PIV Card Holder pin)':
Signed host key ssh_host_rsa_key-cert.pub: id "host.example.com" serial 0 for host.example.com valid from 2015-06-16T13:39:00 to 2016-06-14T13:40:58

The host now has a SSH Host Certificate installed. To use it, you must make sure that /etc/ssh/sshd_config has the following line:

HostCertificate /etc/ssh/ssh_host_rsa_key-cert.pub

You need to restart sshd to apply the configuration change. If you now try to connect to the host, you will likely still use the known_hosts fingerprint approach. So remove the fingerprint from your machine:

ssh-keygen -R $h

Now if you attempt to ssh to the host, and using the -v parameter to ssh, you will see the following:

debug1: Server host key: RSA-CERT 1b:9b:b8:5e:74:b1:31:19:35:48:48:ba:7d:d0:01:f5
debug1: Host 'host.example.com' is known and matches the RSA-CERT host certificate.

Success!

One aspect that may warrant further discussion is the host keys. Here I only created host certificates for the hosts' RSA key. You could create host certificate for the DSA, ECDSA and Ed25519 keys as well. The reason I did not do that was that in this organization, we all used GnuPG's gpg-agent/scdaemon with YubiKey NEO's OpenPGP Card Applet with RSA keys for user authentication. So only the host RSA key is relevant.

Revocation of a YubiKey NEO key is implemented by asking users to drop the corresponding line for one of the sysadmins, and regenerate the host certificate for the hosts that the sysadmin had created host certificates for. This is one reason users should have at least two CAs for your organization that they trust for signing host certificates, so they can migrate away from one of them to the other without interrupting operations.

Categories: Elsewhere

ThinkShout: A Tale of Two Devsigners

Planet Drupal - Tue, 16/06/2015 - 14:00

It’s June, which means Devsigner is just around the corner so, naturally, we’ve got design on the brain. What’s Devsigner? Well, I’m glad you asked. Devsigner is a conference held here in the Pacific Northwest geared towards front end developers and development-minded designers. Sessions focus on the relationship between design and web development, bridging the gap that separates the design from the code. The math looks like this: developer + designer = devsigner.

ThinkShout’s own devsigners Josh Riggs (User Experience Lead) and Eric Paxton (Front End Engineer), will be speaking at this conference at the end of the month. I sat down with Josh and Eric to learn a little bit more about their design process, and how we work with our nonprofit clients to ensure that their sites don’t just work, but that they also deliver a fantastic user experience.

You two make up the dynamic design duo here at ThinkShout. What do your respective roles entail? How do you leverage your different skill sets?

Josh: My role as the UX lead right now is handling all aspects of user experience and visual design. I’m responsible for interpreting site maps and requirements, plus things like client/user needs and creating a user interface out of that. That starts with wireframing and ends with a visual design layer.

Eric: My role as Front End Engineer is very much in the implementation phase. Though I do advise in the discovery and budgeting phase, just so we can be sure that we can actually implement what the client wants. It’s nice because in the past, before joining the ThinkShout team, I’d done the whole gamut. From the requirements gathering phase to wireframing, and then the implementation. Here at ThinkShout, I’ve found my sweet spot. I do occasional wireframing, but I get to focus on lots of implementation. I also implement Josh’s designs. I write a lot of Javascript and Sass, basically.

Josh: Eric is like the alchemist. He takes the metals - the designs from me - and turns them into websites. There is actually a large spectrum in between where my responsibilities stop and Eric’s begin. We still talk about things like, how do we go from an idea being on a screen, to that idea being a functioning website? We’re constantly thinking about how to best utilize our respective skillsets, always reevaluating our process to improve upon it.

What’s a recent project that you’ve really enjoyed working on?

Eric: The SPLC (Southern Poverty Law Center) microsite. I thought that was very well done. Josh did a lot of the front end work on that and I came in and did the site optimization, which is what I’ll be talking about at Devsigner. I thought that went really smoothly because at that time, all the work he’d done in the browser went directly to implementation. We were able to take exactly what he’d designed and just build off of it.

Can you talk a little bit about what the design process for the SPLC microsite was like, Josh?

Josh: We happened to be working on that right around the same time as I was doing wireframes for the upcoming SPLC main site that we’re redesigning. We were already doing a lot of thinking about their content and what their needs were. Because the Selma: Bridge to the Ballot movie was coming out on the anniversary of the Selma March, we wanted to have this ready to go in time for that day. There was no way we were going to launch the whole SPLC site along with it - we were too early in development for that - so we decided to split that project up and give them a campaign microsite that would be easy to build while we continued to work on their main site.

A lot of that meant working with their team to define their content needs. I began with basic wireframes in Sketch, and uploaded them into Invision to give them interactivity. As SPLC came up with more fidelity to what their needs were, we solidified the visual designs. Luckily, they already had a lot of assets that their really great internal design team had created for the movie, so I was able to go off of that style. I took their visual style and applied it to the wireframes and at that point, I went to Eric for a consultation and said, "Ok, if we’re going to build this in Jekyll, what’s the best way to do this as far as the architecture goes?" Eric was a huge help in regards to file structure. He wrote a great rake script to automate all the Jekyll, Sass, and Javascript components. That’s when I jumped in and rebuilt what I’d done in Sketch, and added more fidelity with HTML and Sass. I then passed it on to Eric so he could do his unicorn magic.

Eric: And that’s a nice part about where our skills overlap: we can get closer to what we want. He’s a better designer than I am. My strengths lie in the code. I’ve designed when I had to, but it’s not my forte, so it’s nice to have Josh’s expertise. So these skill sets complement each other. I feel comfortable handing over my implementation to design and saying, "Hey, can you polish the nav? Or the design?" Things like that.

What design trends do you want to see more of? Or less of?

Eric: I think flat design is getting boring. I’m starting to see a little bit more texture in the things we’ve done. Like patterns, not just flat design for the sake of flat design. There’s texture strategically used to make things look better. For instance, in the Capital Area Food Bank of Texas site, there’s a bit of a pattern in the footer. It’s not just a flat blue background with text. I really like patterns that are used to call out different sections of a design. It adds to it and brings something out of the page. It used to just be that admin interfaces were this flat. But now everything reflects that. Lots of rectangles. I personally like shapes and textures and patterns.

Josh: It’s tricky to know when to add life to what’s a very flat trend right now. I come from the old school world of web design, which was about how cool can you make your shadows look in Photoshop, how three-dimensional can you make things appear. Now that’s kind of like wearing skinny jeans in the late nineties, when you wouldn’t be caught dead wearing them. Or neon colors. So I think what’s happening is that it’s not just that flat design is popular. If you look at other design mediums, like automotive or architecture, there’s a phase with extreme ornate elements. You know, crazy fins, details, lights, every car had a custom badge. All that stuff. And then you have the modern era after that where everything gets streamlined and simplified. It’s more about the function over the form, and the function drives the form. You see the opposite in the Victorian era. Go walk along the St. Johns bridge and look up at a lamp. You’ll see these ornate, twisted little embellishments along the lamps. But the purpose of a lamp is to provide light. Those embellishments do nothing to support the function. They’re just there to make it look pretty.

I think we’re seeing a lot of that in digital design as it matures. We’re getting rid of the stuff that doesn’t support the function and focusing more on the intent of the users. While we’re taking that ornate-ness out of it, we’re also adding a lot more micro-interactions and animations. Things that actually help you do what you’re there to do. At first, I was kind of against that. But now that I think about it as post-modern design for the web, it makes more sense to me.

How do you advise nonprofits on this? Do these same trends benefit nonprofits as much as they do for-profits?

Eric: I think knowing your end user is what determines your path. A lot of nonprofits have similar goals as for-profits when it comes to their websites - they’re trying to tell a story and engage their users. But the main thing is, do the organizational goals reflect what the user is coming there for? For instance, we work with the LA Conservancy. They work to preserve historical buildings in LA. We didn’t just look at them, and then try to make their website look like a pretty building. But we also had this discussion in LA about form versus function. But I wonder, where does that meet in the middle? That’s what I struggle with. Because I do think there’s value in ornate elements like that. They set a mood. So I think that’s part of function - that ornateness sets the mood you want to present to your users to help them feel the connection to the organization’s cause.

Josh: With nearly every major design phase - whether it be automotive, architecture, art, whatever - there’s always a backlash to the current trends. So there will be a backlash to flat web design. It may stay a subculture, or it may take over. But whenever something gets to be ubiquitous, there’s always someone who wants to do something totally different. It’ll be interesting to see what that is.

I feel like that’s the nature of creativity… We see something, we make it part of our process, plus a spark of something new.

Eric: We all have things we’re influenced by. To me, Google stands out. They’ve really led the trends that people are using. There’s a level of depth to their designs that makes me feel like I can reach out and grab it. It’s flat in some ways, but yeah, there’s definitely some depth.

Josh: Yeah, I think Google’s done a really great job. And you can see this happening in the app world, too, where the current flat trend is also becoming ubiquitous.

Devsigner is at the end of the month and you both are leading your own sessions. Can you tell us a bit about them?

Eric: My session is called "Optimization is User Experience." I think this is something everybody can use, which is why it’s listed as a beginner talk. We learn web design, we learn app design, and then we release these things into a world where we have no control over devices or users’ bandwidth, so it’s important to know that this beautiful thing you’ve created can be experienced correctly regardless of what device it’s viewed on.

Josh: So my session is based on something I’ve noticed. I’ve worked on a lot of projects where there’s limited time, budget, or resources. Maybe there isn’t any budget for stock photography, or there’s just a really small team working on it. I’ve always had to find ways to be creative with what I have and with a small budget. I signed up to speak at Refresh Portland, and I figured this might be a shared struggle and that other people could learn from my experience: how to stay under budget and still come up with a great, workable design. It’s called "Ballin’ on a Budget."

Want to dig deeper into design with Josh and Eric and pick their brains? Come to Devsigner, which takes place June 27-28 at the Pacific Northwest College of Art in Portland, Oregon. Check out the full session schedule on the Devsigner site. You can also follow Josh and Eric on Twitter at @joshriggs and @epxtn.

Categories: Elsewhere

InternetDevels: Best Drupal Video Player Modules

Planet Drupal - Tue, 16/06/2015 - 13:47

Greetings to all who want to add video integration to their Drupal website! Drupal module development never stops, offering us a large number of modules for working with video. I have hunted through a huge number of Drupal video modules for you.

To begin with, you need to decide where you want to store your video, how you want to display it, etc.

Let's discuss the pros and cons of each method. Here we go!

Read more
Categories: Elsewhere

KnackForge: Mitigating Apache Internal Dummy Connection issue

Planet Drupal - Tue, 16/06/2015 - 06:00

This is one of the issues that bothered us recently in a project. I'm summarizing the list of causes and possible ways to fix or mitigate it. So what is Apache's Internal Dummy Connection all about? The official wiki page explains it well. See the snippet below:

When the Apache HTTP Server manages its child processes, it needs a way to wake up processes that are listening for new connections. To do this, it sends a simple HTTP request back to itself. This request will appear in the access_log file with the remote address set to the loop-back interface (typically 127.0.0.1 or ::1 if IPv6 is configured). If you log the User-Agent string (as in the combined log format), you will see the server signature followed by "(internal dummy connection)" on non-SSL servers. During certain periods you may see up to one such request for each httpd child process.

#1: VirtualHost

As mentioned, Apache makes a call to itself. If your default VirtualHost is configured to serve a dynamic, database-driven site like Drupal, every dummy request will add to resource utilization. Changing the default VirtualHost to serve a static index.html makes the dummy HTTP request faster and less resource-intensive. If you have directory listings, symbolic links, and/or AllowOverride turned on, it is also suggested to disable them.
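A minimal sketch of such a catch-all VirtualHost (the paths and ServerName here are examples, not taken from the original post):

  # Hypothetical default VirtualHost that serves only a static index.html
  <VirtualHost *:80>
      ServerName localhost
      DocumentRoot /var/www/default
      <Directory /var/www/default>
          Options None
          AllowOverride None
          # On Apache 2.4 you may also need: Require all granted
      </Directory>
  </VirtualHost>

Because Apache routes a request to the first VirtualHost defined when no ServerName matches, keeping this one deliberately static ensures the dummy requests never bootstrap Drupal.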

#2: .htaccess Rewrite Rule

If the default VirtualHost can't be changed for some reason, you can use a mod_rewrite rule to prevent the dummy requests from ever reaching Drupal.
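One commonly used sketch matches the User-Agent string and denies the request early (placing it in .htaccess ahead of Drupal's own rewrite rules is an assumption, not from the original post):

  # Short-circuit Apache's internal dummy connections with a fast 403
  # instead of letting them bootstrap Drupal
  RewriteCond %{HTTP_USER_AGENT} ^.*internal\ dummy\ connection.*$ [NC]
  RewriteRule .* - [F,L]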

Categories: Elsewhere

Mediacurrent: Contrib Committee Status Review for May, 2015

Planet Drupal - Mon, 15/06/2015 - 22:18

As with most other Drupal development studios, our May was dominated by DrupalCon. For the first week we were doing final preparations - making sure everything was ready for our booth, adding the final polish to our presentations, and packing for the trip. Needless to say, it was an excellent week from many perspectives, and we look forward to DrupalCon being in New Orleans next year.

Categories: Elsewhere

Lunar: Reproducible builds: week 7 in Stretch cycle

Planet Debian - Mon, 15/06/2015 - 19:33

What happened about the reproducible builds effort for this week:

Presentations

On June 7th, Reiner Herrmann presented the project at the Gulaschprogrammiernacht 15 in Karlsruhe, Germany. Video and audio recordings in German are available, and so are the slides in English.

Toolchain fixes
  • Joachim Breitner uploaded ghc/7.8.4-9 which uses a hash of the command line instead of the pid when calculating a “random” directory name.
  • Lunar uploaded mozilla-devscripts/0.42 which now properly sets the timezone. Patch by Reiner Herrmann.
  • Dmitry Shachnev uploaded python-qt4/4.11.4+dfsg-1 which now outputs the list of imported modules in a stable order. The issue has been fixed upstream. Original patch by Reiner Herrmann.
  • Norbert Preining uploaded tex-common/6.00 which tries to ensure reproducible builds in files generated by dh_installtex.
  • Barry Warsaw uploaded wheel/0.24.0-2 which makes the output deterministic. Barry has submitted the fixes upstream based on patches by Reiner Herrmann.

Daniel Kahn Gillmor's report on help2man started a discussion with Brendan O'Dea and Ximin Luo about standardizing a common environment variable that would provide a replacement for an embedded build date. After various proposals and research by Ximin about date handling in several programming languages, the best solution seems to define SOURCE_DATE_EPOCH with a value suitable for gmtime(3).
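As a rough illustration (a hypothetical build script, not something from the discussion itself), a tool honouring such a variable might do:

  # Use SOURCE_DATE_EPOCH when set, otherwise fall back to the current time;
  # GNU date interprets the @-prefixed epoch value with gmtime(3) semantics
  BUILD_DATE=$(date -u -d "@${SOURCE_DATE_EPOCH:-$(date +%s)}" '+%Y-%m-%d %H:%M:%S')
  echo "Built on ${BUILD_DATE} UTC"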

  • Martin Borgert wondered if Sphinx could be changed in a way that would avoid having to tweak debian/rules in packages using it to produce HTML documentation.

Daniel Kahn Gillmor opened a new report about icont producing unreproducible binaries.

Packages fixed

The following 32 packages became reproducible due to changes in their build dependencies: agda, alex, c2hs, clutter-1.0, colorediffs-extension, cpphs, darcs-monitor, dispmua, haskell-curl, haskell-glfw, haskell-glib, haskell-gluraw, haskell-glut, haskell-gnutls, haskell-gsasl, haskell-hfuse, haskell-hledger-interest, haskell-hslua, haskell-hsqml, haskell-hssyck, haskell-libxml-sax, haskell-openglraw, haskell-readline, haskell-terminfo, haskell-x11, jarjar-maven-plugin, kxml2, libcgi-struct-xs-perl, libobject-id-perl, maven-docck-plugin, parboiled, pegdown.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues but not all of them:

Patches submitted which did not make their way to the archive yet:

reproducible.debian.net

A new variation has been introduced to better detect when a package captures its build environment. (h01ger)

The test on Debian packages works by building the package twice in a short time frame. But sometimes a mirror push can happen between the first and the second build, resulting in a package built in a different build environment. This situation is now properly detected and automatically triggers a third build. (h01ger)

OpenWrt, the distribution specialized in embedded devices like small routers, is now being tested for reproducibility. The situation looks very good for their packages, which seem mostly affected by timestamps in the tarballs. System images will require more work on debbindiff to be better understood. (h01ger)

debbindiff development

Reiner Herrmann added support for decompiling Java .class files and for .ipk package files (used by OpenWrt). This is now available in version 22, released on 2015-06-14.

Documentation update

Stephen Kitt documented the new --insert-timestamp option, available since binutils-mingw-w64 version 6.2, which inserts a ready-made date in PE binaries built with mingw-w64.

Package reviews

195 obsolete reviews have been removed, 65 added and 126 updated this week.

New identified issues:

Misc.

Holger Levsen reported an issue with the locales-all package that Provides: locales but is actually missing some of the files provided by locales.

Coreboot upstream has been quick to react after the announcement of the tests set up the week before. Patrick Georgi fixed all issues in a couple of days, and all Coreboot images are now reproducible (without a payload). SeaBIOS is one of the most frequently used payloads on PC hardware and can now be made reproducible too.

Paul Kocialkowski wrote to the mailing list asking for help on getting U-Boot tested for reproducibility.

Lunar had a chat with maintainers of Open Build Service to better understand the difference between their system and what we are doing for Debian.

Categories: Elsewhere

Drupal Watchdog: Caffeinated Drupal

Planet Drupal - Mon, 15/06/2015 - 18:42
Column

Once upon a time, I drank coffee purely to wake myself up in the morning or to stay awake during a late night coding marathon. Eventually, I gained an appreciation for the different flavors, smells, and textures to be found in different coffees and brewing methods. That appreciation has grown into a pursuit of the perfect cup of coffee which, while it may never be achieved, provides me with a fun hobby as well as an endless supply of amazing coffee.

Performance tuning a website is another of those endless pursuits wherein you may never actually reach a happy ending.

Is there such a thing as a perfectly performing website? The answer to that question is much like the perfect cup of coffee: perfection lies in the eye of the beholder. While we may not ever be able to achieve a perfectly performing website, we can certainly define goals for what would be considered a well performing site. And by precisely measuring aspects of the site’s performance, we can know if our adjustments are moving us in the right direction or not.

Of course, when defining performance goals, like any project, it’s best to begin at the beginning. In this case there’s no better place to start than a nice cup of Kenya Peaberry, brewed in a manual pour-over to bring out the amazing citrus fruitiness (with a touch of spice). Mmmm, if that’s not nirvana, it sure is close! Now we can jump right in.

Defining Performance Goals

As I mentioned, we need to define goals in order to know where we’re going with the performance tuning, otherwise we’re likely to get people working on random performance improvements that may or may not meet our business requirements. The more specific the goals, the better. Here are a few ideas to get us going:

  • The front page must load in under X seconds.
  • The site must support at least Y concurrent users.
  • Popular entry points to the site must load in under Z seconds.

The important point here is to create an authoritative list which gets everyone on the same page and helps them understand exactly what they're working towards. Even if you are a team of one, this is still a great way to define an endpoint for your (current) performance work.

Categories: Elsewhere

Microserve: Setting Up Drupal Bootstrap

Planet Drupal - Mon, 15/06/2015 - 18:02

For those looking for a reliable, responsive front-end framework to base their website/Drupal theme upon, Twitter Bootstrap can be hard to beat. Luckily there is an existing, contributed theme available to take out the hard work of integrating Bootstrap and Drupal... Well, nearly all the hard work.

This step-by-step tutorial hopes to serve as an extension to the existing documentation for Drupal Bootstrap, and strives to fill in a few blanks and signpost the odd 'gotcha' that can potentially leave the novice banging their head against their monitor. It assumes you already have a decent grasp of the Drupal folder structure and a knowledge of the LESS CSS preprocessor.

Drupal Bootstrap Theme

Download the latest version of the Bootstrap Drupal Theme.
https://www.drupal.org/project/bootstrap

Unzip the contents into the sites/all/themes/ folder of your drupal site.

Copy the folder 'bootstrap_subtheme' and place the copy in the root of your regular sites/all/themes/ folder. (You should now have two separate theme folders, 'Bootstrap' and 'bootstrap_subtheme', at the same level in your theme folder structure.)

Before anything else, rename the 'subtheme' copy to reflect the project you are working on. (For the purposes of this tutorial, we'll name ours 'mytheme'.)

Bootstrap Editable Source Files

Bootstrap Drupal Theme provides the core framework to use bootstrap within Drupal, but we still need to include the latest working distribution of the editable bootstrap source files themselves.

In the future this should be possible using drush, but for now there are two methods for including these files: either via a link to the CDN, which is convenient but does not give us full editability of the LESS files, or by downloading the files into our theme to be used locally. Further info: https://www.drupal.org/node/1978010

We want to choose the second method...

  1. Download the latest distribution of Bootstrap from: http://getbootstrap.com/getting-started/#download (choose the second, 'SOURCE CODE' version).
  2. Download it to the root of your new sub_theme (mytheme).
  3. Unzip it and rename the unzipped folder 'bootstrap'. (Yep, this is where it can seem confusing: you will now have a new folder called 'bootstrap' inside your new Bootstrap sub_theme.)
  4. Inside your new subtheme, edit the .info file. On the first line, change the 'name =' value to match your new theme name ('mytheme' in this instance).
  5. Now we need to tell the theme which method to use for including the Bootstrap distribution. Towards the bottom of the .info file, uncomment all lines under the heading 'METHOD 1: Bootstrap Source Files' (yes, all those JS files) - see the sketch after this list.
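A sketch of the resulting .info lines (the exact layout and plugin list are assumptions based on the 7.x-3.x subtheme starterkit):

  name = mytheme
  ; ...
  ; METHOD 1: Bootstrap Source Files
  scripts[] = 'bootstrap/js/affix.js'
  scripts[] = 'bootstrap/js/alert.js'
  ; ...one uncommented line per Bootstrap JS plugin...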
LESS Preprocessor Method

Although you can run a (very restrictive) installation of Bootstrap using standard CSS, it's unlikely you'll want to pass up access to the wealth of in-built variables and mixins available in the core LESS files, so now we need to choose which method of LESS compilation we want to use.

If you wish to install and use a local LESS compiler, you can leave the .info file set to use /css/style.css and then set your preprocessor to compile all LESS files to this file.

*I recommend, however, using the Drupal LESS module to let Drupal do the compiling for you in the browser. For this method, change the 'stylesheets' entry in .info to point directly to /less/style.less, as sketched below.
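A sketch of that entry (assuming Drupal 7's .info syntax and the subtheme's default file layout):

  stylesheets[all][] = less/style.less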

For this method to work, you will need to download and install the drupal Less module here:
https://www.drupal.org/project/less

Secondly download the Preprocessor library (lessphp) from:
http://lessphp.gpeasy.com/#integration-with-other-projects

Download it to /sites/all/libraries/, unzip it, and rename the folder to 'lessphp'.

Enable the LESS module (if you haven't already) and go to /admin/config/development/less in the Drupal admin menu.

Choose 'less.php' as your LESS engine and turn on 'Developer Mode'. (This will ensure LESS files are recompiled on each page load.) *Make sure this is turned OFF before the site goes live.

Turn On The Theme

If you haven't already, enable your sub_theme and make it the default theme.

Disable the main Bootstrap theme (it doesn't need to be enabled for the subtheme to work.)

Clear your drupal cache and you should be good to go.

JQuery Update

For Bootstrap to run properly, you will need jQuery installed and running at version 1.7 or later. Make sure you have the jQuery Update module installed and set to 1.7 or above. (I've run Bootstrap on 1.10 without problems.)

You can change the version on the JQuery Update config page, or specifically for the theme, you can switch the version on your bootstrap sub_theme's theme settings page. 

*If you have selected a version of jQuery 1.7 or above and you're still getting Drupal errors complaining that Bootstrap requires this version, you can choose to 'Suppress jQuery version error message' under Advanced on the sub_theme settings page.

Missing Variable Errors?

Sometimes the Drupal Bootstrap theme can fall out of sync with the latest Bootstrap version.

If after enabling the subtheme you get lots of red errors about missing variables, do the following:

Inside your subtheme:

Make a COPY of the latest variables.less from the distribution files (mytheme/bootstrap/less/variables.less) and use it to REPLACE the version in your theme's custom files (mytheme/less/variables.less).

This should stop Bootstrap looking for out-of-date variables.
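A one-line sketch of that copy, assuming the subtheme lives at sites/all/themes/mytheme:

  cp sites/all/themes/mytheme/bootstrap/less/variables.less sites/all/themes/mytheme/less/variables.less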

Page Templates

While you 'could' copy the page.tpl.php and html.tpl.php templates from Drupal core and set about adding all the necessary Bootstrap classes and regions to them, it makes much more sense to start off by making copies of the versions supplied inside the main Bootstrap parent theme, where most of the groundwork has been done for you.

You can find templates in bootstrap/theme/ (where 'bootstrap' is the main parent theme installed from drupal.org); the page and html templates are inside the 'system' subfolder.

Copy the templates you need to your sub_theme's templates folder. (Create one if there isn't one already.)

Bootstrap LESS Files

In your sub_theme, you will initially have the following LESS files:

  • bootstrap.less
    Never edit this. Its only purpose is to import all of bootstrap's core LESS files - the integral part of the framework.
  • overrides.less
    You will sometimes want to edit some values in this file. It mainly contains Drupal-specific resets and 'overrides'.
  • variables.less
    This is where you can change the values of default bootstrap variables to set site-wide typography, form styles, grid styles, branding, etc. VERY USEFUL.
  • style.less
    This is initially empty other than a few import declarations. Like a normal style.css or .less file, this is where you will put the bulk of your project-specific custom LESS code.
  • header.less, content.less, footer.less
    I don't personally tend to find any use for these region-specific files. They can safely be deleted if you don't intend to use them. If you do delete them, also make sure to delete their import declarations from the top of 'style.less'.
Custom Variables

You could create a new LESS file for your own custom variables, but I find a lot of my custom variables are additions to existing bootstrap variable structures (for instance, there is already a @brand-primary color value in variables.less, and I nearly always add a @brand-secondary color), so it makes sense to include them in the same file and flow. I add my variables to the existing file, making one consolidated, semantic file - for example, see the sketch below.
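A hypothetical addition to variables.less (the colour values are placeholders, not from the original post):

  // Existing Bootstrap variable, re-set for the project
  @brand-primary:   #005596;
  // Custom addition, following the same naming convention
  @brand-secondary: #f39c12;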

Custom Mixins

Mixins are a little different. You can just include them in style.less, or at the bottom of the existing overrides.less file. (You can include them anywhere, really, but as you will often want to use variables within your mixins, it's advisable to import them after all variables have been imported from bootstrap and your own custom files.) I think the neatest way is to create a new custom LESS file and keep all the custom mixins separate. For instance, on my current project, I've created a 'custom-mixins.less' file and imported it into style.less straight AFTER the existing imports, like so:

// Bootstrap library.
@import 'bootstrap.less';

// Base-theme overrides.
@import 'overrides.less';

// Theme specific.
@import 'custom-mixins.less';

Wait!? Where was variables.less in those import declarations? 

Well, one thing to be careful of is that you don't want to import the same file into more than one other LESS file directly; that would in essence mean the entire file is imported twice. Because variables.less has already been imported into overrides.less, its content is inherited by importing overrides.less into the file above.
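As an illustration, a hypothetical mixin that could live in custom-mixins.less (the name and values are invented; @brand-primary and @border-radius-base are standard Bootstrap 3 variables):

  // Project mixin; it uses Bootstrap variables, so it must be
  // imported after variables.less (which arrives via overrides.less)
  .branded-panel(@padding: 15px) {
    padding: @padding;
    border: 1px solid @brand-primary;
    border-radius: @border-radius-base;
  }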

[Diagram: the bootstrap .less inheritance flow described above.]

In Conclusion

Hopefully these tips will be of use and help you navigate the initially daunting landscape of getting Drupal and Bootstrap to play nicely together.

This guide is based on the workflow I have personally found most logical and efficient, but if you have other methods or further tips to 'share with the class', feel free to leave a comment below.

My closing 'top tip' for developing with Bootstrap, Drupal or otherwise, is to always have the Bootstrap site open in a tab for easy reference to its existing grid structure, variables, mixins, JS plugins and info.

Martin White
Categories: Elsewhere

Acquia: Build Your Drupal 8 Team: The Forrester Digital Maturity Model

Planet Drupal - Mon, 15/06/2015 - 16:46

In business, technology is a means to an end, and using it effectively to achieve that end requires planning and strategy.

The Capability Maturity Model, designed for assessing the formality of a software development process, was initially described back in 1989. The Forrester Digital Maturity Model is one of several models that update the CMM for modern software development in the age of e-commerce and mobile development, when digital capability isn't an add-on but rather is fundamental to business success. The model emphasizes communicating strategy while putting management and control processes into place.

Organizations that are further along within the maturity model are more likely to repeatedly achieve successful completion of their projects.

Let's take a look at the stages of this model, as the final post in our Build Your Drupal 8 Team series.

Here are the five stages:

Stage 1 is ad hoc development. When companies begin e-commerce development, there is no defined strategy, and the companies' products are not integrated with other systems. Most products are released in isolation and managed independently.

Stage 2 organizations follow a defined process model. The company is still reactive and managing projects individually, but the desired digital strategy has been identified.

Stage 3 is when the digital strategy and implementation is managed. An overall environment supportive for web and e-commerce development exists, and products are created within the context of that environment.

In Stage 4, the digital business needs are integrated. Products aren't defined in isolation, but rather are part of an overall strategic approach to online business. The company has a process for planning and developing the products and is focused on both deployment and ongoing support.

The final capability level, Stage 5, is when digital development is optimized. Cross-channel products are developed and do more than integrate: they are optimized for performance. The company is able to focus on optimizing the development team as well, with continuous improvement and agile development providing a competitive advantage.

Understanding where your company currently finds itself on the maturity scale can help you plan how you will integrate and adapt the new functionality of Drupal 8 into your development organization.

If you are an ad hoc development shop, adopting Drupal 8 and achieving its benefits may be very challenging for you. You may need to work with your team to move up at least one maturity level before you try to bring in the new technology.

In contrast, if your team is at stage 5, you can work on understanding how Drupal 8 will benefit not just your specific upcoming project, but also everything else that is going on within your organization.

Resources:

  • A comprehensive SlideShare presentation on Digital Maturity Models.
  • A blog post by Forrester's Martin Gill that mentions the Digital Maturity Model in the context of digital acceleration.
Tags:  acquia drupal planet
Categories: Elsewhere
