Feed aggregator

A Dev From The Plains: Upgrading Drupal’s Viewport module to Drupal 8 (Part 3 – Final)

Planet Drupal - Sun, 16/10/2016 - 11:44

Image taken from https://drupalize.me

With parts one and two of the series covered, this 3rd (and final!) part will cover some other important aspects of Drupal development that I wanted to pay attention to while working on the Drupal 8 version of the viewport module.

Essentially, these are the aspects that are easy to forget (or ignore on purpose) when a developer comes up with a working version, or something “good enough” to be released, and thinks there’s no need to write tests for the existing codebase (we’ve all done it at some point).

This post will be a mix of links to documentation and resources that were either useful or new to me, and tips about some of the utilities and classes available when writing unit or functional tests.


Since I wrote both functional tests and unit tests for the viewport module, this section is split into two parts, for the sake of clarity and structure.

- Functional tests

Before getting into the bits and bobs of functional test classes, I decided to read a couple of articles on the matter, just to see how much testing had changed in the last year(s) in Drupal 8. This article on the existing types of Drupal 8 tests was a good overview, as well as Acquia’s lesson on unit and functional tests.

Other than that, there were also a few change notices I went through in order to understand what was being done with the testing framework and why. TL;DR: Drupal is moving away from Simpletest to PHPUnit in order to modernise the test infrastructure. That started by adding the PHPUnit test framework to Drupal core. New classes were then added that leverage PHPUnit instead of the existing Simpletest-based classes. Specifically, a new KernelTestBase was added, as well as a BrowserTestBase class, replacing the well-known WebTestBase.

I decided to base all my tests on the PHPUnit classes exclusively, knowing that Simpletest will just die at some point. One of the key requirements for this was to put all test classes in a different path: {module-folder}/tests/src/{test-type}/{test-classes}. Also, the @group annotation is still required, as with Simpletest, so that the tests of a specific group or module can be executed alone, without running the whole test suite.
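As a rough illustration of that layout, here is a minimal sketch of a functional test class (the class and test names are hypothetical; the namespace, path, base class and @group annotation follow the conventions just described):

namespace Drupal\Tests\viewport\Functional;

use Drupal\Tests\BrowserTestBase;

/**
 * A hypothetical test living in {module-folder}/tests/src/Functional/.
 *
 * @group viewport
 */
class ExampleTest extends BrowserTestBase {

  /**
   * Modules to enable for this test.
   *
   * @var array
   */
  public static $modules = ['viewport'];

  /**
   * Checks that the front page responds.
   */
  public function testFrontPage() {
    $this->drupalGet('<front>');
    $this->assertSession()->statusCodeEquals(200);
  }

}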

The first thing I noticed when I started to write the first Functional test class, ViewportPermissionsTest, was that the getInfo() method was no longer needed, since it’s been removed in favor of PHPDoc, which means all the info there is retrieved from the documentation block of the test class.

With PHPUnit introduced in Drupal 8, a lot of new features have been added to test classes through PHP annotations. Without getting into much detail, two that caught my attention when trying to debug a test were the @runTestsInSeparateProcesses and @preserveGlobalState annotations. Most of the time the defaults will work for you, although there might be cases where you may want to change them. Note that running tests in separate processes is enabled by default, but some performance issues have been reported on this feature for PHPUnit.
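For reference, both annotations live in the test class's docblock; a quick sketch (the class name is hypothetical, and the values shown are the ones you would typically reach for when debugging, not recommendations):

/**
 * @group viewport
 * @runTestsInSeparateProcesses
 * @preserveGlobalState disabled
 */
class SomeHardToDebugTest extends BrowserTestBase {
  // ...
}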

I also came across some issues about the lack of a configuration schema defined for my module when trying to execute tests in the ViewportTagPrintedOnSelectedPagesTest class, which led me to find another change notice (yeah… too many!) about tests enforcing strict configuration schema adherence by default. The change notice explains how to avoid that (if really necessary), but given that it's not a good practice, you should just add a configuration schema to all your modules, if applicable.

Some other scattered notes of things I noticed when writing functional tests:

  • drupalPostForm() $edit keys need to be the HTML name of the form field being tested, not the field name as defined in the Form API $form array. The same applies to drupalPost().
  • When working on a test class, $this->assertSession()->responseContains() should be used to check the HTML contents returned on a page. The change notice about the BrowserTestBase class points to $this->assertSession()->pageTextContains(), but that one is useful only for the actual contents displayed on a page, not the complete HTML returned to the browser. A short sketch illustrating both notes follows.
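Both notes in one hypothetical snippet; the form path, field names and messages below are made up for illustration:

// $edit keys use the rendered HTML name of the field,
// not the key from the Form API $form array.
$edit = [
  'settings[viewport_width]' => 'device-width',
];
$this->drupalPostForm('admin/config/viewport', $edit, t('Save configuration'));

// responseContains() checks the complete HTML returned to the browser.
$this->assertSession()->responseContains('<meta name="viewport"');

// pageTextContains() only checks the text actually displayed on the page.
$this->assertSession()->pageTextContains('The configuration options have been saved.');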
- Unit Tests

The main difference between unit tests and functional tests (speaking only of structure in the codebase) is that unit tests need to be placed under {module-name}/tests/src/Unit, and they need to extend the UnitTestCase class.
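Again as a sketch with hypothetical names, the skeleton differs from the functional one only in its path, namespace and base class:

namespace Drupal\Tests\viewport\Unit;

use Drupal\Tests\UnitTestCase;

/**
 * A hypothetical test living in {module-name}/tests/src/Unit/.
 *
 * @group viewport
 */
class ExampleUnitTest extends UnitTestCase {

  public function testSomething() {
    $this->assertTrue(TRUE);
  }

}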

Running PHPUnit tests just requires executing a simple command from the core directory of a Drupal project, specifying the testsuite or the group of tests to be executed, as shown below:

php ../vendor/bin/phpunit --group=viewport

php ../vendor/bin/phpunit --group=viewport --testsuite=unit

As mentioned in the link above, there has to be a properly configured phpunit.xml file within the core folder. Also, note that Drupal.org’s testbot runs tests in a different way. As detailed in the link above, tests can be executed locally in the same way the bot does, to ensure that the setup will allow a given module to receive automated testing support once contributed to Drupal.org.

PHPUnit matchers: In PHPUnit parlance, a matcher is ultimately an instance of an object that implements the PHPUnit_Framework_MockObject_Matcher_Invocation interface. In summary, it helps to define the result that would be expected of a method call in an object mock, depending on the arguments passed to the method and the number of times it’s been called. For example:

$this->pathMatcher->expects($this->any())
  ->method('getFrontPagePath')
  ->will($this->returnValue('/frontpage-path'));

That tells the pathMatcher mock (not to be confused with a PHPUnit matcher) to return the value “/frontpage-path” whenever the getFrontPagePath() method is called. However, this other snippet:

$this->pathMatcher->expects($this->exactly(3))
  ->method('getFrontPagePath')
  ->will($this->returnValue('/third-time'));

This tells the mock to return the value “/third-time”, and also asserts that the getFrontPagePath() method is called exactly 3 times.

will(): As seen in the examples above, the will() method can be used to tell the object mock to return different values on consecutive calls, or to get the return value processed by a callback, or fetch it from a values map.
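A sketch of those three variants, mocking Drupal's PathMatcherInterface inside a unit test method (the returned values and the matchPath() argument map are made up):

// Return a different value on each consecutive call.
$mock = $this->getMockBuilder('Drupal\Core\Path\PathMatcherInterface')->getMock();
$mock->expects($this->any())
  ->method('getFrontPagePath')
  ->will($this->onConsecutiveCalls('/first-path', '/second-path'));

// Compute the return value in a callback.
$mock = $this->getMockBuilder('Drupal\Core\Path\PathMatcherInterface')->getMock();
$mock->expects($this->any())
  ->method('getFrontPagePath')
  ->will($this->returnCallback(function () {
    return '/computed-path';
  }));

// Fetch the return value from a map of argument lists to results.
$mock = $this->getMockBuilder('Drupal\Core\Path\PathMatcherInterface')->getMock();
$mock->expects($this->any())
  ->method('matchPath')
  ->will($this->returnValueMap([
    ['/foo', "/foo\n/bar", TRUE],
    ['/baz', "/foo\n/bar", FALSE],
  ]));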

These concepts are explained in more detail in Chapter 9 of the PHPUnit manual.

Coding Standards and Code Sniffer

With the module fully working, and tests written for it, the final stage was to run some checks on the coding standards, and ensure everything is according to Drupal style guides. Code Sniffer makes this incredibly easy, and there’s an entire section dedicated to it in the Site Building Guide at Drupal.org, which details how to install it and run it from the command line.

Finally, and even though I didn’t bother changing the README.txt contents, I also noticed the existence of a README Template file available in the online documentation, handy when contributing new modules. With all the codebase tidied up and proper tests in place, the module was finally good to go.

That’s it! This ends the series of dev diaries about porting the Viewport module to D8. I hope you enjoyed it! There might be a similar blog post about the port process of the User Homepage module, so stay tuned if you’re interested in it.

Links and (re)sources

As usual, I’ll leave a list of the articles and other resources I read while working on the module upgrade.

  • Which D8 test is right for me?: link.
  • Acquia’s lesson on Unit and Functional tests: link.
  • Simpletest Class, File and Namespace structure (D8): link (note Simpletest wasn’t used in the end in the module port).
  • Converting D7 SimpleTests to D8: link.
  • Configuration Schema and Metadata: link.
  • Drupal SimpleTest coding standards: link.
  • PHPUnit in Drupal 8: link.
  • Agile Unit Testing: link.
  • Test Doubles in PHPUnit: link.
  • Drupal coding standards: link.
  • README Template file: link.
Change Records and topic discussions

Mirco Bauer: Debian 8 on Dell XPS 15

Planet Debian - Sun, 16/10/2016 - 05:46

It was time for a new work laptop so I got a Dell XPS 15 9550. I wasn't planning to write a blog post about how to install Debian 8 "Jessie" on the laptop, but since it wasn't just install-and-use, I will share what is needed to get the wifi and graphics card to work.

So first download the DVD-1 AMD64 image of Debian 8 from your favorite download mirror. The closest one for me is the Hong Kong mirror. You do not need to download the other DVDs, just the first one is sufficient. The netinstaller and CD images will not provide a good experience since they need a working network/internet connection. With the DVD image you can do a full default desktop install and most things will just work out-of-the-box.

Now you can do a regular install, no special procedure or anything will be needed. Depending on your desktop selection it will boot right into lovely GNOME3.

You will quickly notice that the wifi is not working out-of-the-box though. It is a Qualcomm Atheros QCA6174, and the Linux kernel version 3.16 shipped with Debian 8 does not support that wifi card. This card needs the ath10k_pci kernel module, which is included in a newer Linux kernel package from the Debian backports archive. If you don't have the Dell docking station (I don't), then there is no wired ethernet that you can use for a temporary Internet connection. So use a different computer with Internet access to download the following packages from the Debian backports archive manually and put them on a USB disk.

After that, connect the USB disk to the new Dell laptop and mount the disk using the GNOME3 file browser (nautilus). It will mount the USB disk to /media/$your_username/$volume_name. Become root using sudo or su. Then install all the downloaded packages from the USB disk like this:

cd /media/$your_username/$volume_name
dpkg -i linux-base_*.deb
dpkg -i linux-image-4.7.0-0.bpo.1-amd64_*.deb
dpkg -i firmware-atheros_*.deb
dpkg -i firmware-misc-nonfree_*.deb
dpkg -i xserver-xorg-video-intel_*.deb

That's it. If dpkg finished without error messages then you can reboot, and your wifi and graphics card should just work! After the reboot you can verify the wifi card is recognized by running "/sbin/iwconfig" and seeing if wlan0 shows up.

Have fun with your Dell XPS and Debian!

PS: if this does not work for you, leave a comment or write to meebey at meebey . net


Dirk Eddelbuettel: tint 0.0.3: Tint Is Not Tufte

Planet Debian - Sun, 16/10/2016 - 03:17

The tint package on CRAN, whose name stands for Tint Is Not Tufte, offers a fresh take on the excellent Tufte style for html and pdf presentations.

It marks a milestone for me: I finally have a repository with more "stars" than commits. Gotta keep an eye on the big prize...

Kidding aside, and as a little teaser, here is what the pdf variant looks like:

This release corrects one minor misfeature in the pdf variant. It also adds some spit and polish throughout, including a new NEWS.Rd file. We quote from it the entries for the current as well as previous releases:

Changes in tint version 0.0.3 (2016-10-15)
  • Correct pdf mode to not use italics in table of contents (#9 fixing #8); also added color support for links etc

  • Added (basic) Travis CI support (#10)

Changes in tint version 0.0.2 (2016-10-06)
  • In html mode, correct use of italics and bold

  • Html function renamed to tintHtml; added Roboto fonts (PRs #6 and #7)

  • Added pdf mode with new function tintPdf(); added appropriate resources (PR #5)

  • Updated resource files

Changes in tint version 0.0.1 (2016-09-24)
  • Initial (non-CRAN) release to ghrr drat

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the tint page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Thorsten Alteholz: DOPOM: libmatthew-java – Unix socket API and bindings for Java

Planet Debian - Sat, 15/10/2016 - 23:01

While looking at the “action needed” paragraph of one of my packages, I saw that a dependency was orphaned and needed a new maintainer. So I decided to restart DOPOM (Debian Orphaned Package Of the Month), which I started in 2012 with ent as the first package.

This month I adopted libmatthew-java. Sure it was not a big deal as the QA-team already did a good job and kept the package in shape. But now there is one burden lifted from their shoulders.

According to the Work-Needing and Prospective Packages page, 956 packages are orphaned at the moment. If every Debian contributor grabs one of them, we could unwind the QA-team (no, just kidding). So, similar to the NEW queue, which was down to 0 this year, can we get rid of the WNPP as well? At least for a short time?


Drupal core announcements: No Drupal core security release planned for Wednesday, October 19

Planet Drupal - Sat, 15/10/2016 - 05:11

The monthly security release window for Drupal 8 and 7 core will take place on Wednesday, October 19.

Unless there is an unexpected security emergency, however, this window will not be used and there will be no Drupal 8 or 7 core releases on that date.

Drupal 8 site owners should note that the Drupal 8.1.x branch is no longer supported; sites running on an 8.1.x release should therefore update to the Drupal 8.2.x branch as soon as possible in preparation for any future security releases. (The most recent release on that branch is 8.2.1, as of the date of this post.)

Similarly, Drupal 7 site owners should update to the 7.51 release (the most recent Drupal 7 release as of this post), which includes the larger changes from Drupal 7.50 as well.

Other upcoming core release windows include:

  • Wednesday, November 2 (patch release window)
  • Wednesday, November 16 (security release window)

Drupal 6 is end-of-life and will not receive further releases.

For more information on Drupal core release windows, see the documentation on release timing and security releases, as well as the Drupal core release cycle overview.


Daniel Silverstone: Gitano - Approaching Release - Access Control Changes

Planet Debian - Sat, 15/10/2016 - 05:11

As mentioned previously I am working toward getting Gitano into Stretch. A colleague and friend of mine (Richard Maw) did a large pile of work on Lace to support what we are calling sub-defines. These let us simplify Gitano's ACL files, particularly for individual projects.

In this posting, I'd like to cover what has changed with the access control support in Gitano, so if you've never used it then some of this may make little sense. Later on, I'll be looking at some better user documentation in conjunction with another friend of mine (Lars Wirzenius) who has promised to help produce a basic administration manual before Stretch is totally frozen.


With a more modern lace (version 1.3 or later) there is a mechanism we are calling 'sub-defines'. Previously if you wanted to write a ruleset which said something like "Allow Steve to read my repository" you needed:

define is_steve user exact steve
allow "Steve can read my repo" is_steve op_read

And, as you'd expect, if you also wanted to grant read access to Jeff then you'd need yet another set of defines:

define is_jeff user exact jeff
define is_steve user exact steve
define readers anyof is_jeff is_steve
allow "Steve and Jeff can read my repo" readers op_read

This, while flexible (and still entirely acceptable), is wordy for small rulesets, and so we added sub-defines to create this syntax:

allow "Steve and Jeff can read my repo" op_read [anyof [user exact jeff] [user exact steve]]

Of course, this is generally neater for simpler rules; if you wanted to add another user then it might make sense to go for:

define readers anyof [user exact jeff] [user exact steve] [user exact susan]
allow "My friends can read my repo" op_read readers

The nice thing about this sub-define syntax is that it's basically usable anywhere you'd use the name of a previously defined thing, they're compiled in much the same way, and Richard worked hard to get good error messages out from them just in case.

No more auto_user_XXX and auto_group_YYY

As a result of the above being implemented, the support Gitano previously grew for automatically defining users and groups has been removed. The approach we took was pretty inflexible and risked compilation errors if a user was deleted or renamed, and so the sub-define approach is much much better.

If you currently use auto_user_XXX or auto_group_YYY in your rulesets then your upgrade path isn't bumpless but it should be fairly simple:

  1. Upgrade your version of lace to 1.3
  2. Replace any auto_user_FOO with [user exact FOO] and similarly for any auto_group_BAR to [group exact BAR].
  3. You can now upgrade Gitano safely.
No more 'basic' matches

Since Gitano first gained support for ACLs using Lace, we had a mechanism called 'simple match' for basic inputs such as groups, usernames, repo names, ref names, etc. Simple matches looked like user FOO or group !BAR. The match syntax grew more and more arcane as we added Lua pattern support (refs ~^refs/heads/${user}/). When we wanted to add proper PCRE regex support we added a syntax of the form user pcre ^/.+?..., where pcre could be any of: exact, prefix, suffix, pattern, or pcre. We had a complex set of rules for exactly what the sigils at the start of the match string might mean in what order, and it was getting unwieldy.

To simplify matters, none of the "backward compatibility" remains in Gitano. You instead MUST use the what how with match form (for example, user exact steve). To make this slightly more natural to use, we have added a bunch of aliases: is for exact, starts and startswith for prefix, and ends and endswith for suffix. In addition, the kind of match can be prefixed with a ! to invert it, and for natural-looking rules not is an alias for !is (so user not steve also works).

This means that your rulesets MUST be updated to support the more explicit syntax before you update Gitano, or else nothing will compile. Fortunately this form has been supported for a long time, so you can do this in three steps.

  1. Update your gitano-admin.git global ruleset. For example, the old form of the defines used to contain define is_gitano_ref ref ~^refs/gitano/, which can trivially be replaced with: define is_gitano_ref ref prefix refs/gitano/
  2. Update any non-zero rulesets your projects might have.
  3. You can now safely update Gitano

If you want a reference for making those changes, you can look at the Gitano skeleton ruleset which can be found at https://git.gitano.org.uk/gitano.git/tree/skel/gitano-admin/rules/ or in /usr/share/gitano if Gitano is installed on your local system.

Next time, I'll likely talk about the deprecated commands which are no longer in Gitano, and how you'll need to adjust your automation to use the new commands.


DrupalEasy: DrupalEasy Podcast 187 - Breaking BADCamp (Anne Stefanyk, Heather Rodriguez, Jon Peck - BADCamp)

Planet Drupal - Sat, 15/10/2016 - 04:25

Direct .mp3 file download.

BADCamp organizers Anne Stefanyk, Heather Rodriguez, and Jon Peck join Mike Anello and Ryan Price to discuss the biggest and baddest (pun intended) non-DrupalCon event in all the land(s)! Recent Drupal-related news, picks of the week, and five questions are - as always - included free with this episode.

Show notes: Interview · DrupalEasy News · Three Stories · Sponsors · Picks of the Week · Upcoming Events · Follow us on Twitter

Five Questions (answers only)
  1. Plays synthesizer for a progressive rock band, Imager.
  2. Dokuwiki.
  3. "Bike in a basket" - build a motorcycle
  4. Llamas.
  5. WNY DrupalCamp.
Subscribe

Subscribe to our podcast on iTunes, Google Play or Miro. Listen to our podcast on Stitcher.

If you'd like to leave us a voicemail, call 321-396-2340. Please keep in mind that we might play your voicemail during one of our future podcasts. Feel free to call in with suggestions, rants, questions, or corrections. If you'd rather just send us an email, please use our contact page.


Dirk Eddelbuettel: anytime 0.0.3: Extension and fixes

Planet Debian - Sat, 15/10/2016 - 02:37

anytime arrived on CRAN with releases 0.0.1 and 0.0.2 about a month ago. anytime aims to convert anything in integer, numeric, character, factor, ordered, ... format to POSIXct (or Date) objects.

Release 0.0.3 brings a bugfix for Windows (where, for dates before the epoch of 1970-01-01, accessing the tm_isdst field for daylight savings would crash the session) and a small (unexported) extension to test format strings. This last feature plays well with the ability to add format strings, which we added in 0.0.2.

The NEWS file summarises the release:

Changes in anytime version 0.0.3 (2016-10-13)
  • Added (non-exported) helper function testFormat()

  • Do not access tm_isdst on Windows for dates before the epoch (pull request #13 fixing issue #12); added test as well

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Drupal Association News: Here’s to our Drupal Association Supporting Partners! – Q3

Planet Drupal - Fri, 14/10/2016 - 20:54

There are many inspiring companies that are giving their time to contribute code, help educate others, share their experiences, and be part of the Drupal community. Some of these companies have gone one step further, becoming Supporting Partners with the Drupal Association.

Supporting Partners fund Drupal.org and the engineering team who make the improvements you continuously see. Without our Supporting Partners, the work we do would not be possible.

Here is what is going on now.

Q3 2016 Roadmap


We would like to acknowledge the companies that continue to show their endless support. Here is a list of new and renewing companies just this quarter:

  1. Translations.com
  2. OpsGenie
  3. Kellton Tech Solutions Ltd.
  4. PSW Group GmbH & Co. KG
  5. ANNAI Inc
  6. Crew.co
  7. PreviousNext
  8. KWALL
  9. Code Koalas
  10. Digital Bridge Solutions
  11. Mediacurrent Interactive Solutions
  12. Trellon LLC
  13. XIO
  14. Live Person
  15. Pantheon
  16. Rochen Ltd.
  17. Bluehost, Inc.
  18. BlackMesh Inc.
  19. Arvixe, LLC
  20. Green Geeks
  21. JetBrains
  22. Platform.sh
  23. HighWire Press
  24. Wunder Group
  25. ADCI Solutions
  26. Inclind Inc.
  27. Appnovation Technologies
  28. GeekHive
  29. Deeson
  30. Isovera
  31. Forum One
  32. Palantir.net

Our Supporting Partners, Technology Supporters, and Hosting Supporters help us make Drupal.org great. They also become some of the best-known Drupal contributors. Read about the program benefits and if you want to give back to the Project, please contact our Executive Director, Megan Sanicki, for details.


Zivtech: 4 Common Developer Concerns in a Site Migration

Planet Drupal - Fri, 14/10/2016 - 18:43

Websites have a shelf life of about 5 years, give or take. Once a site gets stale, it’s time to update. You may be going from one CMS to another (e.g., WordPress to Drupal), or you may be moving from Drupal 6 to Drupal 8. Perhaps the legacy site was handcrafted, or it may have been built on Squarespace or Wix.

Content is the lifeblood of a site. A developer may be able to automate the migration, but in many cases, content migration from an older site may be a manual process. Indeed, the development of a custom tool to automate a migration can take weeks to create, and end up being far costlier than a manual effort.

Before setting out, determine if the process is best accomplished manually or automatically. Let’s look at the most common concerns for developers charged with migrating content from old to new.

1. It’s All About Data Quality

Old data might not be very structured, or even structured at all. A common bad scenario occurs when you try to take something that was handcrafted and unstructured and turn it into a structured system. Case in point would be an event system managed through HTML dumped into pages.

There's tabular data, there are dates, and there are sessions; these structured things represent times and days, and the presenters who took part. There could also be assets like video, audio, the slides from the presentation, and an accompanying paper.

What if all that data is in handcrafted HTML in one big blob with links? If the HTML was created using a template, you might be able to parse it and figure out which fields represent what, and you can synthesize structured data from it. If not, and it's all in a slightly different format that's almost impossible to synthesize, it just has to be done manually.

2. Secret Data Relationships

Another big concern is a system that doesn't expose how data is related.

You could be working on a system that seems to manage data in a reasonable way, but it's very hard to figure out what’s going on behind the scenes. Data may be broken into components, but then it does something confusing.

A previous developer may have used a system that's structured, but used a page builder tool that inserted a text blob in the top right corner and other content in the bottom left corner. In that scenario, you can't even fetch a single record that has all the information in it because it's split up, and those pieces might not semantically describe what they are.

3. Bad Architecture

Another top concern is a poorly architected database.

A site can be deceptive because it has structured data that describes itself. The system can find each piece of data as it is requested, but it is really hard to get the full list of elements and load all of the data in a coordinated way.

It's just a matter of your architecture. It’s important to have a clearly structured, normalized database with descriptively named columns. And you need consistency, with all the required fields actually in all the records.

4. Automated Vs. Manual Data Migration

Your migration needs to make some assumptions about what data it’s going to find and how it can use that to connect to other data.

Whether there are 6 or 600,000 records of 6 different varieties, it's the same amount of effort to automate a migration. So how do you know if you should be automating, or just cutting and pasting?

Use a benchmark. Migrate five pieces of content and time out how long that takes. Multiply by the number of pieces of content in the entire project to try to get a baseline of what it would take to do it manually. Then estimate the effort to migrate in an automated fashion. Then double it. Go with the number that’s lower.
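A back-of-the-envelope sketch of that heuristic; all the numbers below are made up for illustration:

<?php
// Time a manual migration of 5 sample pieces of content.
$minutes_for_five = 25;
$total_items = 800;
$manual_estimate = ($minutes_for_five / 5) * $total_items; // 4000 minutes.

// Estimate the automated migration effort, then double it.
$automated_estimate = 30 * 60 * 2; // 30 hours doubled: 3600 minutes.

// Go with the number that's lower.
echo $manual_estimate < $automated_estimate ? "Migrate manually\n" : "Automate it\n";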

One of the reasons to pick a system like Drupal is that the data is yours. It's an open platform. You can read the code and look at the database. You can easily extract all of the data and take it wherever you want.

If you’re with a hosted platform, that may not be the case. It's not in the hosted platform’s best interest to give you a really easy way to extract everything so you can migrate it somewhere else.

If you're not careful and you pick something because it seems like an easy choice now, you run the risk of getting locked in. That can be really painful because the only way to get everything out is to cut and paste. It’s still technically a migration. It's just not an automated one.


Acquia Developer Center Blog: Nothing About Us Without Us - Diversity of the Web with Nikki Stevens

Planet Drupal - Fri, 14/10/2016 - 18:42

Following her inspiring keynote on diversity, inclusion, and equality at Drupal Camp Costa Rica 2016, I got the chance to speak with Nikki Stevens in front of my mic and camera. Along with sharing some fun Drupal memories, like the time young Nikki broke production when Steve Rude was out for the day, a lot of this conversation is about the benefits diversity brings and how we all can improve our own organisations and communities.

One eye-opening moment for me was when Nikki talked about how many diversity surveys are themselves flawed by the assumptions of their authors. Seemed perfectly obvious to me once I got there. Nikki and Nick Lewis have been working on a better diversity survey by crowdsourcing the survey itself.

Help write the diversity survey!

Go to the Diversity of the Web community-drafted survey and make your own contributions and suggestions now!

For more background information on the survey and the challenges of involvement, identity, and measuring it all, read its origin story on Nikki's blog: Nothing About Us Without Us.

Interview Video - 19 min.


A full transcript of our conversation will appear here shortly!


DrupalEasy: Announcing Florida DrupalCamp’s Second Featured Speaker, Drupal Association Director Megan Sanicki!

Planet Drupal - Fri, 14/10/2016 - 18:37

The 9th annual Florida DrupalCamp (FLDC) will be held February 17, 18, and 19, 2017 in Orlando, Florida; DrupalEasy is proud to be a sponsor.

We’re super excited to announce Megan Sanicki as our second featured speaker. Megan is the Executive Director of the Drupal Association (DA), and will talk about all the things that the DA does to promote and serve Drupal.

Megan joins Helena Zubkow as a featured speaker for FLDC. Helena is an accessibility expert for Lullabot and will be leading a double-length session on how to make Drupal sites accessible. The third (and final) featured speaker will be announced in the coming weeks.

Bigger and better!

FLDC is expanding to 3 full days of activities in 2017. On Friday, February 17, we'll be offering several free training events (details coming soon) as well as sprint opportunities and a social event. Saturday, February 18 will be a day full of sessions followed by an afterparty. Sunday, February 19 will include a half-day of sessions and additional sprints.

Sponsor Florida Drupalcamp!

FLDC is accepting sponsorships. More information can be found at https://www.fldrupal.camp/sponsors/become-sponsor.

Session Submissions are Open!

Have new Drupal or web development knowledge that you’d like to share? We’d love to have you! Submit your session proposal at https://www.fldrupal.camp/sessions


lakshminp.com: Entity validation in Drupal 8 - part 1 - how validation works

Planet Drupal - Fri, 14/10/2016 - 18:10

Drupal 8 has its entity validation separate and decoupled from the typical validation provided by its Form API. This is done for a lot of reasons. For one, entities might get added through non-UI means, like the REST API, or programmatically, while importing data from an external source. Under these circumstances, the entity validation API comes in handy.

Drupal 8's validation API uses the Symfony validator component. Each validation mechanism can apply at the entity level (composite), the field level, or the entity property level. Validation can be specified by multiple means.

1. While creating the entity, as part of the annotation.

Ex: the Comment entity has a validation constraint which imposes a restriction where the name of the anonymous comment author cannot match the name of any registered user. This is implemented using CommentNameConstraint and specified in the Comment entity annotation.

 *   bundle_entity_type = "comment_type",
 *   field_ui_base_route = "entity.comment_type.edit_form",
 *   constraints = {
 *     "CommentName" = {}
 *   }
 * )
 */
class Comment extends ContentEntityBase implements CommentInterface {

2. Inside the entity class's baseFieldDefinitions().

Ex: The User entity has a constraint where each user name should be a unique value.

$fields['name'] = BaseFieldDefinition::create('string')
  ->setLabel(t('Name'))
  ->setDescription(t('The name of this user.'))
  ->setRequired(TRUE)
  ->setConstraints(array(
    // No Length constraint here because the UserName constraint also covers
    // that.
    'UserName' => array(),
    'UserNameUnique' => array(),
  ));

We will see what BaseFieldDefinition means in a future post. For now, all you have to understand is that the above code places a validation constraint requiring the name property of every user object to be unique.

3. Entity validation constraints can be placed on existing entities from other modules via hooks.

This implements hook_entity_type_alter.

function my_module_name_entity_type_alter(array &$entity_types) {
  $node = $entity_types['node'];
  $node->addConstraint('CustomPluginName', ['plugin', 'options']);
}

We shall be creating one such validation constraint on the node entity shortly.

A validation component consists of 2 parts.

The constraint contains the metadata/rules required for the validation, the messages to show as to what exactly got invalidated, and a pointer to the validation class, whose default value is a "Validator" string appended to the fully qualified constraint class name.

/**
 * Returns the name of the class that validates this constraint.
 *
 * By default, this is the fully qualified name of the constraint class
 * suffixed with "Validator". You can override this method to change that
 * behaviour.
 *
 * @return string
 */
public function validatedBy() {
  return get_class($this).'Validator';
}

The validation class contains the actual validation implementation. For example, a "unique name" constraint's validator will iterate through all entities in the database to ensure that the name of the entity being validated is not used by any other entity. The validator class also has access to the constraint class metadata, messages, etc. It should, at minimum, implement the validate method, which takes in the object to be validated (string, entity, etc.) and the associated constraint. When validation fails, a violation of type ConstraintViolationInterface is recorded. This gives all the information as to why the validation failed, where exactly it failed, the invalid value, etc.
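Putting the two parts together, here is a rough sketch of what the 'CustomPluginName' constraint from the earlier hook example might look like; the module name, message and validation rule are all made up, while the plugin structure and the "Validator" naming convention are the real requirements:

// In my_module/src/Plugin/Validation/Constraint/CustomPluginNameConstraint.php:
namespace Drupal\my_module\Plugin\Validation\Constraint;

use Symfony\Component\Validator\Constraint;

/**
 * @Constraint(
 *   id = "CustomPluginName",
 *   label = @Translation("Custom node constraint", context = "Validation"),
 *   type = "entity:node"
 * )
 */
class CustomPluginNameConstraint extends Constraint {
  public $message = 'The title %title is not allowed.';
}

// In my_module/src/Plugin/Validation/Constraint/CustomPluginNameConstraintValidator.php,
// found automatically through the validatedBy() convention shown above:
namespace Drupal\my_module\Plugin\Validation\Constraint;

use Symfony\Component\Validator\Constraint;
use Symfony\Component\Validator\ConstraintValidator;

class CustomPluginNameConstraintValidator extends ConstraintValidator {

  public function validate($node, Constraint $constraint) {
    // A made-up rule: reject one specific title.
    if ($node->getTitle() === 'Forbidden title') {
      $this->context->addViolation($constraint->message, ['%title' => $node->getTitle()]);
    }
  }

}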

Let's see how a node can be validated and the validation errors consumed with the below example.

use Drupal\node\Entity\Node;

$node = Node::create([
  'title' => 'New article',
  'type' => 'article',
]);
$node->field_email = 'foobar';
$violations = $node->validate();
if ($violations->count() > 0) {
  foreach ($violations as $violation) {
    print_r($violation->getMessage()->render());
    print("\n");
    print_r($violation->getPropertyPath());
    print("\n");
    print_r($violation->getInvalidValue());
    print("\n");
  }
}

Assuming you have an email field with the machine name field_email, if you run this code using the drush scr command in a Drupal 8 setup, your output should be very similar to this:

$ drush scr node-validate.php
This value is not a valid email address.
field_email.0.value
foobar

getPropertyPath() gives the field name and the delta where the violation occurs.

Now that we have the hang of how entity validation works, let's create our own validation constraint in the next post.


Michal Čihař: New free software projects on Hosted Weblate

Planet Debian - Fri, 14/10/2016 - 18:00

Hosted Weblate also provides free hosting for free software projects. I'm quite slow in processing the hosting requests, but when I do, I process them in a batch and add several projects at once.

This time, the newly hosted projects include:



Mike Gabriel: [Arctica Project] Release of nx-libs (version

Planet Debian - Fri, 14/10/2016 - 17:47

NX is a software suite which implements very efficient compression of the X11 protocol. This increases performance when using X applications over a network, especially a slow one.

NX (v3) has been originally developed by NoMachine and has been Free Software ever since. Since NoMachine obsoleted NX (v3) some time back in 2013/2014, the maintenance has been continued by a versatile group of developers. The work on NX (v3) is being continued under the project name "nx-libs".

Release Announcement

On Thursday, Oct 13th, a new version of nx-libs has been released [1].

This release brings a major backport of libNX_X11 to the status of libX11 1.3.4 (as provided by X.org). On top of that, all CVE fixes provided for libX11 by the Debian X11 Strike Force and the Debian LTS team got cherry-picked to libNX_X11, too. This big chunk of work has been performed by Ulrich Sibiller and there is more to come. We currently have a pull request pending review that backports more commits from libX11 (bumping the status of libNX_X11 to the state of libX11 1.6.4, which is the current HEAD on the X.org Git site).

Another big clean-up performed by Ulrich is the split-up of the XKB code which was symlinked between libNX_X11 and nx-X11/programs/Xserver. This brings in some code duplication but allows maintaining the nxagent Xserver code and the libNX_X11 code separately.

In the upstream ChangeLog you will find some more items around code clean-ups and .deb packaging, see the diff [2] on the ChangeLog file for details.

So for this release, a very special and massive thanks goes to Ulrich Sibiller!!! Well done!!!

Change Log

A list of recent changes (since the previous release) can be obtained from here.

Known Issues

This version of nx-libs is known to segfault when LDFLAGS / CFLAGS have the -pie / -fPIE hardening flags set. This issue is currently under investigation.

Binary Builds

You can obtain binary builds of nx-libs for Debian (jessie, stretch, unstable) and Ubuntu (trusty, xenial) via these apt-URLs:

Our package server's archive key is: 0x98DE3101 (fingerprint: 7A49 CD37 EBAE 2501 B9B4 F7EA A868 0F55 98DE 3101). Use this command to make APT trust our package server:

wget -qO - http://packages.arctica-project.org/archive.key | sudo apt-key add -

The nx-libs software project brings to you the binary packages nxproxy (client-side component) and nxagent (nx-X11 server, server-side component).

Ubuntu developers, please note: we have added nightly builds for Ubuntu latest to our build server. This has been Ubuntu 16.10 so far, but we will soon drop 16.10 support in nightly builds and add 17.04 support.


Antoine Beaupré: Managing good bug reports

Planet Debian - Fri, 14/10/2016 - 17:11

Bug reporting is an art form that is too often neglected in software projects. Bug reports allow contributors to participate without deep technical knowledge and at the same time provide a crucial space for developers to be made aware of issues with their software that they could not have foreseen or found themselves, for lack of resources, variety or imagination.

Prior art

Unfortunately, there are rarely good guidelines for submitting bug reports. Historically, people have pointed towards How to report bugs effectively or How to ask questions the smart way. While those guides can be useful for motivated people and may seem attractive references for project managers, they suffer from serious issues:

  • they are written by technical people, for non-technical people
  • as a result, they have a deeply condescending attitude such as calling people "stupid" or various animal names like "mongoose"
  • they are also very technical themselves: one starts with a copyright notice and a changelog, the other uses magic words like "Core dumps" and $Id$
  • they are too long: sgtatham's is about 3600 words long, esr's is even longer at about 11800 words. Those texts will take an average reader about 20 to 60 minutes to read, according to research

Individual projects have their own guides as well. Linux has the REPORTING_BUGS file, a much shorter 1200 words that can be read in under 5 minutes, provided that you can understand the topic at hand. Interestingly, that guide refers to both esr's and sgtatham's guidelines, which means, in the degenerate case where the user hasn't had the "privilege" of reading esr's prose already, they will have an extra hour and a half of reading to do to have honestly followed the guidelines before reporting the bug.

I often find good documentation in the Tails project. Their bug reporting guidelines are easily accessible and quick to read, although they still might be too technical. It could be argued that you need to get technical at some point to get that information out, of course.

In the Monkeysign project, I have started a bug reporting guide that doesn't yet address all those issues. I am considering writing a new guide, but I figured I would look at other people's work and get feedback before writing my own standard.

What's the point?

Why have those documents been written? Are people really expected to read them before seeking help? It seems to me unlikely that someone would:

  1. be motivated enough to do something about a broken part of their computer
  2. figure out they can do something about it
  3. read a fifteen-thousand-word novel about how to report a bug...
  4. just to finally write a 20-line bug report that has no warranty of support attached to it

And if I were a paying customer, I wouldn't want to be forced to waste my time reading that prose either: it's your job to help me fix your broken things, not the reverse. As someone doing consulting these days, I totally understand: it's not you, the user, it's us, the developers, that have a problem. We have been socialized through computers, and it makes us weird and obtuse, but that's no excuse, and we need to clean up our act.

Furthermore, it's surprising how often we get (and make!) bug reports that can be difficult to use. The Monkeysign project is very "technical" and I have expected that the bug reports I would get would be well written, with ways to reproduce and so on, but it happened that I received bug reports that were all over the place, didn't have any ways of reproducing or were simply incomplete. Those three bug reports were filed by people that I know to be very technically capable: one is a fellow Debian developer, the second had filed a good bug report 5 days before, and the third one is a contributor that sent good patches before.

In all three cases, they knew what they were doing. Those three people probably read the guidelines mentioned in the past. They may have even read the Monkeysign bug reporting guidelines as well. I can only explain those bug reports by the lack of time: people thought the issue was obvious, that it would get fixed rapidly because, obviously, something is broken.

We need a better way.

The takeaway

What are those guides trying to tell us?

  1. ask questions in the right place
  2. search for similar questions and issues before reporting the bug
  3. try to make the developers reproduce the issues
  4. failing that, try to describe the issue as well as you can
  5. write clearly, be specific and verbose yet concise

There are obviously contradictions in there, like sgtatham telling us to be verbose and esr tells us to, basically, not be verbose. There is definitely a tension in there, and there are many, many more details about how great bug reports can be if done properly.

I tend towards the side of terseness in our descriptions: people that know how to be concise will be, and people that don't will most likely not learn by reading a 12,000-word novel that, in itself, didn't manage to be parsimonious.

But I am willing to allow for verbosity in bug reports: I prefer too many details instead of missing a key bit of information.

Issue trackers

Step 1 is our job: we should send people in the right place, and give them the right tools. Monkeysign used to manage bugs with bugs-everywhere and this turned out to be a terrible idea: you had to understand git and bugs-everywhere to file any bug reports. As a result, there were exactly zero bug reports filed by non-developers during the whole time BE was used, although some bugs were filed in the Debian Bugtracker.

So have a good bug tracker. A mailing list or email address is not a good bug tracker: you lose track of old issues, and it's hard for newcomers to search the archives. It does have the advantage of having a unified interface for the support forum and bug tracking, however.

Redmine, Gitlab, Github and others are all decent-enough bug trackers. The key point is that the issue tracker should be publicly available, and users should be able to register easily to file new issues. You should also be able to mass-edit tickets and users should be able to discover the tracker's features easily. I am sorry to say that the Debian BTS somewhat falls short on those two features.

Step 2 is a shared responsibility: there should be an easy way to search for issues, and we should help the user looking for similar issues. Stackexchange sites do an excellent job at this, by automatically searching for similar questions while you write your question, suggesting similar ones in an attempt to weed out duplicates. Duplicates still happen, but they can then clearly be marked and linked with a distinct mechanism. Most bug trackers do not offer such high level functionality, but should, so I feel the fault lies more on "our" end than at the user's end.

Reproducing the environment

Step 3 and 4 are more or less the user's responsibility. We can detail in our documentation how to clearly share the environment where we reproduced the bug, for example, but in the end, the user decides if they want to share that information or not.

In Monkeysign, I have finally implemented joeyh's suggestion of shipping the test suite with the program. I can now tell people to run the test suite in their environment to see if this is a regression that is specific to their environment - so a known bug, in a way - or a novel bug for which I can look at writing a new unit test. I also include way more information about the environment in the --version output, an idea I brought forward in the borg project to ease debugging. That way, people can just send the output of monkeysign --test and monkeysign --version, and I have a very good overview of what is happening on their end. Of course, Monkeysign also supports the usual --verbose and --debug flag that users should enable when reproducing issues.

Another idea is to report bugs directly from the application. We have all seen Firefox or other software have automatic bug reporting tools, but somehow those seem unsatisfactory for a user: we have no feedback of where the report goes, if it's followed up on. It is useful for larger project to get statistical data, but not so useful for users in the short term.

Monkeysign tries to handle exceptions in the code in a graceful way, but could do better. We use a small library to handle exceptions, but that library has since then been improved to directly file bugs against the Github project. This assumes the user is logged into Github, but it is nice to pre-populate bug reports with the relevant information up front.

Issue templates

In the meantime, to make sure people provide enough information, I have now moved a lot of the bug reporting guidelines to a separate issue template. That issue template is available through the issue creation form now, although it is not enabled by default, a weird limitation of Gitlab. Issue templates are available in Gitlab and Github.

Issue templates somewhat force users into a straitjacket: there is already something to structure their bug report. Those could be distinct form elements that had to be filled in, but I like the flexibility of the template, and the possibility for users to just escape the formalism and plead for help in their own way.

Issue guidelines

In the end, I opted for a short few paragraphs in the style of the Tails documentation, including a reference to sgtatham, as an optional future reference:

  • Before you report a new bug, review the existing issues in the online issue tracker and the Debian BTS for Monkeysign to make sure the bug has not already been reported elsewhere.

  • The first aim of a bug report is to tell the developers exactly how to reproduce the failure, so try to reproduce the issue yourself and describe how you did that.

  • If that is not possible, try to describe what went wrong in detail. Write down the error messages, especially if they have numbers.

  • Take the necessary time to write clearly and precisely. Say what you mean, and make sure it cannot be misinterpreted.

  • Include the output of monkeysign --test, monkeysign --version and monkeysign --debug in your bug reports. See the issue template for more details about what to include in bug reports.

If you wish to read more about issues regarding communication in bug reports, you can read How to Report Bugs Effectively, which takes around 20 to 30 minutes.

Unfortunately, short of rewriting sgtatham's guide, I do not feel there is much more we can do as a general guide. I find esr's guide to be too verbose and commanding, so sgtatham it will be for now.

The prose and literacy

In the end, there is a fundamental issue with reporting bugs: it assumes our users are literate and capable of writing amazing prose that we will enjoy reading as the last J.K. Rowling novel (if you're into that kind of thing). It's just an unreasonable expectation: some of your users don't even speak the same language as you, let alone read or write it. This makes for challenging collaboration, to say the least. This is where automated reporting makes sense: it doesn't require user's intervention, and the communication is mediated by machines without human intervention and their pesky culture.

But we should, as maintainers, "be liberal in what we accept and conservative in what we send". Be tolerant, and help your users in fixing their issues. It's what you are there for, after all.

And in the end, we all fail the same way. In an attempt to improve the situation on bug reporting guides, I seem to have myself written a 2000-word short story that will have taken up a hopefully pleasant 10 minutes of your time at minimum. Hopefully I will have succeeded at being clear, specific, verbose and concise all at once, and I look forward to your feedback on how to improve our bug reporting culture.


Acquia Developer Center Blog: The White House is Open Sourcing a Drupal Chat Module for Facebook Messenger

Planet Drupal - Fri, 14/10/2016 - 17:02

Today, Jason Goldman, Chief Digital Officer of the White House, announced that the White House is open-sourcing a Drupal module that will enable Drupal 8 developers to quickly launch a Facebook Messenger bot.

The goal, Goldman said, is to make it easier for citizens to message the President via Facebook.

This is the first-ever government bot on Facebook messenger.


ThinkShout: Content Modeling in Drupal 8

Planet Drupal - Fri, 14/10/2016 - 17:00

In many modern frameworks, data modeling is done by building out database tables. In Drupal, we use a web-based interface to build our models. This interface makes building the database accessible for people with no database experience. However, this easy access can lead to overly complex content models because it’s so easy to build out advanced structures with a few hours of clicking. It’s surprising how often Drupal developers are expected to be content modeling experts. Rachel Lovinger wrote this great overview of content modeling for the rest of us who aren’t experts yet.

Data Modeling Goal

Our goal when modeling content in Drupal is to build out the structure that will become our editor interface and HTML output. We also need to create a model that supports the functionality needed in the website. While accomplishing this, we want to reduce the complexity of our models as much as possible.

Getting Started

One of the first things to do when building a Drupal site is build content types. So, before you start a site build, start with either a content model or a detail page wireframe. This spreadsheet from Palantir will help you. The home page design may look amazing, but it’s unhelpful for building out content types. Get the detail pages before you start building.

Why Reduce Complexity?

The more content types you create, the more effort it will take to produce a site. Furthermore, the more types you have, the more time it will take to maintain the site in the future. If you have 15 content types and need to make a site-wide change, you need to edit 15 different pages.

The more pages you need to edit, the more mistakes you will make in choosing labels, settings, and formatters. Lastly, content can’t easily be copied from one type to another, which makes moving content around your site harder when there are many content types. So, the first thing you’ll want to do with your content model is collapse your types into as few types as feasible. How many is that?

5 Content Types is Enough

Drupal has many built in entities like files, taxonomy, users, nodes, comments, and config. So, the vast majority of sites don’t need any more than 5 content types. Instead of adding a new content type for every design, look for ways to reuse existing types by adding fields and applying layouts to those fields.

Break Up the Edit Form

Drupal 8 allows you to have different form displays for a single content type. With either Form Mode Control or Form Mode Manager, you can create different edit experiences for the same content type without overloading the admin interface.

By reducing the complexity of the content model, we decrease maintenance cost, improve the consistency of the website, and simplify the editing experience. Now that you’ve got some content modeling basics, look for opportunities to reduce and reuse content types in your Drupal projects. Content editors will thank you.


Mediacurrent: Friday 5: 5 Advantages of Component-Driven Theming

Planet Drupal - Fri, 14/10/2016 - 16:44

Happy Friday and thanks for tuning into Episode 17, it's good to be back!


Jonathan Dowland: Hi-Fi Furniture

Planet Debian - Fri, 14/10/2016 - 16:23

sadly obsolete

For the last four years or so, I've had my Hi-Fi and the vast majority of my vinyl collection stored in a self-contained, mildly-customized Ikea unit. Since moving house this has been in my dining room—which we have always referred to as the "play room", since we have a second dining room in which we actually dine.

The intention for the play room was for it to be the room within which all our future children would have their toys kept, in an attempt to keep the living room from being overrun with plastic. The time has thus come for my Hi-Fi to come out of there, so we've moved it to our living room. Unfortunately, there's not enough room in the living room for the Ikea unit: I need something narrower for the space available.

via IkeaHackers.net

In the spirit of my original hack, I started looking at what others might have achieved with Ikea components. There are some great examples of open-style units built out of the (extremely cheap) Lack coffee tables, such as this ikeahackers article, but I'd prefer something more enclosed. One problem I've had with the Expedit unit was my cat trying to scratch the records. I ended up putting framed records at the front to cover the spines of the records within. If I were keeping the unit, I'd look at fitting hinges (another ikeahackers article).

Aside from hacked Ikea stuff, there are a few companies offering traditional enclosed Hi-Fi cabinets. I'm going to struggle to fit both the equipment and a subset of records into these, so I might have to look at storing them separately. In some ways that makes life easier: the records could go into a 1x4 Ikea KALLAX unit, leaving the amp and deck to house somewhere. Perhaps I could look at a bigger unit for under the TV.

My parents have a nice Hi-Fi unit that pretends to be a chest of drawers. I'm fairly sure my Dad custom-built it, as it has a hinged top to provide access to the turntable and I haven't seen anything like that on the market.

That brings me onto thinking about other AV things I'd like to achieve in the living room. I've always been interested in exploring surround sound, but my initial attempt in my prior flat did not go well, either because the room was not terribly suited acoustically, or because the Pioneer unit I bought was rubbish, or both. It seems that there aren't really AV receivers designed to satisfy people wanting to use them in both a Hi-Fi and a home cinema setting. I could stick to stereo and run the TV into my existing (or a new) amplifier, subject to some logistics around wiring. A previous house owner ran some phono cables under the hard-wood flooring from the TV alcove to the opposite side of the fire place, which might give me some options.

There's also the world of wireless audio, Sonos etcetera. Realistically the majority of my music is digital nowadays, and it would be good to be able to listen to it conveniently in the house. I've heard good reports on the entry level Sonos stuff, but they seem to be Mono, and even the more high-end ones with lots of drivers have very small separation. I did buy a Chromecast Audio on a whim recently, but I haven't looked at it much yet: perhaps that's part of the solution.

So, lots of stuff in the melting pot to figure out here!


