Feed aggregator

Petter Reinholdtsen: I want the courts to be involved before the police can hijack a news site DNS domain (#domstolkontroll)

Planet Debian - Thu, 19/05/2016 - 14:00

I just donated to the NUUG defence fund to finance the effort in Norway to get the seizure of the news site popcorn-time.no tested in court. I hope everyone who agrees with me will do the same.

Would you be worried if you knew the police in your country could hijack the DNS domains of news sites covering a free software system without talking to a judge first? I am. What if the free software system combined search engine lookups, BitTorrent downloads and video playout, and was called Popcorn Time? Would that affect your view? It still makes me worried.

In March 2016, the Norwegian police seized the DNS domain popcorn-time.no (that is, they forced NORID to change the IP address it points to, to one controlled by the police), without any supervision from the courts. I did not know about the web site back then; I assumed the courts had been involved, and was very surprised to discover that the police had hijacked the DNS domain without asking a judge for permission first. I was even more surprised when I had a look at the web site content on the Internet Archive and found only news coverage about Popcorn Time, not any material published without the rights holders' permission.

The seizure was widely covered in the Norwegian press (see for example Hegnar Online and ITavisen and NRK), at first due to the press release sent out by Økokrim, but then based on protests from the law professor Olav Torvund and lawyer Jon Wessel-Aas. It even got some coverage on TorrentFreak.

I wrote about the case a month ago, when the Norwegian Unix User Group (NUUG), where I am an active member, decided to ask the courts to test this seizure. The request was denied, but NUUG and its co-requestor EFN have not given up, and now they are rallying for support to get the seizure legally challenged. They accept both bank and Bitcoin transfers for those who want to support the request.

If you, like me, believe that news sites about free software should not be censored, even if the free software has both legal and illegal applications, and that DNS hijacking should be tested by the courts, I suggest you show your support by donating to NUUG.

Categories: Elsewhere

jfhovinne pushed to 7.x-1.x at jfhovinne/integration_couchdb

Devel - Thu, 19/05/2016 - 13:50
  • c3c3098 Do not use getResourceEndpoint method from Integration module.
Categories: Networks

Blair Wadman: Create a Drupal 8 module using the Drupal Console

Planet Drupal - Thu, 19/05/2016 - 11:00

Developing custom modules in Drupal 8 is a daunting prospect for many. Whether you're still learning Drupal 7 module development or are more experienced, Drupal 8 represents a significant shift in the underlying architecture and the way modules are constructed.  

Categories: Elsewhere

Larry Garfield: HTML Application or Network Application?

Planet Drupal - Thu, 19/05/2016 - 08:45

There has been much discussion in the last few years of "web apps". Most of the discussion centers around whether "web apps" that do not degrade gracefully, use progressive enhancement, have bookmarkable pages, use semantic tags, and so forth are "Doing It Wrong(tm)", or if JavaScript is sufficiently prevalent that a JavaScript-dependent site/app is reasonable.

What I fear is all too often missing from these discussions is that there isn't one type of "web app". Just because two "things" use HTTP doesn't mean they're conceptually even remotely the same thing.

read more

Categories: Elsewhere

Drupal.org blog: The Drupal.org Complexity

Planet Drupal - Thu, 19/05/2016 - 04:06

At DrupalCon New Orleans, during both Dries's keynote and the State of Drupal Core Conversation, the question of whether/when to move to Github came up again.

Interestingly, I was already deep into researching ways we could reduce the cost of our collaboration tools while bringing in new contributors. This post is meant to serve as a little history of how we got to where we are and to provide information about how we might choose to go forward.

It's complex

To say Drupal.org is complex is an understatement. There are few systems with more integration points than Drupal.org and its related sites and services.

The ecosystem is complex with lots of services that share integrations like login (Bakery single sign on) and cross-site code/themes.

It all starts with the code

The slogan "come for the code, stay for the community" is accurate. The community would not exist without the unifying effort of collaborating to create the code. The growth of the community is primarily because of the utility the code provides and the (relative) ease of creating a wide range of websites and applications using Drupal core combined with contributed modules and themes that allow that framework to be extended.

Drupal.org was an extension of the development of Drupal for a very long time. Up until Drupal 6, Drupal.org was always upgraded the day of the release of a new version. When Drupal was smaller with a more limited scope, this made a lot of sense. With the release of Drupal 6 and the surge in usage of Drupal, more and more contributors started working on the Drupal.org infrastructure and creating new sites and services to speed the collaborative work of the community.

One of the biggest transitions during the Drupal 6 lifecycle and community surge was The Great Git Migration. Much of the complexity of Drupal.org and the related sites and services was created during this time period. Understanding that timeline will help in understanding just how much work went into Drupal.org and Drupal at that time.

The Great Git Migration

In the Great Git Migration, all of the history of Drupal code was migrated to Git from CVS. The timeline for migrating to Git was about what you would expect: community conversation took time, getting volunteers to start the process took time, and finally there was a phase where dedicated (paid) resources were contracted to finish the work.

Our repos are vast

We have over 35,000 projects that total over 50 GB on disk.

All of the Git repos on Drupal.org are associated with projects (e.g. modules, themes, distributions, etc.).

We have issues

At the end of 2015, there were nearly 900,000 issues on Drupal.org. Drupal core alone has over 74,000 issues; over 14,000 of those are open. The number of open issues is not an indicator of code quality, but it is an indicator of how many people have contributed to a project. In general, the more issues a project has, the more challenging it is for maintainers to continuously triage those bug reports, feature requests, plans, tasks and support requests.

The issue queues are part project management, part bug tracking. As such, they are organic and messy and have lots of rules that have been documented over years of community development. We have 23 pages of documentation dedicated to explaining how to use the issue queues. There is additional documentation dedicated to how to properly fill out an issue for core, for Drupal.org, and for numerous other contributed projects.

Issues are integrated into collaboration on Drupal.org

Those issues belong to projects and are connected to the Git repos through hooks that show a system comment when an issue is related by node ID (issue number) to a commit in Git.
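
As a rough illustration of that convention (this is not the actual Drupal.org hook code), relating a commit to an issue only requires extracting the node ID from the standard "Issue #nnnnnnn by ..." commit message prefix:

<?php

/**
 * Extract issue node IDs from a Drupal-style commit message.
 *
 * Illustrative sketch only; the real Drupal.org Git hooks differ.
 */
function extract_issue_ids($message) {
  // Node IDs double as issue numbers, e.g. "Issue #2457653 by alice: Fix it."
  preg_match_all('/#(\d{4,})/', $message, $matches);
  return array_map('intval', $matches[1]);
}

// Prints: Array ( [0] => 2457653 )
print_r(extract_issue_ids('Issue #2457653 by alice, bob: Fix the widget.'));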

Issues can have patches uploaded to them that are the primary means of suggesting a change to code hosted on Drupal.org. The patch-based workflow has extensive documentation, but it is not a simple task for a novice user to jump in and start contributing.

Most Git hosting solutions (Github, Gitlab, Bitbucket, etc.) either have some version of an issue or at least integrate with an issue tracking system (Jira, Pivotal Tracker, etc.) and provide pull request functionality (a.k.a. merge requests).

Having the same name is where the similarities and consistencies stop. Issues on Drupal.org have status, priority, category, component, tagging and more that are unique to Drupal project workflow. It would be a significant exercise to remap all of those categorizations to a new system.

Packaging

If the projects are what you can browse and find, and the issues are how you collaborate and change the code, the next most important service for Drupal is likely the packaging system.

Packaging is based on project maintainers creating a release of the code by associating a branch of the Git repository with the release. Every 5 minutes, our automation infrastructure checks for new releases and will package those releases into a downloadable file to represent the project.

Few developers actually access this directly from the project page anymore. They are much more likely to use Git, Drush, Console or Composer to automate that part of the workflow. Drush, and to some extent Composer, both use the packaged files when a command is issued. Also, the Drupal feature of just putting the code in the correct directory and having it run, with no compiling, is fundamental to the history of Drupal site building.

Updates

Another crucial Drupal service is updates, which is built into how Drupal core checks on itself to see if it is up to date.

The 1.3 million plus websites that call home to updates.drupal.org get back XML that is then parsed by that installation's update status module; that module has different names depending on the version of Drupal. Each month, about 12 terabytes of traffic to our CDN is requests for updates XML. Considering this is a bunch of text files, that is an amazing number. Some sites call home once a week, some once a day, and some do it every few minutes. (Really, people! Be nice to your free updates service. Telling your server to ask for updates daily is plenty frequent enough.)
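
As a hedged sketch of what one of those calls boils down to on the client side (the URL is the public release-history endpoint; the element names are simplified from the actual feed):

<?php

// Fetch the release history XML for a project and list available releases.
// Drupal's update status module does roughly this, plus caching and
// comparison against the installed version.
$xml = simplexml_load_file('https://updates.drupal.org/release-history/drupal/8.x');

foreach ($xml->releases->release as $release) {
  // Each <release> element carries a version string and a publication status.
  printf("%s (%s)\n", $release->version, $release->status);
}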

Tallying the unique site keys that request this information is how we get our usage statistics. While this is probably not the most precise way to measure our usage, it is directionally accurate. There are plenty of development sites in those stats and plenty of production websites that don't call home. It should roughly balance out. To be any more precise, users of Drupal would have to give up some privacy. We've balanced accuracy with privacy.

Because of our awesome CDN (thanks, Fastly!), we are able to deliver up to date packages and updates information in milliseconds after we update the underlying data.

Composer

On May 3rd, we launched the alpha version of our Composer endpoints on Drupal.org. If you don't know about Composer, you should read up on it. Composer is package management for PHP. (It's similar to what NPM does for Node.js or RubyGems does for Ruby.)

Core developers have been using Composer for some time as a means to manage the dependencies of PHP libraries that are now included in core.

The Composer endpoints allow any Drupal site developer to use composer install to build out their websites.

The new Composer service will also allow contrib project maintainers to use composer.json files to define the requirements for their modules and themes. The service even translates project release versions into semantic versioning. Semantic versioning was the biggest reason we could not "just" use Packagist.org like other projects in the PHP community.
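
To illustrate the mismatch, a contrib release such as 8.x-1.2 has to be mapped to a semantic version like 1.2.0 before Composer can resolve it. The following is a hypothetical sketch of such a translation, not the actual service code; the real endpoint handles many more edge cases (dev releases, core compatibility, and so on):

<?php

/**
 * Translate a Drupal contrib version into a semver-style string.
 *
 * Hypothetical sketch: "8.x-1.2" => "1.2.0", "8.x-1.0-beta1" => "1.0.0-beta1".
 */
function drupal_version_to_semver($version) {
  // Strip the core compatibility prefix, e.g. "8.x-".
  $version = preg_replace('/^\d+\.x-/', '', $version);
  // Split off a stability suffix such as "-beta1" or "-rc2".
  $suffix = '';
  if (preg_match('/^([\d.]+)-(.+)$/', $version, $m)) {
    $version = $m[1];
    $suffix = '-' . $m[2];
  }
  // Pad "1.2" out to the three-part "1.2.0" form that semver expects.
  while (substr_count($version, '.') < 2) {
    $version .= '.0';
  }
  return $version . $suffix;
}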

This is all a huge benefit, but more importantly, we now have deep integration between a best practice approach to PHP dependency management and the Drupal.org code repos that can scale to our community needs.

Testing with DrupalCI

Speaking of needs, DrupalCI ran 67,000 test runs in January 2016. Each Drupal core test run includes 18,511 tests, which means over 100,000 assertions (steps) in the unit and functional tests that make sure Drupal's code is stable and that an accepted patch does not create a regression.

At the time of this post, we are using Amazon Web Services cc2.8xlarge EC2 spot instances for our testbots. These bots are powerful. They have 2 processors with 8 hardware cores. AWS claims they can provide 88 EC2 compute units per instance. They are packed with processing power because we have a lot of tests to run. While there are bigger instances, the combination of price and power allows us to keep Drupal core complete test runs right around 30 minutes. We autoscale up to 20 of these instances depending on demand, which keeps queue times low and allows maintainers to get quick feedback about whether a patch works or not.

I truly believe that getting DrupalCI up and stable is what allowed Drupal 8 to get to a full release last fall. Without it, we would have continued to struggle with test times that were well over an hour and a system that required surplus testbots to be manually spun up when a big sprint was happening. That was costly and a huge time waste.

If anyone asks me "what's the most important thing your team did in 2015", I can unequivocally say "unblocking core development to get Drupal 8 released."

Issue credits

The second most important service we built in 2015—but certainly the more visible—is a system for crediting users and organizations that contribute on Drupal.org.

Issue credits sprang forth from an idea that Dries proposed around DrupalCon Austin in June of 2014. At the time, his intent was to structure commit messages or use commit notes to provide the credit. Eventually, we shifted the implementation to focus on participation in issues rather than code commits. This made it possible to credit collaboration that did not result in a code change.

I won't get into the specifics; I wrote A guide to issue credits and the Drupal.org marketplace earlier this year. Issue credits have been extremely successful.

As their name implies, we store the data about credits as a relationship to closed issues. Issue credits touch issues, projects, users, organizations and the marketplace on Drupal.org.

Why not just migrate all of this complexity to Github?

Why can't we just move all this to Github?

— said lots of people, often

To be fair, this is a challenging discussion. Angie Byron (webchick) wrote an amazingly concise summary of the Github issue on Groups.drupal.org.

That wiki/discussion/bikeshed was heated. The conversation lasted over two years. I started as CTO about 6 months into the conversation. Along with a couple of other themes, the Github move has been a constant background conversation that has defined much of my time leading the Drupal.org team.

How are these services connected?

To truly understand the problem of a migration of this scale, we have to look at how all of the major Drupal.org services are connected.

Each block in this diagram is a service. Each line is a point of integration between the services. Some of these services are on Drupal.org or its subsites, with thousands of lines of custom code defining the interactions. Other services are not built in Drupal and represent projects in Java (Jenkins) or Python (our Git daemon) with varying degrees of customization and configuration.

As the diagram suggests, it is truly a web of integrations. Pull one or more services out of this ecosystem and you have to either refactor a ton of code or remove a critical component of how the community collaborates and how our users build sites with Drupal.

It's kinda like a great big game of Jenga.

What would a migration to Github require?

Please believe me when I say that if it were "easy" or "simple", we would have either moved to Github or at least upgraded our Git collaboration with nifty new tools on our own infrastructure.

However, disrupting the development of Drupal 8 would have been devastating to the project. We were correct to collectively backlog this project.

So if we were to try this migration now, what would it take? First, you have to consider the services that Github would effectively replace.

Github replaces:

  • Git repositories
  • Issues
  • Patches (they would become pull requests)
  • Git viewing (and we'd get inline editing for quick fix and onboarding)

That's four (4!) services that we would not have to maintain anymore. Awesome! Cost savings everywhere! Buy a boat!

Wait a second. You have 16 integration points that you need to refactor. Some of them would come with the new system. Issues, pull requests, repos and the viewer would all just work with huge improvements. That leaves us with 12 integration points that would require a ton of discovery and refactoring.

  1. Users - we have 100,000 Drupal.org users that are pretty engaged. (We have over 1 million user accounts—but that number is likely a little inflated by spam accounts.) Do we make them all get Github accounts? Do we integrate Github login to Drupal.org? Do we just link the accounts like Symfony does?
  2. Projects - Github is not a project browsing experience. Drupal.org is a canonical repository where the "one true project" lives for packaging and updates. At the very least, we have to integrate our projects with Github. Does that mean we have to keep a Git repo associated with the project that has hooks to pull in changes from Github?
  3. Testing - One of the less complex integration refactors would be getting DrupalCI integrated with pull requests. That effort would still be a months-long project.

And migrating DrupalCI itself to another testing service would be its own effort, because it is tailored to the issue queue workflow and tightly integrated with projects.

Those are just a few of the major integration points.

I have a personal goal to detail every single integration and get that documented somewhere on Drupal.org. I don't think that level of documentation will increase the ability for others to contribute to the Drupal.org infrastructure—though that would be a pleasant side effect. I do think it is necessary for us to continue to support and maintain our systems and ensure that all of the tribal knowledge from the Drupal.org team can be passed on.

What would it cost?

I have joked that it would take roughly 1 million dollars (USD) to complete a Github migration. (Cue Dr. Evil.) That is only partially meant in jest.

As anyone who has estimated a large project knows, there is a point of uncertainty that leads project owners to guess at what they are willing to pay for the project.

If we take the four biggest lifts in the Drupal project's history, what do we get?

  1. Drupal.org redesign - There were tens of people involved in the project, hundreds giving feedback. The timeline was about a year from start to implementation.
  2. The Great Git Migration - There were tens of people involved in the project. Far fewer users gave feedback, but the project took about two years from brainstorming to initial commit to the Git repos, with a few months of cleanup after.
  3. Drupal.org upgrade to Drupal 7 - The project took about two years, with tens of people involved and about 8 months of cleanup issues.
  4. Drupal 8 - 5 years of development by over 3,000 contributors.

I don't think that anyone would argue that each of these projects would have been bid at well over $1 million. I would put a migration to Github at somewhere between the complexity of The Great Git Migration and Drupal 8.

In none of these cases did the Drupal Association actually spend $1 million USD in project dollars. However, in all of the projects, there was lengthy discussion followed by substantial volunteer contribution, and then a significant bit of paid work to finish the job. It's a pattern that makes sense and will likely repeat itself over and over.

Would it be worth it?

I'm going to go back to the summary on the Github discussion. There are reasons why both options seem to be the best possible option or the worst possible option.

Would a best-practice workflow and toolset be worth the change? Absolutely. Github (or Gitlab) tools are easier for newcomers to learn. Further, because we are using PHP and JavaScript libraries that are hosted on Github, we could get contributions from developers and designers who are involved in those projects and do not wish to have an account on Drupal.org.

The drawbacks are considerable. We cannot afford a full migration right now. Dries put it well at DrupalCon Los Angeles during core conversations: the Drupal Association is not a bag of money. With significant growth of revenue, there is a long-term possibility of more paid developer resources, but not in the short term. It is too much to ask volunteers to give up a year of their life to run the project as a community initiative. That leads to burnout and frustration.

We should also consider whether the disruption to the current collaboration workflow will be worth it. I don't think so. Not if that disruption meant stalling the update of contrib projects that are critical to solidifying Drupal 8 adoption. (Though I could argue that much of this upgrade to Drupal 8 work is being performed on Github as some—perhaps many—developers prefer those tools.)

Is there a middle ground?

Drupal spends a lot of time getting to the middle ground. Many of the best innovations in Drupal come from getting to the middle ground, from reaching a general consensus and then allowing someone who has support and time to iron out the details.

So for the first step, we should add functionality to projects on Drupal.org that allow maintainers to shift their workflow to Github while still publishing their project on Drupal.org. This allows the canonical browsing of projects, the continued support of the security team, and most importantly the continued distribution of Drupal through Composer, release packaging and the updates system.

We have a solid way forward for these integrations, as the requirements are narrow enough in scope to accomplish in a 4-6 month timeframe using dedicated resources. We would still need to figure out how to award an issue credit to someone who participated in an issue on Github. We might be able to institute commit credits that could be parsed into issue credits from the participation on Github, but it would not be as inclusive as the current model.

It would be important to phase in this new feature rather than make a wholesale change. Once that integration is in place, we could extend DrupalCI to test pull requests similar to how we currently test patches submitted to an issue.

Stay flexible

We need to be flexible. GitHub has a lot of potential as a tool for open source distribution and collaboration, likely for the foreseeable future. However, not every major project is on GitHub. The Linux kernel uses Git repositories with CLI tools and a patch-based workflow that relies heavily on email. It works for them. WordPress is still on Subversion, even though they've started to accept some pull requests on GitHub. These projects are poised to make the right decision rather than a rash decision.

The sky will not fall if we keep our current model, but we are losing opportunities to grow as a community of contributors. Rather than a wholesale migration, we must understand the value and history of this web of integration points. Targeting our efforts on specific integration points can achieve our goal of opening our doors to the developers who live and breathe GitHub, without losing the character of our collaboration. And in the long run, this focus on services and integrations can make us more adaptable to the next change in the broader development landscape.

Republished from joshuami.com

Categories: Elsewhere

Lullabot: Rebuilding POP in D8 - Development Environments

Planet Drupal - Thu, 19/05/2016 - 00:30

This is the second in a series of articles about building a website for a small non-profit using Drupal 8. These articles assume that the reader is already familiar with Drupal 7 development, and focus on what is new / different in putting together a Drupal 8 site.

In the last article, I talked about Drupal 8's new block layout tools and how they are going to help us build the POP website without relying on external modules like Context. Having done some basic architectural research, it is now time to dive into real development. The first part of that, of course, is setting up our environments. There are quite a few new considerations in getting even this simple a setup in place for Drupal 8, so let's start digging into them.

My Setup

I wanted a pretty basic setup. I have a local development environment set up on my laptop, I wanted to host the code on GitHub, and I wanted to be able to push updates to a dev server so that my partner Nicole could see them, make comments, and eventually begin entering new content into the site. This is going to be done using a pretty basic dev/stage/live setup, along with a QA tool we've built here at Lullabot called Tugboat. We'll be going into the details of workflow and deployment in the next article, but there is actually a bunch of new functionality in Drupal 8 surrounding environment and development settings. So what do we need to know to get this going? Let's find out!

Local Settings

In past versions of Drupal, devs would often modify settings.php to include a localized version to store environment-specific information like database settings or API keys. This file does not get put into version control, but is instead created by hand in each environment to ensure that settings from one do not transfer to another inadvertently. In Drupal 8 this functionality is baked into core.

At the bottom of your settings.php are three commented out lines:

# if (file_exists(__DIR__ . '/settings.local.php')) {
#   include __DIR__ . '/settings.local.php';
# }

If you uncomment these lines and place a file named settings.local.php into the same directory as your settings.php, Drupal will automatically see it and include it, along with whatever settings you put in. Drupal core even ships with an example.settings.local.php which you can copy and use as your own. This example file includes several settings pre-configured which can be helpful to know about.

Caching

There are several settings related to caching in the example.settings.local.php which are useful to know about. $settings['cache']['bins']['render'] controls what cache backend is used for the render cache, and $settings['cache']['bins']['dynamic_page_cache'] controls what cache backend is used for the page cache. There are commented out lines for both of these which set the cache to cache.backend.null, which is a special cache backend that is equivalent to turning caching off for the specified setting.
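
Uncommented, the two lines in question look like this (they ship commented out in example.settings.local.php and should only be enabled during development):

$settings['cache']['bins']['render'] = 'cache.backend.null';
$settings['cache']['bins']['dynamic_page_cache'] = 'cache.backend.null';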

The cache.backend.null cache backend is defined in the development.services.yml file, which is by default included in the example.settings.local.php with this line:

$settings['container_yamls'][] = DRUPAL_ROOT . '/sites/development.services.yml';

If you want to disable caching as described above, then you must leave this line uncommented. If you comment it out, you will get a big ugly error the next time you try to run a cache rebuild.

Drush error message when the null caching backend has not been enabled.

The development.services.yml file is actually itself a localized configuration file for a variety of other Drupal 8 settings. We'll circle back to this a bit later in the article.

Other Settings

example.settings.local.php also includes a variety of other settings that can help during development. One such setting is rebuild_access. Drupal 8 includes a file called rebuild.php, which you can access from a web browser in order to rebuild Drupal's caches in situations where the Drupal admin is otherwise inaccessible. Normally you need a special token to access rebuild.php, however by setting $settings['rebuild_access'] = TRUE, you can access rebuild.php without a token in specific environments (like your laptop).

Another thing you can do is turn on or off CSS and Javascript preprocessing, or show/hide testing modules and themes. It is worth taking the time to go through this file and see what all is available to you in addition to the usual things you would put in a local settings file like your database information.

Trusted Hosts

One setting you'll want to set that isn't pre-defined in example.settings.local.php is trusted_host_patterns. In earlier versions of Drupal, it was relatively easy for attackers to spoof your HTTP host in order to do things like rewrite the link in password reset emails, or poison the cache so that images and links pointed to a different domain. Drupal offers the trusted_host_patterns setting to allow users to specify exactly what hosts Drupal should respond to requests for. For the site www.example.com, you would set this up as follows.

$settings['trusted_host_patterns'] = array(
  '^www\.example\.com$',
);

If you want your site to respond to all subdomains of example.com, you would add an entry like so:

$settings['trusted_host_patterns'] = array(
  '^www\.example\.com$',
  '^.+\.example\.com$',
);

Trusted hosts can be added to this array as needed. This is also something you'll want to set up on a per-environment basis in settings.local.php, since each environment will have its own trusted hosts.

Local Service Settings

When Drupal 8 started merging in components from Symfony, we introduced the concept of "services". A service is simply an object that performs a single piece of functionality which is global to your application. For instance, Symfony uses a Mailer service which is used globally to send email. Some other examples of services are Twig (for template management) and Session Handling.

Symfony uses a file called services.yml for managing configuration for services, and just like with our settings.local.php, we can use a file called development.services.yml to manage our localized service configuration. As we saw above, this file is automatically included when we use Drupal 8's default local settings file. If you add this file to your .gitignore, then we can use it for environment-specific configuration just like we do with settings.local.php.

The full scale of configuration that can be managed through services.yml is well outside the scope of this article. The main item of interest from a development standpoint is Twig debugging. When you set debug: true in the twig.config portion of your services configuration file, your HTML output will have a great deal of debugging information added to it. You can see an example of this below:

Drupal page output including Twig debugging information.

Every template hook is outlined in the HTML output, so that you can easily determine where that portion of markup is coming from. This is extremely useful, especially for people who are new to Drupal theming. This does come with a cost in terms of performance, so it should not be turned on in production, but for development it is a vital tool.

Configuration Management

One of the major features of Drupal 8 is its new configuration management system. This allows configuration to be exported from one site and imported on another with the ease of deploying any other code changes. Drupal provides all installations with a sync directory, which is where configuration is exported to and imported from. By default this directory is located in Drupal's files directory, however this is not the best place for it considering the sensitive data that can be stored in your configuration. Ideally you will want to store it outside of your webroot. For my installation I have set up a directory structure like this:

Sample Drupal 8 directory structure.

The Drupal installation lives inside docroot, util contains build scripts and other tools that are useful for deployments (more on this in the next article) and config/sync is where my configuration files are going to live. To make this work, you must change settings.php as follows:

$config_directories = array(
  CONFIG_SYNC_DIRECTORY => '../config/sync',
);

Note that this will be the same for all sites, so you will want to set it in your main settings.php, not a settings.local.php.

Having done all this, we are now set up for work and ready to establish our development workflow for pushing changes upstream and reviewing changes as they are worked on. That will be the subject of our next article, so stay tuned!

Categories: Elsewhere

Stig Sandbeck Mathisen: Puppet 4 uploaded to Debian experimental

Planet Debian - Thu, 19/05/2016 - 00:00

I’ve uploaded puppet 4.4.2-1 to Debian experimental.

Please test with caution, and expect sharp corners. This is a new major version of Puppet in Debian, with many new features and potentially breaking changes, as well as a big rewrite of the .deb packaging. Bug reports for src:puppet are very welcome.

As previously described in #798636, the new package names are:

  • puppet (all the software)

  • puppet-agent (package containing just the init script and systemd unit for the puppet agent)

  • puppet-master (init script and systemd unit for starting a single master)

  • puppet-master-passenger (This package depends on apache2 and libapache2-mod-passenger, and configures a puppet master scaled for more than a handful of puppet agents)

Lots of hugs to the authors, keepers and maintainers of autopkgtest, debci, piuparts and ruby-serverspec for their software. They helped me figure out when I had reached “good enough for experimental”.

Some notes:

  • To use exported resources with puppet 4, you need a puppetdb installation and a relevant puppetdb-terminus package on your puppet master. This is not available in Debian, but is available from Puppet’s repositories.

  • Syntax highlighting for Emacs and Vim is no longer built from the puppet package. Standalone packages will be made.

  • The packaged puppet modules need an overhaul of their dependencies to install alongside this version of puppet. Testing would probably also be great to see if they actually work.

I sincerely hope someone finds this useful. :)

Categories: Elsewhere

Jonathan McDowell: First steps with the ATtiny45

Planet Debian - Wed, 18/05/2016 - 23:25

These days the phrase “embedded” usually means no console (except, if you’re lucky, console on a UART for debugging) and probably busybox for as much of userspace as you can get away with. You possibly have package management from OpenEmbedded or similar, though it might just be a horrible kludged together rootfs if someone hates you. Either way it’s rare for it not to involve some sort of hardware and OS much more advanced than the 8 bit machines I started out programming on.

That is, unless you’re playing with Arduinos or other similar hardware. I’m currently waiting on some ESP8266 dev boards to arrive, but even they’re quite advanced, with wifi and a basic OS framework provided. A long time ago I meant to get around to playing with PICs but never managed to do so. What I realised recently was that I have a ready made USB relay board that is powered by an ATtiny45. First step was to figure out if there were suitable programming pins available, which turned out to be all brought out conveniently to the edge of the board. Next I got out my trusty Bus Pirate, installed avrdude and lo and behold:

$ avrdude -p attiny45 -c buspirate -P /dev/ttyUSB0
Attempting to initiate BusPirate binary mode...
avrdude: Paged flash write enabled.
avrdude: AVR device initialized and ready to accept instructions

Reading | ################################################## | 100% 0.01s

avrdude: Device signature = 0x1e9206 (probably t45)
avrdude: safemode: Fuses OK (E:FF, H:DD, L:E1)

avrdude done.  Thank you.

Perfect. I then read the existing flash image off the device, disassembled it, worked out it was based on V-USB and then proceeded to work out that the only interesting extra bit was that the relay was hanging off pin 3 on IO port B. Which led to me knocking up what I thought should be a functionally equivalent version of the firmware, available locally or on GitHub. It’s worked with my basic testing so far and has confirmed to me I understand how the board is set up, meaning I can start to think about what else I could do with it…

Categories: Elsewhere

Dries Buytaert: Megan Sanicki to become Executive Director at the Drupal Association

Planet Drupal - Wed, 18/05/2016 - 23:03

This is a time of transition for the Drupal Association. As you might have read on the Drupal Association blog, Holly Ross, our Executive Director, is moving on. Megan Sanicki, who has been with the Drupal Association for almost 6 years, and was working alongside Holly as the Drupal Association's COO, will take over Holly's role as the Executive Director.

Open source stewardship is not easy, but in the 3 years Holly was leading the Drupal Association, she led with passion, determination and transparency. She operationalized the Drupal Association and built a team that truly embraces its mission to serve the community, growing that team by over 50% over the three years of her tenure. She established a relationship with the community that wasn't there before, allowing the Drupal Association to help in new ways like supporting the Drupal 8 launch, providing test infrastructure, and more. Holly also matured DrupalCon, expanding its reach to more users with conferences in Latin America and India. She also executed the Drupal 8 Accelerate Fund, which allowed direct funding of key contributors to help lead Drupal 8 to a successful release.

Holly did a lot for Drupal. She touched all of us in the Drupal community. She helped us become better and work closer together. It is sad to see her leave, but I'm confident she'll find success in future endeavors. Thanks, Holly!

Megan, the Drupal Association staff and the Board of Directors are committed to supporting the Drupal project. In this time of transition, we are focused on the work that Drupal Association must do and looking at how to do that in a sustainable way so we can support the project for many years to come.

Categories: Elsewhere

Andy Simpkins: OpenTAC sprint, Cambridge

Planet Debian - Wed, 18/05/2016 - 23:00

Last weekend saw a small group get together in Cambridge to hack on the OpenTAC. OpenTAC is an OpenHardware OpenSoftware test platform, designed specifically to aid automated testing and continuous integration.

Aimed at small / mobile / embedded targets, OpenTAC v1 provides all of the support infrastructure to connect up to 8 DUTs (Devices Under Test) to your test or CI system.
Each of the 8 EUT ports provides:

  • A serial port (either RS232 levels on a DB9 socket, or 3V3 TTL on a Molex KK plug)
  • USB power (up to 2A, with a software-defined fuse and alarm limits)
  • USB data interconnect
  • Ethernet

All ports on the EUT interface are relay-isolated; this means that cables to your EUT can be 'unplugged' under software control (we are aware of several SoC development boards that latch up if a serial port is connected before power is applied).

Additionally, there are 8 GPIO lines that can be used as switch controls for any EUT (perhaps to put a specific EUT into a programming mode, reboot it, or even start it).

Anyway, back to the hacking weekend...

 

Joining Steve McIntyre and myself were Mark Brown and Michael Grzeschik (sorry Michael, I couldn't find a homepage). Mark traveled down from Scotland whilst Michael flew in from Germany for the weekend. Gents, we greatly appreciate you taking the time and expense to join us this weekend. I should also thank my employer, Toby Churchill Ltd., for allowing us to use the office to host the event.

A lot of work got done, and I believe we have now fully tested and debugged the hardware. We have also made great progress with the device tree and device drivers for the platform. Mark got the EUT power system working as proof of concept, and has taken an OpenTAC board back with him to turn this into suitable drivers and hopefully push them upstream. Meanwhile, Michael spent his time working on the system portion of the device tree: OpenTAC's internal power sequencing, thermal management subsystem, and USB hub control. Steve got to grips with the USB serial converters (including how to read and program their internal non-volatile settings). Finally, I was able to explain hardware sequencing to everyone, and to modify boards to overcome some of my design mistakes (the biggest was by far the missing sense resistors for the EUT power management).

Categories: Elsewhere

Chapter Three: The Myth of the Unsupportable Drupal Site

Planet Drupal - Wed, 18/05/2016 - 21:56

On the frontlines of Drupal support, the team here at Chapter Three has seen a lot of different ways you can complicate or overengineer a Drupal build. These can range from sites with 350 unique modules to PHP-heavy frankensites that do everything they can to avoid using core APIs.

Categories: Elsewhere

Steve Kemp: Accidental data-store ..

Planet Debian - Wed, 18/05/2016 - 20:49

A few months back I was looking over a lot of different object-storage systems, giving them mini-reviews, and trying them out in turn.

While many were overly complex, some were simple. Simplicity is always appealing, providing it works.

My review of camlistore was generally positive, because I like the design. Unfortunately it also highlighted a lack of documentation about how to use it to scale, replicate, and rebalance.

How hard could it be to write something similar, while keeping it as simple as possible? Well, perhaps it was too easy.

Blob-Storage

First of all we write a blob-storage system. We allow three operations to be carried out:

  • Retrieve a chunk of data, given an ID.
  • Store the given chunk of data, with the specified ID.
  • Return a list of all known IDs.

 

API Server

We write a second server that consumers actually use, though it is implemented in terms of the blob-storage server listed previously.

The public API is trivial:

  • Upload a new file, returning the ID which it was stored under.
  • Retrieve a previous upload, by ID.

 

Replication Support

The previous two services are sufficient to write an object storage system, but they don't necessarily provide replication. You could add immediate replication: an upload of a file could involve writing that data to N blob-servers, but in a perfect world servers don't crash, so why not replicate in the background? You save time if you only save uploaded content to one blob-server.

Replication can be implemented purely in terms of the blob-servers (a rough code sketch follows this list):

  • For each blob server, get the list of objects stored on it.
  • Look for that object on each of the other servers. If it is found on N of them we're good.
  • If there are fewer copies than we like, then download the data, and upload to another server.
  • Repeat until each object is stored on sufficient number of blob-servers.
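
Here is a rough PHP transliteration of that loop (the actual implementation is Perl, and blob_list(), blob_fetch() and blob_store() are hypothetical stand-ins for HTTP calls to the blob-servers):

<?php

// Hypothetical sketch of the replication pass described in the list above.
function replicate(array $servers, $wanted_copies) {
  // Build a map of object ID => servers currently holding a copy.
  $locations = array();
  foreach ($servers as $server) {
    foreach (blob_list($server) as $id) {
      $locations[$id][] = $server;
    }
  }
  // Top up any object stored on fewer than $wanted_copies servers.
  foreach ($locations as $id => $holders) {
    $candidates = array_diff($servers, $holders);
    while (count($holders) < $wanted_copies && $candidates) {
      $target = array_shift($candidates);
      blob_store($target, $id, blob_fetch($holders[0], $id));
      $holders[] = $target;
    }
  }
}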

 

My code is reliable, the implementation is almost painfully simple, and the only difference in my design is that rather than having an API-server which allows both "uploads" and "downloads", I split it into two. That means you can leave your "download" server open to the world, so that it can be useful, and your upload-server can be firewalled to only allow a few hosts to access it.

The code is Perl-based, because Perl is good, and available here on GitHub:

TODO: Rewrite the thing in #golang to be cool.

Categories: Elsewhere

Wunderkraut blog: Dropcat - the Jenkins build

Planet Drupal - Wed, 18/05/2016 - 20:31

Time for part three of the Dropcat blog series. Now it is time to get less abstract and show what is done. At Wunderkraut Sweden we normally use Jenkins to do deploys, but you could also do it in a script from your local machine.

Jenkins setup

We are using Jenkins to clone the project, and the only task we do after that is executing a shell, like this:

At the end of this series I am doing a post about our composer workflow, so I will not go into that now. First we export some variables, to use with dropcat later:

  • PATH - local path to composer
  • DROPCAT_ENV - which environment we are going to deploy, in this case stage, so dropcat will use the dropcat.stage.yml file for settings
  • ENV - a variable we are using for composer
  • BUILD_DATE - used to name our deployed folder

And then the dropcat tasks:

  • dropcat prepare - checks if the db used for the site exists; if it does not, it tries to create it. It also creates the drush alias used for the site, etc.
  • dropcat backup - backs up the db; if you want to back up the whole web folder, add the option --backup_site
  • dropcat tar - packs the site in a tar-file. The options here could be set in dropcat.stage.yml, but I think it is more useful to use Jenkins variables here.
  • dropcat upload - uploads the tar to the remote server, and removes it from the local server.
  • dropcat move - unpacks the tar file and moves it in place, and creates a symlink to the deployed folder, like mysite_latest_stage. It also deletes the uploaded tar-file.
  • dropcat symlink - we use this to create the files folder, which in our setup is outside the web folder.
  • dropcat config-import - imports the configuration.
  • dropcat reset-login - gets us a login link to the site so we can check our deploy.

In the next blog post we will start to look in detail at what happens in each step.
Categories: Elsewhere

KnackForge: Create single page site in drupal 8 within 15 minutes

Planet Drupal - Wed, 18/05/2016 - 20:00

We all know that Drupal setup and installation are easy, and one-click installation is also available in most hosting services, but the tricky part is theming the Drupal site. It takes some time to design the site so that it looks professional.

In this blog post, I am going to show you how to build a single-page Drupal site within 15 minutes, with a good-looking theme.

Pathirakaliappan Wed, 05/18/2016 - 23:30
Categories: Elsewhere

LevelTen Interactive: Drupal Con[densed] 2016: The Best Content Marketing Sessions

Planet Drupal - Wed, 18/05/2016 - 16:20

As a marketer, a Drupal newbie, and the newest LevelTen employee, I was super excited for DrupalCon.

Historically, there has not been a lot of overlap between the Drupal world and the marketing world. On one level that makes some sense: companies who need a Drupal web solution are often large enterprise-level organizations who have the resources for an in-house marketing team.

But it’s clear after my week at DrupalCon 2016 that more and more agencies and web developers are recognizing that they need to offer some kind of content or content strategy services to their... Read more

Categories: Elsewhere


Jeff Geerling's Blog: Adding a role to a user programmatically in Drupal 8

Planet Drupal - Wed, 18/05/2016 - 16:18

Since a quick Google search didn't bring up how to do this in Drupal 8 (there are dozens of posts on how to do it in Drupal 7), I thought I'd post a quick blog post on how you can modify a user's roles in Drupal 8. Hint: It's a lot easier than you'd think!

In Drupal 7, $user was an object... but it was more like an object that acted like a dumb storage container. You couldn't really do anything with it directly; instead, you had to stick it in functions (like user_multiple_role_edit()) to do things like add or remove roles or modify account information.

In Drupal 8, $user is a real, useful object. Want to modify the account name and save the change?
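
The teaser cuts off before the code in this feed, but a minimal sketch of the Drupal 8 API being described looks like this (the uid and the role machine name are made-up examples):

<?php

use Drupal\user\Entity\User;

// Load the account, mutate it directly, and save; no helper functions needed.
$user = User::load(42);
$user->setUsername('jane');  // Modify the account name.
$user->addRole('editor');    // Add a role by its machine name.
$user->save();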

Categories: Elsewhere

OSTraining: Track Google Analytics Clicks and Downloads in Drupal

Planet Drupal - Wed, 18/05/2016 - 15:58

Over the last week, several people have asked us about tracking Google Analytics "events" on their Drupal site.

"Events" describe anything from clicking on an external link to leave your site or downloading a file.

We're going to use the Google Analytics module which is available for Drupal 6, 7 and 8.

Categories: Elsewhere

Pixelite: 10 things I learnt building in Drupal 8

Planet Drupal - Wed, 18/05/2016 - 14:00
Introduction

I have had the chance to be involved with two fresh builds with Drupal 8 now, and I thought I would describe some of the neat things I have found during this time and some of my lessons learned. My hope is that this blog post will help you in your journey with Drupal 8.

1. Drupal Console is awesome

Every time you need to generate a custom module, or a new block in a custom module, you can quickly and easily use Drupal Console to produce the scaffolding code for you. This makes the job of a developer a lot less stressful, and allows you to focus on actually writing code that delivers functionality.

I plucked these example commands that I use frequently from my bash history:

drupal site:mode dev
drupal generate:module
drupal generate:plugin:block
drupal generate:routesubscriber
drupal generate:form:config

Documentation is online, but for the most part the commands are self-documenting; if you use the --help option, you get a great summary of the command and the other options you can pass in.

The other nice thing is that this is a Symfony Console application, so it should feel very familiar if you have used another tool written with the same framework.

2. Custom block types are amazing

In Drupal 7 land there was bean, which was an attempt to stop making ‘meta’ nodes to fill in content-editable parts of complex landing pages. Now, fast forward to Drupal 8, and custom block types are in Drupal core.

This basically means as a site builder you now have another really powerful tool at your disposal in order to model content effectively in Drupal 8.

Each custom block type can have its own fields, its own display settings, and its own form displays.

Here are the final custom block types on a recent Drupal 8 build:

One downside is that there is no access control per custom block type (just a global “administer blocks” permission); no doubt contrib will step in to fill this hole in the future (does anyone know a module that can help here?). In the meantime there is a drupal.org issue on the subject.

I also found it weird that the custom blocks administration section is not directly under the ‘Structure’ section of the site; there is another drupal.org issue about normalising this as well. Setting up some default shortcuts really helped me save some time.

3. View modes on all the things

Creating custom view modes in Drupal 7 required either a custom module or Dave Reid’s entity_view_mode contrib module. Now this is baked into Drupal 8 core.

View modes on your custom block types take things to yet another level. This is one more feather in the Drupal site builder’s cap.

4. Twig is the best

In Drupal 7 I always found it weird that you could not unleash a front-end developer on your site and expect a pleasant result. In order to be successful, the themer would need to know PHP, preprocess hooks, template naming standards, the mystical specific order in which the templates apply, and so on. This often meant that a back-end and a front-end developer would need to work together in order to create a good outcome.

With the introduction of Twig, I now feel that theming is back in the hands of the front end developer, and knowledge of PHP is no longer needed in order to override just about any markup that Drupal 8 produces.

Pro tip: use the Drupal Console command drupal site:mode dev to enable Twig development options and disable Drupal caching. Another positive side effect is that Twig will then render the entire list of templates that you could be using, and show which one you actually are using (and where that template is located).

Pro tip: if you want to use a template per custom block type (which I did), then you can use this PHP snippet in your theme’s .theme file (taken from drupal.org):

<?php

/**
 * Implements hook_theme_suggestions_HOOK_alter() for form templates.
 *
 * @param array $suggestions
 * @param array $variables
 */
function THEMENAME_theme_suggestions_block_alter(array &$suggestions, array $variables) {
  if (isset($variables['elements']['content']['#block_content'])) {
    array_splice($suggestions, 1, 0, 'block__bundle__' . $variables['elements']['content']['#block_content']->bundle());
  }
}

5. Panelizer + panels IPE is a formidable site building tool

When looking for a layout manager to help build the more complex landing pages, I came across panelizer + panels IPE. Using panelizer you are able to:

  • create per node layout variants
  • apply a single layout to all nodes of a particular bundle (e.g. all your news articles have the same layout)

The other neat thing is that the layouts themselves are now standardised between all the various layout managers, using a contrib module called layout_plugin. Also, they are just YAML and Twig. Simple. There is even an effort to get this merged into Drupal 8.2, which I think would be a great idea.

Downside - all of the JS is still rendered on the page even for users (e.g. anonymous users) who have no access to panelizer. There is a patch on drupal.org to help fix this.

Since starting this build there has also been a stable release of Display Suite for Drupal 8, giving you even more options.

6. You can build a rather complex site with very few contributed modules

For the most recent site I built, I got away with using only 10 contributed modules (one of which, devel, was purely for debugging purposes).

  • ctools
  • google_analytics
  • metatag
  • panels
  • token
  • contact_block
  • devel
  • layout_plugin
  • panelizer
  • pathauto

This means you are inherently building a more stable and supportable site, as most of the functionality now comes out of Drupal core.

7. The contact module now is supercharged

In Drupal 7, the contact module was one of those modules that I never turned on, as it was rather inflexible. You could not change the fields in a UI, add email recipients, or have more than one form. Now in Drupal 8 you can have as many “contact” forms as you want; each one is fieldable, and can send emails to as many people as needed.

You can also enhance the core module with:

  • contact_block - allows you to place the contact form in a block
  • contact_storage - allows you to store the submissions in the database, rather than firing an email and forgetting about it

There is still a place for webform, namely:

  • large complex forms with lots of fields
  • multi-step forms
  • forms where you want to ‘save draft’

You can read more about this in the OSTraining blog post on the contact module.

Downside - I wanted a plain page to use the path /contact, but the contact module registers this path, so pathauto gave my contact page a path of /contact-0. Luckily, creating a route subscriber with Drupal Console was painless, so altering the contact module route was very simple to do. I can paste the code here if needed, but most of it is the code that Drupal Console generates for you.
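
As a hedged reconstruction of what such a subscriber looks like (the module name mymodule and the new path /contact-us are placeholders; contact.site_page is the core contact form route):

<?php

namespace Drupal\mymodule\Routing;

use Drupal\Core\Routing\RouteSubscriberBase;
use Symfony\Component\Routing\RouteCollection;

/**
 * Moves the core contact form off /contact so a plain page can use that path.
 */
class RouteSubscriber extends RouteSubscriberBase {

  /**
   * {@inheritdoc}
   */
  protected function alterRoutes(RouteCollection $collection) {
    if ($route = $collection->get('contact.site_page')) {
      $route->setPath('/contact-us');
    }
  }

}

The class also has to be registered in mymodule.services.yml with the event_subscriber tag, which is part of the boilerplate Drupal Console generates for you.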

8. PHPUnit is bundled into core

Now that Drupal 8 is largely object-oriented (OO), you are able to test classes using PHPUnit. I have written about PHPUnit in the past if you want to know more.

9. Views is in core

This was the main reason why adoption of Drupal 7 was so slow after its initial 7.0 release, as everyone needed Views to be stable before jumping ship. Now with Views bundled into core, views plugins are also being ported at a great rate of knots.

10. CKEditor is in core

I often found that this was one library that never (or hardly ever) got updated on sites that had been around for a while. More worryingly, CKEditor (the library) would from time to time fix security-related issues. Now that it comes with Drupal 8 core, it is just one less thing to worry about.

Also I would love to shout out to Wim Leers (and other contributors) for revamping the image dialog with alignment and caption options. I cannot tell you how much pain and suffering this caused me in Drupal 7.

Comments

If you have built a site recently in Drupal 8 and have found anything interesting or exciting, please let me know in the comments. I am also keen to see what sites people have built, so post a link if it is public.

Categories: Elsewhere

IXIS: British Council win RealIT Award 2016 for Infrastructure as an Enabler

Planet Drupal - Wed, 18/05/2016 - 13:10

Members of the British Council Digital team were delighted to receive the RITA2016 award last Thursday for the huge change in IT cloud infrastructure that Ixis delivered in the summer of 2015.

read more

Categories: Elsewhere
