Because of CVE-2015-0847 and CVE-2013-7441, two security issues in nbd-server, I've had to prepare updates for nbd, which has several supported versions: upstream, unstable, stable, oldstable, oldoldstable, and oldoldstable-backports. I've just finished uploading security fixes for all the supported versions of nbd-server in Debian. There are several relevant archives, and unfortunately each has its own way of doing things regarding security:
- For squeeze-lts (oldoldstable), you check out the secure-testing repository, run a script from that repository that generates a DLA number and email template, commit the result, and send a signed mail (whatever format) to the relevant mailing list. Uploads go to ftp-master with squeeze-lts as target distribution.
- For backports, you send a mail to the team alias requesting a BSA number, do the upload, and write the mail (based on a template that you need to modify yourself), which you then send (inline signed) to the relevant mailing list. Uploads go to ftp-master with $dist-backports as target distribution, but you need to be in a particular ACL to be allowed to do so. However, due to backports policy, packages should never be in backports before they are in the distribution from which they are derived -- so I refrained from uploading to backports until the regular security update had been done. I'm not sure whether that's strictly required, but I didn't think it would do any harm; even so, it did mean the procedure for backports was even more involved.
- For the distributions supported by the security team (stable and oldstable, currently), you prepare the upload yourself, ask permission from the security team (by sending a debdiff), do the upload, and then ask the security team to send out the email. Uploads go to security-master, which implies that you may have to use dpkg-buildpackage's -sa parameter in order to make sure that the orig.tar.gz is actually in the security archive (see the sketch after this list).
- For unstable and upstream, you Just Upload(TM), because it's no different from a regular release.
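To make that last point concrete, a source-only build that forces inclusion of the orig tarball, followed by the upload, might look like this (a sketch; the dput target name depends on your local dput configuration):
$ dpkg-buildpackage -S -sa
$ dput security-master ../nbd_*_source.changes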
While I understand how the differences between the various approaches have come to exist, I'm not sure I understand why they are necessary. Clearly, there's some room for improvement here.
As anyone who reads the above may see, doing an upload for squeeze-lts is in fact the easiest of the three "stable" approaches, since no intermediate steps are required. While I'm not about to advocate dropping all procedures everywhere, a streamlining of them might be appropriate.
When preparing an email newsletter, one time-consuming part is gathering together all the content that is needed. In my experience, virtually all the content already exists elsewhere, such as in the local CMS, in CiviCRM, on a blog, or in some other online source. So I started thinking about how I could make this process easier. What I did: I created mail merge tokens for CiviCRM that autofill a list of recent blog posts, stories, or any other type of CMS content. So the end-user sees a list of tokens, one for each content type, each term/category, each aggregator feed, and for each date range, such as "Content of type 'blog' created in the last 7 days". What is particularly powerful about this approach is that if you are also using a CMS aggregator (such as the aggregator module in Drupal core), then virtually any external RSS feed is turned into CMS content, which is now available as a CiviCRM token. (The original blog post about this extension is at: https://civicrm.org/blogs/pogstonesarahgladstone/easier-creation-email-newsletters-content-tokens )
Thanks to community involvement (specifically thanks to https://github.com/jorich-2000), there is a new version of the Content Token extension. This version now supports Joomla, in addition to Drupal7, Drupal6, and WordPress.
The latest version is 2.9 and can be downloaded from: https://civicrm.org/extensions/content-tokens
I am looking forward to getting feedback on this.
Long time without a blog post. My time got eaten by work and travel and work-related travel. Hopefully more content soon.
This is just a quick note about the release of version 1.34 of the git-pbuilder script (which at some point really should just be rewritten in Python and incorporated entirely into the git-buildpackage package). Guido Günther added support for creating chroots for LTS distributions.
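For the curious, creating such a chroot should, if I read the interface correctly, be a matter of the usual DIST-driven invocation (a sketch):
$ DIST=squeeze-lts git-pbuilder create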
You can get the latest version from my scripts page.
I plan on doing a more in-depth article on how I've been using Panels instead of templates or contexts for laying out this Drupal 7 site, but I feel like I still have more to learn. Until then, I wanted to share what I found to be a missing piece of the puzzle, Page Manager Existing Pages.
PMEP allows you to override any page that is in the admin menu for use in Page Manager. That way, you can create variants, and add whatever layout, content, selection rules, that you want. Without this plugin, you get an error message in Page Manager when trying to overwrite an existing URL.
So, where would I use this? Page Manager comes with defaults for Node, Taxonomy, and some User pages, which cover most of what you need to present your site to the world. But certain administration pages, when viewed in a front-end theme, slipped through the cracks: for example, node/add, which lists all the content types you can add, or the Style Guide module's generated /admin/appearance/styleguide.
Install and configure Page Manager Existing Pages
What do we want to achieve:
- SFTP server
- only a specified account is allowed to connect to SFTP
- nothing outside the SFTP directory is exposed
- no SSH login is allowed
- any extra security measures are welcome
Mount the removable drive which will hold the SFTP area (you might need to add an entry in fstab).
Create the account to be used for SFTP access (on a Debian system this will do the trick):
# adduser --system --home /media/Store/sftp --shell /usr/sbin/nologin sftp
This creates the sftp account with login disabled and /usr/sbin/nologin as its shell, and creates the home directory for this user.
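A quick way to double-check the new account (the passwd entry should show /media/Store/sftp as the home directory and /usr/sbin/nologin as the shell):
# getent passwd sftp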
Unfortunately, the default ownership of this user's home directory is incompatible with chroot-ing in SFTP (which prevents access to other files on the server). A message like the one below will be generated in this case:
$ sftp -v sftp@localhost
debug1: Authentication succeeded (password).
Authenticated to localhost ([::1]:22).
debug1: channel 0: new [client-session]
debug1: Requesting no-more-sessions@openssh.com
debug1: Entering interactive session.
Write failed: Broken pipe
Couldn't read packet: Connection reset by peer
Also, /var/log/auth.log will contain something like this:
fatal: bad ownership or modes for chroot directory "/media/Store/sftp"
The default permissions are visible using the 'namei -l' command on the sftp home directory:
# namei -l /media/Store/sftp
drwxr-xr-x root root /
drwxr-xr-x root root media
drwxr-xr-x root root Store
drwxr-xr-x sftp nogroup sftp
We change the ownership of the sftp directory and make sure there is a place for files to be uploaded in the SFTP area:
# chown root:root /media/Store/sftp
# mkdir /media/Store/sftp/upload
# chown sftp /media/Store/sftp/upload
We isolate the sftp users from other users on the system and configure a chroot-ed environment for all users accessing the SFTP server:
# addgroup sftpusers
# adduser sftp sftpusers
Set a password for the sftp user so password authentication works:
# passwd sftp
Putting all the pieces together, we restrict access to the sftp user only, allowing it to connect via password authentication to SFTP, but not SSH (and disallowing tunneling, forwarding, and empty passwords).
Here are the changes done in /etc/ssh/sshd_config:
Subsystem sftp internal-sftp
AllowUsers sftp
Match Group sftpusers
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
    PermitTunnel no
    PermitEmptyPasswords no
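Before reloading, it's worth letting sshd validate the new configuration; in test mode it prints an error and exits non-zero if the file is invalid:
# /usr/sbin/sshd -t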
Reload the sshd configuration (I'm using systemd):
# systemctl reload ssh.service
Check that the sftp user can't log in via SSH:
$ ssh sftp@localhost
This service allows sftp connections only.
Connection to localhost closed.
But SFTP is working and is restricted to the SFTP area:
$ sftp sftp@localhost
Connected to localhost.
Remote working directory: /
sftp> put netbsd-nfs.bin
Uploading netbsd-nfs.bin to /netbsd-nfs.bin
remote open("/netbsd-nfs.bin"): Permission denied
sftp> cd upload
sftp> put netbsd-nfs.bin
Uploading netbsd-nfs.bin to /upload/netbsd-nfs.bin
netbsd-nfs.bin 100% 3111KB 3.0MB/s 00:00
Now your system is ready to accept SFTP connections: files can be uploaded to the upload directory, and whenever the external drive is unmounted, SFTP will NOT work.
Note: since we added 'AllowUsers sftp', you can verify that no other local user can log in via SSH. If you don't want to restrict access to the sftp user only, you can whitelist other users by adding them to the AllowUsers directive, or drop it entirely so all local users can SSH into the system.
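For example, to also let a regular admin account in (the second user name is a placeholder):
AllowUsers sftp adminuser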
DebConf team: Second Call for Proposals and Approved Talks for DebConf15 (Posted by DebConf Content Team)
DebConf15 will be held in Heidelberg, Germany from the 15th to the 22nd of August, 2015. The clock is ticking and our annual conference is approaching. There are less than three months to go, and the Call for Proposals period closes in only a few weeks.
This year, we are encouraging people to submit “half-length” 20-minute events, to allow attendees to have a broader view of the many things that go on in the project in the limited amount of time that we have.
To make sure that your proposal is part of the official DebConf schedule you should submit it before June 15th.
If you have already sent your proposal, please log in to summit and make sure to improve your description and title. This will help us fit the talks into tracks, and devise a cohesive schedule.
For more details on how to submit a proposal see: http://debconf15.debconf.org/proposals.xhtml.
Approved Talks
We have processed the proposals submitted up to now, and we are proud to announce the first batch of approved talks. Some of them:
- This APT has Super Cow Powers (David Kalnischkies)
- AppStream, Limba, XdgApp: Past, present and future (Matthias Klumpp)
- Onwards to Stretch (and other items from the Release Team) (Niels Thykier for the Release Team)
- GnuPG in Debian report (Daniel Kahn Gillmor)
- Stretching out for trustworthy reproducible builds - creating bit by bit identical binaries (Holger Levsen & Lunar)
- Debian sysadmin (and infrastructure) from an outsider/newcomer perspective (Donald Norwood)
- The Debian Long Term Support Team: Past, Present and Future (Raphaël Hertzog & Holger Levsen)
If you have already submitted your event and haven’t heard from us yet, don’t panic! We will contact you shortly.
We would really like to hear about new ideas, teams and projects related to Debian, so do not hesitate to submit yours.
See you in Heidelberg,
Prior to spending any time configuring a new physical server, I like to ensure that the hardware is fine.
To check memory, I boot into memtest86+ from the grub menu and let it run overnight.
Then I check the hard drives using:
smartctl -t long /dev/sdX
badblocks -swo badblocks.out /dev/sdX
Configuration
apt-get install etckeeper git sudo vim
To keep track of the configuration changes I make in /etc/, I use etckeeper to keep that directory in a git repository and make the following changes to the default /etc/etckeeper/etckeeper.conf (see the snippet after this list):
- turn off daily auto-commits
- turn off auto-commits before package installs
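A sketch of the corresponding settings in etckeeper.conf (these two option names exist in the stock Debian file; setting them to 1 disables the respective auto-commits):
AVOID_DAILY_AUTOCOMMITS=1
AVOID_COMMIT_BEFORE_INSTALL=1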
To get more control over the various packages I install, I change the default debconf level to medium:
dpkg-reconfigure debconf
Since I use vim for all of my configuration file editing, I make it the default editor:
update-alternatives --config editor
ssh
apt-get install openssh-server mosh fail2ban
Since most of my servers are set to UTC time, I like to use my local timezone when sshing into them. Looking at file timestamps is much less confusing that way.
I also ensure that the locale I use is available on the server by adding it to the list of generated locales:
dpkg-reconfigure locales
Other than that, I harden the ssh configuration and end up with the following settings in /etc/ssh/sshd_config (jessie):
HostKey /etc/ssh/ssh_host_ed25519_key
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
Ciphers chacha20-poly1305@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com
UsePrivilegeSeparation sandbox
AuthenticationMethods publickey
PasswordAuthentication no
PermitRootLogin no
AcceptEnv LANG LC_* TZ
LogLevel VERBOSE
AllowGroups sshuser
or the following for wheezy servers:
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
KexAlgorithms ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
Ciphers aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512,hmac-sha2-256
On those servers where I need duplicity/paramiko to work, I also add the following:
KexAlgorithms ...,diffie-hellman-group-exchange-sha1
MACs ...,hmac-sha1
Then I remove the "Accepted" filter in /etc/logcheck/ignore.d.server/ssh (first line) to get a notification whenever anybody successfully logs into my server.
I also create a new group and add the users that need ssh access to it:
addgroup sshuser
adduser francois sshuser
and add a timeout for root sessions by putting this in /root/.bash_profile:
TMOUT=600
Security checks
apt-get install logcheck logcheck-database fcheck tiger debsums
apt-get remove john john-data rpcbind tripwire
Logcheck is the main tool I use to keep an eye on log files, which is why I add a few additional log files to the default list in /etc/logcheck/logcheck.logfiles:
/var/log/apache2/error.log
/var/log/mail.err
/var/log/mail.warn
/var/log/mail.info
/var/log/fail2ban.log
while ensuring that the apache logfiles are readable by logcheck:
chmod a+rx /var/log/apache2
chmod a+r /var/log/apache2/*
and fixing the log rotation configuration by adding the following to /etc/logrotate.d/apache2:
create 644 root adm
I also modify the main logcheck configuration file (/etc/logcheck/logcheck.conf):
INTRO=0
FQDN=0
Other than that, I enable daily checks in /etc/default/debsums and customize a few tiger settings in /etc/tiger/tigerrc:
Tiger_Check_RUNPROC=Y
Tiger_Check_DELETED=Y
Tiger_Check_APACHE=Y
Tiger_FSScan_WDIR=Y
Tiger_SSH_Protocol='2'
Tiger_Passwd_Hashes='sha512'
Tiger_Running_Procs='rsyslogd cron atd /usr/sbin/apache2 postgres'
Tiger_Listening_ValidProcs='sshd|mosh-server|ntpd'
General hardening
apt-get install harden-clients harden-environment harden-servers apparmor apparmor-profiles apparmor-profiles-extra
While the harden packages are configuration-free, AppArmor must be manually enabled:
perl -pi -e 's,GRUB_CMDLINE_LINUX="(.*)"$,GRUB_CMDLINE_LINUX="$1 apparmor=1 security=apparmor",' /etc/default/grub
update-grub
Entropy and timekeeping
apt-get install haveged rng-tools ntp
To keep the system clock accurate and increase the amount of entropy available to the server, I install the above packages and add the tpm_rng module to /etc/modules.
Preventing mistakes
apt-get install molly-guard safe-rm sl
These packages are all about catching mistakes, such as accidental deletions and reboots. I also use tools that help me keep packages up to date and remove unnecessary or obsolete packages from servers. On Rackspace servers, a small configuration change is needed to automatically update the monitoring tools.
In addition to this, I use the update-notifier-common package along with the following cronjob in /etc/cron.daily/reboot-required:
#!/bin/sh
cat /var/run/reboot-required 2> /dev/null || true
to send me a notification whenever a kernel update requires a reboot to take effect.
Handy utilities
apt-get install renameutils atool iotop sysstat lsof mtr-tiny
Most of these tools are configuration-free, except for sysstat, which requires enabling data collection in /etc/default/sysstat to be useful.
Apache configuration
apt-get install apache2-mpm-event
While configuring apache is often specific to each server and the services that will be running on it, there are a few common changes I make.
I enable these in /etc/apache2/conf.d/security:
<Directory />
    AllowOverride None
    Order Deny,Allow
    Deny from all
</Directory>
ServerTokens Prod
ServerSignature Off
and remove cgi-bin directives from /etc/apache2/sites-enabled/000-default.
I also create a new /etc/apache2/conf.d/servername which contains:
ServerName machine_hostname
Mail
apt-get install postfix
Configuring mail properly is tricky but the following has worked for me.
In /etc/hostname, put the bare hostname (no domain), but in /etc/mailname put the fully qualified hostname.
Change the following in /etc/postfix/main.cf:
inet_interfaces = loopback-only
myhostname = (fully qualified hostname)
smtp_tls_security_level = may
smtp_tls_protocols = !SSLv2, !SSLv3
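To double-check the result and make it take effect, reviewing the non-default settings and reloading postfix should suffice (a sketch):
# postconf -n
# service postfix reload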
Set the following aliases in /etc/aliases:
- set francois as the destination of root emails
- set an external email address for francois
- set root as the destination for www-data emails
before running newaliases to update the aliases database.
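The resulting /etc/aliases entries could look like this (the external address is a placeholder):
root: francois
francois: francois@example.com
www-data: root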
Create a new cronjob (/etc/cron.hourly/checkmail):
#!/bin/sh
ls /var/mail
to ensure that email doesn't accumulate unmonitored on this box.
Finally, set reverse DNS for the server's IPv4 and IPv6 addresses and then test the whole setup using mail root.
Network tuning
To reduce the server's contribution to bufferbloat, I change the default kernel queueing discipline by putting the following in /etc/sysctl.conf:
net.core.default_qdisc=fq_codel
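To apply the change without a reboot, reload the sysctl settings and verify the active value (a sketch):
# sysctl -p
# sysctl net.core.default_qdisc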
In this guest post, Luke Herrington shares his experience with integrating an existing Drupal backend with a Backbone.Marionette Todo app.
If you're reading this, you probably already know about all of the great work that Gizra has done in the Drupal/REST space. If you haven't, I highly recommend you check out their github repo. Also see the RESTful module.
I saw the Todo Restful project and it got me thinking, "If Amitai did this right (hint: he did), then I should be able to get this working with Backbone pretty easily". I was pleasantly surprised! You can view the demo and get the source code for the Todo app with a Drupal backend.
Here's a simplified list of everything I had to do to get it working:
If you are sub-theming Drupal Bootstrap, you are probably spoiled by all of the awesome functionality that comes with the Bootstrap framework and the Drupal Bootstrap theme. One place where you can’t easily throw a row and col class around your divs through the admin UI is if you are creating a Webform.
I came up with a quick solution to this that, with a little setup, allows the user to leverage Bootstrap through the Webform UI...
Last week, I was in sunny Los Angeles for DrupalCon 2015. Though many were seasoned veterans, it was my first time at a Con. It was a whirlwind of team building, a magical Prenote, great one-on-one conversations and plenty of Drupal talk. Needless to say, I'm still recovering! But if one thing is certain, our team had a wonderful time. Here are some of their takeaways:
Commercial Progression presents Hooked on Drupal, "Episode 9: DrupalCon LA 2015 Highlights with Steve Burge from OSTraining". In this special DrupalCon edition of Hooked on Drupal, we conferenced in Steve Burge of OSTraining for an on-the-ground report from Los Angeles. Held on May 11-15, 2015, DrupalCon LA was the premier event for the Drupal community. Steve brings us the inside scoop on highlights and takeaways as the conference wraps up. Additionally, Alex Fisher (also a DrupalCon veteran) shares his memories and insights from past DrupalCons. Commercial Progression has recently sponsored OSTraining with a $5000 kickstarter backing to bring Drupal 8 upgrade training to the masses. This new collection of video resources will be released in September 2015. With Dries' call from DrupalCon to support Drupal as a public utility, this announcement seems especially timely.
Hooked on Drupal is available for RSS syndication here at the Commercial Progression site. Additionally, each episode is available to watch online via our YouTube channel, within the iTunes store, on SoundCloud, and now via Stitcher.
If you would like to participate as a guest or contributor, please email us at
Content Links and Related Information
- DrupalCon LA 2015
- Steve Burge
- DrupalCon Los Angeles Review from Steve Burge
- Drupal 8 Upgrade Training Sponsorship by Commercial Progression
ALEX FISHER - Founder of Commercial Progression
STEVE BURGE - Founder of OSTraining
The Drupal 8 multilingual team is really great at spreading know-how on the new things in the upcoming version, so we had our session (1h) and workshop (2h) recordings published and widely available. While we of course love our baby and can talk all day about it, who has hours when they just want to explore what is coming up? We just addressed that this week with the following.
1. New 2m22s introduction video with the key benefits
2. A quick summary of key benefits and an easy-to-skim features list
http://www.drupal8multilingual.org/#topbenefits lists the top 12 benefits and http://www.drupal8multilingual.org/features provides the more detailed information in an easy-to-skim text form. And yeah, there's that 1h session video if you have the time.
3. Easy-to-launch demo to try features out
Thanks to our work on the multilingual workshops for DrupalCons, BADCamp and DrupalCamps, we have a demo with sample content in 4 languages that you can try out in your browser for 30 minutes, without any registration or local software install required, thanks to simplytest.me.
4. Check out who voted with their feet already
Drupal 8 is not yet released, yet there are numerous live multilingual Drupal 8 sites helping with nature preservation, finding health professionals or concert tickets, among other good uses. Now there is a handy list to review at http://www.drupal8multilingual.org/showcase.
If you like what you see, we still have guided workshops (those that last 2h). The next one is coming up right this Sunday at DrupalCamp Spain. We also believe that the multilingual team is one of the best to get involved with if you want to know Drupal 8 better and give back some to improve the new version as well. We have weekly meetings and a huge sprint coming up at DrupalCon Barcelona. Maybe we'll have some opportunity to celebrate as well. See you there!
Years ago now, the Drupal community adopted Git as a version control system to replace CVS. That move has helped development since the distributed nature of Git allows better tracking of work privately before uploading a patch to drupal.org.
Sandbox repositories allow contributors to clone an existing project to work on independently (therefore not needing permissions for the canonical repository), but there is currently no way that I know of to request that those changes be pulled back, have them reviewed, and then merged in (a pull request).
Hopefully that functionality is on the way!
But as a community the challenge is not just the development on drupal.org, collaboration with GitHub, or whatever form the technical change takes. Alongside those changes, we need the workflows that will help us better manage multiple versions, allow fast bug fixes whilst features are being tested, and provide for reviews without alienating developers. And the technical element goes hand in hand with the workflow.
As an example, for the Drupal PM module, we recently debated how to set up Git branches to allow more flexibility than the traditional "single line of code" inherited from CVS.
There were a few criteria that the new solution had to have:
- Flexibility to apply bug fixes to a release more quickly: under the "single line of code" approach, releasing only bug fixes would require ad-hoc branches and tags.
- Fit with drupal.org infrastructure: in particular, we'd like users to be able to test a development version without cloning from Git, so the development release on drupal.org needed to correspond to an appropriate codeset for people to test.
- Alignment to industry standard approaches where possible: Looking into what is used elsewhere in the software world, the Gitflow model has been received well.
Putting all of this together and discussing on Skype and a drupal.org issue, we came up with a branching model that seems to fit these criteria.
For each major version of the module (e.g., 7.x-1.x, 7.x-2.x, 8.x-1.x), we will have the following branches:
- Release branches: There will be one release branch for each major version, named after the version (for example: "7.x-1.x"). The codebase in here will always be the release candidate for the next point release, and those point releases will always be tagged from this release branch.
- Development branches: There will be one development branch for each major version, named "develop-[version]" (for example: "develop-7.x-1.x"). This will effectively be a staging branch for the next release but one. Features will be merged into here, and then this development branch will be merged into the release branch when the next release candidate is required.
- Feature branches: There will be one feature branch for each feature (drupal.org issue), named "feature-[issue]-[title]" (for example, "feature-12345-add-feature"). These will be worked on until the given feature is finished. Once completed, the feature branch is merged into the development branch.
- Hotfix branches: There will be one hotfix branch for each bug fix (drupal.org issue), named "hotfix-[issue]-[title]" (for example, "hotfix-12345-fix-bug"). These will be worked on until the bug is confirmed fixed. Once completed, the hotfix branch is merged into both the development and release branches (see the sketch after this list).
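As an illustration, the lifecycle of a hotfix under this model might look like the following on the command line (issue number and branch names are made up):
git checkout -b hotfix-12345-fix-bug 7.x-1.x
# ... fix the bug, commit ...
git checkout 7.x-1.x
git merge --no-ff hotfix-12345-fix-bug
git checkout develop-7.x-1.x
git merge --no-ff hotfix-12345-fix-bug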
We're just beginning to use this system in its entirety, and I hope that it works out.
One caveat is that the system only works for developers with permissions on the project repository. I would love for any contributor to be able to fit into this model and to have the pull request system available for the final merge... perhaps soon...
Last week most of Lullabot was at DrupalCon Los Angeles. In this episode Addison Berry, Greg Dunlap, Matthew Tift, Chris Albrecht, Helena Zubkow, and Will Hetherington share their thoughts and experiences from the whirlwind of awesome that is DrupalCon. We chat about the keynotes, and the contrast between them, session picks, the coffeepocalypse, the real value of DrupalCon, and let Greg rant a bit, which is always a fun romp.
Weblate 2.3 has been released today. It comes with better features for project owners, better file format support and more configuration options for users.
Full list of changes for 2.3:
- Dropped support for Django 1.6 and South migrations.
- Support for adding new translations when using Java properties files.
- Allow accepting a suggestion without editing.
- Improved support for Google OAuth 2.0.
- Added support for Microsoft .resx files.
- Tuned default robots.txt to disallow heavy crawling of translations.
- Simplified workflow for accepting suggestions.
- Added project owners, who always receive important notifications.
- Allow disabling editing of a monolingual template.
- More detailed repository status view.
- Direct link for editing the template when changing a translation.
- Allow granting more permissions to project owners.
- Allow showing a secondary language in zen mode.
- Support for hiding the source string in favor of a secondary language.
You can find more information about Weblate on http://weblate.org; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the demo password, or register your own user.
If you run a free software project which would like to use Weblate, I'm happy to help you with setup or even host Weblate for you.
Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far!
If you're building a Drupal website with a lot of content for a community of users, chances are you'll need to set up some editorial controls. Starting with the Workbench and Workbench Moderation modules, you can create editorial workflows for content types. Nodes pass through different 'States', like Draft, Needs Review, and Published. Different User Roles control the flow of nodes through these different states...
Earlier this week Matt Mullenweg, founder and CEO of Automattic, parent company of WordPress.com, announced the acquisition of WooCommerce. This is a very interesting move that I think cements the SMB/enterprise positioning between WordPress and Drupal.
As Matt points out, a huge percentage of the digital experiences on the web are now powered by open source solutions: WordPress, Joomla and Drupal. Yet one question the acquisition may evoke is: "How will open source platforms drive ecommerce innovation in the future?"
Larger retailers with complex requirements usually rely on bespoke commerce engines or build their online stores on solutions such as Demandware, Hybris and Magento. Small businesses access essential functions such as secure transaction processing, product information management, shipping and tax calculations, and PCI compliance from third-party solutions such as Shopify, Amazon's merchant services and, increasingly, solutions from Squarespace and Wix.
I believe the WooCommerce acquisition by Automattic puts WordPress in a better position to compete against the slickly marketed offerings from Squarespace and Wix, and defend WordPress's popular position among small businesses. WooCommerce brings to WordPress a commerce toolkit with essential functions such as payments processing, inventory management, cart checkout and tax calculations.
Drupal has a rich library of commerce solutions ranging from Drupal Commerce -- a library of modules offered by Commerce Guys -- to connectors offered by Acquia for Demandware and other ecommerce engines. Brands such as LUSH Cosmetics handle all of their ecommerce operations with Drupal; others, such as Puma, use a Drupal-Demandware integration to combine the best elements of content and commerce to deliver stunning shopping experiences that break down the old division between brand marketing experiences and the shopping process. Companies such as Tesla Motors have created their own custom commerce engine and rely on Drupal to deliver the front-end customer experience across multiple digital channels, from traditional websites to mobile devices, in-store kiosks and more.
To me, this further accentuates the division of the CMS market with WordPress dominating the small business segment and Drupal further solidifying its position with larger organizations with more complex requirements. I'm looking forward to seeing what the next few years will bring for the open source commerce world, and I'd love to hear your opinion in the comments.
DrupalCon News: The PM Track: What it’s about and how to get your session picked +Bonus 40 ideas for sessions you can steal!
If you’re anything like me, right now you’re thinking: Finally! It’s a very exciting moment for those in our field who have craved ways to collaborate, learn from experiences and refine our craft. The DrupalCon team has heard our request loud and clear, and we can now enjoy the very first Project Management Track!
So I did not make it along to DrupalCon Los Angeles, but I did spend some time reading Twitter and watching the sessions online. Here are some of the sessions I found entertaining and insightful and would recommend to others.
Driesnote Keynote
Dries, as always, sets the lay of the land with Drupal. He also goes into the early days of Drupal, and how some key people he was involved with have now gone on to form organisations that centre around Drupal.
Obstacles don’t block the path, they are the path
No
Larry Garfield gives an interesting talk on why sometimes it is best to say NO in order to give focus to the things that actually matter.
Case in point: the new MacBook Airs say NO TO EVERYTHING.
PHP Containers at Scale: 5K Containers per Server
David Strauss explains the history of web hosting, and how this is now far more complex. David is CTO of Pantheon, and they now run 100,000+ websites, all with dev + test + production environments. Pantheon runs 150+ containers on a 30GB box (205MB each on average). A really interesting talk on how to run large numbers of sites efficiently.
Decoupled Drupal: When, Why, and How
Amitai Burstein and Josh Koenig give a really entertaining presentation on monolithic architectures and some developer frustrations, and then introduce REST web services in Drupal 8 and how they can be used to provide better consumer interfaces for other frameworks.
Features for Drupal 8
Mike Potter goes through the role Features played in Drupal 7, and how Features will adapt in Drupal 8 now that CMI is in. Features in Drupal 8 will be going back to its roots and provide 'bundles' of configuration for re-use.
Meet Commerce 2.x
Ryan and Bojan go through 1.x on Drupal 7, and how they have chosen to develop Commerce 2.x on Drupal 8. This is a complete rewrite. The hierarchical product model is really exciting.
How, When and Why to Patch a Module
Joshua Turton goes over what a patch is, when you should patch contributed modules, and how to keep track of these with Drush make.
My colleague Josh also wrote a blog post on how to use Drush make.
CI for CSS: Creating a Visual Regression Testing Workflow
A topic that I am passionate about is visual regression testing; here Kate Kligman goes through some tools that can help you test your site for visual changes. Tools covered include PhantomJS, SlimerJS, Selenium, and Wraith.
Speeding up Drupal 8 development using Drupal Console
Eduardo and Jesus give us an introduction to your new best friend in Drupal 8. Drupal Console is a Symfony-based CLI application that helps you write boilerplate code, e.g. to create a new module. Personally, I am excited about the form API generator, and the ability to create a new entity with a single command.
For more information see drupalconsole.com.
Q&A with Dries
As Drupal heads down from 130 critical issues to the current 22, what are some key concerns people have? The questions are answered by dries, xjm, webchick and alexpott.
Where can I find more videos?
Don’t worry, there are plenty more videos on the Drupal Association YouTube page.
If you have any awesome sessions that I have missed, let me know in the comments.