Feed aggregator

Daniel Leidert: Motion picture capturing: Debian + motion + Logitech C910

Planet Debian - Thu, 12/02/2015 - 20:02

Winter time is a good time for some nature observation. Yesterday I had a woodpecker (picture) in front of my kitchen window. During recent weeks there were long-tailed tits, a wren and other rarely seen birds. So I thought it might be a good idea to capture some of these events :) I still own a Logitech C910 USB camera which allows HD video capturing up to 1080p. So I checked the web for software that would start video capturing when motion is detected, and found motion, already available for Debian users. So I gave it a try. I tested all available resolutions of the camera together with the capturing results. I found that the resulting framerate of both the live stream and the captured video depends heavily on the resolution and a few configuration options. Below is a summary of my tests and the results I've achieved so far.

Logitech C910 HD camera

Just a bit of data regarding the camera. AFAIK it allows for smooth video streams up to 720p.


$ dmesg
[..]
usb 7-3: new high-speed USB device number 5 using ehci-pci
usb 7-3: New USB device found, idVendor=046d, idProduct=0821
usb 7-3: New USB device strings: Mfr=0, Product=0, SerialNumber=1
usb 7-3: SerialNumber: 91CF80A0
usb 7-3: current rate 0 is different from the runtime rate 16000
usb 7-3: current rate 0 is different from the runtime rate 32000
uvcvideo: Found UVC 1.00 device (046d:0821)
input: UVC Camera (046d:0821) as /devices/pci0000:00/0000:00:1a.7/usb7/7-3/7-3:1.2/input/input17

$ lsusb
[..]
Bus 007 Device 005: ID 046d:0821 Logitech, Inc. HD Webcam C910
[..]

$ v4l2-ctl -V -d /dev/video1
Format Video Capture:
Width/Height : 1280/720
Pixel Format : 'YUYV'
Field : None
Bytes per Line: 2560
Size Image : 1843200
Colorspace : SRGB

Also the uvcvideo kernel module is loaded and the user in question is part of the video group.

Installation and start

Installation of the software is as easy as always:

apt-get install motion

It is possible to run the software as a service. But for testing, I copied /etc/motion/motion.conf to ~/.motion/motion.conf, fixed its permissions (you cannot copy the file as a regular user - it's not world readable) and disabled the daemon mode.
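For reference, the copy-and-permissions part might look like this (a sketch, assuming sudo is available):

mkdir -p ~/.motion
sudo cp /etc/motion/motion.conf ~/.motion/motion.conf
sudo chown $USER: ~/.motion/motion.conf   # make the copy owned and readable by the user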


daemon off

Note that in my case the correct device is /dev/video1, because the laptop has a built-in camera, which is /dev/video0. Also, the target directory must be writable by my user:


videodevice /dev/video1
target_dir ~/Videos

Then running motion from the command line ...


motion
[..]
[0] [NTC] [ALL] motion_startup: Motion 3.2.12+git20140228 Started
[..]
[1] [NTC] [ALL] motion_init: Thread 1 started , motion detection Enabled
[0] [NTC] [ALL] main: Thread 1 is device: /dev/video1 input -1
[1] [NTC] [VID] v4l2_get_capability:
------------------------
cap.driver: "uvcvideo"
cap.card: "UVC Camera (046d:0821)"
cap.bus_info: "usb-0000:00:1a.7-1"
cap.capabilities=0x84000001
------------------------
[1] [NTC] [VID] v4l2_get_capability: - VIDEO_CAPTURE
[1] [NTC] [VID] v4l2_get_capability: - STREAMING
[1] [NTC] [VID] v4l2_select_input: name = "Camera 1", type 0x00000002, status 00000000
[1] [NTC] [VID] v4l2_select_input: - CAMERA
[..]
[1] [NTC] [ALL] image_ring_resize: Resizing pre_capture buffer to 1 items

... will begin to capture motion detection events and also output a live stream. CTRL+C will stop it again.

Live stream

The live stream is available by pointing the browser to localhost:8081. However, the stream seems to run at only 1 fps (frame per second), and indeed it does. The stream quality improves with this configuration:


stream_motion on
stream_maxrate 100

The first option causes the stream to run at only one fps while there is no motion detection event; otherwise the framerate increases to its maximum value, which is either the one given by stream_maxrate or the camera's limit. The quality of the stream picture can be increased a bit further by raising the stream_quality value. Because I need neither the stream nor the control feed, I disabled both:


stream_port 0
webcontrol_port 0
Picture capturing

By default, both video and pictures are captured when a motion event is detected. I'm not interested in these pictures, so I turned them off:

output_pictures off

FYI: If you want good picture quality, you should probably increase the value of quality.

Video capturing

This is the really interesting part :) Of course, if I'm going to "shoot" some birds (with the camera), a small image of, say, 320x240 pixels is not enough. The camera allows capture resolutions up to 1920x1080 pixels (1080p) and is advertised for smooth video streams up to 720p (1280x720 pixels). So I tried the following resolutions: 320x240, 640x480, 800x600, 640x360 (360p), 1280x720 (720p) and 1920x1080 (1080p). These are easily configured via the width and height variables. For example, the following configures motion for 1280x720 pixels (720p):


width 1280
height 720

The result was really disappointing. No event is captured with more than 20 fps. At higher resolutions the framerate drops even further, and at the highest resolution of 1920x1080 pixels the framerate is only two(!) fps. Also, every captured video runs much too fast, and even faster if the framerate variable is increased. Of course its default value of 2 (fps) is not enough for smooth videos. AFAIK the C910 can run at 30 fps at 1280x720 pixels. So increasing the value of framerate, the maximum framerate recorded, is a must-do. (If you want to test this yourself, check the log output for the value following event_new_video FPS.)
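Before tuning motion any further, it is worth asking the camera itself which framerates it claims to support per resolution. v4l2-ctl can list each pixel format together with the available frame intervals (the device node may differ on your system):

$ v4l2-ctl -d /dev/video1 --list-formats-ext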

The solution to the videos running too fast, however, is to increase the pre_capture value, the number of pre-captured (buffered) pictures from before motion was detected. Even small values like 3..5 result in a distinct improvement, though increasing the value further didn't have any effect. So the values below should get almost the most out of the camera and result in videos at normal speed.


framerate 100
pre_capture 5

Videos at 1280x720 pixels are still captured at only 10 fps and I don't know why; running guvcview, the same camera allows 30 fps at this resolution (even 60 fps at lower resolutions). However, even if the framerate could be higher, the resulting video runs smoothly. Still, the quality is just moderate (or, to be honest, still disappointing). It looks "pixelated"; only static pictures are sharp. It took me a while to fix this too, because I first thought the reason was the camera or missing hardware support. It is not :) The reason is that ffmpeg is configured to produce a moderate(?)-quality video. The relevant variables are ffmpeg_bps and ffmpeg_variable_bitrate. I got the best results by just changing the latter:


ffmpeg_variable_bitrate 2

Finally the resulting video quality is promising. I'll start with this configuration when setting up an observation camera for the bird feeding ground.

One more tweak: I got even better results by enabling the auto_brightness feature.


auto_brightness on
Complete configuration

So the complete configuration looks like this (only the options changed from the original config file are shown):


daemon off
videodevice /dev/video1
width 1280
height 720
framerate 100
auto_brightness on
ffmpeg_variable_bitrate 2
target_dir /home/user/Videos
stream_port 0 #8081
stream_motion on
stream_maxrate 100
webcontrol_port 0 #8080

Chromatic: Automated Servers and Deployments with Ansible & Jenkins

Planet Drupal - Thu, 12/02/2015 - 19:47

In a previous post, Dave talked about marginal gains and how, in aggregate, they can really add up. We recently made some infrastructure improvements that I first thought would be marginal, but quickly proved to be rather significant. We started leveraging Ansible for server creation/configuration and Jenkins to automate our code deployments.

We spend a lot of time spinning up servers, configuring them and repeatedly deploying code to them. For a Drupal-focused shop like ours, this process can get repetitive very quickly. The story usually goes something like this:

  1. Project begins
  2. Dev/Staging server(s) built from scratch (usually on Linode)
    1. Install Ubuntu
    2. Install Apache
    3. Install PHP
    4. Install MariaDB
    5. Install Git
    6. Install Drush
    7. Install Redis, etc.
  3. Deploy project codebase from GitHub
  4. Development happens
  5. Pull Requests opened, reviewed, merged
  6. Manually login to server via SSH, git pull
  7. More development happens
  8. Pull Requests opened, reviewed, merged
  9. Manually login to server via SSH, git pull
  10. and so on…

All of this can be visualized in a simple flowchart:

[Flowchart: Development Workflow Visualized (view in Google Docs)]

This story is repeated over and over. New client, new server, new deployments. How does that old programmer’s adage go? “Don’t Repeat Yourself?” Well, we finally got around to doing something about all of this server configuration and deployment repetition nonsense. We configured a Jenkins server to automatically handle our deployments and created Ansible roles and playbooks to easily spin up and configure new servers (specifically tuned for Drupal) at will. So now our story looks something like this:

[Flowchart: Development Workflow Visualized with Ansible & Jenkins (view in Google Docs)]

What is Ansible?

“Ansible is an IT automation tool. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates.”

Sounds like voodoo magic, doesn’t it? Well, I’m here to tell you it isn’t, that it works, and that you don’t have to be a certified sysadmin to use it. Though you may need one to set it all up for you. The basic premise is that you create “playbooks” to control your remote servers. These can be as complex as a set of steps to build a LAMP server up from scratch (see below), or as simple as a specific configuration that you wish to enforce. Typically, playbooks are made up of “roles”. Roles are “reusable abstractions”, as their docs page explains. You might have roles for installing Apache, adding Git, or adding a group of users’ public keys. String your roles together in a YAML file and that’s a playbook. Have a look at the official Ansible examples GitHub repo to see some real-life examples.

Automate Server Creation/Configuration with Ansible

We realized we were basically building the same Drupal-tuned servers over and over. While the various steps for this process are well documented, doing the actual work takes loads of time, is prone to error and really isn’t all that fun. Ansible to the rescue! We set out to build a playbook that would build a LAMP stack up from scratch, with all the tools we use consistently across all of our projects. Here’s an example playbook:
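A minimal sketch of the shape (the role names here are placeholders, not our exact production roles):

---
# lamp.yml: build a Drupal-ready LAMP box by stringing roles together
- hosts: webservers
  become: yes
  roles:
    - common     # base packages, users, public keys
    - apache
    - php
    - mariadb
    - git
    - drush

Running it against an inventory is then a single command: ansible-playbook -i hosts lamp.yml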

Benefits:

  • Consistent server environments: Adding additional servers to your stack is a piece of cake and you can be sure each new box will have the same exact configuration.
  • Quickly roll out updates: Update your playbook and rerun against the affected servers and each will get the update. Painless.
  • Add-on components: Easily tack on custom server components like Apache Solr by adding a single line to a server’s playbook.
  • Allow your ops team to focus on real problems: Developers can quickly create servers without needing to bug your ops guys about how to compile PHP or install Drush, allowing them to focus on higher priority tasks.
What is Jenkins?

“Jenkins is an award-winning application that monitors executions of repeated jobs, such as building a software project or jobs run by cron.”

Think of Jenkins as a very well-trained, super organized, exceptionally good record-keeping ops robot. Teach Jenkins a job once and Jenkins will repeat it over and over to your heart’s content. Jenkins keeps records of everything and will let you know should things ever go awry.

Deploy Code Automatically with Jenkins

Here’s the rundown of how we’re currently using Jenkins to automatically deploy code to our servers:

  1. Jenkins listens for changes to master via the Jenkins Github Plugin
  2. When changes are detected, Jenkins automatically kicks off a deployment by SSHing into our box and executing our standard deployment commands (sketched just after this list):
    1. cd /var/www/yourProject/docroot/
    2. git pull
    3. drush updb -y
    4. drush fra -y
    5. drush cc all
  3. Build statuses are reported to Slack via the Slack Notification Plugin
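Translated into a Jenkins “Execute shell” build step, the deployment in step 2 amounts to something like this (host, user and docroot path are examples, not our real ones):

ssh deploy@staging.example.com <<'COMMANDS'
cd /var/www/yourProject/docroot/
git pull
drush updb -y    # apply pending database updates
drush fra -y     # revert all features
drush cc all     # clear all caches
COMMANDS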

Here’s a full view of a configuration page for a deployment job:

The biggest benefit here is saving time. No more digging for SSH credentials. No more trying to remember where the docroot is on this machine. No more of the, “I can’t access that server, Bob usually handles…” nonsense. Jenkins has access to the server, Jenkins knows where the docroot is, and Jenkins runs the exact same deployment code every single time. The other huge win here, at least for me personally, is that it takes the worry out of deployments. Setting it up right the first time means a project lifetime of known workflow/deployments. No more worrying about if pushing the button breaks all the things.

What else is great about using Jenkins to deploy your code? Here are some quick hits:

  • Historical build data: Jenkins stores a record of every deployment. Should a deploy fail, you can see exactly when things broke down and why. Jenkins records everything that happened in a Console Output tab.
  • Empower non-server admins: users can log into Jenkins and kick off manual deployments or jobs at the push of a button. They don’t need to know how to log in via SSH or even how to run a single command from the command line.
  • Enforce Consistent Workflow: By using Jenkins to deploy your code you also end up enforcing consistent workflow. In our example, drush will revert features on every single deployment. This means that devs can’t be lazy and just switch things in production. Those changes would be lost on the next deploy!
  • Status Indicators across projects: The Jenkins dashboard shows a quick overview of all of your jobs. There’s status of the last build, an aggregated “weather report” of the last few builds, last build duration, etc. Super useful.
  • Slack Integration: You can easily configure jobs to report statuses back to Slack. We have ours set to report to each project channel when a build begins and when it succeeds or fails. Great visibility for everyone on the project.
Other possible automations with Jenkins
  • Automate scheduled tasks (like Drupal’s cron, mass emailing, report generation, etc.)
  • Run automated tests

Both of these tools have done wonders for our workflow. While there was certainly some up-front investment to get these built out, the gains on the back end have been tremendous. We’ve gained control of our environments and their creation. We’ve taken the worry and the repetition out of our deployments. We’ve freed up our developers to focus on the work at hand. Our clients are getting their code sooner. Our team members are interrupted less often. Win after win after win. If your team is facing similar challenges, consider implementing one or both of these tools. You’re sure to see similar results.


Raphaël Hertzog: Freexian’s report about Debian Long Term Support, January 2015

Planet Debian - Thu, 12/02/2015 - 19:24

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In January, 48 work hours were split equally among 4 paid contributors. Their reports are available:

Evolution of the situation

During the last month, the number of paid work hours has made a noticeable jump: we’re now at 58 hours per month. At this rate, we would need 3 more months to reach our minimal goal of funding the equivalent of a half-time position. Unfortunately, the number of new sponsors currently in the pipeline is not likely to be enough to produce a similar raise next month.

So, as usual, we are looking for more sponsors.

In terms of security updates waiting to be handled, the situation looks a bit worse than last month: the dla-needed.txt file lists 37 packages awaiting an update (7 more than last month), the list of open vulnerabilities in Squeeze shows about 63 affected packages in total (7 more than last month).

The increase is not too worrying, but the waiting time before an issue is dealt with is sometimes more problematic. To be able to deal with all incoming issues in a timely manner, the LTS team needs more resources: some months will have more issues than usual, some issues will take longer to handle than others, etc.

Thanks to our sponsors

The new sponsors of the month are in bold.



Drupal Watchdog: Contributing to Open Source Projects

Planet Drupal - Thu, 12/02/2015 - 17:54

Drupal is one of the largest and most successful open source projects, and much of our success is due to the vibrant and thriving community of contributors who make the platform what it is – the individuals who help put on Drupal Conferences and events, the documentation writers, the designers and usability experts, the developers who help write the software, and countless others.

Participating in open source communities is a rewarding experience that will help you advance and develop, both personally and professionally. Through participation, you gain an opportunity to learn from your peers. You are constantly challenged and exposed to new and interesting ideas, perspectives, and opinions. You are not only learning the current best practices, you are also helping develop innovative new solutions, which will improve the tools in your arsenal and take your career to the next level – not to mention contributing to your personal growth. (One of the five Drupal core committers for Drupal 8, Angie Byron got her start only a few years ago – as a student in the Google Summer of Code – and has rapidly advanced her skills and career through open source participation.)

Participation gives you significantly better insight and awareness. By attending Drupal events and engaging online, you place yourself in a better position to understand and leverage the solutions that are already available, know where and how to find those solutions, and have a clearer sense of how you can leverage them to achieve your goals. With this knowledge and experience you become capable of executing faster and more efficiently than your peers who don’t engage.


Sven Hoexter: Out of the comfort zone: What I learned from my net-snmp backport on CentOS 5 and 6

Planet Debian - Thu, 12/02/2015 - 16:19

This is a kind of roundup of things I learned after rolling out the stuff I wrote about here.

Bugs and fighting with multiarch rpm/yum

One oversight: I failed to special-case the perl-devel dependency of the net-snmp-devel package on CentOS 5 to depend on perl instead. That was easily fixed, but afterwards a yum install net-snmp-devel still failed, because it tried to install the stock CentOS 5 net-snmp-devel package and its dependencies. Closer investigation showed that it did so because I only provided x86_64 packages in our internal repository, but it wanted to install both i386 and x86_64 packages.

Looking around this issue is documented in the Fedora Wiki. So the modern way to deal with it is to make the dependency architecture dependend with the

%{?_isa}

macro. That is evaluated at build time of the src.rpm, and the dependency is then hardcoded together with the architecture. Compare

$ rpm -qRp net-snmp-devel-5.7.2-18.el6.x86_64.rpm | grep perl
perl-devel(x86-64)

to

$ rpm -qRp net-snmp-devel-5.7.2-19.el5.centos.x86_64.rpm | grep perl
perl

As you can see, it's missing for the el5 package, and that's because it's too old; or more specifically, rpm is too old to know about it.
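In the spec file, such an arch-qualified dependency is a one-line change (a minimal sketch):

# Expands to e.g. "perl-devel(x86-64)" where rpm knows the macro,
# and to plain "perl-devel" where it does not (as on el5).
Requires: perl-devel%{?_isa}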

The workaround we use for now is to explicitly install only the x86_64 version of net-snmp-devel on CentOS 5 when required. That's a

yum install net-snmp-devel.x86_64

Another possible workaround is to remove i386 from your repository or just blacklist it in your yum configuration. I've read that's possible but did not try it.

steal time still missing on CentOS 6

One of the motivations for this backport is support for steal time reporting via SNMP. For CentOS 5 that's solved with our net-snmp backport but for CentOS 6 we still do not see steal time. A short look around showed that it's also missing in top, vmstat and friends. Could it be a kernel issue?

Since we're already running on the CentOS Xen hypervisor, we gave the Linux 3.10.x packages a spin in a domU, and now we also have steal time reporting. Yes, a kernel issue with the stock CentOS 6/RHEL 6 kernel. I'm not sure where and how to file it, since RedHat moved to KVM.


Sven Hoexter: Comparing apples and oranges: IPv6 adoption vs HTTP/2 adoption

Planet Debian - Thu, 12/02/2015 - 15:34

I have not yet read the HTTP/2 draft, but I read the SPDY draft about 2.5 years ago. So I might be wrong here with my assumptions.

We're now about 17 years into the IPv6 migration and still we do not have widespread IPv6 adoption on the client side. That's mostly an issue of CPE device rotation at the customer side and updating provisioning solutions. For the server side we're more or less done. Operating system support is there, commercial firewall and router vendors picked it up in the last ten years, datacenter providers are ready, so we're waiting for the users to put some pressure on providing a service via IPv6.

Looking at HTTP/2, it's also a different protocol. The gap might be somewhat smaller than the one between IPv4 and IPv6, but it's still huge. Now I'd bet that in the case of HTTP/2 we'll also see a really slow adoption, but this time it's not the client side that's holding the adoption back. For HTTP/2 I've no doubts about fast adoption on the client side. Google and Mozilla nowadays provide a kind of continuous delivery of new features to the end user on a monthly basis (surprisingly, that also works for mobile devices!). So the web browser near you will soon have an HTTP/2 implementation. Even Microsoft is switching to a new development model of rolling updates with Windows 10 and the Internet Explorer successor. But looking at the server side, I doubt we'll have widespread HTTP/2 support within the next 5 years. Maybe in 10. But I doubt even that.

With all those reverse proxies, interception devices for header enrichment at mobile carriers, application servers, self written HTTP implementations, load balancers and so on I doubt we'll have a fast and smooth migration ahead. But maybe we're all lucky and I'm wrong. I'd really love to be wrong here.

Maybe we can provide working IPv4-IPv6 dual-stack setups for everyone in the meantime.


Code Karate: Drupal 7 Options Element: A quicker way to add radio and checkbox options

Planet Drupal - Thu, 12/02/2015 - 15:20
Episode Number: 192

We kept things simple for this episode of the DDoD. The Options Element module uses JavaScript to provide an easier way to create radio button and checkbox options for fields on a Drupal content type. Before this module you had to add a key|value pair for each option you wanted. With this module the key and the value are broken into two separate fields, making it easier to distinguish between them.

Tags: Drupal, Content Types, Fields, Drupal 7, Drupal Planet, UI/Design, Javascript

Daniel Leidert: Setting up a network buildd with pbuilder ... continued

Planet Debian - Thu, 12/02/2015 - 15:15

Last year I described my setup of a local network buildd using pbuilder, ccache, inoticoming and NFS. One then-still-open goal was to support different Debian releases. This is especially necessary for backports of e.g. bluefish. The rules for contributing backports require that an uploaded package include all changelog entries since the last version in debian-backports, or since stable if it's the first backported version. Therefore one needs to know the last version in e.g. wheezy-backports. Because I'm not typing the command myself (the source package only gets uploaded and inoticoming starts the build process), I was looking for a way to automatically retrieve that version and add the relevant -vX.Y-Z switch to dpkg-buildpackage.

The solution I found requires aptitude and a sources.list entry for the relevant release. If you are only interested in the solution, just jump to the end :)

I'm going to add the version switch to the DEBBUILDOPTS variable of pbuilder. In my setup I have a common (shared) snippet called /etc/pbuilderrc.inc and one configuration file per release and architecture, say /etc/pbuilderrc.amd64.stable. Now the first already contains ...

DEBBUILDOPTS="-us -uc -j2"

... and DEBBUILDOPTS can be extended in the latter:

DEBBUILDOPTS+="..."

Because the config file is parsed pretty early in the process, the package name has not yet been assigned to any variable. The last argument to pbuilder is the .dsc file, so I take the last argument and parse the file to retrieve the source package name.

cat ${@: -1} | grep -e ^Source | awk -F\ '{ print $2 }'

The solution above works because pbuilder is a BASH script; otherwise it may need some tweaking. I use the source package name because it is unique and there is just one :) Now with this name I check for all versions in wheezy* and stable* and sort them. The sort order of aptitude is from low to high, so the last line should contain the highest version. This covers both the possibility that there has not yet been a backport and the possibility that there is one:

aptitude versions -F '%p' --show-package-names=never --group-by=none --sort=version \
"?narrow(?source-package(^PACKAGE\$), ?or(?archive(^wheezy.*), ?archive(^stable.*)))" |\
tail -n 1 | sed -e 's#~bpo.*$##g'

The sed part is necessary because otherwise dpkg-genchanges would add a superfluous changelog entry (the last one of the last upload). To make things easier, I assign the name and version to variables. So this is the complete solution:


[..]
MYPACKAGE="`cat ${@: -1} | grep -e ^Source | awk -F\ '{ print $2 }'`"
MYBPOVERS="`aptitude versions -F '%p' --show-package-names=never --group-by=none --sort=version "?narrow(?source-package(^$MYPACKAGE\$), ?or(?archive(^wheezy.*), ?archive(^stable.*)))" | tail -n 1 | sed -e 's#~bpo.*$##g'`"

log "I: Package is $MYPACKAGE and last stable/bpo version is $MYBPOVERS"

DEBBUILDOPTS+=" -v$MYBPOVERS"
[..]
Examples

I recently built a new bluefish backport. The last backport version is 2.2.6-1~bpo70+1. There is also the stable version 2.2.3-4. So the version I need is 2.2.6-1 (note that 2.2.6-1~bpo70+1 < 2.2.6-1!). Checking the log, it works:

I: Package is bluefish and last stable/bpo version is 2.2.6-1

A different example is rsync. I recently rebuilt it locally for a stable system (I wanted to make use of the --chown switch). There is not yet a backport, so the version I (would) need is 3.0.9-4. Checking the logs again, this works too:

I: Package is rsync and last stable/bpo version is 3.0.9-4

Feedback appreciated ...


ERPAL: The 6 most important steps of the ERPAL Platform roadmap

Planet Drupal - Thu, 12/02/2015 - 13:04

In 2014 I got in contact with many other Drupal shops. We had lots of great discussions about the future of Drupal, the future of ERPAL and the industries other than publishing that could definitely take advantage of Drupal. What with all the new ideas and results from these personal contacts, I want to take a little time now to make the ERPAL roadmap in 2015 more transparent to you. All our activities in 2015 will align with our vision to make Drupal – via the ERPAL distributions – into the most flexible web-based framework available for business applications.
In some of my previous blog posts and the Drupal application module stack poster, I’ve shown why I think Drupal has all the components needed for flexible business applications.

As we’re almost done with the development work to release a first beta version of ERPAL Platform, the next steps need to be planned out. In 2015 we’ll focus on the following six roadmap activities:

What, exactly, do these roadmap steps mean? Here are the details on each one:

Teach other developers how to develop business applications with Drupal
Modeling business processes and implementing them in software isn’t an easy job. Over the last three years, we’ve discovered many best practices for analyzing processes and using Drupal for business applications; we want to share these with the Drupal community, so we’ll release more screencasts and blogposts covering the most important ones. Sticking to best practices like using a combination of rules, entities, fields, feeds, views, and commerce modules – all modules that can be extended easily with custom plugins – will keep Drupal applications flexible, extendible and maintainable. 

Port ERPAL Platform to Drupal 8
Our goal is to have a first alpha release of ERPAL Platform ready six months after Drupal 8 has been released. Since there’s currently no reliable roadmap for the first Drupal 8 release, we can’t announce a fixed deadline. We’ve already started porting ERPAL Core to bring flexible resource planning to Drupal 8, but we do depend on the Drupal commerce roadmap for Drupal 8, which contains many improvements to the overall architecture of Drupal commerce. As soon as there’s a stable beta release of Drupal commerce, we’ll continue with our port of ERPAL Platform based on Drupal commerce 2.x.

Start our development partner network
In 2015 we’ll start our development partner program, building a network of qualified Drupal developers and shops who focus on the quality and flexibility of Drupal applications. Our development partners will benefit from our support in their projects as well as from new business opportunities stemming from our corporate marketing promoting them. For Drupal, this means more people striving to bring Drupal into other industries and increase its application range. This strategic goal is tightly related to the first roadmap activity, teach other Drupal developers to build business applications with Drupal.

Promote Drupal business applications and industry case studies created by the community
Two extremely important facts that I realized at Drupalcon in Amsterdam 2014 were:

  1. that almost everyone agreed that Drupal is a better application framework than a CMS
  2. that it’s perfectly suited for business applications because it’s open, flexible and can be integrated with other enterprise legacy software

What’s missing, however, are public project references with case studies showing potential clients the power of Drupal – not only for content sites but also for business applications in different industries and their integrations. With this promotion, we want to help our partners grow their business in this market while simultaneously increasing Drupal’s uptake in other vertical markets.

Release the Drupal update automation service, “Drop Guard”
The technology to automate Drupal updates, and security updates in particular, has already been in use for more than 2.5 years at Bright Solutions. We realized with Drupalgeddon that Drupal security updates are business critical: they need to be applied within minutes after their release! This year we want to launch Drop Guard as a service for Drupal developers to help shops and agencies keep their clients’ sites secured – automatically. The service will integrate with their CI deployment processes and help Drupal avoid the negative press of hacked sites. If you want to know how it actually works in our internal infrastructure and how it’s integrated with ERPAL, read my previous blog post.

Provide cloud app integration for ERPAL Platform
With ERPAL for Service Providers we created a Drupal distribution that gives service providers a centralized, web-based platform for managing all their business processes within one tool. The Drupal distribution, ERPAL Platform, provides Drupal users and site builders with a pre-configured distribution to build flexible business applications based on Drupal commerce and other flexible contrib modules. Since ERPAL Platform implements the full sales process – starting with first contact and sales activity; quotes, orders and invoicing; all the way through to reports – and a slim project controlling feature, we want to let users extend this solution easily and with the best vertical cloud tools out there. Via this solution, ERPAL Platform can integrate with cloud apps such as Jira, Trello, Mite, Redmine, Basecamp, Toggle and many others. This has the benefit that users can use ERPAL Platform as their central business process and controlling application while their project collaboration is supported by specialized platforms. The clear advantage is that agencies will save lots of time in project controlling and administration, as many processes can be automated across all integrated applications. Using Drupal as their centralized platform, they remain flexible and agile in their business development.

What about the roadmap for ERPAL for Service Providers?
ERPAL for Service Providers is currently very stable and is already being used by more than 30 of our known customers at Bright Solutions. We will continue to maintain this distribution, fix bugs and give support to all users. During the lifecycle of Drupal 8, we’ll port ERPAL for Service Providers to be based on ERPAL Platform. So, in the future, ERPAL Platform will be the base distribution for building a vertical use case for service providers. 


Annertech: 10 Great reasons why you should attend your local Drupal meet-up

Planet Drupal - Thu, 12/02/2015 - 12:41

Drupal has a vibrant community supporting it. A lot of people around the world are involved in its development, in way more than a purely technical sense. How do they do it? Drupal Groups.

Drupal Groups: Where Drupal community members organise, plan and work on projects.

At groups.drupal.org you can find groups based on geography, join online groups dedicated to planning upcoming events, or working groups devoted to a particular aspect of Drupal and Drupal distributions.


Christian Perrier: Bug #777777

Planet Debian - Thu, 12/02/2015 - 06:57
Who is going to report bug #777777? :-)

Shomeya: Why you should ALWAYS have a troubleshooting guide

Planet Drupal - Thu, 12/02/2015 - 03:40

Your demo is in 4hrs. 4 hours! The issues you have left could take that much time, not even counting pushing the changes and hoping it doesn't break the dev site.

Oh, and did I mention...your design changes aren't showing up. So you can't fix anything right now. ANYTHING! You haven't changed an issue to DONE in over an hour.

You've tried everything.

Save. Reload. Save. RELOAD AGAIN. What could it be?


John Goerzen: Reactions to “Has modern Linux lost its way?” and the value of simplicity

Planet Debian - Thu, 12/02/2015 - 00:39

Apparently I touched a nerve with my recent post about the growing complexity of issues.

There were quite a few good comments, which I’ll mention here. It’s provided me some clarity on the problem, in fact. I’ll try to distill a few more thoughts here.

The value of simplicity and predictability

The best software, whether it’s operating systems or anything else, is predictable. You read the documentation, or explore the interface, and you can make a logical prediction that “when I do action X, the result will be Y.” grep and cat are perfect examples of this.

The more complex the rules in the software, the harder it is for us to predict. It leads to bugs, and it leads to inadvertent security holes. Worse, it leads to people being unable to fix things themselves — one of the key freedoms that Free Software is supposed to provide. The more complex software is, the fewer people will be able to fix it by themselves.

Now, I want to clarify: I hear a lot of talk about “ease of use.” Gnome removes options in my print dialog box to make it “easier to use.” (This is why I do not use Gnome. It actually makes it harder to use, because now I have to go find some obscure way to just make the darn thing print.) A lot of people conflate ease of use with ease of learning, but in reality, I am talking about neither.

I am talking about ease of analysis. The Linux command line may not have pointy-clicky icons, but — at least at one time — once you understood ls -l and how groups, users, and permission bits interacted, you could fairly easily conclude who had access to what on a system. Now we have a situation where the answer to this is quite unclear in terms of desktop environments (apparently some distros ship network-manager so that all users on the system share the wifi passwords they enter. A surprise, eh?)

I don’t mind reading a manpage to learn about something, so long as the manpage was written to inform.

With this situation of dbus/cgmanager/polkit/etc, here’s what it feels like. This, to me, is the crux of the problem:

It feels like we are in a twisty maze, every passage looks alike, and our flashlight ran out of batteries in 2013. The manpages, to the extent they exist for things like cgmanager and polkit, describe the texture of the walls in our cavern, but don’t give us a map of the cave. Therefore, we are each left to piece it together a little bit at a time, but there are traps that keep moving around, so it’s slow going.

And it’s a really big cave.

Other user perceptions

There are a lot of comments on the blog about this. It is clear that the problem is not specific to Debian. For instance:

  • Christopher writes that on Fedora, “annoying, niggling problems that used to be straightforward to isolate, diagnose and resolve by virtue of the platform’s simple, logical architecture have morphed into a morass that’s worse than the Windows Registry.” Alessandro Perucchi adds that he’s been using Linux for decades, and now his wifi doesn’t work, suspend doesn’t work, etc. in Fedora and he is surprisingly unable to fix it himself.
  • Nate bargman writes, in a really insightful comment, “I do feel like as though I’m becoming less a master of and more of a slave to the computing software I use. This is not a good thing.”
  • Singh makes the valid point that this stuff is in such a state of flux that even if a person is one of the few dozen in the world that understand what goes into a session today, the knowledge will be outdated in 6 months. (Hal, anyone?)

This stuff is really important, folks. People being able to maintain their own software, work with it themselves, etc. is one of the core reasons that Free Software exists in the first place. It is a fundamental value of our community. For decades, we have been struggling for survival, for relevance. When I started using Linux, it was both a question and an accomplishment to have a useable web browser on many platforms. (Netscape Navigator was closed source back then.) Now we have succeeded. We have GPL-licensed and BSD-licensed software running on everything from our smartphones to cars.

But we are snatching defeat from the jaws of victory, because just as we are managing to remove the legal roadblocks that kept people from true mastery of their software, we are erecting technological ones that make the step into the Free Software world so much more difficult than it needs to be.

We no longer have to craft Modelines for X, or compile a kernel with just the right drivers. This is progress. Our hardware is mostly auto-detected and our USB serial dongles work properly more often on Linux than on Windows. This is progress. Even our printers and scanners work pretty darn well. This is progress, too.

But in place of all these things, now we have userspace mucking it up. We have people with mysterious errors that can’t be easily assisted by the elders in the community, because the elders are just as mystified. We have bugs crop up that would once have been shallow, but are now non-obvious. We are going to leave a sour taste in people’s mouths, and stir repulsion instead of interest among those just checking it out.

The ways out

It’s a nasty predicament, isn’t it? What are your ways out of that cave without being eaten by a grue?

Obviously the best bet is to get rid of the traps and the grues. Somehow the people that are working on this need to understand that elegance is a feature — a darn important feature. Sadly I think this ship may have already sailed.

Software diagnosis tools like Enrico Zini’s seat-inspect idea can also help. If we have something like an “ls for polkit” that can reduce all the complexity to something more manageable, that’s great.

The next best thing is a good map — good manpages, detailed logs, good error messages. If software were more verbose about permission errors, people could get a good clue about where to look. If manpages for software didn’t just explain the cavern wall texture, but explained how this room relates to all the other nearby rooms, it would be tremendously helpful.

At present, I am unsure if our problem is one of very poor documentation, or is so bad that good documentation like this is impossible because the underlying design is so complex it defies being documented in something smaller than a book (in which case, our ship has not just sailed but is taking on water).

Counter-argument: progress

One theme that came up often in the comments is that this is necessary for progress. To a certain extent, I buy that. I get why udev is important. I get why we want the DE software to interact well. But here’s my thing: this already worked well in wheezy. Gnome, XFCE, and KDE software all could mount/unmount my drives. I am truly still unsure what problem all this solved.

Yes, cloud companies have demanding requirements about security. I work for one. Making security more difficult to audit doesn’t do me any favors, I can assure you.

The systemd angle

To my surprise, systemd came up quite often in the discussion, despite the fact that I mentioned I wasn’t running systemd-sysv. It seems like the new desktop environment ecosystem is “the systemd ecosystem” in a lot of people’s minds. I’m not certain this is justified; systemd was not my first choice, but as I said in an earlier blog post, “jessie will still boot”.

A final note

I still run Debian on all my personal boxes and I’m not going to change. It does awesome things. For under $100, I built a music-playing system, with Raspberry Pis, fully synced throughout my house, using a little scripting and software. The same thing from Sonos would have cost thousands. I am passionate about this community and its values. Even when jessie releases with polkit and all the rest, I’m still going to use it, because it is still a good distro from good people.


SitePoint PHP Drupal: Push your Drupal Site’s Events to your Phone with Pushover

Planet Drupal - Wed, 11/02/2015 - 18:00

In this article I am going to show you how you can integrate Pushover with your Drupal site. I will illustrate a couple of examples of how you can use Pushover to notify yourself as soon as something happens on your site.

The code I write in this article is also available in this repository so you can just clone that if you want to follow along.

What is Pushover?

Pushover is a web and mobile application that allows you to get real time notifications on your mobile device. The way it works is that you install an app on your Android or Apple device and using a handy API you can send that app notifications. The great thing about this is that it happens more or less in real time (depending on your internet connection) as Pushover uses the Google and Apple servers to send the notifications.
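Under the hood, a notification is just an authenticated HTTP POST to the Pushover messages endpoint. From the command line that might look like this (the token and user key are placeholders):

curl -s \
  --form-string "token=APP_TOKEN" \
  --form-string "user=USER_KEY" \
  --form-string "message=New comment awaiting approval" \
  https://api.pushover.net/1/messages.json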

The price is also very affordable. At a rate of $4.99 USD per platform (Android, Apple or desktop), paid only once, you can use it on any number of devices on that platform. You also get a free 5-day trial period the moment you create your account.

What am I doing here?

In this article I am going to set up a Pushover application and use it from my Drupal site to notify my phone of various events. I will give you two example use cases that Pushover can be handy with:

  • Whenever an anonymous user posts a comment that awaits administrative approval, I’ll send a notification to my phone
  • Whenever the admin user 1 logs into the site, I’ll send an emergency notification to my phone (useful if you are the only user of that admin account).

Naturally, these are examples and you may not find them useful. But they only serve as illustration of the power you can have by using Pushover.

Continue reading: Push your Drupal Site’s Events to your Phone with Pushover


Michal Čihař: Hosted Weblate welcomes new projects

Planet Debian - Wed, 11/02/2015 - 18:00

In the past few days, several new free software projects have been added to Hosted Weblate. If you are interested in translating your project there, just follow the instructions on our website.

The new projects include:


Daniel Leidert: Blogger RSS feed and category URLs with combined labels/tags

Planet Debian - Wed, 11/02/2015 - 17:07

Run a blog on blogger.com? Maybe made it bilingual? Maybe blog on different topics? Wondering what the URL of an RSS feed for, e.g., two labels looks like? Asking how to see all articles matching two tags (labels)? Or how to find a keyword under one or more labels? Many things are possible. I'll show a few examples below. Maybe this is even interesting for the planet Debian folks. I happen to blog mostly in English about Debian topics, but sometimes I also want to post something in German only (e.g. about German tax software). It is discouraged to put the latter on planet-debian; instead it can be published in the language-specific planet feed. So instead of adding new tags, one could easily combine two labels: the one for the language of the feed and the one for Debian-related posts (e.g. debian+english or debian+german). Therefore this post goes to the Debian planet.

Search for combined labels/tags

Say I want to view all postings related to the topics FOO and BAR. Then it is:


http://domain.tld/search/label/FOO+BAR OR
http://domain.tld/search/?q=label:FOO+label:BAR

Be aware that labels are case sensitive and that more labels can be added. The examples below will show all postings related to the topics debian and n54l and xbmc:


http://www.wgdd.de/search/label/debian+n54l+xbmc
http://www.wgdd.de/search/?q=label:debian+label:n54l+label:xbmc

It is also possible to search for all posts related to the topics FOO or BAR:


http://domain.tld/search/?q=label:FOO|label:BAR

Say, for example, you want to see all postings related to the topics logitech or toshiba. Then it is:


http://www.wgdd.de/search/?q=label:logitech|label:toshiba
Feed URLs

To get back to the first example, let's say the feed shall contain all posts related to the topics FOO and BAR. Then it is:


http://domain.tld/feeds/posts/default/-/FOO/BAR/ OR
http://domain.tld/feeds/posts/default?q=label:FOO+label:BAR

Respectively, to show all feeds related to either of those topics, use:


http://domain.tld/feeds/posts/default/?q=label:FOO|label:BAR

The feed URLs for the example topics shown above would then be:


http://www.wgdd.de/feeds/posts/default/-/debian/n54l/xbmc/
http://www.wgdd.de/feeds/posts/default?q=label:debian+label:n54l+label:xbmc
http://www.wgdd.de/feeds/posts/default?q=label:logitech|label:toshiba

Coming back to planet Debian, below is a solution for a multi-lingual planet contribution (if both planets existed):


http://www.wgdd.de/feeds/posts/default?q=label:planet-debian+label:english
http://www.wgdd.de/feeds/posts/default?q=label:planet-debian+label:german
Advanced ...

Much more is possible. I'll just show two more examples. AND and OR can be combined ...

http://www.wgdd.de/feeds/posts/default?q=label:debian+(label:logitech|label:toshiba)

... and a keyword search can be added too:

http://www.wgdd.de/feeds/posts/default?q=stick+(label:debian+(label:logitech|label:toshiba))

Appnovation Technologies: How to create an area plugin for views

Planet Drupal - Wed, 11/02/2015 - 16:50

Sometimes using views, you need to place some dynamic content in the header or footer of a view.


Acquia: Working on Remote Teams – the Developers

Planet Drupal - Wed, 11/02/2015 - 16:33

Part 1 of 2 – I ran into Elia Albarran, Four Kitchens' Operations Manager ... ahem "Funmaster", in the inspiring atmosphere of BADCamp 2014. She mentioned she'd read my blog post 10 Tips for Success as a Remote Employee; we started exchanging tips and ideas until I basically yelled, "Stop! I need to get this on camera for the podcast!" She graciously agreed and brought along two Four Kitchens developers for the session, too: Taylor Smith and Matt Grill.


cs_shadow: Summary of Google Code-In 2014 and Welcome GSoC 2015

Planet Drupal - Wed, 11/02/2015 - 15:33

tl;dr Quick links
- Google Code-In 2014 results: http://google-opensource.blogspot.in/2015/02/google-code-in-2014-welcome...
- Google Summer of Code announcement: https://groups.drupal.org/node/456563
- Google Summer of Code Task Wiki: https://groups.drupal.org/node/455978
- Relevant groups to join: https://groups.drupal.org/google-summer-code and https://groups.drupal.org/google-code-in
- Getting started guide for GSoC students: https://www.drupal.org/node/2415225

And that's a wrap for Google Code-In 2014

As you may know, Drupal recently participated in Google Code-In 2014, a contest for high school students aged 13-17. We received great participation from students all around the world, who contributed heavily to Drupal over the past couple of months. I served as one of the organization administrators for Drupal and had the wonderful opportunity to mentor these students and watch their transformation from complete newbies into Core contributors.

Tons of Core issues worked on, lots of documentation created or updated, and a bunch of modules ported - yes, that's what GCI meant for Drupal. For a more comprehensive list, you can look at the complete list of tasks on Melange. Although all the participants did great, a few stood apart from the others.

  • Getulio Sanchez (gvso) [Grand Prize Winner]. Among other tasks, he ported a bunch of interesting modules to Drupal 8 - FB Like Button, Login Destination, Administer Users by Role, and Delete All, to name a few. He also writes on his blog about his experience with Drupal in GCI and why he chose Drupal: https://conocimientoplus.wordpress.com/2015/02/07/what-google-code-in-an.... It's a good read, especially for students who're interested in working with Drupal in GCI/GSoC.

  • Tasya Rukmana (tadityar) [Grand Prize Winner]. She rocked the Core issue queue and went on to become the 2500th Core contributor (albeit by sheer luck, it was nice motivation for her anyway):

#Drupal 8 now has over 2500 contributors! Congratulations tadityar https://t.co/T2PdgbML0a on becoming the 2500th D8 contributor on Dec. 9.

— xjm (@xjmdrupal) December 12, 2014

Read more about her experience on her blog post: http://tadityar.web.id/2015/02/04/my-google-code-in-experience-with-drupal/.

  • Akshay Kalose (akshaykalsoe) [Runner up]. Besides reviewing some of the GSoC 2014 projects and contributing to the issue queue, his most important task was to set up a Drupal 8 installation using load balancing (with HAProxy); you can read more about this in his blog post: http://www.kalose.net/oss/drupal-8-rdf-ui-schema-org-mappings/.

  • Our other two finalists were Ilkin Musaev (Polonium) and Mark Klein (areke), who also did great.

Congratulations to all the finalists and prize winners!

Welcome Google Summer of Code 2015

While GCI was a lot of fun, it's over now. To keep the momentum going, we've decided to apply as an organization for GSoC again. To stay tuned for further updates regarding GSoC, join our discussion group here: https://groups.drupal.org/google-summer-code. If you or any of your friends/colleagues have an idea and/or want to mentor a project in GSoC 2015, please add that information to our GSoC 2015 Task Organization wiki.

We'd like to apply with at least 30 solid ideas, and the deadline is 20th February (about 8-9 days from now). Please add your ideas before 18th February so that we have some time to review them before we submit them to Google. Even if you're not available as a mentor, please share the ideas page (https://groups.drupal.org/node/455978) to help make Drupal more AWESOME for everyone.

If you have any issues or doubts, feel free to contact me or Matthew Lechleider (Slurpee), either via our contact page or via the comments below. You can also ask questions on our IRC channel: #drupal-google on Freenode.

For Students: where to start

All the instructions you need are documented here: https://www.drupal.org/node/2415225, but the following is a short summary of the most important points.

If you're a student reading this post, the first thing you need to do is join our GSoC discussion group. Also, feel free to hang out on our IRC channel: #drupal-google. Even if you don't have any specific doubts at the moment, just keep IRC open in one window and try to follow the discussion if it interests you (whenever you can). If you want to start contributing to Drupal, you can go through the official Getting Involved Guide. Since the amount of text in that guide might be overwhelming at first, the above-mentioned link should suffice for your immediate needs.

The most important thing is to connect with mentors as much as possible so that you can discuss and refine your ideas further. If you find an idea in the Task Organization Wiki that interests you, feel free to contact the mentor either via mail or on IRC. If you have an interesting idea that you'd like to propose for GSoC, you can also add it to the wiki, but you need to contact the admins first. If you'd like to read some tips for the GSoC application, you can read my last post: http://chandansingh.net/blog/tips-google-summer-code-0. Best of luck!

Tags: Google Summer of Code, Drupal Planet, gsoc2015, gsoc, Google Code-In, gci2014

Drupalize.Me: Release Day: Free Introduction to PhpStorm IDE

Planet Drupal - Wed, 11/02/2015 - 15:15

This month, we're excited to partner with JetBrains and provide to our wonderful members and curious public (hey, that's you!) a completely free series that will get you up and running like a pro with PhpStorm.

