Feed aggregator

DrupalCon News: Session Spotlight: the Business track is for more than just business people

Planet Drupal - Mon, 27/07/2015 - 20:20

Whether you're counting Business Summit attendees or conference registrants with C-Suite titles, last year DrupalCon Europe saw about 500 attendees who were highly interested in the business side of Drupal. As we saw in the Business Track and the business-related BoFs, there is strong interest at Cons not only in learning the skills to code better, but also in making your business better, and DrupalCon Barcelona will be no different.

Categories: Elsewhere

Drupal Association News: Take the 2015 Drupal Job Market Survey

Planet Drupal - Mon, 27/07/2015 - 18:19

Last year we conducted a Drupal Job Market survey to better understand the opportunities for those who know Drupal. The survey showed strong demand for Drupal skills and demonstrated why Drupal is a rewarding and potentially lucrative career path. We are conducting another survey this year. 

Take the Survey

This year we are adding questions about compensation to help Drupal talent and hiring organizations benchmark themselves.

You can expect to see the results from the survey published in late August. Thank you for taking the survey!   

 

Categories: Elsewhere

Faichi.com: The Next Drupal Move

Planet Drupal - Mon, 27/07/2015 - 14:17
Categories: Elsewhere

Tim Millwood: Overriding Drupal 8 services

Planet Drupal - Mon, 27/07/2015 - 13:42
Since July 2014, Drupal 8 has had a way to override backend-specific services...
Categories: Elsewhere

Red Crackle: Adding multiple SKUs of a product

Planet Drupal - Mon, 27/07/2015 - 13:40
In this post, you will learn how to add multiple SKUs to a product. When a user adds the product to the cart, they will be able to select the specific SKU to check out. Creating multiple SKUs and showing them in the same product display is helpful when the underlying product is the same and only some of its attributes differ. A common variable attribute is color. In this specific example, we have used the number of LEDs within the flashlight as the attribute that the customer selects when purchasing.
Categories: Elsewhere

Annertech: How to Integrate your Drupal Website with Salesforce CRM

Planet Drupal - Mon, 27/07/2015 - 12:44

Recently, I wrote a blog post on the benefits of integrating your website and CRM, and Anthony followed up with another on the typical integration patterns you commonly see. Annertech have a lot of experience integrating Drupal websites with various CRMs, so this is the start of a new series on CRM integration where we will go into more detail on some of the more popular CRMs we’ve worked with.

Categories: Elsewhere

Drupal core announcements: Recording from July 24th 2015 Drupal 8 critical issues discussion

Planet Drupal - Mon, 27/07/2015 - 11:53

This was our ninth consecutive publicly recorded critical issues discussion meeting. (See all prior recordings.) Here is the recording of the meeting video and chat from Friday, in the hope that it helps more than just those who were at the meeting:

If you also have significant time to work on critical issues in Drupal 8 and we did not include you, let me know as soon as possible.

The meeting log is as follows (all times are GMT real time at the meeting):


10:08 WimLeers
https://www.drupal.org/project/issues/search/drupal?project_issue_followers=&status[0]=1&status[1]=13&status[2]=8&status[3]=14&status[4]=15&status[5]=4&priorities[0]=400&version[0]=8.x&issue_tags_op=%3D&issue_tags=D8%20cacheability

10:08 WimLeers
https://www.drupal.org/project/issues/search/drupal?project_issue_follow...

10:08 WimLeers
https://www.drupal.org/node/2524082
10:09 Druplicon
https://www.drupal.org/node/2524082 => Config overrides should provide cacheability metadata [#2524082]
=> 147 comments, 39 IRC mentions

10:09 WimLeers
https://www.drupal.org/node/2429617
10:09 Druplicon
https://www.drupal.org/node/2429617 => [PP-1] Make D8 2x as fast: SmartCache: context-dependent page caching (for *all* users!) [#2429617]
=> 226 comments, 21 IRC mentions

10:10 WimLeers
https://www.drupal.org/node/2499157
10:10 Druplicon
https://www.drupal.org/node/2499157 => Auto-placeholdering [#2499157]
=> 2 comments, 3 IRC mentions

10:14 pfrenssen
https://www.drupal.org/node/2524082
10:14 Druplicon
https://www.drupal.org/node/2524082 => Config overrides should provide cacheability metadata [#2524082]
=> 147 comments, 40 IRC mentions

10:14 pfrenssen
https://www.drupal.org/node/2525910
10:14 Druplicon
https://www.drupal.org/node/2525910 => Ensure token replacements have cacheability + attachments metadata and that it is bubbled in any case [#2525910]
=> 176 comments, 29 IRC mentions

10:18 alexpott
http://drupal.org/node/2538228
10:18 Druplicon
http://drupal.org/node/2538228 => Config save dispatches an event - may conflict with config structure changes in updates [#2538228]
=> 6 comments, 1 IRC mention

10:20 alexpott
https://www.drupal.org/node/2538514
10:20 Druplicon
https://www.drupal.org/node/2538514 => Remove argument support from TranslationWrapper [#2538514]
=> 12 comments, 4 IRC mentions

10:25 WimLeers
lauriii: welcome!
10:29 lauriii
WimLeers: little late because I'm in a sprint and was helping people ;<

10:45 alexpott
The upgrade path we're talking about http://drupal.org/node/2528178
10:45 Druplicon
http://drupal.org/node/2528178 => Provide an upgrade path for #2354889 (block context manager) [#2528178]
=> 143 comments, 1 IRC mention

10:52 alexpott
https://www.drupal.org/node/2538514
10:52 Druplicon
https://www.drupal.org/node/2538514 => Remove argument support from TranslationWrapper [#2538514]
=> 12 comments, 5 IRC mentions

10:52 WimLeers
dawehner++

11:02 dawehner
core/modules/views/src/Plugin/Derivative/ViewsEntityRow.php:100
11:02 catch
\Drupal\block\Plugin\Derivative\ThemeLocalTask also.

11:19 alexpott
berdir: is talking about http://drupal.org/node/2513094

11:19 Druplicon
http://drupal.org/node/2513094 => ContentEntityBase::getTranslatedField and ContentEntityBase::__clone break field reference to parent entity [#2513094]
=> 36 comments, 1 IRC mention

Categories: Elsewhere

Michael Stapelberg: dh-make-golang: creating Debian packages from Go packages

Planet Debian - Mon, 27/07/2015 - 08:50

Recently, the pkg-go team has been quite busy, uploading dozens of Go library packages in order to be able to package gcsfuse (a user-space file system for interacting with Google Cloud Storage) and InfluxDB (an open-source distributed time series database).

Packaging Go library packages (!) is a fairly repetitive process, so before starting my work on the dependencies for gcsfuse, I started writing a tool called dh-make-golang. Just like dh-make itself, the goal is to automatically create (almost) an entire Debian package.

As I worked my way through the dependencies of gcsfuse, I refined how the tool works, and now I believe it’s good enough for a first release.

To demonstrate how the tool works, let’s assume we want to package the Go library github.com/jacobsa/ratelimit:

midna /tmp $ dh-make-golang github.com/jacobsa/ratelimit
2015/07/25 18:25:39 Downloading "github.com/jacobsa/ratelimit/..."
2015/07/25 18:25:53 Determining upstream version number
2015/07/25 18:25:53 Package version is "0.0~git20150723.0.2ca5e0c"
2015/07/25 18:25:53 Determining dependencies
2015/07/25 18:25:55
2015/07/25 18:25:55 Packaging successfully created in /tmp/golang-github-jacobsa-ratelimit
2015/07/25 18:25:55
2015/07/25 18:25:55 Resolve all TODOs in itp-golang-github-jacobsa-ratelimit.txt, then email it out:
2015/07/25 18:25:55     sendmail -t -f < itp-golang-github-jacobsa-ratelimit.txt
2015/07/25 18:25:55
2015/07/25 18:25:55 Resolve all the TODOs in debian/, find them using:
2015/07/25 18:25:55     grep -r TODO debian
2015/07/25 18:25:55
2015/07/25 18:25:55 To build the package, commit the packaging and use gbp buildpackage:
2015/07/25 18:25:55     git add debian && git commit -a -m 'Initial packaging'
2015/07/25 18:25:55     gbp buildpackage --git-pbuilder
2015/07/25 18:25:55
2015/07/25 18:25:55 To create the packaging git repository on alioth, use:
2015/07/25 18:25:55     ssh git.debian.org "/git/pkg-go/setup-repository golang-github-jacobsa-ratelimit 'Packaging for golang-github-jacobsa-ratelimit'"
2015/07/25 18:25:55
2015/07/25 18:25:55 Once you are happy with your packaging, push it to alioth using:
2015/07/25 18:25:55     git push git+ssh://git.debian.org/git/pkg-go/packages/golang-github-jacobsa-ratelimit.git --tags master pristine-tar upstream

The ITP is often the most labor-intensive part of the packaging process, because any number of auto-detected values might be wrong: the repository owner might not be the “Upstream Author”, the repository might not have a short description, the long description might need some adjustments or the license might not be auto-detected.

midna /tmp $ cat itp-golang-github-jacobsa-ratelimit.txt
From: "Michael Stapelberg" <stapelberg AT debian.org>
To: submit@bugs.debian.org
Subject: ITP: golang-github-jacobsa-ratelimit -- Go package for rate limiting
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

Package: wnpp
Severity: wishlist
Owner: Michael Stapelberg <stapelberg AT debian.org>

* Package name    : golang-github-jacobsa-ratelimit
  Version         : 0.0~git20150723.0.2ca5e0c-1
  Upstream Author : Aaron Jacobs
* URL             : https://github.com/jacobsa/ratelimit
* License         : Apache-2.0
  Programming Lang: Go
  Description     : Go package for rate limiting

 GoDoc (https://godoc.org/github.com/jacobsa/ratelimit)
 .
 This package contains code for dealing with rate limiting. See the
 reference (http://godoc.org/github.com/jacobsa/ratelimit) for more info.

TODO: perhaps reasoning
midna /tmp $

After filling in all the TODOs in the file, let’s mail it out and get a sense of what else still needs to be done:

midna /tmp $ sendmail -t -f < itp-golang-github-jacobsa-ratelimit.txt
midna /tmp $ cd golang-github-jacobsa-ratelimit
midna /tmp/golang-github-jacobsa-ratelimit master $ grep -r TODO debian
debian/changelog: * Initial release (Closes: TODO)
midna /tmp/golang-github-jacobsa-ratelimit master $

After filling in these TODOs as well, let’s have a final look at what we’re about to build:

midna /tmp/golang-github-jacobsa-ratelimit master $ head -100 debian/**/*
==> debian/changelog <==
golang-github-jacobsa-ratelimit (0.0~git20150723.0.2ca5e0c-1) unstable; urgency=medium

  * Initial release (Closes: #793646)

 -- Michael Stapelberg <stapelberg@debian.org>  Sat, 25 Jul 2015 23:26:34 +0200

==> debian/compat <==
9

==> debian/control <==
Source: golang-github-jacobsa-ratelimit
Section: devel
Priority: extra
Maintainer: pkg-go <pkg-go-maintainers@lists.alioth.debian.org>
Uploaders: Michael Stapelberg <stapelberg@debian.org>
Build-Depends: debhelper (>= 9),
 dh-golang,
 golang-go,
 golang-github-jacobsa-gcloud-dev,
 golang-github-jacobsa-oglematchers-dev,
 golang-github-jacobsa-ogletest-dev,
 golang-github-jacobsa-syncutil-dev,
 golang-golang-x-net-dev
Standards-Version: 3.9.6
Homepage: https://github.com/jacobsa/ratelimit
Vcs-Browser: http://anonscm.debian.org/gitweb/?p=pkg-go/packages/golang-github-jacobsa-ratelimit.git;a=summary
Vcs-Git: git://anonscm.debian.org/pkg-go/packages/golang-github-jacobsa-ratelimit.git

Package: golang-github-jacobsa-ratelimit-dev
Architecture: all
Depends: ${shlibs:Depends},
 ${misc:Depends},
 golang-go,
 golang-github-jacobsa-gcloud-dev,
 golang-github-jacobsa-oglematchers-dev,
 golang-github-jacobsa-ogletest-dev,
 golang-github-jacobsa-syncutil-dev,
 golang-golang-x-net-dev
Built-Using: ${misc:Built-Using}
Description: Go package for rate limiting
 This package contains code for dealing with rate limiting. See the
 reference (http://godoc.org/github.com/jacobsa/ratelimit) for more info.

==> debian/copyright <==
Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: ratelimit
Source: https://github.com/jacobsa/ratelimit

Files: *
Copyright: 2015 Aaron Jacobs
License: Apache-2.0

Files: debian/*
Copyright: 2015 Michael Stapelberg <stapelberg@debian.org>
License: Apache-2.0
Comment: Debian packaging is licensed under the same terms as upstream

License: Apache-2.0
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at
 .
 http://www.apache.org/licenses/LICENSE-2.0
 .
 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
 implied. See the License for the specific language governing
 permissions and limitations under the License.
 .
 On Debian systems, the complete text of the Apache version 2.0 license
 can be found in "/usr/share/common-licenses/Apache-2.0".

==> debian/gbp.conf <==
[DEFAULT]
pristine-tar = True

==> debian/rules <==
#!/usr/bin/make -f

export DH_GOPKG := github.com/jacobsa/ratelimit

%:
        dh $@ --buildsystem=golang --with=golang

==> debian/source <==
head: error reading ‘debian/source’: Is a directory

==> debian/source/format <==
3.0 (quilt)
midna /tmp/golang-github-jacobsa-ratelimit master $

Okay, then. Let’s give it a shot and see if it builds:

midna /tmp/golang-github-jacobsa-ratelimit master $ git add debian && git commit -a -m 'Initial packaging'
[master 48f4c25] Initial packaging
 7 files changed, 75 insertions(+)
 create mode 100644 debian/changelog
 create mode 100644 debian/compat
 create mode 100644 debian/control
 create mode 100644 debian/copyright
 create mode 100644 debian/gbp.conf
 create mode 100755 debian/rules
 create mode 100644 debian/source/format
midna /tmp/golang-github-jacobsa-ratelimit master $ gbp buildpackage --git-pbuilder
[…]
midna /tmp/golang-github-jacobsa-ratelimit master $ lintian ../golang-github-jacobsa-ratelimit_0.0\~git20150723.0.2ca5e0c-1_amd64.changes
I: golang-github-jacobsa-ratelimit source: debian-watch-file-is-missing
P: golang-github-jacobsa-ratelimit-dev: no-upstream-changelog
I: golang-github-jacobsa-ratelimit-dev: extended-description-is-probably-too-short
midna /tmp/golang-github-jacobsa-ratelimit master $

This package just built (as it should!), but occasionally one might need to disable a test and file an upstream bug about it. So, let’s push this package to pkg-go and upload it:

midna /tmp/golang-github-jacobsa-ratelimit master $ ssh git.debian.org "/git/pkg-go/setup-repository golang-github-jacobsa-ratelimit 'Packaging for golang-github-jacobsa-ratelimit'"
Initialized empty shared Git repository in /srv/git.debian.org/git/pkg-go/packages/golang-github-jacobsa-ratelimit.git/
HEAD is now at ea6b1c5 add mrconfig for dh-make-golang
[master c5be5a1] add mrconfig for golang-github-jacobsa-ratelimit
 1 file changed, 3 insertions(+)
To /git/pkg-go/meta.git
   ea6b1c5..c5be5a1  master -> master
midna /tmp/golang-github-jacobsa-ratelimit master $ git push git+ssh://git.debian.org/git/pkg-go/packages/golang-github-jacobsa-ratelimit.git --tags master pristine-tar upstream
Counting objects: 31, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (25/25), done.
Writing objects: 100% (31/31), 18.38 KiB | 0 bytes/s, done.
Total 31 (delta 2), reused 0 (delta 0)
To git+ssh://git.debian.org/git/pkg-go/packages/golang-github-jacobsa-ratelimit.git
 * [new branch]      master -> master
 * [new branch]      pristine-tar -> pristine-tar
 * [new branch]      upstream -> upstream
 * [new tag]         upstream/0.0_git20150723.0.2ca5e0c -> upstream/0.0_git20150723.0.2ca5e0c
midna /tmp/golang-github-jacobsa-ratelimit master $ cd ..
midna /tmp $ debsign golang-github-jacobsa-ratelimit_0.0\~git20150723.0.2ca5e0c-1_amd64.changes
[…]
midna /tmp $ dput golang-github-jacobsa-ratelimit_0.0\~git20150723.0.2ca5e0c-1_amd64.changes
Uploading golang-github-jacobsa-ratelimit using ftp to ftp-master (host: ftp.upload.debian.org; directory: /pub/UploadQueue/)
[…]
Uploading golang-github-jacobsa-ratelimit_0.0~git20150723.0.2ca5e0c-1.dsc
Uploading golang-github-jacobsa-ratelimit_0.0~git20150723.0.2ca5e0c.orig.tar.bz2
Uploading golang-github-jacobsa-ratelimit_0.0~git20150723.0.2ca5e0c-1.debian.tar.xz
Uploading golang-github-jacobsa-ratelimit-dev_0.0~git20150723.0.2ca5e0c-1_all.deb
Uploading golang-github-jacobsa-ratelimit_0.0~git20150723.0.2ca5e0c-1_amd64.changes
midna /tmp $ cd golang-github-jacobsa-ratelimit
midna /tmp/golang-github-jacobsa-ratelimit master $ git tag debian/0.0_git20150723.0.2ca5e0c-1
midna /tmp/golang-github-jacobsa-ratelimit master $ git push git+ssh://git.debian.org/git/pkg-go/packages/golang-github-jacobsa-ratelimit.git --tags master pristine-tar upstream
Total 0 (delta 0), reused 0 (delta 0)
To git+ssh://git.debian.org/git/pkg-go/packages/golang-github-jacobsa-ratelimit.git
 * [new tag]         debian/0.0_git20150723.0.2ca5e0c-1 -> debian/0.0_git20150723.0.2ca5e0c-1
midna /tmp/golang-github-jacobsa-ratelimit master $

Thanks for reading this far, and I hope dh-make-golang makes your life a tiny bit easier. As dh-make-golang just entered Debian unstable, you can install it using apt-get install dh-make-golang. If you have any feedback, I’m eager to hear it.

Categories: Elsewhere

Dirk Eddelbuettel: Evading the "Hadley tax": Faster Travis tests for R

Planet Debian - Mon, 27/07/2015 - 03:35

Hadley is a popular figure, and rightly so as he successfully introduced many newcomers to the wonders offered by R. His approach strikes some of us old greybeards as wrong---I particularly take exception with some of his writing which frequently portrays a particular approach as both the best and only one. Real programming, I think, is often a little more nuanced and aware of tradeoffs which need to be balanced. As a book on another language once popularized: "There is more than one way to do things." But let us leave this discussion for another time.

As the reach of the Hadleyverse keeps spreading, we sometimes find ourselves at the receiving end of a cost/benefit tradeoff. That is what this post is about, and it uses a very concrete case I encountered yesterday.

As blogged earlier, the RcppZiggurat package was updated. I had not touched it in a year, but Brian Ripley had sent a brief and detailed note concerning something flagged by the Solaris compiler (correctly suggesting I replace fabs() with abs() on integer types). (Allow me to stray from the main story line here for a second to stress just how insane a workload he is carrying, essentially for all of us. R and the R community are just so indebted to him for all his work---which makes the usual social media banter about him so unfortunate. But that too shall be left for another time.) Upon making the simple fix and submitting to GitHub, the usual Travis CI build was triggered. And here is what I saw:


All happy, all green. Previous build a year ago, most recent build yesterday, both passed. But hold on: test time went from 2:54 minutes to 7:47 minutes for an increase of almost five minutes! And I knew that I had not added any new dependencies, or altered any build options. What did happen was that among the dependencies of my package, one had decided to now also depend on ggplot2. Which leads to a chain of sixteen additional packages being loaded besides the four I depend upon---when it used to be just one. And that took five minutes as all those packages are installed from source, and some are big and take a long time to compile.

There is, however, an easy alternative, and for that we have to praise Michael Rutter, who looks after a number of things for R on Ubuntu. Among these are the R builds for Ubuntu, but also the rrutter PPA as well as the c2d4u PPA. If you have not heard this alphabet soup before, a PPA is a package repository for Ubuntu where anyone (who wants to sign up) can upload (properly set up) source files which are then turned into Ubuntu binaries. With full dependency resolution and all the other goodies we have come to expect from the Debian / Ubuntu universe. And Michael uses this facility with great skill and calm to provide us all with Ubuntu binaries for R itself (rebuilding what yours truly uploads into Debian), as well as a number of key packages available via the CRAN mirrors. Less known, however, is "c2d4u", which stands for CRAN to Debian for Ubuntu. It builds on something Charles Blundell once built under my mentorship in a Google Summer of Code. Michael does a tremendous job covering well over a thousand CRAN source packages---and providing binaries for all. Which we can use for Travis!

What all that means is that I could now replace the line

- ./travis-tool.sh install_r RcppGSL rbenchmark microbenchmark highlight

which implies source builds of the four listed packages and all their dependencies with the following line implying binary installations of already built packages:

- ./travis-tool.sh install_aptget libgsl0-dev r-cran-rcppgsl r-cran-rbenchmark r-cran-microbenchmark r-cran-highlight

In this particular case I also needed to build a binary package of my RcppGSL package, as this one is not (yet) handled by Michael. I happen to have (re-)discovered the beauty of PPAs for Travis earlier this year and revitalized an older and largely dormant launchpad account I had for this PPA of mine. How to build a simple .deb package will also have to be left for a future post to keep this one concise.

This can be used with the existing r-travis setup---but one needs to use the older, initial variant in order to have the ability to install .deb packages. So in the .travis.yml of RcppZiggurat I just use

before_install:
  ## PPA for Rcpp and some other packages
  - sudo add-apt-repository -y ppa:edd/misc
  ## r-travis by Craig Citro et al
  - curl -OL http://raw.github.com/craigcitro/r-travis/master/scripts/travis-tool.sh
  - chmod 755 ./travis-tool.sh
  - ./travis-tool.sh bootstrap

to add my own PPA and all is good. If you do not have a PPA, or do not want to create your own packages you can still benefit from the PPAs by Michael and "mix and match" by installing from binary what is available, and from source what is not.

Here we were able to use an all-binary approach, so let's see the resulting performance:


Now we are at 1:03 to 1:15 minutes---much better.

So to conclude: while the ever-expanding universe of R packages is fantastic for us as users, it places a burden on us as developers when installing and testing. Fortunately, the packaging infrastructure built on top of Debian / Ubuntu packages can help and dramatically reduce build (and hence test) times. Learning about PPAs can be a helpful complement to learning about Travis and continuous integration. So maybe now I need a new reason to blame Hadley? Well, there is always snake case ...

Follow-up: The post got some pretty immediate feedback shortly after I posted it. Craig Citro pointed out (quite correctly) that I could use r_binary_install, which would also install the Ubuntu binaries based on their R package names. Having built R/CRAN packages for Debian for so long, I am simply more used to the r-cran-* notation, and I think I was also the one contributing install_aptget to r-travis ... Yihui Xie spoke up for the "new" Travis approach deploying containers, caching of packages and explicit whitelists. It was in that very (GH-based) discussion that I started to really lose faith in the new Travis approach, as they want us to whitelist each and every package. With 6,900 packages and counting at CRAN, I fear this simply does not scale. But different approaches are certainly welcome. I posted my 1:03 to 1:15 minutes result. If the "New School" can do it faster, I'd be all ears.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: Elsewhere

Wuinfo: Content as a Service

Planet Drupal - Mon, 27/07/2015 - 02:47

As one of Canada’s most successful integrated media and entertainment companies, Corus has multiple TV channels and a website for each channel.

It had been a challenge to display multiple channels' live schedule data on the websites. All the data come from a central repository, which is not always available. We had used the Feeds module to import all the schedule data, and each channel website keeps a live copy of it. Things got worse because of the way we updated the program items: we deleted all the current schedule data in the system and then re-imported it from the central repository. Sometimes our schedule pages went empty because the central repository was unavailable mid-update.

Pedram Tiv, the director of digital operations at Corus Entertainment, had a vision of building a robust schedule for all channels. He wanted to establish a Drupal website as a schedule service provider - content as a service. The service website downloads and synchronizes all channels' schedule data. Our content managers can also log in to the website and edit any schedule item, and the site keeps revisions of all changes. Since the central repository only provides raw data, it is helpful that we can edit a scheduled show title or series name.

I loved this brilliant idea as soon as he explained it to me. We are building a Drupal website as a content service provider - in effect, a CMS for other CMS websites. Scalability is always challenging for a modern website. To make the service scalable, Pedram added another layer of cache protection: an Amazon S3 cache between the schedule service and the front-end web servers. With it, the schedule service can handle more channels and millions of requests each day, because the front-end websites download schedule data from the S3 bucket only. We set up a cron job that creates and uploads seven days' worth of schedule data to S3; every day it uploads thousands of JSON schedule files covering the next seven days for different channels in different time zones.
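The daily export job described above can be sketched roughly as follows. This is an illustration only: the channel list, key layout, and fetch_day callback are assumptions rather than details from the project (the real system also writes per-time-zone variants), and the S3 client is passed in so any API-compatible client (such as boto3) could be used.

```python
import datetime
import json

# Hypothetical channel list; the real system covers many channels
# and writes per-time-zone variants as well.
CHANNELS = ["cmt", "abcspark"]

def upload_schedules(s3_client, bucket, start, fetch_day, days=7):
    """Render each of the next `days` days of every channel's schedule
    to JSON and push it to S3. Front-end sites read only these objects,
    so the schedule server itself can go down without an outage."""
    for channel in CHANNELS:
        for offset in range(days):
            day = start + datetime.timedelta(days=offset)
            key = "schedules/%s/%s.json" % (channel, day.isoformat())
            # fetch_day(channel, date) would pull that day's programme
            # list from the central repository.
            body = json.dumps(fetch_day(channel, day))
            s3_client.put_object(Bucket=bucket, Key=key, Body=body)
```

A nightly cron entry would then call upload_schedules with the repository-backed fetch_day, refreshing the whole seven-day window on every run.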

This setup offloaded pressure from the schedule server and lets it serve an effectively unlimited number of front-end users. It also gives a seven-day grace period: the schedule server can be offline without interrupting the service. One time, our schedule service was down for three days, yet the front-end sites were unaffected because we had seven days of schedule data in the S3 bucket. Using S3 as another layer of protection provides excellent high availability.

Our schedule service has been up and running for many months without a problem. There are over 100,000 active nodes in the system. For more detail about importing large amounts of content and building an efficient system, see our other blog posts about this project.

Sites that are using the schedule service now:
http://www.cmt.ca/schedule
http://www.teletoonlanuit.com/horaire
http://www.abcspark.ca/schedule/daily

Categories: Elsewhere

Gregor Herrmann: RC bugs 2015/30

Planet Debian - Sun, 26/07/2015 - 23:18

this week, besides other activities, I again managed to NMU a few packages as part of the GCC 5 transition. & again I could build on patches submitted by various HP engineers & other helpful souls.

  • #757525 – hardinfo: "hardinfo: FTBFS with clang instead of gcc"
    patch to build with -std=gnu89, upload to DELAYED/5
  • #758723 – nagios-plugins-rabbitmq: "should depend on libjson-perl"
    add missing dependency, upload to DELAYED/5
  • #777766 – src:adun.app: "adun.app: ftbfs with GCC-5"
    send updated patch to BTS
  • #777837 – src:ebview: "ebview: ftbfs with GCC-5"
    add patch from paulownia@Safe-mail.net, upload to DELAYED/5
  • #777882 – src:gnokii: "gnokii: ftbfs with GCC-5"
    build with -fgnu89-inline, upload to DELAYED/5
  • #777907 – src:hunt: "hunt: ftbfs with GCC-5"
    apply patch from Nicholas Luedtke, upload to DELAYED/5
  • #777920 – src:isdnutils: "isdnutils: ftbfs with GCC-5"
    add patch to build with -fgnu89-inline; upload to DELAYED/5
  • #778019 – src:multimon: "multimon: ftbfs with GCC-5"
    build with -fgnu89-inline; upload to DELAYED/5
  • #778068 – src:pork: "pork: ftbfs with GCC-5"
    build with -fgnu89-inline, QA upload
  • #778098 – src:quarry: "quarry: ftbfs with GCC-5"
    build with -std=gnu89, upload to DELAYED/5, then rescheduled to 0-day with maintainer's permission
  • #778099 – src:ratbox-services: "ratbox-services: ftbfs with GCC-5"
    build with -fgnu89-inline, upload to DELAYED/5, later cancelled because package is about to be removed (#793408)
  • #778109 – src:s51dude: "s51dude: ftbfs with GCC-5"
    build with -fgnu89-inline, upload to DELAYED/5
  • #778116 – src:shell-fm: "shell-fm: ftbfs with GCC-5"
    apply patch from Brett Johnson, upload to DELAYED/5
  • #778119 – src:simulavr: "simulavr: ftbfs with GCC-5"
    apply patch from Brett Johnson, QA upload
  • #778120 – src:sipsak: "sipsak: ftbfs with GCC-5"
    apply patch from Brett Johnson, upload to DELAYED/5
  • #778122 – src:skyeye: "skyeye: ftbfs with GCC-5"
    build with -fgnu89-inline, QA upload
  • #778140 – src:tcpcopy: "tcpcopy: ftbfs with GCC-5"
    add patch backported from upstream git, upload to DELAYED/5
  • #778145 – src:thewidgetfactory: "thewidgetfactory: ftbfs with GCC-5"
    add missing #include, upload to DELAYED/5
  • #778164 – src:vtun: "vtun: ftbfs with GCC-5"
    add patch from Tim Potter, upload to DELAYED/5
  • #790464 – flow-tools: "Please drop conditional build-depend on libmysqlclient15-dev"
    drop obsolete dependency, NMU
  • #793336 – src:libdevel-profile-perl: "libdevel-profile-perl: FTBFS with perl 5.22 in experimental (MakeMaker changes)"
    finish and upload package modernized by XTaran (pkg-perl)
  • #793580 – libb-hooks-parser-perl: "libb-hooks-parser-perl: B::Hooks::Parser::Install::Files missing"
    investigate and forward upstream, upload new upstream release later (pkg-perl)
Categories: Elsewhere

Paul Rowell: Drupal's admin pages can be beautiful!

Planet Drupal - Sun, 26/07/2015 - 22:20

So, it turns out the Drupal CMS can be beautiful. I kid you not! Anditko has updated the Adminimal theme with a material skin based on Android Lollipop. I've mentioned Adminimal before, an admin theme that greatly improves the look and feel of Drupal’s CMS, and the latest update takes it that step further into the land of stunning.

Categories: Elsewhere

Lunar: Reproducible builds: week 13 in Stretch cycle

Planet Debian - Sun, 26/07/2015 - 18:03

What happened in the reproducible builds effort this week:

Toolchain fixes
  • Emmanuel Bourg uploaded maven-archiver/2.6-3 which fixed parsing DEB_CHANGELOG_DATETIME with non English locales.
  • Emmanuel Bourg uploaded maven-repo-helper/1.8.12 which always uses the same system-independent encoding when transforming the pom files.
  • Piotr Ożarowski uploaded dh-python/2.20150719 which makes the order of the generated maintainer scripts deterministic. Original patch by Chris Lamb.

akira uploaded a new version of doxygen in the experimental “reproducible” repository incorporating an upstream patch for SOURCE_DATE_EPOCH, and now producing timezone-independent timestamps.

Dhole updated Peter De Wachter's patch on ghostscript to use SOURCE_DATE_EPOCH and use UTC as the timezone. A modified package is now being experimented with.
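The SOURCE_DATE_EPOCH convention both patches rely on is simple enough to sketch: a build tool that would normally embed "now" instead honours the environment variable (seconds since the Unix epoch) and formats it in UTC. A minimal illustration, not the actual doxygen or ghostscript code:

```python
import os
import time

def build_timestamp():
    """Return the timestamp a build tool should embed: SOURCE_DATE_EPOCH
    if set, otherwise the current time. Formatting with gmtime keeps the
    result independent of the build machine's timezone."""
    epoch = int(os.environ.get("SOURCE_DATE_EPOCH", time.time()))
    return time.strftime("%Y-%m-%d %H:%M:%S UTC", time.gmtime(epoch))
```

With the variable pinned to the latest debian/changelog date, two builds of the same source embed byte-identical timestamps.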

Packages fixed

The following 14 packages became reproducible due to changes in their build dependencies: bino, cfengine2, fwknop, gnome-software, jnr-constants, libextractor, libgtop2, maven-compiler-plugin, mk-configure, nanoc, octave-splines, octave-symbolic, riece, vdr-plugin-infosatepg.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues but not all of them:

Patches submitted which have not made their way to the archive yet:

  • #792943 on argus-client by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792945 on authbind by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792947 on cvs-mailcommit by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792949 on chimera2 by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792950 on ccze by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792951 on dbview by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792952 on dhcpdump by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792953 on dhcping by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792955 on dput by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792958 on dtaus by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792959 on elida by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792961 on enemies-of-carlotta by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792963 on erc by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792965 on fastforward by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792967 on fgetty by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792969 on flowscan by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792971 on junior-doc by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792972 on libjama by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792973 on liblip by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792974 on liblockfile by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792975 on libmsv by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792976 on logapp by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792977 on luakit by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792978 on nec by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792979 on runit by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792980 on tworld by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792981 on wmweather by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792982 on ftpcopy by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792983 on gerstensaft by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792984 on integrit by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792985 on ipsvd by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792986 on uruk by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792987 on jargon by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792988 on xbs by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792989 on freecdb by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792990 on skalibs by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792991 on gpsmanshp by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792993 on cgoban by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792994 on angband-doc by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792995 on abook by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792996 on bcron by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792998 on chiark-utils by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792999 on console-cyrillic by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #793000 on beav by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #793001 on blosxom by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #793002 on cgilib by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #793003 on daemontools by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #793004 on debdelta by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #793005 on checkpw by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #793006 on dropbear by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #793126 on torbutton by Dhole: set TZ=UTC when calling zip.
  • #793127 on pdf.js by Dhole: set TZ=UTC when calling zip.
  • #793300 on deejayd by Dhole: set TZ=UTC when calling zip.
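The recurring fix in the bugs above can be sketched as follows. This is an illustrative stand-in with a hard-coded date and made-up file names; a real package build would take the date from `dpkg-parsechangelog --show-field Date` and run over the files the build actually modified.

```shell
# Illustrative sketch: clamp the mtimes of build-modified files to the
# latest debian/changelog date so rebuilds produce identical metadata.
CHANGELOG_DATE='Sun, 26 Jul 2015 18:03:00 +0000'   # made-up changelog date
workdir=$(mktemp -d)
touch "$workdir/a.1.gz" "$workdir/b.conf"          # stand-ins for built files
find "$workdir" -type f -print0 | xargs -0r touch --date="$CHANGELOG_DATE"
date -u -r "$workdir/a.1.gz" '+%Y-%m-%d'           # -> 2015-07-26
rm -rf "$workdir"
```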
reproducible.debian.net

Packages identified as failing to build from source with no bugs filed and older than 10 days are scheduled more often now (except in experimental). (h01ger)

Package reviews

178 obsolete reviews have been removed, 59 added and 122 updated this week.

New issue identified this week: random_order_in_ruby_rdoc_indices.

18 new bugs for packages failing to build from sources have been reported by Chris West (Faux), and h01ger.


Lunar: Reproducible builds: week 12 in Stretch cycle

Planet Debian - Sun, 26/07/2015 - 17:41

What happened in the reproducible builds effort this week:

Toolchain fixes

Eric Dorlan uploaded automake-1.15/1:1.15-2 which makes the output of mdate-sh deterministic. Original patch by Reiner Herrmann.

Kenneth J. Pronovici uploaded epydoc/3.0.1+dfsg-8 which now honors SOURCE_DATE_EPOCH. Original patch by Reiner Herrmann.

Chris Lamb submitted a patch to dh-python to make the order of the generated maintainer scripts deterministic. Chris also offered a fix for a source of non-determinism in dpkg-shlibdeps when packages have alternative dependencies.

Dhole provided a patch to add support for SOURCE_DATE_EPOCH to gettext.

Packages fixed

The following 78 packages became reproducible in our setup due to changes in their build dependencies: chemical-mime-data, clojure-contrib, cobertura-maven-plugin, cpm, davical, debian-security-support, dfc, diction, dvdwizard, galternatives, gentlyweb-utils, gifticlib, gmtkbabel, gnuplot-mode, gplanarity, gpodder, gtg-trace, gyoto, highlight.js, htp, ibus-table, impressive, jags, jansi-native, jnr-constants, jthread, jwm, khronos-api, latex-coffee-stains, latex-make, latex2rtf, latexdiff, libcrcutil, libdc0, libdc1394-22, libidn2-0, libint, libjava-jdbc-clojure, libkryo-java, libphone-ui-shr, libpicocontainer-java, libraw1394, librostlab-blast, librostlab, libshevek, libstxxl, libtools-logging-clojure, libtools-macro-clojure, litl, londonlaw, ltsp, macsyfinder, mapnik, maven-compiler-plugin, mc, microdc2, miniupnpd, monajat, navit, pdmenu, pirl, plm, scikit-learn, snp-sites, sra-sdk, sunpinyin, tilda, vdr-plugin-dvd, vdr-plugin-epgsearch, vdr-plugin-remote, vdr-plugin-spider, vdr-plugin-streamdev, vdr-plugin-sudoku, vdr-plugin-xineliboutput, veromix, voxbo, xaos, xbae.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues but not all of them:

Patches submitted which have not made their way to the archive yet:

reproducible.debian.net

The statistics on the main page of reproducible.debian.net are now updated every five minutes. A random unreviewed package is suggested in the “look at a package” form on every build. (h01ger)

A new package set based on the Core Internet Infrastructure census has been added. (h01ger)

Testing of FreeBSD has started, though no results yet. More details have been posted to the freebsd-hackers mailing list. The build is run on a new virtual machine running FreeBSD 10.1 with 3 cores and 6 GB of RAM, also sponsored by Profitbricks.

strip-nondeterminism development

Andrew Ayer released version 0.009 of strip-nondeterminism. The new version will strip locales from Javadoc, include the name of files causing errors, and ignore unhandled (but rare) zip64 archives.

debbindiff development

Lunar continued his major refactoring to enhance code reuse and pave the way to fuzzy-matching and parallel processing. Most file comparators have now been converted to the new class hierarchy.

In order to support more archive formats, work has started on packaging Python bindings for libarchive. While getting support for more archive formats with a common interface is very nice, libarchive is a stream-oriented library and might have bad performance with how debbindiff currently works. Time will tell if better solutions need to be found.

Documentation update

Lunar started a Reproducible builds HOWTO intended to explain the different aspects of making software build reproducibly to the different audiences that might have to get involved like software authors, producers of binary packages, and distributors.

Package reviews

17 obsolete reviews have been removed, 212 added and 46 updated this week.

15 new bugs for packages failing to build from sources have been reported by Chris West (Faux), and Mattia Rizzolo.

Presentations

Lunar presented Debian efforts and some recipes on making software build reproducibly at Libre Software Meeting 2015. Slides and a video recording are available.

Misc.

h01ger, dkg, and Lunar attended a Core Infrastructure Initiative meeting. The progress and tools made for the Debian efforts were shown. Several discussions also helped getting a better understanding of the needs of other free software projects regarding reproducible builds. The idea of a global append-only log, similar to the logs used for Certificate Transparency, came up on multiple occasions. Using such append-only logs for keeping records of sources and build results has gotten the name “Binary Transparency Logs”. They would at least help identify a compromised software signing key. Whether the benefits of using such logs justify the costs needs more research.
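A toy sketch of such an append-only log, with invented artifact names: each entry's hash covers the previous entry's hash, so rewriting any point in the history invalidates every entry after it.

```shell
# Toy append-only log: each line is "<chained-hash> <artifact>".
# Artifact names are invented; a real log would record source and build hashes.
log=$(mktemp)
prev=$(printf '' | sha256sum | cut -d' ' -f1)    # sentinel for the empty chain
for artifact in hello_2.9-2_amd64.deb dash_0.5.7-4_amd64.deb; do
    hash=$(printf '%s %s' "$prev" "$artifact" | sha256sum | cut -d' ' -f1)
    printf '%s %s\n' "$hash" "$artifact" >> "$log"
    prev=$hash                                   # chain to the next entry
done
cat "$log"
rm -f "$log"
```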


Dirk Eddelbuettel: RcppZiggurat 0.1.3: Faster Random Normal Draws

Planet Debian - Sun, 26/07/2015 - 15:09

After a slight hiatus since the last release in early 2014, we are delighted to announce a new release of RcppZiggurat which is now on the CRAN network for R.

The RcppZiggurat package updates the code for the Ziggurat generator which provides very fast draws from a Normal distribution.

The package provides a simple C++ wrapper class for the generator improving on the very basic macros, and permits comparison among several existing Ziggurat implementations. This can be seen in the figure where Ziggurat from this package dominates accessing the implementations from the GSL, QuantLib and Gretl---all of which are still way faster than the default Normal generator in R (which is of course of higher code complexity).

This release contains a few internal cleanups relative to the last release. It was triggered by a very helpful email from Brian Ripley who noticed compiler warnings on the Solaris platform due to my incorrect use of fabs() on integer variables.

The NEWS file entry below lists all changes.

Changes in version 0.1.3 (2015-07-25)
  • Use the SHR3 generator for the default implementation just like Leong et al do, making our default implementation identical to theirs (but 32- and 64-bit compatible)

  • Switched generators from float to double ensuring that results are identical on 32- and 64-bit platforms

  • Simplified builds with respect to GSL use via the RcppGSL package; added a seed setter for the GSL variant

  • Corrected use of fabs() to abs() on integer variables, with a grateful nod to Brian Ripley for the hint (based on CRAN checks on the beloved Slowlaris machines)

  • Accelerated Travis CI tests by relying exclusively on r-cran-* packages from the PPAs by Michael Rutter and myself

  • Updated DESCRIPTION and NAMESPACE according to current best practices, and R-devel CMD check --as-cran checks

Courtesy of CRANberries, there is also a diffstat report for the most recent release. More information is on the RcppZiggurat page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Steinar H. Gunderson: DIY web video streaming

Planet Debian - Sun, 26/07/2015 - 13:00

I've recently taken a new(ish) look at streaming video for the web, in terms of what formats are out there. (When I say streaming, I mean live video; not static files where you can seek etc.) There's a bewildering array; most people would probably use a ready-made service such as Twitch, Ustream or YouTube, but they do have certain aspects that are less than ideal; for instance, you might need to pay (or have your viewers endure ads), you might be shut down at any time if they don't like your content (e.g. sending non-gaming content on Twitch, or using copyrighted music on YouTube), or the video quality might be less than ideal.

So what I'm going to talk about is mainly what format to choose; there are solutions that allow you to stream in many formats at once, but a) the amount of CPU you need is largely proportional to the number of different codecs you want to encode to, and b) I've never really seen any of these actually work well in practice; witness the Mistserver fiasco at FOSDEM last year, for instance (full disclosure: I was involved in the 2014 FOSDEM streaming, but not in 2015). So the goal is to find the minimum number of formats to maximize quality and client support.

So, let's have a look at the candidates:

We'll start in a corner with HLS. The reason is that mobile is becoming increasingly important, and Mobile Safari (iOS) basically only supports HLS, so if you want iOS support, this has to be high on your list. HLS is basically H.264+AAC in an MPEG-TS mux, split over many files (segments), with a .m3u8 file that is refreshed to inform about new segments. This can be served by anything that serves HTTP (including your favorite CDN), and if your encoder is up to it, you can get adaptive bandwidth control (which works so-so, but better than nothing), but unfortunately it also has high latency, and MPEG-TS is a pretty high-overhead mux (6–7%, IIRC).
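The playlist format itself is simple; a live playlist looks roughly like this (segment names and sequence numbers invented), and clients poll it to pick up new segments as they appear:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:2680
#EXTINF:10.0,
segment2680.ts
#EXTINF:10.0,
segment2681.ts
#EXTINF:10.0,
segment2682.ts
```

The absence of an #EXT-X-ENDLIST tag is what marks it as a live (rather than finished) stream.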

Unfortunately, basically nothing but Safari (iOS/OS X) supports HLS. (OK, that's not true; the Android browser does from Android 4.0, but supposedly 4.0 is really buggy and you really want something newer.) So unless you're in la-la land where nothing but Apple counts, you'll not only need HLS, but also something else. (Well, there's a library that claims to make Chrome/Firefox support HLS, but it's basically a bunch of JavaScript that remuxes each segment from MPEG-TS to MP4 on the fly, and hangs the entire streaming process while doing so.) Thankfully FFmpeg can remux from some other format into HLS, so it's not that painful.
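As a hedged sketch of that remuxing step (input URL and output path invented; flags per ffmpeg's HLS muxer), an existing H.264+AAC stream can be turned into HLS segments without re-encoding:

```shell
# Sketch only: remux an existing H.264+AAC stream into HLS segments.
ffmpeg -i http://example.com/live.flv \
    -codec copy \
    -f hls -hls_time 10 -hls_list_size 6 \
    /var/www/stream/live.m3u8
```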

MPEG-DASH is supposedly the new hotness, but like anything container-wise from MPEG, it's huge, tries to do way too many things and is generally poorly supported. Basically it's HLS (with the same delay problems) except that you can support a bazillion different codecs and multiple containers, and actual support out there is poor. The only real way to get it into a browser (assuming you can find anything stable that encodes an MPEG-DASH stream) is to load a 285kB JavaScript library into your browser, which tries to do all the metadata parsing in JavaScript, download the pieces with XHR and then piece them into the <video> tag with the Media Source Extensions API. And to pile on the problems, you can't really take an MPEG-DASH stream and feed it into something that's not a web browser, e.g. current versions of MPlayer/VLC/XBMC. (This matters if you have e.g. a separate HTPC that's remote-controlled. Admittedly, it might be a small segment depending on your audience.) Perhaps it will get better over time, but for the time being, I cannot really recommend it unless you're a huge corporation and have the resources to essentially make your own video player in JavaScript (YouTube or Twitch can, but the rest of us really can't).

Of course, a tried-and-tested solution is Flash, with its FLV and RTMP offerings. RTMP (in this context) is basically FLV over a different transport from HTTP, and I've found it to be basically pain from one end to the other; the solutions you get are either expensive (Adobe's stuff, Wowza), scale poorly (Wowza), or are buggy and non-interoperable in strange ways (nginx-rtmp). But H.264+AAC in FLV over HTTP (e.g. with VLC plus my own Cubemap reflector) works well against e.g. JW Player, and has good support… on desktop. (There's one snag, though, in that if you stream from HTTP, JW Player will believe that you're streaming a static file, and basically force you to zero client-side buffer. Thus, it ends up being continuously low on buffer, and you need some server-side trickery to give it some more leeway against network bumps and not show its dreaded buffering spinner.) With mobile becoming more important, and people increasingly calling for the death of Flash, I don't think this is the solution for tomorrow, although it might be okay for today.

Then there's WebM (in practice VP8+Vorbis in a Matroska mux; VP9 is too slow for good quality in realtime yet, AFAIK). If worries about format patents are high on your list, this is probably a good choice. Also, you can stick it straight into <video> (e.g. with VLC plus my own Cubemap reflector), and modulo some buffering issues, you can go without Flash. Unfortunately, VP8 trails pretty far behind H.264 on picture quality, libvpx has strange bugs and my experience is that bitrate control is rather lacking, which can lead to your streams getting subtle, hard-to-debug issues with getting through to the actual user. Furthermore, support is lackluster; no support for IE, no support for iOS, no hardware acceleration on most (all?) phones so you burn your battery.

Finally there's MP4, which is formally MPEG-4 Part 14, which in turn is based on MPEG-4 Part 12. Or something. In any case, it's the QuickTime mux given a blessing as official, and it's a relatively common format for holding H.264+AAC. MP4 is one of those formats that support a zillion different ways of doing everything; the classic case is when someone's made a file in QuickTime and it has the “moov” box at the end, so you can't play any of your 2 GB file until you have the very last bytes, too. But after I filed a VLC bug and Martin Storsjö picked it up, the ffmpeg mux has gotten a bunch of fixes to produce MP4 files that are properly streamable.

And browsers have improved as well; recent versions of Chrome (both desktop and Android) stream MP4 pretty well, IE11 reportedly does well (although I've had reports of regressions, where the user has to switch tabs once before the display actually starts updating), Firefox on Windows plays these fine now, and I've reported a bug against GStreamer to get these working on Firefox on Linux (unfortunately it will be a long time until this works out of the box for most people).

So that's my preferred solution right now; you need a pretty recent ffmpeg for this to work, and if you want to use MP4 in Cubemap, you need this VLC bugfix (unfortunately not in 2.2.0, which is the version in Debian stable), but combined with HLS as an iOS fallback, it will give you great quality on all platforms, good browser coverage, reasonably low latency (for non-HLS clients) and good playability in non-web clients. It won't give you adaptive bitrate selection, though, and you can't hand it to your favorite CDN because they'll probably only want to serve static files (and I don't think there's a market for a Cubemap CDN :-) ). The magic VLC incantation is:

--sout '#transcode{vcodec=h264,vb=3000,acodec=mp4a,ab=256,channels=2,fps=50}:std{access=http{mime=video/mp4},mux=ffmpeg{mux=mp4},dst=:9094}' --sout-avformat-options '{movflags=empty_moov+frag_keyframe+default_base_moof}'

Norbert Preining: Challenging riddle from The Talos Principle

Planet Debian - Sun, 26/07/2015 - 10:53

Having recently complained that Portal 2 was too easy, I have to say that The Talos Principle is challenging. For a solution that, if known, takes only a few seconds, I often have to rack my brain over the logistics for a long, long time. Here is a nice screenshot from one of the easier riddles, but with great effect.

A great game, very challenging. A lengthier review will come when I have finished the game.


Dirk Eddelbuettel: Rcpp 0.12.0: Now with more Big Data!

Planet Debian - Sat, 25/07/2015 - 21:01

A new release 0.12.0 of Rcpp arrived on the CRAN network for GNU R this morning, and I also pushed a Debian package upload.

Rcpp has become the most popular way of enhancing GNU R with C++ code. As of today, 423 packages on CRAN depend on Rcpp for making analyses go faster and further. Note that this is 60 more packages since the last release in May! Also, BioConductor adds another 57 packages, and casual searches on GitHub suggest many more.

And according to Andrie De Vries, Rcpp now has a page rank of one on CRAN as well!

And with this release, Rcpp also becomes ready for Big Data, or, as they call it in Texas, Data.

Thanks to a lot of work and several pull requests by Qiang Kou, support for R_xlen_t has been added.

That means we can now do stunts like

R> library(Rcpp)
R> big <- 2^31-1
R> bigM <- rep(NA, big)
R> bigM2 <- c(bigM, bigM)
R> cppFunction("double getSz(LogicalVector x) { return x.length(); }")
R> getSz(bigM)
[1] 2147483647
R> getSz(bigM2)
[1] 4294967294
R>

where prior versions of Rcpp would just have said

> getSz(bigM2)
Error in getSz(bigM2) : long vectors not supported yet: ../../src/include/Rinlinedfuns.h:137
>

which is clearly not Texas-style. Another welcome change, also thanks to Qiang Kou, adds encoding support for strings.

A lot of other things got polished. We are still improving exception handling as we still get the odd curveball in corner cases. Matt Dziubinski corrected the var() computation to use the proper two-pass method and added better support for lambda functions in Sugar expressions using sapply(), Qiang Kou added more pull requests mostly for string initialization, Romain added a pull request which made data frame creation a little more robust, and JJ was his usual self in tirelessly looking after all aspects of Rcpp Attributes.

As always, you can follow the development via the GitHub repo and particularly the Issue tickets and Pull Requests. And any discussions, questions, ... regarding Rcpp are always welcome at the rcpp-devel mailing list.

Last but not least, we are also extremely pleased to announce that Qiang Kou has joined us in the Rcpp-Core team. We are looking forward to a lot more awesome!

See below for a detailed list of changes extracted from the NEWS file.

Changes in Rcpp version 0.12.0 (2015-07-24)
  • Changes in Rcpp API:

    • Rcpp_eval() no longer uses R_ToplevelExec when evaluating R expressions; this should resolve errors where calling handlers (e.g. through suppressMessages()) were not properly respected.

    • All internal length variables have been changed from R_len_t to R_xlen_t to support vectors longer than 2^31-1 elements (via pull request 303 by Qiang Kou).

    • The sugar function sapply now supports lambda functions (addressing issue 213 thanks to Matt Dziubinski)

    • The var sugar function now uses a more robust two-pass method, supports complex numbers, with new unit tests added (via pull request 320 by Matt Dziubinski)

    • String constructors now allow encodings (via pull request 310 by Qiang Kou)

    • String objects are preserving the underlying SEXP objects better, and are more careful about initializations (via pull requests 322 and 329 by Qiang Kou)

    • DataFrame constructors are now a little more careful (via pull request 301 by Romain Francois)

    • For R 3.2.0 or newer, Rf_installChar() is used instead of Rf_install(CHAR()) (via pull request 332).

  • Changes in Rcpp Attributes:

    • Use more robust method of ensuring unique paths for generated shared libraries.

    • The evalCpp function now also supports the plugins argument.

    • Correctly handle signature termination characters ('{' or ';') contained in quotes.

  • Changes in Rcpp Documentation:

    • The Rcpp-FAQ vignette was once again updated with respect to OS X issues and Fortran libraries needed for e.g. RcppArmadillo.

    • The included Rcpp.bib bibtex file (which is also used by other Rcpp* packages) was updated with respect to its CRAN references.

Thanks to CRANberries, you can also look at a diff to the previous release As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Steinar H. Gunderson: Stream audio level monitoring with ebumeter

Planet Debian - Sat, 25/07/2015 - 19:08

When monitoring stream sound levels, seemingly VLC isn't quite there; at least the VU meter on mine shows unusably low levels (and I think it might even stick a compressor in there, completely negating the point). So I wanted to write my own, but while searching for the right libraries, I found ebumeter.

So I spent the same amount of time getting it to run in the first place; it uses JACK, which I've never ever had working before. But I guess there's a first time for everything? I wrote up a quick guide for others that are completely unfamiliar with it:

First, install the JACK daemon and qjackctl (Debian packages jackd2 and qjackctl), in addition to ebumeter itself. I've been using mplayer to play the streams, but you can use whatever with JACK output.

Then, start JACK:

jack_control start

and start ebumeter plus give the stream some input:

ebumeter & mplayer -ao jack http://whatever…

You'll notice that ebumeter isn't showing anything yet, because the default routing for MPlayer is to go to the system output. Open qjackctl and go to the Connect dialog. You should see the running MPlayer and ebumeter, and you should see that MPlayer is connected to “system” (not ebumeter as we'd like).

So disconnect all (ignore the warning). Then expand the MPlayer and ebumeter clients, select out_0, then in.L and choose Connect. Do the same with the other channel, and tada! There should be a meter showing EBU R128 levels, including peak (unfortunately it doesn't seem to show number of clipped samples, but I can live with that).

Unfortunately the connections are not persistent. To get them persistent, you need to go to Patchbay, create a new patchbay, accept when it asks you if you want to start from the current connections, then save, and finally activate. As long as the qjackctl dialog is open (?), new MPlayer JACK sessions will now be autoconnected to ebumeter, no matter what the pid is. If you want to distinguish between different MPlayers, you can always give them a different name as an argument to the -ao jack parameter.


Freelock : Drupal on Docker, with a pinch of Salt

Planet Drupal - Sat, 25/07/2015 - 17:16

Faster, more secure, more maintainable. Three nice benefits we get from our new standard Drupal server architecture.

This year we're replacing our old "traditional" LAMP stack with an entirely less pronounceable LNDMPS version. We still use Linux, MariaDB and PHP, of course, but instead of Apache we've moved to Nginx, and we've added Docker and Salt.

Drupal, Drupal Planet, Docker, Salt, Configuration Management, Security, Performance, DevOps