Elsewhere

Deeson Online: Using Grunt, bootstrap, Compass and SASS in a Drupal sub theme

Planet Drupal - Mon, 18/08/2014 - 08:37

If you have a separate front end design team from your Drupal developers, you will know that after static pages are moved into a Drupal theme there can be a huge gap in structure between the original files and the final Drupal site.

We wanted to bridge the gap between our theme developers, UX designers, front end coders, and create an all encompassing boilerplate that could be used as a starting point for any project and then easily ported into Drupal.

After thinking about this task for a few weeks it was clear that the best way forward was to use Grunt to automate all of our tasks and create a scalable, well structured sub theme that all of our coders can use to start any project.

What is Grunt?

Grunt is a JavaScript task runner that allows you to automate repetitive tasks such as minifying files, linting JavaScript, preprocessing CSS, and even reloading your browser.

Just like bootstrap, there are many resources and a vast number of plugins available for Grunt that can automate almost any task you can think of, plus it is very easy to write your own, so setting Grunt as a standard for our boilerplate was an easy decision.
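To give a sense of how little code a custom task needs, here is a minimal sketch of a Gruntfile registering a hypothetical "hello" task (the task name and message are ours, purely for illustration):

module.exports = function (grunt) {
  // Register a custom task named "hello" that just prints a message.
  grunt.registerTask('hello', 'Prints a greeting.', function () {
    grunt.log.writeln('Hello from Grunt!');
  });
};

Running grunt hello from the project folder would print the greeting.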

The purpose of this post

We use bootstrap in most projects and recently switched to using SASS for CSS preprocessing bundled with Compass, so for the purpose of this tutorial we will create a simple bootstrap sub theme that utilises Grunt & Compass to compile SASS files and automatically reload our browser every time a file is changed.

You can then take this approach and use the best Grunt plugins that suit your project.

Step 1. Prerequisites

To use Grunt you will need Node.js and Ruby installed on your system. Open up Terminal and type:

node -v
ruby -v

If you don't see a version number, head to the links below to download and install them.

Don’t have node? Download it here

Don’t have ruby? Follow this great tutorial

Step 2. Installing Grunt

Open up Terminal and type:

sudo npm install -g grunt-cli

This will install the command line interface for Grunt. Be patient while it downloads; it can sometimes take a minute or two.
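To double-check the install, grunt-cli responds to a version flag (the exact output will vary with your version):

grunt --version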

Step 3. Installing Compass and Grunt plugins

Because we want to use the fantastic set of mixins and features bundled with Compass, let's install the Compass and SASS Ruby gems.

Open up Terminal and type:

sudo gem install sass
sudo gem install compass

For our boilerplate we only wanted to install plugins that we would need in every project, so we kept it simple and limited it to Watch, Compass and SASS to compile all of our files. Our team members can then add extra plugins later in the project as and when needed.

So let's get started and use the Node package manager (npm) to install our Grunt plugins.

Switch back to Terminal and run the following commands:

sudo npm install grunt-contrib-watch --save-dev
sudo npm install grunt-contrib-compass --save-dev
sudo npm install grunt-contrib-sass --save-dev

Step 4. Creating the boilerplate

Note: For the purposes of this tutorial we are going to use the bootstrap sub theme for our Grunt setup, but the same Grunt setup described below can be used with any Drupal sub theme.

  • Create a new Drupal site
  • Download the bootstrap theme into your sites/all/themes directory
    drush dl bootstrap
  • Copy the bootstrap starter kit (sites/all/themes/bootstrap/bootstrap_subtheme) into your theme directory
  • Rename bootstrap_subtheme.info.starterkit to bootstrap_subtheme.info
  • Navigate to admin/appearance and click "Enable and set default" for your sub-theme.

Your Drupal site should now be set up with Bootstrap and your folder structure should now look like this:
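A rough sketch of that structure, assuming the default drush download location:

sites/all/themes/
    bootstrap/
    bootstrap_subtheme/
        bootstrap_subtheme.info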

For more information on creating a bootstrap sub theme check out the community documentation.

Step 5. Switching from LESS to SASS

Our developers liked LESS, our designers liked SASS, but after a team tech talk explaining the benefits of using SASS with Compass (a collection of mixins, an updater, and some clever sprite creation), everyone agreed that SASS was the way forward.

Bootstrap is now officially packaged with SASS, so let's replace our .less files with .scss files in our bootstrap_subtheme so we can utilise all of the mixin goodness that comes with SASS & Compass.

  • Head over to bootstrap and download the SASS version
  • Copy the stylesheets folder from bootstrap-sass/assets/ and paste it into your bootstrap_subtheme
  • Rename the stylesheets folder to bootstrap-sass
  • Create a new folder called custom-sass in bootstrap_subtheme
  • Create a new file in the custom-sass folder called style.scss
  • Import bootstrap-sass/bootstrap.scss into style.scss
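At this stage style.scss only needs the one import; since custom-sass and bootstrap-sass sit side by side, the relative path is:

@import '../bootstrap-sass/bootstrap.scss';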

You should now have the following setup in your sub theme:
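Sketched out, with the folders named as above (the bootstrap/ folder of partials ships inside the stylesheets folder you copied):

bootstrap_subtheme/
    bootstrap_subtheme.info
    bootstrap-sass/
        bootstrap.scss
        bootstrap/
    custom-sass/
        style.scss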

We are all set!

Step 6. Setting up Grunt - The package.json & Gruntfile.js

Now let's configure Grunt to run our tasks. Grunt only needs two files to be set up: a package.json file that defines our dependencies, and a Gruntfile.js to configure our plugins.

Within bootstrap_subtheme, create a package.json and add the following code:

{ "name": "bootstrap_subtheme", "version": "1.0.0", "author": “Your Name", "homepage": "http://homepage.com", "engines": { "node": ">= 0.8.0" }, "devDependencies": { "grunt-contrib-compass": "v0.9.0", "grunt-contrib-sass": "v0.7.3", "grunt-contrib-watch": "v0.6.1" } }

In this file you can add whichever plugins are best suited to your project; check out the full list of plugins at the official Grunt site.

Install Grunt dependencies

Next, open up terminal, cd into sites/all/themes/bootstrap_subtheme, and run the following task:

sudo npm install

This command looks through your package.json file and installs the plugins listed. You only have to run this command once when you set up a new Grunt project, or when you add a new plugin to package.json.

Once you run this you will notice a new folder in your bootstrap_subtheme called node_modules which stores all of your plugins. If you are using git or SVN in your project, make sure to ignore this folder.
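For git, that means adding a single line to your project's .gitignore:

node_modules/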

Now let's configure Grunt to use our plugins and automate some tasks. Within bootstrap_subtheme, create a Gruntfile.js file and add the following code:

module.exports = function (grunt) {
  grunt.initConfig({
    watch: {
      src: {
        files: ['**/*.scss', '**/*.php'],
        tasks: ['compass:dev']
      },
      options: {
        livereload: true,
      },
    },
    compass: {
      dev: {
        options: {
          sassDir: 'custom-sass/scss',
          cssDir: 'css',
          imagesPath: 'assets/img',
          noLineComments: false,
          outputStyle: 'compressed'
        }
      }
    }
  });
  grunt.loadNpmTasks('grunt-contrib-compass');
  grunt.loadNpmTasks('grunt-contrib-sass');
  grunt.loadNpmTasks('grunt-contrib-watch');
};

This file is pretty straightforward: we configure our watch task to look for certain files and reload our browser, and then we define our SCSS and CSS directories so that Compass knows where to look.

I won’t go into full detail with the options available, but visit the links below to see the documentation:

Watch documentation

SASS documentation

 

Step 7. Enabling live reload

Download and enable the livereload module on your new Drupal site. By default, you will have to be logged in as admin for live reload to take effect, but you can change this under Drupal permissions.
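With drush, and assuming the project's machine name is livereload, that looks like:

drush dl livereload
drush en livereload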

Once you enable livereload, refresh your browser window to load the livereload.js library.

Step 8. Running Grunt

We are all set! Head back over to Terminal and check you are in the bootstrap_subtheme directory, then type:

grunt watch

Now every time you edit a scss file, Grunt will compile your SASS into a compressed style.css file and automatically reload your browser.

Give it a go by importing Compass at the top of your style.scss file and changing the body background to use a Compass mixin.

@import 'compass';
@import '../bootstrap-sass/bootstrap.scss';

/*
 * Custom overrides
 */
body {
  @include background(linear-gradient(#eee, #fff));
}

To stop Grunt from watching your files, press Ctrl and C simultaneously on your keyboard.

Step 9. Debugging

One common problem you may encounter when using Grunt alongside live reload is the following error message:

Fatal error: Port 35729 is already in use by another process.

This means that the port being used by live reload is currently in use by another process, either by a different grunt project, or an application such as Chrome.

If you experience this problem, run the following command to find out which application is using the port.

lsof | grep 35729

Simply close the application and run "grunt watch" again. If the error persists and all else fails, restart your machine, and in future remember to stop Grunt from watching files before moving on to another project.

Next steps…

This is just a starting point for what you can achieve using Grunt to automate your tasks, and gives you a quick insight into how we go about starting a project.

Other things to consider:

  • Duplicating the _variables.scss bootstrap file to override the default settings (see the sketch after this list).
  • Adding linted, minified JavaScript files using the uglify plugin.
  • Configuring Grunt to automatically validate your markup using the W3C Markup Validator.
  • Writing your own Grunt plugins to suit your own projects.
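On the first of those, a minimal sketch: Bootstrap's SASS variables are declared with !default, so a value set before the import wins. The $brand-primary value below is purely an illustrative override:

// at the top of style.scss, before the Bootstrap import
$brand-primary: #b81237;    // hypothetical brand colour
@import '../bootstrap-sass/bootstrap.scss';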
Let me know your thoughts - you can share your ideas and views in the comments below.

 

Read more: Using Grunt, bootstrap, Compass and SASS in a Drupal sub theme | By David Allard | 18th August 2014
Categories: Elsewhere

Andrew Pollock: [tech] Solar follow up

Planet Debian - Sun, 17/08/2014 - 19:32

Now that I've had my solar generation system for a little while, I thought I'd write a follow up post on how it's all going.

Energex came out a week ago last Saturday and swapped my electricity meter over for a new digital one that measures grid consumption and excess energy exported. Prior to that point, it was quite fun to watch the old analog meter going backwards. I took a few readings after the system was installed, through to when the analog meter was disconnected, and the meter had a value 26 kWh lower than when I started.

I've really liked how the excess energy generated during the day has effectively masked any relatively small overnight power consumption.

Now that I have the new digital meter things are less exciting. It has a meter measuring how much power I'm buying from the grid, and how much excess power I'm exporting back to the grid. So far, I've bought 32 kWh and exported 53 kWh excess energy. Ideally I want to minimise the excess because what I get paid for it is about a third of what I have to pay to buy it from the grid. The trick is to try and shift around my consumption as much as possible to the daylight hours so that I'm using it rather than exporting it.

On a good day, it seems I'm generating about 10 kWh of energy.

I'm still impatiently waiting for PowerOne to release their WiFi data logger card. Then I'm hoping I can set up something automated to submit my daily production to PVOutput for added geekery.

Categories: Elsewhere

Jamie McClelland: Getting to know systemd

Planet Debian - Sun, 17/08/2014 - 17:42

Somehow both my regular work laptop and home entertainment computers (both running Debian Jessie) were switched to systemd without me noticing. Judging from my dpkg.log it may have happened quite a while ago. I'm pretty sure that's a compliment to the backwards compatibility efforts made by the systemd developers and a criticism of me (I should be paying closer attention to what's happening on my own machines!).

In any event, I've started trying to pay more attention now - particularly learning how to take advantage of this new software. I'll try to keep this blog updated as I learn. For now, I have made a few changes and discoveries.

First: I have a convenient bash wrapper that starts my OpenVPN client connection to a samba server and then mounts the drive. I only connect when I need to and rarely do I properly disconnect (the OpenVPN connection does not start automatically). So, I've written the script to carefully check whether my openvpn connection is present and either restart or start it depending on the result.

I had something like this:

if ps -eFH | egrep [o]penvpn; then
    sudo /etc/init.d/openvpn restart
else
    sudo /etc/init.d/openvpn start
fi

One of the advertised advantages of systemd is the ability to more accurately detect if a service is running. So, first I changed this to:

if systemctl -q is-active openvpn.service; then
    sudo systemctl restart openvpn.service
else
    sudo systemctl start openvpn.service
fi

However, after reviewing the man page I realized I can shorten it further to simply:

sudo systemctl restart openvpn.service

According to the man page, restart means:

Restart one or more units specified on the command line. If the units are not running yet, they will be started.

After discovering this meaning for "restart" in systemd, I tested and realized that "restart" works the same way for openvpn using the old sysv style init system. Oh well. At least there's a man page and a stronger guarantee that it will work with all services, not just the ones that happen to respect that convention in their init.d scripts.

The next step was to disable openvpn on start up. I confess, I never bothered to really learn update-rc.d. Every time I read the manpage I ended up throwing up my hands and renaming symlinks by hand. In the case of openvpn I had previously edited /etc/default/openvpn to indicate that "none" of the virtual private networks should be started.

Now, I've returned that file to the default configuration and instead I ran:

systemctl disable openvpn.service
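To confirm the change, systemctl can also report a unit's boot-time state (is-enabled is documented in the same man page):

systemctl is-enabled openvpn.service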
Categories: Elsewhere

Victor Kane: Super simple example of local drush alias configuration

Planet Drupal - Sun, 17/08/2014 - 16:12

So I have a folder for drush scripts _above_ several doc root folders on a dev user's server. And I want to run status or whatever and my own custom drush scripts on _different_ Drupal web app instances. Drush has alias capability for different site instances, so you can do:

$ drush @site1 status

So, how to set up an aliases file?

(I'm on Ubuntu with Drush 6.2.0 installed with PEAR as per this great d.o. doc page Installing Drush on Any Linux Server Out There (Kalamuna people, wouldn't you know it?)).

Careful reading of the excellent drush documentation points you to a Drush Shell Aliases doc page, and from there to the actual example aliases file that comes with every drush installation.

So to be able to run drush commands for a few of my local Drupal instances, I did this:

  • In my Linux user directory, I created the file ~/.drush/aliases.drushrc.php
  • Contents:
<?php
$aliases['site1'] = array(
  'root' => '/home/thevictor/site1/drupal-yii',
  'uri' => 'drupal-yii.example.com',
);
$aliases['site2'] = array(
  'root' => '/home/thevictor/site2',
  'uri' => 'site2.example.com',
);

Then I can do, from anywhere as long as I am logged in as that user:

$ cd /tmp
$ drush @site1 status
...
$ drush @site2 status

and lots of other good stuff. Have a nice weekend.

read more

Categories: Elsewhere

Francesca Ciceri: Adventures in Mozillaland #4

Planet Debian - Sun, 17/08/2014 - 11:36

Yet another update from my internship at Mozilla, as part of the OPW.

An online triage workshop

One of the most interesting things I've done during the last weeks has been to hold an online Bug Triage Workshop on the #testday channel at irc.mozilla.org.
That was a first time for me: I had been a moderator for a series of training sessions on IRC organized by Debian Women, but never a "speaker".
The experience turned out to be a good one: creating the material for the workshop had me basically summarize (not too much, I'm way too verbose!) everything I've learned these past months about triaging in Mozilla, and speaking about it on IRC was a sort of challenge to my usual shyness.

And I was so very lucky that a participant was able to reproduce the bug I picked as example, thus confirming it! How cool is that? ;)

The workshop was about the very basics of triaging for Firefox, and we mostly focused on a simplified lifecycle of bugs, a guided tour of bugzilla (including the quicksearch and the advanced one, the list view, the individual bug view) and an explanation of the workflow of the triager. I still have my notes, and I plan to upload them to the wiki, sooner or later.

I'm pretty satisfied with the outcome: my only regret is that the promotion wasn't enough, so we had few participants.
Will try to promote it better next time! :)

about:crashes

Another thing that had me quite busy in the last weeks was to learn more about crashes and stability in general.
If you are unfortunate enough to experience a crash with Firefox, you're probably familiar with the Mozilla Crash Reporter dialog box asking you to submit the crash report.

But how does it work?

On the client side, Mozilla uses Breakpad as a set of libraries for crash reporting. The Mozilla-specific implementation adds to that a crash-reporting UI, a server to collect and process reported crash data (in particular, to convert raw dumps into readable stack traces) and a web interface, Socorro, to view and parse crash reports.

Curious about your crashes? The about:crashes page will show you a list of the submitted and unsubmitted crash reports. (And by the way, try to type about:about in the location bar, to find all the super-secret about pages!)

For the submitted ones, clicking on the CrashID will take you to the crash report on crash-stats, the website where the reports are stored and analyzed. The individual crash report page on crash-stats is awesome: it shows you the reported bug numbers if any bug summaries match the crash signature, as well as many other details. If crash-stats does not show a bug number, you really should file one!

The CrashKill team works on these reports, tracking the general stability of the various channels, triaging the top crashes, and ensuring that the crash bugs have enough information and are reproducible and actionable by the devs.
The crash-stats site is a mine of information: take a look at the Top Crashes for Firefox 34.0a1.
If you click on an individual crash, you will see lots of details about it: just on the first tab ("Signature Summary") you can find a breakdown of the crashes by OS, by graphic vendors or chips, or even by uptime range.
A very useful one is the number of crashes per install, so that you know how widespread the crashing is for that particular signature. You can also check the comments the users have submitted with the crash report, on the "Comments" tab.

One and Done tasks review

Last week I helped the awesome group of One and Done developers, doing some reviewing of the tasks pages.

One and Done is a brilliant idea to help people contribute to the QA Mozilla teams.
It's a website proposing the user a series of tasks of different difficulty and on different topics to contribute to Mozilla. Each task is self-contained and can last a few minutes or be a bit more challenging. The team has worked hard on developing it and they have definitely done an awesome job! :)

I'm not a coding person, so I just know that they're using Django for it, but if you are interested in all the dirty details take a look at the project repository. My job has been only to check all the existing tasks and verify that the descriptions and instructions are correct, that the task is properly tagged and so on. My impression is that this is an awesome tool, well written and well thought out, with a lot of potential for helping people in their first steps into Mozilla. Something that other projects should definitely imitate (cough Debian cough).

What's next?

Next week I'll be back to working on bugs. I kind of love bugs, I have to admit it. And not squashing them: not being a coder makes me less of a violent person toward digital insects. Herding them is enough for me. I'm feeling extremely non-violent toward bugs.

I'll try to help Liz with the Test Plan for Firefox 34, on the triaging/verifying bugs part.
I'll also try to triage/reproduce some accessibility bugs (thanks Mario for the suggestion!).

Categories: Elsewhere

Andreas Metzler: progress

Planet Debian - Sun, 17/08/2014 - 08:03

The GnuTLS28 transition is making progress: more than 60% done.

Thanks to a national holiday combined with rainy weather this should look even better soon:
ametzler@argenau:~$ w3m -dump https://ftp-master.debian.org/deferred.html | grep changes | wc -l
26
ametzler@argenau:~$ w3m -dump https://ftp-master.debian.org/deferred.html | grep Metzler | wc -l
18

Categories: Elsewhere

Matt Brown: GPG Key Transition

Planet Debian - Sun, 17/08/2014 - 04:45

Firstly, thanks to all who responded to my previous rant. It turns out exactly what I wanted does exist in the form of an ID-000 format smartcard combined with a USB reader. Perfect. No idea why I couldn’t find that on my own prior to ranting, but very happy to have found it now.

Secondly, now that I’ve got my keys and management practices in order, it is time to begin transitioning to my new key.

Click this link to find the properly signed, full transition statement.

I’m not going to paste the full statement into this post, but my new key is:

pub   4096R/A48F065A 2014-07-27 [expires: 2016-07-26]
      Key fingerprint = DBB7 64A1 797D 2E21 C799 3CD7 A628 CB5F A48F 065A
uid   Matthew Brown <matt@mattb.net.nz>
uid   Matthew Brown <mattb@debian.org>
sub   4096R/1937883F 2014-07-27 [expires: 2016-07-26]

If you signed my old key, I’d very much appreciate a signature on my new key, details and instructions in the transition statement. I’m happy to reciprocate if you have a similarly signed transition statement to present.

Categories: Elsewhere

Ian Donnelly: How-To: Integrate elektra-merge Into a Debian Package

Planet Debian - Sat, 16/08/2014 - 20:43

Hi Everybody,

So I already explained that we decided to go in a new direction and patch ucf to allow automatic configuration file merging with any custom command. Today I wanted to explain how to use ucf's new `--three-way-merge-command` functionality in conjunction with Elektra, in order to utilize Elektra's powerful tools to allow automatic three-way merges of your package's configuration during upgrades, in a way that is more reliable than a diff3 merge. This guide assumes that you are familiar with ucf already and are just trying to implement the --three-way-merge-command option using Elektra.

The addition of the --three-way-merge-command option was a part of my Google Summer of Code Project. This option takes the form:

--three-way-merge-command command [New File] [Destination]

Where command is the command you would like to use for the merge. New File and Destination are the same as always.

We added a new script to Elektra called elektra-merge for use with this new option in ucf. This script acts as a liaison between ucf and Elektra, allowing a regular ucf command to run a kdb merge even though ucf commands only pass New File and Destination whereas kdb merge requires ourpath, theirpath, basepath, and resultpath. Since ucf already performs a three-way merge, it keeps track of all the necessary files to do so, even though it only takes in New File and Destination.
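For reference, a sketch of the underlying merge the script drives, using the argument order described above (this is illustrative, not a literal transcript of what elektra-merge runs):

kdb merge ourpath theirpath basepath resultpath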

In order to use elektra-merge, the current configuration file must be mounted to KDB to serve as ours in the merge. The script automatically mounts theirs, base, and result using the kdb remount command in order to use the same backend as ours (since all versions of the same file should use the same backend anyway), and this way users don’t need to worry about specifying the backend for each version of the file. Then the script attempts a merge on the newly mounted KeySets. Once this is finished, whether it succeeded or not, the script finishes by unmounting all but our copy of the file to clean up KDB. Then, if the merge was successful, ucf will replace ours with the result, providing the package with an automatically merged configuration, which will also be updated in KDB itself.

Additionally, we added two other scripts, elektra-mount and elektra-umount, which act as simple wrappers for kdb mount and kdb umount. They work identically but are more script-friendly.

The full command to use elektra-merge to perform a three-way merge on a file managed by ucf is:

ucf --three-way --threeway-merge-command elektra-merge [New File] [Destination]

That's it! As described above, elektra-merge is smart enough to run the whole merge off the information from that command and utilizes the new kdb remount command to do so.

Integrating elektra-merge into a package that already uses ucf is very easy! In postinst you should have a line similar to:

ucf [New File] [Destination]

or perhaps:

ucf --three-way [New File] [Destination]

All you must do is, in postinst, when it is run with the configure option, mount the config file to Elektra:

elektra-mount [New File] [Mounting Destination] [Backend]

Next, you must update the line containing ucf with the options --three-way and --threeway-merge-command like so:

ucf --three-way --threeway-merge-command elektra-merge [New File] [Destination]

Then, in your postrm script, during a purge, you must unmount the config file before deleting it:

elektra-umount [Name]

That’s it! With those small changes you can use Elektra to perform automatic three-way merges on any files that your package uses ucf to handle!

I just wanted to show a quick example, below is a diff representing the changes we made to the samba-common package in order to allow automatic configuration merging for smb.conf using Elektra. We chose this package because it already
uses ucf to handle smb.conf but it frequently requires users to manually merge changes across versions. Here is the patch showing what we changed:

diff samba_orig/samba-3.6.6/debian/samba-common.postinst samba/samba-3.6.6/debian/samba-common.postinst
92c92,93
< ucf --three-way --debconf-ok "$NEWFILE" "$CONFIG"
---
> elektra-mount "$CONFIG" system/samba/smb ini
> ucf --three-way --threeway-merge-command elektra-merge --debconf-ok "$NEWFILE" "$CONFIG"
Only in samba/samba-3.6.6/debian/: samba-common.postinst~
diff samba_orig/samba-3.6.6/debian/samba-common.postrm samba/samba-3.6.6/debian/samba-common.postrm
4a5
> elektra-umount system/samba/smb

As you can see, all we had to do was add the line to mount smb.conf during install, update the ucf command to include the new --threeway-merge-command option, and unmount system/samba/smb during a purge. It really is that easy!

Sincerely,
Ian S. Donnelly

Categories: Elsewhere

loldebian - Can I has a RC bug?: True Debian Stable

Planet Debian - Sat, 16/08/2014 - 20:00

Loled by MadameZou (inspired by this tweet)


Categories: Elsewhere

Daniel Pocock: WebRTC: what works, what doesn't

Planet Debian - Sat, 16/08/2014 - 17:49

With the release of the latest rtc.debian.org portal update, there are numerous improvements but there are still some known problems too.

The good news is that if you have a web browser, you can probably make successful WebRTC calls from one developer to another without any need to install or configure anything else.

The bad news is that not every permutation of browser and client will work. Here I list some of the limitations so people won't waste time on them.

The SIP proxy supports any SIP client

Just about any SIP client can connect to the proxy server and register. This does not mean that every client will be able to call each other. Generally speaking, modern WebRTC clients will be able to call each other. Standalone softphones or deskphones will call each other. Calling from a normal softphone or deskphone to a WebRTC browser, or vice-versa, will not work though.

Some softphones, like Jitsi, have implemented most of the protocols to communicate with WebRTC but they are yet to put the finishing touches on it.

Chat should just work for any combination of clients

The new WebRTC frontend supports SIP chat messaging.

There is no presence or buddy list support yet.

You can even use a tool like sipsak to accept or send SIP chats from a script.

Chat works for any client new or old. Although a WebRTC user can't call a softphone user, for example, they can send chats to each other.

WebRTC support in Iceweasel 24 on wheezy systems is very limited

On a wheezy system, the most recent Iceweasel update is version 24.7.

This version supports most of WebRTC but does not support TURN relay servers to help you out of a NAT network.

If you call between two wheezy machines on the same NAT network it will work. If the call has to traverse a NAT boundary it will not work.

Wheezy users need to either download a newer Firefox version or use Chromium.

JsSIP doesn't handle ICE elegantly

Interactive Connectivity Establishment (ICE, RFC 5245) is meant to prevent calls from being answered with missing audio or video streams.

ICE is a mandatory part of WebRTC.

When correctly implemented, the JavaScript application will exchange ICE candidates and run the connectivity checks before alerting anybody that a call is ringing. If the checks fail (for example, with Iceweasel 24 and NAT), the caller should be told the call can't be made and the callee shouldn't be disturbed at all.

JsSIP is not operating in this manner though. It alerts the callee before telling the browser to start the connectivity checks. Then it even waits for the callee to answer. Only then does it tell the browser to start checking connectivity. This is not a fault with the ICE standard or the browser, it is an implementation problem.

Therefore, until this is fully fixed, people may still see some calls that appear to answer but don't have any media stream. After this is fixed, such calls really will be a thing of the past.

Debian RTC testing is more than just a pipe dream

Although these glitches are not ideal for end users, there is a clear roadmap to resolve them.

There are also a growing collection of workarounds to minimize the inconvenience. For example, JSCommunicator has a hack to detect when somebody is using Iceweasel 24 and just refuse to make the call. See the option require_relay_candidate in the config.js settings file. This also ensures that it will refuse to make a call if the TURN server is offline. Better to give the user a clear error than a call without any audio or video stream.
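In JSCommunicator's config.js this boils down to a single boolean setting; the exact surrounding structure is not shown here, so treat this line as a sketch:

require_relay_candidate: true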

require_relay_candidate is enabled on freephonebox.net because it makes life easier for end users. It is not enabled on rtc.debian.org because some DDs may be willing to tolerate this issue when testing on a local LAN.

Categories: Elsewhere

Matthias Klumpp: AppStream/DEP-11 Debian progress

Planet Debian - Sat, 16/08/2014 - 16:50

There hasn’t been a progress-report on DEP-11 for some time, but that doesn’t mean there was no work going on on it.

DEP-11 is Debian’s implementation of AppStream, as well as an effort to enhance the metadata available about software in Debian. While initially AppStream was only about applications, DEP-11 was designed with a larger scope, to collect data about libraries, binaries and things like Python modules. Now, since AppStream 0.6, DEP-11 and AppStream have essentially the same scope, with the difference of DEP-11 metadata being described in YAML, while official AppStream data is XML. That was due to a request by our ftpmasters team, which doesn’t like XML (which is also not used anywhere in Debian, as opposed to YAML). But this doesn’t mean that people will have to deal with the YAML file format: The libappstream library will just take DEP-11 data as another data source for its Xapian database, allowing anything using libappstream to access that data just like the XML stuff. Richard's libappstream-glib will also receive support for the DEP-11 format soon, filling its in-memory data cache and enabling the use of GNOME-Software on Debian.

So, what has been done so far? The past months, my Google Summer of Code student, Abhishek Bhattacharjee, was working hard to integrate DEP-11 support into dak, the Debian Archive Kit, which maintains the whole Debian archive. The result will be an additional metadata table in our internal Postgres database, storing detailed information about the software available in a Debian package, as well as “Components-<arch>.yml.gz” files in the Debian repositories. Dak will also produce an application icon-cache and a screenshots repository. During the time of the SoC, Abhishek focused mainly on the applications part of things, and less on the other components (like extracting data about Python modules or libraries) – these things can easily be implemented later.

The remaining steps will be to polish the code and make it merge-ready for Debian’s dak (as soon as it has received enough testing, we will likely give it a try on the Tanglu Debian derivative). Following that, Apt will be extended to fetch the DEP-11 data on-demand on systems where it is useful (which is currently mostly desktop-systems) – if you want to save a little bit of space, you will be able to disable downloading this extra metadata in Apt. From there, libappstream will take the data for its Xapian db. This will ultimately lead to the removal of the much-hated (from ftpmasters and maintainers side) app-install-data package, which has not been updated for two years and only contains a small fraction of the metadata provided by DEP-11.

What Debian will ultimately gain from this effort is support for software-centers like GNOME-Software, and improved support for tools like Apper and Muon in displaying applications. Long-term, with more metadata being available, it would be cool to add support for it to “specialized package managers”, like Python’s pip, npm or gem, to make them fetch information about available distribution software and install that instead of their own copies from 3rd-party repositories, if possible. This should ultimately lead to less code duplication on distributions and will likely result in fewer security issues, since the officially maintained and integrated distribution packages can easily be used, if possible. This is no attempt to make tools like pip obsolete, but an attempt to have the different tools installing software on your machine communicate better, instead of creating parallel worlds in terms of software management. Another nice side effect of more metadata will be options to search for tools handling mimetypes in the software repos (in case you can’t open a file), smart software centers installing missing firmware, and automatic suggestions for developers about which software they need to install in order to build a specific software package. Also, the data allows us to match software across distributions; for that, I will have some news soon (not sure how soon though, as I am currently in thesis-writing-mode, and therefore have not that much spare time). Since the goal is to have these features available on all distributions supporting AppStream, it will take longer to realize – but we are on a good way.

So, if you want some more information about my student’s awesome work, you can read his blogpost about it.

Sadly, I only see a very small chance to have the basic DEP-11 stuff land in time for Jessie (lots of review work needs to be done, and some more code needs to be written), but we will definitely have it in Jessie+1.

A small example on how this data will look like can be found here – a larger, actual file is available here. Any questions and feedback are highly appreciated.

Categories: Elsewhere

Bits from Debian: Debian turns 21!

Planet Debian - Sat, 16/08/2014 - 11:45

Today is Debian's 21st anniversary. Plenty of cities are celebrating Debian Day. If you are not close to any of those cities, there's still time for you to organize a little celebration!

Happy 21st birthday Debian!

Categories: Elsewhere

Wesley Tanaka: Fast, Low Memory Drupal 6 System Module

Planet Drupal - Sat, 16/08/2014 - 03:24

A Drupal 5 version of this module is also available.  If you would like this patch to be committed to Drupal core, please do not leave a comment on this page—please instead add your comment to Drupal issue #455092.

This is a drop-in replacement for the system.module of Drupal 6.33 which makes your Drupal 6 site use less memory and may even make it faster. A test I ran in a development environment with a stock Drupal 6 installation suggested that I got:

read more

Categories: Elsewhere

Appnovation Technologies: Some World-Class Museums Using Drupal

Planet Drupal - Fri, 15/08/2014 - 22:56

I love museums and galleries! I love Open Source! I love Drupal! Why not weave them all together into a single, harmonious blog post, I thought….

Categories: Elsewhere

Mediacurrent: Join Mediacurrent for These Drupal and Digital Marketing Events

Planet Drupal - Fri, 15/08/2014 - 22:16

Some of Mediacurrent's top talent will be leading discussions on the latest developments in Drupal and digital marketing trends at many upcoming events. Check out the links below for more information and to register. We hope to see you there, and make sure you stop by and say "hello"!

Categories: Elsewhere

Paul Tagliamonte: PyGotham 2014

Planet Debian - Fri, 15/08/2014 - 20:54

I’ll be there this year!

The talks look amazing; I can't wait to hit them up. It looks really well organized! The schedule has a bunch of sessions I want to catch, and I hope they're recorded to watch later!

If anyone’s heading to PyGotham, let me know, I’ll be there both days, likely floating around the talks.

Categories: Elsewhere

Drupal @ Penn State: Drupal, Singularity, Digital Activism, and saving our institutions

Planet Drupal - Fri, 15/08/2014 - 19:51

It is as important to tell a great story using technology as it is to author technology that allows more stories to be told.

Categories: Elsewhere

Aurelien Jarno: Intel about to disable TSX instructions?

Planet Debian - Fri, 15/08/2014 - 18:02

Last time I changed my desktop computer I bought a CPU from the Intel Haswell family, the one available on the market at that time. I carefully selected the CPU to make sure it supports as many instruction extensions as possible in this family (Intel likes segmentation, even high-end CPUs like the Core i7-4770k do not support all possible instructions). I ended up choosing the Core i7-4771 as it supports the “Transactional Synchronization Extensions” (Intel TSX) instructions, which provide transactional memory support. Support for it has recently been added to the GNU libc, and has been activated in Debian. By choosing this CPU, I wanted to be sure that I can debug this support in case of a bug report, like for example in bug#751147.

Recently some computing websites started to mention that the TSX instructions have bugs on Xeon E3 v3 family (and likely on Core i7-4771 as they share the same silicon and stepping), quoting this Intel document. Indeed one can read on page 49:

HSW136. Software Using Intel TSX May Result in Unpredictable System Behavior

Problem: Under a complex set of internal timing conditions and system events, software using the Intel TSX (Transactional Synchronization Extensions) instructions may result in unpredictable system behavior.
Implication: This erratum may result in unpredictable system behavior.
Workaround: It is possible for the BIOS to contain a workaround for this erratum.

And later on page 51:

Due to Erratum HSw136, TSX instructions are disabled and are only supported for software development. See your Intel representative for details.

The same websites say that Intel is going to disable the TSX instructions via a microcode update. I hope it won’t be the case and that they are going to be able to find a microcode fix. Otherwise it would mean I will have to upgrade my desktop computer earlier than expected. It’s a bit expensive to upgrade it every year and that’s the reason why I skipped the Ivy Bridge generation, which didn’t bring a lot from the instructions point of view. Alternatively I can also skip microcode and BIOS updates, in the hope I won’t need another fix from them at some point.

Categories: Elsewhere

Acquia: New Commons Media Integration

Planet Drupal - Fri, 15/08/2014 - 16:22

Drupal is known for providing a broad range of functionality with its extensible core and the tens of thousands of free contributed modules which add or extend functionality. One challenge for people who are building applications on top of Drupal is taking advantage of this flexibility and broad range of available functionality without compromising the usability of their applications for end users, and even for themselves as site maintainers.

Categories: Elsewhere

Four Kitchens: Frontend roundup: DrupalCon Amsterdam

Planet Drupal - Fri, 15/08/2014 - 15:50

As many of you might know, I am now on the other side of the pond, so I’ve paid extra attention to the DrupalCon Amsterdam schedule as it has been coming together. I want to highlight a few frontend goodies that I’m particularly excited to see.

DrupalCon Sass CSS JavaScript Drupal
Categories: Elsewhere
