Michal Čihař: Weblate 2.8

Planet Debian - Wed, 31/08/2016 - 11:30

Quite on schedule (just one day late), Weblate 2.8 is out today. This release brings Subversion support and an improved zen mode.

Full list of changes:

  • Documentation improvements.
  • Translations.
  • Updated bundled JavaScript libraries.
  • Added list_translators management command.
  • Django 1.8 is no longer supported.
  • Fixed compatibility with Django 1.10.
  • Added Subversion support.
  • Separated XML validity check from XML mismatched tags.
  • Fixed API to honor HIDE_REPO_CREDENTIALS settings.
  • Show source change in zen mode.
  • Alt+PageUp/PageDown/Home/End now works in zen mode as well.
  • Add tooltip showing exact time of changes.
  • Add option to select filters and search from translation page.
  • Added UI for translation removal.
  • Improved behavior when inserting placeables.
  • Fixed auto locking issues in zen mode.

If you are upgrading from an older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the demo password, or register your own user. Weblate is also being used on https://hosted.weblate.org/ as the official translation service for phpMyAdmin, OsmAnd, Aptoide, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence it by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: Debian English Gammu phpMyAdmin SUSE Weblate

Categories: Elsewhere

Jim Birch: One or Multiple. Make a Bootstrap Carousel from a Multiple Value Drupal Field

Planet Drupal - Wed, 31/08/2016 - 11:20

Earlier this week I wrote about how to create a Bootstrap Carousel using the Drupal Paragraphs module. We can modify this approach to add a Bootstrap Carousel on a field. I use this most on nodes that have a Featured Image field: one that we can use as a hero image, and one where we can set the Open Graph meta image on a per-node basis.

Thanks to social media outlets like Facebook, LinkedIn, and Twitter, every piece of content published to the web should have an image attached to it, defined by the og:image meta tag. But what happens when the content has, or deserves, more than one image? If the field allows multiple values, we can display them as a carousel/slideshow.

While I am discussing images in this blog post, this technique could be used for any type of field. And even though I develop using the Bootstrap front-end framework, the same theory could be applied to other frameworks and JavaScript libraries like the Slick carousel. You would just need to change the markup to fit your specific framework.

So, all we need is a multiple value field. Once we have decided which field that is, take a look at this markup and replace your template's markup with it. The file name would be field--FIELD-NAME.html.twig.


Categories: Elsewhere

Unimity Solutions Drupal Blog: Strategies for Revenue Streams for Digital Publishers

Planet Drupal - Wed, 31/08/2016 - 11:00

Coupled with an increasing literacy rate, the publishing industry in India has seen tremendous growth in the past decade. Contradicting the global trend, the newspaper industry in India saw 8% growth last year. Most of this growth is driven by vernacular newspapers. Regional newspapers are expected to grow at a rate of 12-14% over the next decade as a primary source of information. Although adoption of digital books is increasing, digital news is yet to be accepted by Indian readers.

Categories: Elsewhere

Joey Hess: late summer

Planet Debian - Wed, 31/08/2016 - 03:15

With days beginning to shorten toward fall, my house is in initial power saving mode. Particularly, the internet gateway is powered off overnight. Still running electric lights until bedtime, and still using the inverter and other power without much conservation during the day.

Indeed, I had two laptops running cpu-melting keysafe benchmarks for much of today, and one of them had to charge up from empty too. That's why the house power is a little low, at 11.0 volts now, despite over 30 amp-hours having been produced on this mostly clear day. (The 1-week average is 18.7 amp-hours.)

September/October is the tricky time where it's easy to fall off a battery depletion cliff and be stuck digging out for a long time. So time to start dusting off the conservation habits after summer's excess.

I think this is the first time I've mentioned any details of living off grid with a bare minimum of PV capacity in over 4 years. Solar has a lot of older posts about it, and I'm going to note down the typical milestones and events over the next 8 months.

Categories: Elsewhere

Drupal Association News: MarCom changes, and how we’ll keep moving forward

Planet Drupal - Wed, 31/08/2016 - 02:47

This post is part of a series about recent changes at the Drupal Association.

Like each of the Drupal Association teams, Marketing & Communications (MarCom) has a broad range of responsibilities. Our job is to persuade and sell, but also to inform and educate. About our work. About yours. About what it’s like to be part of this community.

We contribute to more than 20 Association programs and products, like Membership, Supporter Program fulfillment, and DrupalCon support. Day-to-day community engagement—via Zendesk, Twitter, Facebook, issue queue, etc.—is part of each of those.

It’s a feat that’s always been impossible without help from community members like you. There’s the work Dave does in the content issue queue. What Paul and Alex do on our social accounts. And what Jess, catch, Gábor, David, Michael, and many more do to coordinate releases. There’s more than we can count.

But it’s also a feat we used to have more staff to handle. A year ago, we were a team of five. After retrenchment and reorganization, we’re a team of two. As we support and advance the Association’s initiatives to increase Drupal adoption, we have to make hard choices.

First, an apology

We’re sorry. This isn’t just business; our work is for you. We think about the impact it has every day. Having to stop doing some of that work—even the work some may consider little things—isn’t something we take lightly.

If we don’t have a sticker shipping budget, it may impact the energy you create at a meetup. If we don’t support Global Training Days, maybe the next would-be contributor in your region isn’t inspired to learn Drupal. And if we don’t share stories of grant recipients, maybe you don’t know that your membership dues make an immeasurable difference in people’s lives. (They do, by the way.)

So, if we let you down, we’re sorry. Your trust is invaluable to us. We try to earn it every day.

But we must decide

Criteria

The Association has two priorities:

  • Create sustainable financial health
  • Get more organizations with large, complex digital ecosystems—government agencies, publishers, universities, etc.—to choose Drupal

This means prioritizing revenue-generating initiatives. It means changing, or adding to, some of the user experience on Drupal.org—to better present Drupal as an answer to the questions our new primary audience has. And it means amplifying stories that persuade that audience to make Drupal part of their systems.


We have a lot of work to do. And we won’t be able to do it alone. For those reasons and more, we need to make a promise about the content we publish.

We started on the following content strategy statement based on insight from our leadership. It describes our goal, methods, and primary audience in relation to each other.

Slides available on Google Drive

Clarity gives us the best chance to be efficient, effective, and consistent. So, we’ll keep narrowing that statement. It’s an important step toward giving the content we create—or ask others to create—a chance to live up to a shared potential.


What can two people do or coordinate well? What can we complete and sustain?

A team that churns out content but doesn’t, for example, regularly pause to evaluate each thing it creates—that doesn’t turn that knowledge into insight for what it makes next (or doesn’t)—isn’t doing its job well.

Our decisions about what work we will and won’t do aren’t just about what we can somehow make happen. They’re about what we can turn into sustainable growth.

What we’ll do

There are products and programs that are critical to our identity. We’re not a membership association if we don’t have a membership program, for example. Then there are mission-critical initiatives. Drupal.org (D.o) is an incredible platform, so promoting Drupal without using the site would undercut our mission. And there are fiscal health requirements, like DrupalCon ticket sales.

So, most of our work will stay on our to-do list. It’ll just be prioritized differently. These are the areas of work that either stay mostly as-is or grow:

  • Drupal marketing and communications (e.g., release support)
  • D.o content management of promoted areas (e.g., the planned homepage refresh and “Drupal for industries” content)
  • DrupalCon marketing and communications, and sponsor and partner fulfillment
  • Membership (for people and organizations)
  • Supporter Program fulfillment
  • Jobs.drupal.org marketing and communications
  • Camp engagement (we’ll create a 2017 camp kit before January; promise)
  • Elections support
  • Association communications, generally (e.g., writing and editing posts like this)
The work we won’t

To make the work we’ll do possible, there’s work we can’t give as much attention.

There’s work we, unfortunately, won’t do at all.

  • Help build new D.o Sections (the initiative Tatiana led is paused for now)
  • D.o content management of areas that aren’t being promoted
  • Support the Drupal Store (its inventory will be liquidated)
  • Share Community Spotlights
  • Manage assoc.drupal.org content (the subdomain will be phased out)
  • Promote our webcast series

And there’s work we’ll reduce or others will lead.

  • Global Training Days (we’re setting up a community working group to coordinate it)
  • Advertising content production (we’ll depend on designers we trust)
  • Email newsletters (we’ll send fewer than the four we do now)
  • Sticker distribution (we’ll bring them to DrupalCons, but won’t ship them around the world)

This process will be fluid. As we adjust to these changes, these work lists may change again. If so, we’ll let you know.

How can you help?

If you help the Engineering and Events teams by giving back in ways Tim mentioned or contributing as Rachel suggested, you help us too. But there are things you can do to help MarCom specifically.

  1. Become a member. The revenue helps, of course. But it’s also one of the best ways to advocate for community programs.
  2. Refer friends and colleagues. Ask people you know at organizations that aren’t using Drupal to contact us. We need to know how these organizations make their decisions, and why they haven’t decided on Drupal.
  3. Support Global Training Days. Join the group at groups.drupal.org/global-training-days and add your 2016 events there. And if you want to know more about the working group we’re organizing, email Lizz.
  4. Contribute in the content issue queue. Especially for case studies, services listings, and training listings.
  5. Submit even the non-DrupalCon surveys. It’ll help us make decisions based on what you like, appreciate, value, and actually do.


About Bradley

Bradley Fields joined the Drupal Association in March 2015 as Content Manager. He now leads the content team as Marketing & Communications Manager.

When not at his desk, he’s curating Spotify playlists, watching an animated Disney movie, on the hunt for great whisk{e}y, or reading Offscreen magazine. He lives in Portland, OR.

Categories: Elsewhere

Mike Gabriel: credential-sheets: User Account Credential Sheets Tool

Planet Debian - Tue, 30/08/2016 - 22:05
Preface

This little piece of work had been pending on my todo list for about two years. For our local school project "IT-Zukunft Schule" I wrote the tool credential-sheets: a Perl script that turns a series of CSV import files (as they have to be provided for user mass import into GOsa², i.e. LDAP) into a series of A4 sheets with small cards containing initial user credential information. The upstream sources are on Github, and I have just uploaded the tool to Debian.

Introduction

After a mass import of user accounts (e.g. into LDAP), most site administrators have to create information sheets (or snippets) containing those new credentials (username, password, policy of usage, etc.). With this tiny tool, providing these pieces of information to multiple users becomes really simple. Account data is taken from a CSV file, and the sheets are output as PDF using easily configurable LaTeX template files.

Usage

Synopsis:

credential-sheets [options] <CSV-file-1> [<CSV-file-2> [...]]

Options

The credential-sheets command accepts the following command-line options:

  • --help -- Display help with all available command-line options and exit.
  • --template=<tpl-name> -- Name of the template to use.
  • --cols=<x> -- Render <x> columns per sheet.
  • --rows=<y> -- Render <y> rows per sheet.
  • --zip -- Create a ZIP file at the end.
  • --zipfilename=<zip-file-name> -- Alternative ZIP file name (default: name of the parent folder).
  • --debug -- Don't remove temporary files.

CSV File Column Arrangement

The credential-sheets tool can handle any sort of column arrangement in the given CSV file(s). It expects the CSV file(s) to have column names in their first line. The given column names have to map to the VAR-<column-name> placeholders in credential-sheets's LaTeX templates. The shipped templates (students, teachers) can handle these column names:
  • login -- The user account's login id (uid)
  • lastName -- The user's last name(s)
  • firstName -- The user's first name(s)
  • password -- The user's password
  • form -- The form name/ID (student template only)
  • subjects -- A list of subjects taught by a teacher (teacher template only)
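For illustration, an input file for the shipped students template using the column names above might look like this (the account data is made up; the first line must provide the column names):

```csv
login,lastName,firstName,password,form
jdoe,Doe,Jane,iX2wqom4,7a
mmuster,Mustermann,Max,Tr8bn3Pz,7b
```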
If you create your own templates, you can be very flexible in choosing your own column and template names. Just make sure that the column names in the CSV file(s)' first line match the variables used in the customized LaTeX template.

Customizations

The shipped credential-sheets templates are expected to be installed in /usr/share/credential-sheets/ for system-wide installations. When customizing templates, simply place a modified copy of any of those files into ~/.credential-sheets/ or /etc/credential-sheets/. For further details, see below. The credential-sheets tool uses these configuration files:
  • header.tex (LaTeX file header)
  • <tpl-name>-template.tex (on default installations, <tpl-name> is students or teachers; this is extensible by defining your own template files, see below)
  • footer.tex (LaTeX file footer)
Search paths for configuration files (in listed order):
  • $HOME/.credential-sheets/
  • ./
  • /etc/credential-sheets/
  • /usr/local/share/credential-sheets/
  • /usr/share/credential-sheets/
You can easily customize the resulting PDF files generated with this tool by placing your own template files, header, and footer where appropriate.

Dependencies

This project requires the following dependencies:
  • Text::CSV Perl module
  • Archive::Zip Perl module
  • texlive-latex-base
  • texlive-fonts-extra
Copyright and License

Copyright © 2012-2016, Mike Gabriel <mike.gabriel@das-netzwerkteam.de>. Licensed under GPL-2+ (see COPYING file).
Categories: Elsewhere

Palantir: From Intern to Employee: What to Expect in the Transition

Planet Drupal - Tue, 30/08/2016 - 19:59
By Matt Carmichael | Tue, 08/30/2016 - 12:59

Internship experience is extremely valuable, especially in web development. So what can you expect in the transition from coursework to client work?

In this post we will cover...
  • The benefits of having an internship
  • What learning challenges you can expect

Want to work at Palantir?

Send us your resume!

The transition from student to full-time programmer can be intimidating. The days of submitting buggy and poorly-tested code that is just good enough to get a passing grade are over. Every line of code is just as important as the last, and even the smallest mistakes could lead to catastrophic problems for your team and your client. But trust me, it's doable!

I was lucky enough to have an internship at Palantir the summer before becoming a full-time employee. The opportunity to dive into the professional side of development without having quite the expectations of a full-time employee is priceless in my opinion. It gives you a chance to focus more on the learning experience of being a developer, without stressing as much over productivity and results. 

The most prominent takeaways from my time as an intern were learning and engaging in the process of collaborative programming with a team and also understanding the importance of code quality. In school a majority of the projects are done individually and you rarely have to rely on anyone else to get the job done. This is one of the biggest adjustments to make, as the workplace is the exact opposite. The members of your team need your code to be correct, verbose, and understandable in order to do their jobs. That being said, coding standards cannot be stressed enough in a team environment and doing things ‘your way’ is not going to cut it. 

At Palantir, I was introduced to an extensive GitHub repository devoted to documenting our code standards and development process. Familiarizing myself with the workflow, documentation, and code style expectations seemed like a full-time job in its own right. Every line of code is submitted in a pull request, which is inspected by a senior engineer and tested against the standards laid out in the developer documentation.

Issues as seemingly trivial as the size of an indentation and documentation formatting are sent back to the programmer for revisions. Once the code has been through the critique and revision phase and is cleared by the engineer, the code is finally merged into the master branch and is ready to go into production.  This level of attention to details that seem trivial at a glance, but are vital for code consistency and readability, is in place to ensure the best product for the client.

Once you get past the logistics of teamwork, and the code standards become second nature to you, it only gets easier from there. As I settle in and become more confident in my role, I realize how exciting work life is. The opportunity to work with people of all backgrounds and skill sets, and to see a project come together from start to finish, is what makes it all worthwhile.

Stay connected with the latest news on web strategy, design, and development.

Sign up for our newsletter.
Categories: Elsewhere

Chromatic: Drush SQL Sync Alternative: SQL Sync Pipe

Planet Drupal - Tue, 30/08/2016 - 19:56

Drush's database sync (drush sql-sync) is a valuable tool, but it is not always an efficient choice when dealing with large databases (think over 1GB). SQL Sync Pipe (drush sql-sync-pipe) is a Drush command that provides an alternative to drush sql-sync: it streams the database dump directly from the source to the destination, whereas sql-sync saves the database dump, transfers it via rsync, and then imports the dump file. As an added bonus, it excludes cache tables by default.


Below are examples from the command's README, syncing the same 1.05GiB database using the two different methods:

drush sql-sync

Command: drush sql-sync @alias.dev @alias.sandbox --no-cache
Transfer size: 88.1MiB (compressed using rsync)
Import size: 1.05GiB
Total time elapsed: 46 minutes 47 seconds

drush sql-sync-pipe

Command: drush sql-sync-pipe @alias.dev @alias.sandbox --progress
Transfer size: 88.1MiB (sent compressed using gzip)
Import size: 1.05GiB
Import & transfer time: 27 minutes 05 seconds
Total time elapsed: 30 minutes 35 seconds

What are you waiting for? Download and install SQL Sync Pipe and get started!

drush dl drush_sql_sync_pipe --destination=$HOME/.drush && drush cc drush
Categories: Elsewhere

Daniel Stender: My work for Debian in August

Planet Debian - Tue, 30/08/2016 - 19:42

Here again is a little list of my humble off-time contributions, which I'm happy to add to the large amount of work we're completing all together each month. Then there is one more "new in Debian" (meaning: "new in unstable") announcement. First, the uploads (a few of them are from July):

  • afl/2.21b-1
  • djvusmooth/0.2.17-1
  • python-bcrypt/3.1.0-1
  • python-latexcodec/1.0.3-4 (closed #830604)
  • pylint-celery/0.3-2 (closed #832826)
  • afl/2.28b-1 (closed #828178)
  • python-afl/0.5.4-1
  • vulture/0.10-1
  • afl/2.30b-1
  • prospector/0.12.2-1
  • pyinfra/0.1.1-1
  • python-afl/0.5.4-2 (fix of elinks_dump_varies_output_with_locale)
  • httpbin/0.5.0-1
  • python-afl/0.5.5-1 (closed #833675)
  • pyinfra/0.1.2-1
  • afl/2.33b-1 (experimental, build/run on llvm 3.8)
  • pylint-flask/0.3-2 (closed #835601)
  • python-djvulibre/0.8-1
  • pylint-flask/0.5-1
  • pytest-localserver/0.3.6-1
  • afl/2.33b-2
  • afl/2.33b-3

New packages:

  • keras/1.0.7-1 (initial packaging into experimental)
  • lasagne/0.1+git20160728.8b66737-1

Sponsored uploads:

  • squirrel3/3.1-4 (closed #831210)

Requested or suggested for packaging:

  • yapf: Python code formatter
  • spacy: industrial-strength natural language processing for Python
  • ralph: asset management and DCIM tool for data centers
  • pytest-cookies: Pytest plugin for testing Cookiecutter templates
  • blocks: another deep learning framework built on top of Theano
  • fuel: data provider for Blocks and Python DNN frameworks in general
New in Debian: Lasagne (deep learning framework)

Now that the mathematical expression compiler Theano is available in Debian, deep learning frameworks and toolkits built on top of it can become available within Debian, too (like Blocks, mentioned before). Theano is a general-purpose computation engine developed with a focus on machine learning and neural networks, featuring its own declarative tensor language. The toolkits built upon it vary in how much they abstract Theano's bare features, i.e. whether they are "thick" or "thin", so to speak. Higher abstraction brings more end-user convenience, up to the level where the architectural components of neural networks are available for combination like pieces in a Lego box, while the more complicated things going on "under the hood" (like how the networks are actually implemented) are hidden. The downside is that thick abstraction layers usually make it difficult to implement novel features like custom layers or loss functions. So more experienced users and specialists might seek out the lower-abstraction toolkits, where you have to think more in terms of Theano.

I've got an initial package of Keras in experimental (1.0.7-1); it runs (only a Python 3 package is available so far) but needs some more work (e.g. building the documentation with mkdocs). Keras is a minimalistic, highly modular DNN library inspired by Torch1. It has a clean, rather easy API for experimenting and fast prototyping. It can also run on top of Google's TensorFlow, and we're going to have it ready for that, too.

Lasagne follows a different approach. Like Keras and Blocks, it is a Python library to create and train multi-layered artificial neural networks in/on Theano for applications like image recognition and classification, speech recognition, image caption generation, or other purposes like style transfers from paintings to pictures2. It abstracts Theano as little as possible, and could be seen as an extension or an add-on rather than an abstraction3. Therefore, some knowledge of how things work in Theano is needed to make full use of this piece of software.
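To make the thick/thin distinction concrete: underneath all of these toolkits, a multi-layer perceptron like the one in Lasagne's MNIST example is just a chain of affine transformations and nonlinearities, which Theano expresses symbolically and compiles. Here is a toy NumPy sketch of the forward pass (shapes and initialization made up for illustration; no Theano involved):

```python
import numpy as np

def relu(x):
    # the rectifier nonlinearity used for the hidden layers
    return np.maximum(x, 0.0)

def softmax(z):
    # subtract the row-wise maximum for numerical stability
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)

# a batch of 4 flattened 28x28 "images", one hidden layer of 500 units, 10 classes
x = rng.random((4, 784))
W1, b1 = 0.01 * rng.standard_normal((784, 500)), np.zeros(500)
W2, b2 = 0.01 * rng.standard_normal((500, 10)), np.zeros(10)

hidden = relu(x @ W1 + b1)           # what a hidden dense layer computes
probs = softmax(hidden @ W2 + b2)    # what a softmax output layer computes

print(probs.shape)  # each of the 4 rows is a probability distribution over 10 classes
```

A thick toolkit hides exactly this (plus gradients and parameter updates) behind layer objects; a thin one like Lasagne keeps the underlying symbolic Theano expressions within reach.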

With the new Debian package (0.1+git20160728.8b66737-1)4, the whole required software stack (the corresponding Theano package, NumPy, SciPy, a BLAS implementation, and the nvidia-cuda-toolkit and NVIDIA kernel driver to carry out computations on the GPU5) can be installed most conveniently with a single apt-get install python{,3}-lasagne command6. If wanted, add the documentation package lasagne-doc for offline use (no running around at remote airports looking for a WiFi spot), either in the Python 2 or the Python 3 branch, or both flavours altogether7. While others have to spend a whole weekend gathering, compiling, and installing the needed libraries, you can grab yourself a fresh cup of coffee. These are the advantages of a fully integrated system (subliminal message, as always: desktop users, switch to Linux!).

When the installation of packages has completed, the MNIST example of Lasagne can be used for a quick check that the whole library stack works properly8:

$ THEANO_FLAGS=device=gpu,floatX=float32 python /usr/share/doc/python-lasagne/examples/mnist.py mlp 5
Using gpu device 0: GeForce 940M (CNMeM is disabled, cuDNN 5005)
Loading data...
Downloading train-images-idx3-ubyte.gz
Downloading train-labels-idx1-ubyte.gz
Downloading t10k-images-idx3-ubyte.gz
Downloading t10k-labels-idx1-ubyte.gz
Building model and compiling functions...
Starting training...
Epoch 1 of 5 took 2.488s
  training loss:        1.217167
  validation loss:      0.407390
  validation accuracy:  88.79 %
Epoch 2 of 5 took 2.460s
  training loss:        0.568058
  validation loss:      0.306875
  validation accuracy:  91.31 %

The example of how to train a neural network on the MNIST database of handwritten digits is refined (it also provides --help) and explained in detail in the Tutorial section of the documentation in /usr/share/doc/lasagne-doc/html/. Very good starting points are also the IPython notebooks available from the tutorials by Eben Olson9 and by Geoffrey French at PyData London 201610. There you have Theano basics, examples of employing convolutional neural networks (CNNs) and recurrent neural networks (RNNs) for a range of different purposes, how to use pre-trained networks for image recognition, and more.

  1. For a quick comparison of Keras and Lasagne with other toolkits, see Alex Rubinsteyn's PyData NYC 2015 presentation on using LSTM (long short term memory) networks on varying length sequence data like Grimm's fairy tales (https://www.youtube.com/watch?v=E92jDCmJNek 27:30 sq.) 

  2. https://github.com/Lasagne/Recipes/tree/master/examples/styletransfer 

  3. Great introduction to Theano and Lasagne by Eben Olson on the PyData NYC 2015: https://www.youtube.com/watch?v=dtGhSE1PFh0 

  4. The package is "freelancing", currently being in collab-maint; setting up a deep learning packaging team within Debian is at the stage of discussion. 

  5. Only available for amd64 and ppc64el. 

  6. At the present time you would need "testing" as a package source in /etc/apt/sources.list to install it from the archive (I have had that for years, but whether Debian Testing can be recommended as a production system is to be discussed elsewhere); it's coming up for Debian 9. The cuda-toolkit and pycuda are in the non-free section of the archive, thus non-free (mostly used in combination with contrib) must be added to main. Plus, the Theano packages merely suggest the CUDA packages, in order to keep Theano in main, so --install-suggests is needed to pull them in automatically with the same command, or they must be given explicitly. 

  7. For dealing with Theano in Debian, see this previous blog posting 

  8. Like suggested in the guideline From Zero to Lasagne on Ubuntu 14.04. cuDNN isn't available as official Debian package yet, but could be downloaded as a .deb package after registration at https://developer.nvidia.com/cudnn. It integrates well out of the box. 

  9. https://github.com/ebenolson/pydata2015 

  10. https://github.com/Britefury/deep-learning-tutorial-pydata2016, video: https://www.youtube.com/watch?v=DlNR1MrK4qE 

Categories: Elsewhere

Christoph Egger: DANE and DNSSEC Monitoring

Planet Debian - Tue, 30/08/2016 - 19:11

At this year's FrOSCon I repeated my presentation on DNSSEC. In the audience, there was the suggestion that proper monitoring plugins for a DANE and DNSSEC infrastructure were not easily available. As I already had some personal tools around and some spare time to burn, I've just started a repository with some useful tools. It's available on my website and has mirrors on GitLab and GitHub. I intend to keep this repository up to date with my personal requirements (which also means adding an XMPP check soon) and am happy to take any contributions (either by mail or as "pull requests" on one of the two mirrors). It currently supports smtp (both ssmtp and starttls) and https checks, as well as checking for a valid DNSSEC configuration of a zone.
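The DANE part of such checks boils down to comparing the certificate presented by the server with the zone's TLSA record. A minimal sketch of the comparison step (a hypothetical helper, not the repository's actual code; the DNS lookup, DNSSEC validation, and selector 1/SPKI handling are left out):

```python
import hashlib

def tlsa_matches(cert_der, tlsa_data, mtype=1):
    """Compare a certificate (DER bytes, TLSA selector 0 = full certificate)
    with the association data of a TLSA record.

    mtype is the TLSA matching type: 0 = exact, 1 = SHA-256, 2 = SHA-512.
    """
    if mtype == 0:
        return cert_der == tlsa_data
    if mtype == 1:
        return hashlib.sha256(cert_der).digest() == tlsa_data
    if mtype == 2:
        return hashlib.sha512(cert_der).digest() == tlsa_data
    raise ValueError("unknown TLSA matching type: %r" % mtype)
```

In a real plugin, the DER certificate would come from ssl.SSLSocket.getpeercert(binary_form=True) after the (START)TLS handshake, and tlsa_data from a DNSSEC-validated lookup of e.g. _25._tcp.<mailhost>.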

While working on it, it turned out that some things can be complicated. My language of choice was Python 3 (if only because the ssl library has improved a lot since 2.7); however, the ldns and unbound bindings in Debian lack Python 3 support. This seems fixable, as the source in Debian builds and is usable with Python 3, so it just needs packaging adjustments. Funnily, the ldns module, which is only needed for check_dnssec, is currently buggy in Debian for both Python 2 and Python 3, and ldns' Python 3 support is somewhat lacking, so I spent several hours hunting SWIG problems.

Categories: Elsewhere

Rhonda D'Vine: Thomas D

Planet Debian - Tue, 30/08/2016 - 18:12

It's not often that an artist touches you deeply, but Thomas D managed to do so, to the point that I am (only half) jokingly saying that if there were a church of Thomas D I would absolutely join it. His lyrics always stood out for me in the context of the band through which I found out about him, and the way he lives his life is definitely outstanding. And additionally there are these special songs that give so much and share a lot. I feel sorry for the people who don't understand German and so cannot appreciate him.

Here are three songs that I suggest you to listen to closely:

  • Fluss: This song gave me a lot of strength in a difficult time of my life. And it still works wonders to get my ass up off the floor when I feel down.
  • Gebet an den Planeten: This song gives me shivers. Let the lyrics touch you. And take the time to think about them.
  • An alle Hinterbliebenen: This song might be a bit difficult to deal with. It's about loss and how to deal with suffering.

Like always, enjoy!


Categories: Elsewhere

Drupal.org blog: Documentation overhaul

Planet Drupal - Tue, 30/08/2016 - 18:11

One of the biggest content areas on Drupal.org—and one of the most important assets of any open source project—is documentation. Community-written Drupal documentation consists of about 10,000 pages. Preparations for a complete overhaul of the documentation tools had been in the works for quite some time, and in recent weeks we finally started to roll out the changes on the live site.


Improving documentation on Drupal.org has been part of a larger effort to restructure content on the site based on the content strategy we developed.

The new section comes after a few we launched earlier in the year. It also uses our new visual system, which will slowly expand into other areas.

Goals and process

The overall goal for the new Documentation section is to increase the quality of the community documentation.

On a more tactical level, we want to:

  • Introduce the concept of "maintainers" for distinct parts of documentation
  • Flatten deep documentation hierarchy
  • Split documentation per major Drupal version
  • Notify people about edits or new documentation
  • Make comments more useful

To achieve those goals, we went through the following process:

First, we wrote a bunch of user stories based on our user research and the story map exercise we went through with the Documentation Working Group members. Those stories cover all kinds of things different types of users do while using documentation tools.

We then wireframed our ideas for how the new documentation system should look and work. We ran a number of remote and in person usability testing sessions on those wireframes.

Our next step was to incorporate the feedback, update our wireframes, and create actual designs. Then we tested them again, in person, during DrupalCamp London, incorporated feedback once more, and started building.

The new system

So, how does the new documentation system work exactly? It is based on two new content types:

  1. Documentation guide: a container content type. It will group documentation pages on a specific topic, and provide an ability to assign 'maintainers' for this group of pages (similar to maintainers for contributed projects). Additionally, users will be able to follow the guide and receive notifications about new pages added or existing pages edited.
  2. Documentation page: a content type for the actual documentation content. These live inside of documentation guides.

Example of a new documentation guide

All of the documentation is split per major Drupal version, which means every documentation guide or page lives inside of one of a few top level 'buckets', e.g. Drupal 7 documentation, Drupal 8 documentation.
It is also possible to connect guides and pages to each other via a 'Related content' field, which should make it easier to discover relevant information. One of our next to-do’s is to provide an easy way to connect documentation guides to projects, enabling 'official' project documentation functionality.

More information on various design decisions we made for the new documentation system, and the reasons behind them, can be found in our DrupalCon New Orleans session (slides).

Current status

Right now, we have the new content types and related tools ready on Drupal.org.
We are currently migrating existing documentation (all 10,000 pages!) into the new system. The first step is generic documentation (e.g. 'Structure Guide'), with contributed projects documentation to follow later.

While working on the migration, we are recruiting maintainers for the new guides. If you are interested in helping out, sign up in the issue. Please only sign up if you actually have some time to work on documentation in the near future.

There is a lot of work to be done post-migration (both by guide maintainers and regular readers/editors). The content is being migrated as-is, and it needs to be adapted for the new system. This means almost every single page needs to be edited. New fields (such as Summary) need to be filled out with meaningful text (to replace text automatically generated by the migration script). A lot of pages include information for both Drupal 7 and Drupal 8; this content needs to be split, with Drupal 8 information moved to pages in the appropriate version of the guide. These are just some of the steps that need to happen once the documentation has been migrated into the new system.

Next steps

As staff, we have a few follow-up tasks for minor improvements to the content types and tools. However, the bulk of the work is editing and improving the actual documentation, as I described above. This is in your hands now. Not only do we not have enough staff members to edit every single documentation page in a reasonable amount of time, we are also not subject matter experts for many of the topics, and so can't provide meaningful edits. The tools are ready, now it is up to the community to pick them up and write great documentation.

Example of a documentation page

Thank you

Lastly we want to say thanks.

Thanks to all the community volunteers who wrote those 10,000 pages over the years. Thanks to the Documentation Working Group members for their expertise, insight, and patience.

And, of course, thanks to staff. Unfortunately, due to recent changes to the Engineering team, this will be the last section we'll have resources to work on for a while. This was a fun and important project to work on, and we are glad that we got to finish it. It is a beautiful legacy of the work we did together with some of our former colleagues: DyanneNova, japerry, and joshuami. Thank you!

Categories: Elsewhere

Sooper Drupal Themes: "It starts with something small" Next Gen Drupal Themes From 1.0 To 2.5 In a Single Year

Planet Drupal - Tue, 30/08/2016 - 18:00

Just over a year ago I started with something small. I combined some existing technologies to create a drag and drop builder/theme. A combination of jQuery UI, CKEditor, Bootstrap 3 and Drupal. The result was far from perfect but interesting enough to get a bunch of people excited and involved in helping me find out how to improve the product.

Above: our Glazed main demo. Click to view full demo.

Glazed Construction Design, Product Updates

Today's blog celebrates the new Construction design that is available today to all SooperThemes members. We've been working towards this release for nearly a year and the reason it took so long is that the core of the Glazed framework theme and drag and drop builder needed to be 100% up to the job. The reliability, sophistication and flexibility of our framework theme and drag and drop builder are lightyears ahead of the minimum viable product we released 14 months ago. To our customers who joined us in the beginning and are now renewing for the second year: Thank you so much! 

Our construction theme (click to view demo) is not the prettiest design I've ever created, but that's not really the point. The point is that it looks like a construction theme all the way. It doesn't look like a generic theme that was customized to look a little bit more "heavy industry". Our Glazed framework theme allows you to completely design every aspect of your Drupal site, from typography to colors, grid design and navigation. Combine this with our drag and drop builder, and everything you need in a business website can be designed and developed in the browser. From a blank canvas all the way down to the pages, views and forms, everything in this demo was created in the browser, without writing a single line of code.

No templates were edited, no CSS was written, not even a single hand coded HTML tag was needed to build this unique 40 page Glazed demo.

For more details about today's product updates check out the Glazed Changelog and Carbide Builder Changelog.

Why WordPress Is Taking Turf From Drupal 

It's simple economics. You can buy a WordPress theme for $59 USD, and after a few days of customization and content editing your client is impressed and your project is comfortably on schedule and on budget. WordPress themes have become a major source of innovation in recent years, offering drag and drop builders and niche-specific features for magazine websites, directory websites and all sorts of small business websites.

Themes are becoming more sophisticated and encroaching on Drupal territory, for example education, magazines and community websites.

10 years ago I moved away from WordPress and Mambo because I simply felt Drupal was better, and I still feel that way. While these WordPress themes are loaded with features, they lack Drupal's modularity, coding standards and interoperability. I sincerely believe Drupal can be a better platform for all these themes. After all, Drupal has built in support for content types, relations, versioning, configuration management, and superior user management and access control.

Starting from today I will focus on offering as many niche designs as possible for small businesses, large businesses, NGOs, governments, educators and online magazines. I hope that you will join sooperthemes.com and help us with our mission to show the world that Drupal is capable of doing what WordPress does and more.

Our mission as SooperThemes is to promote Drupal as the #1 platform to author content on the web and at the same time to remain the #1 provider of designs and design tools for Drupal. See our roadmap for more details. The way to make Drupal the number one choice again is through the same economics that made WordPress so big, so our initial focus is to disrupt the market with a 90% decrease in costs of building and running a Drupal website.

Enjoying The Ride 

The past year has been a big adventure but also a lot of grinding, bug fixing, technical debt problems and all the other things that new products face when they enter the market. However, it has mostly been fun and exciting to develop these new technologies for the Drupal community, and the reception of our updates is really motivating and is powering our new developments.

Although pioneering in the area of next-gen (drag and drop) Drupal themes meant facing a steep learning curve, it can be said that Drupal is actually easier to build on in the long term. Our drag and drop builder is very similar to a frontend framework that uses the CMS as an API. This is something that needs hooks and AJAX capabilities, which Drupal provides out of the box.

If you are reading this as a prospective customer: please join Sooperthemes and enjoy the ride with us. To our existing customers: keep your eyes open for exciting new features and designs. 

Categories: Elsewhere

Tag1 Consulting: Drupal 6 Long Term Support is My Favorite Feature of Drupal 8

Planet Drupal - Tue, 30/08/2016 - 17:04
rfay Tue, 08/30/2016 - 08:04

Long Term Support for Drupal 6 might be my favorite new feature included in Drupal 8. (I know, that might be stretching things for the fundamentally awesome step forward that Drupal 8 is, but bear with me.)

Categories: Elsewhere

Joachim Breitner: Explicit vertical alignment in Haskell

Planet Debian - Tue, 30/08/2016 - 15:35

Chris Done's automatic Haskell formatter hindent has been released in a new version and is getting quite a bit of deserved attention. He is polling Haskell programmers on whether two or four spaces are the right indentation. But that is just cosmetics…

I am in principle very much in favor of automatic formatting, and I hope that a tool like hindent will eventually be better at formatting code than a human.

But it currently is not there yet. Code is literature meant to be read, and good code goes to great lengths to be easily readable; its formatting can carry semantic information.

The Haskell syntax was (at least I get that impression) designed to allow authors to write nice-looking, easy-to-understand code. One important tool here is vertical alignment of corresponding concepts on different lines. Compare

maze :: Integer -> Integer -> Integer
maze x y
  | abs x > 4  || abs y > 4  = 0
  | abs x == 4 || abs y == 4 = 1
  | x == 2     && y <= 0     = 1
  | x == 3     && y <= 0     = 3
  | x >= -2    && y == 0     = 4
  | otherwise                = 2


maze :: Integer -> Integer -> Integer
maze x y
  | abs x > 4 || abs y > 4 = 0
  | abs x == 4 || abs y == 4 = 1
  | x == 2 && y <= 0 = 1
  | x == 3 && y <= 0 = 3
  | x >= -2 && y == 0 = 4
  | otherwise = 2

The former is a quick to grasp specification, the latter (the output of hindent at the moment) is a desert of numbers and operators.

I see two ways forward:

  • Tools like hindent get improved to the point that they are able to detect such patterns, and indent them properly (which would be great, but very tricky, and probably never complete) or
  • We give the user a way to indicate intentional alignment in a non-obtrusive way that gets detected and preserved by the tool.

What could such ways be?

  • For guards, it could simply detect that within one function definition, there are multiple | on the same column, and keep them aligned.
  • More generally, one could take the approach of lhs2TeX (which, IMHO, with careful input, a proportional font and the great polytable LaTeX backend, produces the most pleasing code listings). There, two spaces or more indicate an alignment point, and if two such alignment points are in the same column, their alignment is preserved – even if there are lines in between!

    With the latter approach, the code up there would be written

    maze :: Integer -> Integer -> Integer
    maze x y
      | abs x > 4   || abs y > 4   = 0
      | abs x == 4  || abs y == 4  = 1
      | x == 2      && y <= 0      = 1
      | x == 3      && y <= 0      = 3
      | x >= -2     && y == 0      = 4
      | otherwise                  = 2

    And now the intended alignment is explicit.
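The two-spaces-as-alignment-point convention is also easy to detect mechanically. As a rough sketch (in Python, purely illustrative and not part of any actual formatter), one can record the columns where a run of two or more spaces ends and keep only those that recur on every line:

```python
import re

def alignment_columns(line):
    """Columns where a token starts after a run of two or more spaces."""
    return {m.end() for m in re.finditer(r'  +', line)}

lines = [
    "  | abs x > 4   || abs y > 4   = 0",
    "  | abs x == 4  || abs y == 4  = 1",
]
# Alignment points shared by every line are intentional and
# should be preserved by the formatter.
shared = set.intersection(*(alignment_columns(l) for l in lines))
print(sorted(shared))  # [2, 16, 31]
```

Here columns 2, 16 and 31 correspond to the `|`, the `||`/`&&` and the `=`, so the formatter would know to re-align those tokens after reformatting each line.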

(This post is cross-posted on reddit.)

Categories: Elsewhere

InternetDevels: Collection of great free responsive Drupal themes 2016

Planet Drupal - Tue, 30/08/2016 - 14:53

According to the best practices of responsive web design, a website neatly adapts to whatever screen it is viewed on. And according to the best practices of our blog, we make collections of free responsive Drupal themes for you to use.

Read more
Categories: Elsewhere

Petter Reinholdtsen: First draft Norwegian Bokmål edition of The Debian Administrator's Handbook now public

Planet Debian - Tue, 30/08/2016 - 10:10

In April we started to work on a Norwegian Bokmål edition of the "open access" book on how to set up and administrate a Debian system. Today I am happy to report that the first draft is now publicly available. You can find it via the "Get the Debian Administrator's Handbook" page (under Other languages). The first eight chapters have a first draft translation, and we are working on proofreading the content. If you want to help out, please start contributing via the hosted Weblate project page, and get in touch using the translators mailing list. Please also check out the instructions for contributors. A good way to contribute is to proofread the text and update Weblate if you find errors.

Our goal is still to make the Norwegian book available on paper as well as electronic form.

Categories: Elsewhere

LevelTen Interactive: Hang Out With Some Cool People-- We Are Hiring!

Planet Drupal - Tue, 30/08/2016 - 07:00

Are you looking for a job in the tech world? Have you ever worked at a company that practiced Agile Methodology? Then this is the job for you! 

Who We Are:

We’re a small web development and marketing agency near Southern Methodist University in Dallas, Texas. We like the occasional ice cream social and NERF gun battle, but most of all, we enjoy making Drupal websites and helping clients' businesses grow.

The Position:

PT Technical...Read more

Categories: Elsewhere

Dirk Eddelbuettel: RProtoBuf 0.4.5: now with protobuf v2 and v3!

Planet Debian - Tue, 30/08/2016 - 04:55

A few short weeks after the 0.4.4 release of RProtoBuf, we are happy to announce a new version 0.4.5 which appeared on CRAN earlier today.

RProtoBuf provides R bindings for the Google Protocol Buffers ("Protobuf") data encoding library used and released by Google, and deployed as a language and operating-system agnostic protocol by numerous projects.

This release brings support for the recently released 'version 3' Protocol Buffers standard, used e.g. by the (very exciting) gRPC project (which was just released as version 1.0). RProtoBuf continues to support 'version 2' but now also cleanly supports 'version 3'.

Changes in RProtoBuf version 0.4.5 (2016-08-29)
  • Support for version 3 of the Protocol Buffers API

  • Added 'syntax = "proto2";' to all proto files (PR #17)

  • Updated Travis CI script to test against both versions 2 and 3 using custom-built .deb packages of version 3 (PR #16)

  • Improved build system with support for custom CXXFLAGS (Craig Radcliffe in PR #15)
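As an illustration of the syntax declaration the changelog entry refers to (a sketch, not taken from the release notes; the message and field names here are hypothetical):

```proto
// Declaring the syntax version explicitly: .proto files without a
// declaration are treated as proto2 by the protobuf 3 toolchain,
// which emits a warning.
syntax = "proto3";

message Point {
  int32 x = 1;  // proto3 fields are optional by default;
  int32 y = 2;  // the "required" label no longer exists.
}
```

Adding `syntax = "proto2";` to the existing files, as done in PR #17, keeps them compiling cleanly under both protobuf generations.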

CRANberries also provides a diff to the previous release. The RProtoBuf page has an older package vignette, a 'quick' overview vignette, a unit test summary vignette, and the pre-print for the JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: Elsewhere

