Feed aggregator

Freelock: Performance problem: N! database calls

Planet Drupal - Wed, 16/07/2014 - 18:34

Kicking off some posts about various performance challenges we've fixed.

N Factorial

During a code review for a site we were taking over, I found this little gem:

<?php

function charity_view_views_pre_render($view) {
  // This code takes the rows returned from a view query, after the query
  // has run, and formats them for display.
  // Snip to the code of interest:
  usort($view->result, 'charity_view_sort_popular');
}
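
The post snips out the comparison callback itself, so here is a purely hypothetical sketch of the shape such a comparator tends to take (the real callback isn't shown in the post; the table and column names below are invented):

<?php

// Hypothetical comparator, for illustration only. A callback like this
// runs two database queries on every comparison, so sorting N rows costs
// O(N log N) queries, and far more if each lookup fans out further.
function charity_view_sort_popular($a, $b) {
  $pop_a = db_query('SELECT COUNT(*) FROM {votes} WHERE nid = :nid',
    array(':nid' => $a->nid))->fetchField();
  $pop_b = db_query('SELECT COUNT(*) FROM {votes} WHERE nid = :nid',
    array(':nid' => $b->nid))->fetchField();
  return $pop_b - $pop_a;
}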

Tags: Performance, Technical, Drupal, Drupal Planet

Drupal Association News: How Your Membership Gives Back to the Community

Planet Drupal - Wed, 16/07/2014 - 17:05

When people ask me, “What’s Drupal?” I find it hard to give a simple answer. In a technical sense, Drupal is a CMS, but to me, and to many others, it’s far more than that. It’s a community full of amazing people with inspiring leaders and huge hearts.

At DrupalCon Austin, I was able to share several stories about community members who really pushed the project further, all with the help of community cultivation grants selected and financed by the men and women who love Drupal. I want to thank Gabor, Sheng, and Tatiana for letting me tell their stories, and now I'd like to share them with all of you.

Drupal Dev Days: A Week of Sprints in Szeged

Earlier this year, Gábor Hojtsy organized a Dev Days event that was a huge success: from March 24 to March 30, three hundred people gathered in Szeged to sprint together on Drupal 8 core, Drupal 8 Search and Entity Field API, Documentation, Migration, MongoDB, REST and authentication, Rules for Drupal 8 and, of course, Drupal.org.

There was so much happening that they almost brought Drupal.org to a halt, but fortunately everything came out OK, and we got huge improvements to Drupal.org as a result.

There were big benefits to Drupal 8 at Szeged, too. Some of the things that our great sprinters accomplished were:

  •  115 core commits with 706 files changed, 10967 insertions(+), 6849 deletions(-)
  •  19 beta blocker and beta target issues fixed

It was the community that made Dev Days Szeged so great. By turning out and sprinting, they made big improvements to the project, while a community cultivation grant funded part of the Internet fees. Connectivity is an important element of any sprint, but the real significance is that Drupal Association members who could not attend the sprints, or are not in a role to contribute code, were still able to help achieve this success by funding it through their membership.

DrupalCamp Shanghai

Sheng is the leader of the Shanghai Drupal community, and as an ex-New Yorker, he knew firsthand how important Drupal meetups and camps are for networking and learning. After he moved to Shanghai, he decided that he wanted to share the valuable experience of face-to-face time with his new local community, which had skilled developers who were mostly disconnected from each other and from the wider global community.

After building momentum by holding a number of meetups, Sheng applied for grants in 2013 and 2014 to put on a Shanghai DrupalCamp. With the funds, he flew in a Drupal rock star to keynote each of the camps (Forest Mars and John Albin). The camp doubled in size, and hundreds of people now come out to these camps.

While the investment of flying John Albin out to Shanghai from Taiwan was relatively small, the impact and ROI were huge, both for the camps and for the greater Drupal community: camp attendees learned to contribute back to the larger global community, almost like a small R&D investment. It wouldn’t have been possible without a community grant.

DrupalCamp Donetsk

Tatiana of the Drupal Association worked with her colleagues to put together a DrupalCamp in Donetsk, in spite of the revolution. A lot of people came together and connected, both with each other and with the global community, and a grant paid for the food and coffee. For us at the Association, that grant stands as a sign of positive support from the greater Drupal community in spite of the strife that was going on in Donetsk.

In the end, lots of thanks need to go around. First, I’d like to thank Gabor, Sheng and Tatiana and all community leaders for turning your vision into reality and for the time and passion you pour into Drupal. We appreciate all that you do to unite and grow Drupal.

Secondly, none of this would be possible without the three community leaders who manage the volunteer program: Mike Anello, Amy Scarvada, and Thomas Turnbull. Your passion for growing the community is doing great things.

Finally, I want to issue a big thank you to our Drupal Association members for making these stories a reality. If you want to become a member and help more of these programs around the world come to life, please sign up today at https://assoc.drupal.org/membership.


Matthias Klumpp: AppStream 0.7 specification and library released

Planet Debian - Wed, 16/07/2014 - 17:03

Today I am very happy to announce the release of AppStream 0.7, the second-largest release (judging by commit number) after 0.6. AppStream 0.7 brings many new features for the specification, adds lots of good stuff to libappstream, introduces a new libappstream-qt library for Qt developers and, as always, fixes some bugs.

Unfortunately we broke the API/ABI of libappstream, so please adjust your code accordingly. Apart from that, all other changes are backwards-compatible. So, here is an overview of what’s new in AppStream 0.7:

Specification changes

Distributors may now specify a new <languages/> tag in their distribution XML, providing information about the languages a component supports and the completion percentage for each language. This allows software centers to apply smart filtering on applications to highlight the ones which are available in the user’s native language.

A new addon component type was added to represent software which is designed to be used together with a specific other application (think of a Firefox addon or a GNOME Shell extension). Software centers can group the addons together with their main application to provide an easy way for users to install additional functionality for existing applications.

The <provides/> tag gained a new dbus item-type to expose D-Bus interface names the component provides to the outside world. This means that in the future it will be possible to search for components providing a specific D-Bus service:

$ appstream-index what-provides dbus org.freedesktop.PackageKit.desktop system

(if you are using the CLI tool)

A <developer_name/> tag was added to the generic component definition to define the name of the component developer in a human-readable form. Possible values are, for example “The KDE Community”, “GNOME Developers” or even the developer’s full name. This value can be (optionally) translated and will be displayed in software-centers.

An <update_contact/> tag was added to the specification, to provide a convenient way for distributors to reach upstream to talk about changes made to their metadata or issues with the latest software update. This tag was already used by some projects before, and has now been added to the official specification.

Timestamps in <release/> tags must now be UNIX epochs, YYYYMMDD is no longer valid (fortunately, everyone is already using UNIX epochs).

Last but not least, the <pkgname/> tag is now allowed multiple times per component. We still recommend creating metapackages according to the contents the upstream metadata describes and placing the file there. However, in some cases defining one component to be in multiple packages is a shortcut to make metadata available correctly without excessive package-tuning (which can become difficult if a <provides/> tag needs to be satisfied).
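
To make these specification changes concrete, here is a hypothetical component entry combining several of the new tags (all values are invented, and which tags appear in upstream versus distribution XML follows the rules of the specification):

<component type="desktop">
  <id>example.desktop</id>
  <pkgname>example</pkgname>
  <pkgname>example-data</pkgname>
  <developer_name>The Example Project</developer_name>
  <update_contact>appstream@example.org</update_contact>
  <languages>
    <lang percentage="100">en_GB</lang>
    <lang percentage="96">de</lang>
  </languages>
</component>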

As a small side note: the multiarch path in /usr/share/appdata is now deprecated, because we think that we can live without it (by shipping -data packages per library and using smarter AppStream metadata generators which take advantage of the ability to define multiple <pkgname/> tags).

Documentation updates

In general, the documentation of the specification has been reworked to be easier to understand and to include less duplication of information. We now use extensive cross-linking to show you the information you need in order to write metadata for your upstream project, or to implement a metadata generator for your distribution.

Because the specification needs to define the allowed tags completely and contain as much information as possible, it is not very easy to digest for upstream authors who just want some metadata shipped quickly. In order to help them, we now have “Quickstart pages” in the documentation, which are rich in examples and contain the most important subset of information you need to write a good metadata file. These quickstart guides already exist for desktop applications and addons; more will follow in the future.

We also have an explicit section dealing with the question “How do I translate upstream metadata?” now.

More changes to the docs are planned for the next point releases. You can find the full project documentation at Freedesktop.

AppStream GObject library and tools

The libappstream library also received lots of changes. The most important one: we switched from using LGPL-3+ to LGPL-2.1+. People who know me know that I love the v3 family of GPL licenses: I like it for its tivoization protection, its explicit compatibility with some important other licenses, and cosmetic details, like entities not losing their right to use the software forever after a license violation. However, an LGPL-3+ library does not mix well with projects licensed under other open source licenses, mainly GPL-2-only projects. I want libappstream to be usable by anyone without forcing the project to change its license. For some reason, using the library from proprietary code is easier than using it from a GPL-2-only open source project. The license change was also a popular request of people wanting to use the library, so I made the switch with 0.7. If you want to know more about the LGPL-3 issues, I recommend reading this blogpost by Nikos (GnuTLS).

On the code-side, libappstream received a large pile of bugfixes and some internal restructuring. This makes the cache builder about 5% faster (depending on your system and the amount of metadata which needs to be processed) and prepares for future changes (e.g. I plan to obsolete PackageKit’s desktop-file-database in the long term).

The library also brings back support for legacy AppData files, which it can now read. However, appstream-validate will not validate these files (and kindly ask you to migrate to the new format).

The appstream-index tool received some changes, making its command-line interface a bit more modern. It is also possible now to place the Xapian cache at arbitrary locations, which is a nice feature for developers.

Additionally, the testsuite got improved and should now work on systems which do not have metadata installed.

Of course, libappstream also implements all features of the new 0.7 specification.

With the 0.7 release, some symbols were removed which have been deprecated for a few releases, most notably as_component_get/set_idname, as_database_find_components_by_str, as_component_get/set_homepage and the “pkgname” property of AsComponent (which is now a string array and called “pkgnames”). API level was bumped to 1.

Appstream-Qt

A Qt library to access AppStream data has been added. So if you want to use AppStream metadata in your Qt application, you can easily do that now without touching any GLib/GObject based code!

Special thanks to Sune Vuorela for his nice rework of the Qt library!

And that’s it with the changes for now! Thanks to everyone who helped make 0.7 ready, be it through feedback, contributions to the documentation, translations or code. You can get the release tarballs at Freedesktop. Have fun!


Dirk Eddelbuettel: Introducing RcppParallel: Getting R and C++ to work (some more) in parallel

Planet Debian - Wed, 16/07/2014 - 16:56
A common theme over the last few decades was that we could afford to simply sit back and let computer (hardware) engineers take care of increases in computing speed thanks to Moore's law. That same line of thought now frequently points out that we are getting closer and closer to the physical limits of what Moore's law can do for us.

So the new best hope is (and has been) parallel processing. Even our smartphones have multiple cores, and most if not all retail PCs now possess two, four or more cores. Real computers, aka somewhat decent servers, can be had with 24, 32 or more cores as well, and all that is before we even consider GPU coprocessors or other upcoming changes.

And sometimes our tasks are embarrassingly parallel, as is the case with many data-parallel jobs: we can use higher-level operations such as those offered by the base R package parallel to spawn multiple processing tasks and gather the results. I covered all this in some detail in previous talks on High Performance Computing with R (and you can also consult the Task View on High Performance Computing with R which I edit).
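
As a minimal sketch of that data-parallel style (slow_task() below is a stand-in for your own function):

library(parallel)
# Run a hypothetical slow_task() over 100 inputs on 4 cores and gather
# the results into a list. (mclapply() relies on forking, so on Windows
# use makeCluster() + parLapply() instead.)
results <- mclapply(1:100, function(i) slow_task(i), mc.cores = 4)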

But sometimes we can't use data-parallel approaches. Hence we have to redo our algorithms. Which is really hard. R itself has been relying on the (fairly mature) OpenMP standard for some of its operations. Luke Tierney's (awesome) keynote in May at our (sixth) R/Finance conference mentioned some of the issues related to OpenMP. One which matters is that OpenMP works really well on Linux, and either not so well (Windows) or not at all (OS X, due to the usual issue with the gcc/clang switch enforced by Apple; the good news is that the OpenMP toolchain is expected to make it to OS X in some more performant form "soon"). R is still expected to make wider use of OpenMP in future versions.

Another tool which has been around for a few years, and which can be considered to be equally mature is the Intel Threaded Building Blocks library, or TBB. JJ recently started to wrap this up for use by R. The first approach resulted in a (now superseded, see below) package TBB. But hardware and OS issues bite once again, as the Intel TBB is not really building that well for the Windows toolchain used by R (and based on MinGW).

(And yes, there are two more options. But Boost Threads requires linking, which precludes easy use via e.g. our header-only BH package. And C++11 with its threads library (based on Boost Threads) is not yet as widely available as R and Rcpp, which means that it is not a real deployment option yet.)

Now, JJ, being as awesome as he is, went back to the drawing board and integrated a second threading toolkit: TinyThread++, a small header-only library without further dependencies. Not as feature-rich as Intel Threaded Building Blocks, but at least available everywhere. So a new package RcppParallel, so far only on GitHub, wraps around both TinyThread++ and Intel Threaded Building Blocks and offers a consistent interface available on all platforms used by R.

Better still, JJ also authored several pieces demonstrating this new package for the Rcpp Gallery:

All four are interesting and demonstrate different aspects of parallel computing via RcppParallel. But the last article is key. Based on a question by Jim Bullard, and then written with Jim, it shows how a particular matrix distance metric (which is missing from R) can be implemented in a serial manner in both R, and also via Rcpp. The key implementation, however, uses both Rcpp and RcppParallel and thereby achieves a truly impressive speed gain, as the gains from using compiled code (via Rcpp) and from using a parallel algorithm (via RcppParallel) are multiplicative! Between JJ's and my four-core machines the gain was between 200 and 300 fold, which is rather considerable. For kicks, I also used a much bigger machine at work which came in at an even larger speed gain (but gains become clearly sublinear as the number of cores increases; there are however some tuning parameters).

So these are exciting times. I am sure there will be lots more to come. For now, head over to the RcppParallel package and start playing. Further contributions to the Rcpp Gallery are not only welcome but strongly encouraged.

Acquia: Drupal for Digital Commerce – Bojan Živanović

Planet Drupal - Wed, 16/07/2014 - 14:49

Bojan and I chatted at Drupal Dev Days 2014 about one of the newest and most important weapons available in Drupal's eCommerce arsenal: recurring billing for digital commerce in Drupal Commerce.


Vincent Sanders

Planet Debian - Wed, 16/07/2014 - 13:04
It is no great secret that my colleagues at Collabora have been doing work with the Raspberry Pi Foundation.

My desk is very near Marco and I often see him working with the various Pi boards. Recently he obtained one of the new B+ units for testing, and I thought it looked a little sad sitting naked on his desk.

To remedy this bare board problem I designed and built a laser-cut case for him, and now that the B+ has been publicly released I can make the design freely available.

The design is completely original, though it is inspired by several other plastic "clip" type designs I have seen. Originally I created and debugged the case design for my Parallella, though tweaking it for the Pi was pretty easy.

The design is under a CC attribution licence, and I ought to say that my employer is in no way responsible for this; it's all my own fault.

Wouter Verhelst: Reprepro for RPM

Planet Debian - Wed, 16/07/2014 - 12:43

Dear lazyweb,

reprepro is a great tool. I hand it some configuration and a bunch of packages, and it creates the necessary directory structure, moves the packages to the right location, and generates a (signed) Debian package repository. Obviously it would be possible to do everything reprepro does by hand, by calling things like cp and dpkg-scanpackages and gpg directly, but it's easy to forget a step when doing so, and having a tool that just does things for me is wonderful. The fact that it does so only on request (i.e., when I know something has changed, rather than "once every so often") is also quite useful.
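
For those who have not seen it, that configuration-driven workflow amounts to something like the following minimal sketch (distribution names and values are illustrative):

$ cat conf/distributions
Origin: example.org
Label: Example repository
Codename: wheezy
Architectures: amd64 source
Components: main
SignWith: yes

$ reprepro -b . includedeb wheezy hello_1.0-1_amd64.deb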

At work, I currently need to maintain a bunch of package repositories. The Debian package archives there are maintained with reprepro, but I currently maintain the RPM archives pretty much by hand: create the correct directories, copy the right files to the right places, run createrepo over the correct directories (and in the case of the OpenSUSE repository, also run gpg), and a bunch of other things specific to our local installation. As if to prove my point above, apparently I forgot to do a few things there, meaning that some of the RPM repositories didn't actually work correctly, and my testing didn't catch it.

Which makes me wonder how RPM package repositories are usually maintained. When one needs to maintain just a bunch of packages for a number of servers, running createrepo manually isn't too much of a problem. When it gets beyond your own systems, however, and when you need to support multiple builds for multiple versions of multiple distributions, having to maintain all those repositories by hand is probably not the best idea.

So, dear lazyweb: how do large RPM repositories maintain state of the packages, the distributions they belong to, and similar things?

Please don't say "custom scripts"


Juliana Louback: Become an Open-source Contributor Video Conference

Planet Debian - Wed, 16/07/2014 - 12:06

A couple of weeks ago I was contacted by Yehuda Korotkin through one of the Debian mailing lists. Yehuda is a tech professor at one of Israel’s leading colleges for women. He proposed a video conference to present the open source community to his class and explain how they can contribute to open source projects. I volunteered to participate, and last Monday (July 14th, 2014) we had our virtual meeting, which I hope was the first of many.

Shauna G. also participated from Boston, making it an Israel-USA-Brazil meetup. Pretty impressive, eh? The conference was hosted on Google Hangouts; the video is available for viewing here. Run time is around 2.5 hours, so to save you time I’ve kindly posted a summary of the conference! To be frank, I look ghastly half the time, so I’d much prefer that you read this post.

First there was a Q&A period which was more of a discussion panel, followed by a hands-on session where the girls made their first contribution to an open source project. Here’s the gist of it:

Q&A

  • What is open source [software]? Software that allows free access to its source code (ergo ‘open’), permitting analysis and any modifications to the original code. All these permissions are contained in a license included in the software. Note that the term ‘open source’ is now applied to a lot of things other than software, and I assume there’s a different definition for those cases.

  • Who is the community behind open source software? Shauna pointed out that there are many kinds of projects and many kinds of communities backing them. As an example of a for-profit model, Shauna cited Red Hat, which offers Linux (an open source OS) for free and profits from support services. Some projects are supported by NGOs (example: Center for Open Science) with a specific objective; a series of other projects are volunteer-based or the result of a hobby project. The contributors vary accordingly: they are not only programmers but also people with non-technical backgrounds. In sum, there’s a lot of variety in the open source community as a whole; details of structure vary case to case.

  • Why is it important that women get involved in technology? Men and women are different for a series of reasons, among them our biological composition, influences of society and life experiences. Technology is for everyone, male or female, and we need to be able to design products that will suit everyone well. If the development team is comprised of only one sex or the other, it’s likely that factors will be overlooked or ignored, and the result is not a democratic experience. Shauna gave some examples of this, such as voice-powered location search software that could easily identify stereotypically ‘male’ keywords but not many of interest to females.

    In addition to this, it’s known that there is a severe workforce deficiency in the tech industry. One proposed solution is to encourage more women to follow a career in tech. The argument is based on the assumption that any boy who is remotely interested in tech has a high likelihood of studying tech, but not every girl. Often girls are not encouraged to study STEM-related topics, and in many places tech is considered inappropriate for women. I’ve read that tech companies’ engineering teams are 12% female and 88% male. That number is consistent with the stats of female college grads for STEM majors. I think boys who love tech will keep getting into tech, but there are a lot of girls who could be great engineers if they only gave it a shot. Women should be encouraged to pursue a tech career not only so they’ll get to work in such an incredible field but also so they can help close the gap in supply and demand present in the tech workforce today.

  • Why is open source important? We don’t always get everything we want or need from off-the-shelf software. The great thing about open source is that you are free to tweak the code at will and add whatever features you’d like. Sometimes there are even more important necessities, such as security requirements. With open source you don’t need to just take the manufacturer’s word that the code is bulletproof; open it and check it out yourself. Another factor that increases the value of open source is its stability. As Linus’s Law famously states, “given enough eyeballs, all bugs are shallow”. With so many developers opening, studying and testing the code, bugs are quickly identified and removed. The fact that open source software is ‘free’ doesn’t reflect negatively on the quality; on the contrary, plenty of open source software is the best that’s out there.

  • Why is it important to contribute to open source? To start, it’s not written anywhere that you have to contribute. That said, it sure is nice if you do. I think there are two main reasons why people contribute. One is they want to ‘give back’ to the open source community. The other is they want to add some missing detail, feature or entire product. Of course, usually if you need something, someone else does as well, so by supplying your own need you end up helping many people along the way. But one thing that I think applies especially to students is that it’s a great way to gain experience. You have to fit in some practical learning along with all that theory, and one can only be so enthusiastic about the CS projects done in school that are usually discarded at the end of the semester. By contributing to an open source project, you are doing something that’s real and that people will actually use. You also have access to an incredible network of mentors who are real pros and more than willing to help you out. Maybe I’ve just been lucky, but they are really nice people who won’t snicker if you ask a stupid question. Well, maybe they do. But I haven’t seen it happen, and I’m the queen of stupid questions. Many times this is their pet project, so they’re thrilled to have your contribution. Another great thing is that you can choose what you want to work with. It’s easy to find a tech internship out there; what’s not easy is finding an interesting internship. You might work with something you dislike or (gasp) end up just bringing people coffee. But we do that stuff because we need the work experience. With open source you have a huge array of projects you can work on, and one of them is certain to suit your interests.

  • How do you become a contributor? When using open source software, you might notice things you’d like to change, or actual bugs. Let the development team know about it, and be sure to also let them know if you think you can figure out the fix. And if that hasn’t happened yet, try exploring GitHub’s open source repositories and check out the ‘Issues’ option (upper right corner). There’s usually a list of bugs or wishlist features that you just might be the man or woman for.

Hands on

For the practical part of our conference, Yehuda’s students contributed to JSCommunicator’s Issue #5, which requests i18n support for major languages. JSCommunicator is a SIP communication tool developed in HTML and JavaScript, currently supporting audio/video calls and SIP SIMPLE instant messaging.

First we set up GitHub on their local machines and forked the JSCommunicator repo. We learned the basic git commands needed to push changes to their remote repositories and to create a pull request, as sketched below. This is explained in further detail here.
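
For reference, the sequence looks roughly like this (the repository URL and branch name are illustrative; fork the repository on GitHub first):

$ git clone https://github.com/YOUR-USERNAME/jscommunicator.git
$ cd jscommunicator
$ git checkout -b hebrew-translation
# ... edit the translation files ...
$ git add .
$ git commit -m "Add Hebrew translation"
$ git push origin hebrew-translation
# then open a pull request from that branch on GitHub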

At the end of the conference, with nearly an hour of overtime due to my tendency to be overly verbose, Yehuda’s class made their first contribution: a Hebrew translation for JSCommunicator!

All in all, it was great to virtually meet Yehuda and his epic students, who all seem to be very clever engineers. I’m looking forward to our next meetup once I gain some time-management skills.


IXIS: Nippy EdgeCast Purging

Planet Drupal - Wed, 16/07/2014 - 11:27

Since we integrated the EdgeCast CDN for one of our clients and released a related EdgeCast Drupal module, we have been encouraging more and more clients to consider a CDN layer to accelerate performance across multiple geographic locations and maintain excellent uptime, even during site maintenance periods.

A recent international client who is running many domains with federated content using the Domain module needed to make use of the content delivery network to improve performance and resilience for their sites.



Mogdesign: #D8Rules As a Proof that Drupal Community Is a Living Cell

Planet Drupal - Wed, 16/07/2014 - 11:03

When the D8Rules project, waiting in its funding phase, had just seven days left to be successfully funded, success didn’t seem likely: the project had raised just over 40% of its funding goal. The days shortened; the pressure rose.


Raphaël Hertzog: Spotify migrates 5000 servers from Debian to Ubuntu

Planet Debian - Wed, 16/07/2014 - 10:07

Or yet another reason why it’s really important that we succeed with Debian LTS. Last year we heard of Dreamhost switching to Ubuntu because a stable Ubuntu release can be maintained for longer than a Debian stable release (and this despite the fact that Ubuntu only supports software in its main section, which misses a lot of popular software).

A few days ago, we just learned that Spotify took a similar decision:

A while back we decided to move onto Ubuntu for our backend server deployment. The main reasons for this was a predictable release cycle and long term support by upstream (this decision was made before the announcement that the Debian project commits to long term support as well.) With the release of the Ubuntu 14.04 LTS we are now in the process of migrating our ~5000 servers to that distribution.

This is just further proof that we have to provide long term support for Debian releases if we want to stay relevant in big deployments.

But the task is daunting and it’s difficult to find volunteers to do the job. That’s why I believe that our best answer is to get companies to contribute financially to Debian LTS.

We have already managed to convince a handful of companies, and July is the first month in which paid contributors have joined the effort, for a modest participation of 21 work hours (watch out for Thorsten Alteholz and Holger Levsen on debian-lts and debian-lts-announce). But we need to multiply this figure by at least 5 or 6 to do a proper job of maintaining Debian 6.

So grab the subscription form and have a chat with your management. It’s time to convince your company to join the initiative. Don’t hesitate to get in touch if you have questions or if you prefer that I contact a representative of your company. Thank you!



Jon Dowland: Mac

Planet Debian - Tue, 15/07/2014 - 21:22

My job exposes me to a large variety of computing systems, and I regularly use Mac, Windows and Linux desktops. My main desktop environment at home and work has been Debian GNU/Linux for over 10 years. However, every now and then I take a little "holiday" and use something else for a few weeks. Often I'm spurred on by some niggle or other on the GNOME desktop, or burn-out with whatever the contentious issue of the moment is in Debian. Usually I'd switch to Windows and use it as an excuse to play some computer games.

Last November I had just such an excuse to take a holiday, but this time I opted to go for Mac. I had a backlog of Mac issues to investigate at work anyway.

I haven't looked back.

It appears I have switched for good. I've been meaning to write about this for some time, but I couldn't quite get the words right. I doubted I could express my frustrations in a constructive, helpful way; even if I think that my experiences are useful and my discoveries valuable, perhaps I would put them across in a way that seemed inciteful rather than insightful. I wasn't sure anyone cared. Certainly the GNOME community doesn't seem interested in feedback.

It turns out that one person who doesn't care is me: I didn't realise just how broken the F/OSS desktop is. The straw that broke the camel's back was the file manager replacing type-ahead find with a search, but (to seamlessly switch metaphor) it turns out I'd been cut a thousand times already. I'm not just on the other side of the fence, I'm several fields away.

Sometimes community people write about their concerns with whether they're going in the right direction, or how to tell the difference between legitimate complaints, trolls and whiners. When I look at conferences now, the sea of Thinkpads was replaced with a sea of Apple Macs a long time ago, and the Thinkpads haven't come back. I'd suggest: don't worry about the whiners. Worry about the leavers.

What does this mean for my Debian involvement? Well, you can't help but have noticed that I've done very little this year; I've written nearly exclusively about music so far. The good news is: I still regularly use Debian, and I still intend to stay involved, just not on the desktop. I'm essentially only maintaining two packages now, lhasa and squishyball. I might pick up a few more (possibly archivemail if the situation doesn't improve), but I'm happy with a low package load; I'd like to make sure the ones I do maintain are maintained well. The sum of all my Debian efforts this year has been to get these two (or three) ship-shape. I have a bunch of other things I'd like to achieve in Debian which are not packages, and a larger package load would just distract from them. (We really are too package-oriented in Debian.)


Michal Čihař: New UI for Weblate

Planet Debian - Tue, 15/07/2014 - 18:00

For quite some time I've been working on a new UI for Weblate. As time is always limited, progress is not as fast as I would like, but I think it's time to show the current status to a wider audience.

Almost all pages have been rewritten; the major missing parts are the zen mode and source strings review. So it's time to play with it on our demo server. The UI is responsive, so it works more or less on different screen sizes, though I really don't expect people to translate on a mobile phone, so not much tweaking was done for small resolutions.

Anyway I'd like to hear as much feedback as possible :-).

Filed under: English, phpMyAdmin, SUSE, Weblate


Acquia: Deliver digital faster with Drupal – Part 2

Planet Drupal - Tue, 15/07/2014 - 15:45

In Deliver digital faster with Drupal Part 1, I showed you some of the many examples of successful sites built rapidly thanks to Drupal’s modularity. To stay ahead of your competition, you need to be nimble and agile; Drupal helps you do this with reusable, transferable digital experiences that can be customised to suit various niches, even within a single business enterprise. All, of course, without paying additional license fees or facing mandated limits on developers, environments, or copies.


Liran Tal's Enginx: Migrate Drupal 7 to WordPress 3.9 – The Conclusion

Planet Drupal - Tue, 15/07/2014 - 15:35
This entry is part 2 of 2 in the series Drupal 7 to Wordpress 3.9 Migration

To recap: in a previous post in this series, I set the background for my decision to migrate from Drupal 7 to WordPress 3.9. In this post, we will explore the process of making this migration happen.

If you’ve been on this search before, to migrate from Drupal to WordPress, then you’ve realized that there aren’t a lot of resources, and that you may have some preferences regarding the migration process. Some solutions that popped up required having both instances of Drupal and WordPress up and running for some reason, but that didn’t fit my requirements, as I wanted to use the same domain and didn’t want to set up another one just for the migration process. Other solutions are, of course, professional support services which will perform the migration for you, but you’d have to say goodbye to a few hundred dollars to begin with (prices range from $750 to $3,500 for a website migration).

Finding Drupal2Wordpress provided me a good start to get things rolling. As with most things on GitHub for me, I usually begin by forking a repository, and Drupal2Wordpress was no exception. Soon after I reviewed the code in the original repository, I found that the script is very small and focused, without requiring any special dependencies or extra configuration, which was my primary goal: finding the simplest solution possible. Now I’m ready to take a stab at it.

 

My Video Course - Step by Step Drupal 7 to WordPress 3.9 Migration

I created a Video course on Udemy.com to teach you the skills of migrating Drupal 7 to WordPress 3.9.

I’d appreciate it if you left a review after taking the quick course.

Step-by-Step Drupal 7 to WordPress 3.9 Migration: Learn how to migrate your content, users, and more from a Drupal 7 website to WordPress 3.9.

Getting to Business with Drupal2Wordpress

Drupal2Wordpress is essentially very simple. It only requires editing the PHP code at the beginning and setting the connection information correctly for both the WordPress and Drupal databases. That already implies the characteristics of this migration tool: it expects both the Drupal and WordPress databases to be available through a database connection, and the script has to be uploaded to the hosting account and triggered from the web or from a cron job (because hosting accounts do not open their database servers to the public).
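
As a hedged sketch of what that edit amounts to (the actual variable names live at the top of the script and may differ; hosts and credentials here are placeholders):

<?php
// Illustrative connection settings only; check the beginning of the
// script in the repository for the real variable names.
$drupal_db = new PDO('mysql:host=localhost;dbname=drupal7', 'db_user', 'db_pass');
$wp_db     = new PDO('mysql:host=localhost;dbname=wordpress', 'db_user', 'db_pass');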

My fixes to this tool began with importing any content type from Drupal while making sure it is imported into WordPress as a proper post content type (as opposed to pages, for example, which aren’t blog related). URL aliasing has also been fixed so that imported posts in the new WordPress install just work, along with another fix to migrate only approved comments. New additions to the tool include support for migrating users, and adding a default ‘Blog’ category on WordPress and relating all posts to it (otherwise they are not displayed).

The tool has been tested, and it only requires a fresh installation of WordPress 3.9 to migrate any Drupal 7 site to it. You’re welcome to fork the repository, or test it and comment, so we can further improve upon it.

Drupal2Wordpress – the GitHub repository.

The post Migrate Drupal 7 to WordPress 3.9 – The Conclusion appeared first on Liran Tal's Enginx.


Drupalize.Me: Drupal 8 Plugins Explained

Planet Drupal - Tue, 15/07/2014 - 15:00

As you start down the road of learning Drupal 8 module development, one of the first new Drupalisms you're likely to encounter is plugins. After writing a blog post about creating blocks, which uses the new plugin architecture, I thought it might be interesting to take a step back and talk a little bit more about plugins at a higher level. This blog post contains an introduction to the what and why of plugins, to help Drupal 7 developers make the transition to Drupal 8.


Nuvole: Packaging and reusing configuration in Drupal 8

Planet Drupal - Tue, 15/07/2014 - 15:00
Bringing "reusable features" to Drupal 8.

This is a preview of Nuvole's training at DrupalCon Amsterdam: An Effective Development Workflow in Drupal 8.

Configuration Management in Drupal 8 elegantly solves the problem of staging configuration between different environments, addressing an issue that still haunts even the most experienced Drupal 7 developers. In earlier posts we covered the new Configuration Management in Drupal 8, seeing how it compares to Drupal 7 and Features, and even investigated how to manually simulate Features in Drupal 8 last year. Recent developments and contrib modules can take us several steps closer.

Configuration Management can get quite streamlined when using Git and Drush 7 as explained below.

Step 1: Move staging configuration directory into a versionable location

By default Drupal 8 will place the staging directory under sites/default/files, and it is considered good practice not to version that location. An alternative location can easily be specified in settings.php:

<?php
$config_directories['staging'] = 'config/staging';
?>

Once that is done, we must rebuild the Drupal cache:

$ drush cache-rebuild

Step 2: Export active configuration into staging directory via Drush

The Configuration Management system exposes a set of very handy Drush commands: in order to “dump” all active configuration into our newly set staging directory we can just run:

$ drush config-export
The current contents of your export directory (config/staging) will be deleted. (y/n): y
Configuration successfully exported to config/staging.                                                    

Step 3: Push configuration changes and import them on the staging environment

Since the staging directory is under version control, we can simply git-add all its content and push it to the remote repository. After having set up the staging environment as an exact replica of our development environment (this is actually required for configuration staging to work), we can start profiting from the new Drupal 8 CM system. Imagine we have changed the site name on dev; after having exported, committed and pushed that change, on the staging site we will simply run:

$ git pull

$ drush config-import

Config               Operation               
system.site          update
Import the listed configuration changes? (y/n): y
The configuration was imported successfully.

For a more comprehensive overview of the Configuration Management system please refer to our previous blog post Configuration Management: Drupal 7 to Drupal 8.

Packaging configuration

For those developers familiar with code-driven development practices the three steps above might resemble what the Features module does in Drupal 7 with its features-update and features-revert Drush commands.

While Drupal 8 configuration staging capabilities are far more advanced than what Features could possibly provide, what the new Configuration Management system really lacks is the ability to package configuration.

Enter the Configuration development module

The Configuration development module, currently maintained by chx, serves two main purposes:

  • It automates the import of specified configuration files into the active storage.
  • It automates the export of specified configuration objects into files.

The module offers a simple, global UI where a Drupal developer can set which configuration is automatically exported and imported any time they hit the “Save” button on a configuration settings page.

In order to achieve a more modular configuration packaging it would be enough to set a specific module’s config/install directory as the actual export destination.

Nuvole contributed a patch to make that possible: instead of firing an auto-export every time a “Save” button is clicked, the developer can specify in the module’s info file which configuration needs to be written back to that module’s install directory, and run a simple Drush command to do that.

Reusable “features” in Drupal 8

One of the main advantages of having a standardized way of dealing with configuration is that modules can now stage configuration at installation time. In a way that’s something very close to what Features allowed us to do in Drupal 7.

Say we have our news section up and running on the site we are currently working on, and we would like to package it into a custom module, together with some other custom code, and ship it over to a new project. The patched Config development module will help us do just that! Here is how:

Step 1: Download, patch and enable Configuration development module

We need to download and enable the Configuration development module version 8.x-1.x-dev and apply the patch attached to this Drupal.org issue.

After rebuilding the cache, we will have the config-writeback Drush command available. Let's have a closer look at what it is meant to do:

$ drush help config-writeback

Write back configuration to a module's config/install directory. State which configuration settings you want to export in the module's info file by listing them under 'config_devel', as shown below:

config_devel:
  - entity.view_display.node.article.default
  - entity.view_display.node.article.teaser
  - field.instance.node.article.body


Examples:
drush config-writeback MODULE_NAME        Write back configuration to the specified module, based on .info file.

Arguments:
module                                    Module machine name.

Aliases: cwb

Step 2: Find what configuration needs to be packaged

We now look for all configuration related to our site’s news section. In Drupal 8 most of the site configuration is namespaced by related components, so if we keep using consistent naming conventions we can easily list all news-related configuration by simply running:

$ drush config-list | grep news

entity.form_display.node.news.default
entity.view_display.node.news.default
entity.view_display.node.news.teaser
field.instance.node.news.body
image.style.news_medium
menu.entity.node.news
node.type.news

Step 3: Package configuration

To package all the settings above we will create a module called custom_news and, in its info file, we will specify all the settings we want to export, listing them under the config_devel: directive, as follows:

$ cat modules/custom_news/custom_news.info.yml

name: Custom News
type: module
description: 'Custom news module.'
package: Custom
core: 8.x
config_devel:
  - entity.form_display.node.news.default
  - entity.view_display.node.news.default
  - entity.view_display.node.news.teaser
  - field.instance.node.news.body
  - image.style.news_medium
  - menu.entity.node.news
  - node.type.news

After enabling the module we will run:

$ drush config-writeback custom_news

And we will have all our settings exported into the module’s install directory:

$ tree -L 3 modules/custom_news/

modules/custom_news/
├── config
│   └── install
│       ├── entity.view_display.node.news.default.yml
│       ├── entity.view_display.node.news.teaser.yml
│       ├── field.instance.node.news.body.yml
│       ├── image.style.news_medium.yml
│       ├── menu.entity.node.news.yml
│       └── node.type.news.yml
└── custom_news.info.yml

The Drush command above takes care of clearing all sensitive UUID values, making sure that the module will stage the exported configuration cleanly once it is enabled on a new Drupal 8 site.

To get the news section on another site we will just copy the module to the new site's ./modules/ directory and enable it:

$ drush en custom_news

The following extensions will be enabled: custom_news
Do you really want to continue? (y/n): y
custom_news was enabled successfully.

Final evaluation: Drupal 7 versus Drupal 8

One of the main differences between working in Drupal 7 and in Drupal 8 is the new Configuration Management system.

While Features proposed a one-stop solution for both configuration staging and packaging, Drupal 8 CM does a better job of keeping them separate, allowing developers to take greater control over these two different and, at the same time, complementary aspects of a solid Drupal development workflow.

By using the method described above we can upgrade our comparison table between Drupal 7 and Drupal 8 introduced in one of our previous posts as follows:

Functionality | D7 Core | D7 Core + Features | D8 Core (current) | D8 Core (current) + Patched Config Devel
Export full site config (no content) | NO | NO | YES | YES
Export selected config items | NO | YES | YES | YES
Track config changes (full site) | NO | NO | YES | YES
Track config changes (selected items) | NO | YES | YES | YES
Stage configuration | NO | YES | YES | YES
Package configuration | NO | YES | NO | YES
Reuse configuration in other projects | NO | YES | NO | YES
Collaborate on the same project | NO | YES | NO | NO

The last "NO" deserves a brief explanation: Configuration Management allows two developers to work simultaneously on different parts of the same project if they are very careful, but "merging" the work would have to be done by version control (Git or similar), which doesn't know about YAML or Drupal.

Some open issues

Contributed modules seem to be the best way to enhance the core Configuration Management system, much like what happened with Drupal 7 and Features. There are still several issues that should be considered for an optimal workflow, to match and improve what we already have in Drupal 7:

  • Piping: the ability to relate configuration components based on both hard and logical dependencies, for example: I export a content type and I automatically get its fields as well. Even if piping might have been too rigid at times, it would still be useful to have in some configurable form.
  • Enhanced configuration diff: it might be useful to have the possibility to review what configuration is going to be installed before enabling a module, as is already possible when importing staged configuration into the active storage.
  • Granularity: it is still impossible to export part of a configuration file, so we still depend on the core conventions for grouping configuration into files; we can't export a single permission, for example.
  • Ownership: we can't know whether another module (or "feature") is tracking a component we wish to track; this could be useful from the perspective of maintaining several "modular" features.
  • Updates: we can reuse configuration by simply enabling a module, but this holds only for the initial installation; after a module is enabled, we don't have a clean way to import changes (say, to "upgrade" to a newer version of the feature) outside the standard workflow foreseen by Configuration Management.
Tags: Drupal 8, Drupal Planet, DrupalCon, Drush, Code Driven Development

Mario Lang

Planet Debian - Tue, 15/07/2014 - 10:15
Mixing vinyl again

The turntables have me back after quite a long mixing break.

I used to do straight 4-to-the-floor, mostly acid or hardtek. You can find an old mix of mine on SoundCloud. This one is actually back from 2006.

But currently I am more into drum and bass. It is an interesting mixing experience, since it is considerably harder. Here is a small but very recent minimix. Experts in the genre might notice that I am mostly spinning stuff from BlackOutMusicNL, admittedly my favourite label right now.


Kristian Polso: Password protect your Drupal site with Shield-module

Planet Drupal - Tue, 15/07/2014 - 09:26
From time to time, a situation might arise where you want to password protect your Drupal site. Maybe the site is under development, or you just want it to be available to a select group of users. This is usually done with something called "HTTP Basic Auth", which allows you to password protect a site. In Apache this can be done by modifying the .htaccess file. There are some great tutorials on how to do this, but it is not the correct approach, for a couple of reasons.
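
For context, the .htaccess approach referred to above is a short stanza like the following sketch (paths and user names are illustrative; the password file is created with htpasswd):

$ htpasswd -c /home/example/.htpasswd alice

$ cat .htaccess
AuthType Basic
AuthName "Restricted site"
AuthUserFile /home/example/.htpasswd
Require valid-user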
