Feed aggregator

Enrico Zini: debtags-cleanup

Planet Debian - Fri, 05/02/2016 - 19:18
debtags.debian.org cleaned up

Since the Debtags consolidation announcement there are some more news:

No more anonymous submissions
  • I have disabled anonymous tagging. Anyone is still able to tag via Debian Single Sign-On. SSO-enabling the site was as simple as this.
  • Tags no longer need review before being sent to ftp-master. I have removed the distinction in the code between reviewed and unreviewed tags, and all the code for the tag review interface.
  • The site now has an audit log for each user, which anyone logged in via SSO can access via the "history" link in the top right of the tag editor page.
Official recognition as Debian Contributors
  • Tag contributions are sent to contributors.debian.org. There is no historical data for them because all submissions until now have been anonymous, but from now on if you tag packages you are finally recognised as a Debian Contributor!
Mailing lists closed
  • I closed the debtags-devel and debtags-commits mailing lists; the archives are still online.
  • I have updated the workflow for suggesting new tags in the FAQ to "submit a bug to debtags and Cc debian-devel": we can just use debian-devel instead of debtags-devel.

Autotagging of trivial packages
  • I have introduced the concept of "trivial" packages, currently defined as any package in the libs, oldlibs and debug sections. They are tagged automatically by the site maintenance and are excluded from the site's todo lists and tag editor. We do not need to bother with trivial packages anymore, all 13239 of them (a rough way to list them is sketched below).
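A rough sketch (not the site's actual maintenance code) of how such packages can be listed from the shell, assuming the dctrl-tools package is installed and that matching on the Section field is good enough:

  # list binary packages whose Section is libs, oldlibs or debug
  grep-aptavail -F Section -e '(^|/)(libs|oldlibs|debug)$' -s Package -n | sort -u
  # count them
  grep-aptavail -F Section -e '(^|/)(libs|oldlibs|debug)$' -s Package -n | sort -u | wc -l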
Miscellaneous other changes
  • I have moved the debtags vocabulary from subversion to git
  • I have renamed the tag used to mark packages not yet reviewed by humans from special::not-yet-tagged to special::unreviewed
  • At the end of every nightly maintenance, some statistics are saved into a database table. I have collected 10 years of historical data by crunching big tarballs of site backups, and fed them to the historical stats table.
  • The workflow for getting tags from the site to ftp-master is now far, far simpler. It is almost simple enough that I should manage to explain it without needing to dig through code to see what it is actually doing.
Categories: Elsewhere

Mediacurrent: Meet the Mediacurrent Friday 5

Planet Drupal - Fri, 05/02/2016 - 18:48

Whether you’re a long time Mediacurrent reader or have recently discovered us (if so, welcome!) you know that we are always looking for new ways to deliver quality content to our readers. We have more than 500 blog posts on our website and have no plans to slow down.

Categories: Elsewhere

Acquia Developer Center Blog: Drupal, meet PHP FIG - Larry Garfield

Planet Drupal - Fri, 05/02/2016 - 15:28

Larry Garfield, aka crell: Drupal 8 Web Services Initiative Lead, subsystem maintainer for a couple of things, and Drupal's representative to the PHP Framework Interoperability Group. Make sure you listen to the podcast for the full origin story of Larry's online handle!

This conversation with Larry Garfield (@crell) is the first in a series of interviews Campbell Vertesi (@CampbellVertesi) and I carried out in preparation for DrupalCon Asia in Mumbai. We are building the world’s longest DrupalCon session and packing all 6+ hours of it with information and personalities you won’t want to miss! So actually ... For our one hour in the spotlight in Mumbai, we’ve been doing a lot of preparation. Our “session” will include a lot of additional materials like podcasts and blog posts about what we’ve learned along the way.

Our session, Meet PHP-FIG: Your community just got a whole lot bigger, Drupal, is about Drupal 8's membership in the new, interoperable PHP community. We cover the basics of what the PHP Framework Interoperability Group (PHP-FIG) is and what the various PSRs are and do, talk about testing and dependency management, and look at what it means to be a part of the new PHP community, including having better architecture, cleaner code, and more interoperability. All of this adds up to a big move to get projects "off their islands," saving developers a lot of code, and companies a lot of money, among other benefits.

I apologize for the poor audio quality in this recording and hope the quality of the conversation makes up for it.

“I don’t want to speak to PHP from Drupal. I don’t want to speak to Drupal from PHP because that implies that those are different things that aren’t a part of each other or that I’m part of one talking to the other. That’s not the point. The point is that Drupal and PHP are not separate entities. Drupal is part of the PHP world and the PHP world is part of Drupal. That collaboration has helped us produce Drupal 8 and that collaboration I’m sure would continue to produce not just future versions of Drupal but better practices, more robust practices in PHP itself. So I would encourage everyone from these two large robust communities ... don’t look at them as two large robust communities. Look at them as different pockets of one larger community that we can all learn from, that we can all benefit from, and together we can build a better PHP for all projects.”

More by or featuring Larry, further reading
  1. Getting off the island in 2013
  2. Building Bridges: Linking Islands (2014)
  3. Drupal & PHP: Linking Islands, the podcast – part 1
  4. Drupal & PHP: Linking Islands, the podcast – part 2
  5. Drupal 8: Happy, but not satisfied
  6. Larry’s challenge for us: “Giving Back in 2016. Contribute to other projects. Get your name on the contributors list for a new open source project.”
Interview video - 43 min.

What is the PHP FIG (PHP Framework Interoperability Group) for, and does it have something like a mission statement?

Larry: Okay. So a history lesson. The Framework Interoperability Group began life at php[tek] in 2009 in Chicago as the PHP Standards Group. We got together in a hotel room and said “With PHP 5.3 coming out and all these namespaces, it would be really cool if we all use them the same way and hey, we could do some cool autoloading stuff with that.” So the original goal was simply “Let’s collaborate and push this out to the community.” It was renamed to the Framework Interoperability Group in I think 2012. It didn’t really do anything more useful for several years.

In practice these days, pretty much any project that matters is using either the PSR-0 or PSR-4 autoloading standard. A project that doesn't is then under a huge amount of pressure to start doing so.

The PSR-2 coding standard: most projects, even just random projects, have now adopted it; tooling like PhpStorm and phpcs supports it by default; and there's pressure on projects like Drupal that don't use it to start using it, just for conformity's sake.

If you’re going to do anything new with HTTP messages now and you’re not already using Symfony’s HTTP foundation, you’re foolish to not use PSR-7 or something very close to PSR-7 because there’s a lot of tooling and tools built on top of that already.

So who are the members of the FIG group these days?

Larry: There's I think 41 or 42 members now. I don't remember all of them off the top of my head; they're listed on the website. I'll say they include pretty much every major project except Wordpress. So Symfony, Zend, Drupal, Joomla, phpBB, about a dozen libraries like Monolog or Stash or Doctrine, and some smaller libraries you may not have heard as much about, like Jackalope. It really runs the gamut from really big players like Drupal to really small players like Jackalope and everything in the middle.

What are some valid reasons why projects like Wordpress or individual developers would choose to ignore this interoperability movement and not take advantage of the PSR standards?

Larry: I think the biggest reason that projects wouldn't follow PSR is legacy code bases. If you have a code base that's been around for eight, 10 years or even just five years, you probably have a lot of internal conventions already built up and changing them is hard. Not like Drupal knows anything about that. ;-) So for a project like Wordpress where mission statement number one is backward compatibility, switching their logging system to use the PSR-3 logger would be an API break or at least extra API clumsiness, so they're not willing to do that. Certainly for a project like Drupal, switching our coding standards to PSR-2, whatever the technical benefits or downsides to that are, regardless of whether PSR-2 is a good spec or a bad spec, would mean changing literally millions of lines of code. It could be scripted to cover 98% of it fairly easily, but it still means every single patch and every single person's local configuration and defaults in their IDE change. That's not a small ask. So I think the biggest impediment to PSR adoption is simply existing standards, existing code bases, existing practices, which are sometimes legitimate complaints and sometimes not.

Actually, there’s one comment which you made in your Drupal 8 launch blog post which I recommend for everybody to read ...

... continued: You mentioned that actually one of the most significant things about the launch of Drupal 8 is proving that it is possible. Before we managed to do this, it was an open question: is it possible for the entire community to retool, change the entire way of thinking about the API and switch to object-oriented concepts and unit testability? We managed to drag one of the world's largest open source communities through that and successfully launched a product. You're right, it's an enormous undertaking; one can understand other projects not wanting to do that.

Larry: I actually have a keynote that I gave, called Eating Elephants, which makes exactly that point: this is a lot of work. If Drupal can pull it off, so can anybody, but it's still a lot of work. Not every project necessarily wants to go through that, the level of overhaul that Drupal did, and not necessarily every project needs to. But I think over time, simply through natural project churn, most of the standards are going to become widespread in practice.

What are the choices that people should be making now outside of implementing the PSRs?

... continued: So, outside of FIG (FIG is of course just one part of a broader movement for interoperability and standard behaviors, no matter what you're building with PHP): what are some of the architectural implications of this exciting new world? What are the choices that people should be making now outside of implementing the PSRs?

Larry: I think the most important, just general good modern practices for collaboration these days are:

  • Use a PSR-based autoloader, because everyone else is. It makes using your code and sharing your code dead simple.
  • Register it with Packagist, because then getting it through Composer is dead simple (see the example commands after this list).
  • Use proper dependency injection because that makes it a lot easier to swap out pieces and plug your system into someone else’s ...
  • ... which also means build your code in small standalone components rather than one big monolithic system.
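As a small illustration of the first two points (the package name is only an example, Monolog being one of the libraries mentioned earlier), pulling a PSR-4 autoloaded library into a project through Composer and Packagist looks roughly like this:

  composer require monolog/monolog   # fetch the library and its dependencies from Packagist
  composer dump-autoload             # regenerate vendor/autoload.php

Requiring vendor/autoload.php in your bootstrap then makes the library's classes loadable on demand, with no manual include statements.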

This is really a movement that Symfony started with Symfony 2. It was the first project to really have a component library that was loosely coupled and then built a framework on top of it. Others have since done the same. Zend Framework 3 is moving heavily in that direction. The Aura project is strictly decoupled components with a framework built on top. A lot of major components now are completely standalone.

I think the biggest thing is think in terms of small, discrete pieces that you can mix and match. Same kind of Lego block approach that Drupal has striven for at the module level for years, even though we didn’t do a very good job of it at the code level all the time. We’re getting better. The more you do that, the easier it is to exchange code with people, the easier it is to reuse code, and also the easier it is to test.

Good unit testable code is also loosely coupled, is also easy to swap out, is easy to reason about. All of these concepts overlap on each other.

Testability, understandability, debuggability, ability to share with others all have the same underlying structure, underlying needs. So focusing on any one of those will make the others better.

What are some wheels that we decided to bring in from outside in Drupal 8, rather than reinventing them?

Larry: So the first big wheel we got from elsewhere was our routing system, which we pulled in from Symfony, and along with that a new architecture that spread throughout the rest of the system and took over. The template engine, of course: Twig is new and that's been a huge win. From everything I've heard, front-enders adore it. That's third-party code. As for the places we didn't: the configuration system is primarily homegrown, in large part because we needed the UI integration for it. Symfony's configuration system, for example, assumes you're doing configuration by editing files on disk. Drupal assumes you're doing configuration by pushing buttons in the UI. These are fundamentally different assumptions, and the underlying tooling that supports one is not really going to support the other. Not very well.

Drupal has come into the fold, or has become part of mainline PHP. Talk about how this new world of interoperability has allowed Drupal to start making contributions outwards, into other systems, other frameworks, other applications.

Larry: Honestly, I think at the moment, our biggest contributions are patches we’ve submitted to other projects, be that Symfony, Guzzle, Zend, whatever. Just being the poster child for this new PHP world. Drupal, being a demonstration that yes, it is possible to teach an old CMS new tricks, yes it is possible to embrace these modern tools and techniques, yes there’s benefits to doing so, you will survive. Honestly, I think that’s our biggest contribution is just proving that it can be done. We’re not the only project that has adopted lots of Symfony but I think just the evolutionary pressure we give that way is probably the biggest impact. It’s that the proof is in the Drupal 8 release, that it is a thing and it can be done and we should continue to provide that example of growth and of maturity enough to admit that you can change things. I think that’s probably our biggest contribution to PHP at the moment.

So OO isn't so hard after all ...

... continued: I think early on, when we were talking inside the community about adopting object-oriented practices, about adopting some of the Symfony components, a lot of the conversation was around Drupal being not so accessible for newbie programmers, people coming to write their first lines of code. It seems like it's so much easier when it's procedural. What I'm most excited about with Drupal 8 is watching what happens in the next two or three years as we demonstrate that anybody can code with modern practices, too. And that, in fact, it makes things easy. If you can learn how an IF statement works, you can understand what a class is. So I think that's another cultural export that we're offering the rest of the PHP world.

Larry: You don't have to be a comp sci grad to write modern object-oriented code. We have thousands of people now from Drupal who have picked it up without being in school for it and are liking it.

In the last few years, you’ve done a series of posts and sort of challenges to I guess the broader PHP world.

... continued: Initially, “Hey Drupal, we’ve got to get off our island and accept that we shouldn’t carry all this liability ourselves.” Then there was a building bridges post which said “Go visit people in other communities” and there was a challenge this year, build something in a project that’s not your home project. What’s your mission statement and challenge for all of us in 2016?

Larry: I know what I’m going to say. First one was go out and learn from other projects. The second one was go out and build with other projects. So I’ll say it now. Your challenge for this next year, contribute to other projects. Your goal is to get your name on the contributor’s list for a new open source project, some project that’s not your home project.

Podcast series: Drupal 8
Skill Level: Intermediate
Categories: Elsewhere

Michal Čihař: Bug squashing in Gammu

Planet Debian - Fri, 05/02/2016 - 12:00

I've not really spent much time on Gammu in past months and it was about time to do some basic housekeeping.

It's not that there would be too much new development; rather, I wanted to go through the issue tracker, properly tag issues, close questions without response and resolve the ones which are simple to fix. This led to a few code and documentation improvements.

Overall the list of closed issues is quite huge:

Do you want more development to happen on Gammu? You can support it with money.

Filed under: English Gammu python-gammu Wammu | 0 comments

Categories: Elsewhere

Valuebound: Changing the Appearance of your new site

Planet Drupal - Fri, 05/02/2016 - 08:28

Once you have a new Drupal installation on your system, the very next step is to change the overall appearance of your site so that it looks and feels good for end users. This is one of the initial steps required while setting up Drupal. Deciding on your design early will go a long way in saving you time and repeated effort later.

Drupal 8 provides a few built-in themes that come with the same package,
e.g. Bartik, Stable, Seven, Stark and Classy.
If you don't find one of them in the given list, just go to the core/themes directory of your Drupal installation, open that theme's info.yml file and change hidden: true to hidden: false (see the example command below).
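For example, a one-line sketch of that edit for the Classy theme (adjust the theme name as needed; note that this modifies a file shipped with Drupal core):

  sed -i 's/^hidden: true/hidden: false/' core/themes/classy/classy.info.yml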

Drupal 8 takes a mobile-first approach. All built-in themes in Drupal 8 are responsive, with an…

Categories: Elsewhere

Roy Scholten: Redesigning the content creation page for Drupal 8

Planet Drupal - Fri, 05/02/2016 - 01:41
We want to be faster and bolder in shipping design improvements for Drupal 8. But how? Let's have a look at a relatively big (but not super huge) design change built for Drupal 8 during the development cycle and see what we might learn from it.

Redesigning the content creation page

Drupal 8 ships with a significant overhaul of the content creation page ("node form" for insiders). Its design process and subsequent implementation are extensively documented on drupal.org. This is a high level summary of how this redesign came to be.

Steps in the process:
  1. Research
  2. Sketch
  3. Design
  4. Test
  5. Implement

Who worked on this? In the earliest design stages, primarily three people: Bojhan Somers, Jared Ponchot and me, Roy Scholten. Many more helped with fine-tuning design elements, usability testing, writing and reviewing code and all the other little and not so little things that go into getting a big design change committed to Drupal core. Thanks all.

Research & sketching

We didn't spend much time building the case for a better content creation page. That was no problem, because most people were already aware of the big room for improvement.

The research was two-part: “what does Drupal do?” And “what are other systems doing?” For the Drupal aspects, we looked at how and where contributed modules add functionality to this screen. We reviewed other systems looking for patterns in how functionality was grouped and arranged on the page.

That input was then translated into very generic concept sketches, comparing and contrasting several arrangements of the three basic feature groups: content, settings and actions. From there, we proposed to pursue one specific direction in more detail. Before we did that, we opened up the work so far for feedback: http://groups.drupal.org/node/214898

Design

Starting from that very rough initial layout we started exploring the details of that arrangement. Which items belong in which area and how would they behave? How would it work on small screens? Which items to call out, which ones to push back?

Then Jared Ponchot stepped in and pulled all that sketching together in high-definition mockups. And on these we iterated again a couple of times, detailing the arrangement of interface elements, the use of color and other ways to (de-)emphasise certain parts of the whole. A comparison with the then current state of the Seven admin theme identified how we would have to extend its visual language to accommodate this new design.

And that’s where we opened up for another round of feedback: http://groups.drupal.org/node/217434

Test

A working prototype of the design proposal was coded and made available on a test site. A usability test plan was drafted and several people used that script to test the prototype with people both new to and experienced with Drupal. One of the few times we actively pushed for community driven usability testing actually. Results from the testing were reported in the implementation issue and individual issues were opened for the necessary changes.

Usability test plan: http://groups.drupal.org/node/223959

Implementation

The prototype for testing was created in context of the implementation issue. We spent a lot of time translating the design proposal into actionable tasks.

The distinction between rough prototyping code and the actual core-worthy implementation was a bit unclear at first, but we very quickly got to a usable demo. The overarching "meta" issue has over 300 comments, which is usually a sign of a large undertaking that's not sufficiently broken down into separate actionable tasks. But we got there in the end!

Implementation meta issue: https://drupal.org/node/1510532

Lessons (not? :-) learned
  • Good: working with a small team. It allowed us to focus on what needed to be done and move relatively fast.
  • Good: Publicly documenting the research and sketching phases was a lot of work but worth it. Pulling a finalised glossy Photoshop design out of the hat would not have created the same engagement and constructive feedback.
  • Good: counter to the previous point but also a good thing was that the initial sketches and design mockups were shared only within the very small team of 3 to 5 people. This kept momentum up but more importantly allowed us to focus on the actual design work. A broader discussion would very likely have shifted towards discussing implementation challenges, which is not what you’re after when still exploring multiple options.
  • Not so good: we prototyped only one working version quite late in the process. Only after a lot of time invested did we get to see and feel a somewhat working version. This narrowed our bandwidth for subsequent changes, which were relatively small tweaks, keeping the basic paradigm intact. We never really pitted two or more radically different approaches against each other. This was mostly a time and energy issue: we only had the bandwidth to work through one design direction.
  • Not so good: Doing the design phases outside of the issue queue (where implementation happens). This was a necessary but difficult trade off. The issue queue doesn’t lend itself to explorative work with lots of ambiguity so the design work wasn’t tracked there. Many core developers did not closely follow the process as it happened on groups.drupal.org, so when we brought the design over to the issue queue with the proposal to go build this, much of the earlier discussion points got brought up again.
  • Not so good: Not having a primary code architect as part of the team. We could have prevented at least some of the rehash in the issue queue if we had had a knowledgeable core developer on the design team. Having somebody who could speak to the technical implications of the design and help break down the work into manageable tasks would probably have gotten us off to a better start with implementation.

A quick tally of the number of comments across the main discussion threads and issues for this project: more than 1200. And that doesn’t even include huge additions like the WYSIWYG editor and the improved previews. Not to say that this doesn’t happen in other initiatives, but you can see how demanding it is for anyone who wants to keep track, especially if you want to make sure that the big picture doesn’t get lost in the myriad of details.

How to get better, faster?

The nature of design changes like these is that they touch many aspects: backend & frontend, php & javascript, visual design & performance, accessibility & multilingual, etc. If we want to go faster we might want to consider replacing the research intensive work with building multiple (rougher) prototypes earlier and testing those for viability. That might lead us to a general direction and a plan for implementation faster. As for the actual core worthy implementation, we might win some time if we can provide a design spec together with an initial plan identifying the technical challenges and strategies to overcome those.

The amount of work will always be huge. I think the gains are in finding a better balance in:

  1. Feeling free to write quick throw-away code in the initial explorations so people can get a feel of what might work and we can test it.
  2. Reducing wasted efforts (in code and discussion) during implementation.

Understanding the distinction between these two, and being clear about when the first ends and the second begins will already be a big step forward.

Further discussion: Determine process for big UX changes

Tags: drupalux, author ux, drupalplanet
Sub title: Drupal 8 has a redesigned content creation page. This is how it came to be.
Categories: Elsewhere

Vincent Fourmond: Making oprofile work again with recent kernels

Planet Debian - Thu, 04/02/2016 - 21:54
I've been using oprofile for profiling programs for a while now (and especially QSoas), because it doesn't require specific compilation options and doesn't make your program run much more slowly (unlike valgrind, which can also be used to some extent for profiling). It's a pity the Debian package was dropped long ago, but the ubuntu packages work out of the box on Debian. But today, while trying to see what takes so long in some fits I'm running, here's what I get:
~ operf QSoas
Unexpected error running operf: Permission denied
Looking further using strace, I could see that what was not working was the first call to perf_event_open.
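Presumably something along these lines (my guess at a command, not necessarily the author's exact invocation) makes the failing call visible:
~ strace -f -e trace=perf_event_open operf QSoas 2>&1 | head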
It took me quite a long time to understand why it stopped working and how to get it working again, so here it is for those of you who googled the error and couldn't find any answer (including me, who will probably have forgotten the answer in a couple of months). The reason behind the change is that, for security reasons, non-privileged users no longer have the necessary privileges as of Debian kernel 4.1.3-1; here's the relevant bit from the changelog:

* security: Apply and enable GRKERNSEC_PERF_HARDEN feature from Grsecurity, disabling use of perf_event_open() by unprivileged users by default (sysctl: kernel.perf_event_paranoid)
The solution is simple, just run as root:
~ sysctl kernel.perf_event_paranoid=1
(the default value seems to be 3, for now). Hope it helps!
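If you want the setting to survive reboots, a minimal sketch (the file name 99-perf.conf is just an example) is to drop it into /etc/sysctl.d and reload, again as root:
~ echo 'kernel.perf_event_paranoid = 1' > /etc/sysctl.d/99-perf.conf
~ sysctl --system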
Categories: Elsewhere

Drop Guard: Sneak Peek: Drop Guard's revamped project creation process

Planet Drupal - Thu, 04/02/2016 - 19:30
Manuel Pistner - Thu, 02/04/2016 - 18:30

We are working tirelessly to make Drop Guard better, faster and friendlier for developers. In this blog post we present a "sneak peek" of our revamped project creation process, with the aim of giving you greater usability when getting started with your project in Drop Guard!

So let's go into more detail: the creation process will be split into three independent configuration screens.

1. On the first screen you will be able to quickly connect Drop Guard to your repository and enjoy its update monitoring capabilities, even without installing a Drop Guard module.

2. The second screen will be for those who immediately want to integrate Drop Guard into their daily maintenance routine. It's about telling Drop Guard what to do when an update of a certain type is detected.

3. The third screen will be all about events: sending e-mails, running SSH commands, pinging your favourite CI tool or merging branches based on certain conditions.

So below we share a preview of the new "Updates setup" wizard. As opposed to the "accordion-like" endless form, we now have a sleek step-by-step configurator, which allows you to quickly instruct Drop Guard what to do when updates are detected (embracing best update practices and letting you set a single configuration for different types of updates).

This is a screenshot of the update types configuration in the old project creation process:


And here you can enjoy the sneak peek of the new process:

If you're a Drop Guard user or just curious, don't hesitate to leave your feedback. We'd love to optimize Drop Guard for every workflow, and we can't do it without your voice! Do you prefer personal contact? Find our data here.
Tags: About Drupal, Drupal Planet, Project, Process
Categories: Elsewhere

Promet Source: The Drupal Developers' Essential Guide to Automated Testing

Planet Drupal - Thu, 04/02/2016 - 18:54
Read our Automated Testing eBook

 

Our Drupal development experts compiled their best advice for running effective automated tests that will save time and money. Complex development projects are likely to have many releases and have much to gain from implementing an automated test framework. Read this guide for advice on how your team should approach writing test cases, how to choose the right tools to execute tests, and how to emphasize visibility when sharing test results.

Categories: Elsewhere

Jeff Geerling's Blog: Set up a hierarchical taxonomy term Facet using Facet API with Search API Solr

Planet Drupal - Thu, 04/02/2016 - 18:28

I wanted to document this here just because it took me a little while to get all the bits working just right so I could have a hierarchical taxonomy display inside a Facet API search facet, rather than a flat display of only the taxonomy terms directly related to the nodes in the current search.

Basically, I had a search facet on a search page that allowed users to filter search results by a taxonomy term, and I wanted it to show the taxonomy's hierarchy:

To do this, you need to do two main things:

  1. Make sure your taxonomy field is being indexed with taxonomy hierarchy data intact.
  2. Set up the Facet API facet for this taxonomy term so it will display the full hierarchy.

Let's first start by making sure the taxonomy information is being indexed (refer to the image below):

Categories: Elsewhere

LevelTen Interactive: The First Ever Statewide DrupalCamp in Texas! TexasCamp 2016

Planet Drupal - Thu, 04/02/2016 - 16:49

LevelTen Interactive is proud to present TexasCamp 2016 on April 1 - 2 at the Addison Conference and Theatre Centre in Dallas, Texas.

TexasCamp is two days of DrupalCamp, intended for Drupal admins and users, sitebuilders, themers and developers. Expect sessions from beginner to expert level, with the brightest minds in the Drupal world attending and presenting.

You can attend TexasCamp for only ...

Categories: Elsewhere

Petter Reinholdtsen: Using appstream in Debian to locate packages with firmware and mime type support

Planet Debian - Thu, 04/02/2016 - 16:40

The appstream system is taking shape in Debian, and one feature set that is very convenient is its ability to tell you what package to install to get a given firmware file. This can be done using apt-file too, but that is for someone else to blog about. :)

Here is a small recipe to find the package with a given firmware file; in this example I am looking for ctfw-3.2.3.0.bin, randomly picked from the set of firmware announced using appstream in Debian unstable. In general you would be looking for the firmware requested by the kernel during kernel module loading. To find the package providing the example file, do like this:

% apt install appstream
[...]
% apt update
[...]
% appstreamcli what-provides firmware:runtime ctfw-3.2.3.0.bin | \
    awk '/Package:/ {print $2}'
firmware-qlogic
%

See the appstream wiki page to learn how to embed the package metadata in a way appstream can use.

This same approach can be used to find any package supporting a given MIME type. This is very useful when you get a file you do not know how to handle. First find the MIME type using file --mime-type, and next look up the package providing support for it. Let's say you got an SVG file. Its MIME type is image/svg+xml, and you can find all packages handling this type like this:

% apt install appstream
[...]
% apt update
[...]
% appstreamcli what-provides mimetype image/svg+xml | \
    awk '/Package:/ {print $2}'
bkchem
phototonic
inkscape
shutter
tetzle
geeqie
xia
pinta
gthumb
karbon
comix
mirage
viewnior
postr
ristretto
kolourpaint4
eog
eom
gimagereader
midori
%
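As for the first step, finding the MIME type of a file looks like this (drawing.svg is just a placeholder name, and the exact type reported may vary with the version of file):

% file --mime-type -b drawing.svg
image/svg+xml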

I believe the MIME types are fetched from the desktop file for packages providing appstream metadata.

Categories: Elsewhere

Ritesh Raj Sarraf: Lenovo Yoga 2 13 running Debian with GNOME Converged Interface

Planet Debian - Thu, 04/02/2016 - 16:33

I've wanted to blog about this for a while. So, though I'm terrible at creating video reviews, I'm still going to do it, rather than procrastinate every day.

 

In this video, the emphasis is on using Free Software (GNOME in particular) tools, with which you should soon be able to serve the needs of a desktop/laptop as well as a tablet.

The video also touches a bit on Touchpad Gestures.

 

Categories: Elsewhere

Martin-Éric Racine: xf86-video-geode 2.11.18

Planet Debian - Thu, 04/02/2016 - 15:27

Yesterday, I pushed out version 2.11.18 of the Geode X.Org driver. This is the driver used by the OLPC XO-1 and by a plethora of low-power desktops, micro notebooks and thin clients. This release mostly includes maintenance fixes of all sorts. Of noticeable interest is a fix for the long-standing issue that switching between X and a VT would result in a blank screen (this should probably be cherry-picked for distributions running earlier releases of this driver). Many thanks to Connor Behan for the fix!


Unfortunately, this driver still doesn't work with GNOME. On my testing host, launching GDM produces a blank screen. 'ps' and other tools show that GDM is running but there's no screen content; the screen remains pitch black. This issue doesn't happen with other display managers e.g. LightDM. Bug reports have been filed, additional information was provided, but the issue still hasn't been resolved.


Additionally, the X server flat out crashes on Geode hosts running Linux kernels 4.2 or newer: 'xkbcomp' repeatedly fails to launch and X exits with a fatal error. Bug reports have been filed, but not reacted to. Interestingly enough, however, X launches fine if my testing host is booted with earlier kernels, which might suggest what the actual cause of this particular bug is:


Since kernel 4.2 entered Debian, the base level i386 kernel on Debian is now compiled for i686 (without PAE). Until now, the base level was i586. This essentially makes it pointless to build the Geode driver with GX2 support. It also means that older GX1 hardware won't be able to run Debian either, starting with the next stable release.

Categories: Elsewhere

Acquia Developer Center Blog: Drupal 8 Module of the Week: Admin Toolbar

Planet Drupal - Thu, 04/02/2016 - 13:35
Jeffrey A. "jam" McGuire

Each day, more Drupal 7 modules are being migrated over to Drupal 8 and new ones are being created for the Drupal community’s latest major release. In this series, the Acquia Developer Center is profiling some of the most prominent, useful modules available for Drupal 8. This week: Admin Toolbar.

Tags: acquia drupal planet, admin toolbar, drupal 8, Drupal modules
Categories: Elsewhere

Daniel Pocock: Australians stuck abroad and alleged sex crimes

Planet Debian - Thu, 04/02/2016 - 11:30

Two Australians have achieved prominence (or notoriety, depending on your perspective) for the difficulty in questioning them about their knowledge of alleged sex crimes.

One is Julian Assange, holed up in the embassy of Ecuador in London. He is back in the news again today thanks to a UN panel finding that the UK is effectively detaining him, unlawfully, in the Ecuadorian embassy. The effort made to discredit and pursue Assange and other disruptive technologists, such as Aaron Swartz, has an eerie resemblance to the way the Spanish Inquisition hunted witches in the middle ages.

The other Australian stuck abroad is Cardinal George Pell, the most senior figure in the Catholic Church in Australia. The inquiry into child sex abuse by priests has heard serious allegations claiming that the Cardinal knew about and covered up abuse. This would appear far more sinister than anything Mr Assange is accused of. Like Mr Assange, the Cardinal has been unable to travel to attend questioning in person. News reports suggest he is ill and can't leave Rome, although he is being accommodated in significantly more comfort than Mr Assange.

If you had to choose, which would you prefer to leave your child alone with?

Categories: Elsewhere

Russell Coker: Unikernels

Planet Debian - Thu, 04/02/2016 - 10:48

At LCA I attended a talk about Unikernels. Here are the reasons why I think that they are a bad idea:

Single Address Space

According to the Unikernel Wikipedia page [1] a significant criteria for a Unikernel system is that it has a single address space. This gives performance benefits as there is no need to change CPU memory mappings when making system calls. But the disadvantage is that any code in the application/kernel can access any other code directly.

In a typical modern OS (Linux, BSD, Windows, etc) every application has a separate address space and there are separate memory regions for code and data. While an application can request the ability to modify its own executable code in some situations (if the OS is configured to allow that) it won’t happen by default. In MS-DOS and in a Unikernel system all code has read/write/execute access to all memory. MS-DOS was the least reliable OS that I ever used. It was unreliable because it performed tasks that were more complex than CP/M but had no memory protection, so any bug in any code was likely to cause a system crash. The crash could be delayed by some time (EG corrupting data structures that are only rarely accessed) and was very difficult to fix. It would be possible to have a Unikernel system with non-modifiable executable areas and non-executable data areas and it is conceivable that a virtual machine system like Xen could enforce that. But that still wouldn’t solve the problem of all code being able to write to all data.

On a Linux system when an application writes to the wrong address there is a reasonable probability that it will not have write access and you will immediately get a SEGV which is logged and informs the sysadmin of the address of the crash.

When Linux applications have bugs that are difficult to diagnose (EG buffer overruns that happen in production and can’t be reproduced in a test environment) there are a variety of ways of debugging them. Tools such as Valgrind can analyse memory access and tell the developers which code had a bug and what the bug does. It’s theoretically possible to link something like Valgrind into a Unikernel, but the lack of multiple processes would make it difficult to manage.

Debugging

A full Unix environment has a rich array of debugging tools: strace, ltrace, gdb, valgrind and more. If there are performance problems then there are tools like sysstat, sar, iostat, top, iotop, and more. I don’t know which of those tools I might need to debug problems at some future time.

I don’t think that any Internet facing service can be expected to be reliable enough that it will never need any sort of debugging.

Service Complexity

It’s very rare for a server to have only a single process performing the essential tasks. It’s not uncommon to have a web server running CGI-BIN scripts or calling shell scripts from PHP code as part of the essential service. Also many Unix daemons are not written to run as a single process, at least threading is required and many daemons require multiple processes.

It’s also very common for the design of a daemon to rely on a cron job to clean up temporary files etc. It is possible to build the functionality of cron into a Unikernel, but that means more potential bugs and more time spent not actually developing the core application.

One could argue that there are design benefits to writing simple servers that don’t require multiple programs. But most programmers aren’t used to doing that and in many cases it would result in a less efficient result.

One can also argue that a Finite State Machine design is the best way to deal with many problems that are usually solved by multi-threading or multiple processes. But most programmers are better at writing threaded code so forcing programmers to use a FSM design doesn’t seem like a good idea for security.

Management

The typical server programs rely on cron jobs to rotate log files and monitoring software to inspect the state of the system for the purposes of graphing performance and flagging potential problems.

It would be possible to compile the functionality of something like the Nagios NRPE into a Unikernel if you want to have your monitoring code running in the kernel. I’ve seen something very similar implemented in the past, the CA Unicenter monitoring system on Solaris used to have a kernel module for monitoring (I don’t know why). My experience was that Unicenter caused many kernel panics and more downtime than all other problems combined. It would not be difficult to write better code than the typical CA employee, but writing code that is good enough to have a monitoring system running in the kernel on a single-threaded system is asking a lot.

One of the claimed benefits of a Unikernel was that it’s supposedly risky to allow ssh access. The recent ssh security issue was an attack against the ssh client if it connected to a hostile server. If you had a ssh server only accepting connections from management workstations (a reasonably common configuration for running servers) and only allowed the ssh clients to connect to servers related to work (an uncommon configuration that’s not difficult to implement) then there wouldn’t be any problems in this regard.

I think that I’m a good programmer, but I don’t think that I can write server code that’s likely to be more secure than sshd.

On Designing It Yourself

One thing that everyone who has any experience in security has witnessed is that people who design their own encryption inevitably do it badly. The people who are experts in cryptology don’t design their own custom algorithm because they know that encryption algorithms need significant review before they can be trusted. The people who know how to do it well know that they can’t do it well on their own. The people who know little just go ahead and do it.

I think that the same thing applies to operating systems. I’ve contributed a few patches to the Linux kernel and spent a lot of time working on SE Linux (including maintaining out of tree kernel patches) and know how hard it is to do it properly. Even though I’m a good programmer I know better than to think I could just build my own kernel and expect it to be secure.

I think that the Unikernel people haven’t learned this.


Categories: Elsewhere

Iustin Pop: X cursor theme

Planet Debian - Thu, 04/02/2016 - 10:46

There's not much to say about X cursor themes, except when they change behind your back.

A while back, after a firefox upgrade, it—and only it—showed a different cursor theme: basically double the size, and (IMHO) uglier. I searched for a while, but couldn't figure out what makes firefox special, except that it is a GTK application.

After another round of dist-upgrades, everything except xterms was now showing the big cursors. This annoyed me to no end—as I don't use a high-DPI display, the new cursors are just too damn big. In the end I found out two things:

  • thankfully, under Debian, the x-cursor-theme is an alternatives entry, so it can be easily configured (see the example commands after this list)
  • sadly, the adwaita-icon-theme package (whose description says "default icon theme of GNOME") installs itself as a very high priority alternatives entry (90), which means it takes over my default X cursor
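A quick sketch of inspecting and overriding that entry through the alternatives system (run the second command as root):

update-alternatives --list x-cursor-theme     # show the registered cursor themes
update-alternatives --config x-cursor-theme   # interactively pick the one you want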

Sigh, Gnome.

Categories: Elsewhere

Valuebound: Installing Drupal with Drush, the Basics

Planet Drupal - Thu, 04/02/2016 - 07:37

Drush is a command line interface that helps us speed up administrative and development tasks for Drupal sites. After installing Drush, we'll be able to perform useful actions simply by typing commands into a terminal, actions that would usually take multiple steps in a web browser. Drush runs on Drupal 6 and 7 as well as 8.

Note: Drupal 8 works only with Drush 8.

A couple of tasks that can be done easily using Drush are (see the example commands after this list):
    Download Drupal
    Download contrib modules
    Install Drupal
    Update Drupal and contrib module versions
    Run updatedb
    Clear the cache
    Run cron
    Run Drupal with a lightweight web server
    Import, export and merge configuration
   …
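A minimal sketch of what some of these look like on the command line (assuming Drush 8 with Drupal 8; mymodule is a placeholder project name):

    drush dl drupal                  # download Drupal core
    drush site-install standard      # install Drupal using the standard profile
    drush dl mymodule                # download a contrib module
    drush en mymodule -y             # enable it
    drush updb                       # run pending database updates
    drush cr                         # rebuild ("clear") caches on Drupal 8
    drush cron                       # run cron
    drush runserver                  # serve the site with a lightweight web server
    drush config-export -y           # export site configuration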

Categories: Elsewhere
