Feed aggregator

Mark Shropshire: Type Less with Drush site-set

Planet Drupal - Wed, 22/06/2016 - 05:53

I use drush aliases between Drupal VM and Drupal hosting services quite a bit. It was great to learn that drush site-set lets me set the alias to use for the current session, so I don't have to type the alias name over and over again. For instance, I can set an alias like this: $ drush site-set @drupalvm.drupal8.dev, allowing me to check the status of the site on the Drupal VM with a plain $ drush status. To make it even easier, the command use is an alias for site-set. Example: $ drush use @drupalvm.drupal8.dev.
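
Put together, the workflow looks like this (a minimal sketch using the alias from above):

$ drush site-set @drupalvm.drupal8.dev   # pin the alias for this shell session
$ drush status                           # now runs against the Drupal VM site
$ drush use @drupalvm.drupal8.dev        # equivalent, using the shorter alias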

Drush site-set has some other useful options beyond setting drush aliases. Check out the options available at the link below:

https://drushcommands.com/drush-8x/core/site-set/

Categories: Elsewhere

Talha Paracha: GSoC’16 – Pubkey Encrypt – Week 4 Report

Planet Drupal - Wed, 22/06/2016 - 02:00

I started the week by providing test coverage for functionalities I added to the module in week 3. Since the main functionality I added was the automatic generation of keys, the tests I wrote assert for these capabilities:

Categories: Elsewhere

Matthew Garrett: I've bought some more awful IoT stuff

Planet Debian - Wed, 22/06/2016 - 01:11
I bought some awful WiFi lightbulbs a few months ago. The short version: they introduced terrible vulnerabilities on your network, they violated the GPL and they were also just bad at being lightbulbs. Since then I've bought some other Internet of Things devices, and since people seem to have a bizarre level of fascination with figuring out just what kind of fractal of poor design choices these things frequently embody, I thought I'd oblige.

Today we're going to be talking about the KanKun SP3, a plug that's been around for a while. The idea here is pretty simple - there's lots of devices that you'd like to be able to turn on and off in a programmatic way, and rather than rewiring them the simplest thing to do is just to insert a control device in between the wall and the device and now you can turn your foot bath on and off from your phone. Most vendors go further and also allow you to program timers and even provide some sort of remote tunneling protocol so you can turn off your lights from the comfort of somebody else's home.

The KanKun has all of these features and a bunch more, although when I say "features" I kind of mean the opposite. I plugged mine in and followed the install instructions. As is pretty typical, this took the form of the plug bringing up its own Wifi access point, the app on the phone connecting to it and sending configuration data, and the plug then using that data to join your network. Except it didn't work. I connected to the plug's network, gave it my SSID and password and waited. Nothing happened. No useful diagnostic data. Eventually I plugged my phone into my laptop and ran adb logcat, and the Android debug logs told me that the app was trying to modify a network that it hadn't created. Apparently this isn't permitted as of Android 6, but the app was handling this denial by just trying again. I deleted the network from the system settings, restarted the app, and this time the app created the network record and could modify it. It still didn't work, but that's because it let me give it a 5GHz network and it only has a 2.4GHz radio, so one reset later and I finally had it online.

The first thing I normally do to one of these things is run nmap with the -O argument, which gives you an indication of what OS it's running. I didn't really need to in this case, because if I just telnetted to port 22 I got a dropbear ssh banner. Googling turned up the root password ("p9z34c") and I was logged into a lightly hacked (and fairly obsolete) OpenWRT environment.

It turns out that there's a whole community of people playing with these plugs, and it's common for people to install CGI scripts on them so they can turn them on and off via an API. At first this sounds somewhat confusing, because if the phone app can control the plug then there clearly is some kind of API, right? Well ha yeah ok that's a great question and oh good lord do things start getting bad quickly at this point.

I'd grabbed the apk for the app and a copy of jadx, an incredibly useful piece of code that's surprisingly good at turning compiled Android apps into something resembling Java source. I dug through that for a while before figuring out that before packets were being sent, they were being handed off to some sort of encryption code. I couldn't find that in the app, but there was a native ARM library shipped with it. Running strings on that showed functions with names matching the calls in the Java code, so that made sense. There were also references to AES, which explained why when I ran tcpdump I only saw bizarre garbage packets.

But what was surprising was that most of these packets were substantially similar. There were a load that were identical other than a 16-byte chunk in the middle. That, plus the fact that every payload length was a multiple of 16 bytes, strongly indicated that AES was being used in ECB mode. In ECB mode the plaintext is split up into 16-byte chunks and each chunk is encrypted with the same key, so the same plaintext block will always produce the same ciphertext block. This implied that the packets' contents were substantially similar and that the encryption key was static.
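
As a rough illustration of that analysis (a hypothetical sketch, not anything shipped with the plug or app): because ECB maps identical 16-byte plaintext blocks to identical ciphertext blocks, repeated blocks in a captured payload are a telltale sign.

<?php
// Hypothetical sketch: flag payloads that look like AES-ECB ciphertext.
function looks_like_ecb($payload) {
  if (strlen($payload) % 16 !== 0) {
    return FALSE; // AES ciphertext lengths are multiples of the block size.
  }
  $blocks = str_split($payload, 16);
  // Repeated ciphertext blocks imply repeated plaintext blocks under ECB.
  return count($blocks) !== count(array_unique($blocks));
}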

Some more digging showed that someone had figured out the encryption key last year, and that someone else had written some tools to control the plug without needing to modify it. The protocol is basically ascii and consists mostly of the MAC address of the target device, a password and a command. This is then encrypted and sent to the device's IP address. The device then sends a challenge packet containing a random number. The app has to decrypt this, obtain the random number, create a response, encrypt that and send it before the command takes effect. This avoids the most obvious weakness around using ECB - since the same plaintext always encrypts to the same ciphertext, you could just watch encrypted packets go past and replay them to get the same effect, even if you didn't have the encryption key. Using a random number in a challenge forces you to prove that you actually have the key.

At least, it would do if the numbers were actually random. It turns out that the plug is just calling rand(). Further, it turns out that it never calls srand(). This means that the plug will always generate the same sequence of challenges after a reboot, which means you can still carry out replay attacks if you can reboot the plug. Strong work.

But there was still the question of how the remote control works, since the code on github only worked locally. tcpdumping the traffic from the server and trying to decrypt it in the same way as local packets worked fine, and showed that the only difference was that the packet started with "wan" rather than "lan". The server decrypts the packet, looks at the MAC address, re-encrypts it and sends it over the tunnel to the plug that registered with that address.

That's not really a great deal of authentication. The protocol permits a password, but the app doesn't insist on it - some quick playing suggests that about 90% of these devices still use the default password. And the devices are all based on the same wifi module, so the MAC addresses are all in the same range. The process of sending status check packets to the server with every MAC address wouldn't take that long and would tell you how many of these devices are out there. If they're using the default password, that's enough to have full control over them.

There are some other failings. The github repo mentioned earlier includes a script that allows arbitrary command execution - the wifi configuration information is passed to the system() command, so leaving a semicolon in the middle of it will result in your own commands being executed. Thankfully this doesn't seem to be true of the daemon that's listening for the remote control packets, which seems to restrict its use of system() to data entirely under its control. But even if you change the default root password, anyone on your local network can get root on the plug. So that's a thing. It also downloads firmware updates over http and doesn't appear to check signatures on them, so there's the potential for MITM attacks on the plug itself. The remote control server is on AWS unless your timezone is GMT+8, in which case it's in China. Sorry, Western Australia.

It's running Linux and includes Busybox and dnsmasq, so plenty of GPLed code. I emailed the manufacturer asking for a copy and got told that they wouldn't give it to me, which is unsurprising but still disappointing.

The use of AES is still somewhat confusing, given the relatively small amount of security it provides. One thing I've wondered is whether it's not actually intended to provide security at all. The remote servers need to accept connections from anywhere and funnel decent amounts of traffic around from phones to switches. If that weren't restricted in any way, competitors would be able to use existing servers rather than setting up their own. Using AES at least provides a minor obstacle that might encourage them to set up their own server.

Overall: the hardware seems fine, the software is shoddy and the security is terrible. If you have one of these, set a strong password. There's no rate-limiting on the server, so a weak password will be broken pretty quickly. It's also infringing my copyright, so I'd recommend against it on that point alone.

Categories: Elsewhere

Ian Wienand: Zuul and Ansible in OpenStack CI

Planet Debian - Wed, 22/06/2016 - 00:16

In a prior post, I gave an overview of the OpenStack CI system and how jobs were started. In that I said

(It is a gross oversimplification, but for the purposes of OpenStack CI, Jenkins is pretty much used as a glorified ssh/scp wrapper. Zuul Version 3, under development, is working to remove the need for Jenkins to be involved at all).

Well, some recent security issues with Jenkins and other changes have led to a roll-out of what is being called Zuul 2.5, which has indeed removed Jenkins and makes extensive use of Ansible as the basis for running CI tests in OpenStack. Since I already had the diagram, it seems worth updating it for the new reality.

OpenStack CI Overview

While the previous post focused on the image-building components of the OpenStack CI system, the overview here is the same but focuses more on the launchers that run the tests.

  1. The process starts when a developer uploads their code to gerrit via the git-review tool. There is no further action required on their behalf and the developer simply waits for results of their jobs.

  2. Gerrit provides a JSON-encoded "fire-hose" output of everything happening to it. New reviews, votes, updates and more all get sent out over this pipe. Zuul is the overall scheduler that subscribes itself to this information and is responsible for managing the CI jobs appropriate for each change.

  3. Zuul has a configuration that tells it what jobs to run for what projects. Zuul can do lots of interesting things, but for the purposes of this discussion we just consider that it puts the jobs it wants run into gearman for a launcher to consume. gearman is a job-server; as they explain it "[gearman] provides a generic application framework to farm out work to other machines or processes that are better suited to do the work". Zuul puts into gearman basically a tuple (job-name, node-type) for each job it wants run, specifying the unique job name to run and what type of node it should be run on.

  4. A group of Zuul launchers are subscribed to gearman as workers. It is these Zuul launchers that will consume the job requests from the queue and actually get the tests running. However, a launcher needs two things to be able to run a job — a job definition (what to actually do) and a worker node (somewhere to do it).

    The first part — what to do — is provided by job-definitions stored in external YAML files. The Zuul launcher knows how to process these files (with some help from Jenkins Job Builder, which despite the name is not outputting XML files for Jenkins to consume, but is being used to help parse templates and macros within the generically defined job definitions). Each Zuul launcher gets these definitions pushed to it constantly by Puppet, thus each launcher knows about all the jobs it can run automatically. Of course Zuul also knows about these same job definitions; this is the job-name part of the tuple we said it put into gearman.

    The second part — somewhere to run the test — takes some more explaining. To the next point...

  5. Several cloud companies donate capacity in their clouds for OpenStack to run CI tests. Overall, this capacity is managed by a customized management tool called nodepool (you can see the details of this capacity at any given time by checking the nodepool configuration). Nodepool watches the gearman queue and sees what requests are coming out of Zuul. It looks at the node-type of jobs in the queue (i.e. what platform the job has requested to run on) and decides what types of nodes need to be started and which cloud providers have capacity to satisfy demand.

    Nodepool will start fresh virtual machines (from images built daily as described in the prior post), monitor their start-up and, when they're ready, put a new "assignment job" back into gearman with the details of the fresh node. One of the active Zuul launchers will pick up this assignment job and register the new node to itself.

  6. At this point, the Zuul launcher has what it needs to actually get jobs started. With a fresh node registered to it and waiting for something to do, the Zuul launcher can advertise its ability to consume one of the waiting jobs from the gearman queue. For example, if a ubuntu-trusty node is provided to the Zuul launcher, the launcher can now consume from gearman any job it knows about that is intended to run on an ubuntu-trusty node type. If you're looking at the launcher code this is driven by the NodeWorker class — you can see this being created in response to an assignment via LaunchServer.assignNode.

    To actually run the job — where the "job hits the metal" as it were — the Zuul launcher will dynamically construct an Ansible playbook to run. This playbook is a concatenation of common setup and teardown operations along with the actual test scripts the job wants to run. Using Ansible to run the job means all the flexibility an orchestration tool provides is now available to the launcher. For example, there is a custom console streamer library that allows us to live-stream the console output for the job over a plain TCP connection, and there is the possibility to use projects like ARA for visualisation of CI runs. In the future, Ansible will allow for better coordination when running multiple-node testing jobs — after all, this is what orchestration tools such as Ansible are made for! While the Ansible run can be fairly heavyweight (especially when you're talking about launching thousands of jobs an hour), the system scales horizontally with more launchers able to consume more work easily.
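
    A generated playbook of the shape described might look roughly like this (a hypothetical sketch; the paths and the variable name are made up, not the launcher's actual output):

    # Hypothetical sketch of a launcher-built playbook: common setup,
    # then the job's test script, then log collection for publishing.
    - hosts: node                       # the fresh nodepool VM for this job
      tasks:
        - name: Common setup
          shell: /opt/ci/setup.sh       # hypothetical path
        - name: Run the job's test script
          shell: /opt/ci/run-tests.sh   # hypothetical path
        - name: Pull logs back to the launcher
          synchronize:
            mode: pull
            src: /var/log/ci/
            dest: "{{ log_dir }}"       # hypothetical variable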

    When checking your job results on logs.openstack.org you will see a _zuul_ansible directory now which contains copies of the inventory, playbooks and other related files that the launcher used to do the test run.

  7. Eventually, the test will finish. The Zuul launcher will put the result back into gearman, which Zuul will consume (log copying is interesting but a topic for another day). The testing node will be released back to nodepool, which destroys it and starts all over again — nodes are not reused and also have no sensitive details on them, as they are essentially publicly accessible. Zuul will wait for the results of all jobs for the change and post the result back to Gerrit; it either gives a positive vote or the dreaded negative vote if required jobs failed (it also handles merges to git, but that is also a topic for another day).

Work will continue within OpenStack Infrastructure to further enhance Zuul, including better support for multi-node jobs and "in-project" job definitions (similar to the https://travis-ci.org/ model); for full details see the spec.

Categories: Elsewhere

Acquia Developer Center Blog: Drupal 8 Module (Distro!) of the Week: Lightning

Planet Drupal - Tue, 21/06/2016 - 23:29

Each day, new functionality is being created for and built with Drupal 8. At the same time, more and more Drupal 7 modules are also being migrated to the Drupal community’s latest major release. In this series, the Acquia Developer Center is profiling some of the most prominent, useful modules, projects, and tools available for Drupal 8. This week: the Drupal 8 Lightning distribution.

Tags: acquia drupal planet, lightning, distro, distribution, authoring
Categories: Elsewhere

Zivtech: Attention: A Key Component of UX and Cognitive Psychology

Planet Drupal - Tue, 21/06/2016 - 21:30


Technology is cool. New features are cool. Shouldn’t your site show all these cool things off?

The short answer, unfortunately, is no. All those bells and whistles rapidly overwhelm users. They may be thinking: Wow, look at this magical 3D scrolling effect! Wow, look at this video background! Wow, check out this slideshow! Wow, look at those cool drawings!

And very quickly, users get lost in the hubbub of cool things and lose track of their mission. Guess what? Your website’s goal is not to show off cool stuff. It’s to attract and hold visitors’ attention on your product or service, and to convert them into customers. That’s why it’s so important to understand how attention comes into play when designing your website.


How Psychology Aids Website Design

Recent advances in technology spotlight two increasingly important fields: user experience (UX), or how people interact with websites and apps; and cognitive psychology, a discipline that examines mental processes such as thinking and memory.

User experience is an exemplary application of cognitive psychology, though it’s not always framed that way. In order for a user experience designer to work from a research-driven, human-focused standpoint, it’s necessary to understand key aspects of cognition.

Attention is one of the main tenets of human cognition. When you understand the principles of attention, you can greatly improve the way websites are designed for both the producer and the consumer.


Look Away From the Light

Psychologists used to compare visual attention to a spotlight: people set their eyes on a certain visual of a certain size and that was that. Cognitive psychologists have made great strides in the field of attention. It turns out that attention is not as limited as scientists once imagined.

Contrary to what most people assume, attention is not finite. It does not have to be focused where the eyes are looking, and it can be focused in multiple spots. People take in stimuli even if they aren’t immediately focused on them. Specific things like movement divert attention from the initial focus. Attention is not a spotlight as psychologists once thought, but rather it is an ever-shifting amorphous scan.

Take a more holistic approach to building a website: guide the user’s attention without inducing a headache. When too many exciting things distract users, it prevents them from accomplishing their goals.

Order is Beauty

Strive for a visual hierarchy. Not everything should immediately try to grab the user’s attention; rather, the most important part of the website should be obvious. Some websites have begun to prioritize the user’s attention; for example, Zivtech’s blog page phases out the header image by blurring it as the user scrolls down, shifting focus to the articles below.

Anyone can tell you that a web page looks cluttered, but a good designer should know how a cluttered web page impacts user attention. So if you want to boost your site metrics like traffic, session duration, and conversion, pay attention.

Categories: Elsewhere

Gunnar Wolf: Relax and breathe...

Planet Debian - Tue, 21/06/2016 - 21:30

Time passes. I had left several (too many?) pending things to be done in the quiet weeks between the end of the teaching semester and the beginning of my Summer trip to Winter. But Saturday gets closer every moment... And our long trip to the South begins.

Among many other things, I wanted to advance with some Debian stuff - both packaging and WRT keyring analysis. I want to contact some people I left pending interactions with, but honestly, that will only come face to face in Cape Town.

As to "real life", I have too many pending issues at work to even begin with; I hope to get some time in South Africa to do some decent UNAM sysadmining. Also, I want to play with the idea of using Git for my students' workflow (handing in projects and assignments, at least)... This can be interesting to talk about with the Debian colleagues, actually.

As a Masters student, I'm making good advances, and will probably finish my class work next semester, six months ahead of schedule, but my thesis work so far has progressed way slower than I'd like. I have at least a better defined topic and approach, so I'll start the writing phase soon.

And the personal life? Family? I am more complete and happy than ever before. My life is completely different from two years ago. Yes, that was obvious. But it's also the only thing I can come up with. Having twin babies (when will they make the transition from "babies" to "kids"? No idea... We will find out as it comes) is more than beautiful, more than great. Our life has changed in every possible aspect. And yes, I admire my loved Regina for all of the energy and love she puts on the babies... Life is asymmetric, I am out for most of the day... Mommy is always there.

As I said, happier than ever before.

Categories: Elsewhere

Dries Buytaert: The long path to being understood

Planet Drupal - Tue, 21/06/2016 - 19:29

I sent an internal note to all of Acquia's 700+ employees today and decided to cross-post it to my blog because it contains a valuable lesson for any startup. One of my personal challenges — both as an Open Source evangelist/leader and entrepreneur — has been to learn to be comfortable with not being understood. Lots of people didn't believe in Open Source in Drupal's early days. Some people still don't understand why you'd give the software away for free. Lots of people didn't believe Acquia could succeed. It can be difficult to deal with the naysayers and rejections. In many cases, an idea takes years to gain general acceptance. Open Source software and its new commercial approaches are starting to reach that point just now. If you ever have an idea that is not understood, I want you to think of my story.

Team,

This week, Acquia got a nice mention on Techcrunch in an article written by Jake Flomenberg, a partner at Accel Partners. For those of you who don't know Accel Partners, they are one of the most prominent venture capital investors and were early investors in companies like Facebook, Dropbox, Slack, Etsy, Atlassian, Lynda.com, Kayak and more.

The article, called "The next wave in software is open adoption software", talks about how the enterprise IT stack is being redrawn atop powerful Open Source projects like MongoDB, Hadoop, Drupal and more. Included in the article is a graph that shows Acquia's place in the latest wave of change to transform the technology landscape, a place showing our opportunity is bigger than anything before as the software industry migrated from mainframes to client-server, then SaaS/PaaS and now to what Flomenberg dubs the age of Open Adoption Software.

It's a great article, but it isn't new to any of us per se – we have been promoting this vision since our start nine years ago and we have seen over and over again how Open Source is becoming the dominant model for how enterprises build and deliver IT. We have also shown that we are building a successful technology company using Open Source.

Why then do I feel compelled to share this article, you ask? The article marks a small but important milestone for Acquia.

We started Acquia to build a new kind of company with a new kind of business model, a new innovation model, all optimized for a new world. A world where businesses are moving most applications into the cloud, where a lot of software is becoming Open Source, where IT infrastructure is becoming a metered utility, and where data-driven services make or break business results.

We've been steadily executing on this vision; it is why we invest in Open Source (e.g. Drupal), cloud infrastructure (e.g. Acquia Cloud and Site Factory), and data-centric business tools (e.g. Acquia Lift).

In my 15+ years as an Open Source evangelist, I've argued with thousands of people who didn't believe in Open Source. In my 8+ years as an entrepreneur, I've talked to thousands of business people and dozens of investors who didn't understand or believe in Acquia's vision. Throughout the years, Tom and I have presented Acquia's vision to many investors – some have bought in and some, like Accel, have not (for various reasons). I see more and more major corporations and venture capital firms coming around to Open Source business models every day. This trend is promising for new Open Source companies; I'm proud that Acquia has been a part of clearing their path to being understood.

When former skeptics become believers, you know you are finally being understood. The Techcrunch article is a small but important milestone because it signifies that Acquia is finally starting to be understood more widely. As flattering as the Techcrunch article is, true validation doesn't come in the form of an article written by a prominent venture capitalist; it comes day-in and day-out by our continued focus and passion to grow Drupal and Acquia bit by bit, one successful customer at a time.

Building a new kind of company like we are doing with Acquia is the harder, less-traveled path, but we always believed it would be the best path for our customers, our communities, and ultimately, our world. Success starts with building a great team that not only understands what we do, but truly believes in what we do and remains undeterred in its execution. Together, we can build this new kind of company.

--
Dries Buytaert
Founder and Project Lead, Drupal
Co-founder and Chief Technology Officer, Acquia

Categories: Elsewhere

DrupalCon News: The Business of Drupal

Planet Drupal - Tue, 21/06/2016 - 19:10

Drupal is a CMS. Drupal is a framework. Drupal is a piece of software which allows us to create amazing online experiences. Drupal is its awesome community. For some of us Drupal is a way of life. But what else is Drupal?

Drupal is our business.

Categories: Elsewhere

Cheeky Monkey Media: Custom Sorting of Views Content

Planet Drupal - Tue, 21/06/2016 - 17:58

Have you ever had a list of related items, related by, say, a taxonomy term or another node, and needed some way to sort that list, fully or even partially? If so, there are a few good Views modules out there to help you out.

The Nodequeue Module

My first introduction to setting up a custom sort on a list of content was to use the Nodequeue module. Nodequeue is a multi-faceted module with a lot of queue/listing functionality, one facet of which is integration with Views.

I’ll go through the steps necessary for setting up a nodequeue and linking it to your view to have it use your sorting.

Categories: Elsewhere

Joey Hess: twenty years of free software -- part 2 etckeeper

Planet Debian - Tue, 21/06/2016 - 17:24

etckeeper was a sleeper success for me. I created it, wrote one blog post about it, installed it on all my computers, and mostly forgot about it, except when I needed to look something up in the git history of /etc it helpfully maintains. It's a minor project.

Then I started getting patches porting it to many other version control systems, and other linux distributions, and fixing problems, and adding features. Mountains and mountains of patches over time.

And then I started hearing about distributions that install it by default. (Though Debian for some reason never did, so I keep having to install it everywhere by hand.)

Writing this blog post, I noticed etckeeper had accumulated enough patches from other people to warrant a new release. That happens pretty regularly.

So it's still a minor project as far as I'm concerned, but quite a few people seem to be finding it useful. So it goes with free software.

Next: twenty years of free software -- part 3 myrepos

Categories: Elsewhere

Acquia Developer Center Blog: How to Ensure That Your Website is Launch-Ready

Planet Drupal - Tue, 21/06/2016 - 17:08

Launching a new application can be a scary event. Many potential bottlenecks, although not readily apparent, can cause problems on the go-live day, or the first time there’s a surge in site traffic.

At Acquia, we conduct a site audit to ensure that a new site is not subject to unnecessary delays. We do this by identifying potential problems, and proposing clear and specific remediation and optimization measures during development.

That’s the big picture. Here’s a close-up view on how we do it.

Tags: acquia drupal planet
Categories: Elsewhere

ImageX Media: When Responsive Websites May Not Be Enough: Why You Need a Mobile Business App

Planet Drupal - Tue, 21/06/2016 - 16:07

Mobile usage shows no signs of slowing down. Many web design and development agencies encourage clients to deploy websites using a responsive design. For those in need of a refresher, a responsive website is a design approach based on fluid grids and CSS3 media queries; a responsive site's layout changes based on the size (height x width) of a device.

Categories: Elsewhere

Drupal Commerce: Commerce 2.x: Unit, Kernel, and Functional Tests Oh My!

Planet Drupal - Tue, 21/06/2016 - 16:01

At the end of May, I started an initiative to move all of the Drupal Commerce tests away from Simpletest and onto the available test classes built on top of PHPUnit. Why? Simpletest is a test framework within Drupal and not used by the PHP community at large.

With the KernelTestBaseTNG™ issue, Drupal core officially moved to being based on top of PHPUnit for Kernel and Unit tests. Soon more test types were to follow, such as browser tests and JavaScript testing.

Death to Simpletest, Long Live PHPUnit, Mink, and PhantomJS

We now have PHPUnit as our test framework, the choice of the greater PHP community. The browser tests use the Mink browser emulator, which anyone working with Behat should be somewhat familiar with. Testing JavaScript is done by pointing PhantomJS configuration to Mink. No longer are we limited to the functionality of Simpletest, with only our community to develop it.
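
For readers who have not seen one, a minimal Drupal 8 PHPUnit unit test looks roughly like this (a generic sketch with a hypothetical module name, not actual Drupal Commerce test code):

<?php

namespace Drupal\Tests\mymodule\Unit;

use Drupal\Tests\UnitTestCase;

/**
 * A minimal unit test; runs under PHPUnit without a Drupal installation.
 */
class ExampleTest extends UnitTestCase {

  public function testAddition() {
    // Plain PHPUnit assertions work as-is.
    $this->assertSame(4, 2 + 2);
  }

}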

Categories: Elsewhere

ComputerMinds.co.uk: How to write a PHPUnit test for Drupal 8

Planet Drupal - Tue, 21/06/2016 - 14:00

This article will talk you through the steps to follow to write a simple PHPUnit test for Drupal 8.

I have been doing a lot of work on Drupal 8 migrations for the past few months so that will be the focus of the test.

Step 1: Create a Fixture

To quote the PHPUnit manual:

Categories: Elsewhere

Web Wash: Debug Site Performance Using Web Profiler in Drupal 8

Planet Drupal - Tue, 21/06/2016 - 13:50

At the beginning of any Drupal project the site loads very quickly because there aren't many modules installed. But as you add modules, the performance of the site will become slower and slower.

There's always a certain point in the project where you realize it's time to look at the problem and see if it's a rogue module or some dodgy code; we've all seen this.

Trying to debug a performance issue can be tedious work. But often, it comes down to having too many queries loaded on a page.

If you're on Drupal 7, just enable query logging using the Devel module. This will show all the queries generated at the bottom of the page.
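
Assuming you use drush, that takes two commands (a sketch; devel_query_display is Devel's query-log setting, quoted from memory):

$ drush en devel -y                  # enable the Devel module
$ drush vset devel_query_display 1   # show the query log at the bottom of each page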

But for Drupal 8 we have something better: Web Profiler.

Web Profiler is a Drupal 8 port of the Symfony WebProfiler bundle. The port is possible because Drupal 8 uses Symfony components.

Web Profiler adds a toolbar at the bottom of every page and shows you all sorts of stats such as the amount of database queries loaded on the page, which services are used and much more.

Categories: Elsewhere

Into my Galaxy: GSoC’ 16: Port Search Configuration module; coding week #4

Planet Drupal - Tue, 21/06/2016 - 13:45

Google Summer of Code (GSoC) has entered the mid-term evaluation stage. This is a one-week period from 21 to 27 June, where students and mentors present the progress of their projects. Based on the reports submitted, students are marked pass/fail.

I have been working on porting Search Configuration to Drupal 8 in the past few weeks. If you would like to have a quick glimpse of my past activities on this port process, please go through these posts.

Last week, I learned some Drupal concepts which were really helpful for my project. In the previous versions of Drupal, the role permissions were stored in a role_permissions table in the database. But now, in Drupal 8, the role permissions are stored directly in the role configuration entity.

So, as described above, in D7 and its preceding versions, role permissions were stored in a role_permissions table which had the role ID and the corresponding permissions. The permissions assigned to roles were retrieved in D7 using:

$permissions = user_role_permissions($roles);

But in D8, this is done through the role configuration entity:

$permissions = $role->getPermissions();

Another instance is granting certain permissions to roles.

In D7 it was controlled by:

user_role_grant_permissions($rid, array('access content'));

The role configuration entity remodels this functionality in D8 to:

$role->grantPermission('access content');

In connection with permissions, the most important aspect in earlier versions of Drupal is a hook: hook_permission(). This hook, as you might have guessed, defines the permissions of a module and decides whether a particular user should be allowed to access a page or a piece of content, granting and restricting access.

This hook has been replaced in Drupal 8 by a module.permissions.yml file. This file contains the permissions and their specifications. Static permissions live directly in the yml file, while dynamic permissions are produced by a callback class written in PHP: we add the behaviour of the permissions we need in the members of that class, and link the class to the yml file by listing it under a permission_callbacks key, roughly as sketched below.
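
A minimal sketch of the two pieces (mymodule is a hypothetical module name, not part of the Search Configuration port):

# mymodule.permissions.yml
view mymodule reports:
  title: 'View mymodule reports'
permission_callbacks:
  - \Drupal\mymodule\MymodulePermissions::permissions

<?php
// src/MymodulePermissions.php - the callback class referenced above.
namespace Drupal\mymodule;

class MymodulePermissions {

  /**
   * Returns dynamic permissions, keyed by permission machine name.
   */
  public function permissions() {
    return [
      'administer mymodule settings' => [
        'title' => 'Administer mymodule settings',
      ],
    ];
  }

}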

To sanitize special characters in a plain-text string for display as HTML, earlier Drupal versions used the function check_plain. It had the general syntax:

check_plain($text); // where $text is the string to be processed.

This function has been deprecated in Drupal 8 and replaced by \Drupal\Component\Utility\Html::escape($text).

Categories: Elsewhere

Miloš Bovan: Detecting a footer of an email

Planet Drupal - Tue, 21/06/2016 - 13:22

This is the 5th blog post of the Google Summer of Code 2016 project - Mailhandler.

Implementing authentication and authorization for a mail sender provided an additional layer of security for Mailhandler project. The module was extended to support both PGP signed and unsigned messages.

The goal for the last week was to create a mail footer analyzer and to add support for node (content) type detection via the mail subject. The pull request has been created and is in review. This analyzer's purpose is to strip the footer/signature from the message body. As of now, two types of signature/footer separators are supported (a parsing sketch follows the list):

  • -- \n as the separator line between the body and the signature of a message recommended by RFC 3676
  • On {day}, {month} {date}, {year} at {hour}:{minute} {AM|PM} pattern which is trickier and currently used by Gmail to separate replied message from the response.
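
A sketch of the first case (hypothetical code, not the module's actual implementation): the RFC 3676 separator is a line containing exactly "-- ", so the body can be split on the first such line.

<?php
// Hypothetical sketch: split a message body at the first RFC 3676
// signature separator ("-- " on a line of its own; assumes \n line endings).
function split_footer($body) {
  $parts = preg_split('/^-- $/m', $body, 2);
  return [
    'body' => rtrim($parts[0]),
    'footer' => isset($parts[1]) ? ltrim($parts[1]) : NULL,
  ];
}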

First of all, we had to create the inmail.analyzer.footer config entity and the corresponding analyzer plugin - FooterAnalyzer. Since the footer, subject and content type properties are relevant for all types of mail messages supported by Mailhandler, these properties were put in the MailhandlerAnalyzerResultBase class.

FooterAnalyzer currently depends on the analyzed result provided by MailhandlerAnalyzer. The reason why one plugin depends on another is to support PGP-signed messages. MailhandlerAnalyzer will try to analyze the message body of signed (and unsigned) messages and extract the actual mail body. Next, FooterAnalyzer will parse the processed body stored in MailhandlerAnalyzerResult. As mentioned above, the footer analyzer currently supports footers separated by "-- \n" and On {day}, {month} {date}, {year} at {hour}:{minute} {AM|PM} lines. The content after these lines is put into the footer property of the analyzer result. If the message body has one of the supported separators, the detected footer is stripped out of the actual message body.

Furthermore, content type detection via the message subject has been implemented. As we are going to support creating comments via email in the following weeks, we had to create a “protocol” that will allow us to differentiate between nodes and comments. We agreed to prefix the actual message subject with [{entity_type}][{bundle}]. For now, only the node entity type and its bundle (content/node type) are parsed and extracted. All the assertions on the analyzed message happen in the handler plugin (MailhandlerNode). The handler plugin will check if the configured content type is set to “Detect” mode and, if so, it will get the parsed content type and create an entity of the parsed node type.
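
That subject convention can be parsed with a simple pattern, roughly like this (a hypothetical sketch, not Mailhandler's actual code):

<?php
// Hypothetical sketch: extract [{entity_type}][{bundle}] from a subject
// such as "[node][article] My new post".
function parse_subject($subject) {
  if (preg_match('/^\[(\w+)\]\[(\w+)\]\s*(.*)$/', $subject, $matches)) {
    return [
      'entity_type' => $matches[1],
      'bundle' => $matches[2],
      'subject' => $matches[3],
    ];
  }
  return NULL;
}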

This week, students and their mentors are requested to submit mid-term evaluations. The evaluation represents a summary of the project after 5 weeks of work. By finishing FooterAnalyzer, Mailhandler is now capable of processing signed (and unsigned) emails, extracting the actual body and creating a node of the detected node type for an authorized user.

The plan for the next week is to extend the project with validation support. We will use entity (node) validation and extend content type validation to bundles too. Also, I will work on splitting the Mailhandler analyzer into smaller analyzers and adapting the handler to the changes.

Tags: Drupal, Open source, Google Summer of Code, Drupal Planet
Categories: Elsewhere

TheodorosPloumis blog: DrupalCamp Greece is 3B!

Planet Drupal - Tue, 21/06/2016 - 13:19

DrupalCamp Greece is "3B". Back, Bigger, Better!

The Greek community organizes the 3rd (or 4th, I can't remember) Drupal Camp Athens, 1 - 3 July 2016.

Three days with our "one true love" Drupal, with social events right in the heart of Athens and many interesting sessions about Drupal and the new ecosystem around it (yes, we are off the island now and so are our DrupalCamps :-)

Schedule is ready.

MortenDK is going to open the event with a special keynote and a session about - what else - "DrupalTwig". There will be several sessions about Drupal 8.x Plugin system, migration, CKEditor, frontend, backend, REST API, content strategy, Aegir, security and my - temporary - favourite topic: Docker!

Oh, I forgot to mention the workshops. An introduction to Drupal 8.x and a special workshop about 8.x Commerce Kickstart. There will also be a sprint.

Are you a Drupal <whatever> traveling to Greece? Why not join us? You can still get your ticket.

drupal-camp.gr

And don’t forget to register for news and updates about the event.

Hope to see you around.

(PS. I am not representing the organizers or the Greek Drupal community and this post contains my own opinion)

Categories: Elsewhere

Reproducible builds folks: Reproducible builds: week 60 in Stretch cycle

Planet Debian - Tue, 21/06/2016 - 10:24

What happened in the Reproducible Builds effort between June 12th and June 18th 2016:

Media coverage

GSoC and Outreachy updates

Weekly reports by our participants:

Toolchain fixes
  • texlive-bin/2016.20160513.41080-3 has been uploaded to unstable, featuring support for FORCE_SOURCE_DATE. See the last post for details on it.
  • doxygen/1.4.4-1 has been uploaded to unstable, fixing #822197 (upstream bug), which caused some generated html file to contain unreproducible memory addresses of Python objects used at build time.
  • debhelper/9.20160618 has been uploaded to unstable, fixing #824490, which instructs ant to not save the username of the build user in the generated files. Original patch by Emmanuel Bourg.
  • HW42 reported a long-known (although only internally) bug in our dpkg-buildinfo. This particular bug doesn't affect our current infrastructure, but it's a blocker for having .buildinfo support merged upstream.
  • epydoc/3.0.1+dfsg-13 and 3.0.1+dfsg-14 have been uploaded by Kenneth J. Pronovici which fixes nondeterministic ordering issues in generated documentation and removes memory addresses. Original patches (#825968 and #827416) by Sascha Steinbiss.

With this upload of texlive-bin we decided to stop keeping our patched fork of it, as most of the patches for SOURCE_DATE_EPOCH support had been integrated upstream already, and the last one (making FORCE_SOURCE_DATE default to 1) had been refused. So, we are now going to let the archive be rebuilt against unstable's texlive-bin and see how many packages become unreproducible with this change; once enough data has been collected we will ponder whether FORCE_SOURCE_DATE should be exported by helper tools (such as debhelper) or manually exported by every package that needs it.

(For those wondering: we still recommend always following SOURCE_DATE_EPOCH, and we don't recommend that other projects implement FORCE_SOURCE_DATE…)

With the drop of texlive-bin we now have only three modified packages in our experimental repository.

Reproducible work in other projects

Packages fixed

The following 12 packages have become reproducible due to changes in their build dependencies: django-floppyforms flask-restful hy jets3t kombu llvm-toolchain-3.8 moap python-bottle python-debtcollector python-django-debug-toolbar python-osprofiler stevedore

The following packages have become reproducible after being fixed:

Some uploads have fixed some reproducibility issues, but not all of them:

Uploads with reproducibility fixes that currently fail to build:

  • ruby2.3/2.3.1-3 by Christian Hofstaedtler, avoids unreproducible rbconfig.rb files by always using bash for building.

Patches submitted that have not made their way to the archive yet:

  • #827109 against asciijump by Reiner Herrmann: sort source files for deterministic linking order.
  • #827112 against boswars by Reiner Herrmann: sort source files for deterministic linking order.
  • #827114 against overgod by Reiner Herrmann: use C locale for sorting source files.
  • #827115 against netpbm by Alexis Bienvenüe: honour SOURCE_DATE_EPOCH while generating output.
  • #827124 against funguloids by Reiner Herrmann: use C locale for sorting files.
  • #827145 against scummvm by Reiner Herrmann: don't embed extra fields in zip archive and build with proper host architecture.
  • #827150 against netpanzer by Reiner Herrmann: sort source files for deterministic linking order.
  • #827172 against reaver by Alexis Bienvenüe: sort object files for deterministic linking order.
  • #827187 against latex2html by Alexis Bienvenüe: iterate deterministically over Perl hashes; honour SOURCE_DATE_EPOCH for output; strip username from output; sort index keys.
  • #827313 against cherrypy3 by Sascha Steinbiss: prevent memory addresses in output.
  • #827361 against matplotlib by Alexis Bienvenüe: honour SOURCE_DATE_EPOCH in output and sort keys while iterating over dict.
  • #827382 against dwarfutils by Reiner Herrmann: fix array size, which caused memory from outside a table to be embedded into output.
  • #827384 against skytools3 by Sascha Steinbiss: use stable sorting order and remove timestamps from documentation.
  • #827419 against ldaptor by Sascha Steinbiss: sort list of input files and prevent home directory from leaking into documentation.
  • #827546 against git-buildpackage by Sascha Steinbiss: replace timestamps in documentation with changelog date; prevent temporary paths in documentation.
  • #827572 against xprobe by Reiner Herrmann: sort list of object files in static library archives.
Package reviews

36 reviews have been added, 12 have been updated and 31 have been removed in this week.

17 FTBFS bugs have been reported by Chris Lamb, Santiago Vila and Dominic Hargreaves.

diffoscope development

Satyam worked on argument completion (#826711) for diffoscope.

strip-nondeterminism development

Mattia Rizzolo uploaded strip-nondeterminism 0.019-1~bpo8+1 to jessie-backports.

reprotest development

Ceridwen filed an Intent To Package (ITP) bug for reprotest as #827293.

tests.reproducible-builds.org
  • Mattia Rizzolo uploaded pbuilder 0.225 to unstable, providing built-in support for eatmydata. We're planning to use it in armhf and i386 builders where we don't build in tmpfs, to increase the build speed some more.
  • Valery Young reworked the appearance of the package page, hopefully making them more intuitive and usable. In the process she changed the script generating them to use a real templating system, thus improving maintenance for the future.
  • Holger adjusted the scheduler to reschedule packages in state 'depwait' after two days instead of three.
  • Mattia added the bug title next to the bug numbers in the notes.
Misc.

This week's edition was written by Mattia Rizzolo, Reiner Herrmann, Ed Maste and Holger Levsen and reviewed by a bunch of Reproducible builds folks on IRC.

Categories: Elsewhere
