Feed aggregator
I've been a mostly happy Thinkpad owner for almost 15 years. My first Thinkpad was a 570, followed by an X40, an X61s, and an X220. There might have been one more in there, my archives only go back a decade. Although it's lately gotten harder to buy Thinkpads at UNB as Dell gets better contracts with our purchasing people, I've persevered, mainly because I'm used to the Trackpoint, and I like the availability of hardware service manuals. Overall I've been pleased with the engineering of the X series.
Over the last few days I learned about the installation of the Superfish malware on new Lenovo systems, and Lenovo's completely inadequate response to the revelation. I don't use Windows, so this malware would not have directly affected me (unless I had the misfortune to use this system to download installation media for some GNU/Linux distribution). Nonetheless, how can I trust the firmware installed by a company that seems to value its users' security and privacy so little?
Unless Lenovo can show some sign of understanding the gravity of this mistake, and undertake not to repeat it, I'm afraid you will be joining Sony on my list of vendors I used to consider buying from. Sure, it's only a gross income loss of $500 a year or so, if you assume I'm alone in this reaction. I don't think I'm alone in being disgusted and angered by this incident.
I have been intermittently working with a drupal 7 multiple site ‘platform’ for several years, which originally emerged from a single site. I am the sole maintainer. The platform is not based on a drupal ‘multi-site’ configuration, as the shared codebase model seemed more hindrance than feature. Instead the (four) sites share a common pattern of base configuration, each cloned (direct copies of files and db) from an early version of the build. A set of custom modules is kept up to date as one git repo that is cloned into each of the sites. Similarly a base theme, and sub themes for all of the sites are also held in a single repo and cloned into each site. More recently, I created an entirely separate site to aggregate and index these sister sites (via feeds) to provide a more advanced, yet loosely coupled search facility.
The setup as described was intended to make maintaining these sites simpler, as extended functionality could be built into some sites whilst the other sites are left without the baggage of additional modules and configuration, making them more performant, and simpler to maintain. Sister sites can optionally share any new features, because the early build decisions are shared, so configuration is transferable, and custom modules/features from the shared repo can simply be turned on. I opted not to construct a single site segmented by Spaces, or equivalent pattern, because initially it was not clear how far the sites might diverge from one another.
This workflow has seemed to make a rough kind of sense until now, but I am returning to the decisions I made when creating the platform to try and get it ship-shape now that each site has largely settled into a stable pattern of usage. I want to make sure that I have not incurred an impractical amount of technical debt, and that site maintenance is transferable, should it need to be. Another consideration is the approach of Drupal 8. When this transformative version is widely used, it will be much easier to migrate simplified, minimal sites.
Why rock the boat?
Drupal is endlessly configurable. A lot of this configuration is executed through the admin interface and captured in the database. This oft criticised lack of separation between configuration and content should soon be alleviated by the adoption of Drupal 8, but for the moment (maybe until the end of 2015?) it does not seem sensible to move to D8.
The sites I have created have evolved to meet the requirements of their users. Due to reactive (and sometimes undocumented) measures taken in individual sites, similar workflow and layout objectives have sometimes been achieved in subtly different ways. Earlier in the process, these changes were captured in Drupal Features as much as was possible, but this became complex in itself.
Now that I can see the way these sites are being used, and content/design policies have emerged that structure (admin)...
Looks like I'll be speaking at LOADays again. This time around, at the suggestion of one of the organisers, I'll be speaking about the Belgian electronic ID card, for which I'm currently employed as a contractor to help maintain the end-user software. While this hasn't been officially confirmed yet, I've been hearing some positive signals from some of the organisers.
So, under the assumption that my talk will be accepted, I've started working on my slides. The intent is to explain how the eID middleware works (in general terms), how the Linux support is supposed to work, and what to do when things fail.
If my talk doesn't get rejected at the final hour, I will continue my uninterrupted "speaker at loadays" streak, which goes back to loadays' first edition...
I’m getting increasingly cynical about our largest organisations and their voting-centred approach to democracy. You vote once, for people rather than programmes, then you’re meant to leave them to it for up to three years until they stand for reelection, and in most systems their actions aren’t compared with what they said they’d do in any way.
I have this concern about Cooperatives UK too, but then its CEO publishes http://www.uk.coop/blog/ed-mayo/2015-02-18/rebooting-democracy-case-citizens-constitutional-convention and I think there may be hope for it yet. Well worth a read if you want to organise better groups.
Lately at Freelock, we've been improving our Drupal site assessment. For years we've analyzed Drupal sites built by others to identify how well they are built, what pitfalls/minefields lurk there, and where we need to be extremely careful with budget recommendations when extending functionality.
In the past couple months, we've overhauled it to include a snapshot rating of the site, to let our clients know what we think of their site in 7 crucial areas.
One of them that's often overlooked is Maintainability.
In my travels to talk about Drupal, everyone asks me about Drupal 8's performance and scalability. Modern websites are much more dynamic and interactive than 10 years ago, making it more difficult to build modern sites while also being fast. It made me realize that maybe I should write up a summary of some of the most exciting performance and scalability improvements in Drupal 8. After all, Drupal 8 will leapfrog many of its competitors in terms of how to architect and scale modern web applications. Many of these improvements benefit both small and large websites, but also allow us to build even bigger websites with Drupal.
More precise cache invalidation
One of the strategies we employ in making Drupal fast is "caching". This means we try to generate pages or page elements one time and then store them so future requests for those pages or page elements can be served faster. If an item is already cached, we can simply grab it without going through the building process again (known as a "cache hit"). Drupal stores each cache item in a "cache bin" (a database table, Memcache object, or whatever else is appropriate for the cache backend in use).
In Drupal 7 and before, when one of these cache items changes and it needs to be re-generated and re-stored (the cache gets "invalidated"), you can only delete a specific cache item, clear an entire cache bin, or use prefix-based invalidation. None of these three methods allow you to invalidate all cache items that contain data of, say, user 200. The only method that is going to suffice is clearing the entire cache bin, and this means that usually we invalidate way too much, resulting in poor cache hit ratios and wasted effort rebuilding cache items that haven't actually changed.
This problem is solved in Drupal 8 thanks to the concept of "cache tags": each cache item can have any number of cache tags. A cache tag is a compact string that describes the object being cached. Thanks to this extra metadata, we can now delete all cache items that use the user:200 cache tag, for example. This means we've deleted all the cache items we must delete, but not a single one more: optimal cache invalidation!
Example cache tags for different cache IDs.
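The effect of tag-based invalidation can be mimicked with plain files, entirely outside Drupal. This is a toy illustration only (nothing here is Drupal API): each "cache item" is a file whose first line records its tags, and invalidating a tag removes exactly the items carrying it.

```shell
# Toy illustration of tag-based invalidation; each cache item is a file
# whose first line lists its cache tags.
mkdir -p tagdemo && cd tagdemo
printf 'tags: user:200 node:5\nrendered profile block\n' > item-a
printf 'tags: user:7\nrendered other block\n' > item-b
# Invalidate every item tagged user:200 -- and nothing else:
grep -l 'user:200' item-* | xargs rm
ls    # only item-b remains
```

The point mirrors the text above: deletion is driven by what the cached data *contains*, not by guessing cache IDs or wiping the whole bin.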
And don't worry, we also made sure to expose the cache tags to reverse proxies, so that efficient and accurate invalidation can happen throughout a site's entire delivery architecture.
More precise cache variation
While accurate cache invalidation makes caching more efficient, there is more we did to improve Drupal's caching. We also make sure that cached items are optimally varied. If you vary too much, duplicate cache entries will exist with the exact same content, resulting in inefficient usage of caches (low cache hit ratios). For example, we don't want a piece of content to be cached per user if it is the same for many users. If you vary too little, users might see incorrect content as two different cache entries might collide. In other words, you don't want to vary too much nor too little.
In Drupal 7 and before, it was easy to program any cached item to vary by user, by user role, and/or by page, and this could even be configured through the UI for blocks. However, more targeted variations (such as by language, by country, or by content access permissions) were more difficult to program and not typically exposed in a configuration UI.
In Drupal 8, we introduced a Cache Context API to allow developers and site builders to express these variations and to make them automatically available in the configuration UI.
Server-side dynamic content substitution
Usually a page can be cached almost entirely except for a few dynamic elements. Often a page served to two different authenticated users looks identical except for a small "Welcome $name!" and perhaps their profile picture. In Drupal 7, this small personalization breaks the cacheability of the entire page (or rather, requires a cache context that's way too granular). Most parts of the page, like the header, the footer and certain blocks in the sidebars don't change often nor vary for each user, so why should you regenerate all those parts at every request?
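The answer is to cache the page once with a placeholder where the personal bit goes, and substitute only that bit at serve time. The idea can be mimicked in a few lines of shell; this is an illustration of the concept only, not Drupal's actual implementation:

```shell
# The expensive page is rendered and cached once, with a token where the
# per-user name belongs (placeholder syntax invented for this demo):
cat > cached_page.html <<'EOF'
<header>Site header</header>
<p>Welcome <!-- placeholder:name -->!</p>
<footer>Site footer</footer>
EOF
# At serve time, only the cheap per-user bit is rendered and substituted:
sed 's/<!-- placeholder:name -->/Alice/' cached_page.html
```

The header and footer are never re-rendered; only the substitution runs per request.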
In Drupal 8, thanks to the addition of #post_render_cache, that is no longer the case. Drupal 8 can render the entire page with some placeholder HTML for the name and profile picture. That page can then be cached. When Drupal has to serve that page to an authenticated user, it will retrieve it from the cache, and just before sending the HTML response to the client, it will substitute the placeholders with the dynamically rendered bits. This means we can avoid having to render the page over and over again, which is the expensive part, and only render those bits that need to be generated dynamically!
Client-side dynamic content substitution
Some things that Drupal has been rendering for the better part of a decade, such as the "new" and "updated" markers on comments, have always been rendered on the server. That is not ideal because these markers are different for every visitor and as a result, it makes caching pages with comments difficult.
With Drupal 8, we're very close to taking the client-side dynamic content substitution a step further, just like some of the world's largest dynamic websites do. Facebook has 1.35 billion monthly active users all requesting dynamic content, so why not learn from them?
The traditional page serving model has not kept up with the increase of highly personalized websites where different content is served to different users. In the traditional model, such as Drupal 7, the entire page is generated before it is sent to the browser: while Drupal is generating a page, the browser is idle and wasting its cycles doing nothing. When Drupal finishes generating the page and sends it to the browser, the browser kicks into action, and the web server is idle. In the case of Facebook, they use BigPipe. BigPipe delivers pages asynchronously instead; it parallelizes browser rendering and server processing. Instead of waiting for the entire page to be generated, BigPipe immediately sends a page skeleton to the client so it can start rendering that. Then the remaining content elements are requested and injected into their correct place. From the user's perspective the page is rendered progressively. The initial page content becomes visible much earlier, which improves the perceived speed of the site.
We've made significant improvements to the way Drupal 8 renders pages (presentation). By default, Drupal 8 core still implements the traditional model of assembling these pieces into a complete page in a single server-side request, but the independence of each piece and the architecture of the new rendering pipeline enable different "render strategies" to be experimented with: different methods for dynamic content assembly, such as BigPipe, Edge Side Includes, or other ideas for making the most optimal use of client, server, content delivery networks and reverse proxies. In all those examples, the idea is that we can send the primary content first so the client can start rendering that. Then we send the remaining Drupal blocks, such as the navigation menu or a 'Related articles' block, and have the browser, content delivery network or reverse proxy assemble or combine these blocks into a page.
A snapshot of the Drupal 8 render pipeline diagram that highlights where alternative render strategies can be implemented.
Some early experiments by Wim Leers in Acquia's OCTO show that we can improve performance by a factor of about 2 compared to a recent Drupal 8 development snapshot. These breakthroughs are enabled by leveraging the various improvements we made to Drupal 8.
And much more
All in all, there is a lot to look forward to in Drupal 8!
Lenovo deserve criticism. The level of incompetence involved here is so staggering that it wouldn't be a gross injustice for the company to go under as a result. But let's not pretend that this is some sort of isolated incident. As an industry, we don't care about user security. We will gladly ship products with known security failings and no plans to update them. We will produce devices that are locked down such that it's impossible for anybody else to fix our failures. We will hide behind vague denials, we will obfuscate the impact of flaws and we will deflect criticisms with announcements of new and shinier products that will make everything better.
It'd be wonderful to say that this is limited to the proprietary software industry. I would love to be able to argue that we respect users more in the free software world. But there are too many cases that demonstrate otherwise, even where we should have the opportunity to prove the benefits of open development. An obvious example is the smartphone market. Hardware vendors will frequently fail to provide timely security updates, and will cease to update devices entirely after a very short period of time. Fortunately there's a huge community of people willing to produce updated firmware. Your phone's manufacturer is never going to fix the latest OpenSSL flaw? As long as your phone can be unlocked, there's a reasonable chance that there's an updated version on the internet.
But this is let down by a kind of callous disregard for any deeper level of security. Almost every single third-party Android image is either unsigned or signed with the "test keys", a set of keys distributed with the Android source code. These keys are publicly available, and as such anybody can sign anything with them. If you configure your phone to allow you to install these images, anybody with physical access to your phone can replace your operating system. You've gained some level of security at the application level by giving up any real ability to trust your operating system.
This is symptomatic of our entire ecosystem. We're happy to tell people to disable security features in order to install third-party software. We're happy to tell people to download and build source code without providing any meaningful way to verify that it hasn't been tampered with. Install methods for popular utilities often still start "curl | sudo bash". This isn't good enough.
We can laugh at proprietary vendors engaging in dreadful security practices. We can feel smug about giving users the tools to choose their own level of security. But until we're actually making it straightforward for users to choose freedom without giving up security, we're not providing something meaningfully better - we're just providing the same shit sandwich on different bread.
(As for Lenovo going under: I don't see any way that they will, but it wouldn't upset me.)
I had the mixed pleasure of doing a partial rewrite of lintian’s reporting framework. It started as a problem with generating the graphs, which turned out to be “not enough memory”. On the plus side, I am actually quite pleased with the end result. I managed to “scope-creep” myself quite a bit and I ended up getting rid of a lot of old issues.
The major changes in summary:
- A lot of logic was moved out of harness, meaning it is now closer to becoming a simple “dumb” task scheduler. With the logic moved out into separate processes, harness now hogs vastly less of the memory that I cannot convince Perl to release to the OS. On lilburn.debian.org “vastly less” is on the order of reducing “700ish MB” to “32 MB”.
- All important metadata was moved into the “harness state-cache”, which is a simple YAML file. This means that “Lintian laboratory” is no longer a data store. This change causes a lot of very positive side effects.
- With all metadata now stored in a single file, we can now do atomic updates of the data store. That said, this change itself does not enable us to run multiple lintian instances in parallel.
- As the lintian laboratory is no longer a data store, we can now do our processing in “throw away laboratories” like the regular lintian user does. As the permanent laboratory is the primary source of failure, this removes an entire class of possible problems.
There are also some nice minor “features”:
- Packages can now be “up to date” in the generated reports. Previously, they would always be listed as “out of date” even if they were up to date. This is the only end user/website-visitor visible change in all of this (besides the graphs, which are now working again \o/).
- The size of the harness work list is no longer based on the number of changes to the archive.
- The size of the harness work list can now be changed with a command line option and is no longer hard coded to 1024. However, the “time limit” remains hard coded for now.
- The “full run” (and “clean run”) now simply marks everything “out-of-date” and processes its (new) backlog over the next (many) harness runs. Accordingly, a full-run no longer causes lintian to run 5-7 days on lilburn.d.o before getting an update to the website. Instead we now get incremental updates.
- The “harness.log” now features status updates from lintian as they happen with “processed X successfully” or “error processing Y” plus a little wall time benchmark. With this little feature I filed no less than 3 bugs against lintian – 2 of which are fixed in git. The last remains unfixed but can only be triggered in Debian stable.
- It is now possible with throw-away labs to terminate the lintian part of a reporting run early with minimal lost processing. Since the lintian-harness is regularly fed status updates from lintian, we can now mark successfully completed entries as done even if lintian does not complete its work list. Caveat: There may be minor inaccuracies in the generated report for the particular package lintian was processing when it was interrupted. This will fix itself when the package is reprocessed again.
- It is now vastly easier to collect new metadata to be used in the reports. Previously, it had to be included in the laboratory and extracted from there. Now, we just have to fit it into a YAML file. In fact, I have been considering adding the “wall time” and making a “top X slowest” page.
- It is now possible to generate the html pages with only a “state-cache” and the “lintian.log” file. Previously, it also required a populated lintian laboratory.
As you can probably tell, I am quite pleased with the end result. The reporting framework lags behind in development, since it just “sits there and takes care of itself”. With the complete lack of testing, it also suffers from the “if it is not broken, then do not fix it” paradigm (because we will not notice that we broke it until it is too late).
Of course, I managed to break the setup a couple of times in the process. However, a bonus feature of the reporting setup is that if you break it, it simply leaves an outdated report on the website.
Anyway, enjoy. :)
Filed under: Debian, Lintian
新年快樂 (Happy [Lunar] New Year)! Last week, the Drupal community converged on Bogotá, Colombia for DrupalCon Latin America 2015, and largely due to the pre– and post–DrupalCon sprints, the number of Drupal 8 critical issues has dropped to about 56! Also, all of the critical issues from the Drupal 8 core critical issues sprint at DrupalCamp NJ have been committed!
Some other highlights of the month were:
- Tobias Stöckler and Jibran Ijaz volunteered to become the Shortcut module maintainers, and Lewis Nyman volunteered to become the CSS maintainer.
- Webchick posted an excellent rundown of the remaining Drupal 8 critical issues on her personal blog.
- The functions drupal_pre_render_html(), module_invoke(), module_install(), module_uninstall(), module_implements(), and module_list() were removed.
- To improve user experience, autocomplete widgets for links and entities were added, and various pieces of user-interface text were made easier to understand.
- To improve developer experience, a \Drupal::hasContainer() function was added, Symfony session components can now be injected, and it became easier to swap session storage handlers.
- On the front-end, we made a lot of progress evicting crappy CSS, the pager component was rewritten (and now it's more intuitive for readers of right-to-left languages), and work continued moving classes from preprocess functions to templates.
- Simpletest memory limits were lowered and installer performance was improved.
- A bug causing stale field values to be kept indefinitely was fixed.
- For views with multiple languages, entity and field language selection is now unified under one straightforward setting. More work is going on to render base fields with this system.
See Help get Drupal 8 released! for updated information on the current state of the release and more information on how you can help. Webchick posted an excellent rundown of the remaining Drupal 8 critical issues on her personal blog which may also be helpful.
We're also looking for more contributors to help compile these posts. Contact xjm if you'd like to help!
Drupal 8 In Real Life
- The Oak Park Illinois Sprint is happening this weekend: Sunday, February 22nd. Everyone is welcome!
- Don't forget to submit your sessions for DrupalCamp Los Angeles before the deadline next week (February 27th)!
- Would you believe Drupal Dev Days Montpellier will already be over two months from today? Make sure to plan to come for the sprints, an amazing opportunity to learn from and work with top Drupal developers for several uninterrupted days. April 13-19.
- The MidWest Developers Summit is happening on August 12-15. More details later this year!
Do you follow Drupal Planet with devotion, or keep a close eye on the Drupal event calendar, or git pull origin 8.0.x every morning without fail before your coffee? We're looking for more contributors to help compile these posts. You could either take a few hours once every six weeks or so to put together a whole post, or help with one section more regularly. If you'd like to volunteer for helping to draft these posts, please follow the steps here!
Weblate 2.2 has been released today. It comes with improved search, user interface cleanup and various other fixes.
Full list of changes for 2.2:
- Performance improvements.
- Fulltext search on location and comments fields.
- Support for Django 1.8.
- Support for deleting comments.
- Added own SVG badge.
- Added support for Google Analytics.
- Improved handling of translation file names.
- Added support for monolingual JSON translations.
- Record component locking in a history.
- Support for editing source (template) language for monolingual translations.
- Added basic support for Gerrit.
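For the monolingual JSON support mentioned above: such a translation file is a flat mapping from message keys to translated strings, one file per language. The keys and strings below are invented purely for illustration:

```json
{
  "app.title": "My Application",
  "app.greeting": "Hello, world!"
}
```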
You can find more information about Weblate on http://weblate.org, and the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the demo password, or register your own user.
If you run a free software project which would like to use Weblate, I'm happy to help you with the setup or even host Weblate for you.
Further development of Weblate would not be possible without people providing donations, thanks to everybody who has helped so far!
PS: The roadmap for next release is just being prepared, you can influence this by expressing support for individual issues either by comments or by providing bounty for them.
Have you ever worked with a Drupal developer who seemed to always be able to fix bugs by finding patches to apply seemingly instantaneously? At the risk of being dispelled from the Alliance of Magicians, I’ll share how I do it.
View All Issues
The first step to find a patch to a contributed Drupal module is to get to that module’s issue queue. The path to the queue is always drupal.org/project/issues/[machine-name]. By default when you view a module’s issue queue only Open issues are shown. This excludes you from seeing all of the patches that have already been applied to the dev branch even if they have not been included in any stable release. When hunting for patches, be sure to look at All statuses.
You can save time finding the issue queue and showing all issues with a Chrome search engine keyword.
My Drupal search keywords.
I have an “is” Chrome keyword, so I can type “is views” into Chrome and it takes me to https://www.drupal.org/project/issues/views?status=All
First look at the most recent issue titles for something similar. If these don’t look relevant, use the Search to search within the issue queue. Using just one or two keywords in this search is typically best. Again, scan the first few issue titles, opening up anything that looks relevant in a new tab. Don’t worry about the status of the issues or the number of comments (any relevant issue may provide a link to a better issue).
Don’t attempt to read issues in their entirety (unless you are going to be chiming in to the issue, in which case for the love of god do read and understand the entire issue before doing so). Start by quickly reading the summary of the issue (which may be the original poster’s issue or a revised summary) and then scan down the page looking for links to other issues, patches, and the last updates on the issue. Open any links to review in a new tab.
Stereotyping the actors in the issue based on their apparent professionalism helps you scan faster. If there are no patches and all commenters seem confused, look for a different issue as it may be a duplicate: the inexperienced may not have found the other issue where the real action is. Make sure you can tell the difference between an amateur (inappropriate status changes, complaints, excess punctuation??, not enough or irrelevant details) and a struggling English speaker (unusual word choice or grammar, speling erors) as the latter is much more likely to have a useful point. If someone stands out in the issue as knowing their stuff (often the maintainer), spend your time focused on their comments.
If the issue seemed unhelpful, close the tab. If it may be related but you’re not sure yet, leave it open. If it looks like exactly your issue, proceed to try out the latest version of any patch attached.
Go through all your open tabs scanning them this way. If you’ve scanned them all without a clear solution, next try looking more closely at the issues you still have open, or try different search keywords (you likely have learned a few relevant words from what you’ve scanned so far).
Once you find a promising patch, to best impress your colleagues with your amazing speed you should declare victory in your chat room at this point. This will help add to the illusion that you have unusual speed at patch hunting. It helps to have some gifs ready for this occasion.
Apply a Patch
Instead of downloading a patch and then moving it to the right location on your machine or virtual machine, if you’ve already got a terminal open to the right spot it’s faster to copy the URL of the patch and use wget to download it. Don’t worry that the patch is for the dev version while you’re running the stable version; it will often apply anyway. The steps for patching Foo module are
wget https://www.drupal.org/files/issues/foo.patch
patch -p1 < foo.patch
Then you can use git diff (please be using git) to check your patch applied properly.
Note that many patches only change a few lines of code. If this is the case you may find it quicker to edit the files manually.
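The apply step can be tried end-to-end on fabricated files. The module name, file contents and patch hunk below are all made up for the demo; in real use the .patch file is the one you fetched with wget:

```shell
# Self-contained demo of applying a patch with -p1 (strips the leading
# a/ and b/ path components that drupal.org patches carry).
mkdir -p patchdemo/foo && cd patchdemo
printf 'old line\n' > foo/foo.module
cat > foo.patch <<'EOF'
--- a/foo/foo.module
+++ b/foo/foo.module
@@ -1 +1 @@
-old line
+new line
EOF
patch -p1 < foo.patch
cat foo/foo.module   # the hunk applied: the file now reads "new line"
```

If `patch` reports rejected hunks instead, the patch genuinely doesn't fit your version and it's time to look for another one.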
Test a Patch
Now, slow down. This isn’t the step to save time. You need to carefully test whether the patch fixed your problem and it’s easy to get confused when you do this (did I clear the cache? was I looking at the right environment? Did I attempt to reproduce it the same way?) which will get you completely off track.
Track a Patch
If the patch didn’t help or it broke something else, use git checkout to remove it. Read the issue the patch came from and if it still seems like the right issue, add a comment on your findings. Continue looking for patches or try another problem-solving technique (Google? IRC? Ask a colleague? Debug the code? Think of a workaround?).
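Backing a bad patch out with git checkout looks like this; a throwaway repository stands in for your module checkout, and all file names are invented:

```shell
# Demonstrate reverting patched changes with git checkout.
mkdir -p revertdemo && cd revertdemo
git init -q .
printf 'original\n' > foo.module
git add foo.module
git -c user.email=demo@example.invalid -c user.name=demo commit -qm 'baseline'
printf 'patched\n' > foo.module   # a patch that turned out to be wrong
git checkout -- foo.module        # throw the patched changes away
cat foo.module                    # back to "original"
```

`git status` afterwards should show a clean working tree, confirming nothing from the patch lingers.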
If the patch solved your problem, chime in to the issue to say so (unless several folks have already said the same). If you now understand the involved code enough to review it, give it a code review and mark its status as Needs Work or Reviewed. If you want your patch to survive an update and not be a dirty hack (hint: you do) you need to track your patch according to your organization’s procedures. I highly recommend using a Drush Patch File workflow.
Postscript: Check the Attitude
Do you get frustrated every time there’s a bug in your open source software and you need to find a patch or some other solution? You need to accept that bugs are a normal part of all software, and work with the rest of us as a giant team finding and fixing them as a part of our working process. You will be able to find solutions much more quickly when you look for them with calm confidence rather than frustration.
The Drupal 7 Interval Field module provides a simple way to create a duration or interval field on any Drupal 7 field-able entity. A common use for this might be on a content type that generally keeps track of dates. Sometimes it is easier to summarize a group of dates to a user or visitor using an interval field rather than selecting multiple dates.
An interval field is useful for keeping track of data such as:
A quick announcement: Debian has applied to the Google Summer of Code, and will also participate in Outreachy (formerly known as the Outreach Program for Women) for the Summer 2015 round! Those two mentoring programs are a great way for our project to bootstrap new ideas, give a new impulse to some old ones, and of course to welcome an outstanding team of motivated, curious, lively new people among us.
We need projects and mentors to sign up really soon (before February 27th, that’s next week), as our project list is what Google uses to evaluate our application to GSoC. Project proposals should be described on our wiki page. We have three sections:
- Coding projects with confirmed mentors are proposed to both GSoC and Outreachy applicants
- Non-Coding projects with confirmed mentors are proposed only to Outreachy applicants
- Project ideas without confirmed mentors will only happen if a mentor appears. They are kept on the wiki page until the application period starts, as we don’t want to give applicants false hopes of being picked for a project that won’t happen.
We also would LOVE to be able to welcome more Outreachy interns. So far, and thanks to our DPL, Debian has committed to fund one internship (US$6500). If we want more Outreachy interns, we need your help :).
If you, or your company, have some money to put towards an internship, please drop us a line at email@example.com and we’ll be in touch. Some of the successes of our Outreachy alumni include the localization of the Debian Installer to a new locale, improvements in the sources.debian.net service, documentation of the debbugs codebase, and a better integration of AppArmor profiles in Debian.
Thanks a lot for your help!
This tutorial covers writing a "Hello World" test for Drupal 7 using the SimpleTest framework that comes with Drupal core, and is based on the free video Learning Test Case Basics by Writing A Hello World Test.
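As a taste of what the tutorial walks through, a "Hello World" test in Drupal 7's SimpleTest framework generally takes this shape (the class, module, and path names below are placeholders, not the tutorial's exact code):

```php
<?php
/**
 * Sketch of a minimal Drupal 7 SimpleTest test case. This would live in
 * hello_world.test inside a hypothetical hello_world module, declared
 * via files[] in hello_world.info.
 */
class HelloWorldTestCase extends DrupalWebTestCase {

  public static function getInfo() {
    return array(
      'name' => 'Hello World',
      'description' => 'A first test: visit a page and check its text.',
      'group' => 'Hello World',
    );
  }

  public function setUp() {
    // Enable the module under test in the sandboxed test site.
    parent::setUp('hello_world');
  }

  public function testHelloWorldPage() {
    // Assumed: the module defines a page at the path 'hello'.
    $this->drupalGet('hello');
    $this->assertText('Hello World', 'The hello page shows the greeting.');
  }
}
```

SimpleTest spins up a throwaway Drupal site for each test case, so assertions like these never touch your real content.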
In a previous post I wrote about why you would use 'Compound fields' in your Drupal installation. A compound field can be seen as a unified field: one field that contains multiple fields. Get it? :) And now, as promised: how to build one.
To clarify, you will find below an example of how to build a module in which you define a compound field. In this example I am creating a 'Video' field for which the following two fields are required:
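The general pattern looks like this in Drupal 7's Field API (the module name and the two columns below are assumptions for illustration; a complete module also implements widget and formatter hooks such as hook_field_widget_form() and hook_field_formatter_view()):

```php
<?php
/**
 * Sketch of the two hooks that define a compound field's storage.
 */

/**
 * Implements hook_field_info().
 */
function videofield_field_info() {
  return array(
    'videofield_video' => array(
      'label' => t('Video'),
      'description' => t('Stores a video URL together with its duration.'),
      'default_widget' => 'videofield_video_widget',
      'default_formatter' => 'videofield_video_formatter',
    ),
  );
}

/**
 * Implements hook_field_schema().
 *
 * One database column per sub-field is what makes this a compound field:
 * both values are stored and loaded together as a single field item.
 */
function videofield_field_schema($field) {
  return array(
    'columns' => array(
      'url' => array(
        'type' => 'varchar',
        'length' => 255,
        'not null' => FALSE,
      ),
      'duration' => array(
        'type' => 'int',
        'unsigned' => TRUE,
        'not null' => FALSE,
      ),
    ),
  );
}
```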
Tonight I was reading up on the history of the Drupal Console. My reading started after I saw a tweet that seemed to indicate that the Drupal Console project is headed towards full command-line control of Drupal. Until recently, the Drupal Console had been described mostly as a "scaffolding tool" for Drupal 8, one that would save developers from writing boilerplate code. But Console can now enable/disable maintenance mode, which looks like a step towards full-blown site management.
This got me thinking: "don't we already have a command-line tool for managing Drupal?" Of course we do! It's Drush, the canonical tool for managing Drupal sites. I knew I couldn't be the first to notice the overlap, and knowing that the Drupal community tends to react poorly to overlap, I did some searching on the history of the two projects. There are a lot of cross-links all over the place, but I think these two posts generally sum things up:
- The first post suggesting Drupal Console overlaps and should merge with Module Builder: https://www.drupal.org/node/2298437
- The issue for using Symfony's Console component directly within Drush: https://github.com/drush-ops/drush/pull/88
It may take a while to read through the posts and their internal links, but several similarities struck me between Drush/Console and Drupal/Backdrop.
- The initial reaction was negative. Both Backdrop and Drupal Console received some negative response, claiming that the overlap was wasted work or would hurt either the project or developers.
- The purpose for forking or rewriting was based on leveraging Symfony components. I find it interesting that Backdrop was created to avoid adopting Symfony, which in turn led to major refactoring and rewriting of major systems. In this case, Drupal Console could (loosely) be considered a fork for the opposite reason. Drush's maintainers considered some Symfony underpinnings but decided (for now) that the amount of change and the syntax differences didn't really warrant implementation. Drush stayed the same (à la Backdrop) while Drupal Console adopted Symfony components (à la Drupal 8).
- Symfony's paradigms infect all code they touch. I know "infect" is a negative term, but as a proponent of the infectious GPL license, I don't necessarily think of it in a negative light. Drupal 8 didn't set out to rewrite all its subsystems, but after it adopted Symfony at the bottom-most layer, the concepts bubbled up into the higher levels of the application. It seems to me that the fundamental incompatibility between procedural programming and dependency injection leads to the inevitable rewriting of all code so that it fits the dependency-injection model.
- There are differences in compatibility. Moving from Drush to Console (if you were so inclined) will almost certainly require rewriting scripts and integrations, but Drush 7 to Drush 8 likely will be very compatible. On the opposite side, the jump from Drupal 7 to Drupal 8 requires significant rewriting, while moving from Drupal 7 to Backdrop maintains most compatibility.
- Even though there's now a split, the two projects are still communicating and, moreover, collaborating on solving problems. I think it's wonderful that the Drush and Console folks have shown no hostility. Instead you see the opposite: they're discussing ways each could borrow from the other's approach. Backdrop and Drupal collaborate in similar fashion, such as by coordinating security releases and cross-porting patches/pull-requests between the projects.
So with all these comparisons, you can see that it's not simply an analogous "A is to B as C is to D" situation, but there are a lot of similarities between reactions, purpose, and intent. As Mark Ferree said, it'll be interesting to see where both projects end up.
The traditional approach of directly styling default Drupal markup is not a scalable solution when we consider the volatility of the modern browser ecosystem. It is now necessary for front-end developers to abstract design patterns into manageable components. By using a styleguide we can automate the process of isolating and cataloguing patterns so they can be iterated and tested against independently. In this post I discuss ideas and put forward some informal rules around managing these components from within a styleguide.
We've been publishing a lot of technical blogs about Drupal 8 to educate and inspire the community. But what about the non-technical folk? How will Drupal 8 shift the way designers, content strategists and project managers plan websites? While many of the changes will not affect our day to day work, there are a few new terms and ways of thinking that can streamline the strategy process, save developers time and save clients money.
Entities: The Word of the Day
Entities are our new friend. They are easygoing and flexible. The sooner we get comfortable with the word Entity and begin using it with our teams, the sooner we can all reap the rewards of the budding relationship.
Victor Kane: Transcription of my presentation on DurableDrupal Lean UX+Dev+DevOps at DrupalCon Latin America
Transcription of the presentation at DrupalCon Latin America 2015
Standing up a DurableDrupal factory on the basis of a reusable Lean process
[Download the 18-page pdf at the bottom of the page]
Since the audio of the recording of my presentation came out at a very low volume, I want to present the transcription in the hope that my message reaches as many people as possible.
The video of the presentation itself can be found here: https://www.youtube.com/watch?v=bNbkBvtQ8Z0
The slides: http://awebfactory.com/drupalcon2015lean/#/
For each slide, I include below the link to the slide and the corresponding text from the video.
In some cases there are unavoidable corrections, or I have expanded the text for better comprehension. I have also translated important slides and added some comments to cover points of great importance that were not mentioned in the presentation for lack of time, such as Kanban and its characteristics.
As a result the article runs fairly long, but I hope it proves useful to those interested in the question of a Lean process in Drupal development.
My plan is to shortly publish a book that integrates these concepts in practice with a concrete example of developing a web application in Drupal 7 and 8.