Napoleon beat his opponents for years, despite his much smaller army. His knowledge of warfare and of his opponents' armies let him win the wars every time, and ultimately he was able to dominate Europe.

Knowledge is power > Sharing is power
The phrase "Knowledge is power" does not come out of thin air – you could also read power as influence, wealth or fame. However, in today's knowledge economy just having knowledge is not enough. It becomes powerful when you can convey that knowledge. In the Open Source community we see that the one who shares the most has the most "power". The real change agents, the core developers, get a lot done because they not only know a lot, but also share this knowledge. And that happens in many ways: by writing a blog, giving a presentation, or simply by contributing code.

Contributing code
Open Source is only good if people not only use it, but also improve it. Drupal is great software, but it has bugs – in core itself, but especially in its thousands of community modules. If we discover a bug during a project, we could fix it locally and continue with our work; our problem would be solved. However, we won't. We always make sure that the solution flows back into the community. That can be done in several ways:

Contribute a patch
Can we solve the problem? Great! We create a new issue in the issue queue of the relevant module and deliver the code change as a patch. Example by Martijn: https://www.drupal.org/node/1783678

Describe the problem
Are we unable to fix it ourselves? Then we at least create an issue and describe how it can be reproduced. This helps other developers to fix it, or to quickly recognize their own problem. Example by Dominique: https://www.drupal.org/node/907504

Start a new module
Did we write a separate piece of code that might be interesting for others? We then try to offer it as a separate project. The extra time it takes to make a piece of client code generic and configurable is not an issue, knowing that the community as a whole can now help improve and maintain the code for us. Example of my own, commissioned by the European Space Agency: https://www.drupal.org/project/commons_hashtags

Featured Drupal Provider
By sharing so much code we became one of the 4 Featured Drupal Providers in the Netherlands.

Taking equals giving
At LimoenGroen (Lime Green) everyone gets 10% community time: every other week, our employees have a full Friday to do what they think is important. They experiment with new technology, write a blog, or "open-source" customer code.
To make sure that the client agrees, we add the following boilerplate text to every quote that we write:

Drupal is developed under an open source software license. All software developed in the context of this project falls under the same license as Drupal itself: GNU General Public License, version 2 or later. The intellectual property is yours. To take full advantage of the benefits of the open source development model, we believe it is important that we have the ability to develop parts of the software generically and share them with the community (with the mention that they were developed for <CUSTOMER NAME>).

Appeal to Drupal suppliers
Taking equals giving is what I truly believe in. Therefore, I call on every Drupal supplier to include the text mentioned above in your offers. By doing so, there will soon be more to take! Who's with me?
Maybe you remember: last year I mentored a Google Summer of Code project whose aim was to replace our well-known Package Tracking System with something more modern, usable by derivatives and more easily hackable. The result of this project is a new Django-based software package called Distro Tracker.
With the help of the Debian System Administrators, it’s now set up on tracker.debian.org!
This service is also managed by the Debian QA team. It’s deployed in /srv/tracker.debian.org/ (on ticharich.debian.org, a VM) if you want to verify something on the live installation. It runs under the “qa” user (so members of the “qa-core” group can administer it).
That said, you can reproduce the setup on your workstation quite easily, just by checking out the git repository and applying this change:

--- a/distro_tracker/project/settings/local.py
+++ b/distro_tracker/project/settings/local.py
@@ -10,6 +10,7 @@ overrides on top of those type-of-installation-specific settings.
 from .defaults import INSTALLED_APPS
 from .selected import *
+from .debian import *
 ## Add your custom settings here
Speaking of contributing, the documentation includes a “Contributing” section to get you up and running, ready to do your first contribution!
When caching is not an option, Drupal sites employing the Views module may find their performance bound by it. Getting to the bottom of this issue on a number of sites, we discovered that performance benefits are to be gained in unlikely corners of Views. We published a first version of the Views Accelerator module for everyone to reap those benefits. You’re invited to give it a burl. A couple of clicks on the UI could be all it takes to put a smile on the performance dial.

From simple to more complex analysis tools
Did you ever pay attention to that spinning circle while your browser is fetching your page? While that wheel spins anti-clockwise your browser is waiting for a reply from the server to your mouse click. Then as the server response streams in, the wheel reverses direction and the browser builds up your page. Details of each and every file processed during that phase and how long it took can be found under the Network tab of your browser console.
But when it comes to improving that left-churning part, no amount of browser analytics can help you. This is when you bring out the big guns. Like XHProf, or for D8, the Symfony-based WebProfilerBundle.
And you get ready to get your hands dirty, as you may have to dig deep.
But why take the trouble to analyse all this? Why not tell your customer to throw a pile of caching technologies at the under-performing site?
Because depending on the nature of your site, caching can be ineffective and even lead to functional errors.
The reason is personalisation/customisation.
Increasingly websites recall specific details about us to give us an enhanced browsing experience tailored to our preferences. Sites remember stuff we chose before. Brands, price ranges, travel destinations. Taking advantage of GPS/WiFi technology sites know where we are when we visit them. A map may place our current location at its centre and only show nearby points of interest — rather than the whole world.
Websites are moving from off-the-rack, one-size-fits-all to bespoke.
To cache is to assemble time-consuming pages once, to then dish out copies to everyone who ordered that same page. Caching does not cater for every guest bringing their own dietary requirements to the table.
Bespoke is indigestible to caching.
That’s when you have to take caching off the menu and look for alternative ways to speed up a site. So we cooked up Views Accelerator.

Identifying server-side slow spots
Tasked with making a customer site perform quicker we booted up XHProf. The culprit of slow performance was soon identified as a map featuring hundreds to thousands of nearby points of interest, centred on the visitor’s current location.
But it wasn’t any of the map engines or their APIs (Google, Openlayers, Leaflet) that were soaking up the seconds. Neither was it the database. It was Views. A little-known corner of Views.
Those familiar with the Views UI cockpit may know the tick box Show performance statistics on the admin/structure/views/settings page. With that checked, a preview prints out Query build time, Query execute time and View render time.
It’s like the developers of the Views module themselves felt those three numbers sum up all there is to Views performance.
But there is a fourth component… and it can slow your site down more than the other three together. XHProf proved it.

The performance opportunity
Between the query-execute and view-render phases, the code passes through a post-execute stage. This is where the raw results from the database are groomed for final rendering and theming. All results go through post-execute, even when this may not be necessary....
And with that, we cue the Views Accelerator project page. Featured there is a summary of a case study showing how flicking on the module can boost Views speeds by 60%.
Views Accelerator is unconventional in its approach and is still in its infancy. Time will tell how the module matures in the community. We welcome feedback to help us improve the module.
Enable Views Accelerator on a test site. In Analysis mode it tells you how every view on every page you visit performs. Then in Accelerator mode it shows you those figures again. Hopefully the second time round those figures are a little leaner, making the user experience a little richer. If not, then your views may already be close to optimum. That’s reassuring too, isn’t it?
No gain, no pain. There is no reason not to give Views Accelerator a go.

Image taken from Time Magazine:
The $19 million Bloodhound SSC that is designed to shatter the world record on land with speeds over 1000 mph.
On 14th June I participated in OSC (Open Source Conference) 2014 Hokkaido in Sapporo, Hokkaido (sorry, openSUSE folks, OSC does not mean openSUSE Conference ;) OSC has a 10-year history in Japan, so don't blame me...)
Hokkaido is the northern island of Japan (Japan has 4 major islands: Hokkaido, Honshu, Shikoku and Kyushu); it takes 1.5 hours from Tokyo (HND-CTS), and debian-mirror.sakura.ne.jp is also located there.
As always, we ran the Debian booth together with Debian lovers, Squeeze, Woody and Jessie.
And I gave a short talk about Debian, mostly about how it is developed and distributed, and what shape Jessie was in at that time (the PDF/ODF is on my page on the Debian Wiki, as usual).
Does Cowgirl Dream of Red Swirl? from Hideki Yamane
After that, I enjoyed food, beer (sure! :) and chatting at the party.
Folks, see you in #osc15do again!
Some projects tend to abstract everything – KDE, I am looking at your developer base; see Phonon for a very misguided effort. While abstracting config files, as Elektra tries to do, looks like a laudable goal, it can't cover all of them, plus it is a maintenance nightmare.
Adding a crappy^Wbloated C++ layer in order not to prompt the user is definitely using the wrong tool at hand. It seems this year again Debian chose super boring Google Summer of Code subjects, while Linaro lets the students do cool stuff. BTW, git implements all kinds of merge strategies; that would be the first place to look at and merge into dpkg.
I have a number of USB hard disks. Like, I suppose, most everyone who reads this blog. Unlike many people who do, however, for whatever reason I decided to create LVM volumes on most of my USB hard disks. The unfortunate result is that these now contain a lot of data with a somewhat less than efficient partitioning system.
I don't really care much, but it's somewhat annoying, not least because disconnecting an LVM device isn't as easy as it used to be; originally you could just run the lvm2 init script with the stop argument, but that isn't the case anymore today. That is, you can run that, but it won't help you, because all it does, effectively, is exit 0.
So what do you do instead? This:
- First, make sure your devices aren't mounted anymore. Note: do not use lazy umount for a device that you're going to remove from your system! I've seen a few forum posts here and there from people who think it's safe to use umount -l for a device they're about to remove from their system which is still in use. It's not. It's a good way to cause data loss.
Instead, make sure your partitions are really unmounted. Use fuser -m if you need to figure out which process is still using the partition.
- Next, use vgchange -a n. This will cause LVM to deactivate any logical volumes and volume groups that aren't open any more. Note that this can't work if you haven't done the above. Also note that this doesn't cause the devices to be gone when you do things like vgs or so. They're still there, they're just not in use anymore. Skipping this step isn't recommended, though; it will make LVM unhappy, mostly because some caches are still in use.
- Remove your device from the computer. That is, disconnect the USB cable, or call nbd-client -d, or do whatever you need to make sure the PV isn't connected to your system anymore.
- Finally, run vgchange --refresh. This will cause the system to rescan all partitions, notice that the volume groups which you've just disconnected aren't there anymore, and remove them from configuration.
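The whole sequence can be sketched in a few lines of Python (the volume group name and mount point are made-up examples; the default dry_run=True only prints the commands, since running them for real needs root):

```python
#!/usr/bin/env python3
"""Sketch of the safe removal sequence for a USB disk carrying LVM.
The VG name and mount point below are made-up examples."""
import subprocess

def detach_usb_lvm(vg="usbvg", mountpoint="/mnt/usb", dry_run=True):
    steps = [
        ["umount", mountpoint],       # really unmount -- never use umount -l here
        ["vgchange", "-a", "n", vg],  # deactivate the VG once nothing is open
        # ... physically disconnect the USB cable (or nbd-client -d) here ...
        ["vgchange", "--refresh"],    # rescan so the stale VG disappears
    ]
    for cmd in steps:
        if dry_run:
            print(" ".join(cmd))      # just show what would run
        else:
            subprocess.run(cmd, check=True)

detach_usb_lvm()  # dry run: prints the commands in order
```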
Voila, your LVM volume group is no longer available, and you've not suffered data loss. Kewl.
Note: I don't know what the lvm2 init script used to do. I suspect there's another way which doesn't require the --refresh step. I don't think it matters all that much, though. This works, and is safe. That being said, comments are welcome...
1. This weekend I will apply to rejoin the Debian project, as a developer.
3. This is the end of my list.
4. I lied. This is the end of my list. Powers of two, baby.
Sometimes during website development it is necessary to transfer data from one database to another. Often it is either a migration to a newer Drupal version (from 6.x to 7.x) or a transfer of content to Drupal from another platform. The Migrate module makes a very convenient tool for importing data in such cases.
This is the second of four (OK, it was three, but there is so much good information!) weekly blog posts that encapsulate the advice, tips and must-do elements of career building in the Drupal community from the panel of experts assembled for DrupalEasy’s DrupalCon Austin session, DrupalCareer Trailhead: Embark on a Path to Success. It will be listed with other career resources for reference at the DrupalEasy Academy Career Center.
DebConf team: Second set of talks accepted; 3 days left to submit yours (Posted by Ana Guerrero Lopez)
There are only 3 days left to submit your talk. Submit yours before it’s too late. From the submissions of the last week, we have accepted a second batch of talks:
- Validation and Continuous Integration BoF
- DSA Team round table/BoF
- debdry - Debian Don’t Repeat Yourself
- Debian Contributors, one year later
- Jessie (bits from the release team)
- State of the ARM
- Debian installer and CD BoF
- OpenStack update & packaging experience sharing
- Meet the Technical Committee
- Upstream Guide BoF
If your talk is not on the list, it doesn’t mean that it is not accepted. The talks team will go through the list of talks again, and will publish the final list of talks towards the end of the month.
We’ll keep the submission of talks open after the deadline. Talks submitted after the deadline still have the possibility of being scheduled as ad-hoc talks. We’ll publish more information about this closer to the conference.
I am looking for a few core developers willing to take part in 45-60 minute user interviews next week. These interviews are part of the user research we are doing now; they will help inform the future Drupal.org redesign.
If you have time and desire to talk to us about Drupal.org, let me know!
Hi Planet Debian!
My name is Ian Donnelly. I am going into my Senior Year at Florida State University this fall but for the summer I am working on a Google Summer of Code project with Debian. My mentor is Markus Raab and I am working with the Elektra team to implement a great new feature for Debian! I am spending this summer working on a system to allow automatic conffile merging by utilizing the Elektra Framework.
Elektra is designed to provide “a universal and secure framework to store configuration parameters in a global, hierarchical key database.” For this project, conffiles will be mounted into the Elektra framework as KeySets. I have written code that performs a three-way merge on these KeySets, which is much more robust than merging individual lines. A conffile can easily be mounted into the Key database using the kdb mount command, which I detailed in a previous blog post. Elektra includes many plugins which allow all types of conffiles to be mounted into the Key database. With our newest release, we even added an Augeas plugin which provides compatibility for most of the major conffiles. Once a conffile is mounted, any changes to the Elektra KeySet it is contained in will be reflected in the actual conffile and vice versa.
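To illustrate the idea (a toy sketch, not Elektra's actual API or data model), a three-way merge over flat key/value sets works like this: for each key, keep the side that changed relative to the common base, and flag a conflict when both sides changed it differently:

```python
def three_way_merge(base, ours, theirs):
    """Merge two descendants of `base`, key by key.
    Returns (merged, conflicts) -- a toy model of merging
    configuration as keys rather than as lines of text."""
    merged, conflicts = {}, []
    for key in set(base) | set(ours) | set(theirs):
        b, o, t = base.get(key), ours.get(key), theirs.get(key)
        if o == t:            # both sides agree (including both deleting the key)
            value = o
        elif o == b:          # only "theirs" changed this key
            value = t
        elif t == b:          # only "ours" changed this key
            value = o
        else:                 # both changed it differently -> conflict
            conflicts.append(key)
            continue
        if value is not None: # None means the key was deleted
            merged[key] = value
    return merged, conflicts

# Made-up conffile keys: the user's edit and the new package default both survive.
base   = {"port": "80", "host": "localhost", "debug": "off"}
ours   = {"port": "8080", "host": "localhost", "debug": "off"}  # user's edit
theirs = {"port": "80", "host": "localhost", "debug": "on"}     # new default
merged, conflicts = three_way_merge(base, ours, theirs)
```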
So far this summer, I have written code for a three-way merge of Elektra KeySets, and we have begun to look at the next step of implementing an automatic merge in dpkg. We are currently working on identifying exactly how the merge should take place and where we need to modify dpkg. The basic idea of what will change is:
- Dpkg will need to store base conffiles somewhere on the system.
- Conffiles will need to be edited to have options ready for mounting, such as where the keys should be mounted (for a package to support the automatic merge).
- Conffiles will be mounted during package installs.
- A three-way merge will be attempted during a package upgrade (instead of showing the user a prompt).
- Dpkg will fall back to the regular prompt if a conffile can’t be merged without conflicts.
Once dpkg has been patched, we will modify some packages and thoroughly test the new merge operation. It is designed to be non-destructive and not to infringe on any of the advantages of the current system. Package maintainers would need to opt in to using this feature, to ensure that all packages get tested before it is used. However, we have designed this system to require minimal modification for maintainers to utilize it.
This project will benefit Debian by removing a large pain point for users. For experienced users, merging changes in conffiles can be a tedious process. For newer users, the current system can be downright confusing. The goal of our project is to make Debian easier to use for everybody while not losing the features that make it such a powerful platform. Please let us know what you think of the project or if you have any questions.
Ian S. Donnelly
I would like to go over the kdb mount command. The mount command is used to mount configuration files to the Elektra Key Database. Once a file is mounted, any changes to the file will be reflected in the Elektra Keys and any changes to the keys will be reflected in the configuration file. So how do we use the mount command?
kdb mount [filepath] [mountpoint] [plugin]
For instance, to mount /etc/hosts to system/hosts (using the hosts plugin):
kdb mount /etc/hosts system/hosts hosts
At this point the hosts file has been mounted. On my system, the following keys now exist under system/hosts:
Which makes sense as the only line in my hosts file is:
Now I can edit this configuration file through the Elektra API, the kdb tool, or by editing the file (like always). So if I make a change using kdb such as:
kdb set system/hosts/ian-debian 192.168.0.2
My hosts file will be updated accordingly:
So that’s how kdb mount works!
You want your website to tell your organization’s story. To do that, you need to listen to the stories your website is telling you.
I have the opportunity to work with a lot of great nonprofits, including one that supports women with breast cancer. While conducting a site audit, we noticed that they had higher mobile traffic than many of ThinkShout’s clients. It had grown from 5% of the site’s total in 2010 to 30% so far in 2014.
Digging a bit deeper, we found that many of the mobile visitors go directly to a section focusing on breast cancer in young women, including "survivor stories":
This section generated 50% of all pageviews for mobile users, as opposed to 27% of non-mobile traffic.
73% of mobile site visitors came directly to these pages through search, as opposed to 44% of non-mobile searchers; visitors coming to the site through search on their computers largely landed on the home page.
Top search terms on mobile include "breast cancer stories," “breast cancer in young women,” and “young breast cancer survivors.” Those terms are 2 to 5 times more likely to have been used by mobile visitors than by those coming to the site on their computers.
Those numbers tell a story.
The news that you have cancer must come as a shock. Particularly when you’re young, it may seem to come out of the blue. It’s entirely possible that, when you first receive this jolt, you’re alone, having just left your doctor’s office or set down the phone.
The first thing I would want is some reassurance, confirmation at this juncture in my life that I’m going to be okay. Ideally, I’d talk to a loved one, but in the moment, I might turn to my smartphone: it’s immediately available, it’s connected, it could provide some answers.
Our client had created content that could meet that need, opening opportunities not just to reassure a key audience, but to connect them to a broader community of support and thereby further their mission.
By experimenting with layouts and calls to action specifically for mobile visitors – essentially, by doing our job – we have the chance to help people in a very real way.
And that’s why we all do this work, isn’t it?
Taking the time to create data-informed personas can help you hone a strategy for your content that goes beyond assumptions and guesswork.
Once you understand HOW users arrive at your site and examine WHAT they do after they get there, you can begin to parse out WHY they’re interested in your content in the first place – and implement the tactics that will tie their motivations to your organizational goals.
So, what are they doing?
As we’ve suggested before, a good website will nudge a visitor from her desire path – the reason she visited your site to begin with – to the desired path: a series of actions that will serve your organization’s mission.
It pains me to say it, but you need to start with a content audit. If you don’t understand what you have, you can’t understand how it’s being used.
Don’t use them.
Or rather, use them in a way that serves your mission. Your website likely has hundreds or thousands of "pages", ranging from highly trafficked to sniffed at, forgotten, and abandoned. Microsoft once found that 30% of its webpages had never been visited. (That number is more impressive in real numbers: 3,000,000. Yes, three million pages essentially served no purpose at all.) More recently, an audit by the World Bank revealed that 31% of its reports had never been downloaded.
There’s always a reason a piece of content was added to your website. Somebody, at some point, thought it would be valuable. They may still have some emotional attachment to it.
Tough. Kill it.
Allyson Kapin recently pointed to a Harvard study that found that "the more complex a website is, the less appealing the website is to visitors." Microsoft discovered that removing poor quality content improved customer satisfaction. On a more basic level, psychologists have found that providing too many options can stall the decision-making process. When you go to the cereal aisle, do you want to spend time evaluating all of the possible cereal options? No, you just want the Golden Grahams. Everything else is a distraction.
When you put barriers in the way of your users, in the form of content they’re not interested in, they’re less likely to fight through to find the content they want. Yes, even if your organization thinks that content is central to your mission.
John McCrory created this handy dandy template to help you sort out what to keep and what to throw in the hopper:
If you need to run a full inventory, you might want to look into a tool like Screaming Frog or CAT (Content Analysis Tool). Or you can export reports from Google Analytics and combine it with data from social channels.
But you don’t need to care about everything. We suggest you:
1) Focus on your high-traffic content. You’ll need to decide what the cut-off should be, but in general, if a page represents less than .05% of your total pageviews, it’s probably not worth the time it takes to care about it. Your threshold may very well be higher than that.
You’ll want to analyze performance as well. Just because a page is heavily visited doesn’t mean it’s working the way you’d hoped. You may need to rewrite and reformat a lot of it. But there’s probably a reason people arrive there, and you should spend the time to figure out why.
On ThinkShout’s own site, for example, we know that there are a handful of older blog posts, mostly on technical subjects, that account for a high percentage of our site traffic. We’re not actively engaged in driving visitors to them, but they do present opportunities to encourage visitors to learn more about the work we do. For us, they’re old news. For our visitors, they may contain valuable information, and that’s a motivation we can tap.
2) Include any page in your site navigation, down two or three levels. Presumably, this content was once important to some stakeholder, somewhere. You may still cut it – hopefully, that stakeholder has already left your organization – but if it was important enough to include in your site architecture the last time around, you should try to understand why it’s there, even if it’s failing. This will help you not simply develop a strong argument to convince, say, your development director that her pet page is just getting in the way, but identify the pages that DO work and use them as exemplars to rewrite content deemed vital to the organization that may not be connecting with end users… yet.
3) Aggregate data for your primary content types. Task number one will catch that one event you did with Pharrell Williams in 2011 that still drives a lot of traffic to your site, but you probably have a certain number of content types – you can think of these as page "templates" – that handle the bulk of your content. You may have a blog, you may run events. If you’re already good at the structured content thing, you may have broken videos or media mentions into their own content types.
Kivi Leroux Miller talks about differentiating between your evergreen content (your main site pages, likely contained in your navigation structure) from your perennial and annual color (those blog posts and tweets). A lot of your content has a shelf life, so you need to make sure the wrapper keeps it as fresh as possible. Part of your content audit should examine how well the standard structure around your more temporal content is performing.
If you find that your audit still runs into hundreds of pages, Facebook’s Jonathon Colman has some great tips on how to use conditional formatting in Excel to highlight the areas that need the most attention. (Start on slide 131.)
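The high-traffic cut-off from point 1 is easy to automate once you have a pageview export; a hypothetical sketch (the page paths and counts are invented):

```python
def high_traffic_pages(pageviews, threshold=0.0005):
    """Keep pages at or above `threshold` (0.05%) of total pageviews.
    `pageviews` maps page path -> pageview count, e.g. from a
    Google Analytics export."""
    total = sum(pageviews.values())
    return {page: views for page, views in pageviews.items()
            if views / total >= threshold}

# Made-up export: only pages worth auditing survive the cut.
sample = {"/": 5000, "/blog/old-post": 40, "/about": 900, "/event-2011": 1}
keep = high_traffic_pages(sample)
```

Remember the caveat from the text: your own threshold may well be higher than 0.05%.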
Who’s doing it?
Now it’s time to tie content back to your users. You have a good idea how they arrived. By segmenting your traffic, you can study what they do next, all the way down to a page-by-page basis, if you’re feeling frisky.
A good place to start is the Google Analytics "Navigation Summary." This hidden gem of a report can be found under Behavior -> Site Content -> All Pages. In the upper left corner of the content, there are a series of tabs for “Explorer,” “Navigation Summary,” and “In-Page.” You want the middle one.
This report will tell you four things about any page on your site:
How many users landed on this page ("Entrances")
If it’s not a landing page, what their previous page was ("Previous Page Path")
How many users exited from this page ("Exits")
If they didn’t exit, the page they went to next ("Next Page Path")
Magic! But wait, there’s more. You can – and should! – use this page in conjunction with segmentation.
I don’t want to get bogged down in the details of Google’s Advanced Segments here. If you decide you want to take this on yourself, KISSMetrics has a good overview. Google itself made a slightly less helpful video.
When we begin a new project, we like to create segments that relate to the ways people arrive at a client’s site, in particular, engaged users.
You can define engagement in a number of ways: non-bounce visits, repeat visits, time on site – pretty much anything Google Analytics provides a metric for. If you have enough traffic, you can combine more than one of them to segment out a group of the highly engaged.
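In code terms, a combined "highly engaged" segment is just a conjunction of per-visit predicates. A toy sketch with made-up visit records, not the Google Analytics API:

```python
def is_engaged(visit, min_pages=2, min_seconds=60, min_prior_visits=1):
    """Toy segment: non-bounce AND time on site AND repeat visitor.
    `visit` is a made-up dict; real data would come from a GA export."""
    return (visit["pages"] >= min_pages                     # non-bounce
            and visit["seconds"] >= min_seconds             # time on site
            and visit["prior_visits"] >= min_prior_visits)  # repeat visit

visits = [
    {"pages": 1, "seconds": 5,   "prior_visits": 0},  # bounce
    {"pages": 4, "seconds": 180, "prior_visits": 3},  # highly engaged
    {"pages": 3, "seconds": 30,  "prior_visits": 2},  # too brief
]
engaged = [v for v in visits if is_engaged(v)]  # only the second visit qualifies
```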
Combine engagement with mode and place of entry, and you’ll begin to flesh out your personas – and your content audit, if you set it up to track key audience segments for each top-level page. You might also consider using the information you generate in Page Tables for your evergreen content.
This sort of analysis sets the stage for much of the work we do here at ThinkShout. Getting back to the breast cancer organization, we created an advanced segment that looked only at mobile traffic, specifically mobile traffic that didn’t bounce (i.e., visitors viewed more than one page).
Here’s what the segmentation looks like on the "Navigation Summary" report in the context of ThinkShout’s “Work” page:
We did this to answer two questions:
If a visitor navigates on a mobile device to a particular page (or group of pages), and doesn’t bounce, what do they do next?
How can we take advantage of that knowledge to try to capture the attention of more visitors?
By examining how the actions of engaged users differ from those you don’t capture, you can start to develop ideas about the calls to action, related content blocks, or other improvements to your information architecture that might make a particular group of users more likely to interact with your website – and your organization.
Next time, we’ll take a look at how you can make real improvements to your website based on the information you glean from your audience’s desire paths.
In the meantime, you can import three of ThinkShout’s commonly-used analytics segments:
You’ll likely need to edit these, based on your own URL structures, but they should be enough to get you started.
Half a year ago I wrote a blog post about various stumble blocks I had run into in my first year as a Drupal developer. I called them gotchas because they were not necessarily bugs - they might just be Drupal’s way of doing things which may confuse people new to or experienced with Drupal. Sometimes they are annoyances - they could be called paper cuts too. The post got some attention - which was great because people started sending me tips about things they ran into. Now there is time for a new round!
First I’d like to start with an update to my previous post - good news!

Outdated gotcha: Features module and taxonomy
In the comments Damien McKenna pointed out
Features v2 now uses the machine name and labels for permissions, roles and vocabularies, making exporting those much easier than it used to be.
Which is true. It was just that we were using the “Taxonomy Access Fix” module in that project, and it was not using the machine names. But that module has been updated to also use vocabulary names. Things do get better in Drupal!

New gotchas I have found

Gotcha 1: Trying to be consistent about naming? Not so fast!
I can’t believe that I didn’t stumble into this one before, I have a feeling it hits a lot of people new to Drupal: When starting a new project, you might want to be consistent about naming, and give a custom project module and theme the same name.
Well, buddy, that is asking for trouble! And different trouble every time, to confuse you even more - a search for any error message that pops up might not give you the cause as the first result in Google.
This duplicate naming problem can of course happen between your own custom modules and contrib projects you add from drupal.org.
Andreas also adds that you will run into trouble if you give your site profile the same name as your theme (and possibly also as a module).
This is documented in Drupal's documentation, which you of course have read thoroughly: https://www.drupal.org/node/143020
Tips for best practice in naming: http://drupal.stackexchange.com/questions/851/best-practice-for-avoiding...

Gotcha 2: Where was that module again?
In a direct response to the original blog post, reddit user tranam mentioned:
The biggest gotcha in Drupal, as far as I'm concerned, is having a modules folder, and a sites/all/modules folder.
Lauri mentioned this:
You're trying to update a module and it doesn't update. drush up, drush updb, disable, enable, delete, download, etc.. and it's still the same! Then a colleague points out the obvious: You have the module installed in more than one place, like /sites/all/modules and /sites/all/modules/contrib or even a submodule of another module.
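Lauri’s scenario can be checked concretely. A minimal sketch, assuming a bootstrapped Drupal 7 site (run it through drush php-script, for instance); “views” is just a hypothetical module name:

```php
<?php
// The {system} table keeps one row per module; the filename column
// shows which of the duplicate copies on disk Drupal actually
// registered. An unexpected path here means Drupal is loading a
// different copy than the one you are editing.
$result = db_query(
  "SELECT name, filename, status FROM {system} WHERE name = :name",
  array(':name' => 'views')
);
foreach ($result as $row) {
  print $row->name . ' => ' . $row->filename . ' (status ' . $row->status . ")\n";
}
```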
To find out whether this is your problem, you can run SELECT * FROM system to see the actual path that Drupal has associated with a specific module.

Gotcha 4: The pesky feature that won’t revert (multilingual sites)
So you just pushed a git commit to a server, and you’re trying to revert the feature, but it just won’t happen. You have tried the magic cache-clearing/php-fpm-reloading/cache-clearing incantation several times already, but no go.
Then you look at the feature diff, and you notice that… you were looking at the feature page in a different language than the one it was made in. Aargh!

Gotcha 5: Browser-based language detection is killed by page caching
Florian told me about this one: if you have page caching enabled, browser-based language detection will not work, due to a bug in core.
This dropbucket snippet might work for you though: http://dropbucket.org/node/728

Gotcha 6: Dragging and dropping in Panels doesn’t work
Juho V tipped me about this one - and Sampo provided the workaround:

[11.57.59] Sampo: Sulla on exposed formi tossa. Disabloi se view, draggaa toi paikalleen ja toimii.
Decrypted from Finnish, it reads: “You have an exposed form in there. Disable that view, drag it into its position and it will work.” The exposed form is what breaks the drag and drop, so you’d better just disable that display.

Gotcha 7: The Administer Content permission is powerful, but not that powerful
Mario notified me about this one:
The Administer Content permission sounds all-powerful: but if you're not user 1, you still cannot access unpublished content unless it is your own.
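One fix is the “Bypass content access control” permission (machine name bypass node access), which can also be granted in code. A hedged sketch for Drupal 7, where “editor” is a hypothetical role name:

```php
<?php
// Grant the permission behind "Bypass content access control" to an
// existing role, so its members can see everyone's unpublished content.
$role = user_role_load_by_name('editor');
if ($role) {
  user_role_grant_permissions($role->rid, array('bypass node access'));
}
```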
You also need the “Bypass content access control” permission - or you could use the “view_unpublished” module.

Gotcha 8: Multiple meanings of “vid” in the database
This has baffled me, at least, when poking around in the database trying to figure out how to solve a case. And I am not the only one: http://drupal.stackexchange.com/questions/13266/what-does-vid-mean
marcvangend at Drupal Answers writes in http://drupal.stackexchange.com/a/13269:
Unfortunately, vid can mean multiple things. That's not ideal, but I have not seen it causing problems (other than mild confusion now and then).
In the context of nodes, it means 'version id'. For every node in the node table, Drupal can save multiple versions in the node_revisions table. The version id is the unique identifier in the node_revisions table. (This is the vid you see in your query.)
In the context of taxonomy, vid means 'vocabulary id'. A vocabulary is a collection of related terms. Every vocabulary has a unique id.
In the context of the Views module, vid means 'view id'.
Gotcha 9: EntityFieldQuery runs access checks

Teemu reported this one: if you are running EntityFieldQueries from cron, you might not get the expected results, because EFQ does add its own access checks - and cron typically runs as the anonymous user.
Solution:

$query = new EntityFieldQuery();
$query->entityCondition('entity_type', 'node')
  ->fieldCondition('field_something_id', 'value', $something_id)
  ->addMetaData('account', user_load(1));
The last line makes sure the query is run as user 1 (Dries).
An alternative, apparently, is to use this tag (which was added in Drupal 7.15):

->addTag('DANGEROUS_ACCESS_CHECK_OPT_OUT')

Gotcha 10: Blocking users does not block immediately
Reigo reported this experience:
I once discovered a blocked user who had logged in before he was blocked, and he was still active! So after blocking, empty the session table.
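A targeted alternative to truncating everything - a sketch for Drupal 7 on a bootstrapped site, where the uid is hypothetical - removes only the blocked user’s sessions:

```php
<?php
// Delete the sessions of one specific user, forcing only that account
// to log in again instead of logging out everybody on the site.
$uid = 123; // Hypothetical: the uid of the freshly blocked user.
db_delete('sessions')
  ->condition('uid', $uid)
  ->execute();
```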
Note: emptying the session table will also log out ALL the site’s users - so you might think twice about that. One option is to remove only the sessions for that specific user id.

Gotcha 11: Want to disable the query pager? We’ll trip you up
If you have a query object and wish to return all items, you might try to set the limit to NULL or FALSE to disable the pager. No go:
The API documentation states:

Specify the maximum number of elements per page for this query. The default if not specified is 10 items per page. $limit: An integer specifying the number of elements per page. If passed a false value (FALSE, 0, NULL), the pager is disabled.
Did you think that a disabled pager would return all results? Sorry. As daften notes in https://api.drupal.org/comment/13964#comment-13964:
I would expect disabling the pager would mean everything is shown, instead nothing is returned.
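In other words, if you really do want all rows, skip the pager extender entirely rather than passing a false limit. A sketch for Drupal 7, where node/nid are just an example table and column:

```php
<?php
// A plain SelectQuery without the PagerDefault extender returns every
// matching row - no limit() call needed, no pager involved.
$nids = db_select('node', 'n')
  ->fields('n', array('nid'))
  ->execute()
  ->fetchCol();
```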
Gotcha :)

Gotcha 12: Views with a date filter not returning what you want
I was struggling with a view with filters for a date field that did not return what I wanted. I suppose the following conversation speaks for itself:

[15:08:17] Ilmari: ha!
[15:08:19] Ilmari: I got it
[15:08:26] Ilmari: Guess what, its the granularity
[15:08:32] Ilmari: It is set to "day"
[15:08:38] Ilmari: instead of seconds
[15:08:56] Ilmari: In the "settings" part of the filter (separate settings)
[15:09:16] Ilmari: Filter granularity => Day
[15:09:21] Ilmari: should be => Second
[15:10:36] Ilmari: Did you find "Filter granularity" under "Settings" for the filter?
[15:11:08] Bernt: I didn't find that
[15:11:09] Ilmari: Pretty annoying that the default is a day
[15:11:18] Ilmari: Content: Date - end date:value2 (yesterday) | Settings
[15:11:25] Ilmari: --> "Settings"
[15:11:32] Bernt: Oh hah there!! GRRRRRRR hitting my head in the wall
[15:11:36] Ilmari: :D
[15:11:36] Bernt: or on the desk, it is closer
[15:12:02] Ilmari: I've encountered that exact problem several times, i always forgot to check those settings..
[15:12:22] Ilmari: ..and the default "day" granularity doesn’t really make sense, usually
[15:12:56] Bernt: Thanks man
[15:13:45] Ilmari: heh
[15:13:50] Ilmari: Glad I could help this time
[15:13:50] Bernt: Well, another GOTCHA for my list
So when you are working with a date field in Views, remember to check the granularity setting of the filter. Date defaults to “day”, which doesn’t work too well if you are more interested in right now!

Gotcha 13: Site name in a feature that the install profile depends on
Florian reported this find:
Don't export the site name in a feature that's a dependency of an installation profile, or you won't be able to install.
It just so happens that the installer checks the site name to test whether Drupal has already been installed, so if you add that variable to a feature, the installer thinks the installation has already been done.
Instead, you can simply set the site name during an install task (one that runs later in the process).
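What such a task might look like - a sketch, assuming a profile named “myprofile” (a hypothetical name) and Drupal 7’s install task API:

```php
<?php
// In myprofile.profile: add an extra install task that runs after the
// regular installation steps and sets the site name there, instead of
// exporting the site_name variable in a feature.
function myprofile_install_tasks() {
  return array(
    'myprofile_set_site_name' => array(
      'display_name' => st('Set site name'),
    ),
  );
}

function myprofile_set_site_name() {
  variable_set('site_name', 'My Site');
}
```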