For years, we have been using and recommending memcached as the caching layer for Drupal sites, and we wrote several articles on it, for example: configuring Drupal with multiple bins in memcached.
Memcached has the advantage of replacing core caching (which uses the database) with memory caching. It still allows modules that have hook_boot() and hook_exit() to work, unlike external cache layers such as Varnish.
However, memcached has its limitations: it is by definition transient, so rebooting wipes out the cache, and a high-traffic server can struggle while the cache warms up again. It is also entirely memory resident, so caching more items requires more RAM, which is not suitable for small servers.
The following is a detailed guide to getting Redis installed and configured for your server. It assumes that you are running Ubuntu Server 14.04, or the equivalent Debian release.

Installing Redis
First, download the Redis Drupal module. You do not need to enable any Redis modules in Drupal; the settings below point to the module's include files directly.
Then, install the Redis server itself. On Debian/Ubuntu you can do the following (on CentOS/RedHat, you should use yum instead):

aptitude install redis-server
Then, install PHP's Redis integration. With this package, you do not need to compile from source, or anything like that, as mentioned in the Redis module's README.txt file:

aptitude install php5-redis
Restart PHP, so it loads the Redis integration layer.

This assumes you are using PHP FPM:

service php5-fpm restart

If you are using PHP as an Apache module, then you need to restart it as follows:

service apache2 restart

Configuring Redis
Then, in your settings.php file, you should replace the section for memcache, which would be as follows:

$conf['cache_backends'] = './sites/all/modules/contrib/memcache/memcache.inc';
$conf['cache_default_class'] = 'MemCacheDrupal';
$conf['memcache_servers'] = array('127.0.0.1:11211' => 'default');
$conf['memcache_key_prefix'] = 'site1';
And replace it with the following configuration lines:

// Redis settings
$conf['redis_client_interface'] = 'PhpRedis';
$conf['redis_client_host'] = '127.0.0.1';
$conf['lock_inc'] = 'sites/all/modules/contrib/redis/redis.lock.inc';
$conf['path_inc'] = 'sites/all/modules/contrib/redis/redis.path.inc';
$conf['cache_backends'] = 'sites/all/modules/contrib/redis/redis.autoload.inc';
$conf['cache_default_class'] = 'Redis_Cache';
// For multisite, you must use a unique prefix for each site
$conf['cache_prefix'] = 'site1';

Cleaning Up
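One caveat worth noting here: the cache_form bin holds in-progress form state, which must not be evicted by a volatile cache. A common precaution (a suggestion on our part, not a required part of the Redis module's setup) is to pin that bin to the database:

```
// Keep cache_form in persistent database storage; losing entries from
// this bin invalidates forms that users are in the middle of submitting.
$conf['cache_class_cache_form'] = 'DrupalDatabaseCache';
```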
Once you do that, caching will start using Redis. Memcached is no longer needed, so you should stop the daemon:

service memcached stop
And you should purge memcached as well:

aptitude purge memcached
And that is all there is to it.

Changing Redis Configuration
You can then review the /etc/redis/redis.conf file to see if you should tweak parameters more, such as changing maxmemory to limit it to a certain amount, as follows:

maxmemory 256mb
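When maxmemory is set, it is also worth deciding what Redis should do once that limit is reached. For a pure cache, an LRU eviction policy is the usual choice; the snippet below is a suggestion, not a required setting:

```
# Evict the least recently used keys (across all keys) when the
# memory limit is reached, which suits cache-only workloads.
maxmemory 256mb
maxmemory-policy allkeys-lru
```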
More below on this specific value.

Checking That Redis Is Working
To check that Redis is working, you can inspect that keys are being cached. For this, you can use the redis-cli tool. It can be used interactively: you get a prompt, type commands in it, and results are returned. Or you can pass a specific command as an argument to redis-cli.
For example, this command filters on a specific cache bin, the cache_bootstrap one:

$ redis-cli
127.0.0.1:6379> keys *cache_boot*
Or you can type it as:

$ redis-cli keys "*cache_boot*"
In either case, if Drupal is caching correctly, you should see output like this:

1) "site1:cache_bootstrap:lookup_cache"
As you can see, the key structure is simple: it is composed of the following components, separated by a colon:

- Cache Prefix: the site name in a multisite environment.
- Cache Bin: the cache table name when using the default database caching in Drupal.
- Cache Key: the unique name for the cached item. For cached pages, the URL is used, with the protocol (http or https) and the host/domain name.
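As an illustration, the following shell sketch splits a key (a hypothetical example) into those three components. Only the first two colons are separators; the cache key itself may contain colons, as a URL does:

```shell
key='site1:cache_page:http://example.com/node/1'

prefix="${key%%:*}"   # text before the first colon: the cache prefix
rest="${key#*:}"      # everything after the first colon
bin="${rest%%:*}"     # text before the next colon: the cache bin
ckey="${rest#*:}"     # the remainder: the cache key (may contain colons)

echo "prefix=$prefix bin=$bin key=$ckey"
```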
You can also filter by site, using the cache_prefix:

$ redis-cli keys "*site1:cache_page*"
The output will be something like this:

1) "site1:cache_page:http://example.com/node/1"
You can also check how many items are cached in the database:

$ redis-cli dbsize
The output will be the number of items:

(integer) 20344

Flushing The Cache
If you need to clear the cache, you can do:

$ redis-cli flushall

Checking Time To Live (TTL) For A Key
You can also check how long a specific item stays in the cache, in seconds remaining:

$ redis-cli ttl site1:cache_page:http://example.com/
The output will be the number of seconds:

(integer) 586

Getting Redis Info
You can get a lot of statistics and other information about how Redis is doing, by using the info command:

$ redis-cli info
You can check the full documentation for the info command.
One of the important values to keep an eye on is used_memory_peak_human, which tells you the maximum memory that was used given your site's specifics, such as the number of items cached, the rate of caching, the size of each item, etc.:

used_memory_peak_human:256.25
You can use that value to tune the maxmemory parameter, as above.
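As a sketch, here is one way to pull that value out with standard tools. The captured output below is a made-up example; on a live server you would pipe the output of redis-cli info memory instead:

```shell
# Hypothetical captured output of: redis-cli info memory
info='used_memory:268435456
used_memory_peak:268697600
used_memory_peak_human:256.25M'

# Take the text after the colon on the used_memory_peak_human line.
peak=$(printf '%s\n' "$info" | awk -F: '/^used_memory_peak_human/ {print $2}')
echo "$peak"
```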
You can decrease the Minimum Cache Lifetime under /admin/config/development/performance so that what is cached fits in the available memory, or go the other way: allocate more memory to fit more items.

Monitoring Redis Operations In Real Time
And finally, here is a command that will show you all the operations being performed on Redis in real time. Do not try this on a high traffic site!

$ redis-cli monitor

Performance Results
Redis performance as a page cache for Drupal is quite good, with a Time To First Byte (TTFB) of ~95 to 105 milliseconds.

Alternatives To Redis and Memcached
We did fairly extensive research into Redis and Memcached alternatives with the following criteria:
- Compatible With Redis or Memcached Protocol
We wanted to use the same PHP extension and Drupal Redis (or Memcached) modules, and not have to write and test yet another caching module.
- Non-Memory Resident Storage
We wanted to reduce the memory footprint of Redis/Memcached, because they both store the entire set of key/value combinations in memory, but we still wanted acceptable performance.
The following products all claim to meet the above criteria, but none of them worked for us. They were tested on Ubuntu LTS 14.04 64-bit:

MongoDB
See our Using MongoDB article for more details.

MemcacheDB
MemcacheDB is a Memcached-compatible server which uses the excellent Berkeley DB database for storage.
This MemcacheDB presentation explains what it does in detail.
It has an Ubuntu package right in the repository, so there is no need to compile from source or manually configure it. It works flawlessly. The -N option enables the DB_TXN_NOSYNC option, which makes writes to the database asynchronous, providing a huge performance improvement.
Configuration in Drupal's settings.php is very easy: it is exactly like Memcached, with only the port number changing, from 11211 to 21201.
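In other words, assuming the memcache module configuration shown earlier, only the server line would change (a sketch):

```
$conf['memcache_servers'] = array('127.0.0.1:21201' => 'default');
```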
Alas, all is not rosy: it is not really a cache layer, since it does not expire keys/values based on time, as Memcached and Redis do.

Redis NDS
Redis-NDS is a fork of Redis 2.6, patched for NDS (Naive Disk Store).
It does compile and run, but when the line 'nds yes' is added to the configuration file, it is rejected as an invalid value. Looking briefly at the source, we also tried 'nds_enabled yes', but that was rejected as well. So we could not get it to run in NDS mode.

ARDB
ARDB is another NoSQL database that aims to be Redis protocol compatible.
We compiled this with three different storage engines: Facebook's RocksDB did not compile to begin with. Google's LevelDB compiled cleanly, and so did WiredTiger. But when trying to connect Drupal to it, Drupal hung and never came back, with both engines.

SSDB
SSDB is another NoSQL database that tries to be Redis protocol compatible.
It compiled cleanly, but had the same symptom as ARDB: Drupal hangs and never receives a reply from SSDB.
If you were able to get any of the above, or another Redis/Memcached compatible caching engine, working, please post a comment below.

Resources
- A useful article on Redis persistence. Make sure you read this in conjunction with Redis' own documentation on persistence.
- Redis documentation on memory optimization.
- The Pantheon Redis instructions are useful, even though they are specific to their hosted service.
One of these scenarios is using MongoDB as the caching layer for Drupal.
This article describes what is needed to get MongoDB working as a caching layer for your Drupal site. We assume that you have Ubuntu Server LTS 14.04 or a similar Debian-derived distro.

Download The Drupal Module
First, download the MongoDB Drupal module. You do not need to enable any MongoDB modules.

drush @live dl mongodb

Install MongoDB Server, Tools and PHP Integration
Then install MongoDB, and PHP's MongoDB integration. Note that 'mongodb' is a virtual package that installs the mongodb-server package as well as other client tools and utilities:

aptitude install php5-mongo mongodb

Restart PHP
Restart PHP, so that the MongoDB integration takes effect:

service php5-fpm restart

Configure Drupal With MongoDB
Now, edit your settings.php file, to add the following:

$conf['mongodb_connections']['default']['host'] = 'mongodb://127.0.0.1';
$conf['mongodb_connections']['default']['db'] = 'site1';
$conf['cache_backends'] = 'sites/all/modules/contrib/mongodb/mongodb_cache/mongodb_cache.inc';
$conf['cache_default_class'] = 'DrupalMongoDBCache';
Note that if you have a multisite setup, using a different 'db' for each site will prevent cache collisions.

Monitoring MongoDB
You can monitor MongoDB using the following commands:

mongotop -v
mongostat 15

Tuning MongoDB
Turn off MongoDB's journaling, since we are using MongoDB for transient cache data that can be recreated from Drupal.
Edit the file /etc/mongodb.conf and change journal= to false.

Performance Results
Quick testing on a live site showed that MongoDB performance is acceptable, but not spectacular. That is especially true when compared to memory-resident caching, such as Memcached or Redis.
For example, on the same site and server, with Redis, Time To First Byte (TTFB) is ~95 to 105 milliseconds. With MongoDB it is ~200 milliseconds, and it also goes up to ~350 milliseconds.
Still, MongoDB can be a solution in memory-constrained environments, such as smallish VPSes.
Civil Comments is a platform that brings real-world social cues to comments sections via crowd-sourced moderation and powerful community management tools. Civil Comments is the first commenting platform specifically designed to improve the way people treat each other online.
Unlike others who have thrown up their hands and accepted that the comments sections of the Internet would either be dominated by bullies and trolls, or become a moderation burden for a site's editors, the team at Civil is attempting to solve the problem with community moderation. It is an exciting new take on a widespread problem, and Chromatic is thrilled to bring Civil Comments integration to Drupal with a new contrib module.
It should be noted (and is on the project page!) that there is not currently a free version of Civil Comments. For the time being, it is only available with a subscription as Civil continues work on the platform, but from what I understand a free version is on the horizon.
It's a good time to press your advantage as a Drupal developer.
Drupal 8 has launched, and it's much easier now for Drupal developers to expose content and features on their sites via an API. The capability is built right into Drupal 8 Core. Some contrib modules are attempting to make such capabilities even better.
I had a question the other day on Twitter as to whether it's possible to run drush site aliases in parallel. Apparently I was asking the question the wrong way, as @greg_1_anderson (Drush co-maintainer) told me that Drush supports --concurrent, and apparently has for several years. In fact, it's been supported for so long, and is so underused, that they might remove it from Drush as a result :).
I recently defended my Master of Science thesis with a central theme of open source, activism and Drupal. You can grab a copy here or in the article links. I started this research project in 2007; in part, life got in the way, but also, I didn't know how to tell the story.
This video talks through how XMLRPC Page Load and HTTPRL Spider can be used to warm caches on private / authenticated sites. XMLRPC Page Load provides a callback that tricks Drupal into thinking that it's delivering a page to a certain user account. It does this by simulating page delivery but then never actually writing the output anywhere.
Drupal 8.1 will be released in April and it marks the start of a new era for Drupal.
Previously, new features were only added every few years, with the release of major Drupal versions such as 6, 7 and 8.
Now, thanks to a new release cycle, we can look forward to new Drupal features every six months. Drupal 8.1 is arriving six months after the release of 8.0, and we can expect version 8.2 in October this year.
So what's new in Drupal 8.1? Read on to find out ...
With around 767 submissions from around the globe, DrupalCon New Orleans must have set an all-time record for session submissions at any conference. We can only imagine the plight of the track chairs who helped select the final ~150 sessions. Given the numbers, we are absolutely thrilled to announce that three of our talks (including one in the higher ed summit) have been accepted for DrupalCon New Orleans.
Here is a brief overview of our talks at DrupalCon New Orleans, May 2016.
Saket and Piyuesh will talk about Service Workers on D8 and the promise of the offline web. The topic is a build-up from a series of BoFs at DrupalCon Asia where service workers were discussed passionately and pragmatically. It's not just frontend: the necessary backend infrastructure (route caching, URL config) needs to be present for service workers to be a reality, hence Saket and Piyuesh (our frontend and backend leads) decided to team up on this. You can read more about the tentative agenda in the session synopsis, and here are rumours of what to expect in the demo:
Config for urls for offline first approach
Config for urls for network first approach & fallback to serviceworker cache
Static resource caching via service workers
Since D8 has placeholders, it would be wise to cache the response of placeholders & render them using service-workers. Background sync can be used to keep pulling in fresh data for the placeholders.
Saket is back with a session on the utopia of using Drupal 8 as a backend for multiple frontend frameworks; this time it's through Polymer. Web Components have had the frontend community wanting, and what's not to desire? Reusable components, shadow DOM, templating and much more, all native to the browser. Though Web Components paint a beautiful picture, Polymer from Google delivers them as a polyfill for the restless. In this talk Saket will discuss what Drupal 8 frontend programming might look like, and the glitches we might need to resolve for a smooth future.
Drupal Campus Ambassador Program - A community Initiative:
A wildcard entry into Dries's keynote and Holly's closing-note at DrupalCon Asia, DCAP made its humble beginnings in one of India's regional events, where business folks were trying to find answers to Drupal talent problems. In the spirit of the Drupal community's do-ocracy, some of the community members figured out a WIN-WIN solution to the problem: take Drupal to colleges and evangelise it professionally. There have been numerous college workshops, but DCAP is different in that it provides longevity and support to college students, and that is exactly what Rakhi will be sharing at the Higher-ed summit -- a bit of the history, successes so far, and the brave new world Drupal will foray into through DCAP. Catch Rakhi in action on May 9 at DrupalCon New Orleans.

aurelia.bhoy Mon, 03/21/2016 - 13:39