Elsewhere

Red Crackle: Dependency Injection

Planet Drupal - Tue, 01/09/2015 - 16:06
If you are starting to learn about Drupal 8, you have probably come across the term "Dependency Injection". If you are wondering what it means, this is the post you should read. Through examples, you will learn why dependency injection is useful both for decoupling code and for effective unit testing.
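
To make the idea concrete, here is a minimal sketch of constructor injection in PHP (not taken from the post; the MailerInterface and Notifier names are invented for illustration). Instead of creating its collaborators with "new", a class receives them from outside, so production code can hand it the real service while a unit test hands it a fake:

interface MailerInterface {
  public function send($to, $message);
}

class Notifier {
  protected $mailer;

  // The mailer is injected rather than instantiated here, which decouples
  // Notifier from any concrete mailer implementation and makes it easy to
  // unit test with a stub.
  public function __construct(MailerInterface $mailer) {
    $this->mailer = $mailer;
  }

  public function notify($to, $message) {
    return $this->mailer->send($to, $message);
  }
}

Drupal 8 applies this pattern throughout core: services are declared in the service container and passed to classes as constructor arguments.
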
Categories: Elsewhere

Wuinfo: Avoid Using the node_load_multiple API Function

Planet Drupal - Tue, 01/09/2015 - 14:26

On a large website with many nodes, stop using the node_load_multiple function. It can limit the site's growth.

According to the documentation: "This function should be used whenever you need to load more than one node from the database." But I would argue that we should avoid using this function, as it opens the door to system crashes in the future.

I previously wrote a blog post, Design a Drupal website with a million nodes in mind, in which I used this function as an example. It is true that node_load_multiple improves performance, but it comes at a price. When we load thousands of nodes into memory, it exhausts the web server's memory instantly, and we get the infamous message: "Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate XYZ bytes) in ...".

This issue caught my attention while I was reading code written by one of the well-known Drupal shops. In a recently launched high-profile project, the function is used to load all the show nodes in the system. That struck me as troublesome from the start: how can we load all the nodes when we don't know how many there will be? With a small number of nodes in the system, that is fine. But in a system with over a hundred thousand nodes, it is a big problem. As time goes by, more shows are added to the system, and the utility function needs more and more memory to handle them. Physical memory limits how many show nodes can ever be added.

What is the best way to handle this situation? Avoid using the node_load_multiple function where you can. If you have to use it, limit the number of nodes it loads at a time, as sketched below.
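
A rough Drupal 7 sketch of that advice, assuming a hypothetical "show" content type: fetch only the node IDs first, then load and process the nodes in small batches instead of handing every ID to node_load_multiple() at once.

// Fetch only the IDs of published 'show' nodes (cheap compared to full loads).
$query = new EntityFieldQuery();
$result = $query->entityCondition('entity_type', 'node')
  ->entityCondition('bundle', 'show')
  ->propertyCondition('status', NODE_PUBLISHED)
  ->execute();
$nids = !empty($result['node']) ? array_keys($result['node']) : array();

// Load and process the nodes 50 at a time so memory use stays bounded.
foreach (array_chunk($nids, 50) as $batch) {
  $nodes = node_load_multiple($batch);
  foreach ($nodes as $node) {
    // Do something with $node here.
  }
  // Clear these nodes from the static entity cache so they can be freed.
  entity_get_controller('node')->resetCache($batch);
}

The batch size of 50 is arbitrary; pick whatever keeps each iteration comfortably inside the PHP memory limit.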

Categories: Elsewhere

Raphaël Hertzog: My Free Software Activities in August 2015

Planet Debian - Tue, 01/09/2015 - 13:49

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it's one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I have been paid to work 6.5 hours on Debian LTS. In that time I did the following:

  • Prepared and released DLA-301-1 fixing 2 CVEs in python-django.
  • Did one week of “LTS Frontdesk” with CVE triaging. I pushed 11 commits to the security tracker.

Apart from that, I gave a talk about Debian LTS at DebConf 15 in Heidelberg and coordinated a work session to discuss our plans for Wheezy. Have a look at the video recordings:

DebConf 15

I attended DebConf 15 with great pleasure after having missed DebConf 14 last year. While I did not do lots of work there, I participated in many discussions and I certainly came back with a renewed motivation to work on Debian. That’s always good.

For the concrete work I did during DebConf, I can only claim two schroot uploads to fix the lack of support for the new “overlay” filesystem that replaces “aufs” in the official Debian kernel, and some Distro Tracker work (fixing an issue that some people had when they were logged in via Debian’s SSO).

While the numerous discussions I had during DebConf can’t be qualified as “work”, they certainly contribute to building up work plans for the future:

As a Kali developer, I attended multiple sessions related to derivatives (notably the Debian Derivatives Panel).

I was also interested in the “Debian in the corporate IT” BoF led by Michael Meskes (Credativ’s CEO). He pointed out a number of problems that corporate users might have when they first consider using Debian, and we will try to do something about this. Expect further news and discussions on the topic.

Martin Krafft, Luca Filipozzi, and I had a discussion with the Debian Project Leader (Neil) about how to revive/transform Debian’s Partner program. Nothing is fleshed out yet, but at least the process initiated by the former DPL (Lucas) is again moving forward.

Other Debian work

Sponsorship. I sponsored an NMU of pep8 by Daniel Stender as it was a requirement for prospector… which I also sponsored since all the required dependencies are now available in Debian. \o/

Packaging. I NMUed libxml2 2.9.2+really2.9.1+dfsg1-0.1, fixing 3 security issues and an RC bug that was breaking publican. Since there had been no upstream fix for more than 8 months, I went back to the former version 2.9.1. It’s in line with the new requirement of the release managers… a package in unstable should migrate to testing reasonably quickly; it’s not acceptable to keep it unfixed for months. With this annoying bug fixed, I could again upload a new upstream release of publican, so I prepared and uploaded 4.3.2-1. It was my first source-only upload. This release was more work than I expected and I filed no less than 3 bugs upstream (new bash-completion install path, a request to provide the sources of a minified JavaScript file, and dropping a .po file for an invalid language code).

GPG issues with smartcard. Back from DebConf, when I wanted to sign some keys, I stumbled again upon the problem which makes it impossible for me to use my two smartcards one after the other without first deleting the stubs for the private key. It’s not a new issue, but I decided that it was time to report it upstream, so I did: #2079 on bugs.gnupg.org. Some research helped me find a way to work around the problem. Later in the month, after a dist-upgrade and a reboot, I was no longer able to use my smartcard as an SSH authentication key… again it had already been reported but there was no clear analysis, so I did my own and added the results of my investigation to #795368. It looks like the culprit is pinentry-gnome3 not working when started by the gpg-agent, which is started before the DBUS session. A simple fix is to restart the gpg-agent in the session… but I have no idea yet what the proper fix should be (letting systemd manage the graphical user session and start gpg-agent would be my first answer, but that doesn’t solve the issue for users of other init systems, so it’s not satisfying).

Distro Tracker. I merged two patches from Orestis Ioannou fixing some bugs tagged newcomer. There are more such bugs (I even filed two: #797096 and #797223), so go grab them and make a first contribution to Distro Tracker like Orestis just did! I also merged a change from Christophe Siraut, who presented Distro Tracker at DebConf.

I implemented in Distro Tracker the new authentication based on SSL client certificates that was recently announced by Enrico Zini. It works nicely, and this authentication scheme is far easier to support. Good job, Enrico!

tracker.debian.org broke during DebConf: it stopped being updated with new data. I tracked this down to a problem in the archive (see #796892). Apparently Ansgar Burchardt changed the set of compression tools used on some jessie repositories, replacing bz2 with xz. He dropped the old Packages.bz2 but missed some Sources.bz2 files, which were thus stale… and APT reported a “Hashsum mismatch” on the uncompressed content.

Misc. I pushed some small improvements to my Salt formulas: schroot-formula and sbuild-formula. They will now auto-detect which overlay filesystem is available with the current kernel (previously “aufs” was hardcoded).

Thanks

See you next month for a new summary of my activities.


Categories: Elsewhere

Bits from Debian: New Debian Developers and Maintainers (July and August 2015)

Planet Debian - Tue, 01/09/2015 - 13:45

The following contributors got their Debian Developer accounts in the last two months:

  • Gianfranco Costamagna (locutusofborg)
  • Graham Inggs (ginggs)
  • Ximin Luo (infinity0)
  • Christian Kastner (ckk)
  • Tianon Gravi (tianon)
  • Iain R. Learmonth (irl)
  • Laura Arjona Reina (larjona)

The following contributors were added as Debian Maintainers in the last two months:

  • Senthil Kumaran
  • Riley Baird
  • Robie Basak
  • Alex Muntada
  • Johan Van de Wauw
  • Benjamin Barenblat
  • Paul Novotny
  • Jose Luis Rivero
  • Chris Knadle
  • Lennart Weller

Congratulations!

Categories: Elsewhere

Lunar: Reproducible builds: week 18 in Stretch cycle

Planet Debian - Tue, 01/09/2015 - 13:11

What happened in the reproducible builds effort this week:

Toolchain fixes
  • Bdale Garbee uploaded tar/1.28-1 which includes the --clamp-mtime option. Patch by Lunar.
  • Aurélien Jarno uploaded glibc/2.21-0experimental1 which will fix the issue where locales-all did not behave exactly like locales despite having it in the Provides field.
  • Lunar rebased the pu/reproducible_builds branch for dpkg on top of the released 1.18.2. This made visible an issue with udebs and automatically generated debug packages.

Packages fixed

The following 70 packages became reproducible due to changes in their build dependencies: activemq-activeio, async-http-client, classworlds, clirr, compress-lzf, dbus-c++, felix-bundlerepository, felix-framework, felix-gogo-command, felix-gogo-runtime, felix-gogo-shell, felix-main, felix-shell-tui, felix-shell, findbugs-bcel, gco, gdebi, gecode, geronimo-ejb-3.2-spec, git-repair, gmetric4j, gs-collections, hawtbuf, hawtdispatch, jack-tools, jackson-dataformat-cbor, jackson-dataformat-yaml, jackson-module-jaxb-annotations, jmxetric, json-simple, kryo-serializers, lhapdf, libccrtp, libclaw, libcommoncpp2, libftdi1, libjboss-marshalling-java, libmimic, libphysfs, libxstream-java, limereg, maven-debian-helper, maven-filtering, maven-invoker, mochiweb, mongo-java-driver, mqtt-client, netty-3.9, openhft-chronicle-queue, openhft-compiler, openhft-lang, pavucontrol, plexus-ant-factory, plexus-archiver, plexus-bsh-factory, plexus-cdc, plexus-classworlds2, plexus-component-metadata, plexus-container-default, plexus-io, pytone, scolasync, sisu-ioc, snappy-java, spatial4j-0.4, tika, treeline, wss4j, xtalk, zshdb.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues but not all of them:

Patches submitted which have not made their way to the archive yet:

  • #797027 on zyne by Chris Lamb: switch to pybuild to get rid of .pyc files.
  • #797180 on python-doit by Chris Lamb: sort output when creating completion script for bash and zsh.
  • #797211 on apt-dater by Chris Lamb: fix implementation of SOURCE_DATE_EPOCH.
  • #797215 on getdns by Chris Lamb: fix call to dpkg-parsechangelog in debian/rules.
  • #797254 on splint by Chris Lamb: support SOURCE_DATE_EPOCH for version string.
  • #797296 on shiro by Chris Lamb: remove username from build string.
  • #797408 on splitpatch by Reiner Herrmann: use SOURCE_DATE_EPOCH to set manpage date.
  • #797410 on eigenbase-farrago by Reiner Herrmann: sets the comment style to scm-safe which tells ResourceGen that no timestamps should be included.
  • #797415 on apparmor by Reiner Herrmann: sorting the list of capabilities with the locale set to C.
  • #797419 on resiprocate by Reiner Herrmann: set the embedded hostname to a static value.
  • #797427 on jam by Reiner Herrmann: sorting with the locale set to C.
  • #797430 on ii-esu by Reiner Herrmann: sort source list using C locale.
  • #797431 on tatan by Reiner Herrmann: sort source list using C locale.

Chris Lamb also noticed that binaries shipped with libsilo-bin did not work.

Documentation update

Chris Lamb and Ximin Luo assembled a proper specification for SOURCE_DATE_EPOCH in the hope of convincing more upstreams to adopt it. Thanks to Holger, it is published under a non-Debian domain name.

Lunar documented the easiest way to solve issues with file ordering and timestamps in tarballs that came with tar/1.28-1.

Some examples on how to use SOURCE_DATE_EPOCH have been improved to support systems without GNU date.

reproducible.debian.net

armhf is finally being tested, which also means the remote building of Debian packages finally works! This paves the way to perform the tests on even more architectures and doing variations on CPU and date. Some packages even produce the same binary Arch:all packages on different architectures (1, 2). (h01ger)

Tests for FreeBSD are finally running. (h01ger)

As the gcc5 transition seems to have cooled off, we again schedule sid more often than testing on amd64. (h01ger)

disorderfs has been built and installed on all build nodes (amd64 and armhf). One issue related to permissions for root and unprivileged users needs to be solved before disorderfs can be used on reproducible.debian.net. (h01ger)

strip-nondeterminism

Version 0.011-1 has been released on August 29th. The new version updates dh_strip_nondeterminism to match recent changes in debhelper. (Andrew Ayer)

disorderfs

disorderfs, the new FUSE filesystem to ease testing of filesystem-related variations, is now almost ready to be used. Version 0.2.0 adds support for extended attributes. Since then, Andrew Ayer has also added support for reversing directory entries instead of shuffling them, and for adding arbitrary padding to the number of blocks used by files.

Package reviews

142 reviews have been removed, 48 added and 259 updated this week.

Santiago Vila renamed the not_using_dh_builddeb issue into varying_mtimes_in_data_tar_gz_or_control_tar_gz to align better with other tag names.

New issue identified this week: random_order_in_python_doit_completion.

Misc.

h01ger gave a talk at FrOSCon on August 23rd. Recordings are already online.

These reports are being reviewed and enhanced every week by many people hanging out on #debian-reproducible. Huge thanks!

Categories: Elsewhere

InternetDevels: Why and how to build a social shopping project with Drupal: case study

Planet Drupal - Tue, 01/09/2015 - 13:11

Discover the post on social shopping projects in Drupal with some of the best secrets of ecommerce website development for you ;)

Read more
Categories: Elsewhere

Russ Allbery: Review: Kanban

Planet Debian - Tue, 01/09/2015 - 06:06

Review: Kanban, by David J. Anderson

Publisher: Blue Hole
Copyright: 2010
ISBN: 0-9845214-0-2
Format: Trade paperback
Pages: 240

Another belated review, this time of a borrowed book. Which I can now finally return! A delay in the review of this book might be a feature if I had actually used it for, as the subtitle puts it, successful evolutionary change in my technology business. Sadly, I haven't, so it's just late.

So, my background: I've done a lot of variations of traditional project management for IT projects (both development and more operational ones), both as a participant and as a project manager. (Although I've never done the latter as a full-time job, and have no desire to do so.) A while back at Stanford, my team adopted Agile, specifically Scrum, so I did a fair bit of research about Scrum including a couple of training courses. Since then, at Dropbox, I've used a few different variations on a vaguely Agile-inspired planning process, although it's not particularly consistent with any one system.

I've been hearing about Kanban for a while and have friends who swear by it, but I only had a vague idea of how it worked. That seemed like a good excuse to read a book.

And Anderson's book is a good one if, like me, you're looking for a basic introduction. It opens with a basic description and definition, talks about motivation and the expected benefits, and then provides a detailed description of Kanban as a system. The tone is on the theory side, written using the terminology of (I presume) management theory and operations management, areas about which I know almost nothing, but the terminology wasn't so heavy as to make the book hard to read. Anderson goes into lots of detail, and I thought he did a good job of distinguishing between basic principles, optional techniques, and variations that may be appropriate for particular environments.

If you're also not familiar, the basic concept of Kanban is to organize work using an in-progress queue. It eschews time-bounded planning entirely in favor of staging work at the start of a sequence of queues and letting the people working on each queue pull from the previous queue when they're ready to take on a new task. As you might guess from that layout, Kanban was originally invented for assembly-line manufacturing (at Toyota). That was one of the problems that I had with it, or at least the presentation in this book: most of my software development doesn't involve finishing one part of something and handing it off to someone else, which made it hard to identify with the pipeline model. Anderson has clearly spent a lot of time working with large-scale programming shops, including outsourced development firms, with dedicated QA and operations roles. This is not at all how Silicon Valley agile development works, so parts of this book felt like missives from a foreign country.

That said, the key point of Kanban is not the assembly line but the work limit. One of the defining characteristics of Kanban, at least as Anderson presents it, is that one does not try to estimate work and tile it based on estimates, not even to the extent that Scrum and other Agile methodologies do within the sprint. Instead, each person takes as long as they take on the things they're working on, and pulls a new task when they have free capacity. The system instead puts a limit on how many things they can have in progress at a time. The problem of pipeline stalls is dealt with both via continuous improvement and "swarming" of a problem to unblock the line (since other teams may not be able to work until the block is fixed), and with being careful about the sizing of work items (I'm used to calling them stories) that go in the front end.

Predictability, which Scrum uses story sizing and team velocity analysis to try to achieve, is basically statistical. One uses a small number of buckets of sizes of stories, and the whole pipeline will finish some number of work items per unit time, with a variance. The promise made to clients and other teams is that some percentage of the work items will be finished within some time frame from when they enter the system. Most importantly, these are all descriptive properties, determined statistically after the fact, rather than planned properties worked out through story sizing and extensive team discussion. If you, like me, are pretty thoroughly sick of two-hour sprint planning meetings and endless sizing exercises, this is rather appealing.
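
As a toy illustration of that statistical view (the numbers below are invented, not from the book), the promise "N% of work items finish within D days" is just a percentile computed after the fact over the lead times of completed items:

// Lead times, in days, of recently completed work items (made-up data).
$lead_times = array(2, 3, 3, 4, 5, 5, 6, 8, 9, 14);

// Return the lead time below which $percent of the observations fall.
function lead_time_percentile(array $days, $percent) {
  sort($days);
  $index = (int) ceil($percent / 100 * count($days)) - 1;
  return $days[max(0, $index)];
}

// "85% of work items are finished within this many days of entering the system."
echo lead_time_percentile($lead_times, 85) . "\n"; // prints 9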

My problem with most work planning systems is that I think they overplan and put too much weight on our ability to estimate how long work will take. Kanban is very appealing when viewed through that lens: it gives up on things we're bad at in favor of simple measurement and building a system with enough slack that it can handle work of various different sizes. As mentioned at the top, I haven't had a chance to try it (and I'm not sure it's a good fit with the inter-group planning methods in use at my current workplace), but I came away from this book wanting to do so.

If, like me, your experience is mostly with small combined teams or individual programming work, Anderson's examples may seem rather large, rather formal, and bureaucratic. But this is a solid introduction and well worth reading if your only experience with Agile is sprint planning, story writing and sizing, and fast iterations.

Rating: 7 out of 10

Categories: Elsewhere

Norbert Preining: PiwigoPress release 2.31

Planet Debian - Tue, 01/09/2015 - 02:36

I just pushed a new release of PiwigoPress (main page, WordPress plugin dir) to the WordPress servers. This release incorporates new features for the sidebar widget, and better interoperability with some Piwigo galleries.

The new features are:

  • Selection of images: Up to now, images for the widget were selected at random. The current version allows selecting images either at random (the default, as before) or in ascending or descending order by various criteria (uploaded, availability time, id, name, etc). With this change it is now possible to always display the most recent image(s) from a gallery.
  • Interoperability: Some Piwigo galleries don’t have thumbnail-sized representatives. For these galleries the widget was broken and didn’t display any image. We now check for either square or thumbnail representatives.

That’s all, enjoy, and leave your wishlist items and complaints at the issue tracker of the piwigopress project on GitHub.

Categories: Elsewhere

Junichi Uekawa: I've been writing an ELF parser for fun using C++ templates to see how much I can simplify.

Planet Debian - Mon, 31/08/2015 - 23:36
I've been writing an ELF parser for fun using C++ templates to see how much I can simplify it. I've been reading the bionic loader code enough these days that I already know what it would look like in C, and I gradually converted it into C++, but it's nevertheless fun to have a pet project that kind of grows.

Categories: Elsewhere

Red Crackle: Free Drupal 8 Tutorials – An Exhaustive List

Planet Drupal - Mon, 31/08/2015 - 21:34
If you are starting to learn Drupal 8, you are probably overwhelmed by the number of blog posts that offer free tutorials on different aspects of Drupal 8. The only way to find all these tutorials is to search online. In this post, we have created an exhaustive list of the free resources online for mastering Drupal 8, organized by categories. Use these links as a reference when starting on your next Drupal 8 learning expedition.
Categories: Elsewhere

Four Kitchens: REST Easy Part 5: Everybody Get Together

Planet Drupal - Mon, 31/08/2015 - 20:59

In this week’s REST Easy tutorial, we tackle the process of adding entity references to your API endpoints. Entity references create relationships between two separate data structures in Drupal. Exposing that link within your API is critical to providing a comprehensive content model.

Categories: Elsewhere

Acquia Developer Center Blog: Helping the Drupal Community with Drupal 8 Training

Planet Drupal - Mon, 31/08/2015 - 20:24
Kent Gale

Training Acquia employees how to use Drupal 8 had two purposes. First, there was the obvious need for our company – one that specializes in building and delivering a digital experience using Drupal – to get everyone up to speed on the new version.

But we also felt our training sessions had a larger purpose: to inform the larger Drupal community on the ins and outs, not to mention joys and pains, of the updated version. So as we embarked on our long training program of nearly 50 employees, we documented our progress.

Tags: acquia drupal planet
Categories: Elsewhere

Matthew Garrett: Working with the kernel keyring

Planet Debian - Mon, 31/08/2015 - 19:18
The Linux kernel keyring is effectively a mechanism to allow shoving blobs of data into the kernel and then setting access controls on them. It's convenient for a couple of reasons: the first is that these blobs are available to the kernel itself (so it can use them for things like NFSv4 authentication or module signing keys), and the second is that once they're locked down there's no way for even root to modify them.

But there's a corner case that can be somewhat confusing here, and it's one that I managed to crash into multiple times when I was implementing some code that works with this. Keys can be "possessed" by a process, and have permissions that are granted to the possessor orthogonally to any permissions granted to the user or group that owns the key. This is important because it allows for the creation of keyrings that are only visible to specific processes - if my userspace keyring manager is using the kernel keyring as a backing store for decrypted material, I don't want any arbitrary process running as me to be able to obtain those keys[1]. As described in keyrings(7), keyrings exist at the session, process and thread levels of granularity.

This is absolutely fine in the normal case, but gets confusing when you start using sudo. sudo by default doesn't create a new login session - when you're working with sudo, you're still working with key possession that's tied to the original user. This makes sense when you consider that you often want applications you run with sudo to have access to the keys that you own, but it becomes a pain when you're trying to work with keys that need to be accessible to a user no matter whether that user owns the login session or not.

I spent a while talking to David Howells about this and he explained the easiest way to handle this. If you do something like the following:
$ sudo keyctl add user testkey testdata @u
a new key will be created and added to UID 0's user keyring (indicated by @u). This is possible because the keyring defaults to 0x3f3f0000 permissions, giving both the possessor and the user read/write access to the keyring. But if you then try to do something like:
$ sudo keyctl setperm 678913344 0x3f3f0000
where 678913344 is the ID of the key we created in the previous command, you'll get permission denied. This is because the default permissions on a key are 0x3f010000, meaning that the possessor has permission to do anything to the key but the user only has permission to view its attributes. The cause of this confusion is that although we have permission to write to UID 0's keyring (because the permissions are 0x3f3f0000), we don't possess it - the only permissions we have for this key are the user ones, and the default state for user permissions on new keys only gives us permission to view the attributes, not change them.

But! There's a way around this. If we instead do:
$ sudo keyctl add user testkey testdata @s
then the key is added to the current session keyring (@s). Because the session keyring belongs to us, we possess any keys within it and so we have permission to modify the permissions further. We can then do:
$ sudo keyctl setperm 678913344 0x3f3f0000
and it works. Hurrah! Except that if we log in as root, we'll be part of another session and won't be able to see that key. Boo. So, after setting the permissions, we should:
$ sudo keyctl link 678913344 @u
which ties it to UID 0's user keyring. Someone who logs in as root will then be able to see the key, as will any processes running as root via sudo. But we probably also want to remove it from the unprivileged user's session keyring, because that's readable/writable by the unprivileged user - they'd be able to revoke the key from underneath us!
$ sudo keyctl unlink 678913344 @s
will achieve this, and now the key is configured appropriately - UID 0 can read, modify and delete the key, other users can't.

This is part of our ongoing work at CoreOS to make rkt more secure. Moving the signing keys into the kernel is the first step towards rkt no longer having to trust the local writable filesystem[2]. Once keys have been enrolled the keyring can be locked down - rkt will then refuse to run any images unless they're signed with one of these keys, and even root will be unable to alter them.

[1] (obviously it should also be impossible to ptrace() my userspace keyring manager)
[2] Part of our Secure Boot work has been the integration of dm-verity into CoreOS. Once deployed this will mean that the /usr partition is cryptographically verified by the kernel at runtime, making it impossible for anybody to modify it underneath the kernel. / remains writable in order to permit local configuration and to act as a data store, and right now rkt stores its trusted keys there.

Categories: Elsewhere

Gbyte blog: Sql code filter test

Planet Drupal - Mon, 31/08/2015 - 19:03
Categories: Elsewhere

DrupalCon News: Session Spotlight: Score big with this front end talk

Planet Drupal - Mon, 31/08/2015 - 18:50

It is fitting that in football-crazed Barcelona, although we will have a plethora of tech talks, we have at least one talk that highlights the sport! Whether you’re a fan of FC Barcelona or any football team for that matter, it is pretty impressive what a team can accomplish together. And although we aren’t goalies and midfielders, Adam Onishi's (onishiweb) talk will highlight how front-enders and Drupalers can go for the gold together.

Categories: Elsewhere

Drupal core announcements: No Drupal 6 or Drupal 7 core release on Wednesday, September 2

Planet Drupal - Mon, 31/08/2015 - 17:12

The monthly Drupal core bug fix/feature release window is scheduled for this Wednesday. However, there have not been enough changes to the development version since the last bug fix/feature release to warrant a new release, so there will be no Drupal core release on that date. (I had hoped to get a release done, but scheduling issues plus the work involved in putting out a large security release in August made it impossible.) A Drupal 7 bug fix/feature release during the October window is likely instead.

Upcoming release windows include:

  • Wednesday, September 16 (security release window)
  • Wednesday, October 7 (bug fix/feature release window)

For more information on Drupal core release windows, see the documentation on release timing and security releases, and the discussion that led to this policy being implemented.

Categories: Elsewhere

ThinkShout: Video Tutorial - "Introduction to RedHen CRM"

Planet Drupal - Mon, 31/08/2015 - 17:00

We recently teamed up with Drupalize.me to produce a brand new (free!) video tutorial: "Introduction to RedHen CRM."

In this introductory video, you'll tour RedHen CRM's central features, get a detailed walkthrough of the setup process, and learn a few of RedHen's cool tricks, like duplicate management (a pet feature of mine). This is a great resource for anyone who is looking for a step-by-step guide to setting up and configuring RedHen, or just wants to learn more about it!

Categories: Elsewhere

Annertech: How to Integrate your Drupal Website with CiviCRM

Planet Drupal - Mon, 31/08/2015 - 14:54
How to Integrate your Drupal Website with CiviCRM

Continuing in our series of integrating CRMs with Drupal, we're now going to take a look at CiviCRM, an open-source, web-based CRM aimed at charities and non-profits which integrates closely with Drupal, Joomla and WordPress.

Categories: Elsewhere

OSTraining: Let's Welcome GoDaddy Into Our Open Source Communities

Planet Drupal - Mon, 31/08/2015 - 14:29

This week, we published the first post about Drupal on GoDaddy.com: "What you need to know about Drupal 8".

GoDaddy is a really fascinating company, but most of you probably didn't expect them to be interested in Drupal.

You probably know some of the GoDaddy story.

You've seen the commercials with half-naked women. You may know about Bob Parsons, their ex-CEO, who was a public relations genius ... until he wasn't. Parsons built an empire using Super Bowl commercials, NASCAR sponsorship and risqué advertising to sell hosting and domain names. But in later years, Parsons made several major PR gaffes. And it didn't help that GoDaddy also had a reputation for poor-quality hosting. If you've been to a tech conference, you've probably heard speakers making jokes at GoDaddy's expense.
Categories: Elsewhere

Mike Crittenden: How Drupal 7 Works, Part 1: The Request

Planet Drupal - Mon, 31/08/2015 - 11:53

This is a chapter out of my in-progress book, Drupal Deconstructed. You can read it online for free, download it as a PDF/ePUB/MOBI, or contribute to it on GitHub.

Have you ever wondered how Drupal does what it does? Good, me too. In this series of posts, I'm going to explain what Drupal is doing behind the scenes to perform its magic.

In Part 1, we'll keep it fairly high level and walk through the path a request takes as it moves through Drupal. In later parts, we'll take deeper dives into individual pieces of this process.

Step 0: Some background information

For this example, let's pretend that George, a user of your site, wants to visit your About Us page, which lives at http://oursite.com/about-us.

Let's also pretend that this page is a node (specifically, the node with nid of 1) with a URL alias of about-us.

And to keep things simple, we'll say that we're using Apache as our webserver, and there's nothing fancy sitting in front of Drupal, such as a caching layer or the like.

So basically, we're talking about a plain old Drupal site on a plain old webserver.

Step 1: The request hits the server

There's some pretty hot action that happens before Drupal even sees the request. George's browser sends a request to http://oursite.com/about-us, and this thing we call the internet figures out that that should go to our server. If you're not well versed on how that part happens, you may benefit from reading this infographic on how the internet works.

Once our server has received the request, that's when the fun begins. This request will land on port 80, and Apache just so happens to be listening on that port, so Apache will immediately see the request and decide what to do with it.

Since the request is going to oursite.com, Apache can look into its configuration files to see what the root directory is for oursite.com (along with lots of other info about it which is out of scope for this post). We'll say that the root directory for our site is /var/www/oursite, which is where Drupal lives. So Apache is going to send the request there.

Step 2: The .htaccess file

But Drupal hasn't taken over just yet. Drupal ships with a .htaccess file which is a way of telling Apache some things. In fact, Drupal's .htaccess tells Apache a whole lot of things. The most important things it does are:

  1. Protects files that could contain source code from being viewable, such as .module files or .inc files, both of which contain PHP.
  2. Allows requests that point directly to files in the filesystem to pass through without touching Drupal.
  3. Redirects all other requests to Drupal's index.php file.

It also does some other fancy things such as disabling directory indexes and allowing Drupal to serve gzipped CSS and JS, but those are the biggies.
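
For reference, the logic behind items 2 and 3 boils down to a handful of rewrite directives in the .htaccess file that Drupal 7 ships (paraphrased here; the real file carries extra conditions and comments):

RewriteEngine on
# If the request does not match an existing file or directory on disk...
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
# ...hand it to index.php and let Drupal work out which page to build.
RewriteRule ^ index.php [L]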

So, in our case, Apache will see that we're looking for /about-us, and will say:

  1. Is this request trying to access a file that it shouldn't? No.
  2. Is this request directly pointing to a file in the filesystem? No.
  3. Then this request should be sent to index.php!

And away we go...

Step 3: Drupal's index.php file

Finally, we have reached Drupal, and we're looking at PHP. Drupal's index.php is super clean, and in fact only has 4 lines of code, each of which is easy to understand.

Line 1: Define DRUPAL_ROOT

define('DRUPAL_ROOT', getcwd());

This just sets a constant called DRUPAL_ROOT to be the value of the current directory that index.php is in. This constant is used all over the place in the Drupal core codebase.

In our case, this means that DRUPAL_ROOT gets set to /var/www/oursite.

Lines 2 and 3: Bootstrap Drupal

require_once DRUPAL_ROOT . '/includes/bootstrap.inc';
drupal_bootstrap(DRUPAL_BOOTSTRAP_FULL);

These lines run a full Drupal bootstrap, which basically means that they tell Drupal "Hey, grab all of the stuff you're going to need before we can get to actually handling this request."

For more information about the bootstrap process, see Part 2 of this series.
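
As a rough sketch of what "grab all of the stuff" means, DRUPAL_BOOTSTRAP_FULL is simply the last in a series of phases that drupal_bootstrap() runs in order (the authoritative list lives in includes/bootstrap.inc):

$phases = array(
  DRUPAL_BOOTSTRAP_CONFIGURATION,  // Read settings.php and initialize the environment.
  DRUPAL_BOOTSTRAP_PAGE_CACHE,     // Try to serve the page from the cache.
  DRUPAL_BOOTSTRAP_DATABASE,       // Initialize the database layer.
  DRUPAL_BOOTSTRAP_VARIABLES,      // Load persistent variables.
  DRUPAL_BOOTSTRAP_SESSION,        // Initialize session handling.
  DRUPAL_BOOTSTRAP_PAGE_HEADER,    // Send initial HTTP headers.
  DRUPAL_BOOTSTRAP_LANGUAGE,       // Determine the language of the page.
  DRUPAL_BOOTSTRAP_FULL,           // Load remaining includes and all enabled modules.
);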

Line 4: Do everything else

menu_execute_active_handler();

This simple looking function is where the heart and soul of Drupal lives. For more information about what happens in this ball of wax, visit the Menu Router chapter.

This is a chapter out of my in-progress book, Drupal Deconstructed. You can read it online for free, download it as a PDF/ePUB/MOBI, or contribute to it on GitHub.

Categories: Elsewhere
