Elsewhere

CiviCRM Blog: Drupal Views in CiviCRM Dashlets

Planet Drupal - Fri, 12/12/2014 - 19:58

Here at Skvare, we strive to make Drupal and CiviCRM work as one to accomplish goals in a way that is simple and intuitive. Continuing our work on Drupal/CiviCRM integrations, we’ve cooked up something new for you all, and we’d like to take this opportunity to introduce it: Views in Dashlets.

What is Views in Dashlets?

Views in Dashlets is a Drupal module that allows you to create a dashlet containing a Drupal View. That’s right: in addition to CiviCRM reports, you can use the power of Drupal Views to create a customizable dashboard experience. This opens grand new opportunities to use our imagination and drive to strengthen the bond between Drupal and CiviCRM. Most of the functionality of Views is already at your fingertips, with further enhancements on the horizon.

How did we get to this point?

We had the idea of rendering a View in a dashlet, but that’s all it was: an idea. We researched extensively, trying to find out how exactly a dashlet works. Curious, I created a forum post, received a small but crucial piece of knowledge, and we took our first big step in creating this module. After that, it was a matter of developing the idea further. Thank you, totten, for the quick and helpful response.

Basic Instructions

It is very simple to create a dashlet. All you have to do is:

  1. Open/create your view and add a new “CiviCRM Dashlet” display
  2. Configure the display to your liking and save
  3. Visit www.yourdomain.com/civicrm and click “Configure Dashboard”
  4. Add your created dashlet to a column and click “Done”

And now your brand new dashlet is on your dashboard!

Why Views in Dashlets?
  • Simplicity - Using CiviCRM reports to create dashlets is a great feature of CiviCRM, but new reports require PHP and SQL coding skills. Some organizations have staff with these skills, but many do not. Many more people have the ability to use site-building techniques to create Views through the powerful Views UI.
  • Customize - Customize your view to render your CiviCRM or Drupal data just how YOU want it. Quickly add sorts, filters, fields, relationships, no-results behaviors, and rewritten field output.
  • Style - Make it pretty! Use Views’ built-in style features to wrap fields in HTML elements and add classes to fields for CSS styling.
  • Content - More than just data: place content on the dashboard. Display a node, add links to documentation, or list your latest Drupal Commerce orders. If it can be Viewed, it can be on the Dashboard.
  • Combine - Use it in combination with the CiviCRM Entity and Entity Reference modules to create rich data structure displays combining Drupal content with CiviCRM data. There are also tons of great modules that expand Views functionality, and most will work with Views in Dashlets.
  • Reuse - Use the same view for a Drupal page and a CiviCRM dashlet.
  • Much more that we have not thought of yet!

"What can I do to help?"

If you would like to join the journey of Views in Dashlets, then go ahead and create an issue. It could be a bug report or even a feature request. Views in Dashlets is a work in progress, but there is a lot you can do with it already. We would be very grateful for your feedback. This is a team effort, and our community is the number one team.

For more information, visit the project page at https://www.drupal.org/sandbox/brandonferrell/2389543

Categories: Elsewhere

Drupal Watchdog: Migrate API

Planet Drupal - Fri, 12/12/2014 - 19:29
Article

The migrate API works with plugins and stores the configuration for those plugins in a configuration entity. There are a number of plugin types offered: source, process, and destination are the most important. Source merely provides an iterator and identifiers, and most of the time the destination plugins provided by core are adequate, so this article will focus on process plugins.

Process plugins

Nothing gets into the destination unless it is specified under the top level process key in the configuration entity. Each key under process is a destination property and the value of it is a process pipeline. Each “stage” of this pipeline is a plugin which receives the output of the previous stage as input, does some transformation on it, and produces the new value of the pipeline.

There are a few plugins which indeed only use the pipeline value as input – for example, the machine_name plugin transliterates the input (presumably a human-readable name) and replaces non-alphanumeric characters with underscores. However, if that were all plugins could do, they wouldn’t be too useful. Instead, every plugin also receives the whole row and the name of the destination property currently being created.

Each stage in the process pipeline is described by an array, where the plugin key is mandatory and the rest is just the plugin configuration. For example:

process:
  vid:
    - plugin: machine_name
      source: name
    - plugin: dedupe_entity
      entity_type: taxonomy_vocabulary
      field: vid

The machine name transformation mentioned above is run on name, and then the entity deduplication plugin adds a numeric suffix, ensuring the vid field of the taxonomy_vocabulary entity is unique. That is the canonical format of the process pipeline.
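To make the plugin side concrete, here is a minimal sketch of a custom process plugin against the Drupal 8 migrate API (the module name mymodule and the plugin ID mymodule_uppercase are invented for the example). Note how transform() receives the whole row and the destination property name in addition to the pipeline value, as described above:

<?php

namespace Drupal\mymodule\Plugin\migrate\process;

use Drupal\migrate\MigrateExecutableInterface;
use Drupal\migrate\ProcessPluginBase;
use Drupal\migrate\Row;

/**
 * Uppercases the incoming pipeline value (illustrative only).
 *
 * @MigrateProcessPlugin(
 *   id = "mymodule_uppercase"
 * )
 */
class Uppercase extends ProcessPluginBase {

  /**
   * {@inheritdoc}
   */
  public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property) {
    // The whole $row and $destination_property are available here,
    // not just the previous stage's output in $value.
    return mb_strtoupper($value);
  }

}

In a migration's process section, it would then appear as just another "- plugin: mymodule_uppercase" stage.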

Categories: Elsewhere

Gregor Herrmann: GDAC 2014/12

Planet Debian - Fri, 12/12/2014 - 19:12

debian is again taking part in the OPW, & this afternoon I happened to read the backlog of the first weekly IRC meeting (in #debian-qa) between the mentors & the mentee for one of the projects. it was great to see that the participant's first patch is already merged & deployed, & that she closed her first bug report & is really getting into this debian world. – yay to great mentoring & increasing diversity!

this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

Categories: Elsewhere

Lullabot: Coding in Schools

Planet Drupal - Fri, 12/12/2014 - 17:49

In this episode, Amber Matz and her guests Eric Schneider and Matthew Tift talk about the successes and challenges of how parents and school officials worked together to get coding into the curriculum in Minnetonka Schools. Eric Schneider is the Assistant Superintendent for Instruction at Minnetonka Public Schools, and Matthew Tift is a Senior Developer at Lullabot and a Minnetonka parent.

Categories: Elsewhere

ThinkShout: Oregon Zoo Small Actions

Planet Drupal - Fri, 12/12/2014 - 17:00
Oregon Zoo: Small Actions

The Oregon Zoo in Portland approached us to develop an action portal component for their Drupal web site. The action portal is a tool that suggests real-world actions that anyone can take to help wildlife survive and flourish. A social sharing component is important for spreading these tips organically. Pun intended. Like wildflowers.

Many sites integrate social sharing, but there are a couple of things that make the Zoo's action portal different. The main difference is that by sharing an action, you are saying that you've actually done that action in the real world, and you are encouraging your friends and followers to take the same action. The aim is not just to generate site traffic, but rather to encourage people to make real change that has a tangible impact on wildlife. Also, the shared content is more personalized, since it's a combination of a single species and the action that you've taken, plus custom messaging the visitor would like to add.

The original intent was to enable visitors to share an action on several social channels: Facebook, Twitter, etc. During technical planning, it was decided that Facebook alone would be the best place to start. We would integrate directly with Facebook and track the shares internally with custom integration code interacting with Facebook's API.

When we began implementation, we spent a little more time exploring options for sharing on multiple channels, compared to Facebook only. There would be a couple of benefits of sharing directly on Facebook. Using their API would pave the way for deeper integration in the future, taking advantage of Open Graph properties as a starting point. We would have better control over messaging, and we would have complete control over how logging happens in Drupal. And I must say, the Facebook developer documentation is top notch.

But adding the ability later to share on other social networks would require additional API integration for each site. We wanted to consider paving a clearer path forward, so we looked into existing services for sharing on multiple sites. There are many: Gigya, AddThis, ShareThis, and more. For something to work for us, it would need to be free or very inexpensive, allow us to customize the shared message, and provide some statistics, mainly for a share count to display on the site. The ShareThis service ended up working best for us. When using any of these services, there is less control over how shares are logged.

We presented the client with these options along with the pros & cons of each and, ultimately, it was decided that we'd use ShareThis. Having approximate share counts was an acceptable tradeoff in exchange for the benefit of being able to share to multiple social networks.

So, back to how we actually did this...

Structurally, we started with two content types: Action (for the action we want people to take) and Animal (Species that relate to the actions). These each have mostly common field types, such as image and body text.

On the Action content type, we added an Animals entityreference field in order to make the connection between the two content types.

There are three new pages for this feature: the main landing page, the animal detail page, and the action detail page. We created an Animals view for the landing page and the action detail page, and an Actions view for the Explore by Action tab of the landing page and for the animal detail page. For the tabs on the landing page, we created a simple block using hook_block_info() and hook_block_view(), sketched below.
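Roughly, that block could look like the following Drupal 7 sketch (the module name zoo_actions, the block delta, and the tab paths are assumptions for illustration, not the project's actual code):

<?php

/**
 * Implements hook_block_info().
 */
function zoo_actions_block_info() {
  $blocks['action_portal_tabs'] = array(
    'info' => t('Action portal: landing page tabs'),
    'cache' => DRUPAL_CACHE_GLOBAL,
  );
  return $blocks;
}

/**
 * Implements hook_block_view().
 */
function zoo_actions_block_view($delta = '') {
  $block = array();
  if ($delta == 'action_portal_tabs') {
    $block['subject'] = NULL;
    // Simple tab links pointing at the two views pages (paths assumed).
    $items = array(
      l(t('Explore by Animal'), 'small-actions/animals'),
      l(t('Explore by Action'), 'small-actions/actions'),
    );
    $block['content'] = theme('item_list', array('items' => $items));
  }
  return $block;
}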

Something that's easy to miss when initially planning lists of things is how sorting should be controlled. Since an action references multiple animals, we use that order for displaying animals on the action detail page. But we were pretty limited in how to control the order of actions on an animal detail page. We needed independent sorting control between animals on action pages and actions on animal pages. We opted to stay with the native drag-and-drop sorting of entityreference fields, so we added a matching entityreference field on animals to reference actions, and added the Corresponding Entity References (CER) module to keep these references in sync with each other. Now we have native draggable sorting on both content types. There are several other methods that could have been used, such as adding a weight field, using the Draggable Views module, or using Nodequeue, but using CER with a pair of entityreference fields kept complexity to a minimum.

An essential goal of this feature is sharing an action. The requirement was to have the sharing widget appear on individual actions only when listed on an animal detail page. The shared message is a combination of elements from both content types: the image and name of the animal, plus the contents of a Sharing Message text field from the action. The URL shared is related to the action.

Message when sharing the FSC action from the Chimpanzee page:

Here's how we put that together. We start by including the global stuff for the ShareThis widget. An implementation of hook_views_pre_render() adds some javascript settings and includes the ShareThis javascript library. To add the unique things to each action, we add a new variable "sharethis_attributes" in hook_preprocess_views_view_field(). This variable contains a string of pseudo attributes: st_url="http://example.com/the-page" st_title="Example Page Title" st_image="http://example.com/image.jpg" st_summary="This is the text that will be shared." st_via="OregonZoo". We use that variable in a very specifically-named template file that takes effect for only this field in this view. The rest of the markup and classes placed in that field template came from ShareThis.

<?php print $output; ?>
<div class="sharethis-custom">
  <span class='st_sharethis_vcount' displayText='ShareThis' <?php print $sharethis_attributes; ?>></span>
</div>
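For reference, the hook side described above could be sketched roughly like this (the module name zoo_actions, the view name, the field machine name, and the ShareThis script URL are assumptions, not the Zoo's actual code):

<?php

/**
 * Implements hook_views_pre_render().
 */
function zoo_actions_views_pre_render(&$view) {
  if ($view->name == 'actions') {
    // Load the ShareThis widget library on pages that render this view.
    drupal_add_js('http://w.sharethis.com/button/buttons.js', 'external');
  }
}

/**
 * Implements hook_preprocess_views_view_field().
 */
function zoo_actions_preprocess_views_view_field(&$vars) {
  $view = $vars['view'];
  // Only act on the sharing-message field of the (assumed) actions view.
  if ($view->name != 'actions' || $vars['field']->field != 'field_sharing_message') {
    return;
  }
  // Build the pseudo attributes consumed by the field template, assuming
  // the row carries the node ID of the action being rendered.
  $node = node_load($vars['row']->nid);
  $vars['sharethis_attributes'] = format_string(
    'st_url="@url" st_title="@title" st_summary="@summary" st_via="OregonZoo"',
    array(
      '@url' => url('node/' . $node->nid, array('absolute' => TRUE)),
      '@title' => $node->title,
      '@summary' => strip_tags($vars['output']),
    )
  );
}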

All of this work (content types, fields, image styles for the image fields, views, and the handful of custom hook implementations) is bundled together in a new custom feature.

Check out the small actions pages at the Oregon Zoo site and see if there is a small action you can take that will have an impact on a wild animal you care about. There are some great tips that will help you live cleaner and sustain our irreplaceable wildlife.

Categories: Elsewhere

Stanford Web Services Blog: Adaptive Architecture: Leave Room to Evolve

Planet Drupal - Fri, 12/12/2014 - 15:55

All forward-thinking technologies share one attribute: the original designers intentionally build in opportunities for future users to innovate. It requires humility and a belief in the creativity of others. This is true for buildings, computers, networks, and other tools.

Categories: Elsewhere

Jingjie Jiang: Week1

Planet Debian - Fri, 12/12/2014 - 15:37
Down the rabbit hole

This week, my OPW period officially begins.
I am thankful and very grateful for this chance, for two reasons. One is that I get the opportunity to contribute to a beneficial, working, meaningful, real-world piece of software. The other is that I can learn a lot of experience and design philosophy from my mentors zack and matthieu. :)

This week my fix is for bug #763921. It basically makes the folder page rendering provide more information, specifically in the ls -l format. This offers information such as filetype, permissions, size, etc.

I learned some new things from “man 2 stat”, and also got more familiar (actually, more confident) with front-end stuff, namely CSS.

I also got myself familiar with Python test coverage. Next week, I will try to increase the test coverage a bit. Tests are an essential part of software: they ensure correctness and robustness, and more importantly, with tests in place we can debug the software more easily. As the Chinese saying goes, 磨刀不误砍柴工 - "sharpening the axe does not delay the cutting of firewood."

The Trello cards for next week are interesting (in case you don't know the site, it's here: http://trello.com).

Let’s see it.


Categories: Elsewhere

Daniel Leidert: Issues with Server4You vServer running Debian Stable (Wheezy)

Planet Debian - Fri, 12/12/2014 - 11:51

I recently acquired a vServer hosted by Server4You and decided to install a Debian Wheezy image. Usually I boot any device in backup mode and first install a fresh Debian copy over the provided image using debootstrap, to have a clean system. In this case I did not, and I came across a few glitches I want to talk about. So hopefully, if you are running the same system image, this saves you some time figuring out why the h*ll some things don't work as expected :)

Cron jobs not running

I installed unattended-upgrades and adjusted all configuration files to enable unattended upgrades. But I never received any mail about an update although looking at the system, I saw updates waiting. I checked with

# run-parts --list /etc/cron.daily

and apt was not listed, although /etc/cron.daily/apt was there. After spending some time figuring out what was going on, I found the rather simple cause: several scripts were missing the executable bit and thus did not run. So it seems, for whatever reason, the image authors have tampered with file permissions, and of course not by using dpkg-statoverride :( It was easy to fix the file permissions for everything under /etc/cron*, but that still leaves a very bad feeling that there are more files that have been tampered with! I'm not speaking about customizations. Those are easy to find using debsums. I'm speaking about file permissions and ownership.

Now there seems to be no easy way to check for changed permissions or ownership. The only solution I found is to get a list of all packages installed on the system, install them into a chroot environment, and get all permission and ownership information from that fresh system. Then compare the file permissions/ownership of the installed system with this list. Not fun.

init from testing / upstart on hold

Today I discovered that apt-get wanted to update the init package. Of course I was curious why unattended-upgrades hadn't already done so. Turns out, init is only in testing/unstable and essential there. I purged it, but apt-get keeps bugging me to update/install this package. I really began to wonder what is going on here, because this is a plain stable system:

  • no sources listed for backports, volatile, multimedia etc.
  • sources listed for testing and unstable
  • only packages from stable/stable-updates installed
  • APT::Default-Release set to "stable"

First I checked with aptitude:

# aptitude why init
Unable to find a reason to install init.

Ok, so why:

# apt-get dist-upgrade -u
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following NEW packages will be installed:
init
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/4674 B of archives.
After this operation, 29.7 kB of additional disk space will be used.
Do you want to continue [Y/n]?

JFTR: I see a stable system bugging me to install systemd for no obvious reason. The issue might be similar! I'm still investigating.

Now I tried to debug this:

# apt-get -o Debug::pkgProblemResolver="true" dist-upgrade -u
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Starting
Starting 2
Investigating (0) upstart [ amd64 ] < 1.6.1-1 | 1.11-5 > ( admin )
Broken upstart:amd64 Conflicts on sysvinit [ amd64 ] < none -> 2.88dsf-41+deb7u1 | 2.88dsf-58 > ( admin )
Conflicts//Breaks against version 2.88dsf-58 for sysvinit but that is not InstVer, ignoring
Considering sysvinit:amd64 5102 as a solution to upstart:amd64 10102
Added sysvinit:amd64 to the remove list
Fixing upstart:amd64 via keep of sysvinit:amd64
Done
Done
The following NEW packages will be installed:
init
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/4674 B of archives.
After this operation, 29.7 kB of additional disk space will be used.
Do you want to continue [Y/n]?

Eh, upstart?

# apt-cache policy upstart
upstart:
Installed: 1.6.1-1
Candidate: 1.6.1-1
Version table:
1.11-5 0
500 http://ftp.de.debian.org/debian/ testing/main amd64 Packages
500 http://ftp.de.debian.org/debian/ sid/main amd64 Packages
*** 1.6.1-1 0
990 http://ftp.de.debian.org/debian/ stable/main amd64 Packages
100 /var/lib/dpkg/status

# dpkg -l upstart
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-=============================-===================-===================-===============================================================
hi upstart 1.6.1-1 amd64 event-based init daemon

Ok, at least one package is on hold. This is another questionable customization, but in this case easy to fix. But I still don't understand apt-get's behaviour and how it differs from aptitude's. Can someone please enlighten me?

Customized files

This isn't really an issue, but just for completeness: several files have been customized. debsums easily shows which ones:

# debsums -ac
I don't have the original list anymore - please check yourself
Categories: Elsewhere

Gábor Hojtsy: The Drupal 8 configuration schema cheat sheet

Planet Drupal - Fri, 12/12/2014 - 11:20

After over a month of concentrated work, Drupal 8 was ready today to finally flip the switch and enforce strict configuration schema adherence in all TestBase derived tests in core. See the announcement in the core group.

If you are a Drupal 8 contrib developer and provided some configuration schema earlier (or you integrate with an existing core system like blocks, views, fields, etc.) then your tests may now fail with configuration schema errors. Unless of course all your configuration schema is correct: #highfive for you then.

Otherwise I thought you'd have questions. There is of course the existing configuration schema documentation that I helped write. However, if you are a visual person and want to get an understanding of the basics fast, I thought a cheat sheet would be a great tool. So I sat down today and produced this one in the hope it will help you all! Enjoy!

Categories: Elsewhere

EvolvisForge blog: Tip of the day: don't use --purge when cross-grading

Planet Debian - Fri, 12/12/2014 - 10:04

A surprise to see my box booting up with the default GRUB 2.x menu, followed by “cannot find a working init”.

What happened?

Well, grub:i386 and grub:x32 are distinct packages, so APT helpfully decided to purge the GRUB config. OK. Manual boot menu entry editing later, re-adding GRUB_DISABLE_SUBMENU=y and GRUB_CMDLINE_LINUX="syscall.x32=y" to /etc/default/grub, removing "quiet" again from GRUB_CMDLINE_LINUX_DEFAULT, and uncommenting GRUB_TERMINAL=console… and don't forget to run "sudo update-grub". There. This should work.

On the plus side, nvidia-driver:i386 seems to work… but not with boinc-client:x32 (why, again? I swear, its GPU detection has been driving me nuts on >¾ of all systems I installed it on, already!).

On the minus side, I now have to figure out why…

tglase@tglase:~ $ sudo ifup -v tap1
Configuring interface tap1=tap1 (inet)
run-parts --exit-on-error --verbose /etc/network/if-pre-up.d
run-parts: executing /etc/network/if-pre-up.d/bridge
run-parts: executing /etc/network/if-pre-up.d/ethtool
ip addr add 192.168.0.3/255.255.255.255 broadcast 192.168.0.3 peer 192.168.0.4 dev tap1 label tap1
Cannot find device "tap1"
Failed to bring up tap1.

… this happens. This used to work before the cktN kernels.

Categories: Elsewhere

Blair Wadman: Programmatically assign roles to users in Drupal

Planet Drupal - Fri, 12/12/2014 - 09:24

This post is part of a series of posts on making changes in Drupal programmatically rather than in the Drupal interface.

In this tutorial, we will be programmatically assigning role(s) to user(s). You would typically do this in a site deployment module in order to automate this task rather than having to manually assign the roles in the Drupal UI.
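As a taste of what the tutorial covers, a minimal Drupal 7 sketch might look like this (the role name "editor" and the function name are placeholders):

<?php

/**
 * Assigns the 'editor' role to a user, typically called from
 * hook_install() or hook_update_N() in a site deployment module.
 */
function mymodule_assign_editor_role($uid) {
  $role = user_role_load_by_name('editor');
  $account = user_load($uid);
  if ($role && $account && !isset($account->roles[$role->rid])) {
    // user_save() merges the new role into the user's existing roles.
    user_save($account, array('roles' => $account->roles + array($role->rid => $role->name)));
  }
}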

Categories: Elsewhere

Joey Hess: a brainfuck monad

Planet Debian - Fri, 12/12/2014 - 06:02

Inspired by "An ASM Monad", I've built a Haskell monad that produces brainfuck programs. The code for this monad is available on hackage, so cabal install brainfuck-monad.

Here's a simple program written using this monad. See if you can guess what it might do:

import Control.Monad.BrainFuck

demo :: String
demo = brainfuckConstants $ \constants -> do
  add 31
  forever constants $ do
    add 1
    output

Here's the brainfuck code that demo generates: >+>++>+++>++++>+++++>++++++>+++++++>++++++++>++++++++++++++++++++++++++++++++<<<<<<<<[>>>>>>>>+.<<<<<<<<]

If you feed that into a brainfuck interpreter (I'm using hsbrainfuck for my testing), you'll find that it loops forever and prints out each character, starting with space (32), in ASCIIbetical order.

The implementation is quite similar to the ASM monad. The main differences are that it builds a String, and that the BrainFuck monad keeps track of the current position of the data pointer (as brainfuck lacks any sane way to manipulate its instruction pointer).

newtype BrainFuck a = BrainFuck (DataPointer -> ([Char], DataPointer, a))

type DataPointer = Integer

-- Gets the current address of the data pointer.
addr :: BrainFuck DataPointer
addr = BrainFuck $ \loc -> ([], loc, loc)

Having the data pointer address available allows writing some useful utility functions like this one, which uses the next (brainfuck opcode >) and prev (brainfuck opcode <) instructions.

-- Moves the data pointer to a specific address.
setAddr :: Integer -> BrainFuck ()
setAddr n = do
  a <- addr
  if a > n
    then prev >> setAddr n
    else if a < n
      then next >> setAddr n
      else return ()

Of course, brainfuck is a horrible language, designed to be nearly impossible to use. Here's the code to run a loop, but it's really hard to use this to build anything useful..

-- The loop is only entered if the byte at the data pointer is not zero.
-- On entry, the loop body is run, and then it loops when
-- the byte at the data pointer is not zero.
loopUnless0 :: BrainFuck () -> BrainFuck ()
loopUnless0 a = do
  open
  a
  close

To tame brainfuck a bit, I decided to treat data addresses 0-8 as constants, which will contain the numbers 0-8. Otherwise, it's very hard to ensure that the data pointer is pointing at a nonzero number when you want to start a loop. (After all, brainfuck doesn't let you set data to some fixed value like 0 or 1!)

I wrote a little brainfuckConstants that runs a BrainFuck program with these constants set up at the beginning. It just generates the brainfuck code for a series of ASCII art fishes: >+>++>+++>++++>+++++>++++++>+++++++>++++++++>

With the fishes^Wconstants in place, it's possible to write a more useful loop. Notice how the data pointer location is saved at the beginning, and restored inside the loop body. This ensures that the provided BrainFuck action doesn't stomp on our constants.

-- Run an action in a loop, until it sets its data pointer to 0.
loop :: BrainFuck () -> BrainFuck ()
loop a = do
  here <- addr
  setAddr 1
  loopUnless0 $ do
    setAddr here
    a

I haven't bothered to make sure that the constants are really constant, but that could be done. It would just need a Control.Monad.BrainFuck.Safe module, that uses a different monad, in which incr and decr and input don't do anything when the data pointer is pointing at a constant. Or, perhaps this could be statically checked at the type level, with type level naturals. It's Haskell, we can make it safer if we want to. ;)

So, not only does this BrainFuck monad allow writing brainfuck code using crazy haskell syntax, instead of crazy brainfuck syntax, but it allows doing some higher-level programming, building up a useful(!?) library of BrainFuck combinators and using them to generate brainfuck code you'd not want to try to write by hand.

Of course, the real point is that "monad" and "brainfuck" so obviously belonged together that it would have been a crime not to write this.

Categories: Elsewhere

Bluespark Labs: 10 Challenges You Face when Designing for Locations

Planet Drupal - Fri, 12/12/2014 - 06:00

Designing a website for an organization with multiple locations is challenging. Especially when those locations have their own needs, goals, and identities. Libraries, in all shapes and sizes, face many challenges when building or redesigning their website. They need to build something that students and patrons can use for research as well as something that actually helps people interact with the library’s various locations. This becomes especially difficult when the library is spread across several buildings and departments — and the people that work at the different libraries often have very different ideas about how their library should be represented on the Web.

Building a great library website is an ambitious project with the ultimate goal of serving the students, faculty, staff, and community.

On the surface, building a library website might feel like just another web project, but when you dig into it, you see there are many, many unique challenges stemming from the unique relationship the library has with its virtual and physical spaces. In this article, I explore 10 of those challenges and some possible solutions. Like any web project, though, every situation is unique. I prefer to focus on guidelines and considerations rather than describe actual solutions.

Challenge #1 - Who is our primary audience and what is their context of use?

What's interesting about Universities (and Libraries fall victim to this, too) is that there are plenty of audiences to go around, and each of their needs must be met for the website to be considered a success. Unfortunately, having too many audiences is like having too many cooks: you end up with something bland that no one likes.

Once you know who your primary audience is, you can explore their Contexts of Use. Contexts of Use are the place, time, and situation in which they will be using the website.

  • When will they need help finding a location?

  • What in the space are they looking for? (A building? Equipment? Something else?)

  • Where are they when they need to find that location?

Sidenote: When we talk about audiences, we also often talk about their goals. Multiple audiences might have similar goals. (How many audiences for a University Library have “Conduct independent research” as a primary goal?) Contexts of Use, though, differentiate these goals across the audiences. A student who needs to conduct independent research at the library website will approach it differently than a faculty member.

So why are contexts of use important? Well, they reveal things like why people are searching for a location (meeting a group, or maybe they need a copy machine, or maybe they want a quiet place to study) and allow you to design experiences for those contexts.

Challenge #2 - Are we a library? Or do we have libraries?

University Libraries are spread all over campus. There’s almost always “The Library” — the one building that everyone thinks about when you tell them to meet you at “The Library” but that is probably just one location of many.

Not to mention the many affiliated libraries—the independent collections managed by departments—the literature collection tucked away in the basement of the Humanities building. How are these related to the library and does your audience understand that relationship? Often this is a question much bigger than a website design project—venturing into your overall strategy, but the answer (and its trickle down effects) can hinder or help your audience accomplish its goals.

And all that leads to the question: Are you a University Library—or are you the University Libraries? It’s an identity question that affects your brand, the representation of that brand online, and the freedom you give the individual libraries.

Challenge #3 - Libraries are not Buildings

Repeat after me: Libraries are not buildings. For some of your libraries, this is painfully obvious. A special collection might be tucked into the corner of a building—be considered its own department, have its own hours and staff. That library is located within a building.

For other libraries, though, it's less obvious. For example, at the UCLA Library, the Powell Library is located in the Powell Library building, but so are several special collections. You see the confusion? In the physical world, libraries have a special connection to the information they hold—that is the only place you can access that information. It goes without saying that, in the virtual world, these boundaries no longer exist. But our understanding of physicality (that, and the trend of naming a building after its current use) leads to a tendency to conflate the actual library with the building that houses it.

Where it becomes painfully obvious is when a library is spread across multiple buildings—something that can often happen in space-starved Universities. (And let’s be honest, no matter how large the University, there is always a need for more space.)

There are several key instances when confusing the building with the library will actually create project over-runs. Buildings need to be managed separately from the libraries. The buildings, like the libraries themselves, may have unique names and will definitely have physical locations.

Even more challenging, though, is that this misunderstanding can also lead to poor decision-making about how to structure content. Just like the books and journals of a library live within the physical walls of a building, some library staff might think their content should live within the virtual walls of their particular library's sitelet. This is where having well-defined personas with prioritized goals and contexts of use becomes critical to the success of the project. You can use these to explore whether people first find a specific library before beginning their search, OR whether they expect to search for relevant research from a centralized research section, OR something else entirely. (And, of course, stir in a healthy dose of usability testing to verify your hypothesis.)

Challenge #4 - Finding the right location

Here’s a fun one.

How do you know people are getting the right location?

It’s easy when you have a library called The Humanities Library and people are searching for the Humanities Library. But what happens when you have The John Doe Library of the Humanities and people refer to it as the Hum. Or some people call it the Hum and others call it the Humanities Library. It can get confusing real quick if you don’t have an option to add nicknames to your libraries (add nicknames to your buildings while you’re at it, too).

Challenge #5- When the library doesn’t matter, but the location does...

Finding a location is pretty easy when people are searching for a specific location—whether that be a library or a building. But there are plenty of contexts in which a user needs something that is location-agnostic.

Maybe they need a copier or a printer or a computer. Most libraries will have copiers spread all over campus—and some with different pricing. Keeping track of all the amenities that each location offers can be no easy task, but can go a long way to helping your audience actually get what they need from the library.

Tip: Think beyond equipment and things like wifi. Do some libraries offer study rooms? Or some places might offer a group study area where you don’t need to keep your voice down. These are all important amenities that people will be looking for. Taking note of the questions your students are asking about your locations will give you insight into the types of amenities you should be offering and how you should be categorizing your various locations.

Challenge # 6 - Is it open?

One of the biggest challenges seems like it would be the simplest. Students (and all of your audiences, really) need to know one of two things: 1) Is it open now? 2) Will it be open when I need it?

Of course, answering these questions is tricky. You've got your normal hours, your holiday hours, your end-of-the-semester hours, your summer hours—so many special circumstances, how do you manage them all, much less present them to the user in an easy-to-read format? There's really no simple answer other than to keep it as simple as possible and to conduct usability testing on sketches and prototypes with real users.

One of the useful, yet tricky, things we did with UCLA was to add “Open Now” as a search facet, so that all results returned only buildings that were open now—particularly useful for the students that needed a late night copy machine (context of use!) as well as including an “Open Now” indicator right in the location pages’ navigation bar.

Challenge #7 - Destinations within a location

“Is it open?” is not always an easy question to answer. Often, you will have destinations within a location that keep different hours from the main location. While the library will be open, the computer room might not be.

The solution we implemented for UCLA Library was to allow each location to have Destinations with their own descriptions, contact information, and hours. A future iteration of this should also include walking directions from various entrances—assuming that those entrances are clearly designated.

Challenge #8 - What’s going on at the Library?

Libraries host some great events that need to be listed on the website. There are, though, two challenges you face with events.

The first part of this challenge comes from conflating the building and the library as illustrated by this story:

Once upon a time, there was a librarian who worked at the Library of Biological Sciences. She took it upon herself to organize a lecture series with visiting scientists. The first event's popularity took her by surprise; she had reserved the largest room in her library's space, but it was still standing room only. So, for the second event, she opted to host it at the Humanities Library, which had a space twice the size of her largest one.

Challenge part 1: If the Library of Biological Sciences is hosting an event at the Humanities Library space, does the event display on the Library of Biological Sciences sitelet?

Challenge part 2: The Library of Biological Sciences is hosting an event at the Humanities Library space, should that event display on the Humanities Library sitelet?

And the answer (drumroll, please): it’s complicated. Ideally, it displays in both areas. Both libraries have an interest in promoting the event (and it should be the same content—not two different entries), though they have different reasons for doing so. The Library of Biological Sciences wants to promote an event they have spent a lot of time and resources organizing while the Humanities Library wants to let people know what’s going on in their space.

The real answer: It should be up to the sitelet moderators to determine what events they promote. They should have strong guidelines generated from Engagement and Content strategies, but the final decision comes from the autonomy you give the libraries themselves. (More on that to come.)

In the above challenge, the Library of Biological Sciences is hosting an event related to Biology in the Humanities Library space. It’s pretty clear cut as to why the two might want to promote the event. But what if the Humanities Library were to invite the author of a crime thriller that paid exquisite attention to the forensic details of the case—something that many people from the Library of Biological Sciences might be interested in? Should the event display on the Library of Biological Sciences sitelet?

The answer is not a simple one. Probably, yes, it should. Again, though, it should be up to the sitelet moderator who would be following Engagement and Content strategies.

Challenge #9 - Balancing autonomy and control

How unique are your locations? From the space they contain to the style of signage and posters used throughout the library. Maybe some of the smaller libraries don’t have a unique brand while the larger ones do. Or, perhaps, your library system has a style guide that all of your staff rigorously apply in everything they create.

Ideally you find a nice balance between command-and-control and complete anarchy. You want each location to feel part of the same family without taking away the things that make them feel unique. There are several ways to do this, including allowing each location to:

  • control the type of content they include on their homepage.

  • customize some of the visual styles on their page -- perhaps background image and other areas that can affect the basic feel of the site.

  • control the layout of their homepage.

You want to give your staff enough autonomy that they can ensure their library sitelet best represents the unique branding and style they have crafted in their space.

Challenge #10 - Consistency between the virtual and the physical

On the flip side of giving your staff autonomy, we have the need to give your audiences a consistent experience across locations and on the website. At its simplest, this means that you display the same information in the same way across sites. An event should look the same across all sites.

It goes deeper than this, though, even into the branding and labeling across your entire Library system, which is really a much larger project than the website redesign. But if you think about it (and I know librarians have!), a key part of finding your way through the physical location is the signage. As much as possible, you should create consistency between the signage in your spaces -- meaning that all libraries are consistent in the labeling and iconography.

Designing location search becomes so much easier when all libraries refer to the place with all the computers as the same thing. Keep labels the same across the different locations. If you call it a “Computer Lab” at one location, resist the urge to call it the “Computer carrels” at another location. (This might mean some overall discussions within the libraries—nothing like a joint project to surface all the inconsistencies that might be confusing your audience.)

Controlled Vocabularies are your friend—both in labels and in iconography. Further, this consistency also helps your audience understand what’s available at a location (as presented on the location pages and search results) and even navigate to those destinations within the space.

Take the UCLA Library as an example. The UCLA Library used some custom made icons to designate various destinations in their libraries, creating consistency across the various locations, the main website, and the location sitelets. Once we started creating the Destinations for each location, we realized that not all Destinations had iconography, so our creative director spent some time expanding on their visual vocabulary. It was critical that everything used on the website feel like it could also be dropped onto library signage without appearing out of place.

Here you can see some of their signage and how iconography plays an important role in conveying what’s available at the destination.

Both the locations and destinations on the website use these same icons to communicate the available amenities.

Sidenote: Consistency can be useful when it’s helpful. But it can also get in the way of communicating important differences—whether those differences come from the brand or facility. Divergence can be a good thing. You just need to ensure that the divergence is meaningful and useful for your audience. Perhaps there is a difference in a Computer Lab (a place where students can use computers) and Computer Carrels (a place designed for laptop use -- no computers provided). However, subtle differences may not be important to people and calling attention to them may only serve to confuse them more.

Bonus Challenge - Who owns what

One last, very important aspect of any web project: who is responsible for editing and maintaining the content on the website? Even if you have a copywriter or two, you are still faced with the gargantuan task of maintaining a mountain of information. (And sadly, this is no exaggeration.) No matter your process for editing and creating content, you need to identify the owner of each piece of content and what their responsibilities entail. (Are they editing the content as needed? Are they filing change requests to the copywriter?)

In Conclusion...

Reconciling the needs of these varied and diverse stakeholders, libraries, locations, and library users can be a daunting task.  Hopefully, if you take the time to address the challenges in this series your end result will meet the needs of the most important audience—as well as many of the needs of your secondary and tertiary audiences.  With such a large network of stakeholders and users, you won’t please everyone, but with good user research and usability testing, you’ll get a lot closer to that goal, and you’ll have a firm foundation for the choices you make.

Tags: Drupal Planet
Categories: Elsewhere

Dirk Eddelbuettel: RProtoBuf 0.4.2

Planet Debian - Fri, 12/12/2014 - 03:19

A new release 0.4.2 of RProtoBuf is now on CRAN. RProtoBuf provides R bindings for the Google Protocol Buffers ("Protobuf") data encoding library used and released by Google, and deployed as a language and operating-system agnostic protocol by numerous projects.

Murray and Jeroen did almost all of the heavy lifting. Many changes were triggered by two helpful referee reports, and we are slowly getting to the point where we will resubmit a much improved paper. Full details are below.

Changes in RProtoBuf version 0.4.2 (2014-12-10)
  • Address changes suggested by anonymous reviewers for our Journal of Statistical Software submission.

  • Make Descriptor and EnumDescriptor objects subsettable with "[[".

  • Add length() method for Descriptor objects.

  • Add names() method for Message, Descriptor, and EnumDescriptor objects.

  • Clarify order of returned list for descriptor objects in as.list documentation.

  • Correct the definition of as.list for EnumDescriptors to return a proper list instead of a named vector.

  • Update the default print methods to use cat() with fill=TRUE instead of show() to eliminate the confusing [1] since the classes in RProtoBuf are not vectorized.

  • Add support for serializing function, language, and environment objects by falling back to R's native serialization with serialize_pb and unserialize_pb to make it easy to serialize into a Protocol Buffer all of the more than 100 datasets which come with R.

  • Use normalizePath instead of creating a temporary file with file.create when getting absolute path names.

  • Add unit tests for all of the above.

CRANberries also provides a diff to the previous release. More information is at the RProtoBuf page, which has a draft package vignette, a 'quick' overview vignette, and a unit test summary vignette. Questions, comments, etc. should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: Elsewhere

Propeople Blog: Drupal UX Improvements: When Node Forms are a Nuisance

Planet Drupal - Fri, 12/12/2014 - 00:16

Node forms can be big. With field collections, reference fields, tags, taxonomies, location fields, and others, node forms can actually be really big. This can make an editor’s experience so frustrating that it’s a surprise she doesn’t have a heart attack when she sees how much data she needs to enter in order to save the form.

Sound familiar? Well, in this article we will share some tips and tricks that we use to simplify editors’ lives and to make the Drupal editing experience more user-friendly.

Split a form into tabs

If you have too many fields on one screen, it’s nearly impossible for an editor to remember what he entered in the beginning by the time he’s reached the end of the page. The answer to this quandary is simple: it’s time to split up the form.

It is possible to use different criteria to split forms into more manageable sections, or tabs. The criteria you’ll use will depend on the type of form and its intended purpose. Forms can be split up by field types, required vs. optional fields, priority rank, more vs. less frequently used; the list goes on. To create an even better user experience, ask the editors themselves about how they’d like forms to be organized. If they have been using a particular form for a while, it’s likely they’ll have plenty of valuable feedback regarding the best way to make the forms more intuitive and improve the efficiency of their workflow.

From a technical perspective, there are multiple ways to achieve more manageable forms. The one we use most frequently utilizes vertical tabs. An example of this is shown in the screenshot below.

 

Vertical tabs help editors concentrate on one thing at a time and improve the navigation experience when working with especially large forms. A good rule of thumb is to divide the form into chunks that won't require users to scroll through more than 1 to 1.5 pages to complete all the elements in any given section.

In addition to vertical tabs, you have the option to set up horizontal ones. This can easily be achieved with the Field Group module.
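If you would rather do this in code than through the Field Group UI, a minimal Drupal 7 sketch could look like the following (the module name and the field_editor_notes field are hypothetical):

<?php

/**
 * Implements hook_form_BASE_FORM_ID_alter() for node_form.
 */
function mymodule_form_node_form_alter(&$form, &$form_state, $form_id) {
  // A fieldset grouped under the node form's existing vertical tabs.
  $form['editorial'] = array(
    '#type' => 'fieldset',
    '#title' => t('Editorial notes'),
    '#group' => 'additional_settings',
    '#collapsible' => TRUE,
  );
  // Move an existing (hypothetical) field into the new tab; because the
  // fieldset is not #tree, submitted values keep their original location.
  if (isset($form['field_editor_notes'])) {
    $form['editorial']['field_editor_notes'] = $form['field_editor_notes'];
    unset($form['field_editor_notes']);
  }
}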

Split a form into columns

Some editors set up their workstations to use fairly wide screens. In this case, another practical approach is to have multiple columns. This solution can be used to nicely group fields together so they can be viewed side by side.

Drupal's Panels module makes it easy to configure forms that use columns. Simply set up the layout for a node's edit form and arrange the fields however you like.

Warn editors about unsaved changes

If you use JIRA or Google Docs, you’ve probably seen a warning message like the one shown below.

 

For a busy editor juggling multiple tasks, this friendly little pop-up can be a lifesaver. And of course… there’s a module for that.

Allow us to introduce Node Edit Protection. It helps editors to remember to save their changes before navigating away from a form. This becomes especially handy if you’ve split your forms into multiple tabs, as editors may think that simply switching to another tab automatically saves their changes.

Taxonomy tag widgets

You won’t hear any complaints from us about the standard autocomplete feature and the way it enables editors to quickly select appropriate tags. There are multiple widgets, however, that put the icing on the autocomplete cake. One of these that we particularly like is Chosen.


 

Another good one is Autocomplete Deluxe.

 

Links

Sometimes editors need some help building links in WYSIWYG or when using link fields. This is especially true when editors need to search for and quickly locate the content or URL they want to reference. Enter the Linkit module, which acts as a link-specific autocomplete function.

 

With the Linkit Picker module, we can even have a custom view to search for the content we would like to link to. This allows us to configure additional filters to help editors find content more effectively.

 

References

If you are using reference fields, there is a nifty module called References Dialog. This module allows editors to create a referenced entity while working within a node using a dialog box.

 

This comes in very handy, especially when your referenced entities include only a handful of fields and can fit comfortably into the dialog pop-up.

May we suggest...

For more great ideas that will have editors singing your praises, check out this presentation from DrupalCon Amsterdam 2014 about building a better backend, as well as our colleague Boyan Borisov’s presentation about improving Drupal's editorial experience.

Interested in learning even more about how Propeople can create a Drupal platform with user-friendly backend functionality for you? Don't hesitate to contact us.

Tags: Drupal, UX, content management | Service category: Technology | Topics: Tech & Development
Categories: Elsewhere

Drupal core announcements: All TestBase derived tests now enforce strict configuration schema adherence by default

Planet Drupal - Thu, 11/12/2014 - 22:03

Configuration schema was originally introduced to help describe configuration for translations. It has since expanded considerably and is now used, for example, to export configuration entities automatically (unless you want to write code to manually define what to export). Configuration schema is also used to automatically typecast values to their expected types. This ensures that although PHP and web forms in general favour strings over all other types, the right types are used when saving configuration. That is important so that when deploying configuration, only actual changes show up in the difference, with no random type changes. Schema enforces certain rules and best practices of configuration on its users, for example that each piece of active configuration should have an owner. Finally, configuration-schema-based validation helps find several types of bugs in code that is otherwise untested or incorrectly tested.

Therefore, after more than a month of work to make all core tests pass strict configuration schema checking, we are making TestBase strictly check all configuration against configuration schemas by default. Only a few tests are exempt from this, in the test coverage of the configuration system itself. This affects all contributed module developers writing tests that extend TestBase (WebTestBase, KernelTestBase, etc.). It may result in new test failures, which indicate issues in your schema, your configuration, your tests, your migrations, etc. Either way, a failure indicates that unexpected data structures are being generated in some cases.

Read more in the change notice.

Categories: Elsewhere

Gregor Herrmann: GDAC 2014/11

Planet Debian - Thu, 11/12/2014 - 21:45

is enthusiasm contagious? I think so. a recent example: another advent posting. – ¡gracias!

this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

Categories: Elsewhere

Mediacurrent: Upcoming Webinar: Migrating The Weather Channel Onto Drupal

Planet Drupal - Thu, 11/12/2014 - 17:39

The Weather Channel (TWC) has one of the most trafficked websites in the world, which makes it one of the largest Drupal sites in the world.

Categories: Elsewhere

Blink Reaction: Creating Custom Search Pages with Search404 and Apache Solr Search

Planet Drupal - Thu, 11/12/2014 - 17:19

Imagine somewhere deep in your site that you have a page with the alias:

/how-to-build-drupal-site.

Now, let’s imagine this page is not a link in your site’s menu. Your visitor remembers that they have seen this page on your site and starts typing in the address line:

Categories: Elsewhere
