Elsewhere

Craig Small: Important WordPress update

Planet Debian - Wed, 09/04/2014 - 15:08

WordPress 3.8.2, which contains some important security fixes, was released yesterday. This is an important security release, and the Debian packages were uploaded to ftp-master a few minutes ago.

Besides fixing Debian Bug #744018, the release fixes the following two vulnerabilities (as mentioned in the bug report):

  • CVE-2014-0165 WordPress privilege escalation: prevent contributors from publishing posts
  • CVE-2014-0166 WordPress potential authentication cookie forgery

If you use the Debian package, I recommend upgrading as soon as the update is available.
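Once the update clears the archive, the upgrade itself is the usual Debian routine (a minimal sketch, run as root):

# apt-get update
# apt-get install wordpress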


Julian Andres Klode: ThinkPad X230 UEFI broken by setting a setting

Planet Debian - Wed, 09/04/2014 - 14:57

Today, I decided to set my X230 back to UEFI-only boot, after having changed that for a BIOS upgrade recently (to fix a resume bug). I then chose to save the settings and received several error messages telling me that the system had run out of resources (probably storage space for UEFI variables).

I rebooted my machine, and saw no logo appearing. Just something like an underscore on a text console. The system appears to boot normally otherwise, and once the i915 module is loaded (and we’re switching away from UEFI’s Graphical Output Protocol [GOP]) the screen works correctly.

So it seems the GOP broke.

What should I do next?


Filed under: General

Petter Reinholdtsen: S3QL, a locally mounted cloud file system - nice free software

Planet Debian - Wed, 09/04/2014 - 11:30

For a while now, I have been looking for a sensible offsite backup solution for use at home. My requirements are simple: it must be cheap and locally encrypted (in other words, I keep the encryption keys, and the storage provider does not have access to my private files). One idea my friends and I had many years ago, before the cloud storage providers showed up, was to use Google mail as storage: write a Linux block device storing blocks as emails in the mail service provided by Google, and thus get heaps of free space. On top of this one could add encryption, RAID and volume management to have lots of (fairly slow, I admit) cheap and encrypted storage. But I never found time to implement such a system. In the last few weeks, though, I have looked at a system called S3QL, a locally mounted, network-backed file system with the features I need.

S3QL is a FUSE file system with a local cache and cloud storage, supporting several different storage providers: anything with an Amazon S3, Google Drive or OpenStack API. There are heaps of such storage providers. S3QL can also use a local directory as storage, which, combined with sshfs, allows for file storage on any SSH server. S3QL includes support for encryption, compression, de-duplication, snapshots and immutable file systems, allowing me to mount the remote storage as a local mount point and use the files as if they were local, while the content is also stored in the cloud. This allows me to have a backup that should survive a fire. The file system cannot be shared between several machines at the same time, as only one can mount it at a time, but any machine with the encryption key and access to the storage service can mount it once it is unmounted elsewhere.
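As a sketch of that sshfs combination (the host and paths are examples; S3QL's local:// backend simply stores its data in a plain directory):

$ sshfs user@server:/srv/s3ql-data /mnt/s3ql-data
$ mkfs.s3ql local:///mnt/s3ql-data
$ mount.s3ql local:///mnt/s3ql-data /s3ql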

It is simple to use. I'm using it on Debian Wheezy, where the package is already included, so to get started, run apt-get install s3ql. Next, pick a storage provider. I ended up picking Greenqloud, after reading their nice recipe on how to use s3ql with their Amazon S3-compatible service, because I trust the laws in Iceland more than those in the USA when it comes to keeping my personal data safe and private, and thus would rather spend money on a company in Iceland. Another nice recipe is available in the article S3QL Filesystem for HPC Storage by Jeff Layton in the HPC section of Admin magazine. Once the provider is picked, figure out how to get the API key needed to connect to the storage API. With Greenqloud, the key did not show up until I had added payment details to my account.

Armed with the API access details, it is time to create the file system. First, create a new bucket in the cloud. This bucket is the file system storage area. I picked a bucket name reflecting the machine that was going to store data there, but any name will do. I'll refer to it as bucket-name below. In addition, one needs the API login and password, and a locally created passphrase. Store it all in ~root/.s3ql/authinfo2 like this:

[s3c]
storage-url: s3c://s.greenqloud.com:443/bucket-name
backend-login: API-login
backend-password: API-password
fs-passphrase: local-password

I create my local passphrase using pwget 50 or similar, but any sensible way to create a fairly random password should do. Armed with these details, it is now time to run mkfs.s3ql, entering the API details and passphrase to create the file system:

# mkdir -m 700 /var/lib/s3ql-cache
# mkfs.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl s3c://s.greenqloud.com:443/bucket-name
Enter backend login:
Enter backend password:
Before using S3QL, make sure to read the user's guide, especially
the 'Important Rules to Avoid Loosing Data' section.
Enter encryption password:
Confirm encryption password:
Generating random encryption key...
Creating metadata tables...
Dumping metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Compressing and uploading metadata...
Wrote 0.00 MB of compressed metadata.
#

The next step is mounting the file system to make the storage available.

# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
Using 4 upload threads.
Downloading and decompressing metadata...
Reading metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Mounting filesystem...
# df -h /s3ql
Filesystem                              Size  Used Avail Use% Mounted on
s3c://s.greenqloud.com:443/bucket-name  1.0T     0  1.0T   0% /s3ql
#

The file system is now ready for use. I use rsync to store my backups in it, and as the metadata used by rsync is downloaded at mount time, no network traffic (and storage cost) is triggered by running rsync. To unmount, one should not use the normal umount command, as this will not flush the cache to the cloud storage; instead, run the umount.s3ql command like this:

# umount.s3ql /s3ql
#
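As for the rsync backups mentioned above, the invocation is nothing special; a minimal sketch (the source and destination paths are examples, not my actual setup):

# rsync -aH --delete /home/ /s3ql/home-backup/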

There is a fsck command available to check the file system and correct any problems detected. It can also be used to reset the "already mounted" flag if the local server crashes while the file system is mounted. This is what it looks like when processing a working file system:

# fsck.s3ql --force --ssl s3c://s.greenqloud.com:443/bucket-name
Using cached metadata.
File system seems clean, checking anyway.
Checking DB integrity...
Creating temporary extra indices...
Checking lost+found...
Checking cached objects...
Checking names (refcounts)...
Checking contents (names)...
Checking contents (inodes)...
Checking contents (parent inodes)...
Checking objects (reference counts)...
Checking objects (backend)...
..processed 5000 objects so far..
..processed 10000 objects so far..
..processed 15000 objects so far..
Checking objects (sizes)...
Checking blocks (referenced objects)...
Checking blocks (refcounts)...
Checking inode-block mapping (blocks)...
Checking inode-block mapping (inodes)...
Checking inodes (refcounts)...
Checking inodes (sizes)...
Checking extended attributes (names)...
Checking extended attributes (inodes)...
Checking symlinks (inodes)...
Checking directory reachability...
Checking unix conventions...
Checking referential integrity...
Dropping temporary indices...
Backing up old metadata...
Dumping metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Compressing and uploading metadata...
Wrote 0.89 MB of compressed metadata.
#

Thanks to the cache, working on files that fit in the cache is very quick, about the same speed as local file access. Uploading large amounts of data is, for me, limited by the bandwidth out of and into my house. Uploading 685 MiB with a 100 MiB cache gave me 305 kiB/s, which is very close to my upload speed, and downloading the same Debian installation ISO gave me 610 kiB/s, close to my download speed. Both were measured using dd. So for me, the bottleneck is my network, not the file system code. I do not know what a good cache size would be, but suspect that the cache should be larger than your working set.
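If you want to experiment with a larger cache, mount.s3ql accepts a --cachesize option (given in KiB, if I read the manual correctly); a sketch with a 2 GiB cache:

# mount.s3ql --cachesize 2097152 --cachedir /var/lib/s3ql-cache \
  --authfile /root/.s3ql/authinfo2 --ssl s3c://s.greenqloud.com:443/bucket-name /s3ql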

I mentioned that only one machine can mount the file system at a time. If another machine tries, it is told that the file system is busy:

# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
Using 8 upload threads.
Backend reports that fs is still mounted elsewhere, aborting.
#

The file content is uploaded when the cache is full, while the metadata is uploaded once every 24 hours by default. To ensure the file system content is flushed to the cloud, one can either unmount the file system or ask S3QL to flush the cache and metadata using s3qlctrl:

# s3qlctrl upload-meta /s3ql
# s3qlctrl flushcache /s3ql
#

If you are curious about how much space your data uses in the cloud, and how much compression and deduplication cut down on the storage usage, you can use s3qlstat on the mounted file system to get a report:

# s3qlstat /s3ql
Directory entries:    9141
Inodes:               9143
Data blocks:          8851
Total data size:      22049.38 MB
After de-duplication: 21955.46 MB (99.57% of total)
After compression:    21877.28 MB (99.22% of total, 99.64% of de-duplicated)
Database size:        2.39 MB (uncompressed)
(some values do not take into account not-yet-uploaded dirty blocks in cache)
#

I mentioned earlier that there are several possible suppliers of storage. I did not try to locate them all, but am aware of at least Greenqloud, Google Drive, Amazon S3 web services, Rackspace and Crowncloud. The latter even accepts payment in Bitcoin. Pick one that suits your needs. Some of them provide several GiB of free storage, but the pricing models are quite different, so you will have to figure out what suits you best.

While researching this blog post, I had a look at research papers and posters discussing the S3QL file system. There are several, which told me that the file system is getting critical scrutiny from the science community and increased my confidence in using it. One nice poster is titled "An Innovative Parallel Cloud Storage System using OpenStack's Swift Object Store and Transformative Parallel I/O Approach" by Hsing-Bung Chen, Benjamin McClelland, David Sherrill, Alfred Torrez, Parks Fields and Pamela Smith. Please have a look.

Given my problems with different file systems earlier, I decided to check the mounted S3QL file system to see if it would be usable as a home directory (in other words, whether it provides POSIX semantics when it comes to locking, umask handling and so on). Running my test code to check file system semantics, I was happy to discover that no errors were found. So the file system can be used for home directories, if one chooses to do so.

If you do not want a locally mounted file system, and want something that works without the Linux FUSE layer, I would like to mention the Tarsnap service, which also provides locally encrypted backup using a command line client. It has a nicer access control system, where one can split out read and write access, allowing some systems to write to the backup and others only to read from it.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.


.VDMi/Blog: Why does my file get removed after six hours?

Planet Drupal - Wed, 09/04/2014 - 09:29
My managed image got removed after a certain amount of time. How did that happen? After searching a bit, I figured it out.

The Form API has a form element called managed_file. It uploads a file and adds it to the managed files table. This way Drupal has knowledge about it and control over it. But then I ran into a situation where, after a certain amount of time, the image was removed. It just disappeared. What is happening here?

Well, the managed_file element works with Ajax. To smooth the process, it adds the managed file but leaves its status set to temporary until someone declares 'this is my file, it is managed'. On cron runs, Drupal deletes temporary files older than DRUPAL_MAXIMUM_TEMP_FILE_AGE, which defaults to 21600 seconds: hence the six hours of the title. You declare the file permanent by adding this snippet of code to your submit handler.

$file = file_load($form_state['values']['file_element_name']);

// Change status to permanent.
$file->status = FILE_STATUS_PERMANENT;

// Save.
file_save($file);

If you have your form managed by

system_settings_form()

you want to add an extra submit handler. You can do that this way:

$form['#submit'][] = 'extra_admin_submit';
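For completeness, a minimal sketch of what such a handler could contain, reusing the snippet from above (the function name, the element name and the 'mymodule' usage entry are examples, not part of any module):

function extra_admin_submit($form, &$form_state) {
  // Load the uploaded file via the fid stored by the managed_file element.
  $file = file_load($form_state['values']['file_element_name']);

  // Mark it permanent so cron no longer treats it as a stale temporary file.
  $file->status = FILE_STATUS_PERMANENT;
  file_save($file);

  // Optionally record a usage, so file_delete() knows the file is in use.
  file_usage_add($file, 'mymodule', 'user', 1);
}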

Daniel Pocock: Double whammy for CACert.org users

Planet Debian - Wed, 09/04/2014 - 07:47

If you are using OpenSSL (or ever did use it with any of your current keypairs in the last 3-4 years), you are probably in a rush to upgrade all your systems and replace all your private keys right now.

If your certificate authority is CACert.org then there is an extra surprise in store for you. CACert.org has changed their hash to SHA-512 recently and some client/server connections silently fail to authenticate with this hash. Any replacement certificates you obtain from CACert.org today are likely to be signed using the new hash. Amongst other things, if you use CACert.org as the CA for a distributed LDAP authentication system, you will find users unable to log in until you upgrade all SSL client code or change all clients to trust an alternative root.
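If you are unsure which hash a given certificate was signed with, the stock openssl tool will tell you; a quick check might look like this (the file name is an example):

$ openssl x509 -in server.crt -noout -text | grep 'Signature Algorithm'
    Signature Algorithm: sha512WithRSAEncryption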


Modules Unraveled: 103 Content Branching and Static Site Generation Using Zariz with Amitai Burstein - Modules Unraveled Podcast

Planet Drupal - Wed, 09/04/2014 - 07:00
Published: Wed, 04/09/14. Download this episode.

Zariz
  • What is Zariz?
  • How did this come about?
  • How does it help content creators?
  • How is this different from Workbench Moderation, and the default revisioning system?
  • You mentioned that it duplicates nodes; how do the URLs stay intact?
  • Talk a bit about how you can create a static site from a Drupal site.
Use Cases
  • Content staging
  • Static site generation
    • What about authenticated users?
    • How does this help performance and scalability?
Questions from Twitter
  • Kate
    Is Zariz an alternative to drupal.org/project/sps?
Video

Screencast demo starts at about 40:23

Episode Links: Amitai on drupal.org, Amitai on Twitter, Zariz Repo

Andrew Pollock: [life] Day 71: Tumble Tastics trial, painting and plaster fun

Planet Debian - Wed, 09/04/2014 - 06:15

Zoe slept in even later this morning. I'm liking this colder weather. We had nothing in particular happening first thing today, so we just snuggled in bed for a bit before we got started.

Tumble Tastics were offering free trial classes this week, so I signed Zoe up for one today. She really enjoyed going to Gold Star Gymnastics in the US, and has asked me about finding a gym class over here every now and then.

Tumble Tastics is a much smaller affair than Gold Star, but at 300 metres from home on foot, it's awesomely convenient. Zoe scootered there this morning.

It seems to be physically part of what I'm guessing used to be the Church of Christ's church hall, so it's not big at all, but the room that Zoe had her class in still had plenty of equipment in it. There were 8 kids in her class, all about her size. I peeked around the door and watched.

Most of the class was instructor led and mainly mat work, but then part way through, the parents were invited in, and the teacher walked us all through a course around the room, using the various equipment, and the parents had to spot for their kids.

The one thing that cracked me up was when the kids were supposed to be tucking into a ball and rocking on their backs. Zoe instead did a Brazilian Jiu-Jitsu break-fall and fell backwards slapping the mat instead. It was good to see that some of what she learned in those classes has kicked in reflexively.

She really enjoyed the rope swing and hanging upside down on the uneven bars.

The class ran for 50 minutes (I was only expecting it to last 30 minutes) and Zoe did really well straight off. I think we'll make this her 4th term extra-curricular activity.

We scootered home the longer way, because we were in no particular hurry. Zoe did some painting when we got home, and then we had lunch.

After lunch we goofed off for a little bit, and then we did quiet time. Zoe napped for about two and a half hours, and then we did some plaster play.

I'd picked up a fish ice cube tray from IKEA on the weekend for 99 cents (cue Thrift Shop), and I'd bought a bag of plaster of Paris a while back but hadn't had a chance to do anything with it yet. I bribed Zoe into doing quiet time by telling her we'd do something new with the ice cube tray I'd bought.

We mixed up a few paper cups with plaster of Paris in them and then I squirted some paint in. I'm not sure if the paint caused a reaction, or the plaster was already starting to set by the time the paint got mixed in, but it became quite viscous as soon as the paint was added. We did three different colours and used tongue depressors to jam it into the tray. Zoe seemed to twig that it was the same stuff as the impressions of her baby feet, which I thought was a pretty clever connection to make.

After that, there was barely enough time to watch a tiny bit of TV before Sarah arrived to pick Zoe up. I told her that her plaster would be set by the time she got dropped off in the morning.

I procrastinated past the point of no return and didn't go for a run. Instead I decided to go out to Officeworks and print out some photos to stick in the photo frame I bought from IKEA on the weekend.


Dirk Eddelbuettel: BH release 1.54.0-1

Planet Debian - Wed, 09/04/2014 - 04:41
A new release of the BH package is now on CRAN and its mirrors. BH provides (a sizeable subset of) the Boost library for C++, particularly (large) parts delivered as pure template headers not requiring linking. See the BH page for more details.

This release provides our first update relative to the Boost tarballs we started with. It moves us from 1.51.0 (which was getting a little long in the tooth) to 1.54.0. This is just about the first time ever that I didn't package something straight from the current release (now 1.55.0). My aim was to balance the oh-shiny-new aspect with some stability. Comments welcome; maybe I will go to the bleeding edge next time.

As before, the CRAN package is created by running bcp over a number of selected components of Boost. If you'd like to see additional ones included, please do get in touch. Before uploading, I also tested against the sixteen CRAN dependents I could quickly test on my server, given the dependencies installed there.

The complete list of changes follows below.

Changes in version 1.54.0-1 (2014-04-07)
  • Upgraded to Boost 1.54.0

  • Adjust build script local/script/CreateBoost.sh accordingly

  • Renamed generation_runge_kutta_cash_karp54_classic.hpp to generation_runge_kutta_cash_karp54_cl.hpp to remain within 100-character limit for tar

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

Comments and suggestions are welcome via the mailing list or issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Fuse Interactive: Watch as I try to upgrade this module to Drupal 8. What happens next you won't BELIEVE!

Planet Drupal - Wed, 09/04/2014 - 03:05
Drupal 8 is coming with more API changes than ever before. Are you ready? Prepare yourself by upgrading your first Drupal 8 module right here on the Fuse Interactive blog. Learn by doing and follow along as we explore the new Routing component, OOP Drupal, PSR autoloading... and more!

PreviousNext: Secure your infrastructure with Docker and Puppet.

Planet Drupal - Wed, 09/04/2014 - 02:10

I recently spoke at the Drupal Melbourne meetup about using Puppet and Docker to increase security when running multiple sites on one host. It's a lot of work to get set up properly for a remote speaker, so I would like to thank the organisers for allowing me to present.


Metal Toad: Pond Life Ep.2

Planet Drupal - Wed, 09/04/2014 - 01:09

Hello Everyone!
Welcome back to the pond! Last week we touched on the importance of mentoring juniors and GitHub best practices. In this week's episode, we'll follow up on last week's junior workflow by discussing two tools you should definitely have and how to install them, explore some entry-level SCSS techniques, share my AHA! and FAIL moments of the week, and lastly pose our weekly query for you good people out there to ponder. So let's jump right into it, shall we?


DrupalCon Austin News: Symfony Community: A Special Invitation to DrupalCon Austin

Planet Drupal - Tue, 08/04/2014 - 21:49


With the rapidly approaching release of Drupal 8, many Symfony developers may be considering going to Austin for DrupalCon in June. Our advice? Do it!


Tanguy Ortolo: Disable your spammed addresses with Postfix

Planet Debian - Tue, 08/04/2014 - 19:45
Using address extension

Postfix (and many other mail servers) offers one nice address extension feature: addresses like <user+whatever@> are implicit aliases of <user@>. This allows users to implement a simple measure to fight spam:

  1. when SomeCompany® or whatever asks for your email address, give them <user+somecompany@>;
  2. if you start receiving spam at that address, you know who sold your address or had it stolen;
  3. finally, you will be able to disable that address so messages are simply refused with a permanent error code.
Disabling an extended address

So, here is how to implement that last step with Postfix, when you detect that your extended address <user+evilcorp@> is being spammed. In /etc/postfix/main.cf:

smtpd_recipient_restrictions = check_recipient_access hash:/etc/postfix/recipients, […]

Then, create /etc/postfix/recipients containing the addresses to disable:

user+evilcorp@example.com 553 5.7.1 I did not subscribe to receive spam, go away

Of course, the error code and message can be freely configured; just make sure you are using a permanent error code so senders do not retry. Hash that table, reload Postfix, and it is done:

# postmap /etc/postfix/recipients
# service postfix reload

After that, your mail server will reject messages sent to these addresses. And it will do so at the RCPT TO step, saving your bandwidth for more useful things.
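If you want to verify the behaviour without waiting for real spam, a testing tool such as swaks should show the rejection happening at the RCPT TO stage; an illustrative sketch (host and addresses are examples):

$ swaks --server mail.example.com --from test@example.org --to user+evilcorp@example.com
...
 -> RCPT TO:<user+evilcorp@example.com>
 <- 553 5.7.1 I did not subscribe to receive spam, go away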


Daniel Pocock: reConServer for easy SIP conferencing

Planet Debian - Tue, 08/04/2014 - 18:24

In the lead-up to the release of Debian wheezy, there was quite some debate about the Mumble conferencing software, which uses the deprecated and unsupported CELT codec. Although Mumble has very recently added Opus support, it is still limited by the fact that it is a standalone solution without any support for distributed protocols like SIP or XMPP.

Making SIP conferencing easy

Of course, people could always set up SIP conferences by installing Asterisk but for many use cases that may be overkill and may simply introduce alternative security and administration overheads.

Enter reConServer

The popular reSIProcate SIP stack includes a high-level programming API, the Conversation Manager, dubbed librecon. It was developed and contributed to the open source project by Scott Godin of SIP Spectrum. In very simple terms, a Conversation object with two Participants is a phone call. A Conversation object with more than two Participants is a conference.

The original librecon includes a fully functional demo app, testUA that allows you to control conferences from the command line.

As part of Google Summer of Code 2013, Catalin Constantin Usurelu took the testUA.cxx code and converted it into a proper daemon process. It is now available as a ready-to-run SIP conferencing server package in Debian and Ubuntu.

The quick and easy way to try it

Normally, a SIP conferencing server will be linked to a SIP proxy and other infrastructure.

For trying it out quickly, however, no SIP proxy is necessary.

Simply install the package with the default settings and then configure a client to talk to the reConServer directly by dialing the IP address of the server.
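On Debian or Ubuntu, installation should be a one-liner (assuming the binary package is named reconserver, which is how I read the announcement):

# apt-get install reconserver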

For example, set the following options in /etc/reConServer/reConServer.config:

UDPPort = 5062
EnableAutoAnswer = true

and it will happily accept all SIP calls sent to the IP address where it is running.

Now configure Jitsi to talk to it directly in a serverless SIP configuration.

Note that we simply put a username without any domain part; this tells Jitsi to create an account that can operate without a SIP proxy or registration server.

Calling in to the conference

To call in, simply dial the IP address and port number of the reConServer process, for example sip:192.168.1.100:5062. When the first call comes in, reConServer will answer and place the caller on hold. When the next caller arrives, the hold should automatically finish and audio will be heard.

Next steps

To make it run as part of a proper SIP deployment, set the necessary configuration options (username, password, proxy) to make reConServer register to the SIP proxy. Users can then call the conference through the proxy.

To discuss any problems or questions, please come and join the recon-users mailing list or the Jitsi users list.

Consider using Wireshark to observe the SIP packets and learn more about the protocol.


Julian Granger-Bevan: Improving SEO using Drupal Similar by Terms

Planet Drupal - Tue, 08/04/2014 - 18:00

Search Engine Optimisation (SEO) is the process of altering your website to maximise its exposure via search engines such as Google and Bing.

The aim is to bring more visitors to your website.

If your website is built using the Drupal CMS, this article will give you an easy tip that will both improve the experience for your visitors when they are on your site and help boost your search engine rankings.

The method is made easy by the Similar by Terms module for Drupal, and exploits visitors' need to find other relevant content while they are on your website.

Why use Similar by Terms?

Search engines such as Google are looking at hundreds of factors when they decide which pages to display prominently in search engine results pages (SERPs).

These factors include content, quality and context.

Your site is more likely to be placed highly in SERPs (leading to more traffic) if Google identifies that it is authoritative on a particular subject. Links between pages on your website help Google recognise the topic that each page covers, which means each page is more likely to rank highly for searches on that topic.

Do not get confused: this is not the same as link building from other websites (for which care needs to be taken, as search engines will penalise you if these links are unnatural).

Similar by Terms provides an automated means of displaying related content links on your website. It does this by comparing the taxonomy terms that each node is tagged with and creating a simple ranking based on the overlapping terms, from which it draws the top few nodes to show to your visitors as links.

Links to relevant content also improve the experience for your visitors, by giving them suggestions for what to read next. A visitor is much more likely to find links useful (and continue to browse your website) if the links are relevant to the page that they are already on.

How to install Similar by Terms

To install Similar by Terms, download the code from http://drupal.org/project/similarterms and enable it on your website by visiting the /admin/modules page.

You will also need to have installed two dependencies: Chaos Tools Suite and Views.
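If you use Drush, the download and enable steps can be scripted; a sketch (project machine names taken from the drupal.org URLs):

$ drush dl similarterms ctools views
$ drush en -y similarterms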

Similar by Terms exposes a view to your website, and the next thing you will want to do is edit this to suit your needs. The view can be found at /admin/structure/views.

The default view is quite basic, and simply returns a list of the titles of the related nodes. You will likely want to edit this (perhaps to also show a teaser or image from the node). These edits can be made just like any other view.

One setting that is unique to Similar by Terms is the taxonomy vocabularies that are considered when ranking nodes for similarity. You can opt to include just one, or all of your vocabularies in the comparison.

To do this, click "Advanced" on the right hand side of the edit screen, then click on "Similar By Terms: Nid" in the contextual filters section.

Now a dialog appears where you can choose which vocabularies to use.

You can also create your own views utilising the functionality provided by Similar by Terms. Simply copy the relationships and sorting rules that exist in the default view that Similar by Terms provided.

One last hint

The default behaviour of Similar By Terms will only show nodes in the list that share one or more taxonomy terms with the node being viewed.

This means that you might see only one related node, or even none at all.

For the styling of your website you're likely to want to always show the same number of nodes in the list. Whilst there is a feature request in the issues queue for this, there is a simple method that will solve this for you straight away.

The answer is to create a new taxonomy vocabulary called "Included in Similar By Terms", with a single term called "Included". Set that term as the default on all nodes on your website. This way, all nodes will have at least one taxonomy term in common, and the genuinely similar nodes will rise to the top of the list, above those that aren't really related.

Category: Websites. Tags: Drupal, Similar by Terms, Drupal Planet, related content, tutorial, how to

Mediacurrent: Running Drupal on OpenShift from Red Hat

Planet Drupal - Tue, 08/04/2014 - 17:52

An interesting platform I came across recently for developing and deploying cloud applications is OpenShift by Red Hat. OpenShift is a next-generation application hosting platform. The software running this service is open-sourced under the name OpenShift Origin, and is available on GitHub. Developers use Git to deploy web applications in different languages to the platform.


Acquia: Using Composer Manager to get off the Island Now

Planet Drupal - Tue, 08/04/2014 - 17:28

On the eve of 2013, prolific Drupal contributor Larry Garfield put forth a challenge to "get off the island", and judging by the adoption of non-Drupal projects in Drupal 8 core I would say the community has responded.


Phase2: Say Goodbye To menu_get_object() @NYC Camp

Planet Drupal - Tue, 08/04/2014 - 13:30

Drupal 8 is bringing some great new features in addition to some fun DX changes. One of the ways I like to learn about these changes is to deconstruct the API.

The best way to deconstruct the API is to dive into code that has a certain purpose, like looking at the Breadcrumb API.

Since we know we’re focusing on Drupal 7 to Drupal 8 changes, we can also use the excellent documentation in the change records to help us.

In my upcoming NYCCamp presentation, I’ll review some of the common API functions we used in Drupal 7 and how they’ve changed in Drupal 8.

What Node Am I On?

A lot of custom blocks that show related content, connected taxonomy, or any other relationship to the currently viewed page typically depend on menu_get_object(). I'm sad to say that our old friend is gone.

In Drupal 8, the way to get details about the current node is through the attributes of the request object in the global \Drupal namespace.

While the DX of this implementation is still being discussed, as of this writing you get details about the current node like this:

<?php $node = \Drupal::request()->attributes->get('node'); ?>

drupal_render() is EVERYTHING!

Consistency is a big theme (no pun intended) in Drupal 8. Render arrays are the main driver for staging content to be passed to the theme layer.

As such, the theme() function is now gone.

Instead, a new #theme array key is passed to build a piece of content programmatically.

For old core theme functions, like theme_table() or theme_link(), you can pass in the ‘table’ or ‘link’ keyword, respectively, to the #type array key.

As noted in the change record, to create a table of data with a pager, set the various keys, then pass it to drupal_render():

<?php
// Theme is available as an element type (may have additional processing in rendering).
$table = array(
  '#type' => 'table',
  '#header' => $header,
  '#rows' => $rows,
  '#attributes' => array(
    'id' => 'my-module-table',
  ),
);
$markup = drupal_render($table);

// Pager is not an element type, use #theme directly.
$pager = array('#theme' => 'pager');
$markup = drupal_render($pager);
?>

Want More?

If you can’t make it out to NYC, definitely look for me at either the upcoming Chicago meetup or DrupalCon Austin!

I hope to see you in NYC this weekend!


Bálint Réczey: Move friends from XP to Linux days

Planet Debian - Tue, 08/04/2014 - 12:38


Today Microsoft ends support for Windows XP.

To keep my friends’ PCs currently running XP secure, I announce the “Move friends from XP to Linux days”.

If you are my friend, feel free to contact me and we will find some time to install Ubuntu on your machine, keeping your Windows installation bootable for as long as you want. Ubuntu is a Debian-derived Linux distribution which is easy to use.

Hungarian version

